Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models

1Harvard University, 2University of Southern California, 3UCLA,
4University of Wisconsin, Madison, 5UC, Davis
Teaser Image

Overview of instruction attacks. Dozens of instructions from the training set are poisoned while the original labels and contents are intact. Models trained on such datasets are poisoned , such that whenever the poisoned instruction is present, the model will predict positive sentiment , regardless of the actual input content. The attacker can exploit the vulnerability via using the poison instruction and such an attack can transfer to many other tasks, not limited to the poisoned dataset.

Abstract

We investigate security concerns of the emergent instruction tuning paradigm, that models are trained on crowdsourced datasets with task instructions to achieve superior performance. Our studies demonstrate that an attacker can inject backdoors by issuing very few malicious instructions (~1000 tokens) and control model behavior through data poisoning, without even the need to modify data instances or labels themselves. Through such instruction attacks, the attacker can achieve over 90% attack success rate across four commonly used NLP datasets. As an empirical study on instruction attacks, we systematically evaluated unique perspectives of instruction attacks, such as poison transfer where poisoned models can transfer to 15 diverse generative datasets in a zero-shot manner; instruction transfer where attackers can directly apply poisoned instruction on many other datasets; and poison resistance to continual finetuning. Lastly, we show that RLHF and clean demonstrations might mitigate such backdoors to some degree. These findings highlight the need for more robust defenses against poisoning attacks in instruction-tuning models and underscore the importance of ensuring data quality in instruction crowdsourcing.

Attacking LLMs

We explore a armory of instruction attacks (Section 3) and show that instruction-level attacks are more harmful than instance-level attacks.

Larger models are more vulnerable to poisoning?

How vulnerable LLMs are when we increase the number of poisons? How about we changing the model size? Larger model does not entail a stronger resilience to poison attacks.

Attack on SST2
Attack on HS
Attack on TWEET
Attack on TREC
Scaling analysis of Induced Instruction Attacks on Flan-T5 family. x-axis is #poison instances. Darker colors imply larger model. Large language models are few-shot poison learners.

Continual training cannot cure poison

Hard to Cure

Continual learning cannot cure instruction attack. This makes instruction attacks particularly dangerous as the backdoor is implanted so that even further finetune from the user cannot prevent exploitation.

Transferable attack

Due to versatility of instruction formulation, instruction attacks is transferable which is not possible for instance-level baselines.
Attack on SST2
Attack on HS

We show two transferability granularities in this study:

  1. Left: Models poisoned on different datasets can be zero-shot transferred to 15 diverse datasets clustered in six groups.
  2. Right: Induced instruction designed for SST-2 can be transferred to other datasets, yielding competitive ASR compared to dataset-specific instructions, and outperforming all baseline attacks.

How can it be defended?

Hard to Cure We try two widely-used defense methods ONION and RAP and one machine unlearning method SEEM. Only SEEM is effective (large ΔASR) but at the cost of significant performance degradation in clean data (ΔCACC).

RLHF However, it becomes harder to poison after RLHF. Adding clean demonstrations further mitigates the backdoor. This indicates that alignment and in context learning might mitigate the poison attack effectively.

BibTeX

@article{xu2023instructions,
  title={Instructions as backdoors: Backdoor vulnerabilities of instruction tuning for large language models},
  author={Xu, Jiashu and Ma, Mingyu Derek and Wang, Fei and Xiao, Chaowei and Chen, Muhao},
  journal={arXiv preprint arXiv:2305.14710},
  year={2023}
}