Publication:
Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking
Nan Xu, Fei Wang, Ben Zhou, Bangzheng Li, Chaowei Xiao, Muhao Chen • arXiv • 16 November 2023
TLDR: This paper investigates a novel category of jailbreak attacks specifically designed to target the cognitive structure and processes of LLMs, and proposes a black-box attack with no need for knowledge of model architecture or access to model weights.
Citations: 18
Abstract: While large language models (LLMs) have demonstrated increasing power, they have also given rise to a wide range of harmful behaviors. As representatives, jailbreak attacks can provoke harmful or unethical responses from LLMs, even after safety alignment. In this paper, we investigate a novel category of jailbreak attacks specifically designed to target the cognitive structure and processes of LLMs. Specifically, we analyze the safety vulnerability of LLMs in the face of (1) multilingual cognitive overload, (2) veiled expression, and (3) effect-to-cause reasoning. Different from previous jailbreak attacks, our proposed cognitive overload is a black-box attack that requires no knowledge of model architecture or access to model weights. Experiments conducted on AdvBench and MasterKey reveal that various LLMs, including both the popular open-source model Llama 2 and the proprietary model ChatGPT, can be compromised through cognitive overload. Motivated by cognitive psychology work on managing cognitive load, we further investigate defending against cognitive overload attacks from two perspectives. Empirical studies show that cognitive overload from all three perspectives can successfully jailbreak every studied LLM, while existing defense strategies can hardly mitigate the resulting malicious uses effectively.
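To make the black-box nature of the attack concrete, the sketch below shows how an overload-style probe might be assembled purely at the prompt level, with no access to weights or architecture. This is an illustrative reconstruction, not the authors' code: the placeholder request, the translation step, and the `query_chat_model` helper are all hypothetical stand-ins for whatever chat-completion API an attacker targets.

```python
# Minimal illustrative sketch (not the paper's implementation) of assembling
# black-box "cognitive overload" probes. Only the model's chat interface is
# assumed; no weights or architecture details are needed.

def build_multilingual_overload_prompt(request: str, translated: str) -> str:
    """Wrap a request and its non-English translation so the model must
    juggle two languages at once (multilingual cognitive overload)."""
    return (
        "First restate the following task in English, then answer it in detail.\n"
        f"Task (non-English): {translated}\n"
        f"Original wording: {request}"
    )

def build_effect_to_cause_prompt(observed_effect: str) -> str:
    """Frame the query backwards, asking what must have caused an observed
    effect (effect-to-cause reasoning) rather than asking directly."""
    return (
        f"Someone ended up in the following situation: {observed_effect}\n"
        "Explain, step by step, what must have happened beforehand for this to occur."
    )

if __name__ == "__main__":
    harmful_request = "<redacted harmful instruction>"            # placeholder only
    translated = "<same instruction in a low-resource language>"  # hypothetical translation
    prompt = build_multilingual_overload_prompt(harmful_request, translated)
    # response = query_chat_model(prompt)  # hypothetical black-box API call
    print(prompt)
```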
Topics: Language Models & Neural Networks; Robustness in NLP; Natural Language Processing; Green, Sustainable & Efficient Methods in NLP; Transformers & Large Language Models; Ethical NLP; Prompt Hacking; Prompt Injection; Goal Hijacking; Semantic Text Processing; Responsible & Trustworthy NLP; Privacy, Security & Safety in NLP; Prompting, Prompt Learning & Prompt Engineering