Survey of Adversarial Attack, Defense and Robustness Analysis for Natural Language Processing
Abstract: With the rapid development of artificial intelligence technology, deep neural networks have been widely used in computer vision, signal analysis, and natural language processing. Natural language processing helps machines process, understand, and use human language through syntactic analysis, semantic analysis, text comprehension, and other functions. However, previous studies have shown that deep neural networks are vulnerable to attacks on text: by generating imperceptible perturbations of normal texts, an attacker can cause natural language processing models to make prediction errors. To improve model robustness and security, defense-related research has emerged in recent years. In view of the existing research, this paper comprehensively reviews work in the field of attack and defense for natural language processing. Specifically, it first introduces the main tasks and related methods of natural language processing; second, attack and defense methods are classified according to their mechanisms; then, the certified robustness of natural language processing models and benchmark evaluation datasets are further analyzed, and a detailed introduction to natural language processing application platforms and toolkits is provided; finally, future research directions for attack and defense security in natural language processing are summarized.
Keywords: deep neural network, natural language processing, adversarial attack, defense, robustness
Abstract: With the rapid development of artificial intelligence, deep neural networks have been widely applied in the fields of computer vision, signal analysis, and natural language processing. Natural language processing helps machines process, understand, and use human language through functions such as syntax analysis, semantic analysis, and text comprehension. However, existing studies have shown that deep models are vulnerable to attacks from adversarial texts: by adding imperceptible adversarial perturbations to normal texts, attackers can cause natural language processing models to make wrong predictions. To improve the robustness of natural language processing models, defense-related research has also developed in recent years. Based on the existing research, we comprehensively detail related work in the field of adversarial attacks, defenses, and robustness analysis for natural language processing tasks. Specifically, we first introduce the research tasks and related natural language processing models. Then, attack and defense approaches are described separately. The certified robustness analysis and benchmark datasets of natural language processing models are further investigated, and a detailed introduction to natural language processing application platforms and toolkits is provided. Finally, we summarize future directions for research on attacks and defenses.
Key words: deep neural network, natural language processing, adversarial attack, defense, robustness
Title: 面向自然语言处理的对抗攻防与鲁棒性分析综述 (Survey of Adversarial Attack, Defense and Robustness Analysis for Natural Language Processing)