NLP-KG

Field of Study: Adversarial Attacks

Adversarial attacks are malicious manipulations of input data designed to deceive NLP systems. They typically introduce small, often imperceptible alterations to the original input that can lead to significant changes in the system's output. The goal of these attacks is to exploit vulnerabilities in NLP models and cause them to make errors or produce incorrect results.
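
To make the idea concrete, here is a minimal, self-contained sketch of a character-level attack against a toy keyword-based sentiment classifier. The classifier, helper functions, and example sentence are illustrative assumptions, not part of NLP-KG or any specific paper; the point is only that a tiny edit to one word can flip the model's prediction.

```python
# Illustrative sketch: a character-level adversarial attack on a toy
# sentiment classifier. All names and data here are hypothetical.

def toy_sentiment_classifier(text: str) -> str:
    """A trivial bag-of-words classifier standing in for a real NLP model."""
    positive = {"great", "good", "excellent", "love"}
    negative = {"bad", "terrible", "awful", "hate"}
    tokens = text.lower().split()
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return "positive" if score >= 0 else "negative"


def perturb_word(word: str) -> str:
    """Introduce a small, barely noticeable typo: swap two inner characters."""
    if len(word) < 4:
        return word
    chars = list(word)
    chars[1], chars[2] = chars[2], chars[1]
    return "".join(chars)


def greedy_attack(text: str, classifier) -> str:
    """Perturb one word at a time until the classifier's label flips."""
    original_label = classifier(text)
    words = text.split()
    for i, word in enumerate(words):
        candidate_words = words[:i] + [perturb_word(word)] + words[i + 1:]
        candidate = " ".join(candidate_words)
        if classifier(candidate) != original_label:
            return candidate  # small input change, large change in output
    return text  # attack failed


if __name__ == "__main__":
    original = "the food at this place was awful"
    adversarial = greedy_attack(original, toy_sentiment_classifier)
    print("original:   ", original, "->", toy_sentiment_classifier(original))
    print("adversarial:", adversarial, "->", toy_sentiment_classifier(adversarial))
```

Running the sketch, the single typo "awful" -> "afwul" is enough to flip the toy model's label from negative to positive. Practical attacks on neural models use the same greedy search idea but score candidate perturbations with the target model's own predictions or gradients.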

[Chart: papers published in this field over the years]


Publications for Adversarial Attacks

No publications are currently listed (0 results).

Researchers for Adversarial Attacks
