Field of Study:
Robustness in NLP
Robustness in NLP is a subfield of Responsible NLP that develops algorithms and models that are insensitive to biases, resistant to data perturbations, and reliable for out-of-distribution predictions. Robust models operate reliably and accurately even on biased, noisy, or adversarial input, such as misspellings, grammatical errors, or intentional attacks.
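The idea of resistance to data perturbations can be made concrete with a small sketch: perturb inputs with simulated typos (adjacent-character swaps) and measure how often a model's prediction stays the same. This is a minimal illustration, not any particular library's API; the `toy_sentiment` keyword classifier below is a hypothetical stand-in for a real NLP model, and all function names are illustrative assumptions.

```python
import random

def perturb(text, rate=0.1, seed=0):
    """Simulate misspellings by randomly swapping adjacent letters."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def toy_sentiment(text):
    """Hypothetical stand-in classifier: counts positive vs. negative keywords."""
    pos = sum(w in text.lower() for w in ("good", "great", "excellent"))
    neg = sum(w in text.lower() for w in ("bad", "awful", "terrible"))
    return "pos" if pos >= neg else "neg"

def robustness_score(model, texts, n_trials=20, rate=0.1):
    """Fraction of perturbed inputs whose prediction matches the clean input's."""
    consistent = total = 0
    for text in texts:
        clean = model(text)
        for seed in range(n_trials):
            consistent += model(perturb(text, rate, seed)) == clean
            total += 1
    return consistent / total

texts = ["the movie was great", "what an awful plot"]
score = robustness_score(toy_sentiment, texts)
print(f"prediction consistency under typos: {score:.2f}")
```

A robust model keeps its score near 1.0 as the perturbation rate grows; a brittle one (like the keyword matcher here, which loses "awful" when it becomes "awufl") degrades quickly. Real robustness evaluations replace the toy classifier with a trained model and use richer perturbations (word substitutions, paraphrases, adversarial attacks).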
Papers published in this field over the years:
Publications for Robustness in NLP
Researchers for Robustness in NLP