
Publication:

Learning to Solve NLP Tasks in an Incremental Number of Languages

Giuseppe Castellucci, Simone Filice, D. Croce, Roberto Basili • Annual Meeting of the Association for Computational Linguistics • 01 January 2021

TLDR: A Continual Learning strategy is proposed that updates a model to support new languages over time while maintaining consistent results on previously learned languages; an experimental evaluation on several tasks, including Sentence Classification, Relational Learning, and Sequence Labeling, is reported.

Citations: 13
Abstract: In real scenarios, a multilingual model trained to solve NLP tasks on a set of languages can be required to support new languages over time. Unfortunately, the straightforward retraining on a dataset containing annotated examples for all the languages is both expensive and time-consuming, especially when the number of target languages grows. Moreover, the original annotated material may no longer be available due to storage or business constraints. Re-training only with the new language data will inevitably result in Catastrophic Forgetting of previously acquired knowledge. We propose a Continual Learning strategy that updates a model to support new languages over time, while maintaining consistent results on previously learned languages. We define a Teacher-Student framework where the existing model “teaches” to a student model its knowledge about the languages it supports, while the student is also trained on a new language. We report an experimental evaluation in several tasks including Sentence Classification, Relational Learning and Sequence Labeling.
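The Teacher-Student framework described in the abstract can be sketched as a distillation objective: a cross-entropy term on the new language's gold labels plus a KL term that keeps the student's predictions close to the teacher's on previously supported languages. The function below is a minimal NumPy illustration under common distillation assumptions (the weighting `alpha` and temperature `T` are hypothetical parameters, not taken from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Sketch of a Teacher-Student continual-learning objective:
    cross-entropy on the new language's labels, combined with a
    KL term tying the student to the teacher's output distribution
    (`alpha` and `T` are illustrative hyperparameters)."""
    n = student_logits.shape[0]
    # Cross-entropy on gold labels for the new language.
    p_student = softmax(student_logits)
    ce = -np.mean(np.log(p_student[np.arange(n), labels] + 1e-12))
    # KL(teacher || student) with temperature T, "teaching" the
    # student the teacher's behaviour on previously seen languages.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

When the student matches the teacher exactly, the KL term vanishes and only the supervised cross-entropy on the new language contributes, which is the intuition behind retaining old-language behaviour while acquiring the new one.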
