Publication:
Unsupervised Domain Adaptation on Reading Comprehension
Yu Cao, Meng Fang, B. Yu, Joey Tianyi Zhou • @arXiv • 13 November 2019
TLDR: This work provides a novel conditional adversarial self-training method (CASe), which leverages a BERT model fine-tuned on the source dataset, along with confidence filtering, to generate reliable pseudo-labeled samples in the target domain for self-training.
Citations: 34
Abstract: Reading comprehension (RC) has been studied on a variety of datasets, with performance boosted by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To address this issue, we investigate unsupervised domain adaptation on RC, in which a model is trained on a labeled source domain and applied to a target domain with only unlabeled samples. We first show that, even with the powerful BERT contextual representation, performance is still unsatisfactory when a model trained on one dataset is directly applied to another target dataset. To solve this, we propose a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset, along with confidence filtering, to generate reliable pseudo-labeled samples in the target domain for self-training. It further reduces the domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show that our approach achieves accuracy comparable to supervised models on multiple large-scale benchmark datasets.
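The confidence-filtering step described in the abstract can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the function name, the probability inputs, and the 0.9 threshold are all assumptions made here for demonstration.

```python
# Sketch of confidence-filtered pseudo-labeling for self-training.
# Assumptions (not from the paper): the fine-tuned source model has
# already produced a probability distribution over candidate labels
# for each unlabeled target-domain sample, and a fixed threshold of
# 0.9 is used as the filtering criterion.

def generate_pseudo_labels(probs, threshold=0.9):
    """Keep only target-domain samples whose most likely label
    clears the confidence threshold.

    probs: list of per-sample probability distributions
    returns: list of (sample_index, pseudo_label) pairs
    """
    pseudo = []
    for i, dist in enumerate(probs):
        label = max(range(len(dist)), key=dist.__getitem__)
        if dist[label] >= threshold:
            pseudo.append((i, label))
    return pseudo

# Example: three unlabeled target samples, two confident predictions.
probs = [
    [0.95, 0.03, 0.02],  # confident -> kept with label 0
    [0.40, 0.35, 0.25],  # uncertain -> filtered out
    [0.05, 0.92, 0.03],  # confident -> kept with label 1
]
print(generate_pseudo_labels(probs))  # [(0, 0), (2, 1)]
```

The retained pairs would then serve as training targets for the next self-training round, while the adversarial component (not shown) aligns source and target feature distributions.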
Fields of Study: Low-Resource NLP; Unsupervised Learning; Robustness in NLP; Semantic Text Processing; Machine Reading Comprehension; Green, Sustainable & Efficient Methods in NLP; Language Models & Neural Networks; Domain Adaptation; Language Model Adapters; Reasoning; Responsible & Trustworthy NLP; Natural Language Processing; Parameter-Efficient Fine-Tuning