Publication:
HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering
Tongxu Luo, Fangyu Lei, Jiahe Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu • arXiv • 22 September 2023
TLDR: A new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA is introduced, which achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.
Citations: 1
Abstract: Answering numerical questions over hybrid contents from the given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community. With their emergence, In-Context Learning and Chain-of-Thought prompting have become two particularly popular research topics in this field. In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt the model to develop the ability of retrieval thinking when dealing with hybrid data. Our method achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.
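The abstract describes retrieving chain-of-thought exemplars and placing them in a few-shot prompt. A minimal sketch of that general idea follows, assuming a simple word-overlap retriever and a hypothetical `build_prompt` helper; this is an illustration only, not the authors' implementation or the HRoT prompt format.

```python
# Hypothetical sketch of "retrieval of thought": rank stored chain-of-thought
# exemplars by similarity to the new question, then lay the best matches out
# as few-shot demonstrations before the query. (Illustrative only.)

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets (a stand-in retriever)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_prompt(question: str, exemplars: list[tuple[str, str]], k: int = 2) -> str:
    """Select the k exemplars whose questions best match `question` and
    format them as demonstrations followed by the unanswered query."""
    ranked = sorted(exemplars, key=lambda ex: token_overlap(question, ex[0]), reverse=True)
    parts = [f"Q: {q}\nReasoning: {cot}" for q, cot in ranked[:k]]
    parts.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(parts)

# Toy exemplar store: (question, chain-of-thought) pairs over table-text data.
exemplars = [
    ("What was total revenue in 2020?", "Locate the revenue rows for 2020 and sum them."),
    ("Which segment grew fastest?", "Compare year-over-year growth for each segment."),
]
prompt = build_prompt("What was total revenue in 2021?", exemplars, k=1)
```

The retrieved exemplar shares the most wording with the new question, so its reasoning chain is the one shown to the model before the query.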
Fields of Study: Information Retrieval, Green, Sustainable & Efficient Methods in NLP, Structured Data in NLP, Low-Resource NLP, Language Models & Neural Networks, Natural Language Processing, Text Generation, Responsible & Trustworthy NLP, Prompting, Prompt Learning & Prompt Engineering, Semantic Text Processing, Multimodality, Natural Language Interfaces, Question Answering