Publication:
Zero- and Few-Shots Knowledge Graph Triplet Extraction with Large Language Models
Andrea Papaluca, D. Krefl, Sergio Mendez Rodriguez, Artem Lenskiy, Hanna Suominen · arXiv · 4 December 2023
TLDR: A pipeline is proposed that dynamically gathers contextual information from a Knowledge Base, both as context triplets and as (sentence, triplets) example pairs, and provides it to the LLM through a prompt; the quality of this context is found to be strongly correlated with the model's final TE performance.
Citations: 0
Abstract: In this work, we tested the Triplet Extraction (TE) capabilities of a variety of Large Language Models (LLMs) of different sizes in the Zero- and Few-Shot settings. Specifically, we proposed a pipeline that dynamically gathers contextual information from a Knowledge Base (KB), both in the form of context triplets and of (sentence, triplets) pairs as examples, and provides it to the LLM through a prompt. The additional context allowed the LLMs to be competitive with all the older, fully trained baselines based on the Bidirectional Long Short-Term Memory (BiLSTM) Network architecture. We further conducted a detailed analysis of the quality of the gathered KB context, finding it to be strongly correlated with the final TE performance of the model. In contrast, the size of the model appeared to improve the TE capabilities of the LLMs only logarithmically.
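The prompt-assembly step described in the abstract, combining retrieved context triplets and (sentence, triplets) example pairs into a single Few-Shot prompt, can be sketched as follows. This is a minimal illustration of the general idea, not the authors' code; all function and variable names here are hypothetical, and the retrieval step from the KB is assumed to have already happened.

```python
# Hypothetical sketch of the prompting pipeline from the abstract:
# KB context (triplets) and Few-Shot examples ((sentence, triplets) pairs)
# are assembled into a single prompt for the LLM. All names are illustrative.

def format_triplet(triplet):
    """Render a (head, relation, tail) triplet as text for the prompt."""
    head, relation, tail = triplet
    return f"({head}, {relation}, {tail})"

def build_prompt(sentence, context_triplets, examples):
    """Assemble a Few-Shot TE prompt from KB context and example pairs."""
    parts = ["Extract knowledge graph triplets from the sentence."]
    if context_triplets:
        parts.append("Context triplets from the Knowledge Base:")
        parts.extend(format_triplet(t) for t in context_triplets)
    for ex_sentence, ex_triplets in examples:
        parts.append(f"Sentence: {ex_sentence}")
        parts.append("Triplets: " + "; ".join(format_triplet(t) for t in ex_triplets))
    parts.append(f"Sentence: {sentence}")
    parts.append("Triplets:")  # the LLM completes from here
    return "\n".join(parts)

prompt = build_prompt(
    "Paris is the capital of France.",
    context_triplets=[("Paris", "country", "France")],
    examples=[("Berlin is in Germany.", [("Berlin", "country", "Germany")])],
)
```

In the Zero-Shot setting, `examples` would simply be empty and only the context triplets (if any) would be included.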
Topics: Language Models & Neural Networks · Responsible & Trustworthy NLP · Structured Data in NLP · Knowledge Graphs · Knowledge Bases · Low-Resource NLP · Natural Language Processing · Multimodality · Knowledge Representation · Graphs · Semantic Text Processing · Green, Sustainable & Efficient Methods in NLP · Transformers & Large Language Models · Few-shot Learning · Information Extraction & Text Mining