Publication:

Structured Prompting: Scaling In-Context Learning to 1,000 Examples

Y. Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, Furu Wei • @arXiv • 13 December 2022

TLDR: Experimental results on a diverse set of tasks show that the structured prompting approach improves end-task performance and reduces evaluation variance over conventional in-context learning as the number of demonstration examples increases.

Citations: 36
Abstract: Large language models have exhibited intriguing in-context learning capability, achieving promising zero- and few-shot performance without updating the parameters. However, conventional in-context learning is usually restricted by length constraints, rendering it ineffective to absorb supervision from a large number of examples. In order to go beyond few shots, we introduce structured prompting that breaks the length limit and scales in-context learning to thousands of examples. Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended by the test example using a rescaled attention mechanism. So we can scale the number of exemplars with linear complexity instead of quadratic complexity with respect to length. Experimental results on a diverse set of tasks show that our approach improves end-task performance and reduces evaluation variance over conventional in-context learning as the number of demonstration examples increases. Code has been released at https://aka.ms/structured-prompting.
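
The mechanism described in the abstract can be illustrated with a short NumPy sketch: each demonstration group is encoded on its own (so its keys and values never depend on the other groups), and the test example's queries attend to all groups jointly, with the exponentiated scores on demonstration tokens divided by the number of groups M so that the demonstrations collectively receive roughly the attention mass a single context would get. This is a minimal sketch under those assumptions; the function name rescaled_attention, the argument layout, and the exact 1/M scaling are illustrative choices, not taken from the paper or its released code.

```python
import numpy as np

def rescaled_attention(q, test_kv, group_kvs, d_k):
    """Single-head attention of test-example queries over M independently
    encoded demonstration groups plus the test example's own prefix.

    q:         (T_q, d_k) query vectors of the test tokens
    test_kv:   tuple (K_t, V_t) for the test example's own tokens
    group_kvs: list of M tuples (K_i, V_i), one per demonstration group
    Returns:   (T_q, d_v) attention output
    """
    M = len(group_kvs)
    K_t, V_t = test_kv

    # Scaled dot-product logits: test tokens first, then each demo group.
    logits = [q @ K_t.T / np.sqrt(d_k)]
    logits += [q @ K.T / np.sqrt(d_k) for K, _ in group_kvs]
    logits = np.concatenate(logits, axis=-1)

    # Joint normalisation with a 1/M rescaling on demonstration tokens, so
    # the M groups together get about the mass a single context would get
    # (a simplified stand-in for the paper's rescaled attention).
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    exp[:, K_t.shape[0]:] /= M
    attn = exp / exp.sum(axis=-1, keepdims=True)

    values = np.concatenate([V_t] + [V for _, V in group_kvs], axis=0)
    return attn @ values


# Toy usage: 10 groups of 8 demonstration tokens, 4 test tokens, d = 16.
rng = np.random.default_rng(0)
d = 16
q = rng.normal(size=(4, d))
test_kv = (rng.normal(size=(4, d)), rng.normal(size=(4, d)))
groups = [(rng.normal(size=(8, d)), rng.normal(size=(8, d))) for _ in range(10)]
print(rescaled_attention(q, test_kv, groups, d_k=d).shape)  # (4, 16)
```

Because each group is encoded independently, the encoding cost grows linearly with the number of exemplars instead of quadratically with the concatenated prompt length, which is the complexity argument made in the abstract.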
