
Publication:

XGen-7B Technical Report

Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, A. R. Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Moorthy Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq R. Joty, Caiming Xiong • arXiv • 07 September 2023

TLDR: This work trains XGen, a series of 7B-parameter models with up to 8K sequence length on up to 1.5T tokens, and finetunes the XGen models on public-domain instructional data, creating their instruction-tuned counterparts (XGen-Inst).

Citations: 7
Abstract: Large Language Models (LLMs) have become ubiquitous across various domains, transforming the way we interact with information and conduct research. However, most high-performing LLMs remain confined behind proprietary walls, hindering scientific progress. Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over an input context. To address this, we have trained XGen, a series of 7B parameter models on up to 8K sequence length for up to 1.5T tokens. We have also finetuned the XGen models on public-domain instructional data, creating their instruction-tuned counterparts (XGen-Inst). We open-source our models for both research advancements and commercial applications. Our evaluation on standard benchmarks shows that XGen models achieve comparable or better results when compared with state-of-the-art open-source LLMs. Our targeted evaluation on long sequence modeling tasks shows the benefits of our 8K-sequence models over 2K-sequence open-source LLMs.
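Since the abstract notes that the models are open-sourced for research and commercial use, a minimal sketch of loading one released checkpoint through the Hugging Face transformers API is shown below. The checkpoint identifier "Salesforce/xgen-7b-8k-base" and the need for trust_remote_code (for the custom tokenizer) are assumptions about the release, not details stated on this page.

```python
# Minimal sketch: loading an XGen-7B base checkpoint with Hugging Face transformers.
# Assumptions: the weights are hosted on the Hub as "Salesforce/xgen-7b-8k-base"
# and the model ships a custom tokenizer requiring trust_remote_code=True.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Salesforce/xgen-7b-8k-base"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

# Generate a short continuation; the 8K training length allows long input contexts.
inputs = tokenizer("Large Language Models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```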


