Field of Study:
Instruction Tuning
Instruction Tuning refers to the process of fine-tuning a pre-trained language model on a dataset of instruction-response pairs. The goal is to make the model follow natural-language instructions given in the input prompt, steering its behavior without modifying the model architecture or retraining from scratch.
Synonyms:
IFT, Instruction Fine-Tuning
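The data preparation behind instruction tuning can be illustrated with a short sketch. The template below (an Alpaca-style "### Instruction / ### Response" layout) and the field names are illustrative assumptions, not a fixed standard; real pipelines also work on token ids rather than characters, and typically mask the prompt portion so the loss is computed only on the response.

```python
# Minimal sketch of instruction-tuning data preparation (illustrative only).
# The prompt template and helper names below are assumptions, not a standard API.

def format_example(instruction, response, input_text=""):
    """Render one instruction-response pair into a single training string."""
    if input_text:
        prompt = (f"### Instruction:\n{instruction}\n\n"
                  f"### Input:\n{input_text}\n\n### Response:\n")
    else:
        prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    return prompt, prompt + response

def mask_prompt_labels(prompt, full_text, ignore_index=-100):
    """Build label positions: only the response part contributes to the loss.

    Characters stand in for tokens here; real code masks token ids."""
    return [ignore_index] * len(prompt) + list(full_text[len(prompt):])

prompt, full = format_example("Translate to French.", "Bonjour")
labels = mask_prompt_labels(prompt, full)
assert all(l == -100 for l in labels[:len(prompt)])   # prompt is masked out
assert "".join(labels[len(prompt):]) == "Bonjour"     # loss only on response
```

The masking step is what distinguishes instruction tuning from plain causal fine-tuning: the model is penalized only for its response tokens, not for reproducing the instruction.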