GPT-4 Takes the Lead in Instruction-Tuning of Large Language Models: Advancing Generalization Capabilities for Real-World Tasks


Large Language Models (LLMs) have demonstrated outstanding generalization abilities, such as in-context learning and chain-of-thought reasoning. Researchers have therefore turned to instruction-tuning techniques that help LLMs follow natural-language instructions and complete real-world tasks. This is accomplished either by supervised fine-tuning on publicly available benchmarks and datasets augmented with manually or automatically generated instructions, or by training the model on a wide range of tasks using human-annotated prompts and feedback.

Research on instruction tuning has developed efficient ways to raise the zero-shot and few-shot generalization capacities of LLMs. One of these techniques, Self-Instruct tuning, aligns LLMs to human intent by learning from instruction-following data generated by state-of-the-art, instruction-tuned teacher LLMs. The recent success of ChatGPT and GPT-4 offers a wealth of opportunities to improve open-source LLMs through instruction tuning. LLaMA, a family of open-source LLMs, performs on par with proprietary LLMs such as GPT-3.
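To make the Self-Instruct idea concrete, here is a minimal sketch of how instruction-following data can be generated with a teacher model such as GPT-4. The prompt layout, seed instructions, and output file name are illustrative assumptions, not the paper's exact pipeline; the call uses the OpenAI Python client as it existed at the time of writing.

```python
# Minimal sketch: generate Alpaca-style (instruction, input, output) triples
# by querying a teacher model. Prompt layout, seed instructions, and file
# name are illustrative assumptions, not the paper's exact pipeline.
import json
import openai  # assumes the OpenAI client is installed and an API key is configured

def ask_teacher(instruction: str, input_text: str = "") -> str:
    """Query the teacher model (here GPT-4) for a response to one instruction."""
    prompt = instruction if not input_text else f"{instruction}\n\nInput: {input_text}"
    completion = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return completion.choices[0].message["content"]

seed_instructions = [
    {"instruction": "Give three tips for staying healthy.", "input": ""},
    {"instruction": "Summarize the following paragraph.", "input": "Large language models ..."},
]

dataset = [
    {
        "instruction": item["instruction"],
        "input": item["input"],
        "output": ask_teacher(item["instruction"], item["input"]),
    }
    for item in seed_instructions
]

with open("gpt4_instruction_data.json", "w", encoding="utf-8") as f:
    json.dump(dataset, f, ensure_ascii=False, indent=2)
```

Supervised fine-tuning of LLaMA on triples of this form is what Alpaca-style instruction tuning amounts to; swapping GPT-3.5 for GPT-4 as the teacher is the change studied in this work.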

Thanks to its strong performance and low cost, Self-Instruct tuning has been readily adopted to train LLaMA to follow instructions. For instance, Vicuna uses around 700K instruction-following samples from user-shared ChatGPT conversations, whereas Stanford Alpaca uses 52K instruction-following samples produced by GPT-3.5. The researchers present the first attempt to use GPT-4 as the teacher for Self-Instruct tuning, aiming to advance the state of the art in instruction tuning for LLMs.

In this study, researchers from Microsoft contribute the following: 

GPT-4 data: They release data generated by GPT-4, including the 52K English and Chinese instruction-following dataset and GPT-4 feedback data that scores the outputs of three instruction-tuned models (a sketch of the record layouts appears after this list).

Models and assessment: They have trained reward models and instruction-tuned LLaMA models on the data generated by GPT-4. To gauge the effectiveness of instruction-tuned LLMs, they employ three metrics evaluated on test samples (i.e., unseen instructions): human evaluation on three alignment criteria, automatic evaluation using GPT-4 feedback, and ROUGE-L on machine-generated (unnatural) instructions; a minimal ROUGE-L sketch follows below.
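For reference, the two kinds of released data roughly correspond to record layouts like the following. The field names are assumptions modeled on the common Alpaca convention and may differ from the exact schema of the released files.

```python
# Illustrative record layouts; field names are assumptions, not the exact
# schema of the released files.

# One instruction-following training example (English or Chinese):
instruction_record = {
    "instruction": "Identify the odd one out.",
    "input": "Twitter, Instagram, Telegram",
    "output": "The odd one out is Telegram.",
}

# One GPT-4 feedback record rating the outputs of three instruction-tuned
# models on the same prompt; such comparisons can supervise a reward model.
feedback_record = {
    "instruction": "Identify the odd one out.",
    "responses": ["The odd one out is Telegram.", "Instagram.", "I cannot tell."],
    "gpt4_scores": [9.0, 6.5, 2.0],
}
```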
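The ROUGE-L part of the evaluation can be sketched as follows. The rouge-score package and the example strings are assumptions for illustration, not the paper's exact evaluation harness.

```python
# Minimal ROUGE-L sketch: compare a model's answer on an unseen instruction
# against a reference answer. Uses the rouge-score package (pip install rouge-score).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

reference = "Telegram is the odd one out because it is primarily a messaging app."
model_output = "The odd one out is Telegram, since it is a messaging app."

result = scorer.score(reference, model_output)["rougeL"]
print(f"ROUGE-L F1: {result.fmeasure:.3f}")
```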

This research demonstrates the effectiveness of instruction tuning with GPT-4. The empirical investigation confirms the value of GPT-4-generated data for LLM instruction tuning and offers practical guidance for building a general-purpose instruction-following agent based on LLMs. The researchers release the 52K English and Chinese instruction-following instances created with GPT-4, along with model checkpoints fine-tuned from LLaMA, in the hope that their empirical findings and resources will help build open-source, general-purpose LLMs that are better aligned with human values when completing tasks.

This is still a work in progress, and several avenues remain to be explored. (i) Scale of data and model: the base LLaMA model is 7B and the GPT-4 dataset contains 52K samples, whereas Vicuna uses the 13B LLaMA model and around 700K conversation turns collected from multi-turn ShareGPT data. It would be promising to keep collecting GPT-4 instruction-following data, combine it with the ShareGPT data, and train larger LLaMA models to improve performance. (ii) RLHF: the reward model is currently used only during the decoding phase, yet the comparison data is likely to provide useful feedback for LLM training. It seems sensible to continue training LLMs with reward models, for example through reinforcement learning with machine-generated feedback. The researchers make both the GPT-4-generated data and the codebase public.
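As a sketch of what using the reward model during the decoding phase can look like, the snippet below reranks sampled candidates with a reward model (best-of-n selection). The generate and reward callables stand in for a LLaMA decoder and a trained reward model; they are assumptions for illustration, not the released code.

```python
# Minimal best-of-n sketch: sample several candidate responses, score each
# with a reward model, and return the highest-scoring one.
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],        # placeholder: LLaMA decoder with sampling
              reward: Callable[[str, str], float],   # placeholder: trained reward model
              n: int = 4) -> str:
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    scores = [reward(prompt, c) for c in candidates]
    return candidates[scores.index(max(scores))]
```

A natural next step, as the authors note, is to use the same comparison signal not just for decoding-time reranking but for reinforcement learning with machine-generated feedback.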


Check out the Paper, Github, and Project. All credit for this research goes to the researchers on this project.
