Transfer Learning Using Multitask Prompt Tuning (MPT)

Pretrained language models (PLMs), when finetuned, have significantly improved performance on many downstream NLP tasks. But because current PLMs can contain hundreds of millions of parameters, the traditional paradigm of full task-specific finetuning (FT) is difficult to scale to a large number of tasks. The need to learn far fewer parameters per task than full finetuning requires has led …
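
To make the "fewer parameters per task" idea concrete, here is a minimal sketch of vanilla soft prompt tuning, the building block that MPT extends. It assumes a PyTorch setup; the class name, dimensions, and the stand-in `plm` module are all illustrative, not from the article, and a real PLM would need to accept input embeddings directly.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Freezes a pretrained model and trains only a small soft prompt."""

    def __init__(self, plm: nn.Module, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.plm = plm
        # Only these prompt embeddings are trained; the PLM stays frozen.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        for p in self.plm.parameters():
            p.requires_grad = False

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prompt to every sequence in the batch.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.plm(torch.cat([prompt, input_embeds], dim=1))

if __name__ == "__main__":
    dummy_plm = nn.Linear(768, 768)  # stand-in for a real PLM encoder
    model = SoftPromptModel(dummy_plm, embed_dim=768, prompt_len=20)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters per task: {trainable}")  # 20 * 768 = 15,360
```

Under these assumptions, each task adds only `prompt_len * embed_dim` trainable parameters (here about 15K), versus hundreds of millions for full finetuning of the PLM itself.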
