Prompt tuning is a parameter-efficient method that can even surpass traditional fine-tuning in few-shot scenarios. As pre-trained language models continue to grow in parameter count, full fine-tuning becomes increasingly impractical and consumes substantial computing resources, which gives prompt-based methods broad application prospects. In our experiments, we find that prefix tuning, a prompt-based method, fails to converge or converges very slowly when training samples are scarce. This paper proposes a cross-task parameter transfer method that transfers parameters trained on prompt tuning tasks to prefix tuning, improving training speed and alleviating the non-convergence or slow convergence of prefix tuning.
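To make the transfer idea concrete, below is a minimal PyTorch sketch. It assumes the transfer amounts to initializing the prefix-tuning embeddings from a soft prompt already trained by prompt tuning on a source task, rather than from random noise; all names, dimensions, and the reparameterizing MLP are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions, chosen for illustration only.
NUM_VIRTUAL_TOKENS = 20   # length of the soft prompt / prefix
EMBED_DIM = 768           # model hidden size
NUM_LAYERS = 12           # transformer layers that receive prefixes

# Soft prompt learned by prompt tuning on a source task: a single
# matrix of virtual-token embeddings prepended to the input.
# (Random tensor here stands in for the trained weights.)
source_prompt = torch.randn(NUM_VIRTUAL_TOKENS, EMBED_DIM)

class PrefixEncoder(nn.Module):
    """Reparameterized prefix for prefix tuning: an MLP maps prefix
    embeddings to per-layer key/value activations."""
    def __init__(self, num_tokens, embed_dim, num_layers):
        super().__init__()
        self.prefix_embeddings = nn.Parameter(torch.empty(num_tokens, embed_dim))
        nn.init.normal_(self.prefix_embeddings, std=0.02)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.Tanh(),
            nn.Linear(embed_dim, 2 * num_layers * embed_dim),
        )

    def forward(self):
        # -> (num_tokens, 2 * num_layers * embed_dim),
        # later split into per-layer key/value prefixes.
        return self.mlp(self.prefix_embeddings)

prefix = PrefixEncoder(NUM_VIRTUAL_TOKENS, EMBED_DIM, NUM_LAYERS)

# Cross-task transfer: initialize the prefix embeddings from the
# soft prompt trained by prompt tuning instead of from scratch.
with torch.no_grad():
    prefix.prefix_embeddings.copy_(source_prompt)
```

Under this reading, the trained soft prompt supplies a warm start for the prefix parameters, so prefix tuning begins from a task-informed region of parameter space instead of a random one, which is one plausible mechanism for the faster and more reliable convergence the paper reports.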