Efficient Fine-Tuning of Large Language Models
We propose a parameter-efficient fine-tuning method for large language models that adapts to new tasks by updating only a small fraction of the model's parameters, leaving the remaining pre-trained weights unchanged and thereby preserving pre-trained knowledge. Across multiple benchmarks, the method matches the performance of full-parameter fine-tuning.
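The abstract does not specify which parameters are updated, so the following is an illustrative sketch only, using a generic low-rank adapter (LoRA-style) as a stand-in for the proposed method; the class name `LowRankAdapter` and the `rank` and `alpha` hyperparameters are hypothetical choices, not taken from the paper.

```python
# Illustrative sketch: freeze the pre-trained weights and train only a small
# low-rank update, so the number of trainable parameters stays tiny.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wraps a frozen linear layer and adds a small trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # frozen: preserves pre-trained knowledge
        # Trainable low-rank factors: effective weight is W + (alpha/rank) * B @ A.
        # B starts at zero so the adapted model initially equals the base model.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection from a pre-trained model, then optimize only the
# adapter parameters (here, A and B) rather than the full weight matrix.
layer = LowRankAdapter(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

With rank 8 on a 768x768 projection, the adapter trains roughly 12K parameters instead of about 590K, which is the sense in which such methods update "only a small fraction" of the model.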