diff --git a/README.md b/README.md
index 8ae391b8..81d0cbe4 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Paper: https://arxiv.org/abs/2106.09685
 
 *Update 2/2023: LoRA is now supported by the [State-of-the-art Parameter-Efficient Fine-Tuning (PEFT)](https://github.com/huggingface/peft) library by Hugging Face.*
 
-LoRA reduces the number of trainable parameters by learning pairs of rank-decompostion matrices while freezing the original weights.
+LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights.
 This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment all without introducing inference latency.
 LoRA also outperforms several other adaptation methods including adapter, prefix-tuning, and fine-tuning.
 
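
As a rough illustration of the sentence being corrected above, and not the repository's actual `loralib` code, a linear layer with a frozen pretrained weight plus a trainable pair of rank-decomposition matrices might look like the following PyTorch sketch; the class name `LoRALinear` and the `r`/`alpha` defaults are assumptions made for this example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of a LoRA-style layer: frozen weight W plus trainable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained weight stays frozen; only the low-rank factors A and B are trained.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        self.lora_A = nn.Parameter(torch.zeros(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        # A gets a small random init; B starts at zero so the update is initially a no-op.
        nn.init.normal_(self.lora_A, std=0.02)
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update. After training, B @ A can be
        # merged into the frozen weight, so no extra inference latency is introduced.
        return x @ self.weight.T + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

In this sketch only `lora_A` and `lora_B` have `requires_grad=True`, which is what makes the number of trainable parameters (and the per-task checkpoint size) so small compared to full fine-tuning.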