From adfcc55a8988863a5a9a627d4c6b09fef38e945f Mon Sep 17 00:00:00 2001
From: Brian Chan <42637621+cwingho@users.noreply.github.com>
Date: Mon, 30 Oct 2023 15:49:02 +0800
Subject: [PATCH] fixed typo in readme

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8ae391b8..81d0cbe4 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Paper: https://arxiv.org/abs/2106.09685
 
 *Update 2/2023: LoRA is now supported by the [State-of-the-art Parameter-Efficient Fine-Tuning (PEFT)](https://github.com/huggingface/peft) library by Hugging Face.*
 
-LoRA reduces the number of trainable parameters by learning pairs of rank-decompostion matrices while freezing the original weights.
+LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights.
 This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment all without introducing inference latency.
 LoRA also outperforms several other adaptation methods including adapter, prefix-tuning, and fine-tuning.
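For context on the sentence being fixed: the "pairs of rank-decomposition matrices" can be sketched as below. This is a minimal NumPy illustration of the idea, not the repository's actual implementation; all names and dimensions are hypothetical.

```python
import numpy as np

# Frozen pretrained weight W (d x k) is left untouched during fine-tuning.
d, k, r = 1024, 1024, 8  # r << min(d, k) is the LoRA rank
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))

# LoRA learns a pair of rank-decomposition matrices: A (r x k) and B (d x r).
# B starts at zero, so the adapted model initially matches the base model.
A = rng.standard_normal((r, k))
B = np.zeros((d, r))

def forward(x):
    # Forward pass uses W + B @ A; only A and B receive gradient updates.
    return x @ (W + B @ A).T

# Trainable parameters drop from d*k to r*(d + k).
full_params = d * k        # 1,048,576
lora_params = r * (d + k)  # 16,384
```

Because `B @ A` can be merged into `W` after training, switching tasks at deployment time means swapping a small `(A, B)` pair rather than a full copy of `W`, and the merged weight adds no inference latency.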