- In Hugging Face, download `sentence-transformers/multi-qa-MiniLM-L6-cos-v1` and place the `multi-qa-MiniLM-L6-cos-v1` model in the `sentence-transformers` folder;
- In Hugging Face, download `google/pegasus-large` and place the `pegasus-large` model in the `google` folder;
- In Hugging Face, download `meta-llama/Llama-2-7b-chat-hf`, `lmsys/vicuna-7b-v1.5`, and `google/flan-t5-small`, and place them in the `model` folder;
- In Hugging Face, download `FacebookAI/roberta-large` and place the `roberta-large` model in the `facebook` folder.
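If you prefer to script the setup, here is a minimal sketch that uses `huggingface_hub.snapshot_download` to place each checkpoint in the folder layout above; the `huggingface_hub` dependency and the exact sub-folder names are assumptions, not requirements stated by this repository:

```python
# Sketch: fetch the required checkpoints into the folder layout described above.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub); the sub-folder
# names below are a guess at the layout expected by the scripts in this repository.
from huggingface_hub import snapshot_download

targets = [
    ("sentence-transformers/multi-qa-MiniLM-L6-cos-v1", "sentence-transformers/multi-qa-MiniLM-L6-cos-v1"),
    ("google/pegasus-large", "google/pegasus-large"),
    ("meta-llama/Llama-2-7b-chat-hf", "model/Llama-2-7b-chat-hf"),
    ("lmsys/vicuna-7b-v1.5", "model/vicuna-7b-v1.5"),
    ("google/flan-t5-small", "model/flan-t5-small"),
    ("FacebookAI/roberta-large", "facebook/roberta-large"),
]

for repo_id, local_dir in targets:
    # Llama-2 is gated on the Hub: accept the license and run `huggingface-cli login` first.
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
```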
Run the following command to install the required libraries:

```bash
pip install evaluate bert-score numpy transformers torch
```

To run:

```bash
python3 run.py
```

Ablation (without reflection: `ablation1.py`, `ablation2.py`, `ablation3.py`; without decomposition: `ablation4.py`):
```bash
python3 ablation/ablation1.py
python3 ablation/ablation2.py
python3 ablation/ablation3.py
python3 ablation/ablation4.py
```
- The ROUGE metric is already calculated during model output, while the SBERT and BERTScore metrics are obtained by running the following command:

```bash
python3 eval.py
```
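For reference, here is a minimal sketch of the kind of scoring `eval.py` performs, assuming it compares generated outputs with references using the downloaded `multi-qa-MiniLM-L6-cos-v1` (SBERT cosine similarity) and `roberta-large` (BERTScore) checkpoints; the data loading, variable names, and the extra `sentence-transformers` dependency are assumptions rather than details taken from `eval.py`:

```python
# Hypothetical sketch of SBERT- and BERTScore-style evaluation; not the actual eval.py.
from bert_score import score as bert_score
from sentence_transformers import SentenceTransformer, util

predictions = ["a generated answer ..."]   # placeholder model outputs
references = ["the reference answer ..."]  # placeholder gold references

# SBERT metric: cosine similarity computed with the locally downloaded MiniLM checkpoint
sbert = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
pred_emb = sbert.encode(predictions, convert_to_tensor=True)
ref_emb = sbert.encode(references, convert_to_tensor=True)
sbert_sim = util.cos_sim(pred_emb, ref_emb).diagonal().mean().item()

# BERTScore metric: uses the roberta-large checkpoint downloaded above
P, R, F1 = bert_score(predictions, references, model_type="roberta-large", lang="en")

print(f"SBERT cosine similarity: {sbert_sim:.4f}")
print(f"BERTScore F1: {F1.mean().item():.4f}")
```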
- Modified based on the following repository: https://github.com/GAIR-NLP/factool

The F-C and F-R metrics are obtained by running the following commands:

```bash
cd factool/
python3 run.py
python3 computef.py
```
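For context, the upstream FacTool repository exposes a Python API roughly as sketched below; the modified `run.py` and `computef.py` here build on it, and the prompt, response, and API keys are placeholders. The mapping of F-C and F-R to claim-level and response-level factuality is likewise an assumption:

```python
# Illustrative use of the upstream FacTool API (https://github.com/GAIR-NLP/factool);
# the modified scripts in this repository may differ. Requires OPENAI_API_KEY (and a
# search key such as SERPER_API_KEY for knowledge-based QA) in the environment.
from factool import Factool

factool_instance = Factool("gpt-4")  # foundation model used for claim extraction and verification

inputs = [
    {
        "prompt": "placeholder question",
        "response": "placeholder model answer to be fact-checked",
        "category": "kbqa",  # knowledge-based QA task type
    },
]

# Returns per-claim and per-response factuality judgments, which computef.py
# presumably aggregates into the F-C and F-R scores.
response_list = factool_instance.run(inputs)
print(response_list)
```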