GitHub repository for "Mitigating Hallucination within Large Language Models using Explanatory Prompting". Dataset used for experiments taken from "How Language Model Hallucinations Can Snowball".
confidence_estimate.py - Program used to generate confidence estimates when computing calibration, used to verify the theoretical lower bound from "Calibrated Language Models Must Hallucinate".
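One standard way to score calibration from the (confidence, correctness) pairs such a script produces is the binned expected calibration error. The sketch below is a generic ECE implementation, not necessarily the exact metric confidence_estimate.py computes:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average gap between mean confidence
    and empirical accuracy within each confidence bin."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into last bin
        bins[idx].append((c, y))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        ece += (len(b) / total) * abs(avg_conf - acc)
    return ece
```

A model that is always 100% confident but always wrong scores an ECE of 1.0; a perfectly calibrated model scores 0.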
run_experiments.py - Program used to run trials of experiments on the Flight Connectivity Dataset using the varying prompting strategies found in the prompts directory.
No Prompt - Baseline experimental trial with no added prompt
Verified SOTA - Add "Let's think step by step" prompt to run_experiments.py
Topological Order - Add prompt found in prompts/top_order.txt to run_experiments.py
Zero-Shot Explanatory Prompting - Add prompt found in prompts/explanatory_prompt.txt to run_experiments.py
Few-Shot Explanatory Prompting - Add prompt found in prompts/few_shot.txt to run_experiments.py
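Each strategy above amounts to prepending a prompt to every dataset query before it is sent to the model. A minimal sketch of how run_experiments.py might assemble the final prompt per strategy (the strategy names and build_prompt helper are hypothetical; the file paths are those listed above):

```python
from pathlib import Path

# Map each strategy to its prompt; file-based strategies load from prompts/.
STRATEGY_PROMPTS = {
    "no_prompt": "",
    "verified_sota": "Let's think step by step.",
    "topological_order": "prompts/top_order.txt",
    "zero_shot_explanatory": "prompts/explanatory_prompt.txt",
    "few_shot_explanatory": "prompts/few_shot.txt",
}

def load_strategy_prompt(strategy: str) -> str:
    """Return the prompt text for a strategy, reading from disk if needed."""
    value = STRATEGY_PROMPTS[strategy]
    if value.endswith(".txt"):
        return Path(value).read_text().strip()
    return value

def build_prompt(strategy: str, question: str) -> str:
    """Prepend the strategy's prompt (if any) to a dataset question."""
    prefix = load_strategy_prompt(strategy)
    return f"{prefix}\n\n{question}" if prefix else question
```

For example, build_prompt("verified_sota", q) prefixes q with "Let's think step by step.", while build_prompt("no_prompt", q) returns q unchanged.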