
GitHub repository for "Explanatory Prompting". Dataset used for experiments taken from "How Language Model Hallucinations Can Snowball".


AlexBraverman/ExplanatoryPrompting


Explanatory Prompting

GitHub repository for "Mitigating Hallucination within Large Language Models using Explanatory Prompting". Dataset used for experiments taken from "How Language Model Hallucinations Can Snowball".

Python Files

confidence_estimate.py - Generates confidence estimations during calibration calculations when verifying the theoretical lower bound from "Calibrated Language Models Must Hallucinate".
run_experiments.py - Runs experimental trials on the Flight Connectivity Dataset using the prompting strategies found in the prompts directory.
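As an illustration of the kind of calibration computation confidence_estimate.py supports, the sketch below bins confidence estimates and compares each bin's average confidence against its empirical accuracy (expected calibration error). The function name and binning scheme are illustrative assumptions, not the repository's actual implementation.

```python
# Hypothetical sketch of a calibration check; not the repository's code.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average |accuracy - confidence|
    per bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Map a confidence in [0, 1] to a bin index; clamp 1.0 into the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(accuracy - avg_conf)
    return ece

# A perfectly calibrated toy set (50% confidence, 50% accuracy) has ECE 0.
print(expected_calibration_error([0.5, 0.5], [True, False]))  # → 0.0
```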

Experimental Results

No Prompt - Baseline experimental trial
Verified SOTA - Add "Let's think step by step" prompt to run_experiments.py
Topological Order - Add prompt found in prompts/top_order.txt to run_experiments.py
Zero-Shot Explanatory Prompting - Add prompt found in prompts/explanatory_prompt.txt to run_experiments.py
Few-Shot Explanatory Prompting - Add prompt found in prompts/few_shot.txt to run_experiments.py
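The strategies above differ only in the instruction text prepended to each query. A minimal sketch of how such a prefix might be applied, assuming a hypothetical `build_query` helper and the prompt file paths listed above (the loading and formatting logic is an assumption, not the repository's actual code):

```python
# Hypothetical sketch of selecting a prompting strategy; file names mirror the
# prompts directory above, but the loading/formatting logic is an assumption.
from pathlib import Path

STRATEGIES = {
    "no_prompt": None,
    "verified_sota": "Let's think step by step",  # inline text, per the README
    "topological_order": "prompts/top_order.txt",
    "zero_shot_explanatory": "prompts/explanatory_prompt.txt",
    "few_shot_explanatory": "prompts/few_shot.txt",
}

def build_query(question: str, strategy: str) -> str:
    """Prepend the strategy's instruction text (if any) to the question."""
    source = STRATEGIES[strategy]
    if source is None:
        return question  # baseline: no added prompt
    if source.endswith(".txt"):
        prefix = Path(source).read_text()  # load prompt text from disk
    else:
        prefix = source
    return f"{prefix}\n\n{question}"

print(build_query("Is there a flight from A to B?", "verified_sota"))
```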
