Repo for "Large Language Model Reasoning Failures"
A rigorous foundation for provable agentic reasoning. The repository establishes a trust layer for neurosymbolic AI by formalizing optimization constraints in Lean 4, replacing empirical approximation with deductive verification so that agentic behaviors provably adhere to strict safety bounds.
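The repository's actual Lean sources are not shown here, but a minimal sketch can illustrate what "formalizing a safety bound" looks like in Lean 4: state the bound as a proposition and discharge it with a machine-checked proof. All names below (`withinBound`, `stepCost`, the bound of 10) are hypothetical illustrations, not code from this repo; the proof relies on Lean 4's built-in `omega` decision procedure for linear arithmetic.

```lean
/-- An action is "safe" when its scalar cost stays within a fixed bound.
    (Hypothetical predicate for illustration.) -/
def withinBound (bound cost : Nat) : Prop := cost ≤ bound

/-- A toy cost model: cost grows linearly with the number of steps taken.
    (Hypothetical; stands in for whatever cost an agent's plan incurs.) -/
def stepCost (steps : Nat) : Nat := 2 * steps

/-- Deductive guarantee: any plan of at most 5 steps respects a bound of 10.
    Checked by Lean's kernel, so it holds for *all* such plans, not just
    the ones we happened to test. -/
theorem stepCost_safe (steps : Nat) (h : steps ≤ 5) :
    withinBound 10 (stepCost steps) := by
  unfold withinBound stepCost
  omega
```

Because the theorem is verified by Lean's kernel, the bound holds for every input satisfying the hypothesis rather than for sampled test cases, which is the sense in which deductive verification replaces empirical approximation.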