mihikagaonkar/Memory-Agent

Agent Memory Simulator

A minimal demo app that combines a chat UI with structured memory extraction: an LLM parses user messages for persistent facts and stores them so the support agent can use them in later replies.

Demo instructions

  1. Install dependencies

    pip install -r requirements.txt
  2. Set your Groq API key (required for chat and memory extraction)

    On Windows:
    set GROQ_API_KEY=your_key_here

    On macOS/Linux:
    export GROQ_API_KEY=your_key_here

    Get a key at console.groq.com.

  3. Run the app

    uvicorn app:app --reload
  4. Open in browser: http://127.0.0.1:8000/

  5. Try it

    • Send messages (e.g. “My order #123 is delayed” or “I prefer refunds over replacements”).
    • Watch the Agent Memory panel fill with extracted facts.
    • Keep chatting; the agent uses those facts in its replies.
    • Use Reset to clear the current session’s chat and memory and start over.

Architecture summary

  • Backend: FastAPI serves the UI, static assets, and JSON APIs. SQLite (via SQLAlchemy) stores messages and memory per session.
  • Chat flow: Each user message is saved, then an LLM extracts fact strings (JSON list), which are stored in the memory table. Another LLM call generates the reply using the last 5 messages plus all memory facts; the assistant message is saved and returned with the updated memory list.
  • Frontend: Single-page chat plus an “Agent Memory” sidebar. Session ID is stored in localStorage so memory persists across browser restarts. Reset calls /reset and clears the in-page chat and memory display.
  Layer                 Role
  app.py                Routes, DB init, orchestration
  models.py             SQLAlchemy models and DB engine
  utils.py              Message/memory persistence, LLM calls (Groq)
  templates/ + static/  Chat UI and behavior
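The reply-generation step described above (last 5 messages plus all memory facts) can be sketched as follows. The helper below is a hypothetical stand-in for the prompt assembly in utils.py, not the actual implementation:

```python
def build_reply_messages(history: list[dict], facts: list[str]) -> list[dict]:
    """Assemble the chat-completion payload: a system prompt containing a
    'Memory' section that lists all stored facts, followed by the last
    5 messages of the conversation.

    history entries are {"role": "user" | "assistant", "content": str}.
    """
    memory_block = "\n".join(f"- {fact}" for fact in facts) or "(none)"
    system = (
        "You are a support agent.\n"
        "Memory:\n"
        f"{memory_block}\n"
        "Use these facts when relevant."
    )
    # Only the 5 most recent messages are sent; older context is carried
    # implicitly by the memory facts above.
    return [{"role": "system", "content": system}] + history[-5:]
```

The resulting list can be passed directly as the `messages` argument of an OpenAI-style chat-completions call such as the Groq SDK's.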

Key feature: structured memory extraction

After every user message, the app calls the LLM with a dedicated prompt:

  • Prompt: “Extract any persistent user facts from this message. Return only a JSON list of fact strings.”
  • Output: e.g. ["User has a delayed order", "User prefers refunds"]
  • Storage: Each string is stored as a row in the memory table for that session.
  • Usage: When generating the next reply, the agent receives a “Memory” section listing these facts and is instructed to use them when relevant.

So the agent gains a simple, explicit memory layer instead of relying only on the last few messages.
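A minimal sketch of this extraction step, assuming the Groq Python SDK's OpenAI-compatible chat-completions API; `parse_fact_list` and `extract_facts` are hypothetical names, not the functions in utils.py:

```python
import json

EXTRACT_PROMPT = (
    "Extract any persistent user facts from this message. "
    "Return only a JSON list of fact strings."
)

def parse_fact_list(raw: str) -> list[str]:
    """Parse the model's output into a clean list of fact strings.

    Falls back to an empty list if the model returns anything other
    than a JSON list, and drops non-string or blank entries.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    if not isinstance(data, list):
        return []
    return [f.strip() for f in data if isinstance(f, str) and f.strip()]

def extract_facts(client, message: str, model: str = "llama-3.1-8b-instant") -> list[str]:
    """One extraction call per user message (client is a groq.Groq instance;
    the model name here is an assumption)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": EXTRACT_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return parse_fact_list(resp.choices[0].message.content)
```

Each string returned would then be inserted as one row in the session's memory table. The defensive parsing matters because the model may occasionally return prose or malformed JSON despite the prompt.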
