A LangChain-based agent service that helps professors find suitable classrooms at Dartmouth College based on their teaching requirements.
This agent replaces the hardcoded tool invocation approach with LangChain's built-in tool calling capabilities. It provides a FastAPI endpoint that the backend can call to process classroom search requests using natural language.
- Frontend -> Backend -> Agent Service
- Backend handles authentication and type validation
- Agent uses LangChain for intelligent tool selection
- Tools query the backend's classroom database
This template uses OpenAI models via LangChain. You need an OpenAI API key.
1. **Get an OpenAI API key** from the OpenAI Platform.

2. **Create a `.env` file:**

   ```bash
   cp .env.example .env
   # Edit .env and add your OPENAI_API_KEY
   ```

3. **Update `utils/model.py`:**

   ```python
   from langchain_openai import ChatOpenAI

   model = ChatOpenAI(
       model="gpt-3.5-turbo",
       temperature=0,
   )
   ```
For other providers (Anthropic, Google, etc.), update utils/model.py with the appropriate LangChain integration and set the required API keys.
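As an illustration, `utils/model.py` swapped to Anthropic might look like the sketch below. This assumes the `langchain-anthropic` package is installed and `ANTHROPIC_API_KEY` is set in your environment; the model name is illustrative, not prescribed by this template.

```python
# utils/model.py -- example swap to Anthropic (model name is illustrative)
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-5-sonnet-latest",  # any Claude chat model works here
    temperature=0,
)
```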
LangGraph provides a development server that automatically reloads your graph when you make changes:
1. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

2. **Configure environment:**

   ```bash
   cp .env.example .env
   # Edit .env and set:
   # - OPENAI_API_KEY
   # - BACKEND_URL (URL of the backend API, e.g., http://localhost:5000)
   # - PORT (agent service port, default: 8000)
   ```
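The resulting `.env` might look like this (all values are placeholders — use your own key and ports):

```
OPENAI_API_KEY=sk-...
BACKEND_URL=http://localhost:5000
PORT=8000
```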
Run the FastAPI server that the backend will call:

```bash
python app.py
```

The agent will be available at http://localhost:8000 with the following endpoints:

- `POST /chat` - Main chat endpoint
- `GET /health` - Health check endpoint
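A request to `POST /chat` might look like the fragment below. The exact payload schema is defined in `app.py`; the field names here are illustrative assumptions, not the service's confirmed contract.

```json
{
  "messages": [
    {"role": "user", "content": "I need a room for a 30-person lecture with a projector."}
  ]
}
```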
Test the agent interactively in the terminal:
```bash
python main.py
```

This runs a simple chat loop where you can test the agent directly.
Use LangGraph Studio for visual debugging:

```bash
langgraph dev
```

- Backend receives chat request from frontend with user authentication
- Backend validates Dartmouth token and forwards to agent service
- Agent processes messages using LangChain workflow
- Agent decides whether to:
  - Gather more information from the user
  - Call the `query_classrooms_basic` tool for an initial search
  - Call the `query_classrooms_with_amenities` tool for a detailed search
- Tools make HTTP requests to backend classroom API
- Agent formats results and returns to backend
- Backend sends response to frontend
- `query_classrooms_basic`: Search by class style and size
- `query_classrooms_with_amenities`: Search with detailed amenities
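Both tools are thin HTTP clients over the backend's classroom API. As a minimal, stdlib-only sketch of how `query_classrooms_basic` might build its backend request — the `/api/classrooms` path and parameter names are assumptions for illustration, not the backend's actual contract:

```python
from urllib.parse import urlencode

# Assumption: in practice this comes from the BACKEND_URL environment variable
BACKEND_URL = "http://localhost:5000"

def build_basic_query(class_style: str, class_size: int) -> str:
    """Build the backend search URL for a basic classroom query.

    The path and parameter names below are illustrative; the real tool
    would match the backend API's actual route and query parameters.
    """
    params = urlencode({"style": class_style, "min_capacity": class_size})
    return f"{BACKEND_URL}/api/classrooms?{params}"

# e.g. build_basic_query("lecture", 30)
# -> "http://localhost:5000/api/classrooms?style=lecture&min_capacity=30"
```

The actual tool would wrap a function like this with LangChain's `@tool` decorator and perform the HTTP GET, returning the matched classrooms to the agent.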
The `langgraph.json` file configures your graph:
```json
{
  "dependencies": ["./agent.py"],
  "graphs": {
    "react_agent_template": "./agent.py:workflow"
  },
  "env": "./.env"
}
```

Create `deploy.py`:

```python
from fastapi import FastAPI
from langserve import add_routes

from agent import workflow

app = FastAPI()
add_routes(app, workflow, path="/agent")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```
3. **Run the deployment:**

   ```bash
   python deploy.py
   ```
4. **Access your deployed agent:**
   - API endpoint: http://localhost:8000/agent
   - Interactive docs: http://localhost:8000/docs
1. **Create a Dockerfile:**

   ```dockerfile
   FROM python:3.11-slim

   WORKDIR /app
   COPY requirements.txt .
   RUN pip install -r requirements.txt
   COPY . .

   CMD ["langgraph", "dev", "--host", "0.0.0.0", "--port", "8000"]
   ```
2. **Build and run:**

   ```bash
   docker build -t langgraph-agent .
   docker run -p 8000:8000 langgraph-agent
   ```
```bash
# Start the development server on port 2024
langgraph dev --host 0.0.0.0 --port 2024

# Or use the default port (2024) with explicit host binding
langgraph dev --host 0.0.0.0
```

Access the server using these URLs:

- API Documentation: http://localhost:2024/docs
- LangGraph Studio: https://smith.langchain.com/studio/?baseUrl=http://localhost:2024

Note: Use `localhost` or `127.0.0.1` in browser URLs, NOT `0.0.0.0`. The root path (`/`) returns 404 - use `/docs` for the API documentation.

```bash
# Using LangServe
python deploy.py

# Or using Docker
docker run -p 8000:8000 langgraph-agent
```

Project structure:

```
├── agent.py            # Main agent definition
├── langgraph.json      # LangGraph configuration
├── main.py             # Entry point for standalone usage
├── requirements.txt    # Python dependencies
├── utils/
│   ├── model.py        # Model configuration
│   ├── state.py        # State schema definition
│   └── tools.py        # Available tools for the agent
└── README.md           # This file
```
1. **Define your tool in `utils/tools.py`:**

   ```python
   from langchain_core.tools import tool

   @tool
   def your_custom_tool(input: str) -> str:
       """Description of what your tool does."""
       # Your tool logic here
       return result
   ```
2. **Add it to the agent in `agent.py`:**

   ```python
   from utils.tools import addition, your_custom_tool

   workflow = create_react_agent(
       # ... other parameters
       tools=[addition, your_custom_tool],
   )
   ```
Edit the `system_prompt` in `agent.py` to customize your agent's behavior:
```python
system_prompt = """Your custom system prompt here.

Define how your agent should behave, what it can do, and how it should respond.
"""
```

- **Model not found**: Ensure your model is downloaded with Ollama or your API key is set correctly
- **Import errors**: Make sure all dependencies are installed with `pip install -r requirements.txt`
- **Port conflicts**: Change the port in your configuration if 8000 or 8123 are already in use