This repository provides the backend blueprint for a LangGraph-based agent service architecture. It includes a LangGraph agent, a FastAPI service to serve it, a client to interact with the service, and a Streamlit app that uses the client to provide a chat interface for local testing purposes. The backend is hosted on Render.
This project offers a template for you to easily build and run your own agents using the LangGraph framework. It demonstrates a complete setup, from agent definition and backend service to user interface, making it easier to get started with LangGraph-based projects by providing a full, robust codebase for everything you'll need.
Run directly in Python:

```sh
# An OPENAI_API_KEY is required
echo 'OPENAI_API_KEY=your_openai_api_key' >> .env

# uv is recommended, but "pip install ." also works
pip install uv
uv sync --frozen
# Or install all the libraries at once with
pip install -r requirements.txt

# Edit the code to choose whether you want to test with Streamlit or the
# Swagger docs (comment out whichever of the two you don't need)

# "uv sync" creates .venv automatically
source .venv/bin/activate
python src/run_service.py

# In another shell
source .venv/bin/activate
streamlit run src/streamlit_app.py
```

- LangGraph Agent: A customizable agent built using the LangGraph framework.
- FastAPI Service: Serves the agent with both streaming and non-streaming endpoints.
- PostgreSQL Database Support: Uses PostgreSQL with an asynchronous engine to handle database operations efficiently.
- Docker Support: Simplifies deployment with a preconfigured Dockerfile.
- Render Hosting: Backend hosted on Render for streamlined deployment and scalability.
- Asynchronous Requests: Fully supports asynchronous processing for high-concurrency applications.
- Advanced Streaming: A novel approach to support both token-based and message-based streaming.
- Streamlit Interface: Provides a user-friendly chat interface for interacting with the agent.
- Multiple Agent Support: Run multiple agents in the service and call each by its URL path (see the sketch after this list).
- Asynchronous Design: Utilizes async/await for efficient handling of concurrent requests.
- Feedback Mechanism: Includes a star-based feedback system integrated with LangSmith.
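To make the endpoint layout concrete, here is a minimal sketch that calls a locally running service over plain HTTP using `httpx`. The port, agent name, and request schema are assumptions for illustration; check `src/service/service.py` and `src/schema/schema.py` for the actual contract.

```python
# Hedged sketch: call a locally running agent service over HTTP.
# The port (8080), agent name, and JSON schema are assumptions;
# see src/service/service.py and src/schema/schema.py for the real contract.
import httpx

BASE_URL = "http://localhost:8080"
payload = {"message": "What is LangGraph?"}

# Non-streaming: one request, one complete response.
resp = httpx.post(f"{BASE_URL}/research_assistant/invoke", json=payload, timeout=60)
print(resp.json())

# Streaming: consume chunks as the agent produces them.
with httpx.stream("POST", f"{BASE_URL}/research_assistant/stream", json=payload, timeout=60) as r:
    for line in r.iter_lines():
        if line:
            print(line)
```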
The repository is structured as follows:
- `src/agents/research_assistant.py`: Defines the main LangGraph agent
- `src/agents/models.py`: Configures available models based on ENV
- `src/agents/agents.py`: Mapping of all agents provided by the service
- `src/schema/schema.py`: Defines the protocol schema
- `src/service/service.py`: FastAPI service to serve the agents
- `src/client/client.py`: Client to interact with the agent service
- `src/streamlit_app.py`: Streamlit app providing a chat interface
- `docker`: Defines the container environment for the project
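For a feel of how the pieces fit together, a typical interaction through the bundled client might look like the sketch below. The class and method names here are assumptions for illustration; see `src/client/client.py` for the actual interface.

```python
# Hedged sketch of using the bundled client; AgentClient and its
# method names are assumptions taken for illustration.
from client import AgentClient

client = AgentClient(base_url="http://localhost:8080")  # port is an assumption

# One-shot request.
print(client.invoke("Tell me a brief history of FastAPI."))

# Token-by-token streaming.
for token in client.stream("Tell me a brief history of FastAPI."):
    print(token, end="", flush=True)
```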
AI agents are increasingly being built as more explicitly structured and tightly controlled Compound AI Systems, with careful attention to the cognitive architecture. At the time of this repo's creation, LangGraph seems like the most advanced open source framework for building such systems: it offers a high degree of control, supports features like concurrent execution, cycles in the graph, streaming results, and built-in observability, and it benefits from the rich ecosystem around LangChain.
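To ground terms like "cycles in the graph", here is a minimal, self-contained LangGraph sketch with a conditional cycle. It is a generic illustration of the framework, not the agent defined in `src/agents/research_assistant.py`.

```python
# Generic LangGraph example with a conditional cycle; illustrative only,
# not the agent defined in src/agents/research_assistant.py.
from typing import TypedDict

from langgraph.graph import END, StateGraph


class State(TypedDict):
    question: str
    draft: str
    revisions: int


def draft_answer(state: State) -> dict:
    # A real node would call an LLM here; this sketch fakes the work.
    n = state["revisions"] + 1
    return {"draft": f"Draft #{n} for: {state['question']}", "revisions": n}


def should_continue(state: State) -> str:
    # Loop back through the node until a revision budget is spent.
    return "draft" if state["revisions"] < 2 else END


graph = StateGraph(State)
graph.add_node("draft", draft_answer)
graph.set_entry_point("draft")
graph.add_conditional_edges("draft", should_continue)

app = graph.compile()
print(app.invoke({"question": "What is LangGraph?", "draft": "", "revisions": 0}))
```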
- Clone the repository:

  ```sh
  git clone https://github.com/abeenoch/base-network-agent.git
  cd base-network-agent
  ```
- Set up environment variables: Create a `.env` file in the root directory and add the following (a hedged sketch of how these keys can drive model selection follows this list):

  ```sh
  # Provide at least one LLM API key to enable the agent service

  # Optional, to enable OpenAI gpt-4o-mini
  OPENAI_API_KEY=your_openai_api_key

  # Optional, to enable LlamaGuard and Llama 3.1
  GROQ_API_KEY=your_groq_api_key

  # Optional, to enable Gemini 1.5 Flash
  # See: https://ai.google.dev/gemini-api/docs/api-key
  GOOGLE_API_KEY=your_gemini_key

  # Optional, to enable Claude 3 Haiku
  # See: https://docs.anthropic.com/en/api/getting-started
  ANTHROPIC_API_KEY=your_anthropic_key

  # Optional, to enable AWS Bedrock models (e.g. Haiku)
  # See: https://docs.aws.amazon.com/bedrock/latest/userguide/setting-up.html
  USE_AWS_BEDROCK=true

  # Optional, to enable simple header-based auth on the service
  AUTH_SECRET=any_string_you_choose

  # Optional, to enable OpenWeatherMap
  OPENWEATHERMAP_API_KEY=your_openweathermap_api_key

  # Optional, to enable LangSmith tracing
  LANGCHAIN_TRACING_V2=true
  LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
  LANGCHAIN_API_KEY=your_langchain_api_key
  LANGCHAIN_PROJECT=your_project

  # Optional, if MODE=dev, uvicorn will reload the server on file changes
  MODE=
  ```
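Only the keys you actually set matter. As a rough, hedged sketch of what env-driven model configuration can look like (this is the general pattern, not the actual contents of `src/agents/models.py`):

```python
# Hedged sketch of env-driven model selection; the aliases and providers
# below are assumptions, not the actual contents of src/agents/models.py.
import os


def available_models() -> dict[str, str]:
    """Map a model alias to its provider, based on which API keys are set."""
    models: dict[str, str] = {}
    if os.getenv("OPENAI_API_KEY"):
        models["gpt-4o-mini"] = "openai"
    if os.getenv("GROQ_API_KEY"):
        models["llama-3.1"] = "groq"
    if os.getenv("GOOGLE_API_KEY"):
        models["gemini-1.5-flash"] = "google"
    if os.getenv("ANTHROPIC_API_KEY"):
        models["claude-3-haiku"] = "anthropic"
    return models


if __name__ == "__main__":
    print(available_models())
```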
You can now run the agent service and the Streamlit app locally, either with Docker or just using Python. The Docker setup is recommended for simpler environment setup and immediate reloading of the services when you make changes to your code.
You can also run the agent service and the Streamlit app locally without Docker, just using a Python virtual environment.
- Create a virtual environment and install dependencies:

  ```sh
  pip install uv
  uv sync --frozen --extra dev
  source .venv/bin/activate
  ```

- Run the FastAPI server:

  ```sh
  python src/run_service.py
  ```

- In a separate terminal, run the Streamlit app:

  ```sh
  streamlit run src/streamlit_app.py
  ```

- Open your browser and navigate to the URL provided by Streamlit (usually `http://localhost:8501`).
To customize the agent for your own use case:
- Add your new agent to the `src/agents` directory. You can copy `research_assistant.py` or `chatbot.py` and modify it to change the agent's behavior and tools.
- Import and add your new agent to the `agents` dictionary in `src/agents/agents.py` (a sketch of this step follows the list). Your agent can be called by `/<your_agent_name>/invoke` or `/<your_agent_name>/stream`.
- Adjust the Streamlit interface in `src/streamlit_app.py` to match your agent's capabilities.
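As a rough illustration of that registration step, a hedged sketch is below; the import paths and the exact shape of the mapping are assumptions, and `my_new_agent` is a hypothetical module. Check `src/agents/agents.py` for the real code.

```python
# Hedged sketch of registering an agent in src/agents/agents.py.
# Import paths and dict shape are assumptions; my_new_agent is hypothetical.
from agents.research_assistant import research_assistant
from agents.my_new_agent import my_new_agent  # hypothetical new agent

# Each key becomes a URL path: /<key>/invoke and /<key>/stream.
agents = {
    "research_assistant": research_assistant,
    "my_new_agent": my_new_agent,
}
```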