A hands-on workshop for deploying agents to LangSmith Deployment. This repo includes several agent examples built with LangChain and LangGraph, plus examples for calling deployed agents via the LangGraph SDK and RemoteGraph.
- uv — Fast Python package installer and resolver
- A LangSmith account (free to sign up)
- For Cloud deployment: a GitHub account and repo (public or private)
```bash
uv sync
```

This creates a virtual environment and installs all project dependencies.
Copy the example environment file and add your keys:
```bash
cp .env.example .env
```

Edit `.env` and set at least:

- `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` (for the agents)
- `LANGSMITH_API_KEY` (for tracing and deployment)
- `LANGSMITH_PROJECT` (optional; defaults to `langsmith-deployments-workshop`)
Start the local LangGraph API server with Studio:
```bash
uv run langgraph dev
```

This starts the server (default port 2024) and opens LangGraph Studio, where you can run and debug the agents.
To run the advanced setup with authentication, long-term memory store (semantic search), checkpointer TTL, custom HTTP app, and CORS:
```bash
uv run langgraph dev --config ./langgraph_advanced.json
```

This uses `langgraph_advanced.json`, which wires the `langchain_advanced` agent and the `advanced/` folder (auth, webapp). See Advanced setup below.
```
langsmith-deployments-workshop/
├── agents/                            # Agent implementations
│   ├── langchain_basic.py             # LangChain create_agent (calendar tools)
│   ├── langgraph_basic.py             # LangGraph StateGraph (calendar tools)
│   ├── langgraph_assistant.py         # Calendar + configurable model & system prompt (context)
│   ├── langchain_advanced.py          # Calendar + LTM (memories + documents), middleware, auth context
│   ├── langchain_remote_subagent.py   # Calendar subagent (make_graph + distributed tracing)
│   ├── langchain_remote_supervisor.py # Supervisor with calendar as RemoteGraph tool
│   ├── deepagents_basic.py            # Deep Agents (planning, subagents)
│   └── utils.py                       # Shared model config
├── advanced/                          # Advanced config (used with langgraph_advanced.json)
│   ├── auth.py                        # Bearer-token auth + add_owner access control
│   ├── webapp.py                      # Custom async routes: GET /hello, POST /invoke (mounted at /api/v1)
│   └── sample_ltm_document.json       # Example document shape for store (user_id > documents)
├── examples/                          # How to call agents
│   ├── client_sdk.py                  # LangGraph SDK (runs/stream)
│   └── remotegraph.py                 # RemoteGraph (agent as subgraph)
├── langgraph.json                     # Default LangGraph CLI config
├── langgraph_advanced.json            # Advanced config: auth, store, checkpointer, http, langchain_advanced
├── pyproject.toml
└── .env.example
```
All agents are wired in `langgraph.json` and exposed by the dev server:

| Graph ID | File | Description |
|---|---|---|
| `langchain_basic` | `agents/langchain_basic.py` | Calendar assistant via LangChain `create_agent` with `read_calendar` and `write_calendar`. |
| `langgraph_basic` | `agents/langgraph_basic.py` | Same calendar tools and prompt, implemented as a LangGraph `StateGraph` with a tool-calling loop. |
| `langgraph_assistant` | `agents/langgraph_assistant.py` | Calendar assistant with configurable context: `model_name` (`"openai"` \| `"anthropic"`, default `openai`) and optional `system_prompt` override. Studio can show a dropdown for the model and pre-fill the default prompt. |
| `langchain_advanced` | `agents/langchain_advanced.py` | Advanced config only: calendar + long-term memory (memories + documents), pre-model middleware (inject preferences), `user_id` context. Use with `langgraph_advanced.json`. |
| `langchain_remote_subagent` | `agents/langchain_remote_subagent.py` | Calendar agent exposed via `make_graph` with distributed tracing: runs inside `ls.tracing_context` when a parent trace is present (e.g. when called by the supervisor). |
| `langchain_remote_supervisor` | `agents/langchain_remote_supervisor.py` | Supervisor agent with the calendar as a tool: uses a module-level `RemoteGraph` pointing at `langchain_remote_subagent`. Uses `distributed_tracing=True` so subagent runs appear under the same trace. |
| `deepagents_basic` | `agents/deepagents_basic.py` | Deep Agents calendar assistant with planning (todos) and subagent support. |
Use these graph IDs when calling the API (e.g. in `examples/client_sdk.py` or with `RemoteGraph`).
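For instance, a minimal SDK call against the local dev server might look like this (a sketch: the message text and the `build_input` helper are illustrative; the threadless streaming pattern follows the approach described for `examples/client_sdk.py`):

```python
import asyncio

GRAPH_ID = "langchain_basic"  # any graph ID from the table above

def build_input(text: str) -> dict:
    """Messages-style input accepted by the calendar agents (illustrative helper)."""
    return {"messages": [{"role": "user", "content": text}]}

async def main() -> None:
    from langgraph_sdk import get_client  # installed with the project deps

    # http://localhost:2024 is the `langgraph dev` default; point this at
    # your deployment URL (and pass api_key=...) once deployed.
    client = get_client(url="http://localhost:2024")

    # First argument None = threadless run; stream updates as they arrive.
    async for chunk in client.runs.stream(
        None, GRAPH_ID,
        input=build_input("What's on my calendar today?"),
        stream_mode="updates",
    ):
        print(chunk.event, chunk.data)

# Uncomment with a running server:
# asyncio.run(main())
```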
The advanced configuration (langgraph_advanced.json) adds authentication, a persistent store with semantic search, checkpointer TTL, and a custom HTTP app. Run it with:
```bash
uv run langgraph dev --config ./langgraph_advanced.json
```

| Feature | Config key | Description |
|---|---|---|
| Graph | `graphs.langchain_advanced` | Single agent: `agents/langchain_advanced.py:agent`. |
| Auth | `auth.path` | `advanced/auth.py:auth` — Bearer-token auth; the identity is used as `user_id` in context. |
| Store | `store` | Indexed store (e.g. `openai:text-embedding-3-small`, 1536 dims) for semantic search; TTL for expiry. |
| Checkpointer | `checkpointer` | TTL for thread checkpoints (e.g. delete after 43200 min). |
| HTTP | `http` | Custom app `advanced/webapp.py:app`, CORS, configurable headers, `mount_prefix` (e.g. `/api/v1`). |
| File | Role |
|---|---|
| `auth.py` | Auth — `Auth()` and `@auth.authenticate`; validates `Authorization: Bearer <token>` against a toy `VALID_TOKENS` map and returns an identity (e.g. `user1`, `user2`). The `@auth.on` handler `add_owner` restricts access by owner unless the user is a Studio user. |
| `webapp.py` | Custom HTTP app — async FastAPI routes mounted under `mount_prefix`; see Custom routes below. |
| `sample_ltm_document.json` | Sample LTM document — example JSON for a document you add manually to the store: namespace `(user_id, "documents")`, value with a `"document"` (or `"documents"`) key. Used by the `search_memory` tool. |
With `mount_prefix: "/api/v1"`, the custom app in `advanced/webapp.py` exposes these routes (all async, using the LangGraph SDK `get_client()` for in-process graph calls):
| Method | Path | Description |
|---|---|---|
| GET | `/api/v1/hello` | Health-style endpoint; returns `{"Hello": "World"}`. |
| POST | `/api/v1/invoke` | Invokes the `langchain_advanced` graph with one user message and returns the assistant's last message content. |
**POST /api/v1/invoke**

- Body: `{"message": "<user message>", "user_id": "<optional>"}`. If `user_id` is omitted, the handler uses the `x-user-id` header or `"default"`.
- Headers: send `Authorization: Bearer <token>` (required when auth is enabled) and optionally `x-user-id` for context.
- Response: `{"content": "<assistant reply>", "graph_id": "langchain_advanced"}`. On graph failure, returns 502 with a detail message.
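A minimal client for this route using only the standard library (a sketch; the localhost URL and the toy `user1-token` follow the dev setup described in this README):

```python
import json
import urllib.request

BASE_URL = "http://localhost:2024"  # local dev server; swap in your deployment URL

def build_invoke_request(message: str, user_id: str = "user1",
                         token: str = "user1-token") -> urllib.request.Request:
    """Build the POST /api/v1/invoke request described above."""
    body = json.dumps({"message": message, "user_id": user_id}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/invoke",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # required when auth is enabled
            "x-user-id": user_id,                # optional fallback for context
        },
    )

# With the advanced server running, send the request and print the reply:
# with urllib.request.urlopen(build_invoke_request("What's on my calendar?")) as resp:
#     print(json.load(resp)["content"])
```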
Example: run the advanced server, then call the custom invoke route:

```bash
uv run langgraph dev --config ./langgraph_advanced.json   # terminal 1
uv run python examples/invoke_advanced_agent.py           # terminal 2
```

`examples/invoke_advanced_agent.py` sends a POST to `http://localhost:2024/api/v1/invoke` with `Authorization: Bearer user1-token` and `x-user-id: user1`.
Calendar agent with long-term memory (LTM) and per-user context:
- Context — `Context(user_id: str)`. The server injects this (e.g. from the auth identity); all store access is namespaced by `user_id`.
- Tools:
  - `read_calendar` / `write_calendar` — same as the basic calendar (in-memory events).
  - `search_memory` — semantic search over the documents namespace `(user_id, "documents")`. Returns matching document content, or `"No documents available."` if none. Use for user-added documents (see `advanced/sample_ltm_document.json`).
  - `add_memory` — writes to the memories namespace `(user_id, "memories")` with `{"memory": content}`. Used for preferences/facts the user wants the agent to remember.
- Pre-model middleware — `inject_memory_preferences` (async): before each model call, runs `store.asearch` on `(user_id, "memories")` and injects a human message like "Current user preferences: …" so the model sees stored preferences without a separate tool call.
- Store layout — two namespaces per user:
  - `(user_id, "memories")` — filled by `add_memory` and read by the middleware; holds preferences/facts.
  - `(user_id, "documents")` — filled manually (or by your own API); searched by `search_memory`.
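To make the layout concrete, here is a plain-dict mock of the two namespaces (a sketch only; the deployed agent uses the server's persistent, semantically indexed store, and the sample contents are invented):

```python
# Mock of the per-user store layout: (user_id, namespace) -> {key: value}.
# Illustration only; the real agent uses the deployment's persistent store.
store: dict[tuple[str, str], dict[str, dict]] = {}

def add_memory(user_id: str, key: str, content: str) -> None:
    """Mimics the add_memory tool: {"memory": ...} under (user_id, "memories")."""
    store.setdefault((user_id, "memories"), {})[key] = {"memory": content}

def add_document(user_id: str, key: str, text: str) -> None:
    """Mimics a manual document upload under (user_id, "documents")."""
    store.setdefault((user_id, "documents"), {})[key] = {"document": text}

# Sample contents are invented for illustration.
add_memory("user1", "pref-1", "Prefers morning meetings.")
add_document("user1", "doc-001", "Q3 planning notes.")
```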
Calling the advanced API: send `Authorization: Bearer user1-token` (or `user2-token`) so the server sets context with that user's identity as `user_id`. Use the graph ID `langchain_advanced` in requests.
Adding documents: add items to the store with namespace `(user_id, "documents")`, any string key (e.g. `doc-001`), and value `{"document": "Your text content here."}`. See `advanced/sample_ltm_document.json` for the exact shape.
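One way to add such a document programmatically is via the SDK's store client (a sketch: it assumes the advanced server is running locally and that the SDK exposes a `store.put_item` method; the document text is invented):

```python
import asyncio

def document_item(user_id: str, key: str, text: str) -> dict:
    """Build an item matching the shape in advanced/sample_ltm_document.json."""
    return {
        "namespace": (user_id, "documents"),  # searched by the search_memory tool
        "key": key,                           # any string, e.g. "doc-001"
        "value": {"document": text},
    }

async def main() -> None:
    from langgraph_sdk import get_client

    item = document_item("user1", "doc-001", "Team standup is weekdays at 9:30.")
    client = get_client(url="http://localhost:2024")
    await client.store.put_item(
        list(item["namespace"]), key=item["key"], value=item["value"]
    )

# Uncomment with the advanced server running (auth headers may also be needed):
# asyncio.run(main())
```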
You can deploy agents to LangSmith in several ways: from the UI (Cloud), with the CLI (`langgraph build` / `langgraph deploy`), or via the Control Plane API (e.g. for CI/CD or custom registries).
Before deploying, ensure you have:
- A runnable graph — e.g. `./agents/langgraph_basic.py:agent` (or any entry in `langgraph.json`).
- Dependencies — this project uses `pyproject.toml`; the config points to `"."` so the CLI installs the current package.
- Configuration — `langgraph.json` at the repo root with:
  - `graphs`: map of graph ID → module path (e.g. `"langgraph_basic": "./agents/langgraph_basic.py:agent"`).
  - `dependencies`: e.g. `["."]`.
  - `env`: path to `.env` or an env mapping.
  - Optional: `python_version`, `image_distro`, etc.
Example `langgraph.json` (from this repo):

```json
{
  "graphs": {
    "langchain_basic": "./agents/langchain_basic.py:agent",
    "langgraph_basic": "./agents/langgraph_basic.py:agent",
    "langgraph_assistant": "./agents/langgraph_assistant.py:agent",
    "deepagents_basic": "./agents/deepagents_basic.py:agent",
    "langchain_remote_subagent": "./agents/langchain_remote_subagent.py:make_graph",
    "langchain_remote_supervisor": "./agents/langchain_remote_supervisor.py:agent"
  },
  "env": ".env",
  "python_version": "3.11",
  "dependencies": ["."],
  "image_distro": "wolfi"
}
```

Deploy from the LangSmith UI by connecting a GitHub repository:
1. Open LangSmith → Deployments.
2. Click + New Deployment.
3. Import from GitHub and authorize the `hosted-langserve` GitHub app (one-time per org/account).
4. Choose the repo and branch, and set:
   - Config path: e.g. `langgraph.json` (path from the repo root).
   - Deployment type: Development or Production.
   - Name, and optionally Shareable through Studio.
5. Configure environment variables and secrets (e.g. `OPENAI_API_KEY`, `LANGSMITH_API_KEY`).
6. Submit; the deployment is built and run from the linked branch.
Benefits: No Docker or CLI on your machine; automatic updates on push to the branch (if enabled).
See Deploy to Cloud and Deployment quickstart.
Use the CLI to build a Docker image and optionally deploy it.
Local build:
```bash
# Build Docker image
uv run langgraph build -t my-agent:latest

# Push to your container registry (Docker Hub, ECR, GCR, ACR, etc.)
docker push my-agent:latest
```

Deploy to LangSmith Cloud in one step:
```bash
# Build and deploy to LangSmith (uses LANGGRAPH_HOST_API_KEY or LANGSMITH_API_KEY from .env)
uv run langgraph deploy

# With options
uv run langgraph deploy --name my-calendar-agent
uv run langgraph deploy --deployment-id <existing-id>   # Update an existing deployment
```

`langgraph deploy` builds the image, pushes it to a managed registry, and creates/updates the deployment. See LangGraph CLI — deploy.
When to use: You want a single command from your machine to Cloud, or you already build images and want to push to a registry and then use the Control Plane API or UI.
For automation (CI/CD, custom tooling), use the Control Plane API to create and manage deployments.
- Cloud: create deployments from GitHub (e.g. `source: "github"`) or from a Docker image (e.g. after `langgraph build` and a push to a registry).
- Self-hosted / Hybrid: create deployments with `source: "external_docker"` and your image URI.
Example flow:

1. Create deployment: `POST /v2/deployments` with the right `source` and `source_config` (and optional `source_revision_config`, `secrets`).
2. Poll revision status: `GET /v2/deployments/{deployment_id}/revisions/{revision_id}` until the status is `DEPLOYED`.
Authentication uses headers such as `X-Api-Key` (LangSmith API key) and `X-Tenant-Id` (workspace ID). See the Control Plane API reference and Create Deployment.
Always validate the graph locally before deploying:
```bash
uv run langgraph dev
```

This will:
- Start the LangGraph API server (default port 2024).
- Open LangGraph Studio so you can run the graph and inspect state.
- Use your local `langgraph.json` and `.env`.
If the graph works in Studio, deployment to LangSmith will usually succeed. See LangGraph CLI — dev.
Supervisor + subagent (same deployment): The langchain_remote_supervisor graph calls langchain_remote_subagent in-process (same server). With a single worker, that can deadlock: the supervisor run blocks waiting for the calendar tool, while the subagent run is queued on the same worker and never starts. Run the dev server with at least two jobs per worker so the subagent run can execute while the supervisor is waiting:
```bash
uv run langgraph dev --n-jobs-per-worker 2
```

In production you typically have multiple workers, so this in-process pattern does not deadlock.
Once deployed, you can use the deployment URL (and API key if required) with:
- LangGraph SDK — `get_client` / `get_sync_client` and `client.runs.stream(...)` (see `examples/client_sdk.py`; change `url` to your deployment URL).
- RemoteGraph — use the deployed graph as a node in another graph (see `examples/remotegraph.py`; set `url` to your deployment URL).
- REST API — HTTP calls to `/runs/stream`, `/threads`, etc.
- LangGraph Studio — open the deployment in Studio from the LangSmith UI (if the deployment is shareable or you have access).
- Secrets: set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, and other keys as secrets in the deployment (UI or API), not as plain env vars.
- LangSmith: `LANGSMITH_TRACING`, `LANGSMITH_PROJECT`, etc. can be set in the deployment so traces go to your project.
- Database/cache: for advanced setups, see environment variables (e.g. custom Postgres/Redis).
```mermaid
graph TD
    A[Agent implementation] --> B[langgraph.json + dependencies]
    B --> C[Test locally: langgraph dev]
    C --> D{Works?}
    D -->|No| E[Fix and retest]
    E --> C
    D -->|Yes| F[Choose deployment path]
    F --> G[Cloud LangSmith]
    F --> H[Self-hosted / Hybrid]
    subgraph Cloud
        G --> I[UI: Connect GitHub repo]
        G --> J[CLI: langgraph deploy]
        G --> K[API: Control Plane create deployment]
    end
    subgraph SelfHosted
        H --> L[Build: langgraph build]
        L --> M[Push image to registry]
        M --> N[UI or Control Plane API]
    end
    I --> O[Agent ready]
    J --> O
    K --> O
    N --> O
    O --> P[Connect via SDK / RemoteGraph / REST / Studio]
```
- Test locally first — run `langgraph dev` and verify the graph in Studio.
- Version images — use explicit tags (e.g. `my-agent:v1.2.0`) when building with `langgraph build`.
- Use secrets for keys — never commit API keys; configure them as deployment secrets.
- Monitor — use LangSmith observability (traces, dashboards, alerts) for deployed agents.
- `examples/client_sdk.py` — calls an agent (e.g. `langchain_basic`) via the LangGraph SDK with a threadless run and streaming. Point `url` at your deployment to test a deployed agent.
- `examples/remotegraph.py` — composes a parent graph that calls a child graph via `RemoteGraph`; change `url` and `graph_name` to use a deployed graph.
- `examples/invoke_advanced_agent.py` — calls the advanced server's custom route `POST /api/v1/invoke` with auth and `x-user-id`; use with `langgraph dev --config ./langgraph_advanced.json` (see Custom routes).
- Supervisor + subagent — run the `langchain_remote_supervisor` graph in Studio (or via the SDK). It delegates calendar tasks to `langchain_remote_subagent` via a tool. Use `langgraph dev --n-jobs-per-worker 2` so the in-process subagent call does not deadlock (see Local Development & Testing).
Run against the local dev server:
```bash
uv run langgraph dev                    # in one terminal (use --n-jobs-per-worker 2 for supervisor)
uv run python examples/client_sdk.py
uv run python examples/remotegraph.py
```

- LangSmith Deployment — Overview and concepts
- Deploy to Cloud — Full Cloud setup
- Deployment quickstart — First deployment from GitHub
- LangGraph CLI — `langgraph dev`, `build`, `deploy`
- LangGraph CLI — deploy — One-step deploy to LangSmith
- Control Plane API — Create and manage deployments programmatically
This project is licensed under the MIT License — see the LICENSE file for details.
