# Python: Add A2A server sample #4528

**Open** · **giles17** wants to merge 2 commits into `microsoft:main` from `giles17:python-a2a-server`
+709 −25
# A2A Agent Examples

This sample demonstrates how to host and consume agents using the [A2A (Agent2Agent) protocol](https://a2a-protocol.org/latest/) with the `agent_framework` package. There are two runnable entry points:

| Run this file | To... |
|---------------|-------|
| **[`a2a_server.py`](a2a_server.py)** | Host an Agent Framework agent as an A2A-compliant server. |
| **[`agent_with_a2a.py`](agent_with_a2a.py)** | Connect to an A2A server and send requests (non-streaming and streaming). |

The remaining files are supporting modules used by the server:

| File | Description |
|------|-------------|
| [`agent_definitions.py`](agent_definitions.py) | Agent and AgentCard factory definitions for the invoice, policy, and logistics agents. |
| [`agent_executor.py`](agent_executor.py) | Bridges the a2a-sdk `AgentExecutor` interface to Agent Framework agents. |
| [`invoice_data.py`](invoice_data.py) | Mock invoice data and tool functions for the invoice agent. |
| [`a2a_server.http`](a2a_server.http) | REST Client requests for testing the server directly from VS Code. |

## Environment Variables

Make sure to set the following environment variables before running the examples:

### Required (Server)

- `AZURE_AI_PROJECT_ENDPOINT` — Your Azure AI Foundry project endpoint
- `AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME` — Model deployment name (e.g. `gpt-4o`)

### Required (Client)

- `A2A_AGENT_HOST` — URL of the A2A server (e.g. `http://localhost:5001/`)

## Quick Start

All commands below should be run from this directory:

```powershell
cd python/samples/04-hosting/a2a
```

### 1. Start the A2A Server

Pick an agent type and start the server (each in its own terminal):

```powershell
uv run python a2a_server.py --agent-type invoice --port 5000
uv run python a2a_server.py --agent-type policy --port 5001
uv run python a2a_server.py --agent-type logistics --port 5002
```

You can run one agent or all three — each listens on its own port.

### 2. Run the A2A Client

In a separate terminal (from the same directory), point the client at a running server:

```powershell
$env:A2A_AGENT_HOST = "http://localhost:5001/"
uv run python agent_with_a2a.py
```
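Under the hood, the client and server exchange JSON-RPC 2.0 messages. A minimal sketch of building a `message/send` request body — the field shape matches the requests in `a2a_server.http`, but the helper name `build_message_send` is illustrative and not part of the sample or the a2a-sdk:

```python
# Build a JSON-RPC 2.0 "message/send" request body for an A2A agent.
# Sketch only: field names mirror the A2A protocol's message/send method.
import json


def build_message_send(text: str, request_id: str = "1", message_id: str = "msg_1") -> dict:
    """Return the JSON-RPC envelope an A2A server expects for a user text message."""
    return {
        "id": request_id,
        "jsonrpc": "2.0",
        "method": "message/send",
        "params": {
            "message": {
                "kind": "message",
                "role": "user",
                "messageId": message_id,
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }


if __name__ == "__main__":
    print(json.dumps(build_message_send("Show me all invoices for Contoso"), indent=2))
```

POSTing this body to a running server's root URL (as the `.http` file below does) is equivalent to what the client sample sends on your behalf.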
**[`a2a_server.http`](a2a_server.http)**: REST Client requests for exercising each server:

```http
### Each A2A agent is available at a different host address
@hostInvoice = http://localhost:5000
@hostPolicy = http://localhost:5001
@hostLogistics = http://localhost:5002

### Query agent card for the invoice agent
GET {{hostInvoice}}/.well-known/agent.json

### Send a message to the invoice agent
POST {{hostInvoice}}
Content-Type: application/json

{
  "id": "1",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "role": "user",
      "messageId": "msg_1",
      "parts": [
        {
          "kind": "text",
          "text": "Show me all invoices for Contoso"
        }
      ]
    }
  }
}

### Query agent card for the policy agent
GET {{hostPolicy}}/.well-known/agent.json

### Send a message to the policy agent
POST {{hostPolicy}}
Content-Type: application/json

{
  "id": "2",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "role": "user",
      "messageId": "msg_2",
      "parts": [
        {
          "kind": "text",
          "text": "What is the policy for short shipments?"
        }
      ]
    }
  }
}

### Query agent card for the logistics agent
GET {{hostLogistics}}/.well-known/agent.json

### Send a message to the logistics agent
POST {{hostLogistics}}
Content-Type: application/json

{
  "id": "3",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "role": "user",
      "messageId": "msg_3",
      "parts": [
        {
          "kind": "text",
          "text": "What is the status for SHPMT-SAP-001?"
        }
      ]
    }
  }
}
```
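On the receiving side, the server's executor ultimately needs the user's text out of this envelope. A hedged sketch of that extraction — the function name `extract_user_text` is illustrative and not part of the a2a-sdk; it assumes the request shape shown above:

```python
def extract_user_text(request: dict) -> str:
    """Pull the concatenated text parts out of a message/send JSON-RPC body."""
    message = request["params"]["message"]
    # Concatenate text parts; non-text parts (files, data) are skipped.
    return " ".join(part["text"] for part in message["parts"] if part.get("kind") == "text")


# One of the request bodies from a2a_server.http, as a Python literal:
sample = {
    "id": "1",
    "jsonrpc": "2.0",
    "method": "message/send",
    "params": {
        "message": {
            "kind": "message",
            "role": "user",
            "messageId": "msg_1",
            "parts": [{"kind": "text", "text": "Show me all invoices for Contoso"}],
        }
    },
}
```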
**[`a2a_server.py`](a2a_server.py)**: the A2A server entry point:

```python
# Copyright (c) Microsoft. All rights reserved.

import argparse
import os
import sys

import uvicorn
from a2a.server.apps.jsonrpc.starlette_app import A2AStarletteApplication
from a2a.server.request_handlers.default_request_handler import DefaultRequestHandler
from a2a.server.tasks.inmemory_task_store import InMemoryTaskStore
from agent_definitions import AGENT_CARD_FACTORIES, AGENT_FACTORIES
from agent_executor import AgentFrameworkExecutor
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

"""
A2A Server Sample — Host an Agent Framework agent as an A2A endpoint

This sample creates a Python-based A2A-compliant server that wraps an Agent
Framework agent. The server uses the a2a-sdk's Starlette application to handle
JSON-RPC requests and serves the AgentCard at /.well-known/agent.json.

Three agent types are available:
- invoice — Answers invoice queries using mock data and function tools.
- policy — Returns a fixed policy response.
- logistics — Returns a fixed logistics response.

Usage:
    uv run python a2a_server.py --agent-type policy --port 5001
    uv run python a2a_server.py --agent-type invoice --port 5000
    uv run python a2a_server.py --agent-type logistics --port 5002

Environment variables:
    AZURE_AI_PROJECT_ENDPOINT — Your Azure AI Foundry project endpoint
    AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME — Model deployment name (e.g. gpt-4o)
"""


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="A2A Agent Server")
    parser.add_argument(
        "--agent-type",
        choices=["invoice", "policy", "logistics"],
        default="policy",
        help="Type of agent to host (default: policy)",
    )
    parser.add_argument(
        "--host",
        default="localhost",
        help="Host to bind to (default: localhost)",
    )
    parser.add_argument(
        "--port",
        type=int,
        default=5001,
        help="Port to listen on (default: 5001)",
    )
    return parser.parse_args()


def main() -> None:
    args = parse_args()

    # Validate environment
    project_endpoint = os.getenv("AZURE_AI_PROJECT_ENDPOINT")
    deployment_name = os.getenv("AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME")

    if not project_endpoint:
        print("Error: AZURE_AI_PROJECT_ENDPOINT environment variable is not set.")
        sys.exit(1)
    if not deployment_name:
        print("Error: AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME environment variable is not set.")
        sys.exit(1)

    # Create the LLM client
    credential = AzureCliCredential()
    client = AzureOpenAIResponsesClient(
        project_endpoint=project_endpoint,
        deployment_name=deployment_name,
        credential=credential,
    )

    # Create the Agent Framework agent for the chosen type
    agent_factory = AGENT_FACTORIES[args.agent_type]
    agent = agent_factory(client)

    # Build the A2A server components
    url = f"http://{args.host}:{args.port}/"
    agent_card = AGENT_CARD_FACTORIES[args.agent_type](url)
    executor = AgentFrameworkExecutor(agent)
    task_store = InMemoryTaskStore()
    request_handler = DefaultRequestHandler(
        agent_executor=executor,
        task_store=task_store,
    )

    a2a_app = A2AStarletteApplication(
        agent_card=agent_card,
        http_handler=request_handler,
    )

    print(f"Starting A2A server: {agent_card.name}")
    print(f"  Agent type : {args.agent_type}")
    print(f"  Listening  : {url}")
    print(f"  Agent card : {url}.well-known/agent.json")
    print()

    uvicorn.run(
        a2a_app.build(),
        host=args.host,
        port=args.port,
    )


if __name__ == "__main__":
    main()
```
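The `AgentFrameworkExecutor` and agent factories live in the supporting modules and are not shown in this diff. To get a feel for the two endpoints the server exposes without any Azure credentials, here is a stdlib-only stub that serves an agent card at `/.well-known/agent.json` and echoes a JSON-RPC envelope on POST. Everything in it (class name, card fields, `serve_in_background`) is illustrative; this is a test double, not the a2a-sdk:

```python
# Minimal stand-in for an A2A-style server using only the standard library.
# Sketch only: useful for exercising clients; the real sample uses
# A2AStarletteApplication from the a2a-sdk instead.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative agent-card fields; a real AgentCard has many more.
AGENT_CARD = {
    "name": "policy-agent",
    "url": "http://localhost:5001/",
    "capabilities": {"streaming": False},
}


class StubA2AHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the agent card at the well-known discovery path.
        if self.path == "/.well-known/agent.json":
            self._send_json(AGENT_CARD)
        else:
            self.send_error(404)

    def do_POST(self):
        # Echo a minimal JSON-RPC result for any message/send request.
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        self._send_json({"jsonrpc": "2.0", "id": request.get("id"), "result": {"kind": "message"}})

    def _send_json(self, payload: dict) -> None:
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


def serve_in_background(port: int = 0):
    """Start the stub on an ephemeral port; returns (server, bound_port)."""
    server = HTTPServer(("localhost", port), StubA2AHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Pointing the `a2a_server.http` requests (or `agent_with_a2a.py`) at the stub's port exercises the same discovery and message flow the real server handles.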