53 changes: 38 additions & 15 deletions python/samples/04-hosting/a2a/README.md
# A2A Agent Examples

This sample demonstrates how to host and consume agents using the [A2A (Agent2Agent) protocol](https://a2a-protocol.org/latest/) with the `agent_framework` package. There are two runnable entry points:

By default the A2AAgent waits for the remote agent to finish before returning (`background=False`), so long-running A2A tasks are handled transparently. For advanced scenarios where you need to poll or resubscribe to in-progress tasks using continuation tokens, see the [background responses sample](../../02-agents/background_responses.py).
| Run this file | To... |
|---------------|-------|
| **[`a2a_server.py`](a2a_server.py)** | Host an Agent Framework agent as an A2A-compliant server. |
| **[`agent_with_a2a.py`](agent_with_a2a.py)** | Connect to an A2A server and send requests (non-streaming and streaming). |


The remaining files are supporting modules used by the server:

| File | Description |
|------|-------------|
| [`agent_definitions.py`](agent_definitions.py) | Agent and AgentCard factory definitions for invoice, policy, and logistics agents. |
| [`agent_executor.py`](agent_executor.py) | Bridges the a2a-sdk `AgentExecutor` interface to Agent Framework agents. |
| [`invoice_data.py`](invoice_data.py) | Mock invoice data and tool functions for the invoice agent. |
| [`a2a_server.http`](a2a_server.http) | REST Client requests for testing the server directly from VS Code. |

## Environment Variables

Make sure to set the following environment variables before running the examples:

### Required (Server)
- `AZURE_AI_PROJECT_ENDPOINT` — Your Azure AI Foundry project endpoint
- `AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME` — Model deployment name (e.g. `gpt-4o`)

### Required (Client)
- `A2A_AGENT_HOST` — URL of the A2A server (e.g. `http://localhost:5001/`)
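For example, in PowerShell (the values below are placeholders; substitute your own endpoint and deployment name):

```powershell
# Placeholder values: replace with your own Azure AI Foundry endpoint and deployment
$env:AZURE_AI_PROJECT_ENDPOINT = "https://my-project.services.ai.azure.com/api/projects/my-project"
$env:AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME = "gpt-4o"
$env:A2A_AGENT_HOST = "http://localhost:5001/"
```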

## Quick Start

All commands below should be run from this directory:

```powershell
cd python/samples/04-hosting/a2a
```

### 1. Start the A2A Server

Pick an agent type and start the server (each in its own terminal):

```powershell
uv run python a2a_server.py --agent-type invoice --port 5000
uv run python a2a_server.py --agent-type policy --port 5001
uv run python a2a_server.py --agent-type logistics --port 5002
```

You can run one agent or all three — each listens on its own port.

### 2. Run the A2A Client

In a separate terminal (from the same directory), point the client at a running server:

```powershell
# Point the client at the policy agent (port 5001)
$env:A2A_AGENT_HOST = "http://localhost:5001/"
uv run python agent_with_a2a.py
```
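Under the hood, an A2A `message/send` request is plain JSON-RPC 2.0. As a rough illustration (not the client's actual implementation), the snippet below builds a request body in Python; the structure mirrors the requests in [`a2a_server.http`](a2a_server.http), and the helper name is our own:

```python
import json
import uuid


def build_message_send(text: str, request_id: str = "1") -> str:
    """Build a JSON-RPC 2.0 message/send request body for an A2A server.

    Illustrative only: the A2A client in agent_framework constructs
    these requests for you.
    """
    payload = {
        "id": request_id,
        "jsonrpc": "2.0",
        "method": "message/send",
        "params": {
            "message": {
                "kind": "message",
                "role": "user",
                # Message IDs just need to be unique per message
                "messageId": f"msg_{uuid.uuid4().hex[:8]}",
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }
    return json.dumps(payload)


body = build_message_send("Show me all invoices for Contoso")
print(body)
```

POSTing this body (with `Content-Type: application/json`) to the server root is exactly what the requests in `a2a_server.http` do.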
82 changes: 82 additions & 0 deletions python/samples/04-hosting/a2a/a2a_server.http
### Each A2A agent is available at a different host address
@hostInvoice = http://localhost:5000
@hostPolicy = http://localhost:5001
@hostLogistics = http://localhost:5002

### Query agent card for the invoice agent
GET {{hostInvoice}}/.well-known/agent.json

### Send a message to the invoice agent
POST {{hostInvoice}}
Content-Type: application/json

{
  "id": "1",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "role": "user",
      "messageId": "msg_1",
      "parts": [
        {
          "kind": "text",
          "text": "Show me all invoices for Contoso"
        }
      ]
    }
  }
}

### Query agent card for the policy agent
GET {{hostPolicy}}/.well-known/agent.json

### Send a message to the policy agent
POST {{hostPolicy}}
Content-Type: application/json

{
  "id": "2",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "role": "user",
      "messageId": "msg_2",
      "parts": [
        {
          "kind": "text",
          "text": "What is the policy for short shipments?"
        }
      ]
    }
  }
}

### Query agent card for the logistics agent
GET {{hostLogistics}}/.well-known/agent.json

### Send a message to the logistics agent
POST {{hostLogistics}}
Content-Type: application/json

{
  "id": "3",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "role": "user",
      "messageId": "msg_3",
      "parts": [
        {
          "kind": "text",
          "text": "What is the status for SHPMT-SAP-001?"
        }
      ]
    }
  }
}
120 changes: 120 additions & 0 deletions python/samples/04-hosting/a2a/a2a_server.py
# Copyright (c) Microsoft. All rights reserved.

import argparse
import os
import sys

import uvicorn
from a2a.server.apps.jsonrpc.starlette_app import A2AStarletteApplication
from a2a.server.request_handlers.default_request_handler import DefaultRequestHandler
from a2a.server.tasks.inmemory_task_store import InMemoryTaskStore
from agent_definitions import AGENT_CARD_FACTORIES, AGENT_FACTORIES
from agent_executor import AgentFrameworkExecutor
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

"""
A2A Server Sample — Host an Agent Framework agent as an A2A endpoint

This sample creates a Python-based A2A-compliant server that wraps an Agent
Framework agent. The server uses the a2a-sdk's Starlette application to handle
JSON-RPC requests and serves the AgentCard at /.well-known/agent.json.

Three agent types are available:
- invoice — Answers invoice queries using mock data and function tools.
- policy — Returns a fixed policy response.
- logistics — Returns a fixed logistics response.

Usage:
uv run python a2a_server.py --agent-type policy --port 5001
uv run python a2a_server.py --agent-type invoice --port 5000
uv run python a2a_server.py --agent-type logistics --port 5002

Environment variables:
AZURE_AI_PROJECT_ENDPOINT — Your Azure AI Foundry project endpoint
AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME — Model deployment name (e.g. gpt-4o)
"""


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="A2A Agent Server")
    parser.add_argument(
        "--agent-type",
        choices=["invoice", "policy", "logistics"],
        default="policy",
        help="Type of agent to host (default: policy)",
    )
    parser.add_argument(
        "--host",
        default="localhost",
        help="Host to bind to (default: localhost)",
    )
    parser.add_argument(
        "--port",
        type=int,
        default=5001,
        help="Port to listen on (default: 5001)",
    )
    return parser.parse_args()


def main() -> None:
    args = parse_args()

    # Validate environment
    project_endpoint = os.getenv("AZURE_AI_PROJECT_ENDPOINT")
    deployment_name = os.getenv("AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME")

    if not project_endpoint:
        print("Error: AZURE_AI_PROJECT_ENDPOINT environment variable is not set.")
        sys.exit(1)
    if not deployment_name:
        print("Error: AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME environment variable is not set.")
        sys.exit(1)

    # Create the LLM client
    credential = AzureCliCredential()
    client = AzureOpenAIResponsesClient(
        project_endpoint=project_endpoint,
        deployment_name=deployment_name,
        credential=credential,
    )

    # Create the Agent Framework agent for the chosen type
    agent_factory = AGENT_FACTORIES[args.agent_type]
    agent = agent_factory(client)

    # Build the A2A server components
    url = f"http://{args.host}:{args.port}/"
    agent_card = AGENT_CARD_FACTORIES[args.agent_type](url)
    executor = AgentFrameworkExecutor(agent)
    task_store = InMemoryTaskStore()
    request_handler = DefaultRequestHandler(
        agent_executor=executor,
        task_store=task_store,
    )

    a2a_app = A2AStarletteApplication(
        agent_card=agent_card,
        http_handler=request_handler,
    )

    print(f"Starting A2A server: {agent_card.name}")
    print(f"  Agent type : {args.agent_type}")
    print(f"  Listening  : {url}")
    print(f"  Agent card : {url}.well-known/agent.json")
    print()

    uvicorn.run(
        a2a_app.build(),
        host=args.host,
        port=args.port,
    )


if __name__ == "__main__":
    main()