Merged
34 changes: 34 additions & 0 deletions dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs
Original file line number Diff line number Diff line change
@@ -20,6 +20,19 @@ namespace Microsoft.Agents.AI;
/// <see cref="AIAgent"/> serves as the foundational class for implementing AI agents that can participate in conversations
/// and process user requests. An agent instance may participate in multiple concurrent conversations, and each conversation
/// may involve multiple agents working together.
/// <para>
/// <strong>Security considerations:</strong> An <see cref="AIAgent"/> orchestrates data flow across trust boundaries —
/// messages are sent to external AI services, context providers, chat history stores, and function tools. Agent Framework
/// passes messages through as-is without validation or sanitization. Developers must be aware that:
/// <list type="bullet">
/// <item><description>User-supplied messages may contain prompt injection attempts designed to manipulate LLM behavior.</description></item>
/// <item><description>LLM responses should be treated as untrusted output — they may contain hallucinations, malicious payloads (e.g., scripts, SQL),
/// or content influenced by indirect prompt injection. Always validate and sanitize LLM output before rendering in HTML, executing as code,
/// or using in database queries.</description></item>
/// <item><description>Messages with different roles carry different trust levels: <c>system</c> messages have the highest trust and must be developer-controlled;
/// <c>user</c>, <c>assistant</c>, and <c>tool</c> messages should be treated as untrusted.</description></item>
/// </list>
/// </para>
/// </remarks>
[DebuggerDisplay("{DebuggerDisplay,nq}")]
public abstract partial class AIAgent
@@ -165,6 +178,11 @@ public ValueTask<AgentSession> CreateSessionAsync(CancellationToken cancellation
/// This method enables saving conversation sessions to persistent storage,
/// allowing conversations to resume across application restarts or be migrated between
/// different agent instances. Use <see cref="DeserializeSessionAsync"/> to restore the session.
/// <para>
/// <strong>Security consideration:</strong> Serialized sessions may contain conversation content, session identifiers,
/// and other potentially sensitive data including PII. Ensure that serialized session data is stored securely with
/// appropriate access controls and encryption at rest.
/// </para>
/// </remarks>
public ValueTask<JsonElement> SerializeSessionAsync(AgentSession session, JsonSerializerOptions? jsonSerializerOptions = null, CancellationToken cancellationToken = default)
=> this.SerializeSessionCoreAsync(session, jsonSerializerOptions, cancellationToken);
@@ -194,6 +212,12 @@ public ValueTask<JsonElement> SerializeSessionAsync(AgentSession session, JsonSe
/// This method enables restoration of conversation sessions from previously saved state,
/// allowing conversations to resume across application restarts or be migrated between
/// different agent instances.
/// <para>
/// <strong>Security consideration:</strong> Restoring a session from an untrusted source is equivalent to accepting untrusted input.
/// Serialized sessions may contain conversation content, session identifiers, and potentially sensitive data. A compromised
/// storage backend could alter message roles to escalate trust, or inject adversarial content that influences LLM behavior.
/// Treat serialized session data as sensitive and ensure it is stored and transmitted securely.
/// </para>
/// </remarks>
public ValueTask<AgentSession> DeserializeSessionAsync(JsonElement serializedState, JsonSerializerOptions? jsonSerializerOptions = null, CancellationToken cancellationToken = default)
=> this.DeserializeSessionCoreAsync(serializedState, jsonSerializerOptions, cancellationToken);
@@ -301,6 +325,11 @@ public Task<AgentResponse> RunAsync(
/// The messages are processed in the order provided and become part of the conversation history.
/// The agent's response will also be added to <paramref name="session"/> if one is provided.
/// </para>
/// <para>
/// <strong>Security consideration:</strong> Agent Framework does not validate or sanitize message content — it is passed through
/// to the underlying AI service as-is. If input messages include untrusted user content, developers should be aware of prompt injection risks.
/// System-role messages must be developer-controlled and should never contain end-user input.
/// </para>
/// </remarks>
public Task<AgentResponse> RunAsync(
IEnumerable<ChatMessage> messages,
@@ -426,6 +455,11 @@ public IAsyncEnumerable<AgentResponseUpdate> RunStreamingAsync(
/// Each <see cref="AgentResponseUpdate"/> represents a portion of the complete response, allowing consumers
/// to display partial results, implement progressive loading, or provide immediate feedback to users.
/// </para>
/// <para>
/// <strong>Security consideration:</strong> Agent Framework does not validate or sanitize message content — it is passed through
/// to the underlying AI service as-is. If input messages include untrusted user content, developers should be aware of prompt injection risks.
/// System-role messages must be developer-controlled and should never contain end-user input.
/// </para>
/// </remarks>
public async IAsyncEnumerable<AgentResponseUpdate> RunStreamingAsync(
IEnumerable<ChatMessage> messages,
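The guidance above about treating LLM output as untrusted can be sketched on the caller side. This is an illustrative example, not part of the framework: `HtmlEncoder` is a real .NET type, while the wrapper class and method names here are hypothetical.

```csharp
// Hypothetical caller-side helper: HTML-encode untrusted assistant output before rendering.
using System;
using System.Text.Encodings.Web;

class LlmOutputSanitizer
{
    // Encoding makes any injected markup render inertly as text instead of executing.
    public static string ToSafeHtml(string assistantText) =>
        HtmlEncoder.Default.Encode(assistantText);

    static void Main()
    {
        string untrusted = "<script>alert('pwned')</script>Hello";
        Console.WriteLine(ToSafeHtml(untrusted)); // angle brackets and quotes are entity-encoded
    }
}
```

The same principle applies before using LLM output in SQL (use parameterized queries) or as code (do not evaluate it at all).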
22 changes: 22 additions & 0 deletions dotnet/src/Microsoft.Agents.AI.Abstractions/AIContextProvider.cs
@@ -28,6 +28,14 @@ namespace Microsoft.Agents.AI;
/// <see cref="InvokingAsync"/> to provide context, and optionally called at the end of invocation via
/// <see cref="InvokedAsync"/> to process results.
/// </para>
/// <para>
/// <strong>Security considerations:</strong> Context providers may inject messages with any role, including <c>system</c>, which
/// has the highest trust level and directly shapes LLM behavior. Developers must ensure that all providers attached to an agent
/// are trusted. Agent Framework does not validate or filter the data returned by providers — it is accepted as-is and merged into
/// the request context. If a provider retrieves data from an external source (e.g., a vector database or memory service), be aware
/// that a compromised data source could introduce adversarial content designed to manipulate LLM behavior via indirect prompt injection.
/// Implementers should validate and sanitize data retrieved from external sources before returning it.
/// </para>
/// </remarks>
public abstract class AIContextProvider
{
@@ -96,6 +104,11 @@ protected AIContextProvider(
/// <item><description>Injecting contextual messages from conversation history</description></item>
/// </list>
/// </para>
/// <para>
/// <strong>Security consideration:</strong> Data retrieved from external sources (e.g., vector databases, memory services, or
/// knowledge bases) may contain adversarial content designed to influence LLM behavior via indirect prompt injection.
/// Implementers should validate data integrity and consider the trustworthiness of the data source.
/// </para>
/// </remarks>
public ValueTask<AIContext> InvokingAsync(InvokingContext context, CancellationToken cancellationToken = default)
=> this.InvokingCoreAsync(Throw.IfNull(context), cancellationToken);
@@ -195,6 +208,11 @@ protected virtual async ValueTask<AIContext> InvokingCoreAsync(InvokingContext c
/// In contrast with <see cref="InvokingCoreAsync"/>, this method only returns additional context to be merged with the input,
/// while <see cref="InvokingCoreAsync"/> is responsible for returning the full merged <see cref="AIContext"/> for the invocation.
/// </para>
/// <para>
/// <strong>Security consideration:</strong> Any messages, tools, or instructions returned by this method will be merged into the
/// AI request context. If data is retrieved from external or untrusted sources, implementers should validate and sanitize it
/// to prevent indirect prompt injection attacks.
/// </para>
/// </remarks>
/// <param name="context">Contains the request context including the caller provided messages that will be used by the agent for this invocation.</param>
/// <param name="cancellationToken">The <see cref="CancellationToken"/> to monitor for cancellation requests. The default is <see cref="CancellationToken.None"/>.</param>
@@ -299,6 +317,10 @@ protected virtual ValueTask InvokedCoreAsync(InvokedContext context, Cancellatio
/// <para>
/// The default implementation of <see cref="InvokedCoreAsync"/> only calls this method if the invocation succeeded.
/// </para>
/// <para>
/// <strong>Security consideration:</strong> Messages being processed/stored may contain PII and sensitive conversation content.
/// Implementers should ensure appropriate encryption at rest and access controls for the storage backend.
/// </para>
/// </remarks>
protected virtual ValueTask StoreAIContextAsync(InvokedContext context, CancellationToken cancellationToken = default) =>
default;
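One possible shape for the "validate and sanitize data retrieved from external sources" advice, written as a standalone helper a provider implementation might call before returning retrieved text. The class name and the character cap are illustrative assumptions, not framework API.

```csharp
// Hypothetical pre-return filter for text a context provider pulled from an
// external store (vector database, memory service, knowledge base).
using System.Linq;

static class RetrievedContextFilter
{
    private const int MaxChars = 4000; // assumed cap to bound how much injected content can reach the LLM

    public static string Sanitize(string retrieved)
    {
        // Drop control characters that could hide instructions from human review,
        // keeping ordinary newlines.
        var cleaned = new string(retrieved.Where(c => !char.IsControl(c) || c == '\n').ToArray());
        return cleaned.Length <= MaxChars ? cleaned : cleaned.Substring(0, MaxChars);
    }
}
```

Filtering like this reduces, but does not eliminate, indirect prompt injection risk; the trustworthiness of the data source still matters.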
14 changes: 14 additions & 0 deletions dotnet/src/Microsoft.Agents.AI.Abstractions/AgentSession.cs
@@ -42,6 +42,15 @@ namespace Microsoft.Agents.AI;
/// <see cref="JsonElement"/> and the <see cref="AIAgent.DeserializeSessionAsync(JsonElement, JsonSerializerOptions?, System.Threading.CancellationToken)"/> method
/// can be used to deserialize the session.
/// </para>
/// <para>
/// <strong>Security considerations:</strong> Serialized sessions may contain conversation content, session identifiers,
/// and other potentially sensitive data including PII. Developers should:
/// <list type="bullet">
/// <item><description>Treat serialized session data as sensitive and store it securely with appropriate access controls and encryption at rest.</description></item>
/// <item><description>Treat restoring a session from an untrusted source as equivalent to accepting untrusted input. A compromised storage backend
/// could alter message roles to escalate trust, or inject adversarial content that influences LLM behavior.</description></item>
/// </list>
/// </para>
/// </remarks>
/// <seealso cref="AIAgent"/>
/// <seealso cref="AIAgent.CreateSessionAsync(System.Threading.CancellationToken)"/>
@@ -67,6 +76,11 @@ protected AgentSession(AgentSessionStateBag stateBag)
/// <summary>
/// Gets any arbitrary state associated with this session.
/// </summary>
/// <remarks>
/// Data stored in the <see cref="StateBag"/> will be included when the session is serialized.
/// Avoid storing secrets, credentials, or highly sensitive data in the state bag without appropriate encryption,
/// as this data may be persisted to external storage.
/// </remarks>
[JsonPropertyName("stateBag")]
public AgentSessionStateBag StateBag { get; protected set; } = new();

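A minimal sketch of the "encryption at rest" recommendation for serialized session state, assuming AES-CBC via .NET's `Aes.EncryptCbc`/`DecryptCbc` (available since .NET 6). Key management (for example, fetching the key from a key vault) is out of scope, and all names here are hypothetical.

```csharp
// Hypothetical at-rest protection for serialized session JSON before it is
// written to external storage. Key provisioning is the application's concern.
using System.Security.Cryptography;
using System.Text;

static class SessionStateProtector
{
    public static (byte[] Ciphertext, byte[] Iv) Encrypt(string sessionJson, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key; // e.g., retrieved from a key vault, never hard-coded
        byte[] iv = aes.IV; // freshly generated per encryption
        return (aes.EncryptCbc(Encoding.UTF8.GetBytes(sessionJson), iv), iv);
    }

    public static string Decrypt(byte[] ciphertext, byte[] iv, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        return Encoding.UTF8.GetString(aes.DecryptCbc(ciphertext, iv));
    }
}
```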
17 changes: 17 additions & 0 deletions dotnet/src/Microsoft.Agents.AI.Abstractions/ChatHistoryProvider.cs
@@ -37,6 +37,14 @@ namespace Microsoft.Agents.AI;
/// A <see cref="ChatHistoryProvider"/> is only relevant for scenarios where the underlying AI service that the agent is using
/// does not use in-service chat history storage.
/// </para>
/// <para>
/// <strong>Security considerations:</strong> Agent Framework does not validate or filter the messages returned by the provider
/// during load — they are accepted as-is and treated identically to user-supplied messages. Implementers must ensure that only
/// trusted data is returned. If the underlying storage is compromised, adversarial content could influence LLM behavior via
/// indirect prompt injection — for example, injected messages could alter the conversation context or impersonate different roles.
/// Messages stored in chat history may contain PII and sensitive conversation content; implementers should consider encryption
/// at rest and appropriate access controls for the storage backend.
/// </para>
/// </remarks>
public abstract class ChatHistoryProvider
{
@@ -159,6 +167,11 @@ protected virtual async ValueTask<IEnumerable<ChatMessage>> InvokingCoreAsync(In
/// Messages are returned in chronological order to maintain proper conversation flow and context for the agent.
/// The oldest messages appear first in the collection, followed by more recent messages.
/// </para>
/// <para>
/// <strong>Security consideration:</strong> Messages loaded from storage should be treated with the same caution as user-supplied
/// messages. A compromised storage backend could alter message roles to escalate trust (e.g., changing <c>user</c> messages to
/// <c>system</c> messages) or inject adversarial content that influences LLM behavior.
/// </para>
/// </remarks>
/// <param name="context">Contains the request context including the caller provided messages that will be used by the agent for this invocation.</param>
/// <param name="cancellationToken">The <see cref="CancellationToken"/> to monitor for cancellation requests. The default is <see cref="CancellationToken.None"/>.</param>
@@ -273,6 +286,10 @@ protected virtual ValueTask InvokedCoreAsync(InvokedContext context, Cancellatio
/// <para>
/// The default implementation of <see cref="InvokedCoreAsync"/> only calls this method if the invocation succeeded.
/// </para>
/// <para>
/// <strong>Security consideration:</strong> Messages being stored may contain PII and sensitive conversation content.
/// Implementers should ensure appropriate encryption at rest and access controls for the storage backend.
/// </para>
/// </remarks>
protected virtual ValueTask StoreChatHistoryAsync(InvokedContext context, CancellationToken cancellationToken = default) =>
default;
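The role-escalation risk called out above could be mitigated with a load-time guard such as the following. It assumes `ChatMessage` and `ChatRole` from Microsoft.Extensions.AI; the guard class itself is a hypothetical illustration, not framework API.

```csharp
// Hypothetical guard run over messages loaded from a chat history store:
// system prompts should originate from the application, never from storage.
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.AI; // ChatMessage, ChatRole

static class LoadedHistoryGuard
{
    public static IReadOnlyList<ChatMessage> RejectEscalatedRoles(IEnumerable<ChatMessage> loaded)
    {
        var messages = loaded.ToList();

        // Fail closed if the store returned a system-role message, since that
        // would grant stored (possibly tampered) content the highest trust level.
        if (messages.Any(m => m.Role == ChatRole.System))
            throw new InvalidOperationException("Stored history contained a system-role message.");

        return messages;
    }
}
```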
@@ -17,6 +17,24 @@ namespace Microsoft.Agents.AI;
/// <summary>
/// Provides a Cosmos DB implementation of the <see cref="ChatHistoryProvider"/> abstract class.
/// </summary>
/// <remarks>
/// <para>
/// <strong>Security considerations:</strong>
/// <list type="bullet">
/// <item><description><strong>PII and sensitive data:</strong> Chat history stored in Cosmos DB may contain PII, sensitive conversation
/// content, and system instructions. Ensure the Cosmos DB account is configured with appropriate access controls, encryption at rest,
/// and network security (e.g., private endpoints, virtual network rules). The <see cref="MessageTtlSeconds"/> property can be used to
/// automatically expire messages and limit data retention.</description></item>
/// <item><description><strong>Compromised store risks:</strong> Agent Framework does not validate or filter messages loaded from the
/// store — they are accepted as-is. If the Cosmos DB store is compromised, adversarial content could be injected into the conversation
/// context, potentially influencing LLM behavior via indirect prompt injection. Altered message roles (e.g., changing <c>user</c> to
/// <c>system</c>) could escalate trust levels.</description></item>
/// <item><description><strong>Authentication:</strong> Agent Framework does not manage authentication or encryption for the Cosmos DB
/// connection — these are the responsibility of the <see cref="CosmosClient"/> configuration. Use managed identity
/// or token-based authentication where possible, and avoid embedding connection strings with keys in source code.</description></item>
/// </list>
/// </para>
/// </remarks>
[RequiresUnreferencedCode("The CosmosChatHistoryProvider uses JSON serialization which is incompatible with trimming.")]
[RequiresDynamicCode("The CosmosChatHistoryProvider uses JSON serialization which is incompatible with NativeAOT.")]
public sealed class CosmosChatHistoryProvider : ChatHistoryProvider, IDisposable
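The "use managed identity or token-based authentication" note can be sketched with the real `CosmosClient(string, TokenCredential)` constructor and `DefaultAzureCredential` from the Azure.Identity package; the factory wrapper is illustrative.

```csharp
// Hedged sketch: token-based authentication to Cosmos DB instead of account keys.
using Azure.Identity;
using Microsoft.Azure.Cosmos;

static class CosmosClientFactory
{
    public static CosmosClient Create(string accountEndpoint) =>
        // DefaultAzureCredential resolves a managed identity when running in
        // Azure and developer credentials locally, so no key lives in config.
        new CosmosClient(accountEndpoint, new DefaultAzureCredential());
}
```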
22 changes: 22 additions & 0 deletions dotnet/src/Microsoft.Agents.AI.Mem0/Mem0Provider.cs
@@ -13,16 +13,38 @@

namespace Microsoft.Agents.AI.Mem0;

#pragma warning disable IDE0001 // Simplify Names - Microsoft.Extensions.Logging.LogLevel.Trace doesn't get found in net472 when removing the namespace.
/// <summary>
/// Provides a Mem0 backed <see cref="MessageAIContextProvider"/> that persists conversation messages as memories
/// and retrieves related memories to augment the agent invocation context.
/// </summary>
/// <remarks>
/// <para>
/// The provider stores user, assistant, and system messages as Mem0 memories and retrieves relevant memories
/// for new invocations using a semantic search endpoint. Retrieved memories are injected as user messages
/// to the model, prefixed by a configurable context prompt.
/// </para>
/// <para>
/// <strong>Security considerations:</strong>
/// <list type="bullet">
/// <item><description><strong>External service trust:</strong> This provider communicates with an external Mem0 service over HTTP.
/// Agent Framework does not manage authentication, encryption, or connection details for this service — these are the responsibility
/// of the <see cref="HttpClient"/> configuration. Ensure the HTTP client is configured with appropriate authentication
/// and uses HTTPS to protect data in transit.</description></item>
/// <item><description><strong>PII and sensitive data:</strong> Conversation messages (including user inputs, LLM responses, and system
/// instructions) are sent to the external Mem0 service for storage. These messages may contain PII or sensitive information.
/// Ensure the Mem0 service is configured with appropriate data retention policies and access controls.</description></item>
/// <item><description><strong>Indirect prompt injection:</strong> Memories retrieved from the Mem0 service are injected into the LLM
/// context as user messages. If the memory store is compromised, adversarial content could influence LLM behavior. The data
/// returned from the service is accepted as-is without validation or sanitization.</description></item>
/// <item><description><strong>Trace logging:</strong> When <see cref="Microsoft.Extensions.Logging.LogLevel.Trace"/> is enabled,
/// full memory content (including search queries and results) may be logged. This data may contain PII and should not be enabled
/// in production environments.</description></item>
/// </list>
/// </para>
/// </remarks>
public sealed class Mem0Provider : MessageAIContextProvider
#pragma warning restore IDE0001 // Simplify Names
{
private const string DefaultContextPrompt = "## Memories\nConsider the following memories when answering user questions:";

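A hedged sketch of configuring the `HttpClient` handed to a Mem0-backed provider so that traffic is HTTPS-only and authenticated. The factory shape and token source are illustrative assumptions; `HttpClient` and `AuthenticationHeaderValue` are real .NET APIs.

```csharp
// Hypothetical factory for the HttpClient used to reach the external Mem0
// service: refuses plain HTTP and attaches a bearer token for every request.
using System;
using System.Net.Http;
using System.Net.Http.Headers;

static class Mem0HttpClientFactory
{
    public static HttpClient Create(Uri baseAddress, string apiToken)
    {
        if (baseAddress.Scheme != Uri.UriSchemeHttps)
            throw new ArgumentException("Mem0 traffic must use HTTPS.", nameof(baseAddress));

        var client = new HttpClient { BaseAddress = baseAddress };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiToken); // token from secure config, not source code
        return client;
    }
}
```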