A TypeScript framework for building multi-agent AI systems with memory, tools, guardrails, and intelligent routing.
Cogni Path provides a powerful framework for orchestrating AI agents that can collaborate, use tools, maintain memory, and route requests intelligently. Built with TypeScript for type safety and modern development practices.
```shell
npm install cogni-path
```

```typescript
import { Agent } from 'cogni-path';
import { groqAIModel } from 'your-model-provider';

// Create an agent
const assistant = new Agent({
  name: 'assistant',
  role: 'You are a helpful assistant',
  instructions: ["Reply in the user's language", 'Be concise and helpful'],
  model: groqAIModel('openai/gpt-oss-120b'),
});

// Run the agent
const response = await assistant.run({ message: 'Hello!' });
console.log(response.reply);
```

The Agent class is the core unit for processing messages using an LLM model. Agents can use tools, maintain memory, and hand off to other agents.
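At its simplest, an agent is a system prompt assembled from its role and instructions, plus a model call. The toy skeleton below uses illustrative names, not cogni-path's internals:

```typescript
// Toy model of what an agent fundamentally is: a system prompt built from
// role + instructions, fed to a model alongside the user message.
// Purely illustrative — the real Agent also handles tools, memory,
// guardrails, and handoffs.
interface MiniAgent {
  role: string;
  instructions: string[];
  model: (systemPrompt: string, message: string) => string;
}

function runMini(agent: MiniAgent, message: string): string {
  // Role first, then each instruction on its own line.
  const systemPrompt = [agent.role, ...agent.instructions].join('\n');
  return agent.model(systemPrompt, message);
}
```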
```typescript
import { Agent } from 'cogni-path';

const agent = new Agent({
  name: 'weather-assistant',
  role: 'Weather assistant',
  instructions: ['Provide accurate weather information'],
  model: groqAIModel('openai/gpt-oss-120b'),
  responseMode: 'json', // 'json' | 'text' | 'markdown'
});

const response = await agent.run({ message: "What's the weather in Paris?" });
```

AgentMemory maintains conversation context, working memory, and state across agent runs.
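Conceptually this is an observable-store pattern: updates mutate state and notify subscribers, so external stores (a database, a file) can persist every change. A minimal sketch, not AgentMemory's actual internals:

```typescript
type MemoryEvent = { key: string; value: unknown };
type Subscriber = (event: MemoryEvent) => void;

// Minimal observable store: every update notifies subscribers, which is
// how a persistence hook can mirror memory into external storage.
class TinyMemory {
  private state = new Map<string, unknown>();
  private subscribers: Subscriber[] = [];

  subscribe(fn: Subscriber): () => void {
    this.subscribers.push(fn);
    // Return an unsubscribe handle.
    return () => {
      this.subscribers = this.subscribers.filter((s) => s !== fn);
    };
  }

  update(key: string, value: unknown): void {
    this.state.set(key, value);
    for (const fn of this.subscribers) fn({ key, value });
  }

  get(key: string): unknown {
    return this.state.get(key);
  }
}
```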
```typescript
import { AgentMemory } from 'cogni-path';

const memory = new AgentMemory({
  initial: {
    history: [],
    workingMemory: { userName: 'Alice' },
    summary: 'User asking about project status',
  },
});

// Subscribe to persist memory updates
memory.subscribe(async (event) => {
  await saveToDatabase(event);
});
```

Extend agent capabilities with custom tools that interact with external systems or APIs.
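A tool is a JSON-Schema-described function the model can invoke; before the function runs, its arguments can be checked against that schema. A rough sketch of the validation step (a real implementation would use a schema validator such as Ajv, and `ToolDef` here is an illustrative shape, not cogni-path's):

```typescript
// Sketch of validating a tool call against a JSON-Schema-style
// `parameters` block before running the function. Only checks required
// keys and primitive types — real schema validation covers much more.
interface ToolDef {
  name: string;
  parameters: {
    type: 'object';
    properties: Record<string, { type: string }>;
    required?: string[];
  };
  fn: (args: Record<string, unknown>) => unknown;
}

function callTool(tool: ToolDef, args: Record<string, unknown>): unknown {
  // Every required key must be present.
  for (const key of tool.parameters.required ?? []) {
    if (!(key in args)) {
      throw new Error(`${tool.name}: missing required argument '${key}'`);
    }
  }
  // Known keys must match their declared primitive type.
  for (const [key, value] of Object.entries(args)) {
    const spec = tool.parameters.properties[key];
    if (spec && typeof value !== spec.type) {
      throw new Error(`${tool.name}: argument '${key}' must be a ${spec.type}`);
    }
  }
  return tool.fn(args);
}
```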
```typescript
import { AgentTool } from 'cogni-path';

const weatherTool = new AgentTool({
  name: 'getWeather',
  useCase: 'Get current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' },
    },
    required: ['location'],
  },
  function: async (args: { location: string }) => {
    // Fetch weather data from API
    return { temperature: 72, condition: 'sunny' };
  },
});

const agent = new Agent({
  name: 'weather-bot',
  role: 'Weather assistant',
  model: groqAIModel('openai/gpt-oss-120b'),
  tools: [weatherTool],
});
```

The Hub is the central coordinator for multi-agent systems, routing each request to the most suitable agent or team.
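The Hub's routing is model-driven: the LLM decides which agent's rerouteRule best fits the request. As a stand-in for that decision, a toy keyword matcher conveys the idea (illustrative only, not how cogni-path actually routes):

```typescript
interface Routable {
  name: string;
  rerouteRule: string;
}

// Toy router: score each agent by how many significant words of its
// rerouteRule appear in the message, and pick the best match. The real
// Hub delegates this decision to the LLM.
function route(message: string, agents: Routable[]): Routable {
  const words = message.toLowerCase().split(/\W+/);
  let best = agents[0];
  let bestScore = -1;
  for (const agent of agents) {
    const ruleWords = agent.rerouteRule.toLowerCase().split(/\W+/);
    // Ignore short filler words like "for" and "the".
    const score = ruleWords.filter((w) => w.length > 3 && words.includes(w)).length;
    if (score > bestScore) {
      best = agent;
      bestScore = score;
    }
  }
  return best;
}
```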
```typescript
import { Hub } from 'cogni-path';

const weatherAgent = new Agent({
  name: 'weather',
  role: 'Weather assistant',
  rerouteRule: 'For weather information',
  model: groqAIModel('openai/gpt-oss-120b'),
  tools: [weatherTool],
});

const newsAgent = new Agent({
  name: 'news',
  role: 'News assistant',
  rerouteRule: 'For news information',
  model: groqAIModel('openai/gpt-oss-120b'),
  tools: [newsTool],
});

const hub = new Hub({
  agents: [weatherAgent, newsAgent],
  modelRunner,
  memory,
  logger: console,
});

hub.init();
const result = await hub.run({ message: "What's the weather in Paris?" });
```

A Team coordinates multiple agents that work together, with automatic handoffs based on their specializations.
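The handoff loop can be pictured as: start at the team's first agent, and let each agent either answer or name a teammate to take over. A toy sketch with illustrative names (the real Team delegates the handoff decision to the LLM):

```typescript
// Each step either produces a final reply or hands off to a named teammate.
type Step = { reply: string } | { handoffTo: string };

interface TeamMember {
  name: string;
  step: (message: string) => Step;
}

// Run the team: first agent is the entry point; follow handoffs until an
// agent replies, with a hop limit to avoid infinite loops.
function runTeamSketch(members: TeamMember[], message: string, maxHops = 5): string {
  let current = members[0];
  for (let hop = 0; hop < maxHops; hop++) {
    const result = current.step(message);
    if ('reply' in result) return result.reply;
    const next = members.find((m) => m.name === result.handoffTo);
    if (!next) throw new Error(`Unknown agent: ${result.handoffTo}`);
    current = next;
  }
  throw new Error('Handoff limit reached');
}
```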
```typescript
import { Team } from 'cogni-path';

const team = new Team({
  name: 'weather-and-news',
  agents: [weatherAgent, newsAgent],
  rerouteRule: 'For weather or news requests',
  memory,
  saveMessages: true,
});

const result = await team.run({ message: "What's the weather like in Paris?" });
```

Guardrails implement safety checks on AI conversations, with validation on both input and output.
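Guards run as a short-circuiting pipeline: each one inspects the text, and the first failure stops the run with its configured message. A minimal sketch, with plain predicates standing in for the model-backed checks:

```typescript
// A guard allows or rejects a piece of text; on rejection, its configured
// failure message is surfaced. Plain predicates here — the real guards
// call a moderation model.
interface GuardSketch {
  guardCheckFailureMessage: string;
  check: (text: string) => boolean; // true = allowed
}

function runInputGuards(
  text: string,
  guards: GuardSketch[],
): { ok: true } | { ok: false; reason: string } {
  for (const guard of guards) {
    // First failing guard short-circuits the pipeline.
    if (!guard.check(text)) {
      return { ok: false, reason: guard.guardCheckFailureMessage };
    }
  }
  return { ok: true };
}
```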
```typescript
import { simpleGuard, PIIGuard } from 'cogni-path';

const agent = new Agent({
  name: 'safe-assistant',
  role: 'You are a helpful assistant',
  model: groqAIModel('openai/gpt-oss-120b'),
  guardrails: {
    input: [
      simpleGuard({
        guardCheckFailureMessage: 'Input contains inappropriate content',
        model: groqAIModel('meta-llama/llama-guard-4-12b'),
      }),
    ],
    output: [
      PIIGuard({
        guardCheckFailureMessage: 'Response contains PII',
        model: groqAIModel('gpt-4'),
      }),
    ],
  },
});
```

For detailed API documentation, see:
- Agent - Core agent class with LLM integration, tools, memory, and handoffs
- AgentMemory - Memory and context management for agents
- AgentTool - Extending agents with custom tools
- Hub - Central coordinator for multi-agent systems with intelligent routing
- Team - Coordinating multiple agents with automatic handoffs
- Guardrails - Input/output safety checks and validation
```typescript
import { Hub, Agent, Team, AgentMemory } from 'cogni-path';

// Create specialized agents
const weatherAgent = new Agent({
  name: 'weather',
  role: 'Weather specialist',
  rerouteRule: 'For weather information and forecasts',
  model: groqAIModel('openai/gpt-oss-120b'),
  tools: [weatherTool],
});

const newsAgent = new Agent({
  name: 'news',
  role: 'News specialist',
  rerouteRule: 'For current news and updates',
  model: groqAIModel('openai/gpt-oss-120b'),
  tools: [newsTool],
});

// Create a team for related agents
const infoTeam = new Team({
  name: 'info-team',
  agents: [weatherAgent, newsAgent],
  rerouteRule: 'For weather or news requests',
});

// Create a general assistant
const generalAgent = new Agent({
  name: 'assistant',
  role: 'General assistant that coordinates with specialists',
  rerouteRule: 'For general questions and coordination',
  model: groqAIModel('openai/gpt-oss-120b'),
});

// Create hub to orchestrate everything
const memory = new AgentMemory({
  initial: { history: [], workingMemory: {} },
});

const hub = new Hub({
  agents: [generalAgent, infoTeam],
  modelRunner,
  memory,
  logger: console,
  debugFlags: ['run', 'handoff'],
});

hub.init();

// Hub automatically routes to the right agent/team
const response = await hub.run({ message: "What's the weather in Tokyo?" });
```

A fully configured agent, combining the options above:

```typescript
const agent = new Agent({
  name: 'advanced-assistant',
  role: 'You are an advanced AI assistant',
  instructions: [
    "Reply in the user's language",
    'Be concise and helpful',
    'Use tools when appropriate',
  ],
  model: groqAIModel('openai/gpt-oss-120b'),
  responseMode: 'json',
  tools: [weatherTool, calculatorTool],
  memory: new AgentMemory({
    initial: {
      history: [],
      workingMemory: { preferences: {} },
    },
  }),
  guardrails: {
    input: [inputGuard],
    output: [outputGuard],
  },
  debugFlags: ['all'],
  logger: console,
  saveMessages: true,
  rerunDelay: 1000,
});
```

**Agent**

| Method | Description |
|---|---|
| `run({ message?, modelRerunning? })` | Execute the agent with a user message |
| `setSaveMessages(boolean)` | Toggle message saving to memory |

**AgentMemory**

| Method | Description |
|---|---|
| `getContext()` | Retrieve current memory context |
| `updateContext(input)` | Update memory with new information |
| `subscribe(callback)` | Subscribe to memory update events |

**AgentTool**

| Method | Description |
|---|---|
| `call(args)` | Execute the tool with arguments |
| `setEnv(env)` | Set environment for tool execution |
| `get()` | Get tool definition for LLM APIs |
| `exposeToolResult()` | Retrieve last execution result |

**Hub**

| Method | Description |
|---|---|
| `init()` | Initialize hub and propagate resources |
| `run({ message?, startingAgent? })` | Execute with routing to appropriate agent |

**Team**

| Method | Description |
|---|---|
| `init({ reroutingAgents? })` | Initialize team with routing configuration |
| `run({ message? })` | Execute team with first agent as entry point |
Available debug flags for logging:

- `'all'` - Enable all debug output
- `'run'` - Log execution flow
- `'prepare'` - Log prompt preparation
- `'finalize'` - Log response finalization
- `'tools'` - Log tool execution
- `'guard'` - Log guardrail configuration
- `'guard-input'` - Log input guard execution
- `'guard-output'` - Log output guard execution
- `'handoff'` - Log handoff decisions
- `'end'` - Log completion
- `'memory-read'` - Log memory reads
- `'memory-write'` - Log memory writes
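A flagged log line is emitted only when its flag, or `'all'`, is enabled. A sketch of that gating logic (illustrative, not the library's actual logger):

```typescript
type DebugFlag =
  | 'all' | 'run' | 'prepare' | 'finalize' | 'tools' | 'guard'
  | 'guard-input' | 'guard-output' | 'handoff' | 'end'
  | 'memory-read' | 'memory-write';

// A message tagged with `flag` passes if that flag is enabled,
// or if the catch-all 'all' flag is set.
function shouldLog(enabled: DebugFlag[], flag: DebugFlag): boolean {
  return enabled.includes('all') || enabled.includes(flag);
}
```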
```shell
# Install dependencies
npm install

# Build the project
npm run build

# Development mode with watch
npm run dev

# Run tests
npm test

# Run tests with UI
npm run test:ui

# Generate coverage report
npm run coverage

# Lint code
npm run lint

# Fix linting issues
npm run lint:fix

# Format code
npm run format

# Check formatting
npm run format:check
```

- Node.js >= 18.0.0
- TypeScript >= 5.0.0
MIT © Roman Jankowski
TypeScript, Node.js, AI, Agents, LLM, Multi-Agent, Tools, Memory, Guardrails, Routing