Add intelligence layer: complexity analysis, performance tracking, and workflow patterns#3
Conversation
…ueue, crew workflows
Co-authored-by: MASSIVEMAGNETICS <209589629+MASSIVEMAGNETICS@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2628694aac
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Pull request overview
This pull request adds an "intelligence layer" to the NexusForge agent framework, enhancing it with complexity analysis, performance tracking, priority message queuing, and advanced workflow patterns. The implementation interprets "enhance expand, upgrade" as adding self-optimization and intelligence capabilities to the existing autonomous agent system.
Changes:
- Enhanced goal decomposition with complexity scoring (0.0-1.0 scale), dependency tracking, and effort estimation
- Self-improving expert and agent systems with performance tracking and asymmetric learning rates (failures penalize 2x-2.5x more than successes)
- Priority-based message queuing (4 levels) with dependency resolution using heapq
- Five crew workflow patterns (SEQUENTIAL, PARALLEL, PIPELINE, MAP_REDUCE, HIERARCHICAL)
- Comprehensive system analytics with top performer tracking
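The asymmetric learning rates described above (failures penalized 2x-2.5x more than successes) can be sketched roughly as follows. The class name, the base reward rate, and the exact update rule are illustrative assumptions, not the actual NexusForge implementation; only the 2x failure multiplier and the 0.5-2.0 clamp come from this thread:

```python
# Illustrative sketch of asymmetric performance updates: failures move
# the score farther than successes do. Names and rates are assumptions.

class PerformanceTracker:
    def __init__(self, success_rate: float = 0.05, failure_multiplier: float = 2.0):
        self.score = 1.0  # neutral starting score
        self.success_rate = success_rate
        self.failure_multiplier = failure_multiplier

    def record(self, success: bool) -> float:
        if success:
            self.score += self.success_rate
        else:
            # a failure is penalized failure_multiplier times as much
            self.score -= self.success_rate * self.failure_multiplier
        # clamp to the 0.5-2.0 range discussed later in this review
        self.score = max(0.5, min(2.0, self.score))
        return self.score
```

One success followed by one failure leaves the score slightly below its starting point, which is the intended bias toward caution.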
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/test_enhancements.py | Comprehensive test suite with 17 new test cases covering all enhancement features |
| nexusforge/core/fractal_agent.py | Enhanced goal decomposition with complexity analysis, 6 new agent roles, and agent performance tracking |
| nexusforge/core/nexus.py | New APIs for priority messaging, performance retrieval, and enhanced statistics |
| nexusforge/communication/hub.py | Priority queue implementation with message dependencies and heapq-based ordering |
| nexusforge/agents/experts.py | Expert performance tracking with self-improving selection and asymmetric learning |
| nexusforge/agents/crew.py | Workflow pattern support with 5 execution strategies and crew statistics |
| examples/enhanced_features_demo.py | Working demonstration of all new features |
| docs/ENHANCEMENTS.md | Comprehensive documentation with API reference and migration guide |
| README.md | Updated feature highlights and enhancements section |
| ENHANCEMENT_SUMMARY.md | Detailed summary of all changes and improvements |
nexusforge/core/nexus.py (outdated)

```diff
 "total_messages": len(self.communication_hub.message_history),
-"conversations": len(self.communication_hub.conversations)
+"conversations": len(self.communication_hub.conversations),
+"pending_messages": len(self.communication_hub._pending_messages)  # New
```
Accessing private attribute _pending_messages from the CommunicationHub breaks encapsulation. Consider adding a public method like get_pending_message_count() to the CommunicationHub class to properly expose this information, or document that this is an intentional internal access pattern for the orchestrator.
Fixed in f27a2ef. Added get_pending_message_count() public method to CommunicationHub class to properly expose pending message count without breaking encapsulation.
nexusforge/core/fractal_agent.py (outdated)

```python
tasks_failed: int = 0
total_execution_time: float = 0.0
last_activity: datetime = field(default_factory=datetime.now)
performance_score: float = 1.0  # 0.0 to 2.0, starts at 1.0
```
The comment states performance_score ranges from "0.0 to 2.0", but the actual implementation clamps it between 0.5 and 2.0 (see lines 441 and 446). Update the comment to accurately reflect the actual range of "0.5 to 2.0" to match the implementation.
Suggested change:

```diff
-performance_score: float = 1.0  # 0.0 to 2.0, starts at 1.0
+performance_score: float = 1.0  # 0.5 to 2.0, starts at 1.0
```
Fixed in f27a2ef. Updated comment to accurately reflect the actual range "0.5 to 2.0" to match the implementation (lines 441 and 446).
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@copilot apply changes based on the comments in this thread
Co-authored-by: MASSIVEMAGNETICS <209589629+MASSIVEMAGNETICS@users.noreply.github.com>
All review comments have been addressed in commit f27a2ef:
All 30 tests passing.
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 12 comments.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@copilot apply changes based on the comments in this thread
Co-authored-by: MASSIVEMAGNETICS <209589629+MASSIVEMAGNETICS@users.noreply.github.com>
All review comments addressed in commit 39a6182:
All 30 tests passing.
Co-authored-by: MASSIVEMAGNETICS <209589629+MASSIVEMAGNETICS@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b4278f5eb0
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated no new comments.
@copilot apply changes based on the comments in this thread
Review #3805660526 generated no new comments. All previous feedback has been fully addressed in commits 39a6182 and 0719826:
All 30 tests passing. No additional changes needed.
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 8 comments.
docs/ENHANCEMENTS.md:

```markdown
### Key Features

#### Goal Complexity Analysis
- **Automatic complexity scoring** (0.0 to 1.0 scale)
```
Documentation says goal complexity scoring is on a 0.0–1.0 scale, but _analyze_goal_complexity() clamps to a minimum of 0.1. Please update this section to reflect the actual 0.1–1.0 range (or change the implementation if 0.0 is intended).
Suggested change:

```diff
-- **Automatic complexity scoring** (0.0 to 1.0 scale)
+- **Automatic complexity scoring** (0.1 to 1.0 scale)
```
```markdown
#### `_analyze_goal_complexity(goal: str) -> float`
Analyze goal complexity (0.0 to 1.0 scale).
```
The API reference for _analyze_goal_complexity() states a 0.0–1.0 scale, but the implementation clamps the minimum to 0.1. Update the documented range here to match actual behavior (0.1–1.0) to avoid misleading users.
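To illustrate the clamping behavior the review describes, here is a minimal sketch of a scorer whose minimum is floored at 0.1. The keyword and length heuristics below are invented for demonstration; the real `_analyze_goal_complexity()` logic is not shown in this thread:

```python
# Hypothetical complexity scorer illustrating the 0.1 floor discussed
# in the review; the heuristics are assumptions, not NexusForge code.

def analyze_goal_complexity(goal: str) -> float:
    score = 0.0
    words = goal.lower().split()
    # longer goals contribute up to 0.4
    score += min(len(words) / 50.0, 0.4)
    # certain keywords suggest harder work (illustrative list)
    for keyword in ("integrate", "optimize", "distributed", "refactor"):
        if keyword in words:
            score += 0.15
    # clamp: because of the 0.1 floor, the effective scale is 0.1-1.0
    return max(0.1, min(1.0, score))
```

Because of the `max(0.1, ...)` clamp, even a trivial goal like `"fix typo"` scores 0.1, which is why the documented "0.0 to 1.0" range is misleading.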
```markdown
- **Before**: Simple keyword matching for task breakdown
- **After**: Sophisticated complexity analysis with dependency tracking
- **Key Features**:
  - Goal complexity scoring (0.0-1.0 scale)
```
This summary states complexity scoring is 0.0–1.0, but the current implementation clamps complexity to a minimum of 0.1. Update the stated range to 0.1–1.0 (or adjust the implementation) so the summary matches runtime behavior.
Suggested change:

```diff
-  - Goal complexity scoring (0.0-1.0 scale)
+  - Goal complexity scoring (0.1-1.0 scale)
```
```python
async def _execute_sequential(self, crew: Crew, task: Dict[str, Any]):
    """Execute task sequentially through crew members"""
    self.logger.info(f"Executing sequential workflow for crew {crew.crew_id}")

    # Assign to members in order of their roles
    for i, member in enumerate(crew.members):
        if member.active:
            subtask = {
                **task,
                "phase": i,
                "total_phases": len(crew.members),
                "previous_member": crew.members[i-1].agent_id if i > 0 else None
            }

            await self.communication_hub.delegate_task(
                from_agent=crew.leader_id or "crew_manager",
                to_agent=member.agent_id,
                task=subtask
            )
```
_execute_sequential() delegates the task to every active member without waiting for the prior member to finish, so execution can still happen concurrently even though the workflow is labeled “sequential” (and WorkflowPattern.SEQUENTIAL is documented as running one after another). If true sequential semantics are intended, consider awaiting a completion/result signal from each phase (or using message dependencies) before delegating the next phase.
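One way to get the true sequential semantics the comment asks for is to await a completion signal per phase before delegating the next. A hedged sketch follows; the `wait_for_result()` helper and the per-phase timeout are assumptions about the hub API, not existing NexusForge methods:

```python
import asyncio

# Sketch of phase-by-phase delegation that blocks on each member's
# completion before starting the next phase. `delegate_task` and
# `wait_for_result` are assumed hub methods, shown for illustration.

async def execute_sequential(hub, crew, task, phase_timeout: float = 60.0):
    result = None
    active = [m for m in crew.members if m.active]
    for i, member in enumerate(active):
        subtask = {**task, "phase": i, "total_phases": len(active),
                   "previous_result": result}
        await hub.delegate_task(from_agent=crew.leader_id or "crew_manager",
                                to_agent=member.agent_id, task=subtask)
        # Block until this member reports completion before delegating
        # the next phase; a timeout aborts the remaining phases.
        try:
            result = await asyncio.wait_for(
                hub.wait_for_result(member.agent_id), timeout=phase_timeout)
        except asyncio.TimeoutError:
            raise RuntimeError(f"phase {i} ({member.agent_id}) timed out")
    return result
```

Threading each phase's result into the next subtask also gives the pipeline-like data flow that a sequential workflow usually implies.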
```diff
 async def _deliver_message(self, message: Message):
     """Deliver a message to target agent(s) and mark as delivered"""
     if message.to_agent:
         # Direct message - use priority queue
         if message.to_agent in self.message_queues:
             heapq.heappush(self.message_queues[message.to_agent], message)
             self.logger.debug(
                 f"Message {message.message_id} queued for {message.to_agent} "
                 f"with priority {message.priority.name}"
             )
         else:
-            self.logger.warning(f"Agent {to_agent} not found for message delivery")
+            self.logger.warning(f"Agent {message.to_agent} not found for message delivery")
     else:
-        # Broadcast to all agents
+        # Broadcast to all agents with priority
         for agent_id, queue in self.message_queues.items():
-            if agent_id != from_agent:
-                await queue.put(message)
-        self.logger.debug(f"Message {message_id} broadcast to all agents")
+            if agent_id != message.from_agent:
+                heapq.heappush(queue, message)
+        self.logger.debug(f"Message {message.message_id} broadcast to all agents")

     # Mark message as delivered
     self._delivered_message_ids.add(message.message_id)
```
In _deliver_message(), the message is added to _delivered_message_ids even when a direct message cannot be queued because to_agent is not registered. This can incorrectly satisfy dependencies for other messages even though the dependency was never actually delivered to any recipient. Only mark a message as delivered after it has been enqueued to at least one target queue (or define/track a separate state like queued vs delivered).
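A sketch of the fix the reviewer proposes: record a message as delivered only after it has actually landed in at least one queue. The standalone function below is a simplified stand-in for the hub method (field names mirror the snippet above, but the surrounding class is omitted):

```python
import heapq

# Simplified delivery routine: the delivered-IDs set is updated only
# when the message was actually enqueued somewhere, so undeliverable
# messages cannot satisfy other messages' dependencies.

def deliver_message(queues, delivered_ids, message):
    enqueued = False
    if message.to_agent:
        queue = queues.get(message.to_agent)
        if queue is not None:
            heapq.heappush(queue, message)
            enqueued = True
        # else: unknown recipient; caller can log a warning here
    else:
        # broadcast to everyone except the sender
        for agent_id, queue in queues.items():
            if agent_id != message.from_agent:
                heapq.heappush(queue, message)
                enqueued = True
    if enqueued:
        # dependencies are only satisfied by real deliveries
        delivered_ids.add(message.message_id)
    return enqueued
```

The boolean return also gives callers a hook to retry or dead-letter messages whose recipient was never registered.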
```diff
 async def receive_message(self, agent_id: str, timeout: float = 1.0) -> Optional[Message]:
     """
-    Receive a message for an agent (non-blocking with timeout)
+    Receive highest priority message for an agent (non-blocking with timeout)
     """
     if agent_id not in self.message_queues:
         return None

-    try:
-        message = await asyncio.wait_for(
-            self.message_queues[agent_id].get(),
-            timeout=timeout
-        )
-        self.logger.debug(f"Agent {agent_id} received message {message.message_id}")
-        return message
-    except asyncio.TimeoutError:
+    queue = self.message_queues[agent_id]
+
+    if not queue:
+        # No messages, wait briefly
+        await asyncio.sleep(min(timeout, 0.1))
+        return None
```
receive_message() no longer honors the timeout contract: when the queue is empty it sleeps for up to 0.1s and returns None without waiting for the full timeout or re-checking the queue, so messages that arrive within the requested timeout can be missed. Consider using an awaitable primitive (e.g., asyncio.PriorityQueue / asyncio.Condition) or looping until timeout expires while re-checking the heap.
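A sketch of the looping approach the comment suggests, which honors the full timeout by re-checking the heap until the deadline. The polling interval is an assumption; an `asyncio.Condition` notified on enqueue would avoid polling entirely:

```python
import asyncio
import heapq
import time

# Poll the per-agent heap until a message arrives or the timeout
# elapses, so messages that arrive mid-wait are not missed.

async def receive_message(queues, agent_id, timeout: float = 1.0,
                          poll_interval: float = 0.05):
    queue = queues.get(agent_id)
    if queue is None:
        return None
    deadline = time.monotonic() + timeout
    while True:
        if queue:
            # pop the highest-priority (smallest) message
            return heapq.heappop(queue)
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return None
        await asyncio.sleep(min(poll_interval, remaining))
```

Capping each sleep at the remaining time keeps the worst-case overrun below one poll interval.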
```diff
     goals: Optional[List[str]] = None,
+    workflow_pattern = None  # Import will be added
 ) -> str:
-    """Create a crew from existing agents"""
+    """Create a crew from existing agents with workflow pattern"""
     if len(agent_ids) != len(roles):
         raise ValueError("Number of agents must match number of roles")

-    # Create crew
-    crew_id = self.crew_manager.create_crew(name, leader_id, goals)
+    # Import here to avoid circular import
+    from nexusforge.agents.crew import WorkflowPattern
+    if workflow_pattern is None:
+        workflow_pattern = WorkflowPattern.SEQUENTIAL
+
+    # Create crew with workflow
+    crew_id = self.crew_manager.create_crew(
+        name,
+        leader_id,
+        goals,
+        workflow_pattern=workflow_pattern
+    )

     # Add members
     for agent_id, role in zip(agent_ids, roles):
         if agent_id in self.agents:
             self.crew_manager.add_member(crew_id, agent_id, role)

-    self.logger.info(f"Created crew: {name} with {len(agent_ids)} members")
+    self.logger.info(
+        f"Created crew: {name} with {len(agent_ids)} members "
+        f"using {workflow_pattern.value} workflow"
+    )
```
create_crew() accepts workflow_pattern without a type annotation and later assumes it has a .value attribute. If a caller passes a string (or any non-WorkflowPattern), this will raise at log time and may also store an invalid pattern in the crew. Consider annotating as WorkflowPattern | None (or Optional[WorkflowPattern]) and validating/coercing inputs before calling crew_manager.create_crew() and before logging.
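The annotation and validation the comment asks for could look roughly like this. The five enum values are taken from the PR description; the coercion helper itself is an illustrative suggestion, not existing code:

```python
from enum import Enum
from typing import Optional, Union

# WorkflowPattern values match the five patterns listed in this PR;
# the coercion helper is an illustrative suggestion.

class WorkflowPattern(Enum):
    SEQUENTIAL = "sequential"
    PARALLEL = "parallel"
    PIPELINE = "pipeline"
    MAP_REDUCE = "map_reduce"
    HIERARCHICAL = "hierarchical"

def coerce_workflow_pattern(
    pattern: Optional[Union[WorkflowPattern, str]]
) -> WorkflowPattern:
    """Accept an enum member, a string value, or None (defaults to SEQUENTIAL)."""
    if pattern is None:
        return WorkflowPattern.SEQUENTIAL
    if isinstance(pattern, WorkflowPattern):
        return pattern
    try:
        return WorkflowPattern(str(pattern).lower())
    except ValueError:
        raise ValueError(f"Unknown workflow pattern: {pattern!r}")
```

With this helper, `create_crew()` can be annotated as `workflow_pattern: Optional[WorkflowPattern] = None` and safely call `.value` on the coerced result when logging.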
```python
return await self.communication_hub.send_message(
    from_agent=from_agent,
    to_agent=to_agent,
    message_type=MessageType.CHAT,
    content=message,
    priority=priority_map.get(priority, MessagePriority.NORMAL)
)
```
send_priority_message() maps priorities using an exact, case-sensitive string key. Passing values like "high" will silently fall back to NORMAL, which is surprising for callers. Consider normalizing the input (e.g., .upper()), validating unknown values, or accepting MessagePriority directly in the public API.
Suggested change:

```diff
-return await self.communication_hub.send_message(
-    from_agent=from_agent,
-    to_agent=to_agent,
-    message_type=MessageType.CHAT,
-    content=message,
-    priority=priority_map.get(priority, MessagePriority.NORMAL)
-)
+# Normalize and validate the priority value
+if isinstance(priority, MessagePriority):
+    priority_key = priority.name
+else:
+    priority_key = str(priority).upper()
+    if priority_key not in priority_map:
+        self.logger.warning(
+            "Unknown message priority '%s' from %s to %s; defaulting to NORMAL",
+            priority,
+            from_agent,
+            to_agent,
+        )
+resolved_priority = priority_map.get(priority_key, MessagePriority.NORMAL)
+return await self.communication_hub.send_message(
+    from_agent=from_agent,
+    to_agent=to_agent,
+    message_type=MessageType.CHAT,
+    content=message,
+    priority=resolved_priority
+)
```
Enhancement Plan: Expand & Upgrade NexusForge 2.0 ✅ COMPLETE
Successfully Completed All Enhancements:
1. 🎯 Enhanced Goal Decomposition
2. 📊 Expert Performance Tracking
3. 🚦 Priority Message Queuing
4. 👥 Advanced Crew Workflows
5. 📈 Agent Performance Metrics
6. 🔍 Enhanced System Statistics
Recent Fixes (2nd PR Review):
- `Set[str]` for delivered message IDs

Quality Assurance:

Documentation:

Performance Impact:
This PR transforms NexusForge 2.0 into an intelligent, self-optimizing autonomous system while maintaining complete backward compatibility.
Original prompt