# [Phase 2.5-F] Wire engine parameter through v2 API routes (#354) #387
Wire the engine parameter ("plan" or "react") through the v2 API
layer so ReactAgent can be invoked via HTTP, not just CLI.
- Add engine field to StartExecutionRequest and ApproveTasksRequest
- Add engine validation (model_validator + HTTPException for query param)
- Pass engine to conductor.start_batch() and runtime.execute_agent()
- Fix bug: retry_count → max_retries parameter name in /execute endpoint
- Add engine to StartExecutionResponse for confirmation
Test engine parameter validation, default values, and passthrough
on all three execution endpoints: /execute, /approve, and /{id}/start.
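As a rough illustration of the wiring described above, an engine string can select between two agent implementations. The class and function names here are hypothetical, not the project's actual API:

```python
# Hypothetical sketch of engine dispatch; PlanAgent, ReactAgent, and
# make_agent are illustrative names, not CodeFrame's real classes.
class PlanAgent:
    name = "plan"

class ReactAgent:
    name = "react"

ENGINES = {"plan": PlanAgent, "react": ReactAgent}

def make_agent(engine: str = "plan"):
    """Return an agent for the given engine, rejecting unknown values."""
    try:
        return ENGINES[engine]()
    except KeyError:
        raise ValueError(f"engine must be one of: {', '.join(ENGINES)}") from None
```

The HTTP layer's job in this PR is only to accept, validate, and forward that string.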
🚥 Pre-merge checks: ✅ 6 passed
## Code Review - Phase 2.5-F Engine Parameter Integration

I've reviewed the changes for wiring the `engine` parameter through the v2 API layer.

### ✅ Strengths

**1. Excellent Test Coverage**

**2. Bug Fix Included**

```python
# Before (would cause TypeError)
retry_count=body.retry_count

# After (correct parameter name)
max_retries=body.retry_count
```

This would have caused runtime failures - good catch!

**3. Consistent Validation Pattern**

**4. Thin Adapter Pattern Maintained**

The router correctly delegates to the conductor and runtime:

```python
conductor.start_batch(..., engine=body.engine)
runtime.execute_agent(..., engine=engine)
```

**5. Response Model Updated**

Including `engine` in the response lets clients confirm which engine was selected.

### 🔍 Suggestions & Observations

**1. Validator Duplication (Minor)**

The same validation logic is repeated across the request models; it could be extracted to a shared helper:

```python
# In tasks_v2.py (top-level)
def validate_engine(value: str) -> str:
    """Validate engine parameter."""
    valid = ("plan", "react")
    if value not in valid:
        raise ValueError(f"engine must be one of: {', '.join(valid)}")
    return value

class StartExecutionRequest(BaseModel):
    engine: str = Field("plan", description="...")

    @model_validator(mode="after")
    def _validate_engine(self) -> "StartExecutionRequest":
        self.engine = validate_engine(self.engine)
        return self
```

This is a nice-to-have, not blocking - the current approach works fine.

**2. Query Param Validation Inconsistency (Minor)**

For the `/{id}/start` query parameter, validation is done manually:

```python
valid_engines = ("plan", "react")
if engine not in valid_engines:
    raise HTTPException(status_code=400, ...)
```

FastAPI can handle this validation automatically with:

```python
from typing import Literal

engine: Literal["plan", "react"] = Query("plan", description="...")
```

This would auto-document the allowed values in OpenAPI and return a consistent status code across endpoints.

Note: If you prefer explicit 400 for query params vs 422 for body, that's a valid design choice. Just worth considering consistency.

**3. Test Sleep Timing (Nitpick)**

```python
time.sleep(0.5)  # Give background thread a moment
```

This could be flaky on slow CI systems. Consider using a polling approach or increasing to 1.0 seconds. Not critical if tests pass consistently.

**4. Missing Edge Case Test (Enhancement)**

Consider adding a test for `start_execution=False`:

```python
def test_approve_without_execution_ignores_engine(self, tmp_path, client):
    """Engine should be ignored when start_execution=False."""
    ws = _make_workspace(tmp_path)
    task = _make_task(ws, status=TaskStatus.BACKLOG)
    approval = ApprovalResult(approved_count=1, ...)
    with patch("...", return_value=approval):
        resp = client.post(
            "/api/v2/tasks/approve",
            json={"start_execution": False, "engine": "react"},
            params={"workspace_path": str(ws.repo_path)},
        )
    assert resp.status_code == 200
    assert "batch_id" not in resp.json()  # No batch started
```

This verifies the conditional logic in the route handler.

**5. Documentation Gap (Enhancement)**

The PR description mentions SSE streaming compatibility but there's no explicit test. Consider adding a comment in the test file referencing this, or add a simple test that verifies events work with the ReAct engine.

### 🔒 Security & Quality

### 📋 Pre-Merge Checklist

Based on the PR description, all acceptance criteria are met.

**Recommendation:** ✅ APPROVE with optional improvements noted above.

### 🎯 Final Notes

This PR is production-ready. The suggestions above are optional enhancements for code quality and test robustness. The core implementation is solid, well-tested, and follows project conventions. Great work on finding and fixing the `retry_count` bug!
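The two validation suggestions in the review can share a single source of truth: deriving the runtime tuple from the `Literal` type keeps a body validator and a query-param annotation from drifting apart. A stdlib-only sketch (the names here are illustrative, not the project's actual helpers):

```python
from typing import Literal, get_args

# One definition of the allowed engines; annotations and runtime checks
# both derive from it, so they cannot disagree.
Engine = Literal["plan", "react"]
VALID_ENGINES: tuple[str, ...] = get_args(Engine)

def validate_engine(value: str) -> str:
    """Runtime check derived from the same Literal used in annotations."""
    if value not in VALID_ENGINES:
        raise ValueError(f"engine must be one of: {', '.join(VALID_ENGINES)}")
    return value
```

Whether the route then raises 400 explicitly or lets the framework return 422, both paths reject the same set of values.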
- Use `Literal["plan", "react"]` for query param instead of manual validation — auto-documents in OpenAPI, consistent 422 across all endpoints
- Bump background thread sleep from 0.5s to 1.0s for CI stability
- Add edge case test: approve with `start_execution=False` skips batch
🧹 Nitpick comments (2)
tests/integration/test_tasks_v2_engine.py (2)
**1-9**: Add SSE stream coverage for ReAct engine.

The objectives mention validating `/api/v2/tasks/{task_id}/stream` with ReAct phase events, but this module doesn't exercise streaming yet. Consider adding a streaming integration test to cover that path.
**213-239**: Avoid fixed sleep to reduce test flakiness.

Using `time.sleep(0.5)` can be flaky on slower CI. Prefer waiting on an event set by the mocked `execute_agent`.

♻️ Suggested refactor

```diff
-import time
+import threading
@@
-    with (
-        patch("codeframe.ui.routers.tasks_v2.runtime.start_task_run", return_value=run),
-        patch("codeframe.ui.routers.tasks_v2.runtime.execute_agent") as mock_exec,
-    ):
+    called = threading.Event()
+    with (
+        patch("codeframe.ui.routers.tasks_v2.runtime.start_task_run", return_value=run),
+        patch(
+            "codeframe.ui.routers.tasks_v2.runtime.execute_agent",
+            side_effect=lambda *a, **k: called.set(),
+        ) as mock_exec,
+    ):
         resp = client.post(
             f"/api/v2/tasks/{task.id}/start",
             params={
                 "workspace_path": str(ws.repo_path),
                 "execute": "true",
                 "engine": "react",
             },
         )
         assert resp.status_code == 200
-        # Give background thread a moment to invoke mock
-        time.sleep(0.5)
+        assert called.wait(1)
         mock_exec.assert_called_once()
         _, kwargs = mock_exec.call_args
         assert kwargs["engine"] == "react"
```
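The event-based approach in the suggested refactor can be demonstrated in isolation: `Event.wait()` returns as soon as the background thread fires, instead of always paying a fixed sleep. A minimal stdlib sketch:

```python
import threading

def run_in_background() -> bool:
    """Start a worker thread and wait on an Event instead of sleeping."""
    called = threading.Event()

    def worker() -> None:
        called.set()  # stand-in for the mocked execute_agent side effect

    t = threading.Thread(target=worker)
    t.start()
    # Returns True as soon as the worker sets the event; the timeout only
    # bounds the worst case, so fast runs stay fast and slow CI still passes.
    ok = called.wait(timeout=1.0)
    t.join()
    return ok
```

This is why an event wait both tightens the happy path and tolerates slower machines, unlike a fixed `time.sleep()`.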
## Updated Review - Engine Parameter Integration

I've reviewed the latest changes and I'm pleased to see the improvements! Here's my updated feedback:

### ✅ Changes Since Last Review

**1. Query Parameter Validation Improved**

The query parameter now uses:

```python
engine: Literal["plan", "react"] = Query("plan", ...)
```

This is much better than manual `HTTPException` validation because it auto-documents the allowed values in OpenAPI and returns a consistent 422 across endpoints.

Great improvement! 🎉

**2. Edge Case Test Added**

The new test covers `start_execution=False`, verifying the engine is ignored when no batch is started.

**3. Test Sleep Increased**

In the background-thread test:

```python
time.sleep(1.0)  # Give background thread a moment
```

This should reduce flakiness on slower CI systems.

### 🔍 Final Review

- Code Quality: ✅ Excellent
- Test Coverage: ✅ Comprehensive
- Bug Fix: ✅ Valuable - the `retry_count` → `max_retries` fix prevents a runtime `TypeError`
- Security: ✅ No concerns - input validation prevents injection attacks
- Performance: ✅ No concerns - parameter passthrough has negligible overhead
- Documentation: ✅ Good

### 🎯 Final Recommendation

✅ APPROVED - This PR is production-ready. All my previous suggestions have been addressed.

The only remaining minor suggestion (extracting validator logic to reduce duplication) is a nice-to-have refactoring opportunity for the future - not blocking for this PR.

### 📋 Verification Checklist

Great work on the iterative improvements! 🚀
🧹 Nitpick comments (1)
codeframe/ui/routers/tasks_v2.py (1)
**54-64**: Replace str fields and custom validators with `Literal` type annotations.

This eliminates duplicate validation logic and lets Pydantic v2 automatically generate proper OpenAPI enums. Since `Literal["plan", "react"]` is already used elsewhere in this file (lines 589-590), apply it consistently to `ApproveTasksRequest.engine` and `StartExecutionRequest.engine`, removing both `@model_validator` methods. If both validators are removed, drop `model_validator` from the imports.

♻️ Refactor

```diff
 class ApproveTasksRequest(BaseModel):
     ...
-    engine: str = Field(
+    engine: Literal["plan", "react"] = Field(
         "plan",
         description="Execution engine: 'plan' (default) or 'react' (ReAct loop)",
     )
-
-    @model_validator(mode="after")
-    def _validate_engine(self) -> "ApproveTasksRequest":
-        valid = ("plan", "react")
-        if self.engine not in valid:
-            raise ValueError(f"engine must be one of: {', '.join(valid)}")
-        return self

 class StartExecutionRequest(BaseModel):
     ...
-    engine: str = Field(
+    engine: Literal["plan", "react"] = Field(
         "plan",
         description="Execution engine: 'plan' (default) or 'react' (ReAct loop)",
     )
-
-    @model_validator(mode="after")
-    def _validate_engine(self) -> "StartExecutionRequest":
-        valid = ("plan", "react")
-        if self.engine not in valid:
-            raise ValueError(f"engine must be one of: {', '.join(valid)}")
-        return self
```

Also applies to: lines 111-121
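The refactor above can be checked in a few lines, assuming Pydantic v2. The model and field mirror the diff, but this is a standalone sketch, not the project's actual file:

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError

class StartExecutionRequest(BaseModel):
    # Literal replaces the str field plus @model_validator: Pydantic rejects
    # anything outside ("plan", "react") and documents the enum in the schema.
    engine: Literal["plan", "react"] = Field(
        "plan",
        description="Execution engine: 'plan' (default) or 'react' (ReAct loop)",
    )

# Valid values pass, the default applies, and anything else fails validation.
assert StartExecutionRequest().engine == "plan"
assert StartExecutionRequest(engine="react").engine == "react"
try:
    StartExecutionRequest(engine="bogus")
    rejected = False
except ValidationError:
    rejected = True
```

With this in place, both `@model_validator` methods and the shared tuple become unnecessary.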
## Summary

- Adds an `engine` parameter (`"plan"` default, `"react"` supported) to all three task execution API endpoints
- Fixes a `retry_count` → `max_retries` parameter name mismatch in the `/execute` endpoint

## Changes

- `POST /api/v2/tasks/execute` — accepts `engine` in the request body, passes it to `conductor.start_batch()`, and returns it in the response
- `POST /api/v2/tasks/approve` — accepts `engine` in the request body, passes it to `conductor.start_batch()` when `start_execution=true`
- `POST /api/v2/tasks/{id}/start` — accepts `engine` as a query parameter, passes it to `runtime.execute_agent()`

## Acceptance Criteria

- `codeframe/ui/routers/tasks_v2.py` updated with an engine field in the execution request models
- `tasks_v2.py` passes engine through to the runtime/conductor

## Test Plan

- `tests/integration/test_tasks_v2_engine.py`

## Bug Fix

Fixed `retry_count=body.retry_count` → `max_retries=body.retry_count` in the `/execute` endpoint. The old code would have caused a `TypeError` at runtime since `conductor.start_batch()` expects `max_retries`, not `retry_count`.

Closes #354
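The failure mode of the old code is easy to reproduce with a keyword-only stand-in for `start_batch()` (an illustrative signature, not the real conductor API):

```python
def start_batch(*, max_retries: int = 3) -> int:
    """Stand-in with the keyword the real conductor.start_batch() expects."""
    return max_retries

# Old code: passing an unexpected keyword raises TypeError at call time,
# regardless of the value being correct.
try:
    start_batch(retry_count=2)
    failed = False
except TypeError:
    failed = True

# Fixed code: the matching keyword name is accepted.
result = start_batch(max_retries=2)
```

Because the mismatch only surfaces when the call executes, it slips past import-time checks, which is why a passthrough test on the endpoint was worth adding.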
Summary by CodeRabbit
New Features
Bug Fixes
Tests