Install path home autocoder #123
base: master
Conversation
Ignore .code and add CI guard
Stabilize workflows and guard .code in PR checks
Implement Gemini API integration
Remove master from workflow filters
Guard against tracked .env and .code
Automate VPS deploy via deploy.sh
Fix Traefik routing and allow Docker access
Add git clone action for selected projects
Fix import ordering in projects router
Add blank line after module docstring
Git repository
Stabilize development for deploy & monitoring
📝 Walkthrough

This pull request introduces comprehensive DevOps and feature enhancements: Docker containerization with multi-stage builds, GitHub Actions CI/CD pipelines with automated deployment to a VPS, Gemini API support as an alternative to Claude for assistant chat, backend observability infrastructure (logging, metrics via Prometheus, tracing via OpenTelemetry, error monitoring via Sentry), git repository cloning for projects, deployment automation scripts with health checks, and extensive UI improvements including Sentry integration, a new CloneRepoModal component, and widespread code style standardization.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant User
    participant GHA as GitHub Actions
    participant VPS
    participant Docker as Docker Stack
    participant Health
    User->>GHA: Push to main
    GHA->>GHA: Run CI (python, ui, docker-image)
    GHA->>GHA: Build & tag image (ghcr.io)
    GHA->>VPS: Trigger Deploy workflow
    VPS->>VPS: Validate secrets & deploy path
    VPS->>VPS: Clone/pull repo
    VPS->>Docker: Execute docker-compose up
    Docker->>Docker: Start API, UI, Traefik stack
    Docker->>Health: Expose health endpoint
    VPS->>Health: GET /health (retry loop, 5s intervals, max 12)
    Health-->>VPS: 200 OK
    VPS-->>User: Deployment complete
```

```mermaid
sequenceDiagram
    participant App
    participant AssistantChat as AssistantChatSession
    participant Claude as Claude SDK
    participant Gemini as Gemini API
    App->>AssistantChat: new AssistantChatSession()
    AssistantChat->>AssistantChat: Check is_gemini_configured()
    alt Gemini Configured
        AssistantChat->>AssistantChat: Set provider="gemini"
        AssistantChat->>AssistantChat: Skip Claude init
    else Claude Fallback
        AssistantChat->>Claude: Initialize Claude client
        AssistantChat->>AssistantChat: Set provider="claude"
    end
    App->>AssistantChat: send_message(user_msg)
    alt Provider == "gemini"
        AssistantChat->>Gemini: stream_chat(user_msg, system_prompt)
        Gemini-->>AssistantChat: Stream response chunks
        AssistantChat->>AssistantChat: Accumulate & yield text
    else Provider == "claude"
        AssistantChat->>Claude: Query Claude tools
        Claude-->>AssistantChat: Stream response
        AssistantChat->>AssistantChat: Yield text
    end
    AssistantChat-->>App: Message streamed
```

```mermaid
sequenceDiagram
    participant UI as CloneRepoModal
    participant API as Backend API
    participant Git as Git
    UI->>UI: Open modal, collect repoUrl, targetDir
    UI->>API: POST /projects/{name}/clone (ProjectCloneRequest)
    API->>API: Validate project exists
    API->>API: Validate repo URL format
    API->>API: Derive/validate target directory
    API->>Git: git clone <repo_url> <target_dir>
    alt Clone Succeeds
        Git-->>API: Clone complete
        API-->>UI: 200 OK (ProjectCloneResponse: success=true, path)
        UI->>UI: Show success message
    else Clone Fails
        Git-->>API: Error (permission, not a repo, etc.)
        API-->>UI: Error response with detail
        UI->>UI: Show error alert
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
ui/src/components/ui/dialog.tsx (1)
1-156: Formatting standardization is clean. The changes are purely syntactic: adding consistent semicolons to imports, JSX returns, type annotations, and exports. No functional changes. The component structure follows good patterns with proper TypeScript typing, Radix UI accessibility primitives, and appropriate use of the `cn()` utility for class merging.

Update coding guidelines: React version is 19, not 18. The project is configured to use React 19.0.0 (per package.json), but the coding guidelines specify React 18. The dialog component code is compatible with both versions, so there's no issue with this file itself. However, the documented guideline needs to be updated to align with the actual project configuration.
ui/src/components/ui/checkbox.tsx (1)
7-28: Align React version with coding guidelines. The project uses React 19.0.0, but the coding guidelines specify React 18. Either update the guideline to React 19 or downgrade the project dependency to match the established standard.
ui/src/components/CelebrationOverlay.tsx (1)
90-95: Add keyboard activation support for the interactive Card. The Card has `role="button"` and `tabIndex={0}`, making it focusable and semantically a button. However, only `onClick` handles activation. For accessibility compliance, keyboard users expect Enter and Space to activate focused buttons.

♿ Proposed fix to add keyboard activation

```diff
 <Card
   onClick={dismiss}
+  onKeyDown={(e) => {
+    if (e.key === "Enter" || e.key === " ") {
+      e.preventDefault();
+      dismiss();
+    }
+  }}
   className="p-6 bg-green-500 border-green-600 animate-bounce-in pointer-events-auto cursor-pointer hover:scale-105 transition-transform focus:outline-none focus:ring-2 focus:ring-primary"
   role="button"
   tabIndex={0}
 >
```

ui/src/components/ExpandProjectChat.tsx (1)
244-250: Add an accessible label to the dismiss button. Icon-only buttons should expose a label for assistive tech.

✅ Suggested fix

```diff
 <Button
   onClick={() => setError(null)}
   variant="ghost"
   size="icon"
   className="h-6 w-6"
+  aria-label="Dismiss error"
 >
```
🤖 Fix all issues with AI agents
In @.github/workflows/ci.yml:
- Around line 3-8: The GitHub Actions workflow's branch filters under the 'on'
trigger only include "main", so CI won't run for PRs targeting "master"; update
the branch arrays for both push and pull_request in the workflow (the 'on' ->
'push' -> branches and 'on' -> 'pull_request' -> branches entries) to include
"master" (or change them to match the repository's default branch naming) so
pushes and PRs against master will trigger the workflow.
- Around line 68-93: The docker-image job currently runs for pull_request events
which causes failures for forked PRs due to lack of package write permission;
modify the docker-image job (job name: docker-image) to only run when publishing
is allowed by adding an if-condition that gates execution to push events or
same-repo pull requests (e.g. check github.event_name == 'push' OR
github.event.pull_request.head.repo.full_name == github.repository), so image
build-and-push steps (using env IMAGE_NAME and docker/build-push-action@v6) are
skipped for forked PRs.
In @.github/workflows/pr-check.yml:
- Around line 12-14: The concurrency.group expression uses unsupported optional
chaining (?.) which breaks GitHub Actions parsing; update the expression in the
concurrency group to remove the optional chaining and rely on the existing
fallback logic (use the left OR right pattern) so it becomes something like
using github.event.pull_request.head.repo.full_name || github.repository and
github.event.pull_request.number || github.run_number; modify the
concurrency.group value where it's defined to use those non-optional-chaining
symbols instead.
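A sketch of the corrected concurrency stanza, assuming the rest of the workflow stays as-is; GitHub Actions expressions do not support `?.`, so the plain `||` fallback carries the missing-context case:

```yaml
concurrency:
  group: ${{ github.event.pull_request.head.repo.full_name || github.repository }}-${{ github.event.pull_request.number || github.run_number }}
  cancel-in-progress: true
```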
In `@scripts/deploy.sh`:
- Around line 29-55: Make the script non-interactive when AUTOCODER_AUTOMATED or
AUTOCODER_ASSUME_YES is set by skipping the interactive read prompts and using
environment variables or defaults: for the required values prompted via
prompt_required (DOMAIN, DUCKDNS_TOKEN, LETSENCRYPT_EMAIL) and the interactive
reads that populate REPO_URL, DEPLOY_BRANCH, APP_DIR, APP_PORT, detect
AUTOCODER_AUTOMATED/AUTOCODER_ASSUME_YES and if true/1 use existing env
variables or the current default fallbacks for
REPO_URL/DEPLOY_BRANCH/APP_DIR/APP_PORT (and error/exit if required values like
DOMAIN/DUCKDNS_TOKEN/LETSENCRYPT_EMAIL are missing in automated mode), and set
CONFIRM to "y" so the proceed check bypasses the prompt; retain interactive
reads when automation flags are absent.
- Around line 57-72: The ensure_packages() function assumes a Debian/Ubuntu
environment (uses apt-get, dpkg, /etc/apt and /etc/apt/keyrings/docker.gpg); add
an early distribution check at the top of ensure_packages() that verifies
apt-get (or lsb_release/dpkg) exists and exits with a clear error if not, or
alternatively add a prominent comment/header/RUNBOOK note stating the script
requires Debian/Ubuntu; reference the ensure_packages() function and the symbols
apt-get, dpkg, /etc/apt/sources.list.d/docker.list, and
/etc/apt/keyrings/docker.gpg when making the change so callers know why the
script will fail on non-Debian systems.
In `@server/gemini_client.py`:
- Around line 72-80: The streaming loop incorrectly treats delta.content as a
list and iterates over its characters; update the code in the async for
completion loop (the variables chunk, choice, delta) to directly yield the
string when present (i.e., replace the inner for part in delta.content /
getattr(...) logic with a simple if delta and delta.content: yield
delta.content) so the full content string is streamed correctly from
delta.content.
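The Python fix described above boils down to yielding the whole `delta.content` string instead of iterating it character by character. A minimal TypeScript sketch of the same chunk-handling logic (the `Delta`/`Chunk` shapes are hypothetical stand-ins for the SDK's streaming types):

```typescript
interface Delta { content?: string | null }
interface Chunk { choices: { delta?: Delta }[] }

// Return the full content string for a chunk, or null when absent.
// Iterating delta.content would emit one character at a time instead.
function extractChunkText(chunk: Chunk): string | null {
  const delta = chunk.choices[0]?.delta;
  return delta?.content ?? null;
}
```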
In `@ui/package.json`:
- Around line 31-32: Remove the standalone "@sentry/replay" dependency from
package.json because it is incompatible with "@sentry/react@10.36.0"; instead,
configure replay via the built-in integration (call Sentry.replayIntegration()
inside your Sentry.init(...) and set
replaysSessionSampleRate/replaysOnErrorSampleRate) in the code that initializes
Sentry, or if you must keep the separate package, downgrade "@sentry/react" to
the v7 series to match "@sentry/replay@7.116.0".
In `@ui/src/components/ChatMessage.tsx`:
- Around line 144-150: The onClick handler in ChatMessage.tsx calls
window.open(attachment.previewUrl, "_blank") which is vulnerable to reverse
tabnabbing; update the handler associated with the <img> (and any similar
preview openers) to open the URL with noopener and noreferrer protection—either
pass the features string "noopener,noreferrer" as the third argument to
window.open or capture the returned window and set newWindow.opener = null—to
ensure the opened tab cannot access the opener.
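The pattern the prompt asks for can be sketched as a small helper. The `open` parameter is an injectable stand-in for `window.open` (so the logic can be exercised outside a browser); in the component it would simply be `window.open` itself:

```typescript
type OpenFn = (url: string, target: string, features: string) => unknown;

// Open a preview URL without giving the new tab access to window.opener.
// The "noopener,noreferrer" features string blocks reverse tabnabbing.
function openPreviewSafely(url: string, open: OpenFn): void {
  open(url, "_blank", "noopener,noreferrer");
}
```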
In `@ui/src/components/DependencyGraph.tsx`:
- Around line 337-369: The current graphHash (used in useEffect) only includes
node id/status, edges, and agents, so changes to displayed node fields (e.g.,
name, label, category, priority) or edge labels won't trigger the layout update;
update the graphHash calculation to include all displayed fields that appear in
the UI (for example include node.name/node.label/node.category/node.priority and
any edge.label or metadata) by mapping graphData.nodes to include those
properties as well and graphData.edges to include visible edge fields, so when
you compare against prevGraphDataRef.current the effect will run and you will
call getLayoutedElements(...), then setNodes(layoutedNodes) and
setEdges(layoutedEdges) accordingly.
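A hedged sketch of the widened hash the comment describes: every field that the UI actually renders participates in the comparison, so label or priority edits retrigger layout. The `GNode`/`GEdge` field names are illustrative and should be matched to the real graph shapes:

```typescript
interface GNode { id: string; status: string; label?: string; category?: string; priority?: number }
interface GEdge { source: string; target: string; label?: string }

// Serialize all displayed fields so any visible change alters the hash.
function graphHash(nodes: GNode[], edges: GEdge[], agents: string[]): string {
  return JSON.stringify({
    nodes: nodes.map((n) => [n.id, n.status, n.label, n.category, n.priority]),
    edges: edges.map((e) => [e.source, e.target, e.label]),
    agents,
  });
}
```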
In `@ui/src/components/FolderBrowser.tsx`:
- Around line 99-107: newPath can end up with double slashes when
directoryData?.current_path already ends with '/' (e.g., root); update the
construction of newPath so the separator is normalized before calling
api.createDirectory: trim newFolderName, remove any trailing slash from
directoryData?.current_path (but preserve single leading '/' for root if needed)
or conditionally join with '/' so you always produce a single slash between
parts; then call api.createDirectory(newPath), await refetch(), and call
handleNavigate(newPath) as before. Reference: newPath,
directoryData?.current_path, newFolderName.trim(), api.createDirectory, refetch,
handleNavigate.
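The normalization above can be sketched as a pure join helper (the name `joinDirPath` is hypothetical; in FolderBrowser.tsx it would feed `api.createDirectory`): trim the folder name, drop the trailing slash from the current path, and join with exactly one `/`.

```typescript
// Join a directory path and a new folder name with a single slash,
// handling a root path of "/" and a trailing-slash current path.
function joinDirPath(currentPath: string, folderName: string): string {
  const name = folderName.trim();
  const base = currentPath.endsWith("/") ? currentPath.slice(0, -1) : currentPath;
  return `${base}/${name}`;
}
```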
In `@ui/src/components/QuestionOptions.tsx`:
- Around line 41-58: When optionLabel === "Other" the component currently shows
the custom input but does not clear any prior single-select answer, so stale
data may be submitted; update the handler so that inside the optionLabel ===
"Other" branch you call setAnswers to clear the stored answer for this question
key (e.g., setAnswers(prev => ({ ...prev, [key]: "" }) or null) before
returning, while still calling setShowCustomInput(prev => ({ ...prev, [key]:
true })); use the existing symbols optionLabel, setShowCustomInput, setAnswers,
multiSelect and key to locate and modify the code.
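The state transition the prompt describes can be sketched as a pure function over hypothetical state shapes (the real component uses separate `setAnswers`/`setShowCustomInput` setters, but the invariant is the same: choosing "Other" clears the stored answer while revealing the custom input):

```typescript
interface QState { answers: Record<string, string>; showCustomInput: Record<string, boolean> }

function selectOption(state: QState, key: string, optionLabel: string): QState {
  if (optionLabel === "Other") {
    return {
      answers: { ...state.answers, [key]: "" },                // drop stale answer
      showCustomInput: { ...state.showCustomInput, [key]: true },
    };
  }
  return {
    answers: { ...state.answers, [key]: optionLabel },
    showCustomInput: { ...state.showCustomInput, [key]: false },
  };
}
```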
In `@ui/src/components/SetupWizard.tsx`:
- Around line 20-22: The health check uses the wrong backend route so
isApiHealthy never becomes true; update the call site of api.healthCheck() (used
by the SetupWizard component where isApiHealthy and isReady are computed) to hit
the backend's /api/health endpoint instead of /health so the returned status
will be "healthy" and allow isApiHealthy to become true.
In `@ui/src/components/Terminal.tsx`:
- Around line 260-273: The exit handler sets state with setHasExited(true) but
leaves hasExitedRef unchanged until the effect runs, allowing onclose to still
schedule a reconnect; update the ref synchronously inside the "exit" case (e.g.,
set hasExitedRef.current = true immediately alongside setHasExited(true)), and
ensure any other exit-related refs (like exitCodeRef if present) are also
updated synchronously so onclose checks see the terminal has exited and skip
reconnect logic; locate this in the "exit" case handling that writes to
terminalRef and update there.
- Around line 494-506: The cleanup currently just clears timers and calls
wsRef.current.close() but doesn't prevent the onclose handler from scheduling a
reconnection; before calling wsRef.current.close() and disposing terminalRef in
the useEffect cleanup, set the shared reconnect-permission flag (e.g., a
shouldReconnectRef or reuse isInitializedRef) to false to indicate a
manual/unmounted shutdown, then clear reconnectTimeoutRef, call
wsRef.current.close(), dispose terminalRef, and finally set
isInitializedRef.current = false so the onclose logic will not attempt to
reconnect; update any onclose/onerror handlers to check that flag
(shouldReconnectRef.current or isInitializedRef) before scheduling reconnection.
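Both Terminal.tsx prompts hinge on the same shared-flag pattern, which can be sketched in isolation (names like `ReconnectGuard` are illustrative, not the component's actual refs): exit handling and unmount cleanup flip the flag synchronously, and the close handler consults it before scheduling a retry.

```typescript
class ReconnectGuard {
  private shouldReconnect = true;
  reconnectScheduled = false;

  markExited(): void { this.shouldReconnect = false; }    // the "exit" case
  markUnmounted(): void { this.shouldReconnect = false; } // effect cleanup

  // onclose handler: only schedule a reconnect while the flag allows it.
  onClose(): void {
    if (this.shouldReconnect) this.reconnectScheduled = true;
  }
}
```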
In `@ui/src/hooks/useSpecChat.ts`:
- Around line 84-143: The cleanup for the polling interval is being returned
from the setTimeout callback instead of the useEffect, so the interval
(pollInterval) can leak if the component unmounts; fix useSpecChat's useEffect
by moving the interval cleanup into the effect's returned function: store both
startDelay and pollInterval references in the effect scope (declare let
pollInterval: NodeJS.Timeout | null = null), set pollInterval inside the timeout
callback where getSpecStatus is used, and return a single cleanup function from
useEffect that clears the timeout (startDelay) and clears the interval
(pollInterval) if set; ensure isComplete/isCompleteRef checks and
setMessages/setIsComplete logic remain unchanged so the interval is still
cleared when status becomes "complete".
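The corrected timer lifetime can be sketched outside React (the name `createDelayedPoller` is hypothetical): both the start delay and the interval live in one scope, and a single cleanup function, which is what the `useEffect` should return, clears whichever timers exist.

```typescript
function createDelayedPoller(poll: () => void, startDelayMs: number, intervalMs: number): () => void {
  let interval: ReturnType<typeof setInterval> | null = null;
  const startDelay = setTimeout(() => {
    interval = setInterval(poll, intervalMs);
  }, startDelayMs);

  // Single cleanup: clears the pending start and any running interval.
  return () => {
    clearTimeout(startDelay);
    if (interval !== null) clearInterval(interval);
  };
}
```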
🟡 Minor comments (8)
ui/src/components/AgentCard.tsx-179-186 (1)
179-186: Guard clipboard write to avoid unhandled rejection.
`navigator.clipboard.writeText` can fail (non-secure contexts, permission denial). Consider try/catch with a safe fallback state to prevent unhandled promise rejections.

🐛 Suggested change

```diff
 const handleCopy = async () => {
   const logText = logs
     .map((log) => `[${log.timestamp}] ${log.line}`)
     .join("\n");
-  await navigator.clipboard.writeText(logText);
-  setCopied(true);
-  setTimeout(() => setCopied(false), 2000);
+  try {
+    await navigator.clipboard.writeText(logText);
+    setCopied(true);
+    setTimeout(() => setCopied(false), 2000);
+  } catch {
+    setCopied(false);
+  }
 };
```

ui/src/components/ConfirmDialog.tsx-72-72 (1)
72-72: Make loading text dynamic to match the confirm action. The hardcoded `"Deleting..."` text assumes this dialog is only used for deletions, but the component is designed to be reusable for any confirmation. This could display misleading text when confirming non-deletion actions.

🔄 Suggested fix

Option 1 (simplest): Derive loading text from `confirmLabel`:

```diff
- {isLoading ? "Deleting..." : confirmLabel}
+ {isLoading ? `${confirmLabel}ing...` : confirmLabel}
```

Option 2 (more flexible): Add a separate `loadingLabel` prop:

```diff
 interface ConfirmDialogProps {
   isOpen: boolean;
   title: string;
   message: ReactNode;
   confirmLabel?: string;
   cancelLabel?: string;
+  loadingLabel?: string;
   variant?: "danger" | "warning";
   isLoading?: boolean;
   onConfirm: () => void;
   onCancel: () => void;
 }

 export function ConfirmDialog({
   isOpen,
   title,
   message,
   confirmLabel = "Confirm",
   cancelLabel = "Cancel",
+  loadingLabel,
   variant = "danger",
   isLoading = false,
   onConfirm,
   onCancel,
 }: ConfirmDialogProps) {
   return (
     // ...
-    {isLoading ? "Deleting..." : confirmLabel}
+    {isLoading ? (loadingLabel || `${confirmLabel}...`) : confirmLabel}
```

DEVPROCESS.md-59-67 (1)
59-67: Duplicate Phase 8 section with conflicting status. Phase 8 appears twice (lines 59-63 and lines 64-67) with inconsistent checkbox states. For example, "Config matrix for all env vars" is marked as completed on line 61 but uncompleted on line 66.

Proposed fix: Remove duplicate section

```diff
 ## Phase 8 — Documentation & Onboarding
 - [ ] "Run locally in 5 minutes" doc/script
 - [x] Config matrix for all env vars (Claude, Gemini, DuckDNS, Traefik, TLS)
 - [ ] Architecture overview diagram
-
-## Phase 8 — Documentation & Onboarding
-- [ ] "Run locally in 5 minutes" doc/script
-- [ ] Config matrix for all env vars (Claude, Gemini, DuckDNS, Traefik, TLS)
-- [ ] Architecture overview diagram
```

ui/src/hooks/useExpandChat.ts-277-292 (1)
277-292: Connection timeout leaves status in "connecting" state. When the connection attempt times out after 5 seconds, `onError` is called but `connectionStatus` remains `"connecting"` instead of being set to `"error"` or `"disconnected"`. This could leave the UI in an inconsistent state.

Suggested fix

```diff
 } else if (wsRef.current?.readyState === WebSocket.CONNECTING) {
   if (attempts++ < maxAttempts) {
     setTimeout(checkAndSend, 100);
   } else {
     onError?.("Connection timeout");
     setIsLoading(false);
+    setConnectionStatus("error");
   }
 }
```

docker-compose.traefik.yml-4-11 (1)
4-11: Pin the Traefik image version and parameterize the Docker API version. Using `traefik:latest` can lead to unexpected behavior when the image is updated. The hardcoded `DOCKER_API_VERSION=1.53` may not be compatible with all deployment environments. Pin to a specific stable version (currently v3.6) and parameterize the Docker API version for flexibility across environments.

Suggested changes

```diff
 traefik:
-  image: traefik:latest
+  image: traefik:v3.6
   environment:
     # Force a modern Docker API version. Some VPS environments set
     # DOCKER_API_VERSION=1.24 globally, which breaks Traefik's Docker provider
     # when the daemon requires >= 1.44.
     # Use the server's current API version to avoid a too-old client default.
-    - DOCKER_API_VERSION=1.53
+    - DOCKER_API_VERSION=${DOCKER_API_VERSION:-1.53}
```

requirements.txt-19-20 (1)
19-20: Duplicate dependency entry for `python-json-logger`. The package `python-json-logger>=2.0.7` appears twice (lines 19 and 20). Remove the duplicate.

🔧 Proposed fix

```diff
 python-json-logger>=2.0.7
-python-json-logger>=2.0.7
```

RUNBOOK.md-5-5 (1)
5-5: Typo: "Trafik" should be "Traefik".

📝 Proposed fix

```diff
-- `docker compose ... logs -f` — tail API/Trafik if issues.
+- `docker compose ... logs -f` — tail API/Traefik if issues.
```

deploy.sh-39-64 (1)
39-64: Shellcheck SC2163: `export` with variable indirection. The `export "${var_name}"` syntax doesn't actually export the variable whose name is stored in `var_name`. It tries to export a variable literally named by the contents. Consider using `declare -gx` or `export` after the assignment:

🔧 Proposed fix

```diff
 prompt_required() {
   local var_name="$1"
   local prompt_msg="$2"
   local value=""

   # Allow pre-seeding via environment variables in automated runs.
   if [[ -n "${!var_name:-}" ]]; then
-    export "${var_name}"
+    declare -gx "${var_name}=${!var_name}"
     return
   fi

   if [[ "${AUTOMATED_MODE}" -eq 1 ]]; then
     echo "Missing required environment variable: ${var_name}" >&2
     exit 1
   fi

   while true; do
     read -r -p "${prompt_msg}: " value
     if [[ -n "${value}" ]]; then
       printf -v "${var_name}" "%s" "${value}"
-      export "${var_name}"
+      declare -gx "${var_name}=${value}"
       return
     fi
     echo "Value cannot be empty."
   done
 }
```
🧹 Nitpick comments (34)
ui/src/components/TerminalTabs.tsx (1)
159-249: Consider adopting neobrutalism design tokens for consistency. The JSX formatting changes look good. As an optional enhancement, the component could adopt the neobrutalism color tokens (e.g., `--color-neo-pending`, `--color-neo-progress`, `--color-neo-done`) and custom animations (`animate-slide-in`, `animate-pulse-neo`) mentioned in the coding guidelines to maintain visual consistency with other UI components in the codebase.

This is a low-priority suggestion that can be deferred to a future styling pass.
As per coding guidelines: "Use neobrutalism design with Tailwind CSS v4 custom animations (animate-slide-in, animate-pulse-neo, animate-shimmer) and color tokens (--color-neo-pending, --color-neo-progress, --color-neo-done)".
ui/src/components/AgentCard.tsx (2)
93-95: Use neobrutalism animation token instead of default pulse. The UI guidelines specify custom animations; consider using the project's neo animation class to stay consistent.

♻️ Suggested change

```diff
- className={`min-w-[180px] max-w-[220px] py-3 ${isActive ? "animate-pulse" : ""}`}
+ className={`min-w-[180px] max-w-[220px] py-3 ${isActive ? "animate-pulse-neo" : ""}`}
```

As per coding guidelines: "Use neobrutalism design with Tailwind CSS v4 custom animations (animate-slide-in, animate-pulse-neo, animate-shimmer) and color tokens...".
247-251: Prefer stable keys for log entries. Using the array index as a key can cause incorrect re-renders when log entries update. Consider a stable key from log data.

♻️ Suggested change

```diff
- logs.map((log, idx) => (
+ logs.map((log) => (
   <div
-    key={idx}
+    key={`${log.timestamp}-${log.line}`}
     className={`${getLogColor(log.type)} whitespace-pre-wrap break-all`}
   >
```

ui/src/components/ui/select.tsx (1)
78-79: Inconsistent Tailwind v4 CSS variable syntax. Line 65 uses the correct Tailwind v4 parentheses syntax (`max-h-(--radix-...)`), but line 79 uses the legacy bracket syntax with `var()`. Update to the v4 syntax for consistency.

♻️ Suggested fix

```diff
 position === "popper" &&
-  "h-[var(--radix-select-trigger-height)] w-full min-w-[var(--radix-select-trigger-width)] scroll-my-1",
+  "h-(--radix-select-trigger-height) w-full min-w-(--radix-select-trigger-width) scroll-my-1",
```

As per coding guidelines: Use Tailwind CSS v4, which requires CSS variables to use parentheses syntax instead of brackets.
ui/src/components/OrchestratorStatusCard.tsx (1)
78-141: Consider verifying design system alignment (optional). The component uses custom color schemes (violet, purple, blue, green, amber) rather than the neobrutalism color tokens mentioned in the coding guidelines (--color-neo-pending, --color-neo-progress, --color-neo-done). This may be intentional to differentiate orchestrator status from task status, but it's worth confirming whether the design system tokens should be used here for consistency.
As per coding guidelines: "Use neobrutalism design with Tailwind CSS v4 custom animations (animate-slide-in, animate-pulse-neo, animate-shimmer) and color tokens (--color-neo-pending, --color-neo-progress, --color-neo-done)".
ui/src/components/ExpandProjectModal.tsx (1)
1-41: Update coding guidelines to reflect React 19 adoption. The project is using React 19.0.0, but the coding guidelines specify React 18. This mismatch should be resolved at the project level: either upgrade the guidelines to React 19 or downgrade the dependency. This file uses standard React patterns compatible with both versions, so it's not blocked by this discrepancy. However, the guidelines should be updated to prevent future confusion during code review.
Current state: React ^19.0.0 in ui/package.json
Guideline states: "Use React 18, TypeScript, TanStack Query, Tailwind CSS v4, Radix UI, and dagre"

ui/src/components/DebugLogViewer.tsx (2)
71-82: Consider wrapping localStorage access in try-catch.
`localStorage.getItem()` can throw in private browsing mode or when storage quota is exceeded. While unlikely to cause issues in practice, defensive handling would improve robustness.

🔧 Suggested improvement

```diff
 const [panelHeight, setPanelHeight] = useState(() => {
   // Load saved height from localStorage
-  const saved = localStorage.getItem(STORAGE_KEY);
-  return saved
-    ? Math.min(Math.max(parseInt(saved, 10), MIN_HEIGHT), MAX_HEIGHT)
-    : DEFAULT_HEIGHT;
+  try {
+    const saved = localStorage.getItem(STORAGE_KEY);
+    return saved
+      ? Math.min(Math.max(parseInt(saved, 10), MIN_HEIGHT), MAX_HEIGHT)
+      : DEFAULT_HEIGHT;
+  } catch {
+    return DEFAULT_HEIGHT;
+  }
 });
```
175-183: Consider refactoring to avoid eslint-disable. The `fetchTerminals` callback depends on `activeTerminalId`, which it also sets internally. This creates a subtle coupling. Consider using a ref for `activeTerminalId` within the callback, or restructuring to separate the "should set active" logic.

That said, the current implementation works correctly and the eslint-disable is appropriate here.
pyproject.toml (2)
20-35: Prefer stubs over broad `ignore_missing_imports`. This list will mask real API mismatches. If possible, add/enable type stubs (`types-*` or inline stubs) for the key libraries and keep ignores scoped to the few packages that truly lack typing.
37-59: `ignore_errors` for `server.*`/`api.*` effectively disables strict mode and makes the previous override redundant. Consider removing the broad override and keeping only the targeted modules while you incrementally fix typing issues.
♻️ Suggested cleanup
```diff
-[[tool.mypy.overrides]]
-module = [
-    "server.*",
-    "api.*",
-]
-ignore_errors = true
```

.gitignore (1)
22-24: Avoid duplicate .code/ ignore entry.
Line 10 already ignores `.code/`; consider removing this duplicate to reduce noise.

♻️ Suggested cleanup

```diff
-# Local Codex/Claude configuration (do not commit)
-.code/
-
```
97-125: Consider stronger typing foragentName.The
ActivityItem.agentNameis typed asstringbut cast toAgentMascotat usage sites (lines 61, 70). While the fallback color"#6B7280"handles unknown names safely at runtime, consider typingagentNameasAgentMascotin the interface if the data source guarantees valid mascot names, or keep asstringand remove the type assertions.Option: Align interface type with usage
interface ActivityItem { - agentName: string; + agentName: AgentMascot; thought: string; timestamp: string; featureId: number; }This would push validation to the data source rather than relying on runtime fallbacks.
rule.md (1)
1-4: Documentation file looks good.Clear and concise workflow guidelines. Consider using a more standard filename like
CONTRIBUTING.mdorRULES.md(uppercase) for visibility, and adding a Markdown header for consistency with other docs.Suggested formatting
-Rules / pointers +# Rules / Pointers + - Work on branch `development` for roadmap tasks. - Use `DEVELOPMENT.md` for phase definitions and `DEVPROCESS.md` to track progress and TODOs. - Keep PRs scoped to single checklist items when possible..dockerignore (1)
1-27: Minor typo and missing common patterns.Line 3 has
.codewhich appears to be a typo for.vscode. Also consider adding common Python/Node development artifacts to reduce build context size.Suggested additions
.git .gitignore -.code +.vscode +.idea __pycache__/ *.pyc *.pyo *.pyd *.swp *.swo *.tmp .env .env.* env/ venv/ .venv/ ENV/ node_modules/ ui/node_modules/ ui/dist/ npm-debug.log* yarn-debug.log* yarn-error.log* coverage/ dist/ build/ tmp/ *.log +.pytest_cache/ +.mypy_cache/ +*.egg-info/ +.coverage +htmlcov/server/routers/filesystem.py (1)
25-28: Consider a more robust way to determine repository root.Using
parents[2]assumes a fixed directory structure (server/routers/filesystem.py). If this file is moved or the structure changes,REPO_ROOTwill point to the wrong location, potentially causing security issues.Consider using a marker file or environment variable
-REPO_ROOT = Path(__file__).resolve().parents[2] +def _find_repo_root() -> Path: + """Find repository root by looking for a marker file.""" + current = Path(__file__).resolve().parent + for parent in [current] + list(current.parents): + if (parent / "pyproject.toml").exists() or (parent / ".git").exists(): + return parent + # Fallback to parents[2] if no marker found + return Path(__file__).resolve().parents[2] + +REPO_ROOT = _find_repo_root()ui/src/components/ChatMessage.tsx (1)
23-132: Avoid repeated string splitting in render.
Precompute lines once to reduce work and keep line counts consistent.♻️ Suggested refactor
- const { role, content, attachments, timestamp, isStreaming } = message; + const { role, content, attachments, timestamp, isStreaming } = message; + const lines = content ? content.split("\n") : []; ... - {content.split("\n").map((line, i) => { + {lines.map((line, i) => { // Bold text - use module-level regex, reset lastIndex for each line BOLD_REGEX.lastIndex = 0; const parts = []; let lastIndex = 0; let match; ... - {i < content.split("\n").length - 1 && "\n"} + {i < lines.length - 1 && "\n"}ui/src/components/AgentThought.tsx (2)
23-24: Redundant regex check for `[Error]`. Line 23 already matches `[Error]` via the pattern `/^\[(Done|Error)\]/`, making line 24's check `/^\[Error\]/` unreachable for `[Error]`-prefixed lines.

🔧 Proposed fix

```diff
 if (/^\[(Done|Error)\]/.test(trimmed)) return false;
-if (/^\[Error\]/.test(trimmed)) return false;
 if (/^Output:/.test(trimmed)) return false;
```
28-36: Minor: Length boundary check creates a gap. Line 29 filters `length < 10` and line 36 requires `length > 10`, meaning lines with exactly 10 characters are rejected. If this is intentional, consider combining into a single check for clarity: `return trimmed.length >= 11;` or `return trimmed.length > 10;`.

docker-compose.traefik.yml (2)
1-1: Consider removing the deprecated `version` key. The `version` field in Docker Compose files is obsolete as of Compose v2. Modern Docker Compose ignores this field. Consider removing it to avoid confusion.

Suggested change

```diff
-version: "3.9"
-
 services:
```
44-46: Document how to create the external network. The `traefik-proxy` network is declared as external, meaning it must exist before running this compose file. Consider adding a comment with the creation command.

Suggested documentation

```diff
 networks:
   traefik-proxy:
+    # Create this network before running: docker network create traefik-proxy
     external: true
```

ui/src/hooks/useExpandChat.ts (1)
146-263: Consider adding a default case to handle unknown message types. The switch statement handles known message types but silently ignores unknown types. Adding a default case with logging would help diagnose protocol mismatches or future message types.

Suggested addition

```diff
 case "response_done": {
   // Response complete - hide loading indicator and mark message as done
   setIsLoading(false);
   // ... existing code ...
   break;
 }
+
+default: {
+  console.warn("Unknown message type received:", (data as { type: string }).type);
+  break;
+}
 }
```

server/gemini_client.py (1)
37-70: Consider adding timeout and error handling for resilience.

The stream_chat function lacks timeout configuration and doesn't handle API errors (rate limits, network issues, etc.). This could cause the caller to hang indefinitely or crash unexpectedly.

♻️ Suggested improvement
 async def stream_chat(
     user_message: str,
     *,
     system_prompt: Optional[str] = None,
     model: Optional[str] = None,
     extra_messages: Optional[Iterable[dict]] = None,
 ) -> AsyncGenerator[str, None]:
-    client = _build_client()
+    client = _build_client()
     messages = []
     if system_prompt:
         messages.append({"role": "system", "content": system_prompt})
     if extra_messages:
         messages.extend(extra_messages)
     messages.append({"role": "user", "content": user_message})
-    completion = await client.chat.completions.create(
-        model=model or DEFAULT_GEMINI_MODEL,
-        messages=messages,
-        stream=True,
-    )
+    try:
+        completion = await client.chat.completions.create(
+            model=model or DEFAULT_GEMINI_MODEL,
+            messages=messages,
+            stream=True,
+            timeout=60.0,  # Add reasonable timeout
+        )
+    except Exception as e:
+        raise RuntimeError(f"Gemini API request failed: {e}") from e

.github/workflows/pr-check.yml (1)
3-7: Consider adding workflow_dispatch for manual triggering.

Adding workflow_dispatch allows manual re-runs from the GitHub UI, which is useful for debugging CI issues without creating new commits.

♻️ Suggested improvement
 on:
   pull_request:
     branches: [main]
   push:
     branches: [main]
+  workflow_dispatch:

server/main.py (1)
337-340: Duplicate health endpoint at /api/health.

There are now two health endpoints: /health (line 239) and /api/health (line 337). They return slightly different response formats ({"status": "ok"} vs {"status": "healthy"}). Consider consolidating to avoid confusion and reduce maintenance overhead.

test_health.py (1)
7-7: Consider using a pytest fixture for the TestClient.

The module-level TestClient(app) instantiation triggers the app's lifespan context on import, which could cause side effects (e.g., cleanup of orphaned locks, scheduler startup). Using a pytest fixture with appropriate scope provides better test isolation and control.

♻️ Suggested refactor using pytest fixture
-from server.main import app
+import pytest
+from fastapi.testclient import TestClient

-client = TestClient(app)
+@pytest.fixture(scope="module")
+def client():
+    from server.main import app
+    with TestClient(app) as c:
+        yield c

-def test_health_returns_ok():
+def test_health_returns_ok(client):
     response = client.get("/health")
     assert response.status_code == 200
     assert response.json().get("status") == "ok"

-def test_readiness_returns_ready():
+def test_readiness_returns_ready(client):
     response = client.get("/readiness")
     assert response.status_code == 200
     assert response.json().get("status") == "ready"

server/routers/projects.py (1)
407-410: Consider validating the repository URL scheme.

The endpoint accepts any URL that git clone might accept, including potentially dangerous schemes like file:// which could expose local files. Consider restricting to https:// and git:// schemes.

♻️ Proposed URL validation
 repo_url = request.repo_url.strip()
 if not repo_url:
     raise HTTPException(status_code=400, detail="Repository URL is required")
+
+parsed_url = urlparse(repo_url)
+if parsed_url.scheme not in ("https", "git", "ssh", ""):
+    raise HTTPException(
+        status_code=400,
+        detail="Only https://, git://, and ssh:// repository URLs are allowed"
+    )

deploy.sh (3)
165-177: DuckDNS token stored in plaintext cron file.

The token is written directly to /etc/cron.d/duckdns, readable by root. While this is common for DuckDNS setups, consider:

- Storing the token in a separate file with restricted permissions (e.g., /etc/duckdns/token)
- Referencing it in the cron command

This reduces exposure if the cron directory is inadvertently shared or backed up.
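One hedged sketch of that split — the paths, the domain, and the shape of the DuckDNS update URL are illustrative assumptions here, not lines taken from deploy.sh:

```shell
#!/usr/bin/env bash
# Sketch: store the DuckDNS token in its own root-only file and have the
# cron entry read it at run time instead of embedding the literal secret.
# Defaults can be overridden (useful for testing without root).
setup_duckdns_token() {
  local token_dir="${1:-/etc/duckdns}"
  local cron_file="${2:-/etc/cron.d/duckdns}"
  local token="${3:?token required}"

  install -d -m 0700 "${token_dir}"       # directory accessible by root only
  printf '%s\n' "${token}" > "${token_dir}/token"
  chmod 0600 "${token_dir}/token"         # secret file: read/write for root only

  # The cron line references the file, so the secret never sits in cron.d:
  printf '%s\n' \
    "*/5 * * * * root curl -fsS \"https://www.duckdns.org/update?domains=myapp&token=\$(cat ${token_dir}/token)\" >/dev/null" \
    > "${cron_file}"
}
```

Since the cron command substitutes the token at execution time, rotating the secret only requires rewriting the token file, not the cron entry.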
179-192: Git pull uses --ff-only which may fail if local changes exist.

If the repository has local modifications or the branch has diverged, git pull --ff-only will fail. Consider adding a git stash before pull or using git reset --hard origin/${DEPLOY_BRANCH} for a clean deployment state.

🔧 Proposed alternative for guaranteed clean state
 clone_repo() {
   echo
   echo "==> Preparing repository..."
   if [[ -d "${APP_DIR}/.git" ]]; then
     echo "Repo already exists, pulling latest..."
     git -C "${APP_DIR}" fetch --all --prune
     git -C "${APP_DIR}" checkout "${DEPLOY_BRANCH}"
-    git -C "${APP_DIR}" pull --ff-only origin "${DEPLOY_BRANCH}"
+    git -C "${APP_DIR}" reset --hard "origin/${DEPLOY_BRANCH}"
   else
     echo "Cloning repository..."
     mkdir -p "${APP_DIR}"
     git clone --branch "${DEPLOY_BRANCH}" "${REPO_URL}" "${APP_DIR}"
   fi
 }
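The behavioral difference is easy to reproduce with a pair of throwaway repositories (the branch name and commit messages below are illustrative):

```shell
#!/usr/bin/env bash
# Demo: `git pull --ff-only` aborts once local and upstream histories diverge,
# while `git reset --hard origin/<branch>` always converges on the remote state.
set -e
tmp=$(mktemp -d)
cd "${tmp}"
git init -q -b main origin-repo
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git clone -q origin-repo work
cd work
# Diverge: one new local commit, one new upstream commit.
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m local-divergence
git -C ../origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m upstream
git fetch -q origin
if ! git pull --ff-only -q origin main 2>/dev/null; then
  echo "ff-only pull refused (histories diverged)"
fi
git reset -q --hard origin/main
echo "now at: $(git log -1 --format=%s)"
```

The trade-off is that reset --hard discards local commits and modifications, which is usually exactly what a deployment checkout wants.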
326-341: Health checks lack retry logic for TLS certificate issuance delay.

The post-deploy checks run once and report status, but Let's Encrypt certificate issuance can take 30-60 seconds. Consider adding a retry loop with backoff for the HTTPS check:
🔧 Proposed retry logic
 post_checks() {
   echo
   echo "==> Post-deploy checks (non-fatal)..."
   cd "${APP_DIR}"
   docker compose -f docker-compose.yml -f docker-compose.traefik.yml ps || true
-  # These checks may fail briefly while the certificate is being issued.
-  curl -fsS "http://${DOMAIN}/api/health" >/dev/null 2>&1 && \
-    echo "Health check over HTTP: OK" || \
-    echo "Health check over HTTP: not ready yet"
-
-  curl -fsS "https://${DOMAIN}/api/health" >/dev/null 2>&1 && \
-    echo "Health check over HTTPS: OK" || \
-    echo "Health check over HTTPS: not ready yet (TLS may still be issuing)"
+  # Retry health checks with backoff (cert issuance can take time)
+  for i in 1 2 3 4 5; do
+    if curl -fsS --max-time 5 "https://${DOMAIN}/api/health" >/dev/null 2>&1; then
+      echo "Health check over HTTPS: OK"
+      return
+    fi
+    echo "Health check attempt ${i}/5: waiting for TLS..."
+    sleep $((i * 5))
+  done
+  echo "Health check over HTTPS: not ready after retries (check logs)"
 }

server/services/assistant_chat_session.py (1)
428-448: Gemini queries don't include conversation history context.

The _query_gemini method only passes message and system_prompt, but stream_chat in server/gemini_client.py accepts extra_messages for prior conversation context. The formatted history in message_to_send (lines 352-372) includes history as a text block, which is less structured than using extra_messages.

Consider passing history as structured messages for better Gemini context understanding:
🔧 Proposed enhancement
-    async def _query_gemini(self, message: str) -> AsyncGenerator[dict, None]:
+    async def _query_gemini(self, message: str, history: list[dict] | None = None) -> AsyncGenerator[dict, None]:
         """
         Query Gemini and stream plain-text responses (no tool calls).
         """
         full_response = ""
         try:
             async for text in stream_chat(
                 message,
                 system_prompt=self._system_prompt,
                 model=os.getenv("GEMINI_MODEL"),
+                extra_messages=history,
             ):
                 full_response += text
                 yield {"type": "text", "content": text}
send_messageto pass structured history when using Gemini.docker-compose.yml (2)
1-1: The version field is obsolete in Docker Compose V2+.

Modern Docker Compose ignores this field. While it doesn't cause issues, you can remove it for cleaner configuration.
♻️ Suggested change
-version: "3.9"
-
 services:
3-24: Consider adding a healthcheck for better orchestration.

Adding a healthcheck would improve container orchestration, enable proper startup ordering with depends_on conditions, and provide better visibility into container state.

♻️ Suggested healthcheck addition
     command: uvicorn server.main:app --host 0.0.0.0 --port 8888
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost:8888/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 40s
     volumes:

ui/src/components/CloneRepoModal.tsx (1)
132-136: Consider using a distinct variant for the success alert.

The success alert uses the default Alert styling, which may not visually distinguish it from neutral messages. Using a success-specific style would improve UX.
 {successMessage && (
-  <Alert>
+  <Alert variant="default" className="border-green-500 bg-green-50 text-green-900 dark:bg-green-950 dark:text-green-100">
     <AlertDescription>{successMessage}</AlertDescription>
   </Alert>
 )}

ui/src/components/FeatureCard.tsx (1)
50-58: Prefer the neobrutalism pulse animation class.
Swap animate-pulse to the custom animate-pulse-neo to align with the UI guidelines. As per coding guidelines: Use neobrutalism design with Tailwind CSS v4 custom animations (animate-slide-in, animate-pulse-neo, animate-shimmer) and color tokens (--color-neo-pending, --color-neo-progress, --color-neo-done).

♻️ Proposed change
-  ${isInProgress ? "animate-pulse" : ""}
+  ${isInProgress ? "animate-pulse-neo" : ""}
 on:
-  pull_request:
-    branches: [master, main]
   push:
-    branches: [master, main]
+    branches: [main]
+  pull_request:
+    branches: [main]
CI won’t run for PRs targeting master.
This workflow only listens to main, but this PR targets master, so checks won’t execute. Add master to the branch filters (or align with the repo default branch).
✅ Suggested fix
on:
push:
- branches: [main]
+ branches: [main, master]
pull_request:
- branches: [main]
+ branches: [main, master]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]
🤖 Prompt for AI Agents
In @.github/workflows/ci.yml around lines 3 - 8, The GitHub Actions workflow's
branch filters under the 'on' trigger only include "main", so CI won't run for
PRs targeting "master"; update the branch arrays for both push and pull_request
in the workflow (the 'on' -> 'push' -> branches and 'on' -> 'pull_request' ->
branches entries) to include "master" (or change them to match the repository's
default branch naming) so pushes and PRs against master will trigger the
workflow.
docker-image:
  needs: [python, ui]
  runs-on: ubuntu-latest
  permissions:
    contents: read
    packages: write
  env:
    IMAGE_NAME: ghcr.io/${{ github.repository }}
  steps:
    - uses: actions/checkout@v4
    - uses: docker/setup-buildx-action@v3
    - uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.repository_owner }}
        password: ${{ secrets.GITHUB_TOKEN }}
    - name: Build and push image
      uses: docker/build-push-action@v6
      with:
        context: .
        file: Dockerfile
        platforms: linux/amd64
        push: true
        tags: |
          ${{ env.IMAGE_NAME }}:latest
          ${{ env.IMAGE_NAME }}:${{ github.sha }}
Guard image publishing from PR events.
docker-image runs on pull_request too; forked PRs won’t have package write permission, causing failed runs. Gate image builds to push events (or same‑repo PRs).
✅ Suggested fix
docker-image:
needs: [python, ui]
+ if: github.event_name == 'push'
   runs-on: ubuntu-latest

📝 Committable suggestion
docker-image:
  needs: [python, ui]
  if: github.event_name == 'push'
  runs-on: ubuntu-latest
  permissions:
    contents: read
    packages: write
  env:
    IMAGE_NAME: ghcr.io/${{ github.repository }}
  steps:
    - uses: actions/checkout@v4
    - uses: docker/setup-buildx-action@v3
    - uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.repository_owner }}
        password: ${{ secrets.GITHUB_TOKEN }}
    - name: Build and push image
      uses: docker/build-push-action@v6
      with:
        context: .
        file: Dockerfile
        platforms: linux/amd64
        push: true
        tags: |
          ${{ env.IMAGE_NAME }}:latest
          ${{ env.IMAGE_NAME }}:${{ github.sha }}
🤖 Prompt for AI Agents
In @.github/workflows/ci.yml around lines 68 - 93, The docker-image job
currently runs for pull_request events which causes failures for forked PRs due
to lack of package write permission; modify the docker-image job (job name:
docker-image) to only run when publishing is allowed by adding an if-condition
that gates execution to push events or same-repo pull requests (e.g. check
github.event_name == 'push' OR github.event.pull_request.head.repo.full_name ==
github.repository), so image build-and-push steps (using env IMAGE_NAME and
docker/build-push-action@v6) are skipped for forked PRs.
concurrency:
  group: pr-check-${{ github.event.pull_request?.head.repo.full_name || github.repository }}-${{ github.event.pull_request?.number || github.run_number }}
  cancel-in-progress: true
Syntax error: GitHub Actions expressions don't support optional chaining (?.).
The ?. operator is not valid in GitHub Actions expression syntax. This will cause the workflow to fail with a parsing error.
🐛 Proposed fix using proper GitHub Actions expression syntax
concurrency:
- group: pr-check-${{ github.event.pull_request?.head.repo.full_name || github.repository }}-${{ github.event.pull_request?.number || github.run_number }}
+ group: pr-check-${{ github.event.pull_request.head.repo.full_name || github.repository }}-${{ github.event.pull_request.number || github.run_number }}
   cancel-in-progress: true

In GitHub Actions, the || operator already handles null/undefined values gracefully - if the left side is falsy (including undefined), it returns the right side. The optional chaining operator is not needed.
📝 Committable suggestion
concurrency:
  group: pr-check-${{ github.event.pull_request.head.repo.full_name || github.repository }}-${{ github.event.pull_request.number || github.run_number }}
  cancel-in-progress: true
🧰 Tools
🪛 actionlint (1.7.10)
13-13: got unexpected character '?' while lexing expression, expecting 'a'..'z', 'A'..'Z', '_', '0'..'9', ''', '}', '(', ')', '[', ']', '.', '!', '<', '>', '=', '&', '|', '*', ',', ' '
(expression)
🤖 Prompt for AI Agents
In @.github/workflows/pr-check.yml around lines 12 - 14, The concurrency.group
expression uses unsupported optional chaining (?.) which breaks GitHub Actions
parsing; update the expression in the concurrency group to remove the optional
chaining and rely on the existing fallback logic (use the left OR right pattern)
so it becomes something like using github.event.pull_request.head.repo.full_name
|| github.repository and github.event.pull_request.number || github.run_number;
modify the concurrency.group value where it's defined to use those
non-optional-chaining symbols instead.
prompt_required DOMAIN "Enter your DuckDNS domain (e.g., myapp.duckdns.org)"
prompt_required DUCKDNS_TOKEN "Enter your DuckDNS token"
prompt_required LETSENCRYPT_EMAIL "Enter email for Let's Encrypt notifications"

read -r -p "Git repo URL [https://github.com/heidi-dang/autocoder.git]: " REPO_URL
REPO_URL=${REPO_URL:-https://github.com/heidi-dang/autocoder.git}

read -r -p "Git branch to deploy [main]: " DEPLOY_BRANCH
DEPLOY_BRANCH=${DEPLOY_BRANCH:-main}

read -r -p "Install path [/home/autocoder]: " APP_DIR
APP_DIR=${APP_DIR:-/home/autocoder}

read -r -p "App internal port (container) [8888]: " APP_PORT
APP_PORT=${APP_PORT:-8888}

echo
echo "Domain: $DOMAIN"
echo "Repo: $REPO_URL"
echo "Branch: $DEPLOY_BRANCH"
echo "Path: $APP_DIR"
echo
read -r -p "Proceed? [y/N]: " CONFIRM
if [[ "${CONFIRM,,}" != "y" ]]; then
  echo "Aborted."
  exit 1
fi
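prompt_required itself is not defined in this hunk; a plausible minimal implementation (an assumption about the helper, not the script's actual code) simply re-prompts until a non-empty value arrives:

```shell
# Hypothetical prompt_required helper: loops until the user supplies a
# non-empty value, then stores it in the named variable.
prompt_required() {
  local var_name="$1" prompt_text="$2" value=""
  while [[ -z "${value}" ]]; do
    read -r -p "${prompt_text}: " value || return 1   # bail out on EOF
  done
  printf -v "${var_name}" '%s' "${value}"
}
```

The EOF guard matters for exactly the automation concern raised below: without it, a closed stdin would spin the loop forever instead of failing fast.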
Script lacks unattended/automated mode support.
The AI summary mentions support for AUTOCODER_AUTOMATED and AUTOCODER_ASSUME_YES environment variables for automated deployments (used by the GitHub Actions workflow), but the script only uses interactive read prompts. This will cause the CI/CD workflow to hang or fail.
🐛 Proposed fix to support automated mode
+# Support automated/unattended mode via environment variables
+if [[ "${AUTOCODER_AUTOMATED:-}" == "1" ]]; then
+ # In automated mode, require all variables to be set via environment
+ : "${DOMAIN:?DOMAIN is required in automated mode}"
+ : "${DUCKDNS_TOKEN:?DUCKDNS_TOKEN is required in automated mode}"
+ : "${LETSENCRYPT_EMAIL:?LETSENCRYPT_EMAIL is required in automated mode}"
+ REPO_URL=${REPO_URL:-https://github.com/heidi-dang/autocoder.git}
+ DEPLOY_BRANCH=${DEPLOY_BRANCH:-main}
+ APP_DIR=${APP_DIR:-/home/autocoder}
+ APP_PORT=${APP_PORT:-8888}
+ CONFIRM="y"
+else
prompt_required DOMAIN "Enter your DuckDNS domain (e.g., myapp.duckdns.org)"
prompt_required DUCKDNS_TOKEN "Enter your DuckDNS token"
prompt_required LETSENCRYPT_EMAIL "Enter email for Let's Encrypt notifications"
# ... rest of interactive prompts ...
+fi📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Support automated/unattended mode via environment variables
if [[ "${AUTOCODER_AUTOMATED:-}" == "1" ]]; then
  # In automated mode, require all variables to be set via environment
  : "${DOMAIN:?DOMAIN is required in automated mode}"
  : "${DUCKDNS_TOKEN:?DUCKDNS_TOKEN is required in automated mode}"
  : "${LETSENCRYPT_EMAIL:?LETSENCRYPT_EMAIL is required in automated mode}"
  REPO_URL=${REPO_URL:-https://github.com/heidi-dang/autocoder.git}
  DEPLOY_BRANCH=${DEPLOY_BRANCH:-main}
  APP_DIR=${APP_DIR:-/home/autocoder}
  APP_PORT=${APP_PORT:-8888}
  CONFIRM="y"
else
  prompt_required DOMAIN "Enter your DuckDNS domain (e.g., myapp.duckdns.org)"
  prompt_required DUCKDNS_TOKEN "Enter your DuckDNS token"
  prompt_required LETSENCRYPT_EMAIL "Enter email for Let's Encrypt notifications"

  read -r -p "Git repo URL [https://github.com/heidi-dang/autocoder.git]: " REPO_URL
  REPO_URL=${REPO_URL:-https://github.com/heidi-dang/autocoder.git}

  read -r -p "Git branch to deploy [main]: " DEPLOY_BRANCH
  DEPLOY_BRANCH=${DEPLOY_BRANCH:-main}

  read -r -p "Install path [/home/autocoder]: " APP_DIR
  APP_DIR=${APP_DIR:-/home/autocoder}

  read -r -p "App internal port (container) [8888]: " APP_PORT
  APP_PORT=${APP_PORT:-8888}

  echo
  echo "Domain: $DOMAIN"
  echo "Repo: $REPO_URL"
  echo "Branch: $DEPLOY_BRANCH"
  echo "Path: $APP_DIR"
  echo
  read -r -p "Proceed? [y/N]: " CONFIRM
  if [[ "${CONFIRM,,}" != "y" ]]; then
    echo "Aborted."
    exit 1
  fi
fi
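The : "${VAR:?message}" pattern in that suggestion is what makes a fail-fast automated mode work: the parameter expansion itself aborts with the message when the variable is unset or empty. A self-contained demo (the function name and domain value are made up):

```shell
#!/usr/bin/env bash
# Demo of the ${VAR:?message} guard: the expansion aborts the (sub)shell
# with the given message when VAR is unset or empty.
require_domain() (
  : "${DOMAIN:?DOMAIN is required in automated mode}"
  echo "DOMAIN ok: ${DOMAIN}"
)

DOMAIN="myapp.duckdns.org" require_domain    # prints: DOMAIN ok: myapp.duckdns.org
if ! DOMAIN="" require_domain 2>/dev/null; then
  echo "empty DOMAIN rejected"
fi
```

The function body is a subshell (parentheses), so the guard kills only that call rather than the whole script — in the real deploy script the top-level : "${VAR:?...}" lines intentionally kill the whole run.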
🤖 Prompt for AI Agents
In `@scripts/deploy.sh` around lines 29 - 55, Make the script non-interactive when
AUTOCODER_AUTOMATED or AUTOCODER_ASSUME_YES is set by skipping the interactive
read prompts and using environment variables or defaults: for the required
values prompted via prompt_required (DOMAIN, DUCKDNS_TOKEN, LETSENCRYPT_EMAIL)
and the interactive reads that populate REPO_URL, DEPLOY_BRANCH, APP_DIR,
APP_PORT, detect AUTOCODER_AUTOMATED/AUTOCODER_ASSUME_YES and if true/1 use
existing env variables or the current default fallbacks for
REPO_URL/DEPLOY_BRANCH/APP_DIR/APP_PORT (and error/exit if required values like
DOMAIN/DUCKDNS_TOKEN/LETSENCRYPT_EMAIL are missing in automated mode), and set
CONFIRM to "y" so the proceed check bypasses the prompt; retain interactive
reads when automation flags are absent.
ensure_packages() {
  echo "Installing Docker & prerequisites..."
  apt-get update -y
  apt-get install -y ca-certificates curl git gnupg
  install -m 0755 -d /etc/apt/keyrings
  if [[ ! -f /etc/apt/keyrings/docker.gpg ]]; then
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    chmod a+r /etc/apt/keyrings/docker.gpg
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" > /etc/apt/sources.list.d/docker.list
    apt-get update -y
  fi
  apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  systemctl enable --now docker
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if scripts/deploy.sh exists and examine the content
if [ -f scripts/deploy.sh ]; then
echo "=== File exists, showing first 20 lines ==="
head -20 scripts/deploy.sh
echo ""
echo "=== Lines 57-72 (the function in question) ==="
sed -n '57,72p' scripts/deploy.sh
echo ""
echo "=== Searching for any distribution checks ==="
rg -n "ubuntu|debian|apt|uname|os-release|lsb_release" scripts/deploy.sh
else
echo "File scripts/deploy.sh not found"
find . -name "deploy.sh" -type f 2>/dev/null
fiRepository: leonvanzyl/autocoder
Length of output: 2199
🏁 Script executed:
#!/bin/bash
# Check the full script for any distribution checks before line 57
echo "=== Entire scripts/deploy.sh to check for prior distribution checks ==="
cat scripts/deploy.sh
echo ""
echo "=== Check for README or documentation ==="
find . -maxdepth 2 -type f \( -name "README*" -o -name "INSTALL*" -o -name "*.md" \) | head -10Repository: leonvanzyl/autocoder
Length of output: 4594
🏁 Script executed:
#!/bin/bash
# Check README for distribution requirements
echo "=== Checking README.md for distribution requirements ==="
head -50 README.md
echo ""
echo "=== Checking RUNBOOK.md for distribution requirements ==="
head -50 RUNBOOK.mdRepository: leonvanzyl/autocoder
Length of output: 3727
Add Ubuntu/Debian distribution check or document the requirement.
The ensure_packages() function hardcodes apt-get, /etc/apt/sources.list.d/, and dpkg, which will fail on CentOS, RHEL, Fedora, Alpine, or other non-Debian-based distributions. Neither the script nor the RUNBOOK documents this requirement, so a user on an incompatible system will encounter runtime failures.
Add an early check (e.g., if ! command -v apt-get &> /dev/null; then exit 1; fi) or prominently document that Ubuntu/Debian is required.
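A small guard along those lines — the helper name is illustrative — could sit at the top of the script, before ensure_packages() does any work:

```shell
# Fail fast on non-Debian systems: verify a required command exists before
# any apt-get work starts, instead of dying mid-install.
require_cmd() {
  local cmd="$1"
  if ! command -v "${cmd}" >/dev/null 2>&1; then
    echo "This script requires ${cmd} (Debian/Ubuntu only)." >&2
    return 1
  fi
}

# At the top of deploy.sh one would call:
#   require_cmd apt-get || exit 1
```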
🤖 Prompt for AI Agents
In `@scripts/deploy.sh` around lines 57 - 72, The ensure_packages() function
assumes a Debian/Ubuntu environment (uses apt-get, dpkg, /etc/apt and
/etc/apt/keyrings/docker.gpg); add an early distribution check at the top of
ensure_packages() that verifies apt-get (or lsb_release/dpkg) exists and exits
with a clear error if not, or alternatively add a prominent
comment/header/RUNBOOK note stating the script requires Debian/Ubuntu; reference
the ensure_packages() function and the symbols apt-get, dpkg,
/etc/apt/sources.list.d/docker.list, and /etc/apt/keyrings/docker.gpg when
making the change so callers know why the script will fail on non-Debian
systems.
       if (optionLabel === "Other") {
         setShowCustomInput((prev) => ({ ...prev, [key]: true }));
         return;
       }

-      setShowCustomInput((prev) => ({ ...prev, [key]: false }))
+      setShowCustomInput((prev) => ({ ...prev, [key]: false }));

       setAnswers((prev) => {
         if (multiSelect) {
-          const current = (prev[key] as string[]) || []
+          const current = (prev[key] as string[]) || [];
           if (current.includes(optionLabel)) {
-            return { ...prev, [key]: current.filter((o) => o !== optionLabel) }
+            return { ...prev, [key]: current.filter((o) => o !== optionLabel) };
           } else {
-            return { ...prev, [key]: [...current, optionLabel] }
+            return { ...prev, [key]: [...current, optionLabel] };
           }
         } else {
-          return { ...prev, [key]: optionLabel }
+          return { ...prev, [key]: optionLabel };
         }
Clear prior selection when switching to “Other”.
In single-select mode, clicking “Other” leaves the previous answer intact, so submit can send a stale option if the user doesn’t type a custom value. Clear the stored answer when toggling to “Other”.
🐛 Proposed fix
if (optionLabel === "Other") {
setShowCustomInput((prev) => ({ ...prev, [key]: true }));
+ setAnswers((prev) => {
+ const next = { ...prev };
+ delete next[key];
+ return next;
+ });
return;
}🤖 Prompt for AI Agents
In `@ui/src/components/QuestionOptions.tsx` around lines 41 - 58, When optionLabel
=== "Other" the component currently shows the custom input but does not clear
any prior single-select answer, so stale data may be submitted; update the
handler so that inside the optionLabel === "Other" branch you call setAnswers to
clear the stored answer for this question key (e.g., setAnswers(prev => ({
...prev, [key]: "" }) or null) before returning, while still calling
setShowCustomInput(prev => ({ ...prev, [key]: true })); use the existing symbols
optionLabel, setShowCustomInput, setAnswers, multiSelect and key to locate and
modify the code.
const isApiHealthy = health?.status === "healthy" && !healthError;
const isReady =
  isApiHealthy && setupStatus?.claude_cli && setupStatus?.credentials;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find healthCheck usage in UI and health endpoint in backend
echo "=== healthCheck usage in UI ==="
rg -n "healthCheck" ui/src -C3
echo ""
echo "=== /health endpoint in backend ==="
rg -n "def health|@.*health|/health" server -g'*.py' -C3Repository: leonvanzyl/autocoder
Length of output: 2574
🏁 Script executed:
#!/bin/bash
# Check if /api/health is used anywhere in the codebase
echo "=== /api/health usage in UI ==="
rg -n "/api/health" ui/src
echo ""
echo "=== Verify health endpoint usage in main.py ==="
rg -n "health" ui/src/lib/api.ts -A2 -B2Repository: leonvanzyl/autocoder
Length of output: 272
Fix health endpoint to use /api/health instead of /health.
The UI calls /health which returns {"status": "ok"}, but checks for "healthy". This means isApiHealthy will never become true, blocking the setup wizard indefinitely.
The backend has a dedicated /api/health endpoint that returns {"status": "healthy"} (matching the UI's expectation), but it's currently unused. Change api.healthCheck() to call /api/health instead.
Suggested fix
export async function healthCheck(): Promise<{ status: string }> {
- return fetchJSON("/health");
+ return fetchJSON("/api/health");
}🤖 Prompt for AI Agents
In `@ui/src/components/SetupWizard.tsx` around lines 20 - 22, The health check
uses the wrong backend route so isApiHealthy never becomes true; update the call
site of api.healthCheck() (used by the SetupWizard component where isApiHealthy
and isReady are computed) to hit the backend's /api/health endpoint instead of
/health so the returned status will be "healthy" and allow isApiHealthy to
become true.
       case "exit": {
         setHasExited(true);
         setExitCode(message.code);
         if (terminalRef.current) {
-          terminalRef.current.writeln('')
+          terminalRef.current.writeln("");
           terminalRef.current.writeln(
-            `\x1b[33m[Shell exited with code ${message.code}]\x1b[0m`
-          )
+            `\x1b[33m[Shell exited with code ${message.code}]\x1b[0m`,
+          );
           terminalRef.current.writeln(
-            '\x1b[90mPress any key to reconnect...\x1b[0m'
-          )
+            "\x1b[90mPress any key to reconnect...\x1b[0m",
+          );
         }
-        break
+        break;
       }
Avoid auto‑reconnect race after exit.
hasExitedRef is only updated in an effect, so onclose can still schedule a reconnect if the socket closes immediately after the exit message. Set the ref synchronously in the exit handler.
✅ Suggested fix
case "exit": {
+ hasExitedRef.current = true;
setHasExited(true);
setExitCode(message.code);
   if (terminalRef.current) {

📝 Committable suggestion
case "exit": {
  hasExitedRef.current = true;
  setHasExited(true);
  setExitCode(message.code);
  if (terminalRef.current) {
    terminalRef.current.writeln("");
    terminalRef.current.writeln(
      `\x1b[33m[Shell exited with code ${message.code}]\x1b[0m`,
    );
    terminalRef.current.writeln(
      "\x1b[90mPress any key to reconnect...\x1b[0m",
    );
  }
  break;
}
🤖 Prompt for AI Agents
In `@ui/src/components/Terminal.tsx` around lines 260-273: the exit handler sets
state with setHasExited(true) but leaves hasExitedRef unchanged until the effect
runs, allowing onclose to still schedule a reconnect; update the ref
synchronously inside the "exit" case (e.g., set hasExitedRef.current = true
immediately alongside setHasExited(true)), and ensure any other exit-related
refs (like exitCodeRef if present) are also updated synchronously so onclose
checks see the terminal has exited and skip reconnect logic; locate this in the
"exit" case handling that writes to terminalRef and update there.
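The race the bot describes can be reproduced outside React. In this hypothetical harness (no real WebSocket or xterm), the state update is modeled as deferred while the ref is a plain object mutated synchronously, which is why flipping the ref inside the exit handler is enough for an immediately following close event to see it:

```typescript
// Hypothetical model of the fix: hasExitedRef mirrors the component's ref;
// setHasExited(true) is modeled as a deferred (asynchronous) state update.
type Ref<T> = { current: T };

function makeTerminalModel() {
  const hasExitedRef: Ref<boolean> = { current: false };
  let hasExitedState = false; // "React state", updated only on flush
  const deferred: Array<() => void> = [];
  let reconnectScheduled = false;

  function onMessageExit() {
    hasExitedRef.current = true; // synchronous: visible to onClose below
    deferred.push(() => {
      hasExitedState = true; // stands in for setHasExited(true)
    });
  }

  function onClose() {
    // Without the synchronous ref write, this guard would still see false
    // when the socket closes in the same tick as the exit message.
    if (hasExitedRef.current) return;
    reconnectScheduled = true;
  }

  function flush() {
    deferred.forEach((fn) => fn()); // stands in for the re-render
  }

  return {
    onMessageExit,
    onClose,
    flush,
    wasReconnectScheduled: () => reconnectScheduled,
    hasExited: () => hasExitedState,
  };
}

const term = makeTerminalModel();
term.onMessageExit(); // "exit" message arrives
term.onClose();       // socket closes immediately afterwards
term.flush();
console.log(term.wasReconnectScheduled()); // false — no reconnect race
```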
```diff
 useEffect(() => {
   return () => {
     if (reconnectTimeoutRef.current) {
-      clearTimeout(reconnectTimeoutRef.current)
+      clearTimeout(reconnectTimeoutRef.current);
     }
     if (wsRef.current) {
-      wsRef.current.close()
+      wsRef.current.close();
     }
     if (terminalRef.current) {
-      terminalRef.current.dispose()
+      terminalRef.current.dispose();
     }
-    isInitializedRef.current = false
-  }
-}, [])
+    isInitializedRef.current = false;
+  };
+}, []);
```
Prevent reconnect after unmount.
ws.close() triggers onclose, which can schedule reconnection because the flags remain permissive. Mark manual close/inactive before closing.
✅ Suggested fix

```diff
 if (wsRef.current) {
+  isManualCloseRef.current = true;
+  isActiveRef.current = false;
   wsRef.current.close();
 }
```

🤖 Prompt for AI Agents
In `@ui/src/components/Terminal.tsx` around lines 494-506: the cleanup currently
just clears timers and calls wsRef.current.close() but doesn't prevent the
onclose handler from scheduling a reconnection; before calling
wsRef.current.close() and disposing terminalRef in the useEffect cleanup, set
the shared reconnect-permission flag (e.g., a shouldReconnectRef or reuse
isInitializedRef) to false to indicate a manual/unmounted shutdown, then clear
reconnectTimeoutRef, call wsRef.current.close(), dispose terminalRef, and
finally set isInitializedRef.current = false so the onclose logic will not
attempt to reconnect; update any onclose/onerror handlers to check that flag
(shouldReconnectRef.current or isInitializedRef) before scheduling reconnection.
```tsx
useEffect(() => {
  // Don't poll if already complete
  if (isComplete) return;

  // Start polling after initial delay (let WebSocket try first)
  const startDelay = setTimeout(() => {
    const pollInterval = setInterval(async () => {
      // Stop if already complete
      if (isCompleteRef.current) {
        clearInterval(pollInterval);
        return;
      }

      try {
        const status = await getSpecStatus(projectName);

        if (status.exists && status.status === "complete") {
          // Status file indicates completion - set complete state
          setIsComplete(true);
          setIsLoading(false);

          // Mark any streaming message as done
          setMessages((prev) => {
            const lastMessage = prev[prev.length - 1];
            if (
              lastMessage?.role === "assistant" &&
              lastMessage.isStreaming
            ) {
              return [
                ...prev.slice(0, -1),
                { ...lastMessage, isStreaming: false },
              ];
            }
            return prev;
          });

          // Add system message about completion
          setMessages((prev) => [
            ...prev,
            {
              id: generateId(),
              role: "system",
              content: `Spec creation complete! Files written: ${status.files_written.join(", ")}${status.feature_count ? ` (${status.feature_count} features)` : ""}`,
              timestamp: new Date(),
            },
          ]);

          clearInterval(pollInterval);
        }
      } catch {
        // Silently ignore polling errors - WebSocket is primary mechanism
      }
    }, 3000); // Poll every 3 seconds

    // Cleanup interval on unmount
    return () => clearInterval(pollInterval);
  }, 3000); // Start polling after 3 second delay

  return () => clearTimeout(startDelay);
}, [projectName, isComplete]);
```
Polling cleanup function is not properly returned.
The inner return () => clearInterval(pollInterval) on line 139 is returned from the setTimeout callback, not from the useEffect. This cleanup function is ignored. If the component unmounts while the interval is running, pollInterval won't be cleared.
🐛 Proposed fix

```diff
 useEffect(() => {
   // Don't poll if already complete
   if (isComplete) return;
+  let pollInterval: ReturnType<typeof setInterval> | null = null;
+
   // Start polling after initial delay (let WebSocket try first)
   const startDelay = setTimeout(() => {
-    const pollInterval = setInterval(async () => {
+    pollInterval = setInterval(async () => {
       // Stop if already complete
       if (isCompleteRef.current) {
         clearInterval(pollInterval);
         return;
       }
       // ... rest of polling logic
     }, 3000);
-
-    // Cleanup interval on unmount
-    return () => clearInterval(pollInterval);
   }, 3000);

-  return () => clearTimeout(startDelay);
+  return () => {
+    clearTimeout(startDelay);
+    if (pollInterval) {
+      clearInterval(pollInterval);
+    }
+  };
 }, [projectName, isComplete]);
```

🤖 Prompt for AI Agents
In `@ui/src/hooks/useSpecChat.ts` around lines 84-143: the cleanup for the
polling interval is being returned from the setTimeout callback instead of the
useEffect, so the interval (pollInterval) can leak if the component unmounts;
fix useSpecChat's useEffect by moving the interval cleanup into the effect's
returned function: store both startDelay and pollInterval references in the
effect scope (declare let pollInterval: NodeJS.Timeout | null = null), set
pollInterval inside the timeout callback where getSpecStatus is used, and return
a single cleanup function from useEffect that clears the timeout (startDelay)
and clears the interval (pollInterval) if set; ensure isComplete/isCompleteRef
checks and setMessages/setIsComplete logic remain unchanged so the interval is
still cleared when status becomes "complete".
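The shape of that fix can be checked outside React. In this sketch, timer functions are injected (a hypothetical harness, not the real hook) so a deterministic fake clock can show that once the single cleanup returned from the effect runs, no polls leak even though the interval was armed inside the timeout callback:

```typescript
// Sketch of the corrected effect body: the interval handle lives in the
// effect's scope, so one cleanup can clear whichever timer is live.
type Timers = {
  setTimeout(fn: () => void, ms: number): number;
  clearTimeout(id: number): void;
  setInterval(fn: () => void, ms: number): number;
  clearInterval(id: number): void;
};

function pollingEffect(poll: () => void, timers: Timers): () => void {
  let pollInterval: number | null = null;
  const startDelay = timers.setTimeout(() => {
    pollInterval = timers.setInterval(poll, 3000); // assigned, not redeclared
  }, 3000);
  // Returned from the effect itself, so "React" actually runs it on unmount.
  return () => {
    timers.clearTimeout(startDelay);
    if (pollInterval !== null) timers.clearInterval(pollInterval);
  };
}

// Deterministic fake clock to exercise the cleanup.
type Task = { fn: () => void; at: number; every?: number };
const pending = new Map<number, Task>();
let now = 0;
let nextId = 1;
const timers: Timers = {
  setTimeout: (fn, ms) => {
    pending.set(nextId, { fn, at: now + ms });
    return nextId++;
  },
  clearTimeout: (id) => void pending.delete(id),
  setInterval: (fn, ms) => {
    pending.set(nextId, { fn, at: now + ms, every: ms });
    return nextId++;
  },
  clearInterval: (id) => void pending.delete(id),
};
function advance(ms: number) {
  now += ms;
  for (const [id, t] of [...pending]) {
    while (pending.has(id) && t.at <= now) {
      t.fn();
      if (t.every !== undefined) t.at += t.every;
      else pending.delete(id);
    }
  }
}

let polls = 0;
const cleanup = pollingEffect(() => polls++, timers);
advance(3000); // start delay fires, interval armed
advance(3000); // one poll
cleanup();     // unmount: clears the interval set inside the timeout
advance(9000); // nothing more fires
console.log(polls); // 1
```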
…leonvanzyl#123)
- Ran `python -m pytest tests/test_dspy_pipeline_e2e.py -v`: 46/46 passed
- Verified 3 consecutive runs all pass (no flaky tests)
- No test logic warnings (only SQLAlchemy deprecation)
- Tests run without real ANTHROPIC_API_KEY (all DSPy mocked)
- Marked feature leonvanzyl#123 as passing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Please resolve the conflicts.