This document defines development standards, workflows, and tooling conventions.
- Makefile for orchestration, raw commands for debugging
- CI must pass before merge - No exceptions
- Tests are not optional - 70% coverage minimum
- Security is everyone's job - Run security-reviewer agent before major PRs
| Scenario | Command | Why |
|---|---|---|
| Starting development | make dev | Starts all services correctly |
| Running all tests | make test | Consistent with CI |
| Running E2E tests | make test-e2e | Handles server lifecycle |
| Before committing | make lint | Catches issues early |
| Deploying infrastructure | make cdk-deploy-preprod | Correct profile/context |
| Scenario | Command | Why |
|---|---|---|
| Debugging a single test | pytest tests/test_foo.py::test_bar -v | More control |
| Watching frontend tests | npm run test:watch | Interactive mode |
| Running a specific migration | alembic upgrade +1 | Granular control |
| Checking a specific endpoint | curl localhost:8000/api/health | Quick inspection |
| Installing a new package | pip install foo | Then add to requirements.txt |
Use `make` for repeatable workflows. Use raw commands for exploration.
Style:
- Formatter: `black` (line length 88)
- Import sorter: `isort` (black-compatible)
- Type hints: Required for public functions
- Docstrings: Required for modules, classes, public functions

Run locally:

```
make format-backend   # Auto-format
make lint-backend     # Check without fixing
```

Example:
```python
def get_user_by_id(db: Session, user_id: UUID) -> User | None:
    """Fetch a user by their ID.

    Args:
        db: Database session
        user_id: The user's UUID

    Returns:
        User if found, None otherwise
    """
    return db.query(User).filter(User.id == user_id).first()
```

Style:
- Formatter: Prettier (via ESLint)
- Linter: ESLint with TypeScript rules
- Strict mode: Enabled
Run locally:
```
make lint-frontend      # Check
npm run lint -- --fix   # Auto-fix
```

Conventions:
- React components: PascalCase files (`UserProfile.tsx`)
- Utilities: camelCase files (`formatDate.ts`)
- Types: Suffix with type (`UserResponse`, `CreateUserRequest`)
- Hooks: Prefix with `use` (`useAuth`, `useApi`)
- Use Alembic for all schema changes
- Never modify production data in migrations
- Include both `upgrade()` and `downgrade()`
- Test migrations: `alembic upgrade head && alembic downgrade -1 && alembic upgrade head`
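A reversible migration defines both directions so the roundtrip test above can pass. The sketch below is illustrative only: revision identifiers are omitted, and the `users` table and `is_active` column are hypothetical examples, not project schema.

```python
"""Add is_active flag to users (hypothetical example migration)."""
from alembic import op
import sqlalchemy as sa


def upgrade() -> None:
    # Forward change: add the new column with a server default so
    # existing rows satisfy the NOT NULL constraint
    op.add_column(
        "users",
        sa.Column("is_active", sa.Boolean(), nullable=False, server_default=sa.true()),
    )


def downgrade() -> None:
    # Reverse change: must undo upgrade() exactly
    op.drop_column("users", "is_active")
```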
| Component | Minimum | Target |
|---|---|---|
| Backend | 70% | 85% |
| Frontend | 20% | 60% |
| Type | Purpose | When to Write | Command |
|---|---|---|---|
| Unit | Test functions in isolation | Core business logic, utilities | make test-backend |
| Integration | Test components together (API + DB) | API endpoints, service layers | make test-backend |
| E2E | Browser tests simulating users | Critical user flows | make test-e2e |
| Concurrency | Race conditions, thread safety | Shared resources, parallel ops | pytest tests/test_concurrency.py |
| Antagonistic | Behavior when dependencies fail | External APIs, DB, network | See guidance below |
| Fuzz/Property | Random inputs to find edge cases | Input parsing, validation | Use hypothesis library |
| Load/Performance | Response times under stress | Before production release | Use locust or k6 |
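For the fuzz/property row, a property-style test feeds many random inputs through a function and asserts an invariant rather than specific outputs. The sketch below uses only the standard library; `normalize_email` is a hypothetical function under test, and the hypothesis library would generate these cases more systematically.

```python
import random
import string


def normalize_email(raw: str) -> str:
    """Hypothetical function under test: trim whitespace and lowercase."""
    return raw.strip().lower()


def test_normalize_email_is_idempotent():
    """Property: normalizing twice gives the same result as normalizing once."""
    rng = random.Random(42)  # Seeded so failures are reproducible
    alphabet = string.ascii_letters + string.digits + "@. \t"
    for _ in range(200):
        raw = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 30)))
        once = normalize_email(raw)
        assert normalize_email(once) == once


test_normalize_email_is_idempotent()
```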
Always test:
- API endpoints (happy path + error cases)
- Authentication/authorization logic
- Business logic in services
- Database queries with edge cases
Don't test:
- Third-party libraries
- Simple getters/setters
- Framework internals
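As a sketch of "happy path + error cases" applied to business logic, the example below tests a hypothetical service function (`apply_discount` is illustrative, not project code):

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business-logic function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25.0) == 75.0


def test_apply_discount_rejects_invalid_percent():
    # Error case: out-of-range input must raise, not silently clamp
    try:
        apply_discount(100.0, 150.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")


test_apply_discount_happy_path()
test_apply_discount_rejects_invalid_percent()
```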
Test how your system behaves when dependencies fail. Choose the right failure strategy:
Fail-Closed (stop and return error) - Use for security-critical operations:

```python
def test_auth_service_down_rejects_requests():
    """When auth fails, reject all requests (fail-closed)."""
    with mock.patch('app.core.auth.verify_token', side_effect=ConnectionError):
        response = client.get("/api/protected")
        assert response.status_code == 503  # Service unavailable, not 200
```

Fail-Open (continue with degraded functionality) - Use for non-critical features:
```python
def test_analytics_down_continues_request():
    """When analytics fails, request still succeeds (fail-open)."""
    with mock.patch('app.services.analytics.track', side_effect=ConnectionError):
        response = client.post("/api/action")
        assert response.status_code == 200  # Action succeeds despite analytics failure
```

When to use each:
| Scenario | Strategy | Rationale |
|---|---|---|
| Auth/permission check fails | Fail-closed | Never grant access on failure |
| Payment processor fails | Fail-closed | Don't complete transaction |
| Rate limiter fails | Fail-closed | Prevent abuse |
| Analytics/tracking fails | Fail-open | Non-critical, don't block user |
| Recommendation engine fails | Fail-open | Show defaults instead |
| Cache fails | Fail-open | Fall back to database |
| Core feature dependency fails | Fail-closed | Can't provide degraded version |
Hybrid: Circuit Breaker Pattern

```python
# After N failures, stop trying and return cached/default response
# Periodically retry to see if the service recovered
```

Test naming convention:

```python
# Pattern: test_<action>_<condition>_<expected_result>
def test_create_user_with_valid_data_returns_201(): ...
def test_create_user_with_duplicate_email_returns_409(): ...
def test_get_user_when_not_authenticated_returns_401(): ...
```

When writing tests that spawn threads (e.g., stress tests, race condition tests):
- Threads cannot share database sessions - SQLAlchemy sessions are not thread-safe
- Threads cannot see test fixture data - test fixtures use transactions that are isolated from other sessions
- Don't import `SessionLocal` directly in threads - it may connect to the wrong database
Solution: Pass a session factory configured for the test database to threads:
```python
from concurrent.futures import ThreadPoolExecutor

import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

TEST_DATABASE_URL = "postgresql://admin:secret@localhost:5432/test_db"


@pytest.fixture
def test_session_factory():
    """Session factory for threads to use."""
    engine = create_engine(TEST_DATABASE_URL)
    return sessionmaker(bind=engine)


def test_concurrent_operations(test_session_factory):
    def worker():
        session = test_session_factory()
        try:
            ...  # Use session...
        finally:
            session.close()

    with ThreadPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(worker) for _ in range(10)]
        for future in futures:
            future.result()  # Surface any worker exceptions
```

Branch naming:

```
feature/add-user-profile
fix/auth-token-refresh
chore/update-dependencies
```
Commit message format:

```
<type>: <short description>

<optional body>

Co-Authored-By: Claude <noreply@anthropic.com>  # If AI-assisted
```

Types: feat, fix, docs, chore, refactor, test
Before opening a PR:
- `make test` passes locally
- `make lint` passes
- New code has tests
- CLAUDE.md updated if adding new patterns
Before merging:
- CI passes
- At least one approval (or self-merge if solo)
- No unresolved comments
Run the security-reviewer agent:

```
Claude: "Run security review on the auth changes"
```
- Never commit secrets to git
- Use `.env` for local development (gitignored)
- Use AWS Secrets Manager for deployed environments
- Rotate credentials if accidentally exposed
- All API endpoints require authentication except `/api/health`
- Use `Depends(get_current_user)` for protected routes
- Validate token type (`access`, not `id`)
- Check resource ownership before returning data
```
make help                 # Show all available commands

# Full Stack
make install              # Install all dependencies
make dev                  # Start all services (docker-compose up)
make stop                 # Stop all services
make clean                # Remove containers, volumes, caches
make test                 # Run all tests
make lint                 # Run all linters

# Backend
make install-backend      # pip install -r requirements.txt
make test-backend         # pytest tests/ -v
make lint-backend         # black --check && isort --check && mypy
make format-backend       # black && isort (auto-fix)
make migrate              # alembic upgrade head
make migration            # Create new migration (prompts for message)

# Frontend
make install-frontend     # npm install
make test-frontend        # npm run test
make lint-frontend        # npm run lint

# E2E
make test-e2e             # Full E2E with server lifecycle
make test-browser-install # Install Playwright browsers

# Infrastructure
make cdk-diff-preprod     # Preview CDK changes
make cdk-deploy-preprod   # Deploy to preprod
make cdk-diff-prod        # Preview prod changes
make cdk-deploy-prod      # Deploy to production
```

| Event | Workflow | Jobs |
|---|---|---|
| PR to main | ci.yml | backend tests, frontend tests, playwright |
| Push to main | deploy.yml | tests (via ci.yml) → deploy preprod |
| Manual trigger | deploy.yml | tests → deploy preprod OR prod |
1. Check the failed job in GitHub Actions
2. Run the same command locally:
   - Backend: `make test-backend-ci`
   - Frontend: `make test-frontend-ci`
   - E2E: `make test-e2e`
3. Fix and push
After deploy, the pipeline verifies the correct version is running by checking /api/health for the expected git_sha.
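That verification amounts to comparing the deployed `git_sha` against the expected one. A hedged sketch of the comparison logic (the payload shape with a `git_sha` key is an assumption; fetching `/api/health` is left to the caller so the check itself stays testable):

```python
def verify_deploy(health_payload: dict, expected_sha: str) -> bool:
    """Check that /api/health reports the expected git_sha.

    health_payload is the parsed JSON body of GET /api/health;
    the 'git_sha' key is an assumption about the payload shape.
    """
    deployed = health_payload.get("git_sha")
    return deployed is not None and deployed == expected_sha
```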
- Plan: Update CLAUDE.md if adding new patterns
- Implement: Follow code standards above
- Test: Add tests, ensure coverage
- Document: Update README if user-facing
- Review: Run `make lint`, `make test`
- Security: Run security-reviewer for auth/data changes
- Deploy: PR → merge → auto-deploy to preprod
New API endpoint:

```
# 1. Create the route
# backend/app/api/widgets.py

# 2. Create schemas
# backend/app/schemas/widget.py

# 3. Add to main.py
# app.include_router(widgets.router, ...)

# 4. Write tests
# backend/tests/test_api/test_widgets.py

# 5. Run tests
make test-backend
```

New frontend page:

```
# 1. Create the page
# frontend/src/pages/WidgetsPage.tsx

# 2. Add route in App.tsx

# 3. Write tests (optional for pages, required for components)

# 4. Run tests
make test-frontend
```