# CCGithubBot

An intelligent, language-aware GitHub Action that automatically reviews pull requests. It detects code quality issues, security vulnerabilities, and missing tests, and provides actionable feedback through inline comments.
## Features

- **Multi-Language Support:** automatically detects and analyzes Python, JavaScript, TypeScript, Java, Go, and more
- **Security Scanning:** identifies hardcoded secrets, SQL injection risks, and dangerous API usage
- **Test Coverage Enforcement:** ensures new code changes include corresponding tests
- **AI-Powered Analysis:** optional LLM integration for deep code reasoning
- **Repository Learning:** adapts to your codebase's conventions and patterns
- **Smart Comments:** posts inline feedback at exact line numbers with actionable suggestions
- **Zero Configuration:** works out of the box with sensible defaults
## Quick Start

1. Create `.github/workflows/pr-review.yml` in your repository:

```yaml
name: PR Review Bot

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install PR Review Bot
        run: |
          pip install -r scripts/requirements.txt

      - name: Run PR Review
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}  # Optional
        run: python scripts/entrypoint.py
```

2. That's it! The bot will automatically review all new pull requests.
## Configuration

While the bot works without configuration, you can customize its behavior by creating `.pr-review.yml`:
```yaml
# Maximum number of inline comments per PR
max_inline_comments: 50

# Minimum severity to report (info, warning, or error)
severity_threshold: info

# Paths to skip during review
skip_paths:
  - "docs/**"
  - "*.md"
  - "vendor/**"
  - "node_modules/**"

# Custom linter configurations
linters:
  python: "flake8 --max-line-length=100"
  javascript: "eslint --no-fix"
  go: "golangci-lint run"

# LLM provider configuration (optional)
llm_provider:
  name: "openai"  # or "anthropic", "self-hosted"
  model: "gpt-4o-mini"
  api_key_env: "OPENAI_API_KEY"
  # For self-hosted:
  # endpoint: "https://your-llm-api.com/v1/chat/completions"

# Patterns to ignore warnings on specific lines
ignore_patterns:
  - "# pr-review-ignore"
  - "// pr-review-ignore"
```

## How It Works

1. **Diff Analysis:** fetches PR changes using GitHub's GraphQL API
2. **Language Detection:** identifies programming languages and file types
3. **Pattern Learning:** analyzes your repository to understand coding conventions
4. **Multi-Source Analysis:**
   - Runs language-specific linters
   - Performs security heuristics scanning
   - Checks for missing test coverage
   - (Optional) LLM analysis for deeper insights
5. **Smart Filtering:** deduplicates findings and respects ignore directives
6. **Comment Posting:** places inline comments at exact diff positions
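The orchestration and filtering steps above can be sketched roughly as follows; `run_review`, `dedupe`, and the finding fields are illustrative names, not the actual `scripts/` API:

```python
def dedupe(findings):
    """Smart Filtering: keep one finding per (file, line, rule)."""
    seen, unique = set(), []
    for f in findings:
        key = (f["file"], f["line"], f["rule"])
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique

def run_review(diff_files, analyzers):
    """Run every analyzer over every changed file, then deduplicate.

    `analyzers` stands in for the linter runner, security scan, and
    test-coverage check; each takes a path and returns findings.
    """
    findings = []
    for path in diff_files:
        for analyze in analyzers:
            findings.extend(analyze(path))
    return dedupe(findings)
```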
## Supported Linters

- **Python:** `pyflakes`, `flake8`, `mypy`, `black --check`
- **JavaScript/TypeScript:** `eslint`, `tsc --noEmit`
- **Go:** `go vet`, `golangci-lint`
- **Java:** `javac -Xlint`
- **Ruby:** `rubocop`
- **Rust:** `cargo clippy`
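A linter entry from `.pr-review.yml` could be executed along these lines. This is a minimal sketch: `run_linter` is a hypothetical helper, and it assumes the configured tool prints one finding per line on stdout.

```python
import shlex
import subprocess

def run_linter(command, path):
    """Run a configured linter command on one file and return its output lines.

    `command` comes from the linters section of .pr-review.yml, e.g.
    "flake8 --max-line-length=100". Exit codes are ignored here because most
    linters exit nonzero whenever they report findings.
    """
    result = subprocess.run(
        shlex.split(command) + [path],
        capture_output=True,
        text=True,
        timeout=120,  # illustrative guard against hanging tools
    )
    return [line for line in result.stdout.splitlines() if line.strip()]
```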
## Security Checks

- Hardcoded credentials and API keys
- SQL injection vulnerabilities
- Command injection risks
- Unsafe deserialization
- Path traversal vulnerabilities
- Weak cryptography usage
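The heuristics scan might resemble this simplified regex-based pass; the patterns here are illustrative, not the bot's actual rule set:

```python
import re

# Illustrative heuristic rules: (pattern, finding label).
SECURITY_PATTERNS = [
    (re.compile(r"""(?i)(api[_-]?key|secret|password|token)\s*=\s*['"][^'"]+['"]"""),
     "Hardcoded credential"),
    (re.compile(r"(?i)execute\([^)]*%s"), "Possible SQL injection"),
    (re.compile(r"\beval\("), "Dangerous eval() usage"),
]

def scan_line(line):
    """Return the label of every heuristic that matches this line."""
    return [label for pattern, label in SECURITY_PATTERNS if pattern.search(line)]
```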
## Test Coverage Enforcement

- Detects source files without corresponding test changes
- Higher severity for bug fixes missing tests
- Respects repository test patterns
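Mapping a changed source file to its expected test files can be sketched like this; the naming conventions below are assumptions, since the real bot learns them from the repository:

```python
from pathlib import Path

def expected_test_paths(source_path):
    """Candidate test file names for a changed source file (illustrative)."""
    p = Path(source_path)
    return {
        str(p.with_name(f"test_{p.name}")),          # src/test_auth.py
        f"tests/test_{p.name}",                      # tests/test_auth.py
        str(p.with_name(f"{p.stem}_test{p.suffix}")),  # src/auth_test.py
    }

def missing_tests(changed_files):
    """Source files in the diff with no matching test change."""
    changed = set(changed_files)
    return sorted(
        f for f in changed
        if not Path(f).name.startswith("test_")
        and not (expected_test_paths(f) & changed)
    )
```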
## AI-Powered Analysis (Optional)

When an LLM provider is configured, the bot additionally reports:

- Code quality and maintainability issues
- Performance concerns
- Architectural violations
- Best practice recommendations
## Suppressing Warnings

To suppress warnings on specific lines, add a comment:

```python
password = "temporary123"  # pr-review-ignore
```

```javascript
eval(userInput); // pr-review-ignore
```

## Review Summary

The bot posts a summary comment with:
- Total issues by severity
- Security issue count
- Files with most issues
- Languages analyzed
- Analysis runtime
## Example Findings

```python
# Bad: Hardcoded secret
api_key = "sk-1234567890abcdef"  # 🚨 Hardcoded API key detected

# Good: Environment variable
api_key = os.getenv("API_KEY")
```

```text
🚨 No test changes found for src/auth.py (45 lines added)
💡 Suggestion: Add unit tests to verify the new functionality
```

```javascript
// ⚠️ Unused variable 'data'
// 💡 Suggestion: Remove unused variable or use it
const data = fetchData();
```

## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `GITHUB_TOKEN` | Yes | Automatically provided by GitHub Actions |
| `GITHUB_REPOSITORY` | Yes | Automatically provided (format: `owner/repo`) |
| `OPENAI_API_KEY` | No | For AI-powered code analysis |
| `LOG_LEVEL` | No | Set to `DEBUG` for verbose logging |
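Reading these variables with validation might look like the following sketch; `load_env` is a hypothetical helper, not part of the actual `scripts/` modules:

```python
import os

def load_env(environ=os.environ):
    """Validate and collect the bot's environment configuration.

    Variable names come from the table above; the helper and its
    return shape are illustrative.
    """
    required = ["GITHUB_TOKEN", "GITHUB_REPOSITORY"]
    missing = [name for name in required if not environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return {
        "token": environ["GITHUB_TOKEN"],
        "repository": environ["GITHUB_REPOSITORY"],
        "openai_api_key": environ.get("OPENAI_API_KEY"),  # optional
        "log_level": environ.get("LOG_LEVEL", "INFO"),
    }
```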
## Project Structure

```text
.github/workflows/
  pr-review.yml        # GitHub Action workflow
scripts/
  entrypoint.py        # Main orchestrator
  diff_fetcher.py      # GitHub API integration
  fingerprint.py       # Repository pattern learning
  linter_runner.py     # Static analysis runner
  heuristics_scan.py   # Security scanning
  llm_reasoner.py      # AI integration
  planner.py           # Finding aggregation
  commentator.py       # GitHub comment posting
  utils.py             # Shared utilities
tests/
  test_*.py            # Comprehensive test suite
```
## Development

Run the test suite:

```shell
cd scripts
python -m pytest tests/ -v
```

Run the bot locally against a PR:

```shell
export GITHUB_TOKEN="your-token"
export GITHUB_REPOSITORY="owner/repo"
export GITHUB_EVENT_PR_NUMBER="123"
python scripts/entrypoint.py --dry-run
```

## Troubleshooting

**The bot isn't commenting:**

- Check GitHub Actions logs for errors
- Ensure the workflow has `pull-requests: write` permission
- Verify changes are in files not matching `skip_paths`
**Rate limits:**

- The bot implements exponential backoff and respects GitHub rate limits
- For large PRs, consider increasing `max_inline_comments`
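Exponential backoff of the kind mentioned above can be sketched as follows; the retry count, delays, and jitter are assumptions, not the bot's actual values:

```python
import random
import time

def with_backoff(call, retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call`, doubling the delay after each failure.

    A small random jitter is added so concurrent jobs don't retry in
    lock-step; the last failure is re-raised.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            sleep(delay)
```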
**Unsupported language:**

- Add a custom linter configuration in `.pr-review.yml`
- File an issue to request built-in support
## Security & Privacy

- Never commits or logs sensitive information
- Redacts secrets before sending code to LLM providers
- All findings are advisory; developers maintain full control
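Secret redaction before text leaves the process could look roughly like this; the pattern set is illustrative and likely narrower than the real one:

```python
import re

# Mask the value of anything that looks like a credential assignment
# before the text is sent to an external LLM provider.
REDACT = re.compile(
    r"""(?i)((?:api[_-]?key|token|secret|password)\s*[=:]\s*)\S+"""
)

def redact(text):
    """Replace probable credential values with a placeholder."""
    return REDACT.sub(r"\1[REDACTED]", text)
```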
## Contributing

Contributions are welcome! Please read our contributing guidelines and submit pull requests to the main repository.
## License

This project is licensed under the MIT License; see the LICENSE file for details.