diff --git a/docs/.nojekyll b/docs/.nojekyll deleted file mode 100644 index e69de29..0000000 diff --git a/docs/_config.yml b/docs/_config.yml deleted file mode 100644 index b415412..0000000 --- a/docs/_config.yml +++ /dev/null @@ -1,42 +0,0 @@ -# GitHub Pages Configuration for Refactron - -# Site settings -title: Refactron -description: The Intelligent Code Refactoring Transformer for Python -author: Om Sherikar -email: omsherikar@example.com - -# Build settings -theme: null -markdown: kramdown - -# Exclude from processing -exclude: - - README.md - - Gemfile - - Gemfile.lock - - node_modules - - vendor - -# Include files -include: - - _headers - -# Base URL (leave empty for GitHub Pages) -baseurl: "" -url: "https://refactron-ai.github.io" - -# GitHub metadata -github: - repository_url: "https://github.com/Refactron-ai/Refactron_lib" - repository_name: "Refactron_lib" - -# Social -social: - - name: GitHub - url: "https://github.com/Refactron-ai/Refactron_lib" - - name: PyPI - url: "https://pypi.org/project/refactron/" - -# Analytics (optional - add your tracking ID) -# google_analytics: UA-XXXXXXXXX-X diff --git a/docs/advanced/ci-cd.mdx b/docs/advanced/ci-cd.mdx new file mode 100644 index 0000000..ccb4f46 --- /dev/null +++ b/docs/advanced/ci-cd.mdx @@ -0,0 +1,259 @@ +--- +title: CI/CD Integration +description: 'Integrate Refactron into your CI/CD pipeline' +--- + +## Overview + +Refactron can be integrated into your CI/CD pipeline to automatically analyze code quality on every commit, pull request, or deployment. 
+
+## Generate CI/CD Templates
+
+Refactron can generate ready-to-use CI/CD configuration files:
+
+```bash
+# GitHub Actions
+refactron ci github
+
+# GitLab CI
+refactron ci gitlab
+
+# Pre-commit hooks
+refactron ci pre-commit
+
+# Generate all
+refactron ci all
+```
+
+## GitHub Actions
+
+### Basic Workflow
+
+```yaml .github/workflows/refactron.yml
+name: Refactron Analysis
+
+on: [push, pull_request]
+
+jobs:
+  analyze:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Set up Python
+        uses: actions/setup-python@v2
+        with:
+          python-version: '3.9'
+
+      - name: Install Refactron
+        run: pip install refactron
+
+      - name: Analyze Code
+        run: refactron analyze . --log-format json
+```
+
+### Fail on Critical Issues
+
+```yaml
+- name: Analyze Code
+  run: |
+    exit_code=0
+    refactron analyze . --log-format json || exit_code=$?
+    if [ $exit_code -eq 1 ]; then
+      echo "Critical issues found!"
+      exit 1
+    fi
+```
+
+## GitLab CI
+
+### Basic Configuration
+
+```yaml .gitlab-ci.yml
+refactron:
+  image: python:3.9
+  before_script:
+    - pip install refactron
+  script:
+    - refactron analyze . --log-format json
+  only:
+    - merge_requests
+    - main
+```
+
+### With Artifacts
+
+```yaml
+refactron:
+  image: python:3.9
+  before_script:
+    - pip install refactron
+  script:
+    - refactron report . --format html -o report.html
+  artifacts:
+    paths:
+      - report.html
+    expire_in: 1 week
+```
+
+## Pre-commit Hooks
+
+### Installation
+
+```bash
+# Generate pre-commit config
+refactron ci pre-commit
+
+# Install pre-commit
+pip install pre-commit
+pre-commit install
+```
+
+### Configuration
+
+```yaml .pre-commit-config.yaml
+repos:
+  - repo: local
+    hooks:
+      - id: refactron
+        name: Refactron Analysis
+        entry: refactron analyze
+        language: system
+        pass_filenames: false
+        always_run: true
+```
+
+## Best Practices
+
+
+
+  Enable JSON logging for easier parsing in CI/CD:
+  ```bash
+  refactron analyze . --log-format json
+  ```
+
+
+
+  Fail builds based on severity:
+  ```yaml
+  # .refactron.yaml
+  fail_on_critical: true
+  fail_on_errors: false
+  max_critical_issues: 0
+  max_error_issues: 10
+  ```
+
+
+
+  Speed up CI runs by caching pip packages:
+  ```yaml
+  # GitHub Actions
+  - uses: actions/cache@v2
+    with:
+      path: ~/.cache/pip
+      key: ${{ runner.os }}-pip-refactron
+  ```
+
+
+
+  Create HTML reports and save as artifacts:
+  ```bash
+  refactron report . --format html -o report.html
+  ```
+
+
+
+## Environment-Specific Configuration
+
+### Development
+
+```yaml .refactron.yaml
+log_level: DEBUG
+fail_on_critical: false
+fail_on_errors: false
+```
+
+### CI/CD
+
+```yaml .refactron.yaml
+log_level: INFO
+log_format: json
+fail_on_critical: true
+fail_on_errors: false
+max_critical_issues: 0
+```
+
+### Production
+
+```yaml .refactron.yaml
+log_level: WARNING
+log_format: json
+fail_on_critical: true
+fail_on_errors: true
+max_critical_issues: 0
+max_error_issues: 5
+```
+
+## Advanced Patterns
+
+### Incremental Analysis
+
+Only analyze changed files in CI:
+
+```yaml
+- name: Get Changed Files
+  id: changed
+  run: |
+    if [ "${{ github.event_name }}" == "pull_request" ]; then
+      FILES=$(git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.sha }} | grep '\.py$' || true)
+    else
+      FILES=$(git diff --name-only HEAD~1..HEAD | grep '\.py$' || true)
+    fi
+    echo "files=$FILES" >> $GITHUB_OUTPUT
+
+- name: Analyze Changed Files
+  if: steps.changed.outputs.files != ''
+  run: |
+    for file in ${{ steps.changed.outputs.files }}; do
+      refactron analyze "$file"
+    done
+```
+
+### Parallel Analysis
+
+Run analysis in parallel for faster CI:
+
+```yaml .refactron.yaml
+enable_parallel_processing: true
+max_parallel_workers: 4
+```
+
+## Exit Codes
+
+Refactron uses standard exit codes:
+
+| Code | Meaning |
+|------|---------|
+| 0 | Success, no issues |
+| 1 | Issues found |
+| 2 | Error during execution |
+
+Use these to control CI behavior:
+
+```bash
+exit_code=0
+refactron analyze . || exit_code=$?
+if [ $exit_code -gt 1 ]; then
+  echo "Refactron error!"
+  exit $exit_code
+fi
+```
+
+## Next Steps
+
+
+
+  Configure Refactron for CI/CD
+
+
+  Monitor Refactron in production
+
+
diff --git a/docs/advanced/monitoring.mdx b/docs/advanced/monitoring.mdx
new file mode 100644
index 0000000..9a0441d
--- /dev/null
+++ b/docs/advanced/monitoring.mdx
@@ -0,0 +1,208 @@
+---
+title: Monitoring
+description: 'Production monitoring and telemetry'
+---
+
+## Overview
+
+Refactron includes comprehensive logging and monitoring capabilities for production environments:
+
+- **Structured Logging** - JSON-formatted logs for CI/CD
+- **Metrics Collection** - Track analysis time and success rates
+- **Prometheus Integration** - Expose metrics via HTTP endpoint
+- **Opt-in Telemetry** - Anonymous usage analytics
+
+## Structured Logging
+
+### Configuration
+
+```yaml .refactron.yaml
+log_level: INFO  # DEBUG, INFO, WARNING, ERROR, CRITICAL
+log_format: json  # json or text
+enable_console_logging: true
+enable_file_logging: true
+```
+
+### JSON Log Output
+
+```json
+{
+  "timestamp": "2026-02-15T10:30:42Z",
+  "level": "INFO",
+  "logger": "refactron.core.refactron",
+  "message": "Analysis completed successfully",
+  "duration_ms": 123.45
+}
+```
+
+### CLI Usage
+
+```bash
+# JSON logging for CI/CD
+refactron analyze mycode.py --log-format json
+
+# Set log level
+refactron analyze mycode.py --log-level DEBUG
+```
+
+## Metrics Collection
+
+Track detailed metrics about Refactron operations.
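In automation, a metrics snapshot is easiest to consume as JSON. Below is a minimal, self-contained sketch of gating a CI job on such a snapshot — the payload and its field names are illustrative assumptions, not the exact schema Refactron emits:

```python
import json

# Illustrative metrics payload (field names are assumptions,
# not the exact schema Refactron emits).
payload = '{"files_analyzed": 45, "issues_found": 123, "total_time_ms": 5234, "success_rate": 95.6}'
metrics = json.loads(payload)

# Fail the job if the analysis success rate drops below a threshold.
if metrics["success_rate"] < 90.0:
    raise SystemExit(f"Success rate too low: {metrics['success_rate']}%")

print(f"{metrics['files_analyzed']} files, {metrics['issues_found']} issues")
```

The same pattern works for any threshold you care about (total time, issue count), and the thresholds themselves are project-specific choices.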
+ +### View Metrics + +```bash +# Text format +refactron metrics + +# JSON format +refactron metrics --format json +``` + +### Example Output + +``` +📈 Refactron Metrics + +Analysis Metrics: + Files analyzed: 45 + Issues found: 123 + Total time: 5234ms + Success rate: 95.6% + +Analyzer Hit Counts: + complexity: 23 + security: 15 + type_hints: 45 +``` + +### Python API + +```python +stats = refactron.get_performance_stats() +print(f"Files analyzed: {stats['files_analyzed']}") +print(f"Issues found: {stats['issues_found']}") +``` + +## Prometheus Integration + +Expose metrics via HTTP endpoint for Prometheus scraping. + +### Start Metrics Server + +```bash +# Default port 9090 +refactron serve-metrics + +# Custom port +refactron serve-metrics --port 8080 +``` + +### Available Metrics + +**Counters:** +- `refactron_files_analyzed_total` +- `refactron_issues_found_total` +- `refactron_analyzer_hits_total{analyzer="name"}` + +**Gauges:** +- `refactron_analysis_duration_ms` +- `refactron_analysis_success_rate` + +### Prometheus Configuration + +Add to your `prometheus.yml`: + +```yaml +scrape_configs: + - job_name: 'refactron' + static_configs: + - targets: ['localhost:9090'] +``` + +## Telemetry + + + Telemetry is **opt-in only** and disabled by default. It collects anonymous usage statistics to help improve Refactron. 
+ + +### What is Collected + +**Collected:** +- Number of files analyzed +- Analysis execution time +- Python version and OS platform + +**NOT Collected:** +- Your code or file names +- Personal information +- Error messages or stack traces + +### Manage Telemetry + +```bash +# Check status +refactron telemetry --status + +# Enable telemetry +refactron telemetry --enable + +# Disable telemetry +refactron telemetry --disable +``` + +## Environment-Specific Configurations + +### Development + +```yaml +log_level: DEBUG +log_format: text +enable_metrics: true +enable_prometheus: false +``` + +### CI/CD + +```yaml +log_level: INFO +log_format: json +enable_metrics: true +enable_prometheus: false +``` + +### Production + +```yaml +log_level: WARNING +log_format: json +enable_metrics: true +enable_prometheus: true +prometheus_port: 9090 +``` + +## Best Practices + + + + JSON format integrates easily with log aggregation systems + + + + Track performance and identify bottlenecks + + + + Visualize Refactron metrics over time + + + + Respect user privacy with opt-in telemetry + + + +## Next Steps + + + Learn how to integrate Refactron into your CI/CD pipeline + diff --git a/docs/advanced/performance.mdx b/docs/advanced/performance.mdx new file mode 100644 index 0000000..9734477 --- /dev/null +++ b/docs/advanced/performance.mdx @@ -0,0 +1,178 @@ +--- +title: Performance Optimization +description: 'Optimize Refactron for large codebases' +--- + +## Overview + +Refactron includes several performance optimization features designed to handle large codebases efficiently: + +- **AST Caching** - Avoid re-parsing unchanged files +- **Incremental Analysis** - Only analyze changed files +- **Parallel Processing** - Analyze multiple files concurrently + +## Quick Start + +Enable all optimizations in `.refactron.yaml`: + +```yaml .refactron.yaml +# Performance optimizations +enable_ast_cache: true +max_ast_cache_size_mb: 100 + +enable_incremental_analysis: true + 
+enable_parallel_processing: true +max_parallel_workers: 4 +``` + +## AST Caching + +Cache parsed Abstract Syntax Trees to avoid re-parsing. + + + + - 5-10x faster on repeated analysis + - Reduces CPU usage + - Especially effective for large files + + + + ```python + from refactron import Refactron + from refactron.core.config import RefactronConfig + + config = RefactronConfig( + enable_ast_cache=True, + max_ast_cache_size_mb=100 + ) + refactron = Refactron(config) + ``` + + + + ```python + stats = refactron.get_performance_stats() + print(f"Hit rate: {stats['ast_cache']['hit_rate']}%") + print(f"Cache size: {stats['ast_cache']['cache_size_mb']} MB") + ``` + + + +## Incremental Analysis + +Only analyze files that changed since the last run. + +**Benefits:** +- Up to 90% reduction in analysis time +- Ideal for CI/CD pipelines +- Perfect for iterative development + +**Example:** +```python +from refactron import Refactron + +refactron = Refactron() + +# First run - analyzes all files +result1 = refactron.analyze("project/") +print(f"Analyzed {result1.summary['files_analyzed']} files") + +# Second run (no changes) - skips unchanged files +result2 = refactron.analyze("project/") +print(f"Analyzed {result2.summary['files_analyzed']} files") # Much less! +``` + +## Parallel Processing + +Analyze multiple files concurrently using multiprocessing. 
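The underlying pattern is a worker pool fanning per-file work out to several workers. A self-contained sketch of that idea using only the standard library — this is an illustration of the pattern, not Refactron's actual implementation, and it uses a thread pool so the snippet runs without a `__main__` guard (Refactron itself uses multiprocessing for the CPU-bound parsing):

```python
import ast
from concurrent.futures import ThreadPoolExecutor

def count_functions(source: str) -> int:
    """Parse one module and count its function definitions."""
    tree = ast.parse(source)
    return sum(isinstance(node, ast.FunctionDef) for node in ast.walk(tree))

# Stand-ins for file contents; a real run would read .py files from disk.
sources = [
    "def a():\n    pass\n",
    "def b():\n    pass\n\ndef c():\n    pass\n",
]

# Fan the per-file work out to a pool of workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(count_functions, sources))

print(counts)  # [1, 2]
```

For CPU-bound work, swapping in `ProcessPoolExecutor` (behind an `if __name__ == "__main__":` guard) sidesteps the GIL — which is why a worker count tuned to the machine's core count, like `max_parallel_workers` above, matters.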
+ +**Configuration:** +```yaml .refactron.yaml +enable_parallel_processing: true +max_parallel_workers: 4 # Number of parallel workers +``` + +**When to Use:** +- ✅ Large codebases (1000+ files) +- ✅ Multi-core systems +- ❌ Small codebases (<10 files) + +## Best Practices by Project Size + +### Small Projects (<1000 files) + +```yaml +enable_ast_cache: true +enable_incremental_analysis: true +enable_parallel_processing: false # Overhead not worth it +``` + +### Medium Projects (1000-10000 files) + +```yaml +enable_ast_cache: true +enable_incremental_analysis: true +enable_parallel_processing: true +max_parallel_workers: 4 +``` + +### Large Projects (10000+ files) + +```yaml +enable_ast_cache: true +max_ast_cache_size_mb: 200 # Larger cache +enable_incremental_analysis: true +enable_parallel_processing: true +max_parallel_workers: 8 # More workers +``` + +## Performance Statistics + +Get detailed performance stats: + +```python +stats = refactron.get_performance_stats() + +# AST Cache +print(f"Cache hits: {stats['ast_cache']['hits']}") +print(f"Hit rate: {stats['ast_cache']['hit_rate']}%") + +# Parallel Processing +print(f"Workers: {stats['parallel']['max_workers']}") + +# Clear caches when needed +refactron.clear_caches() +``` + +## Troubleshooting + + + + Check if optimizations are enabled: + ```python + stats = refactron.get_performance_stats() + print(f"Cache enabled: {stats['ast_cache']['enabled']}") + print(f"Hit rate: {stats['ast_cache']['hit_rate']}%") + ``` + + + + - Reduce cache size: `max_ast_cache_size_mb: 50` + - Lower parallel workers: `max_parallel_workers: 2` + - Clear caches periodically: `refactron.clear_caches()` + + + + Disable for small codebases: + ```yaml + enable_parallel_processing: false + ``` + + + +## Next Steps + + + Learn how to monitor Refactron in production + diff --git a/docs/api-reference/overview.mdx b/docs/api-reference/overview.mdx new file mode 100644 index 0000000..7adc298 --- /dev/null +++ b/docs/api-reference/overview.mdx @@ 
-0,0 +1,271 @@ +--- +title: API Overview +description: 'Python API reference for Refactron' +--- + +## Introduction + +Refactron provides a comprehensive Python API for programmatic code analysis and refactoring. This is ideal for integrating Refactron into your own tools, scripts, or CI/CD pipelines. + +## Quick Start + +```python +from refactron import Refactron + +# Initialize Refactron +refactron = Refactron() + +# Analyze code +analysis = refactron.analyze("myproject/") +print(f"Found {analysis.summary['total_issues']} issues") + +# Refactor code +result = refactron.refactor("myfile.py", preview=True) +result.show_diff() +``` + +## Core Classes + + + + Main entry point for all operations + + + Configuration management + + + Analysis results and metrics + + + Refactoring results and diffs + + + +## Installation for Python API + +```bash +pip install refactron +``` + +## Basic Patterns + +### Analysis + +```python +from refactron import Refactron + +refactron = Refactron() + +# Analyze file or directory +analysis = refactron.analyze("path/to/code") + +# Access summary +print(f"Files: {analysis.summary['files_analyzed']}") +print(f"Issues: {analysis.summary['total_issues']}") + +# Iterate through issues +for issue in analysis.issues: + print(f"{issue.level.value}: {issue.message} at {issue.file_path}:{issue.line_number}") +``` + +### Filtering Issues + +```python +# Filter by severity +critical_issues = [i for i in analysis.issues if i.level.value == "CRITICAL"] +errors = [i for i in analysis.issues if i.level.value == "ERROR"] + +# Filter by category +security_issues = [i for i in analysis.issues if i.category == "SECURITY"] +complexity_issues = [i for i in analysis.issues if i.category == "COMPLEXITY"] +``` + +### Refactoring + +```python +from refactron import Refactron + +refactron = Refactron() + +# Preview refactoring +result = refactron.refactor("myfile.py", preview=True) + +# Show diff for each operation +for op in result.operations: + print(f"Operation: 
{op.operation_type}") + print(f"Risk: {op.risk_score}") + print(f"Diff:\n{op.diff}") + +# Apply specific refactorings +result = refactron.refactor( + "myfile.py", + preview=False, + operation_types=["extract_constant", "add_docstring"] +) + +if result.success: + print(f"Backup: {result.backup_path}") +``` + +### Configuration + +```python +from refactron import Refactron +from refactron.core.config import RefactronConfig + +# Custom configuration +config = RefactronConfig( + max_function_complexity=15, + max_function_length=100, + enabled_analyzers=["security", "complexity"], + enabled_refactorers=["extract_constant"], + enable_pattern_learning=True +) + +refactron = Refactron(config) +``` + +### Feedback and Pattern Learning + +```python +from refactron import Refactron + +refactron = Refactron() +result = refactron.refactor("myfile.py", preview=True) + +# Record feedback +for op in result.operations: + refactron.record_feedback( + operation_id=op.operation_id, + action="accepted", # or "rejected", "ignored" + reason="Improved code quality", + operation=op + ) +``` + +## Configuration Options + + + Maximum cyclomatic complexity for functions + + + + Maximum lines per function + + + + Maximum parameters per function + + + + Maximum nesting depth for control structures + + + + List of analyzers to enable: `security`, `complexity`, `code_smell`, `type_hint`, `dead_code`, `dependency` + + + + List of refactorers to enable: `extract_constant`, `add_docstring`, `simplify_conditionals`, `reduce_parameters` + + + + Enable pattern learning system + + +## Models and Data Structures + +### Issue + +```python +class Issue: + file_path: str # Path to file with issue + line_number: int # Line number + message: str # Issue description + level: IssueLevel # CRITICAL, ERROR, WARNING, INFO + category: str # SECURITY, COMPLEXITY, etc. 
+ suggestion: Optional[str] # Fix suggestion +``` + +### RefactoringOperation + +```python +class RefactoringOperation: + operation_id: str # Unique identifier + operation_type: str # Type of refactoring + file_path: str # Target file + line_number: int # Target line + risk_score: float # 0.0 to 1.0 + diff: str # Unified diff + description: str # Human-readable description +``` + +### AnalysisResult + +```python +class AnalysisResult: + summary: dict # Summary statistics + issues: List[Issue] # All detected issues + metrics: FileMetrics # Code metrics + errors: List[str] # Any errors encountered +``` + +## Error Handling + +```python +from refactron import Refactron +from refactron.core.exceptions import RefactronError, AnalysisError + +refactron = Refactron() + +try: + analysis = refactron.analyze("myproject/") +except AnalysisError as e: + print(f"Analysis failed: {e}") +except RefactronError as e: + print(f"Refactron error: {e}") +``` + +## Advanced Usage + +### Custom Analyzers + +```python +from refactron.analyzers.base_analyzer import BaseAnalyzer + +class MyCustomAnalyzer(BaseAnalyzer): + def analyze_file(self, file_path, ast_tree): + # Custom analysis logic + issues = [] + # ... 
analyze and create issues + return issues + +# Use with Refactron +refactron = Refactron() +refactron.analyzers.append(MyCustomAnalyzer()) +``` + +### Performance Optimization + +```python +from refactron import Refactron + +refactron = Refactron() + +# Get performance stats +stats = refactron.get_performance_stats() +print(f"AST cache hits: {stats['ast_cache']['hits']}") + +# Clear caches +refactron.clear_caches() +``` + +## Next Steps + + + Detailed API documentation for the Refactron class + diff --git a/docs/api-reference/refactron-class.mdx b/docs/api-reference/refactron-class.mdx new file mode 100644 index 0000000..40c9f7d --- /dev/null +++ b/docs/api-reference/refactron-class.mdx @@ -0,0 +1,345 @@ +--- +title: Refactron Class +description: 'Main API class reference' +--- + +## Overview + +The `Refactron` class is the main entry point for all code analysis and refactoring operations. + +## Constructor + +```python +from refactron import Refactron +from refactron.core.config import RefactronConfig + +# Default configuration +refactron = Refactron() + +# Custom configuration +config = RefactronConfig( + max_function_complexity=15, + enabled_analyzers=["security", "complexity"] +) +refactron = Refactron(config) +``` + + + Configuration object. If None, uses default configuration. + + +## Methods + +### analyze() + +Analyze a file or directory for issues. + +```python +def analyze(target: Union[str, Path]) -> AnalysisResult +``` + + + Path to file or directory to analyze + + +**Returns:** `AnalysisResult` containing detected issues and metrics + +**Example:** +```python +refactron = Refactron() +result = refactron.analyze("myproject/") + +print(f"Files analyzed: {result.summary['files_analyzed']}") +print(f"Total issues: {result.summary['total_issues']}") + +for issue in result.issues: + print(f"{issue.level.value}: {issue.message}") +``` + +--- + +### refactor() + +Refactor a file or directory with intelligent transformations. 
+ +```python +def refactor( + target: Union[str, Path], + preview: bool = True, + operation_types: Optional[List[str]] = None +) -> RefactorResult +``` + + + Path to file or directory to refactor + + + + If True, show changes without applying them + + + + Specific refactoring types to apply. None = all types. + + Available types: + - `extract_constant` + - `add_docstring` + - `simplify_conditionals` + - `reduce_parameters` + + +**Returns:** `RefactorResult` containing operations and status + +**Example:** +```python +# Preview all refactorings +result = refactron.refactor("myfile.py", preview=True) +result.show_diff() + +# Apply specific refactorings +result = refactron.refactor( + "myfile.py", + preview=False, + operation_types=["extract_constant", "add_docstring"] +) + +if result.success: + print(f"Applied {len(result.operations)} refactorings") + print(f"Backup: {result.backup_path}") +``` + +--- + +### record_feedback() + +Record developer feedback on a refactoring suggestion. + +```python +def record_feedback( + operation_id: str, + action: str, + reason: Optional[str] = None, + operation: Optional[RefactoringOperation] = None +) -> None +``` + + + Unique identifier for the refactoring operation + + + + Feedback action: `"accepted"`, `"rejected"`, or `"ignored"` + + + + Optional reason for the feedback + + + + The refactoring operation object (for pattern learning) + + +**Example:** +```python +result = refactron.refactor("myfile.py", preview=True) + +for op in result.operations: + refactron.record_feedback( + operation_id=op.operation_id, + action="accepted", + reason="Improved readability", + operation=op + ) +``` + +--- + +### get_python_files() + +Get all Python files in a directory, respecting exclude patterns. 
```python
+def get_python_files(directory: Path) -> List[Path]
+```
+
+
+  Directory to search for Python files
+
+
+**Returns:** List of Python file paths
+
+**Example:**
+```python
+from pathlib import Path
+
+refactron = Refactron()
+files = refactron.get_python_files(Path("myproject"))
+
+print(f"Found {len(files)} Python files")
+for file in files:
+    print(file)
+```
+
+---
+
+### get_performance_stats()
+
+Get performance statistics from optimization components.
+
+```python
+def get_performance_stats() -> Dict[str, Any]
+```
+
+**Returns:** Dictionary containing performance statistics
+
+**Example:**
+```python
+stats = refactron.get_performance_stats()
+
+print(f"AST cache hits: {stats['ast_cache']['hits']}")
+print(f"AST cache misses: {stats['ast_cache']['misses']}")
+print(f"Parallel jobs: {stats['parallel']['max_workers']}")
+```
+
+---
+
+### clear_caches()
+
+Clear all performance-related caches.
+
+```python
+def clear_caches() -> None
+```
+
+**Example:**
+```python
+# Clear caches after major code changes
+refactron.clear_caches()
+```
+
+---
+
+### detect_project_root()
+
+Detect project root by looking for common markers.
+
+```python
+def detect_project_root(file_path: Path) -> Path
+```
+
+
+  File path to start searching from
+
+
+**Returns:** Path to project root
+
+**Example:**
+```python
+from pathlib import Path
+
+refactron = Refactron()
+root = refactron.detect_project_root(Path("src/module/file.py"))
+print(f"Project root: {root}")
+```
+
+## Properties
+
+### config
+
+Access current configuration.
+
+```python
+config = refactron.config
+print(f"Max complexity: {config.max_function_complexity}")
+```
+
+### analyzers
+
+List of enabled analyzers.
+
+```python
+analyzers = refactron.analyzers
+print(f"Enabled analyzers: {[a.__class__.__name__ for a in analyzers]}")
+```
+
+### refactorers
+
+List of enabled refactorers.
+ +```python +refactorers = refactron.refactorers +print(f"Enabled refactorers: {[r.__class__.__name__ for r in refactorers]}") +``` + +## Complete Example + +```python +from refactron import Refactron +from refactron.core.config import RefactronConfig +from pathlib import Path + +# Configure Refactron +config = RefactronConfig( + max_function_complexity=15, + max_function_length=100, + enabled_analyzers=["security", "complexity", "code_smell"], + enabled_refactorers=["extract_constant", "add_docstring"], + enable_pattern_learning=True +) + +# Initialize +refactron = Refactron(config) + +# Analyze project +print("Analyzing project...") +analysis = refactron.analyze("myproject/") + +# Print summary +print(f"\nAnalysis Summary:") +print(f" Files analyzed: {analysis.summary['files_analyzed']}") +print(f" Total issues: {analysis.summary['total_issues']}") + +# Print critical issues +critical = [i for i in analysis.issues if i.level.value == "CRITICAL"] +if critical: + print(f"\nCritical Issues ({len(critical)}):") + for issue in critical: + print(f" {issue.file_path}:{issue.line_number} - {issue.message}") + +# Refactor with preview +print("\nGenerating refactoring suggestions...") +result = refactron.refactor("myproject/", preview=True) + +# Review and apply specific operations +for op in result.operations: + print(f"\nOperation: {op.operation_type}") + print(f" File: {op.file_path}:{op.line_number}") + print(f" Risk: {op.risk_score}") + print(f" Description: {op.description}") + + # Record feedback + refactron.record_feedback( + operation_id=op.operation_id, + action="accepted", + reason="Approved after review", + operation=op + ) + +# Get performance stats +stats = refactron.get_performance_stats() +print(f"\nPerformance:") +print(f" AST cache hit rate: {stats['ast_cache']['hit_rate']:.1%}") + +print("\nRefactoring complete!") +``` + +## Next Steps + + + + Learn about analysis features + + + Master refactoring workflows + + diff --git a/docs/cli/commands.mdx 
b/docs/cli/commands.mdx
new file mode 100644
index 0000000..a43efb7
--- /dev/null
+++ b/docs/cli/commands.mdx
@@ -0,0 +1,543 @@
+---
+title: CLI Commands
+description: 'Complete command-line interface reference'
+---
+
+## Overview
+
+Refactron provides a comprehensive CLI for code analysis, refactoring, AI features, and more.
+
+## Core Commands
+
+### analyze
+
+Analyze code for issues.
+
+```bash
+refactron analyze <target> [options]
+```
+
+**Options:**
+- `--detailed` - Show detailed analysis output
+- `--summary` - Show summary only
+- `--config FILE` - Use custom config file
+
+**Examples:**
+```bash
+# Analyze single file
+refactron analyze myfile.py
+
+# Analyze directory with details
+refactron analyze myproject/ --detailed
+```
+
+---
+
+### refactor
+
+Preview or apply refactoring suggestions.
+
+```bash
+refactron refactor <target> [options]
+```
+
+**Options:**
+- `--preview` - Preview changes without applying
+- `--type TYPE` - Filter by operation type (can use multiple)
+- `--risk-level LEVEL` - Filter by risk: safe, low, moderate, high
+- `--ai` - Enable AI-powered refactoring
+- `--feedback` - Enable interactive feedback
+
+**Examples:**
+```bash
+# Preview all refactorings
+refactron refactor myfile.py --preview
+
+# Apply specific types only
+refactron refactor myfile.py --type extract_constant --type add_docstring
+
+# AI-powered refactoring
+refactron refactor myfile.py --ai --preview
+```
+
+---
+
+### autofix
+
+Apply automated fixes.
+
+```bash
+refactron autofix <target> [options]
+```
+
+**Options:**
+- `--preview` - Preview auto-fixes
+- `--apply` - Apply auto-fixes
+- `--no-backup` - Don't create backups
+
+**Examples:**
+```bash
+# Preview auto-fixes
+refactron autofix myfile.py --preview
+
+# Apply safe fixes
+refactron autofix myfile.py --apply
+```
+
+---
+
+### report
+
+Generate analysis reports.
+
+```bash
+refactron report <target> [options]
+```
+
+**Options:**
+- `--format FORMAT` - Output format: json, html, text
+- `-o FILE` - Output file path
+
+**Examples:**
+```bash
+# JSON report
+refactron report myproject/ --format json -o report.json
+
+# HTML report
+refactron report myproject/ --format html -o report.html
+```
+
+---
+
+## AI Commands
+
+### suggest
+
+Get AI-powered suggestions for specific lines.
+
+```bash
+refactron suggest <file> [options]
+```
+
+**Options:**
+- `--line N` - Target specific line number
+- `--apply` - Apply suggestion immediately
+
+**Examples:**
+```bash
+# Get suggestion for line 42
+refactron suggest myfile.py --line 42
+
+# Apply AI suggestion
+refactron suggest myfile.py --line 42 --apply
+```
+
+---
+
+### document
+
+Generate AI-powered documentation.
+
+```bash
+refactron document <file> [options]
+```
+
+**Options:**
+- `--apply` - Apply generated docstrings
+
+**Examples:**
+```bash
+# Preview documentation
+refactron document myfile.py
+
+# Apply documentation
+refactron document myfile.py --apply
+```
+
+---
+
+### rag
+
+Manage RAG (Retrieval-Augmented Generation) index.
+
+```bash
+refactron rag <command> [options]
+```
+
+**Commands:**
+- `index` - Index codebase for RAG
+- `search` - Search indexed code
+- `status` - Show index status
+
+**Options:**
+- `--summarize` - Generate AI summaries when indexing
+- `--rerank` - Use AI reranking for search
+- `--update` - Incremental index update
+- `--force` - Force full re-index
+
+**Examples:**
+```bash
+# Index codebase
+refactron rag index
+
+# Index with AI summaries
+refactron rag index --summarize
+
+# Search code
+refactron rag search "authentication logic"
+
+# Search with reranking
+refactron rag search "database queries" --rerank
+
+# Check status
+refactron rag status
+```
+
+---
+
+## Pattern Learning Commands
+
+### feedback
+
+Record feedback on refactoring operations.
+
+```bash
+refactron feedback <operation-id> --action <action> [options]
+```
+
+**Options:**
+- `--action ACTION` - accepted, rejected, or ignored
+- `--reason TEXT` - Reason for feedback
+
+**Examples:**
+```bash
+# Accept a suggestion
+refactron feedback op-123 --action accepted
+
+# Reject with reason
+refactron feedback op-456 --action rejected --reason "Breaks API contract"
+```
+
+---
+
+### patterns
+
+Manage pattern learning system.
+
+```bash
+refactron patterns <command>
+```
+
+**Commands:**
+- `analyze` - Analyze project patterns
+- `recommend` - Get tuning recommendations
+- `tune` - Apply pattern tuning
+- `profile` - View current pattern profile
+
+**Options:**
+- `--auto` - Auto-apply recommendations (for tune command)
+
+**Examples:**
+```bash
+# Analyze patterns
+refactron patterns analyze
+
+# Get recommendations
+refactron patterns recommend
+
+# Auto-tune
+refactron patterns tune --auto
+
+# View profile
+refactron patterns profile
+```
+
+---
+
+## Authentication & Repository
+
+### auth
+
+Manage authentication.
+
+```bash
+refactron auth <command>
+```
+
+**Commands:**
+- `status` - Check authentication status
+- `logout` - Log out from account
+
+**Examples:**
+```bash
+# Check auth status
+refactron auth status
+
+# Logout
+refactron auth logout
+```
+
+---
+
+### repo
+
+Manage repository connections.
+
+```bash
+refactron repo <command> [options]
+```
+
+**Commands:**
+- `list` - List connected repositories
+- `connect` - Connect a repository
+- `disconnect` - Disconnect a repository
+
+**Options:**
+- `--path PATH` - Repository path (for connect)
+- `--delete-files` - Delete local files (for disconnect)
+
+**Examples:**
+```bash
+# List repos
+refactron repo list
+
+# Connect repo
+refactron repo connect my-repo --path /path/to/repo
+
+# Disconnect repo
+refactron repo disconnect my-repo
+```
+
+---
+
+## Backup & Rollback
+
+### rollback
+
+Rollback refactoring changes.
+
+```bash
+refactron rollback [options]
+```
+
+**Options:**
+- `--list` - List available rollback sessions
+- `--session ID` - Rollback specific session
+
+**Examples:**
+```bash
+# List sessions
+refactron rollback --list
+
+# Rollback session
+refactron rollback --session abc123
+```
+
+---
+
+## CI/CD Integration
+
+### ci
+
+Generate CI/CD configuration.
+
+```bash
+refactron ci <platform>
+```
+
+**Platforms:**
+- `github` - GitHub Actions workflow
+- `gitlab` - GitLab CI configuration
+- `pre-commit` - Pre-commit hooks
+
+**Examples:**
+```bash
+# Generate GitHub Actions
+refactron ci github
+
+# Generate GitLab CI
+refactron ci gitlab
+
+# Generate pre-commit hooks
+refactron ci pre-commit
+```
+
+---
+
+## Observability
+
+### metrics
+
+View usage metrics.
+
+```bash
+refactron metrics [options]
+```
+
+**Options:**
+- `--format FORMAT` - Output format: json, text
+
+**Examples:**
+```bash
+# View metrics
+refactron metrics
+
+# JSON format
+refactron metrics --format json
+```
+
+---
+
+### serve-metrics
+
+Start Prometheus metrics server.
+
+```bash
+refactron serve-metrics [options]
+```
+
+**Options:**
+- `--port N` - Server port (default: 9090)
+
+**Examples:**
+```bash
+# Start metrics server
+refactron serve-metrics
+
+# Custom port
+refactron serve-metrics --port 8080
+```
+
+---
+
+### telemetry
+
+Manage telemetry settings.
+
+```bash
+refactron telemetry <command>
+```
+
+**Commands:**
+- `enable` - Enable telemetry
+- `disable` - Disable telemetry
+- `status` - Show telemetry status
+
+**Examples:**
+```bash
+# Check status
+refactron telemetry status
+
+# Disable telemetry
+refactron telemetry disable
+```
+
+---
+
+## Configuration
+
+### init
+
+Initialize Refactron configuration.
+
+```bash
+refactron init
+```
+
+Creates `.refactron.yaml` in current directory with default settings.
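The generated file can then be edited by hand. A sketch of the kind of settings it holds, using options documented elsewhere in this guide — the exact defaults written by `init` may differ, and the values shown are examples:

```yaml .refactron.yaml
# Analysis thresholds (example values)
max_function_complexity: 10
max_function_length: 50

# Logging
log_level: INFO
log_format: text

# Performance
enable_ast_cache: true
enable_incremental_analysis: true
```

Committing the file keeps local runs and CI on the same settings.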
+ +--- + +## Global Options + +Available for all commands: + +- `--help` - Show help message +- `--version` - Show version +- `--config FILE` - Use custom config file +- `--log-level LEVEL` - Set log level (DEBUG, INFO, WARNING, ERROR) + +**Examples:** +```bash +# Show version +refactron --version + +# Custom config +refactron analyze myfile.py --config custom-config.yaml + +# Debug logging +refactron analyze myfile.py --log-level DEBUG +``` + +--- + +## Environment Variables + +```bash +# Disable color output +export REFACTRON_NO_COLOR=1 + +# Set log level +export REFACTRON_LOG_LEVEL=DEBUG + +# Custom config path +export REFACTRON_CONFIG=/path/to/.refactron.yaml + +# API keys +export GROQ_API_KEY=your-key +export REFACTRON_API_KEY=your-key +``` + +--- + +## Exit Codes + +| Code | Meaning | +|------|---------| +| 0 | Success, no issues found | +| 1 | Issues found | +| 2 | Error during execution | + +--- + +## Quick Reference + +```bash +# Analysis workflow +refactron analyze myproject/ +refactron report myproject/ --format json -o report.json + +# Refactoring workflow +refactron refactor myfile.py --preview +refactron refactor myfile.py --type extract_constant +refactron rollback --list + +# AI workflow +refactron rag index +refactron refactor myfile.py --ai --preview +refactron suggest myfile.py --line 42 + +# Pattern learning +refactron feedback op-123 --action accepted +refactron patterns analyze +refactron patterns tune --auto +``` + +## Next Steps + + + + Python API documentation + + + Configure Refactron settings + + + diff --git a/docs/docs.json b/docs/docs.json new file mode 100644 index 0000000..22642d2 --- /dev/null +++ b/docs/docs.json @@ -0,0 +1,92 @@ +{ + "$schema": "https://mintlify.com/docs.json", + "theme": "maple", + "name": "Refactron", + "description": "Intelligent Python code refactoring with AI-powered suggestions and pattern learning", + "colors": { + "primary": "#10B981", + "light": "#ECFDF5", + "dark": "#065F46" + }, + "fonts": { + "family": 
"Space Grotesk" + }, + "navigation": { + "groups": [ + { + "group": "Getting Started", + "pages": [ + "introduction", + "quickstart" + ] + }, + { + "group": "Essentials", + "pages": [ + "essentials/installation", + "essentials/configuration", + "essentials/authentication" + ] + }, + { + "group": "Guides", + "pages": [ + "guides/code-analysis", + "guides/refactoring", + "guides/ai-features", + "guides/pattern-learning" + ] + }, + { + "group": "CLI Reference", + "pages": [ + "cli/commands" + ] + }, + { + "group": "API Reference", + "pages": [ + "api-reference/overview", + "api-reference/refactron-class" + ] + }, + { + "group": "Advanced", + "pages": [ + "advanced/performance", + "advanced/monitoring", + "advanced/ci-cd" + ] + }, + { + "group": "Resources", + "pages": [ + "resources/changelog", + "resources/faq" + ] + } + ] + }, + "logo": { + "light": "/logo/logo.png", + "dark": "/logo/logo.png", + "href": "https://www.refactron.dev" + }, + "footer": { + "socials": { + "github": "https://github.com/Refactron-ai/Refactron_lib", + "x": "https://x.com/refactron", + "linkedin": "https://www.linkedin.com/company/refactron", + "discord": "https://discord.gg/HY8WawSH" + } + }, + "favicon": "logo/logo.svg", + "contextual": { + "options": [ + "copy", + "view", + "chatgpt", + "claude" + ] + } +} \ No newline at end of file diff --git a/docs/essentials/authentication.mdx b/docs/essentials/authentication.mdx new file mode 100644 index 0000000..ddb6cd9 --- /dev/null +++ b/docs/essentials/authentication.mdx @@ -0,0 +1,159 @@ +--- +title: Authentication +description: 'Authenticate with Refactron cloud services' +--- + +## Overview + +Some Refactron features require authentication with cloud services: + +- Repository management +- Cloud-based AI features +- Collaborative pattern learning +- Usage analytics + +## Login + +Authenticate using the CLI: + +```bash +refactron login +``` + +This will open your browser to complete the authentication flow. 
+ + + Local analysis and refactoring features work without authentication + + +## Check Authentication Status + +Verify your authentication status: + +```bash +refactron auth status +``` + +Expected output: + +``` +✓ Authenticated as user@example.com + Organization: my-org + Plan: Pro +``` + +## Logout + +To log out from your account: + +```bash +refactron auth logout +``` + +## API Keys + +For CI/CD environments, use API keys instead of interactive login: + +### Generate API Key + +1. Log in via the CLI: `refactron login` +2. Visit your dashboard +3. Navigate to **Settings** → **API Keys** +4. Click **Generate New Key** + +### Use API Key + +Set the API key as an environment variable: + +```bash +export REFACTRON_API_KEY=your_api_key_here +``` + + + Keep your API keys secret! Never commit them to version control. + + +## Authenticated Features + +Features that require authentication: + + + + ```bash + refactron repo list + refactron repo connect my-repo + ``` + + + + Enhanced AI-powered refactoring with cloud LLM models + + + + ```bash + refactron metrics + ``` + + + + Share learned patterns across your team + + + +## Offline Mode + +Refactron works offline for core features: + +- Local code analysis +- Refactoring suggestions +- Auto-fix operations +- Report generation + +Cloud features gracefully degrade when offline. + +## Troubleshooting + + + + If the browser doesn't open automatically: + 1. Copy the URL from the terminal + 2. Paste it into your browser manually + 3. Complete the authentication + + + + If you see "Token expired" errors: + ```bash + refactron auth logout + refactron login + ``` + + + + Verify the API key is set correctly: + ```bash + echo $REFACTRON_API_KEY + ``` + Ensure it's exported in your shell profile. 
+ + + +## Security + + + Refactron follows security best practices: + - Tokens are stored securely in your system keychain + - API keys are never logged or displayed + - All communication uses HTTPS + - Authentication tokens expire after 30 days + + +## Next Steps + + + Start analyzing your code with Refactron + diff --git a/docs/essentials/configuration.mdx b/docs/essentials/configuration.mdx new file mode 100644 index 0000000..335614a --- /dev/null +++ b/docs/essentials/configuration.mdx @@ -0,0 +1,187 @@ +--- +title: Configuration +description: 'Configure Refactron for your project' +--- + +## Configuration File + +Refactron uses a `.refactron.yaml` file to configure analyzers, refactorers, and thresholds. + +### Initialize Configuration + +Generate a default configuration file: + +```bash +refactron init +``` + +This creates `.refactron.yaml` in your project root. + +## Configuration Options + +### Basic Configuration + +```yaml .refactron.yaml +# Analyzers to run +enabled_analyzers: + - complexity + - code_smell + - security + - type_hint + - dead_code + - dependency + +# Refactorers to use +enabled_refactorers: + - extract_constant + - add_docstring + - simplify_conditionals + - reduce_parameters + +# Thresholds +max_function_complexity: 10 +max_function_length: 50 +max_parameters: 5 +max_nesting_depth: 3 +``` + +### Pattern Learning + +Enable pattern learning to improve suggestions over time: + +```yaml .refactron.yaml +# Pattern Learning (optional) +enable_pattern_learning: true +pattern_learning_enabled: true +pattern_ranking_enabled: true +pattern_storage_dir: null # null = auto-detect +``` + + + Learn more about pattern learning in the [Pattern Learning Guide](/guides/pattern-learning) + + +## Available Analyzers + + + + Detects security vulnerabilities like SQL injection, code injection, hardcoded secrets, and SSRF + + + + Identifies magic numbers, long functions, excessive parameters, and deep nesting + + + + Measures cyclomatic complexity, 
maintainability index, and nested loops + + + + Checks for missing or incomplete type annotations + + + + Finds unused functions and unreachable code + + + + Analyzes circular imports and wildcard imports + + + +## Available Refactorers + + + + Extract magic numbers into named constants + + + + Add missing docstrings to functions and classes + + + + Simplify complex conditional statements + + + + Reduce function parameter count using dataclasses or dictionaries + + + +## Threshold Configuration + +Adjust thresholds to match your project's standards: + +```yaml .refactron.yaml +# Complexity thresholds +max_function_complexity: 15 # Default: 10 +max_function_length: 100 # Default: 50 +max_parameters: 7 # Default: 5 +max_nesting_depth: 4 # Default: 3 + +# Code quality thresholds +min_maintainability_index: 60 # Default: 65 +max_cognitive_complexity: 20 # Default: 15 +``` + +## Exclude Patterns + +Exclude files or directories from analysis: + +```yaml .refactron.yaml +exclude_patterns: + - "*/tests/*" + - "*/migrations/*" + - "*/venv/*" + - "*.pyc" + - "__pycache__" +``` + +## Environment Variables + +Configure Refactron using environment variables: + +```bash +# Disable color output +export REFACTRON_NO_COLOR=1 + +# Set log level (DEBUG, INFO, WARNING, ERROR) +export REFACTRON_LOG_LEVEL=DEBUG + +# Custom config path +export REFACTRON_CONFIG=/path/to/.refactron.yaml +``` + +## Per-File Ignores + +Ignore specific issues in code using comments: + +```python +def my_function(): # refactron: ignore + # This entire function will be ignored + pass + +def another_function(): + x = 42 # refactron: ignore magic-number + # Only magic-number check will be ignored for this line +``` + +## Next Steps + + + + Set up authentication for cloud features + + + Learn about analysis capabilities + + diff --git a/docs/essentials/installation.mdx b/docs/essentials/installation.mdx new file mode 100644 index 0000000..0ec2f0e --- /dev/null +++ b/docs/essentials/installation.mdx @@ -0,0 +1,121 
@@ +--- +title: Installation +description: 'How to install and set up Refactron' +--- + +## Requirements + + + **Python Version**: 3.8 or higher + **Supported Platforms**: macOS, Linux, Windows + + +## Install via pip + +The recommended way to install Refactron is using pip: + +```bash +pip install refactron +``` + +### Development Version + +To install the latest development version from GitHub: + +```bash +pip install git+https://github.com/Refactron-ai/Refactron_lib.git +``` + +## Verify Installation + +Check that Refactron is installed correctly: + +```bash +refactron --version +``` + +You should see output like: + +``` +refactron, version 1.0.15 +``` + +## Optional: Install Development Dependencies + +If you plan to contribute to Refactron: + +```bash +# Clone the repository +git clone https://github.com/Refactron-ai/Refactron_lib.git +cd Refactron_lib + +# Run the development setup script +bash setup_dev.sh +``` + +This will: +- Create a virtual environment +- Install all dependencies (including dev tools) +- Set up pre-commit hooks +- Run initial tests + +## Dependencies + +Refactron automatically installs these dependencies: + + + + - **libcst** - Code parsing and transformation + - **click** - CLI framework + - **rich** - Terminal formatting + - **radon** - Complexity metrics + - **astroid** - AST analysis + + + + - **chromadb** - Vector database for RAG + - **tree-sitter** - Code parsing + - **sentence-transformers** - Embeddings + - **groq** - LLM integration + - **pydantic** - Data validation + + + +## Troubleshooting + + + + If you get a permission error, try: + ```bash + pip install --user refactron + ``` + + + + Ensure you're using Python 3.8+: + ```bash + python --version + ``` + If needed, use a specific Python version: + ```bash + python3.10 -m pip install refactron + ``` + + + + On macOS, you may need to install certificates: + ```bash + /Applications/Python\ 3.x/Install\ Certificates.command + ``` + + + +## Next Steps + + + Configure Refactron 
for your project + diff --git a/docs/guides/ai-features.mdx b/docs/guides/ai-features.mdx new file mode 100644 index 0000000..16466e6 --- /dev/null +++ b/docs/guides/ai-features.mdx @@ -0,0 +1,364 @@ +--- +title: AI Features +description: 'LLM and RAG-powered intelligent refactoring' +--- + +## Overview + +Refactron v1.0.15+ integrates Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) to provide context-aware, intelligent code refactoring and documentation generation. + +## Core Components + + + + Coordinates between analyzer, retriever, and LLM backends + + + Vector database (ChromaDB) for code indexing and retrieval + + + Support for Groq, local models, and custom providers + + + Validates LLM-generated code for syntax and safety + + + +## Setup + +### Prerequisites + +AI features require additional dependencies (already included with Refactron): + +- ChromaDB for vector storage +- Sentence Transformers for embeddings +- Groq API access (or alternative LLM provider) + +### Configuration + +Set your LLM API key: + +```bash +export GROQ_API_KEY='your-api-key-here' +``` + +Configure LLM settings in `.refactron.yaml`: + +```yaml .refactron.yaml +llm: + provider: groq + model: llama3-70b-8192 + temperature: 0.1 + max_tokens: 4096 + +rag: + enabled: true + storage_dir: .refactron/rag_index + embedding_model: all-MiniLM-L6-v2 +``` + +## RAG Indexing + +### Index Your Codebase + +Before using AI features, index your project: + +```bash +# Basic indexing +refactron rag index + +# Index with AI-generated summaries +refactron rag index --summarize +``` + +This creates vector embeddings of your code for context retrieval. 
+ +### Check Index Status + +```bash +refactron rag status +``` + +Output: +``` +✓ RAG Index Status + Indexed files: 142 + Total chunks: 856 + Last updated: 2024-02-15 10:30:42 + Storage: .refactron/rag_index +``` + +### Search Codebase + +Semantic search across your codebase: + +```bash +# Basic search +refactron rag search "authentication logic" + +# With AI reranking +refactron rag search "database connection" --rerank +``` + +## AI-Powered Refactoring + +### Enable AI Suggestions + +Use the `--ai` flag to enable LLM-powered refactoring: + +```bash +# Preview AI refactoring suggestions +refactron refactor myfile.py --ai --preview + +# Apply AI suggestions +refactron refactor myfile.py --ai --apply +``` + +### How It Works + + + + Refactron analyzes your code for issues + + + + RAG system retrieves relevant code chunks from your project + + + + LLM generates refactoring suggestions with project context + + + + Safety gate validates syntax and checks for issues + + + + Suggestions presented with explanations and risk scores + + + +### Example + +```python +# Original code with issues +def process_users(users): + result = [] + for user in users: + if user.age > 18: + if user.status == 'active': + if user.verified: + result.append(user) + return result +``` + +AI-powered refactoring suggests: + +```python +# AI-suggested refactoring +def process_users(users): + """Process and filter users based on eligibility criteria. 
+ + Args: + users: List of user objects to process + + Returns: + List of eligible users (adult, active, verified) + """ + return [ + user for user in users + if user.age > 18 and user.status == 'active' and user.verified + ] +``` + +## AI Documentation Generation + +### Generate Docstrings + +Automatically create docstrings using AI: + +```bash +# Preview docstring generation +refactron document myfile.py + +# Apply docstrings +refactron document myfile.py --apply +``` + +### Line-Specific Suggestions + +Get AI suggestions for specific lines: + +```bash +# Suggest improvements for line 42 +refactron suggest myfile.py --line 42 + +# Apply suggestion instantly +refactron suggest myfile.py --line 42 --apply +``` + +## Available LLM Providers + + + + Fast, cloud-based LLM provider with free tier + + **Models:** + - `llama3-70b-8192` - Best quality + - `llama3-8b-8192` - Faster, good quality + - `mixtral-8x7b-32768` - Long context window + + **Setup:** + ```bash + export GROQ_API_KEY='your-key' + ``` + + + + Bring your own LLM provider + + Configure in `.refactron.yaml`: + ```yaml + llm: + provider: custom + endpoint: https://your-llm-api.com/v1 + api_key: your-key + ``` + + + +## RAG Workflow + +### How RAG Enhances Refactoring + +1. **Parsing**: Code files split into semantic chunks (classes, methods) +2. **Embedding**: Chunks converted to vector representations +3. **Indexing**: Vectors stored in ChromaDB with metadata +4. **Retrieval**: Relevant chunks retrieved when analyzing issues +5. 
**Context**: LLM receives project-specific context for better suggestions + +### Update Index + +Re-index after significant code changes: + +```bash +# Update index incrementally +refactron rag index --update + +# Full re-index +refactron rag index --force +``` + +## Feedback and Learning + +Provide feedback on AI suggestions to improve quality: + +```bash +# Record acceptance +refactron feedback --action accepted + +# Record rejection with reason +refactron feedback --action rejected --reason "Breaks API contract" +``` + + + AI suggestions integrate with [Pattern Learning](/guides/pattern-learning) to improve over time + + +## Configuration Options + + + LLM provider (groq, custom) + + + + Model identifier for the provider + + + + Temperature for generation (0.0 - 1.0, lower = more deterministic) + + + + Maximum tokens in LLM response + + + + Enable RAG system + + + + Directory for RAG index storage + + + + Sentence transformer model for embeddings + + +## Best Practices + + + + Re-run `refactron rag index` after significant code changes for accurate context + + + + Models like Llama 3 70B provide better refactoring logic than smaller models + + + + Use `--preview` to review AI-generated code before applying + + + + Record feedback to improve AI suggestions over time + + + + Run your test suite after applying AI refactorings + + + +## Troubleshooting + + + + Ensure `GROQ_API_KEY` is set: + ```bash + echo $GROQ_API_KEY + ``` + Export it in your shell profile for persistence + + + + Create index first: + ```bash + refactron rag index + ``` + + + + - Use smaller models (llama3-8b-8192) + - Reduce `max_tokens` + - Limit context retrieval chunks + + + +## Next Steps + + + + Learn how feedback improves suggestions + + + Explore all AI-related commands + + diff --git a/docs/guides/code-analysis.mdx b/docs/guides/code-analysis.mdx new file mode 100644 index 0000000..9a8de3a --- /dev/null +++ b/docs/guides/code-analysis.mdx @@ -0,0 +1,305 @@ +--- +title: Code Analysis 
+description: 'Comprehensive code analysis features in Refactron' +--- + +## Overview + +Refactron provides comprehensive static analysis to identify security vulnerabilities, code quality issues, complexity problems, and performance bottlenecks in your Python code. + +## Running Analysis + +### Basic Analysis + +```bash +# Analyze a single file +refactron analyze myfile.py + +# Analyze entire directory +refactron analyze myproject/ + +# Show detailed output +refactron analyze myproject/ --detailed +``` + +### Python API + +```python +from refactron import Refactron + +refactron = Refactron() +analysis = refactron.analyze("myproject/") + +# Print summary +print(f"Files analyzed: {analysis.summary['files_analyzed']}") +print(f"Total issues: {analysis.summary['total_issues']}") + +# Access issues by severity +for issue in analysis.issues: + if issue.level.value == "CRITICAL": + print(f"{issue.message} at line {issue.line_number}") +``` + +## Analysis Categories + + + + Detects critical security vulnerabilities: + - **SQL Injection**: Unsafe database queries + - **Code Injection**: Use of `eval()` and `exec()` + - **Hardcoded Secrets**: API keys, passwords in code + - **SSRF Vulnerabilities**: Unsafe URL handling + + ```python + # ❌ Security issue detected + query = f"SELECT * FROM users WHERE id = {user_id}" # SQL injection + eval(user_input) # Code injection + API_KEY = "hardcoded-secret-123" # Hardcoded secret + ``` + + + + Identifies code smells and maintainability issues: + - **Magic Numbers**: Unexplained numeric constants + - **Long Functions**: Functions exceeding length threshold + - **Excessive Parameters**: Too many function parameters + - **Deep Nesting**: Complex nested control structures + + ```python + # ❌ Code quality issues + def process(a, b, c, d, e, f, g): # Too many parameters + if condition1: + if condition2: + if condition3: + if condition4: # Deep nesting + return 42 # Magic number + ``` + + + + Measures code complexity: + - **Cyclomatic 
Complexity**: Control flow complexity + - **Cognitive Complexity**: Human readability complexity + - **Maintainability Index**: Overall maintainability score + - **Nested Loops**: Performance-impacting nested iterations + + ```python + # ❌ High complexity + def complex_function(data): + for item in data: + for sub in item: + if sub.type == 'A': + for val in sub.values: + # Nested loops = high complexity + process(val) + ``` + + + + Checks type annotation coverage: + - Missing function type hints + - Incomplete parameter annotations + - Missing return type annotations + + ```python + # ❌ Missing type hints + def calculate(x, y): # No type hints + return x + y + + # ✅ Properly typed + def calculate(x: int, y: int) -> int: + return x + y + ``` + + + + Finds unused and unreachable code: + - Unused variables + - Unused functions + - Unreachable code blocks + + ```python + # ❌ Dead code + def process(): + result = expensive_calculation() # Unused variable + return None + + def unused_function(): # Never called + pass + + def example(): + return True + print("Never executed") # Unreachable + ``` + + + + Analyzes import patterns: + - Circular imports + - Wildcard imports + - Deprecated modules + + ```python + # ❌ Dependency issues + from module_a import * # Wildcard import + import deprecated_lib # Deprecated + ``` + + + +## Severity Levels + +Issues are categorized by severity: + + + + Security vulnerabilities requiring immediate attention + + + Significant problems affecting functionality + + + Code quality issues to address + + + Suggestions for improvement + + + +## Filtering Results + +### By Severity + +```python +from refactron import Refactron + +refactron = Refactron() +analysis = refactron.analyze("myproject/") + +# Filter by severity +critical = [i for i in analysis.issues if i.level.value == "CRITICAL"] +errors = [i for i in analysis.issues if i.level.value == "ERROR"] +warnings = [i for i in analysis.issues if i.level.value == "WARNING"] +``` + +### By 
Category + +```python +security_issues = [i for i in analysis.issues if i.category == "SECURITY"] +complexity_issues = [i for i in analysis.issues if i.category == "COMPLEXITY"] +``` + +## Generating Reports + +### JSON Report + +```bash +refactron report myproject/ --format json -o report.json +``` + +### HTML Report + +```bash +refactron report myproject/ --format html -o report.html +``` + +### Custom Report Format + +```python +from refactron import Refactron + +refactron = Refactron() +analysis = refactron.analyze("myproject/") + +# Create custom report +report = { + "summary": analysis.summary, + "critical_issues": [ + { + "file": issue.file_path, + "line": issue.line_number, + "message": issue.message + } + for issue in analysis.issues + if issue.level.value == "CRITICAL" + ] +} +``` + +## Configuration + +Customize analysis behavior in `.refactron.yaml`: + +```yaml .refactron.yaml +# Enable specific analyzers +enabled_analyzers: + - security + - complexity + - code_smell + - type_hint + - dead_code + - dependency + +# Set thresholds +max_function_complexity: 10 +max_function_length: 50 +max_parameters: 5 +max_nesting_depth: 3 + +# Exclude patterns +exclude_patterns: + - "*/tests/*" + - "*/venv/*" +``` + +## Ignoring Issues + +### Inline Ignores + +```python +def my_function(): # refactron: ignore + # Entire function ignored + pass + +x = 42 # refactron: ignore magic-number +# Only magic-number check ignored +``` + +### Configuration-Based Excludes + +```yaml .refactron.yaml +exclude_patterns: + - "*/migrations/*" + - "*/legacy/*" +``` + +## Performance Tips + + + For large codebases, analyze specific modules separately to get faster results + + +```bash +# Analyze specific modules +refactron analyze src/module1/ +refactron analyze src/module2/ +``` + +## Next Steps + + + + Learn how to fix issues automatically + + + Explore all CLI commands + + diff --git a/docs/guides/pattern-learning.mdx b/docs/guides/pattern-learning.mdx new file mode 100644 index 
0000000..3022ede --- /dev/null +++ b/docs/guides/pattern-learning.mdx @@ -0,0 +1,364 @@ +--- +title: Pattern Learning +description: 'Adaptive learning system that improves over time' +--- + +## Overview + +Refactron's Pattern Learning System learns from your refactoring decisions, building a knowledge base that improves the quality and relevance of future recommendations. **The more you use Refactron, the smarter it gets!** + +## How It Works + + + + When Refactron suggests a refactoring, it creates a unique "fingerprint" of the code pattern + + + + You provide feedback: **accepted**, **rejected**, or **ignored** + + + + Refactron tracks: + - Acceptance rates for each pattern + - Pattern frequency and recency + - Code metrics improvements + - Project-specific preferences + + + + Future suggestions ranked based on historical acceptance rates and patterns + + + +## Key Features + + + + Learns from every refactoring decision you make + + + Ranks suggestions by historical acceptance patterns + + + Adapts to your project's unique coding style + + + Patterns persist across sessions + + + +## Configuration + +### Enable Pattern Learning + +Pattern learning is **enabled by default**. 
Configure in `.refactron.yaml`: + +```yaml .refactron.yaml +# Pattern learning settings +enable_pattern_learning: true # Master switch +pattern_learning_enabled: true # Learn from feedback +pattern_ranking_enabled: true # Rank by patterns +pattern_storage_dir: null # null = auto-detect +``` + +### Python API + +```python +from refactron import Refactron +from refactron.core.config import RefactronConfig + +# Enable all pattern learning features +config = RefactronConfig( + enable_pattern_learning=True, + pattern_learning_enabled=True, + pattern_ranking_enabled=True +) +refactron = Refactron(config) + +# Disable pattern learning +config = RefactronConfig(enable_pattern_learning=False) +refactron = Refactron(config) +``` + +## Providing Feedback + +### Interactive Feedback + +```bash +# Refactor with feedback prompts +refactron refactor myfile.py --preview --feedback +``` + +During preview, you'll be prompted to accept/reject each suggestion. + +### Automatic Feedback + +Feedback is automatically recorded when you apply refactorings: + +```bash +# Accepted suggestions are automatically recorded +refactron refactor myfile.py --apply +``` + +### Manual Feedback + +Provide explicit feedback for specific operations: + +```bash +# Accept a suggestion +refactron feedback --action accepted + +# Reject with reason +refactron feedback --action rejected --reason "Breaks API" + +# Mark as ignored +refactron feedback --action ignored +``` + +### Python API Feedback + +```python +from refactron import Refactron + +refactron = Refactron() +result = refactron.refactor("myfile.py", preview=True) + +# Record feedback for first operation +if result.operations: + op = result.operations[0] + refactron.record_feedback( + operation_id=op.operation_id, + action="accepted", + reason="Improved readability", + operation=op + ) +``` + +## Pattern Analysis + +### View Pattern Statistics + +```bash +# Analyze project patterns +refactron patterns analyze +``` + +Output shows: +- Most accepted 
patterns +- Patterns with low acceptance +- Recommendations for tuning + +### Get Tuning Recommendations + +```bash +# Get automated recommendations +refactron patterns recommend +``` + +### Apply Tuning + +```bash +# Auto-apply recommended tuning +refactron patterns tune --auto + +# View current pattern profile +refactron patterns profile +``` + +## Ranking in Action + +When pattern learning is enabled, suggestions show ranking scores: + +```bash +refactron refactor myfile.py --preview +``` + +Output: +``` +📊 Refactoring Suggestions (ranked by learned patterns) + +Operation 1: Extract Constant + File: myfile.py:42 + Ranking Score: 0.85 ⭐ # High acceptance history + Risk: Low (0.2) + Description: Extract magic number 42 to constant + +Operation 2: Simplify Conditional + File: myfile.py:67 + Ranking Score: 0.35 # Lower acceptance history + Risk: Moderate (0.4) + Description: Simplify nested if statements +``` + +## Storage Location + +Pattern data is stored in JSON files: + +**Default Locations:** +- Project root: `.refactron/patterns/` +- User home: `~/.refactron/patterns/` (fallback) + +**Storage Files:** +- `patterns.json` - Learned patterns +- `feedback.json` - Feedback records +- `project_profiles.json` - Project configs +- `pattern_metrics.json` - Pattern metrics + +**Custom Location:** +```yaml .refactron.yaml +pattern_storage_dir: /custom/path/patterns +``` + +## Configuration Options + + + Master switch for all pattern learning features + + + + Enable learning from feedback (requires `enable_pattern_learning`) + + + + Enable ranking based on learned patterns (requires `enable_pattern_learning`) + + + + Custom storage directory (null = auto-detect project root or home directory) + + +## Best Practices + + + + The more feedback you provide, the better the system learns: + - **Accept** good refactorings + - **Reject** inappropriate suggestions + - **Ignore** if unsure + + + + For large projects, use automated tuning: + ```bash + refactron patterns analyze + 
refactron patterns tune --auto + ``` + + + + Periodically check pattern performance: + ```bash + refactron patterns analyze + ``` + + + + In CI/CD environments, use consistent storage: + ```yaml + pattern_storage_dir: /ci/patterns + ``` + + + +## Advanced Usage + +### Custom Pattern Weights + +```python +from refactron.patterns.storage import PatternStorage +from pathlib import Path + +storage = PatternStorage() +profile = storage.get_project_profile(project_path=Path("/project")) + +# Set custom weight for a pattern +profile.set_pattern_weight("pattern-id-123", 0.9) + +# Save profile +storage.save_project_profile(profile) +``` + +### Batch Learning + +```python +from refactron.patterns.learner import PatternLearner + +learner = PatternLearner(storage=storage, fingerprinter=fingerprinter) + +# Load pending feedback +feedback_list = storage.load_feedback() + +# Process in batch +learner.batch_learn(feedback_list) +``` + +### Pattern Cleanup + +```python +from refactron.patterns.learning_service import LearningService + +service = LearningService(storage=storage) + +# Remove patterns older than 90 days +service.cleanup_old_patterns(days=90) +``` + +## Integration with AI + +Pattern learning integrates with [AI Features](/guides/ai-features): + +- AI suggestions use learned patterns for better context +- Feedback on AI refactorings improves future AI suggestions +- RAG system considers pattern preferences + +## Troubleshooting + + + + **Solutions:** + 1. Check `pattern_learning_enabled` is `true` + 2. Verify storage directory is writable + 3. Check logs: `refactron refactor --log-level DEBUG` + + + + **Solutions:** + 1. Check `pattern_ranking_enabled` is `true` + 2. Ensure patterns have been learned (provide feedback first) + 3. Verify sufficient pattern history exists + + + + **Solutions:** + 1. Check directory permissions + 2. Use custom `pattern_storage_dir` if needed + 3. 
Verify disk space available + + + +## Performance + +- **Memory**: In-memory caching with invalidation on updates +- **Storage**: ~100-200 KB for typical project (100-1000 patterns) +- **Learning Speed**: <100ms per feedback record +- **Ranking Speed**: ~10ms per operation + +## Next Steps + + + + Combine pattern learning with AI + + + Performance optimization techniques + + diff --git a/docs/guides/refactoring.mdx b/docs/guides/refactoring.mdx new file mode 100644 index 0000000..ac55e22 --- /dev/null +++ b/docs/guides/refactoring.mdx @@ -0,0 +1,337 @@ +--- +title: Refactoring +description: 'Automated refactoring and code transformation' +--- + +## Overview + +Refactron provides intelligent refactoring suggestions with safety previews, risk scoring, and automated fixes. All refactorings include backup and rollback capabilities. + +## Basic Refactoring + +### Preview Changes + +Always preview refactoring suggestions before applying: + +```bash +refactron refactor myfile.py --preview +``` + +### Apply Refactoring + +Apply all suggested refactorings: + +```bash +refactron refactor myfile.py +``` + + + Refactoring modifies your code. Always preview changes first and ensure you have backups or version control. + + +## Available Refactorers + + + + Replaces magic numbers with named constants + + ```python + # Before + def calculate_tax(amount): + return amount * 0.18 + + # After + TAX_RATE = 0.18 + + def calculate_tax(amount): + return amount * TAX_RATE + ``` + + + + Adds missing docstrings to functions and classes + + ```python + # Before + def calculate_total(items): + return sum(item.price for item in items) + + # After + def calculate_total(items): + """Calculate total price for list of items. 
+ + Args: + items: List of items with price attribute + + Returns: + Total sum of item prices + """ + return sum(item.price for item in items) + ``` + + + + Refactors complex conditional expressions + + ```python + # Before + if not (x < 10 or x > 20): + process() + + # After + if 10 <= x <= 20: + process() + ``` + + + + Reduces function parameters using dataclasses or dicts + + ```python + # Before + def create_user(name, email, age, city, country): + pass + + # After + from dataclasses import dataclass + + @dataclass + class UserData: + name: str + email: str + age: int + city: str + country: str + + def create_user(user: UserData): + pass + ``` + + + +## Risk Levels + +Refactoring suggestions include risk scores: + + + + Formatting, imports only + + + Documentation, constants + + + Logic changes + + + Complex transformations + + + +## Filter by Risk Level + +```bash +# Only apply safe refactorings +refactron refactor myfile.py --risk-level safe + +# Apply safe and low-risk refactorings +refactron refactor myfile.py --risk-level low +``` + +## Specific Refactorings + +Apply only specific types of refactoring: + +```bash +# Only extract constants +refactron refactor myfile.py --type extract_constant + +# Multiple types +refactron refactor myfile.py --type extract_constant --type add_docstring +``` + +## Python API + +```python +from refactron import Refactron + +refactron = Refactron() + +# Preview refactoring +result = refactron.refactor("myfile.py", preview=True) +result.show_diff() + +# Apply specific refactorings +result = refactron.refactor( + "myfile.py", + preview=False, + operation_types=["extract_constant", "add_docstring"] +) + +if result.success: + print(f"Applied {len(result.operations)} refactorings") + print(f"Backup: {result.backup_path}") +``` + +## Auto-Fix + +The `autofix` command applies safe, automated fixes: + +```bash +# Preview auto-fixes +refactron autofix myfile.py --preview + +# Apply auto-fixes +refactron autofix myfile.py --apply +``` + 
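Internally, applying a fix of this kind comes down to a backup-then-atomic-write pattern: copy the original aside, compute the fixed text in memory, write it to a temporary file, and swap it into place in one step. The sketch below shows that pattern using only the standard library; the helper name and exact flow are illustrative, not Refactron's actual internals:

```python
import os
import shutil
import tempfile


def apply_fix_safely(path, transform):
    """Back up `path`, then atomically replace its contents with
    transform(text). Returns the backup path so the change can be undone."""
    backup = path + ".bak"
    shutil.copy2(path, backup)                       # 1. back up the original first
    with open(path, encoding="utf-8") as f:
        fixed = transform(f.read())                  # 2. compute the fix in memory
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(fixed)                               # 3. write to a temp file
    os.replace(tmp, path)                            # 4. atomic swap into place
    return backup
```

Rolling back is then just restoring the backup file over the target, which is essentially what the rollback commands in the next section automate.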
+Auto-fix includes 14 automated fixers for common issues. + +## Backup and Rollback + +Every refactoring creates automatic backups: + +### List Rollback Sessions + +```bash +refactron rollback --list +``` + +### Rollback Specific Session + +```bash +refactron rollback --session +``` + +### Python API Rollback + +```python +from refactron.autofix.file_ops import FileOperations + +file_ops = FileOperations() + +# Rollback one file +file_ops.rollback_file("myfile.py") + +# Rollback all changes +file_ops.rollback_all() +``` + +## Refactoring Workflow + + + + First, analyze your code to identify issues + ```bash + refactron analyze myproject/ + ``` + + + + Preview refactoring suggestions + ```bash + refactron refactor myproject/ --preview + ``` + + + + Review the diff output and risk scores + + + + Apply refactorings you want to keep + ```bash + refactron refactor myproject/ --type extract_constant + ``` + + + + Run your tests to ensure nothing broke + + + + Rollback if something went wrong + ```bash + refactron rollback --session + ``` + + + +## Configuration + +Configure refactoring behavior in `.refactron.yaml`: + +```yaml .refactron.yaml +# Enable specific refactorers +enabled_refactorers: + - extract_constant + - add_docstring + - simplify_conditionals + - reduce_parameters + +# Risk tolerance +max_risk_score: 0.5 # Don't apply refactorings with risk > 0.5 + +# Backup settings +create_backups: true +backup_dir: .refactron/backups +``` + +## Pattern Learning Integration + +Refactron learns from your refactoring decisions: + +```bash +# Refactor with pattern learning +refactron refactor myfile.py --feedback + +# Provide explicit feedback +refactron feedback --action accepted +``` + + + Learn more about pattern learning in the [Pattern Learning Guide](/guides/pattern-learning) + + +## Best Practices + + + + Use `--preview` to see changes before applying them + + + + Commit your code before refactoring for easy rollback + + + + Run your test suite after applying 
refactorings + + + + Begin with safe refactorings, then gradually increase risk tolerance + + + + Refactor small portions at a time rather than entire codebase + + + +## Next Steps + + + + Explore AI-powered refactoring + + + Learn how Refactron improves over time + + diff --git a/docs/images/cli_dashboard.png b/docs/images/cli_dashboard.png deleted file mode 100644 index d7c83ad..0000000 Binary files a/docs/images/cli_dashboard.png and /dev/null differ diff --git a/docs/index.html b/docs/index.html deleted file mode 100644 index a42682d..0000000 --- a/docs/index.html +++ /dev/null @@ -1,1134 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - Refactron - The Intelligent Code Refactoring Transformer for Python - - - - - - - - - - - - - -
-  [Deleted file: docs/index.html, the former GitHub Pages landing page. Condensed recoverable content:]
-  Hero: "Refactron: The Intelligent Code Refactoring Transformer. Eliminate technical debt,
-  modernize legacy code, and automate refactoring with AI-powered intelligence. Built for
-  Python developers who care about code quality and maintainability."
-  Badges: CI status, coverage, Python version, PyPI version, license.
-  Stats: 8 advanced analyzers, 5 smart refactorers, 81% test coverage, 135 tests passing.
-  Features: Comprehensive Analysis (security vulnerability scanning, code smell detection,
-  complexity and maintainability metrics, type hint analysis, dead code detection,
-  dependency and import analysis); Intelligent Refactoring (extract magic numbers to
-  constants, reduce function parameters, simplify nested conditionals, auto-generate
-  docstrings, extract complex methods, risk scoring for each change); Rich Reporting
-  (terminal output, JSON and HTML export, before/after previews, CI/CD integration,
-  technical debt tracking, trend analysis).
-  Quick start: pip install refactron; refactron analyze your_project/;
-  refactron refactor your_file.py --preview; Python API via Refactron(config).
-  Use cases: enterprise legacy code, startup MVPs, team onboarding, CI/CD integration,
-  open source projects, learning and teaching.
-  Closing CTA: "Ready to Transform Your Code? Join developers using Refactron to write
-  cleaner, safer, more maintainable Python code."
- - - - - - - - diff --git a/docs/introduction.mdx b/docs/introduction.mdx new file mode 100644 index 0000000..ed51d03 --- /dev/null +++ b/docs/introduction.mdx @@ -0,0 +1,94 @@ +--- +title: Introduction +description: 'Welcome to Refactron - A Python library for intelligent code analysis and refactoring' +--- + + + + +## What is Refactron? + +Refactron is a powerful Python library that analyzes your code for security vulnerabilities, performance issues, code smells, and complexity problems. It provides intelligent refactoring suggestions with safety previews and supports automated code fixes. + + + + Get up and running in minutes + + + Explore command-line tools + + + Python API documentation + + + Learn best practices + + + +## Key Features + + + + Detect SQL injection, code injection, hardcoded secrets, and SSRF vulnerabilities + + + + Identify magic numbers, long functions, excessive parameters, and deep nesting + + + + LLM orchestration with RAG (Retrieval-Augmented Generation) for context-aware refactoring + + + + Learn from your project-specific coding standards and improve over time + + + + AST caching, incremental analysis, and parallel processing for large codebases + + + + 14 automated fixers with configurable safety levels and rollback support + + + +## Why Choose Refactron? + + + + Analyzes security, quality, complexity, and performance in one tool + + + AI-powered suggestions with pattern learning capabilities + + + Preview changes, risk scoring, and Git-integrated rollback system + + + +## Ready to Get Started? 
+ + + Install Refactron and start analyzing your code + diff --git a/docs/images/Refactron-logo-TM.png b/docs/logo/Refactron-logo-TM.png similarity index 100% rename from docs/images/Refactron-logo-TM.png rename to docs/logo/Refactron-logo-TM.png diff --git a/docs/images/Screenshot 2026-01-29 at 17.45.00.png b/docs/logo/Refactron_CLI.png similarity index 100% rename from docs/images/Screenshot 2026-01-29 at 17.45.00.png rename to docs/logo/Refactron_CLI.png diff --git a/docs/logo/image.png b/docs/logo/image.png new file mode 100644 index 0000000..a7a0fab Binary files /dev/null and b/docs/logo/image.png differ diff --git a/docs/logo/logo.png b/docs/logo/logo.png new file mode 100644 index 0000000..d89f4cc Binary files /dev/null and b/docs/logo/logo.png differ diff --git a/docs/logo/logo.svg b/docs/logo/logo.svg new file mode 100644 index 0000000..1a6bb63 --- /dev/null +++ b/docs/logo/logo.svg @@ -0,0 +1,11 @@ + + + + + + + + + + + diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx new file mode 100644 index 0000000..35f585d --- /dev/null +++ b/docs/quickstart.mdx @@ -0,0 +1,217 @@ +--- +title: Quick Start +description: 'Get started with Refactron in minutes' +--- + +## Installation + +Install Refactron using pip: + +```bash +pip install refactron +``` + + + Refactron requires Python 3.8 or higher + + +## Complete Workflow + +Follow this workflow to get the most out of Refactron: + + + + Authenticate with Refactron to unlock AI features: + ```bash + refactron login + ``` + + + + Set up Refactron in your project: + ```bash + refactron init + ``` + Choose a template (base, django, fastapi, or flask) to get started quickly. + + + + Connect your GitHub repository for enhanced features: + ```bash + refactron repo connect + ``` + + + + Run comprehensive code analysis: + ```bash + refactron analyze . 
--detailed + ``` + + + + Generate AI-powered refactoring suggestions: + ```bash + refactron suggest myfile.py + ``` + + + + Preview and apply refactoring changes: + ```bash + # Preview changes first + refactron refactor myfile.py --preview + + # Apply when ready + refactron refactor myfile.py --apply + ``` + + + + Create a comprehensive technical debt report: + ```bash + refactron report . --format html -o report.html + ``` + + + + Rollback changes if something goes wrong: + ```bash + refactron rollback + ``` + + + +## Quick Commands + +### Analyze Code + +```bash +# Analyze a file +refactron analyze mycode.py + +# Analyze a directory +refactron analyze myproject/ + +# Get detailed analysis +refactron analyze . --detailed +``` + +### Refactor Code + +```bash +# Preview refactoring suggestions +refactron refactor myfile.py --preview + +# Apply automated fixes +refactron autofix myfile.py --apply + +# Apply specific refactoring +refactron refactor myfile.py --apply +``` + +### AI-Powered Features + +```bash +# Get AI suggestions +refactron suggest myfile.py --query "How can I improve this?" + +# Generate documentation +refactron document myfile.py + +# Use RAG for context-aware suggestions +refactron rag query "How to optimize database queries?" 
+``` + +## Python API + +Use Refactron programmatically in your Python code: + +```python +from refactron import Refactron + +# Initialize Refactron +refactron = Refactron() + +# Analyze code +analysis = refactron.analyze("path/to/code.py") +print(analysis.report()) + +# Preview refactoring +result = refactron.refactor("path/to/code.py", preview=True) +result.show_diff() + +# Apply refactoring +result = refactron.refactor("path/to/code.py", preview=False) +``` + +## Example Output + +When you run an analysis, you'll see: + + + +```bash Terminal Output +✓ Analyzing myproject/ + Files analyzed: 25 + Issues found: 12 + + CRITICAL (2): + - SQL injection vulnerability (line 45) + - Hardcoded secret detected (line 78) + + ERROR (4): + - High cyclomatic complexity (line 120) + - Deep nesting detected (line 156) + + WARNING (6): + - Magic number usage (line 23) + - Missing type hints (line 67) +``` + +```python Python API +{ + "files_analyzed": 25, + "total_issues": 12, + "by_severity": { + "CRITICAL": 2, + "ERROR": 4, + "WARNING": 6 + } +} +``` + + + +## Next Steps + + + + Customize analyzers and thresholds + + + Learn about analysis features + + + Master refactoring workflows + + + Explore AI-powered capabilities + + diff --git a/docs/resources/changelog.mdx b/docs/resources/changelog.mdx new file mode 100644 index 0000000..2daaf17 --- /dev/null +++ b/docs/resources/changelog.mdx @@ -0,0 +1,168 @@ +--- +title: Changelog +description: 'Release history and version updates' +--- + +## v1.0.15 (2026-02-08) + +### Added + +**LLM/RAG Integration**: Full semantic intelligence suite +- LLM orchestration using Llama 3 via Groq +- ChromaDB for semantic code indexing +- `refactron suggest` - AI-powered refactoring suggestions +- `refactron document` - Automated docstring generation + +**Repository Management**: +- `refactron repo list` - List connected repositories +- `refactron repo connect` - Connect local workspaces +- `refactron repo disconnect` - Disconnect repositories + 
+**Observability & Metrics**: +- `refactron metrics` - Performance statistics +- `refactron serve-metrics` - Prometheus endpoint +- `refactron telemetry` - Opt-in usage analytics + +**CI/CD Integration**: +- `refactron ci` - Generate GitHub Actions and GitLab CI configs + +### Fixed +- Security hardened URL sanitization +- Python 3.8 compatibility issues +- Missing GroqClient dependency + +--- + +## v1.0.14 (2026-01-30) + +### Changed +- Improved CLI startup sequence +- Better dependency management + +--- + +## v1.0.13 (2026-01-30) + +### Added + +**Pattern Learning System**: +- Pattern learning engine for project-specific patterns +- Suggestion ranking based on historical acceptance +- Interactive feedback collection + +**CLI Enhancements**: +- Welcome animation with system checks +- Interactive dashboard +- Enhanced help formatter + +**Performance**: +- AST caching for faster analysis +- Incremental analysis (only changed files) +- Parallel processing support + +**Backup & Rollback**: +- Git-integrated safety system +- Automatic backups before refactoring + +### Fixed +- Numerous linting and type-checking issues +- Python 3.8 compatibility improvements + +--- + +## v1.0.1 (2025-12-28) + +### Added + +**New Analyzers**: +- Security: SQL injection, SSRF, weak crypto +- Performance: N+1 queries, inefficient loops +- Complexity: Nested loops, method chains + +**False Positive Reduction**: +- Context-aware security analysis +- Confidence scoring for issues +- Rule whitelisting + +**Test Coverage**: +- 96.8% coverage for analyzer modules +- Comprehensive edge case tests + +--- + +## v1.0.0 (2025-10-27) + +### 🎉 Major Release - Production Ready! 
+ +**Auto-fix System**: +- 14 automatic fixers for common issues +- Atomic file writes for safety +- Automatic backups and rollback +- Safety levels (safe/low/moderate/high) + +**Fixers Include**: +- Remove unused imports +- Extract magic numbers +- Add missing docstrings +- Simplify boolean expressions +- Convert to f-strings + +### Test Coverage +- 135 tests (81% coverage) +- Production-ready quality + +--- + +## v0.1.0-beta (2025-10-25) + +### 🎉 Initial Beta Release + +**Core Features**: +- Plugin-based analyzer system +- Refactoring suggestion engine +- Risk scoring (0.0-1.0 scale) +- YAML configuration +- Rich CLI interface + +**Analyzers** (8 total): +- Complexity, Code Smell, Security +- Dependency, Dead Code, Type Hints +- Extract Method, Performance + +**Refactorers** (6 total ): +- Extract constants +- Reduce parameters +- Simplify conditionals +- Add docstrings +- Extract methods + +### Performance +- 4,300 lines/second analysis speed +- CI/CD ready +- Fast pre-commit hooks + +--- + +## Version History + +| Version | Date | Status | Highlights | +|---------|------|--------|------------| +| 1.0.15 | 2026-02-08 | **Stable** | AI/LLM integration, RAG system | +| 1.0.14 | 2026-01-30 | **Stable** | CLI improvements | +| 1.0.13 | 2026-01-30 | **Stable** | Pattern learning, performance | +| 1.0.1 | 2025-12-28 | **Stable** | New analyzers, test coverage | +| 1.0.0 | 2025-10-27 | **Stable** | Auto-fix system | +| 0.1.0 | 2025-10-25 | **Beta** | Initial release | + +--- + +## What's Next + +### Upcoming Features +- Advanced pattern recognition +- VS Code extension +- PyCharm plugin +- Custom rule engine +- Performance profiling improvements + +Stay tuned for more updates! 
diff --git a/docs/resources/faq.mdx b/docs/resources/faq.mdx new file mode 100644 index 0000000..a7e83c7 --- /dev/null +++ b/docs/resources/faq.mdx @@ -0,0 +1,231 @@ +--- +title: FAQ +description: 'Frequently asked questions' +--- + +## General + + + + Refactron is an intelligent Python code refactoring tool that analyzes your code for issues and suggests automated improvements with safety guarantees. + + + + Yes! Refactron is open source and free to use. Some advanced features require an API key for authentication. + + + + Refactron supports Python 3.8 and above. + + + + Absolutely! Refactron is designed for CI/CD integration. Use `refactron ci` to generate configuration templates. + + + +## Installation & Setup + + + + Install via pip: + ```bash + pip install refactron + ``` + Then login to authenticate: + ```bash + refactron login + ``` + + + + Refactron works out of the box with sensible defaults. For customization, run: + ```bash + refactron init + ``` + This creates a `.refactron.yaml` configuration file. + + + + Refactron automatically installs all required dependencies via pip. For AI features, you'll need a Groq API key (free tier available). + + + +## Usage + + + + Run: + ```bash + refactron analyze myproject/ + ``` + This will scan your code and report issues by severity. + + + + Refactron includes multiple safety features: + - **Preview mode** - See changes before applying + - **Risk scores** - Each suggestion has a risk rating + - **Automatic backups** - All changes are backed up + - **Rollback** - Easily undo changes + + + + Yes! 
Use the `--type` flag: + ```bash + refactron refactor myfile.py --type extract_constant + ``` + Or configure in `.refactron.yaml`: + ```yaml + enabled_refactorers: + - extract_constant + - add_docstring + ``` + + + + First, set your Groq API key: + ```bash + export GROQ_API_KEY='your-key' + ``` + Then index your codebase: + ```bash + refactron rag index + ``` + Now use AI-powered refactoring: + ```bash + refactron refactor myfile.py --ai + ``` + + + +## AI & RAG + + + + By default, Refactron uses Groq (Llama 3). You can also configure custom LLM providers. + + + + When using AI features with cloud LLM providers (like Groq), code snippets are sent for analysis. The RAG indexing happens locally. You can use local LLM providers for complete privacy. + + + + Re-index after significant code changes: + ```bash + refactron rag index --update + ``` + For complete re-indexing: + ```bash + refactron rag index --force + ``` + + + +## Pattern Learning + + + + Refactron learns from your feedback on refactoring suggestions. When you accept or reject suggestions, it adapts to your project's style and preferences. + + + + Yes, in `.refactron.yaml`: + ```yaml + enable_pattern_learning: false + ``` + + + + By default in `.refactron/patterns/` in your project root. You can customize this: + ```yaml + pattern_storage_dir: /custom/path + ``` + + + +## Troubleshooting + + + + Run `refactron login` to authenticate. Check your authentication status: + ```bash + refactron auth status + ``` + + + + Enable performance optimizations: + ```yaml + enable_ast_cache: true + enable_parallel_processing: true + max_parallel_workers: 4 + ``` + + + + You can: + 1. Provide feedback to pattern learning + 2. Ignore specific issues with inline comments: + ```python + x = 42 # refactron: ignore magic-number + ``` + 3. 
Exclude files in `.refactron.yaml`: + ```yaml + exclude_patterns: + - "*/tests/*" + ``` + + + + List available rollback sessions: + ```bash + refactron rollback --list + ``` + Rollback specific session: + ```bash + refactron rollback --session + ``` + + + + Create the index first: + ```bash + refactron rag index + ``` + + + +## Advanced + + + + Yes! Extend the `BaseAnalyzer` class: + ```python + from refactron.analyzers.base_analyzer import BaseAnalyzer + + class MyAnalyzer(BaseAnalyzer): + def analyze_file(self, file_path, ast_tree): + # Your analysis logic + return issues + ``` + + + + Yes! Refactron integrates well with: + - **Black** - Code formatting + - **isort** - Import sorting + - **pytest** - Testing + - **pre-commit** - Git hooks + - **Prometheus** - Monitoring + + + + Check out our [GitHub repository](https://github.com/Refactron-ai/Refactron_lib) and see the CONTRIBUTING.md guide! + + + +## Support + + + Can't find your answer? Open an issue on [GitHub](https://github.com/Refactron-ai/Refactron_lib/issues) or check the [full documentation](/introduction). + diff --git a/documentation/docs/ARCHITECTURE.md b/documentation/docs/ARCHITECTURE.md new file mode 100644 index 0000000..af9af7d --- /dev/null +++ b/documentation/docs/ARCHITECTURE.md @@ -0,0 +1,364 @@ +# Refactron Architecture + +## Overview + +Refactron is designed as a modular, extensible Python library for code analysis and refactoring. The architecture follows clean separation of concerns with well-defined interfaces. 
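The "well-defined interfaces" point can be illustrated with a stripped-down toy version of the pattern: an abstract base class that every component implements, plus a list of enabled implementations that the pipeline iterates over. Names here are illustrative only; Refactron's real `BaseAnalyzer` and `BaseRefactorer` interfaces are shown later in this document.

```python
from abc import ABC, abstractmethod


class Check(ABC):
    """Toy stand-in for an analyzer-style interface."""

    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def run(self, source: str) -> list: ...


class LongLineCheck(Check):
    """Flags lines longer than 79 characters."""

    @property
    def name(self) -> str:
        return "long_line"

    def run(self, source: str) -> list:
        return [i for i, line in enumerate(source.splitlines(), 1) if len(line) > 79]


# The orchestrator depends only on the interface, so new checks can be
# added without touching the pipeline itself.
enabled = [LongLineCheck()]
report = {check.name: check.run("x = 1\n" + "y " * 50) for check in enabled}
print(report)  # {'long_line': [2]}
```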
+ +## Project Structure + +``` +refactron/ +├── core/ # Core functionality +│ ├── refactron.py # Main entry point +│ ├── config.py # Configuration management +│ ├── models.py # Data models +│ ├── analysis_result.py # Analysis results +│ └── refactor_result.py # Refactoring results +├── analyzers/ # Code analyzers +│ ├── base_analyzer.py # Abstract base class +│ ├── complexity_analyzer.py +│ └── code_smell_analyzer.py +├── refactorers/ # Code refactorers +│ ├── base_refactorer.py # Abstract base class +│ └── extract_method_refactorer.py +└── cli.py # Command-line interface +``` + +## Core Components + +### 1. Refactron (Main Class) + +The `Refactron` class is the main entry point that orchestrates analysis and refactoring: + +```python +class Refactron: + def __init__(self, config: Optional[RefactronConfig] = None) + def analyze(self, target: Union[str, Path]) -> AnalysisResult + def refactor(self, target: Union[str, Path], ...) -> RefactorResult +``` + +**Responsibilities:** +- Initialize analyzers and refactorers +- Coordinate file discovery +- Run analysis and refactoring operations +- Return structured results + +### 2. Configuration System + +`RefactronConfig` provides flexible configuration: + +```python +@dataclass +class RefactronConfig: + enabled_analyzers: List[str] + enabled_refactorers: List[str] + max_function_complexity: int + # ... other settings +``` + +**Features:** +- Default configuration +- YAML file support +- Per-project customization + +### 3. Data Models + +Core data structures defined in `models.py`: + +- **`CodeIssue`**: Represents detected problems +- **`FileMetrics`**: Metrics for a single file +- **`RefactoringOperation`**: Proposed code changes +- **`IssueLevel`**: Severity enumeration +- **`IssueCategory`**: Issue type enumeration + +## Analyzers + +Analyzers detect code issues and patterns. 
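To make that concrete, here is a minimal, self-contained sketch of the kind of check an analyzer performs: a magic-number detector written directly against the stdlib `ast` module. It deliberately bypasses Refactron's classes (the real analyzer interface follows below), and the whitelist of exempt values is illustrative:

```python
import ast

# Values conventionally exempt from the "magic number" rule (illustrative)
ALLOWED = {0, 1, -1}


def find_magic_numbers(source_code):
    """Return (line, value) pairs for numeric literals that could be
    extracted into named constants."""
    findings = []
    for node in ast.walk(ast.parse(source_code)):
        if (
            isinstance(node, ast.Constant)
            and isinstance(node.value, (int, float))
            and not isinstance(node.value, bool)
            and node.value not in ALLOWED
        ):
            findings.append((node.lineno, node.value))
    return findings


sample = "def calculate_tax(amount):\n    return amount * 0.18\n"
print(find_magic_numbers(sample))  # [(2, 0.18)]
```

A real analyzer wraps exactly this kind of AST walk behind the `BaseAnalyzer` interface and reports each hit as a structured `CodeIssue` instead of a bare tuple.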
+ +### Base Analyzer + +All analyzers inherit from `BaseAnalyzer`: + +```python +class BaseAnalyzer(ABC): + @abstractmethod + def analyze(self, file_path: Path, source_code: str) -> List[CodeIssue]: + pass + + @property + @abstractmethod + def name(self) -> str: + pass +``` + +### Built-in Analyzers + +1. **ComplexityAnalyzer** + - Cyclomatic complexity + - Function length + - Maintainability index + +2. **CodeSmellAnalyzer** + - Too many parameters + - Deep nesting + - Magic numbers + - Missing docstrings + - Duplicate code patterns + +### Creating a Custom Analyzer + +```python +from refactron.analyzers.base_analyzer import BaseAnalyzer +from refactron.core.models import CodeIssue, IssueLevel, IssueCategory + +class MyAnalyzer(BaseAnalyzer): + @property + def name(self) -> str: + return "my_analyzer" + + def analyze(self, file_path: Path, source_code: str) -> List[CodeIssue]: + issues = [] + + # Your analysis logic here + tree = ast.parse(source_code) + # ... analyze the AST + + # Create issues + issue = CodeIssue( + category=IssueCategory.CODE_SMELL, + level=IssueLevel.WARNING, + message="Problem detected", + file_path=file_path, + line_number=10, + suggestion="Fix it this way", + rule_id="MY001", + ) + issues.append(issue) + + return issues +``` + +**Register in config:** +```yaml +enabled_analyzers: + - my_analyzer +``` + +## Refactorers + +Refactorers propose and apply code transformations. + +### Base Refactorer + +All refactorers inherit from `BaseRefactorer`: + +```python +class BaseRefactorer(ABC): + @abstractmethod + def refactor(self, file_path: Path, source_code: str) -> List[RefactoringOperation]: + pass + + @property + @abstractmethod + def operation_type(self) -> str: + pass +``` + +### Built-in Refactorers + +1. 
**ExtractMethodRefactorer** + - Identifies opportunities to extract methods + - Suggests breaking down complex functions + +### Creating a Custom Refactorer + +```python +from refactron.refactorers.base_refactorer import BaseRefactorer +from refactron.core.models import RefactoringOperation + +class MyRefactorer(BaseRefactorer): + @property + def operation_type(self) -> str: + return "my_refactoring" + + def refactor(self, file_path: Path, source_code: str) -> List[RefactoringOperation]: + operations = [] + + # Your refactoring logic here + tree = ast.parse(source_code) + # ... find refactoring opportunities + + operation = RefactoringOperation( + operation_type=self.operation_type, + file_path=file_path, + line_number=42, + description="Apply my refactoring", + old_code="old code", + new_code="new code", + risk_score=0.3, # 0.0 = safe, 1.0 = risky + reasoning="This improves readability", + ) + operations.append(operation) + + return operations +``` + +## Analysis Pipeline + +1. **File Discovery** + - Scan directories for Python files + - Apply include/exclude patterns + +2. **File Analysis** + - Read source code + - Parse into AST + - Run each enabled analyzer + - Collect issues + +3. **Metric Calculation** + - Lines of code + - Comment lines + - Complexity metrics + +4. **Result Aggregation** + - Combine all issues + - Generate summary statistics + - Create report + +## Refactoring Pipeline + +1. **Analysis Phase** + - Run analyzers to understand code + +2. **Opportunity Detection** + - Each refactorer identifies opportunities + - Calculate risk scores + +3. **Operation Generation** + - Create RefactoringOperation objects + - Include before/after code + +4. **Preview/Application** + - Show diff (preview mode) + - Apply changes (apply mode) + - Create backups + +## Extension Points + +### 1. 
Custom Rules + +Add rules via configuration: + +```yaml +custom_rules: + max_class_methods: 20 + enforce_type_hints: true +``` + +Access in analyzers: +```python +custom_value = self.config.custom_rules.get("max_class_methods", 20) +``` + +### 2. Plugin System (Future) + +Planned plugin architecture: + +```python +from refactron.plugins import RefactronPlugin + +class MyPlugin(RefactronPlugin): + def register(self): + return { + "analyzers": [MyAnalyzer], + "refactorers": [MyRefactorer], + } +``` + +### 3. Custom Reporters + +Currently supports: text, JSON, HTML +Future: PDF, Markdown, etc. + +## CLI Architecture + +The CLI (`cli.py`) provides command-line interface: + +```bash +refactron analyze +refactron refactor +refactron report +refactron init +``` + +Built with `click` for: +- Argument parsing +- Help text +- Option handling + +Uses `rich` for: +- Beautiful terminal output +- Tables +- Progress indicators +- Syntax highlighting + +## Design Principles + +1. **Modularity**: Each component has single responsibility +2. **Extensibility**: Easy to add new analyzers/refactorers +3. **Configuration**: Customizable via config files +4. **Safety**: Preview before apply, risk scoring +5. **Clarity**: Clear error messages and suggestions +6. **Performance**: Efficient AST parsing, caching where appropriate + +## Technology Stack + +- **libcst**: Concrete syntax tree (preserves formatting) +- **ast**: Abstract syntax tree (analysis) +- **radon**: Complexity metrics +- **astroid**: Advanced AST analysis +- **click**: CLI framework +- **rich**: Terminal UI +- **pyyaml**: Configuration files +- **pytest**: Testing + +## Testing Strategy + +1. **Unit Tests**: Test individual components +2. **Integration Tests**: Test end-to-end workflows +3. **Example-Based Tests**: Use real code examples +4. **Coverage**: Aim for >80% code coverage + +## Future Enhancements + +1. **Multi-language Support**: JavaScript, TypeScript, etc. +2. **AI Integration**: LLM-powered suggestions +3. 
**IDE Plugins**: VS Code, PyCharm +4. **CI/CD Integration**: GitHub Actions, GitLab CI +5. **Learning System**: Adapt to project patterns +6. **Batch Processing**: Parallel analysis +7. **Auto-fix**: Automatically apply safe refactorings +8. **Custom Rule Engine**: DSL for defining rules + +## Performance Considerations + +- Use generators for large file sets +- Cache parsed ASTs when possible +- Parallelize analysis across files +- Lazy load analyzers +- Incremental analysis (only changed files) + +## Security Considerations + +- Never execute analyzed code +- Sandbox refactoring operations +- Validate file paths +- Limit file sizes +- Rate limit external API calls (future) + +--- + +For more details, see: +- [API Documentation](docs/api.md) +- [Contributing Guide](CONTRIBUTING.md) +- [Examples](examples/) diff --git a/documentation/docs/CHANGELOG.md b/documentation/docs/CHANGELOG.md new file mode 100644 index 0000000..2fc26da --- /dev/null +++ b/documentation/docs/CHANGELOG.md @@ -0,0 +1,365 @@ +# Changelog + +All notable changes to Refactron will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +### v1.0.15 (2026-02-08) + +#### Added +- **LLM/RAG Integration**: Full semantic intelligence suite using Llama 3 (via Groq) and ChromaDB. +- **Repository Management**: New `repo` command group (`list`, `connect`, `disconnect`) for managing local workspaces and auto-indexing. +- **AI-Powered Commands**: + - `refactron suggest`: Contextual refactoring proposals. + - `refactron document`: Automated Google-style docstring generation. +- **Observability & Metrics**: + - `refactron metrics`: Detailed performance and analyzer statistics. + - `refactron serve-metrics`: Prometheus-compatible endpoint. + - `refactron telemetry`: Opt-in anonymous usage statistics. 
- **CI/CD Integration**: `refactron ci` command to generate GitHub Actions and GitLab CI configurations.
- **Improved CLI UI**: New interactive dashboard and file selector.

#### Fixed
- **Security**: Hardened URL sanitization in workspace management to prevent injection attacks.
- **Compatibility**: Resolved Python 3.8 issues in RAG parsing, the Tree-sitter API, and fingerprinting.
- **Reliability**: Added the missing `GroqClient` dependency, fixed its import and clarified error messages in the RAG indexer, and cleaned up project-wide linting issues.

#### Changed
- Improved CLI startup sequence and progress feedback for RAG indexing operations.
- Updated documentation files to reflect the new AI-powered features.
- Applied project-wide linting and code style fixes.

---

## [1.0.14] - 2026-01-30

### Changed
- Refactored the CLI startup sequence to display the animation before the authentication prompt.
- Improved dependency management (added `astroid`).

## [1.0.13] - 2026-01-30

### Added

#### Pattern Learning System
- **Pattern Learning Engine** - Foundation for identifying and learning project-specific refactoring patterns.
- **Project-Specific Rule Tuner** - CLI commands to tune refactoring rules based on project needs.
- **Suggestion Ranking System** - Intelligent ranking of refactoring suggestions based on risk and impact.
- **Feedback Collection System** - Interactive feedback loop to improve pattern recognition over time.

#### CLI Enhancements
- **Enhanced Welcome Flow** - Sleek startup animation with system checks and rotating quick tips.
- **Interactive Dashboard** - Minimal "Info Center" for quick access to help and version information.
- **Custom Help Formatter** - Beautifully formatted, numbered help output for better command discovery.
- **Authentication Enforcement** - Mandatory authentication for all core commands (analyze, refactor, etc.).
+ +#### Performance & Reliability +- **AST Cache & Incremental Analysis** - Faster analysis by only processing changed files. +- **Parallel Processing** - Multi-threaded analysis for large codebases. +- **Backup & Rollback System** - Git-integrated safety system to undo refactoring changes. +- **Enhanced Error Handling** - Custom exceptions and graceful degradation for a more robust experience. + +#### Configuration & Integration +- **Advanced Configuration Management** - Support for profiles, validation, and project-specific settings. +- **CI/CD Integration Templates** - Pre-configured templates for GitHub Actions and other CI/CD platforms. +- **Prometheus Metrics** - Built-in support for exporting metrics to Prometheus. + +### Fixed +- Resolved numerous linting and type-checking issues across the codebase. +- Improved Python 3.8 compatibility with explicit type hints. +- Optimized project type detection for large codebases. +- Fixed critical issues in feedback persistence and test isolation. 
+ +--- + +### Planned +- AI-powered pattern recognition +- VS Code extension +- PyCharm plugin +- Advanced custom rule engine +- Performance profiling + +--- + +## [1.0.1] - 2025-12-28 + +### Added + +#### Analyzer Enhancements (#37) +- **New Security Patterns**: + - SQL parameterization detection (SEC009) - Identifies unsafe SQL string formatting + - SSRF (Server-Side Request Forgery) vulnerability detection (SEC010) + - Insecure random number generation detection (SEC011) + - Weak SSL/TLS configuration detection (SEC012, SEC013) + - Cryptographic weaknesses detection (weak hashing algorithms) +- **Complexity Analyzer Improvements**: + - Nested loop depth detection (C003) - Flags deeply nested loops that impact performance + - Method call chain complexity detection (C004) - Identifies overly long method chaining +- **Performance Analyzer** - New analyzer detecting performance antipatterns: + - N+1 query detection (P001) - Identifies database queries inside loops + - Inefficient list comprehensions (P002, P003) - Detects patterns that can be optimized + - Unnecessary iterations (P004) - Finds redundant loops and iterations + - Inefficient string concatenation (P005) - Detects string building in loops + - Redundant list conversions (P006) - Finds unnecessary list() wrappers +- **Code Smell Analyzer Enhancements**: + - Improved unused import detection (S006) - More accurate analysis of import usage + - Repeated code block detection - Identifies duplicate code patterns within functions + +#### False Positive Reduction (#39, #40) +- **Context-Aware Security Analysis** - Adjusts confidence scores based on file context: + - Test files get 60% confidence multiplier for certain rules (eval, pickle, random) + - Example/demo files get 70% confidence multiplier + - Reduces false positives for legitimate test code +- **Rule Whitelisting** - Configuration-based whitelisting of security rules for specific file patterns +- **False Positive Tracking System** - 
`FalsePositiveTracker` class to learn and track known false positives +- **Confidence Scores** - All security issues now include confidence scores (0.0-1.0) indicating detection certainty +- **Minimum Confidence Filtering** - Filter out low-confidence issues via configuration + +#### Test Coverage Improvements (#47, #48) +- **Comprehensive Edge Case Tests** - Added extensive edge case coverage for all analyzers +- **Real-World Test Datasets** - Test suites with real-world problematic code patterns +- **Integration Tests** - Improved integration test coverage across analyzer modules +- **Achieved 96.8% test coverage** for analyzer modules + +#### Developer Experience +- Pre-commit hooks configuration for automated code quality checks +- SECURITY.md with comprehensive security policy and vulnerability reporting process +- CONTRIBUTING_QUICKSTART.md for fast contributor onboarding (5-minute setup) +- Performance benchmarking suite in benchmarks/ directory +- Pre-commit GitHub Actions workflow for CI/CD +- Enhanced README badges (Black, pre-commit, security scanning) +- Comprehensive documentation for false positive reduction features + +### Changed +- Formatted 10 files with Black in examples/ and real_world_tests/ directories +- Updated README with accurate test coverage (84%) and test count (135) +- Improved contributing documentation with quick start guide +- Updated CI/CD metrics in README +- Security analyzer now uses context-aware confidence scoring +- Enhanced code smell detection accuracy for unused imports + +### Fixed +- Fixed flake8 violations in simplify_conditionals_refactorer.py +- Fixed flake8 violations in reduce_parameters_refactorer.py +- Reduced total flake8 issues from 294 to ~17 (94% improvement) +- Fixed code formatting issues in examples directory +- Improved security analyzer accuracy by reducing false positives in test files + +--- + +## [1.0.0] - 2025-10-27 + +### 🎉 Major Release - Production Ready! 
+ +First stable release with complete auto-fix system and Phase 3 features. + +### Added + +#### Phase 3: Auto-fix System +- **Auto-fix Engine** - Intelligent automatic code fixing with safety guarantees +- **14 Automatic Fixers** - Fix common issues automatically + - 🟢 `remove_unused_imports` - Remove unused import statements (risk: 0.0) + - 🟢 `sort_imports` - Sort imports using isort (risk: 0.0) + - 🟢 `remove_trailing_whitespace` - Clean whitespace (risk: 0.0) + - 🟡 `extract_magic_numbers` - Extract to named constants (risk: 0.2) + - 🟡 `add_docstrings` - Add missing documentation (risk: 0.1) + - 🟡 `remove_dead_code` - Remove unreachable code (risk: 0.1) + - 🟡 `normalize_quotes` - Standardize quote style (risk: 0.1) + - 🟡 `simplify_boolean` - Simplify boolean expressions (risk: 0.3) + - 🟡 `convert_to_fstring` - Modernize string formatting (risk: 0.2) + - 🟡 `remove_unused_variables` - Clean unused variables (risk: 0.2) + - 🟡 `fix_indentation` - Fix tabs/spaces (risk: 0.1) + - 🟡 `add_missing_commas` - Add trailing commas (risk: 0.1) + - 🟡 `remove_print_statements` - Remove debug prints (risk: 0.3) + - 🔴 `fix_type_hints` - Add type hints (risk: 0.4, placeholder) + +#### File Operations & Safety +- **Atomic File Writes** - Safe file operations (temp file → rename) +- **Automatic Backups** - All changes backed up before applying +- **Rollback System** - Undo individual files or all at once +- **Backup Index** - Track all backups with timestamps +- **Safety Levels** - Control fix risk (safe/low/moderate/high) + +#### CLI Enhancements +- **New Command**: `refactron autofix` - Automatic code fixing +- **Safety Level Flags** - `--safety-level` for risk control +- **Preview Mode** - See changes before applying +- **Apply Mode** - Apply fixes with automatic backup + +### Improved +- **Test Coverage** - 135 tests (was 98) → +37 auto-fix tests +- **Overall Coverage** - 81% (maintained high coverage) +- **Production Status** - Changed from Beta to Stable +- **Documentation** - 
Added comprehensive manual testing guide + +### Fixed +- All existing bugs from v0.1.0-beta +- Edge cases in fixer logic +- File operation safety + +### Technical Details +- Added `refactron/autofix/` module + - `engine.py` - Auto-fix engine (95% coverage) + - `fixers.py` - 14 concrete fixers (88% coverage) + - `file_ops.py` - File operations (87% coverage) + - `models.py` - Data models (100% coverage) +- Added 37 comprehensive tests +- File backup stored in `.refactron_backups/` +- Backup index: `.refactron_backups/index.json` + +--- + +## [0.1.0-beta] - 2025-10-25 + +### 🎉 Initial Beta Release + +First production-ready beta release of Refactron! + +### Recent Improvements (Pre-Release Polish) +- **Fixed** security analyzer false positives for package metadata (`__author__`, `__version__`, etc.) +- **Improved** CLI code quality by extracting helper functions + - `analyze()` function simplified (reduced complexity) + - `refactor()` function simplified (reduced complexity) +- **Added** 11 comprehensive tests for Extract Method refactorer + - Coverage improved from 62% → 97% +- **Increased** overall test coverage from 89% → 90% +- **Increased** total tests from 87 → 98 +- **Eliminated** all critical issues in production code (1 → 0) + +### Added + +#### Core Features +- **Plugin-based analyzer system** for extensibility +- **Refactoring suggestion engine** with before/after previews +- **Risk scoring system** (0.0-1.0 scale) for safe refactoring +- **Configuration management** via YAML files +- **Rich CLI interface** with colors and progress indicators + +#### Analyzers (8 Total) +- **Complexity Analyzer** - Cyclomatic complexity, maintainability index +- **Code Smell Analyzer** - Too many parameters, deep nesting, magic numbers +- **Security Analyzer** - eval/exec detection, hardcoded secrets, injection patterns +- **Dependency Analyzer** - Wildcard imports, unused imports, circular dependencies +- **Dead Code Analyzer** - Unused functions, unreachable code, empty 
functions +- **Type Hint Analyzer** - Missing type annotations, incomplete generics +- **Extract Method Analyzer** - Identify complex functions that should be split +- **Base Analyzer** - Abstract base for custom analyzers + +#### Refactorers (6 Total) +- **Magic Number Refactorer** - Extract magic numbers to constants +- **Reduce Parameters Refactorer** - Convert parameter lists to config objects +- **Simplify Conditionals Refactorer** - Transform nested if statements to guard clauses +- **Add Docstring Refactorer** - Generate contextual docstrings +- **Extract Method Refactorer** - Suggest method extraction +- **Base Refactorer** - Abstract base for custom refactorers + +#### CLI Commands +- `refactron analyze` - Analyze code for issues +- `refactron refactor` - Generate refactoring suggestions +- `refactron report` - Create detailed reports (text, JSON, HTML) +- `refactron init` - Initialize configuration file + +#### Testing +- **87 tests** with **89% coverage** +- Unit tests for all analyzers +- Integration tests for CLI +- Real-world testing on 5,800 lines of code +- Edge case and error handling tests + +#### Documentation +- Comprehensive README with quick start +- Architecture documentation +- Developer setup guide +- Real-world case study with metrics +- Usage examples (Flask, Data Science, CLI) +- Complete feature matrix + +#### Examples +- Bad code examples for testing +- Flask API with security issues +- Data science workflow issues +- CLI tool best practices +- Refactoring demonstration +- Phase 2 analyzer showcase + +### Performance +- **4,300 lines per second** analysis speed +- Low memory footprint +- Suitable for CI/CD integration +- Fast enough for pre-commit hooks (<2s typical) + +### Quality Metrics +- 89% test coverage +- 0 critical security issues in production code +- 51 issues per 1,000 lines (top 25% for Python projects) +- 100% accuracy on security vulnerability detection + +--- + +## [0.0.1] - 2025-10-23 + +### Initial Development + +- 
Project structure setup
- Basic AST parsing
- Initial analyzer prototypes
- CLI framework
- Early testing

---

## Version History

| Version | Date | Status | Highlights |
|---------|------|--------|------------|
| 1.0.15 | 2026-02-08 | **Stable** | LLM/RAG integration, AI-powered `suggest` and `document` commands, observability and metrics |
| 1.0.14 | 2026-01-30 | **Stable** | CLI startup and dependency-management improvements |
| 1.0.13 | 2026-01-30 | **Stable** | Pattern learning system, AST cache and parallel analysis, CI/CD templates |
| 1.0.1 | 2025-12-28 | **Stable** | Expanded analyzers, false positive reduction, performance analyzer, improved test coverage |
| 1.0.0 | 2025-10-27 | **Stable** | Production-ready with auto-fix system |
| 0.1.0 | 2025-10-25 | **Beta** | First production-ready release |
| 0.0.1 | 2025-10-23 | Alpha | Initial development |

---

## Categories

### Added
New features and capabilities

### Changed
Changes to existing functionality

### Deprecated
Features that will be removed in future releases

### Removed
Features that have been removed

### Fixed
Bug fixes

### Security
Security-related changes and fixes

---

## Notes

- **v0.1.0** was the first production-ready beta release; **v1.0.0** is the first stable release
- Tested on 5,800 lines of real Python code
- Zero critical issues in production code
- Ready for CI/CD integration
- Suitable for team adoption

---

## Links

- [GitHub Repository](https://github.com/Refactron-ai/Refactron_lib)
- [Documentation](README.md)
- [Contributing Guide](CONTRIBUTING.md)
- [Issue Tracker](https://github.com/Refactron-ai/Refactron_lib/issues)

---

**Keep this changelog up to date with every release!**

diff --git a/documentation/docs/CLI_REFERENCE.md b/documentation/docs/CLI_REFERENCE.md
new file mode 100644
index 0000000..56d58d5
--- /dev/null
+++ b/documentation/docs/CLI_REFERENCE.md
@@ -0,0 +1,455 @@

# CLI Reference

This document is the reference for all Refactron CLI commands.
+ +## Global Options + +```bash + +╭────────────────────────────────────────────────────────────────────────────────────────────────────────╮ +│ │ +│ ⚡ REFACTRON │ +│ INTELLIGENT CODE REFACTORING │ +│ │ +╰────────────────────────────────────────────────────────────────────────────────────────────────────────╯ + +COMMAND CENTER +Select a command by name or number + + + ID COMMAND DESCRIPTION + ──────────────────────────────────────────────────────────────────────────────────────────────────────── + 01 ANALYZE Analyze code for issues and technical debt. + 02 AUTH Manage authentication state. + 03 AUTOFIX Automatically fix code issues (Phase 3... + 04 DOCUMENT Generate Google-style docstrings for a... + 05 FEEDBACK Provide feedback on a refactoring operation. + 06 GENERATE-CICD Generate CI/CD integration templates. + 07 INIT Initialize Refactron configuration in the... + 08 LOGIN Log in to Refactron CLI via device-code flow. + 09 LOGOUT Log out of Refactron CLI. + 10 METRICS Display collected metrics from the current... + 11 PATTERNS Pattern learning and project-specific... + 12 RAG RAG (Retrieval-Augmented Generation)... + 13 REFACTOR Refactor code with intelligent... + 14 REPO Manage GitHub repository connections. + 15 REPORT Generate a detailed technical debt report. + 16 ROLLBACK Rollback refactoring changes to restore... + 17 SERVE-METRICS Start a Prometheus metrics HTTP server. + 18 SUGGEST Generate AI-powered refactoring suggestions. + 19 TELEMETRY Manage telemetry settings. + + +GLOBAL OPTIONS +--version Show the version and exit. +--help Show this message and exit. + +USAGE: refactron ... +EXAMPLE: refactron analyze . --detailed + + + +``` + +## analyze + +```bash +Usage: refactron analyze [OPTIONS] [TARGET] + + Analyze code for issues and technical debt. 
+ + TARGET: Path to file or directory to analyze (optional if workspace is + connected) + +Options: + -c, --config PATH Path to configuration file + --detailed / --summary Show detailed or summary report + --log-level [DEBUG|INFO|WARNING|ERROR|CRITICAL] + Set log level + --log-format [json|text] Set log format (json for CI/CD, text for + console) + --metrics / --no-metrics Enable or disable metrics collection + --show-metrics Show metrics summary after analysis + -p, --profile [dev|staging|prod] + Named configuration profile to use (dev, + staging, prod). Profiles typically group + config defaults; if both --profile and + --environment are set, the environment + determines the final effective configuration. + -e, --environment [dev|staging|prod] + Target runtime environment (dev, staging, + prod). When both --profile and --environment + are provided, the environment overrides the + selected profile. + --help Show this message and exit. + +``` + +## auth + +```bash +Usage: refactron auth [OPTIONS] COMMAND [ARGS]... + + Manage authentication state. + +Options: + --help Show this message and exit. + +Commands: + logout Log out of Refactron CLI. + status Show current authentication status. + +``` + +## autofix + +```bash +Usage: refactron autofix [OPTIONS] TARGET + + Automatically fix code issues (Phase 3 feature). + + TARGET: Path to file or directory to fix + + Examples: refactron autofix myfile.py --preview refactron autofix + myproject/ --apply --safety-level moderate + +Options: + -c, --config PATH Path to configuration file + -p, --profile [dev|staging|prod] + Named configuration profile to use (dev, + staging, prod). Profiles typically group + config defaults; if both --profile and + --environment are set, the environment + determines the final effective configuration. + -e, --environment [dev|staging|prod] + Target runtime environment (dev, staging, + prod). When both --profile and --environment + are provided, the environment overrides the + selected profile. 
+ --preview / --apply Preview fixes or apply them + -s, --safety-level [safe|low|moderate|high] + Maximum risk level for automatic fixes + --help Show this message and exit. + +``` + +## document + +```bash +Usage: refactron document [OPTIONS] TARGET + + Generate Google-style docstrings for a Python file. + + Uses AI to analyze code and add comprehensive documentation. + +Options: + --apply / --no-apply Apply the documentation changes to the file + --interactive / --no-interactive + Use interactive mode for apply + --help Show this message and exit. + +``` + +## feedback + +```bash +Usage: refactron feedback [OPTIONS] OPERATION_ID + + Provide feedback on a refactoring operation. + + OPERATION_ID: The unique identifier of the refactoring operation + + Examples: refactron feedback abc-123 --action accepted --reason "Improved + readability" refactron feedback xyz-789 --action rejected --reason "Too + risky" + +Options: + -a, --action [accepted|rejected|ignored] + Feedback action: accepted, rejected, or + ignored [required] + -r, --reason TEXT Optional reason for the feedback + -c, --config PATH Path to configuration file + --help Show this message and exit. + +``` + +## generate-cicd + +```bash +Usage: refactron generate-cicd [OPTIONS] {github|gitlab|pre-commit|all} + + Generate CI/CD integration templates. + + TYPE: Type of template to generate (github, gitlab, pre-commit, all) + + Examples: refactron generate-cicd github --output .github/workflows + refactron generate-cicd gitlab --output . refactron generate-cicd pre-commit + --output . refactron generate-cicd all --output . 
+ +Options: + -o, --output PATH Output directory (default: current directory) + --python-versions TEXT Comma-separated Python versions (default: + 3.8,3.9,3.10,3.11,3.12) + --fail-on-critical / --no-fail-on-critical + Fail build on critical issues (default: True) + --fail-on-errors / --no-fail-on-errors + Fail build on error-level issues (default: + False) + --max-critical INTEGER Maximum allowed critical issues (default: 0) + --max-errors INTEGER Maximum allowed error-level issues (default: + 10) + --help Show this message and exit. + +``` + +## init + +```bash +Usage: refactron init [OPTIONS] + + Initialize Refactron configuration in the current directory. + +Options: + -t, --template [base|django|fastapi|flask] + Configuration template to use (base, django, + fastapi, flask) + --help Show this message and exit. + +``` + +## login + +```bash +Usage: refactron login [OPTIONS] + + Log in to Refactron CLI via device-code flow. + +Options: + --api-base-url TEXT Refactron API base URL [default: + https://api.refactron.dev] + --no-browser Do not open a browser automatically (print the URL + instead) + --timeout INTEGER HTTP timeout in seconds for each request [default: 10] + --force Force re-login even if already logged in + --help Show this message and exit. + +``` + +## logout + +```bash +Usage: refactron logout [OPTIONS] + + Log out of Refactron CLI. + +Options: + --help Show this message and exit. + +``` + +## metrics + +```bash +Usage: refactron metrics [OPTIONS] + + Display collected metrics from the current session. + + Shows performance metrics, analyzer hit counts, and other statistics from + Refactron operations. + + Examples: refactron metrics # Show metrics in text format + refactron metrics --format json # Show metrics in JSON format + +Options: + -f, --format [text|json] Output format + --help Show this message and exit. + +``` + +## patterns + +```bash +Usage: refactron patterns [OPTIONS] COMMAND [ARGS]... 
+ + Pattern learning and project-specific tuning commands. + +Options: + --help Show this message and exit. + +Commands: + analyze Analyze learned patterns for a specific project. + profile Show the current pattern profile for a project. + recommend Show rule tuning recommendations for a project. + tune Apply tuning recommendations to the project profile. + +``` + +## rag + +```bash +Usage: refactron rag [OPTIONS] COMMAND [ARGS]... + + RAG (Retrieval-Augmented Generation) management commands. + +Options: + --help Show this message and exit. + +Commands: + index Index the current workspace for RAG retrieval. + search Search the RAG index for similar code. + status Show RAG index statistics. + +``` + +## refactor + +```bash +Usage: refactron refactor [OPTIONS] [TARGET] + + Refactor code with intelligent transformations. + + TARGET: Path to file or directory to refactor (optional if workspace is + connected) + +Options: + -c, --config PATH Path to configuration file + -p, --profile [dev|staging|prod] + Named configuration profile to use (dev, + staging, prod). Profiles typically group + config defaults; if both --profile and + --environment are set, the environment + determines the final effective configuration. + -e, --environment [dev|staging|prod] + Target runtime environment (dev, staging, + prod). When both --profile and --environment + are provided, the environment overrides the + selected profile. + --preview / --apply Preview changes or apply them + -t, --types TEXT Specific refactoring types to apply + --feedback / --no-feedback Collect interactive feedback on refactoring + suggestions + --help Show this message and exit. + +``` + +## repo + +```bash +Usage: refactron repo [OPTIONS] COMMAND [ARGS]... + + Manage GitHub repository connections. + +Options: + --help Show this message and exit. + +Commands: + connect Connect to a GitHub repository. + disconnect Disconnect a repository and optionally delete local files. 
+ list List all GitHub repositories connected to your account. + +``` + +## report + +```bash +Usage: refactron report [OPTIONS] TARGET + + Generate a detailed technical debt report. + + TARGET: Path to file or directory to analyze + +Options: + -c, --config PATH Path to configuration file + -p, --profile [dev|staging|prod] + Configuration profile to use (dev, staging, + prod) + -e, --environment [dev|staging|prod] + Environment to use (overrides profile) + -f, --format [text|json|html] Report format + -o, --output PATH Output file path + --help Show this message and exit. + +``` + +## rollback + +```bash +Usage: refactron rollback [OPTIONS] [SESSION_ID] + + Rollback refactoring changes to restore original files. + + By default, restores files from the latest backup session. + + Arguments: SESSION_ID: Optional specific session ID to rollback. + + Examples: refactron rollback # Rollback latest session + refactron rollback session_123 # Rollback specific session refactron + rollback --list # List all backup sessions refactron rollback --use- + git # Use Git rollback refactron rollback --clear # Clear all + backups + +Options: + -s, --session TEXT Specific session ID to rollback (deprecated, use argument + instead) + --use-git Use Git rollback instead of file backup + --list List all backup sessions + --clear Clear all backup sessions + --help Show this message and exit. + +``` + +## serve-metrics + +```bash +Usage: refactron serve-metrics [OPTIONS] + + Start a Prometheus metrics HTTP server. + + This command starts a persistent HTTP server that exposes Refactron metrics in + Prometheus format on the /metrics endpoint. 
+ + Examples: refactron serve-metrics # Start on 0.0.0.0:9090 + refactron serve-metrics --port 8080 # Start on port 8080 refactron + serve-metrics --host 127.0.0.1 # Bind to localhost only + +Options: + --host TEXT Host to bind Prometheus metrics server to (default: 127.0.0.1 + for localhost-only) + --port INTEGER Port for Prometheus metrics server + --help Show this message and exit. + +``` + +## suggest + +```bash +Usage: refactron suggest [OPTIONS] [TARGET] + + Generate AI-powered refactoring suggestions. + + Uses RAG and LLM to analyze code and propose fixes. + +Options: + --line INTEGER Specific line number to fix + --interactive / --no-interactive + Use interactive mode + --apply / --no-apply Apply the suggested changes to the file + --help Show this message and exit. + +``` + +## telemetry + +```bash +Usage: refactron telemetry [OPTIONS] + + Manage telemetry settings. + +Options: + --enable Enable telemetry collection + --disable Disable telemetry collection + --status Show current telemetry status + --help Show this message and exit. + +``` + diff --git a/documentation/docs/CODE_OF_CONDUCT.md b/documentation/docs/CODE_OF_CONDUCT.md new file mode 100644 index 0000000..3a84de0 --- /dev/null +++ b/documentation/docs/CODE_OF_CONDUCT.md @@ -0,0 +1,133 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, caste, color, religion, or sexual +identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. 
+ +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +* Demonstrating empathy and kindness toward other people +* Being respectful of differing opinions, viewpoints, and experiences +* Giving and gracefully accepting constructive feedback +* Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +* Focusing on what is best not just for us as individuals, but for the overall + community + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or advances of + any kind +* Trolling, insulting or derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or email address, + without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. 
+ +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders responsible for enforcement at +[INSERT CONTACT EMAIL]. + +All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series of +actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or permanent +ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. 
Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within the +community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.1, available at +[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. + +Community Impact Guidelines were inspired by +[Mozilla's code of conduct enforcement ladder][Mozilla CoC]. + +For answers to common questions about this code of conduct, see the FAQ at +[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at +[https://www.contributor-covenant.org/translations][translations]. + +[homepage]: https://www.contributor-covenant.org +[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html +[Mozilla CoC]: https://github.com/mozilla/diversity +[FAQ]: https://www.contributor-covenant.org/faq +[translations]: https://www.contributor-covenant.org/translations diff --git a/docs/ERROR_HANDLING.md b/documentation/docs/ERROR_HANDLING.md similarity index 100% rename from docs/ERROR_HANDLING.md rename to documentation/docs/ERROR_HANDLING.md diff --git a/docs/FALSE_POSITIVE_REDUCTION.md b/documentation/docs/FALSE_POSITIVE_REDUCTION.md similarity index 100% rename from docs/FALSE_POSITIVE_REDUCTION.md rename to documentation/docs/FALSE_POSITIVE_REDUCTION.md diff --git a/documentation/docs/LLM_RAG_INTEGRATION.md b/documentation/docs/LLM_RAG_INTEGRATION.md new file mode 100644 index 0000000..360c244 --- /dev/null +++ b/documentation/docs/LLM_RAG_INTEGRATION.md @@ -0,0 +1,111 @@ +# LLM & RAG Integration Guide + +**Harness the power of Large Language Models for intelligent code refactoring and documentation.** + +--- + +## Overview + +In version v1.0.15, Refactron introduces a 
powerful AI-driven subsystem that combines LLM reasoning with RAG (Retrieval-Augmented Generation). This allows Refactron to understand your entire project context when suggesting refactorings or generating documentation. + +### Core Components + +1. **LLM Orchestrator**: Coordinates between the code analyzer, retriever, and LLM backends. +2. **RAG System**: Uses a vector database (ChromaDB) to index and retrieve relevant code chunks. +3. **LLM Backends**: Support for high-performance providers like Groq, as well as local or custom backends. +4. **Safety Gate**: Ensures that LLM-generated code adheres to safety standards and doesn't introduce syntax errors. + +--- + +## Getting Started + +### 1. Prerequisites + +- Python 3.8+ +- ChromaDB (`pip install chromadb`) +- Sentence Transformers (`pip install sentence-transformers`) +- LLM API Key (e.g., Groq API Key) + +### 2. Configuration + +Set your LLM API key as an environment variable: + +```bash +export GROQ_API_KEY='your-api-key-here' +``` + +Alternatively, configure it in your `.refactron.yaml`: + +```yaml +llm: + provider: groq + model: llama3-70b-8192 + temperature: 0.1 + max_tokens: 4096 + +rag: + enabled: true + storage_dir: .refactron/rag_index + embedding_model: all-MiniLM-L6-v2 +``` + +--- + +## Usage + +### Indexing your Project + +Before using RAG-powered features, you need to index your repository: + +```bash +refactron rag index +``` + +This will parse your Python files and store embeddings in the local vector database. + +### AI-Powered Refactoring + +When you run the `refactor` command, Refactron can now use the LLM to generate more sophisticated suggestions: + +```bash +refactron refactor myfile.py --ai --preview +``` + +The `--ai` flag enables LLM-based suggestion generation, which uses retrieved context from your project to provide more accurate fixes. 
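
Conceptually, the retrieved context is spliced into the prompt ahead of the refactoring task itself. The sketch below illustrates that assembly step only; the `Chunk` shape and `build_prompt` helper are hypothetical names for illustration, not Refactron's internal API:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Chunk:
    """A code chunk retrieved from the RAG index (illustrative shape)."""

    path: str
    source: str
    score: float  # similarity to the query; higher means more relevant


def build_prompt(issue: str, target_code: str, context: List[Chunk], k: int = 3) -> str:
    """Assemble an LLM prompt: top-k retrieved chunks first, then the task."""
    top = sorted(context, key=lambda c: c.score, reverse=True)[:k]
    ctx = "\n\n".join(f"# From {c.path}\n{c.source}" for c in top)
    return (
        "You are a refactoring assistant. Relevant project context:\n\n"
        f"{ctx}\n\n"
        f"Issue: {issue}\n\n"
        f"Code to refactor:\n{target_code}\n"
    )


prompt = build_prompt(
    issue="function exceeds complexity threshold",
    target_code="def process(data): ...",
    context=[
        Chunk("utils/helpers.py", "def chunk_list(items, n): ...", 0.91),
        Chunk("models/user.py", "class User: ...", 0.42),
    ],
)
```

The real orchestrator additionally handles prompt-size budgeting and JSON-output cleaning, so treat this purely as a mental model of how retrieved chunks reach the LLM.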
+ +### Generating Documentation + +Generate comprehensive docstrings and technical documentation using the LLM: + +```bash +refactron docs generate myfile.py +``` + +Refactron will analyze the code structure and use the LLM to write high-quality documentation that follows PEP 257 or your configured style. + +--- + +## Technical Details + +### RAG Workflow + +1. **Parsing**: Code files are parsed into semantic chunks (classes, methods, functions). +2. **Embedding**: Chunks are converted into vector representations using `sentence-transformers`. +3. **Indexing**: Vectors and metadata are stored in `ChromaDB`. +4. **Retrieval**: When an issue is analyzed, the system retrieves the most relevant code chunks as context for the LLM. + +### LLM Orchestration + +The `LLMOrchestrator` handles the prompt engineering, ensuring that the LLM receives the right balance of task instructions and code context. It also includes a JSON cleaning layer to reliably parse LLM outputs. + +--- + +## Best Practices + +- **Keep the Index Updated**: Re-run `refactron rag index` after significant code changes. +- **Model Selection**: Higher-parameter models (like Llama 3 70B) generally provide better refactoring logic but may be slower. +- **Review AI Suggestions**: Always use `--preview` to review AI-generated code before applying it. + +--- + +**Refactron AI** - Bringing semantic understanding to code refactoring! 
🚀🤖 diff --git a/docs/MONITORING.md b/documentation/docs/MONITORING.md similarity index 100% rename from docs/MONITORING.md rename to documentation/docs/MONITORING.md diff --git a/docs/PATTERN_LEARNING.md b/documentation/docs/PATTERN_LEARNING.md similarity index 99% rename from docs/PATTERN_LEARNING.md rename to documentation/docs/PATTERN_LEARNING.md index 30da37d..7a0590e 100644 --- a/docs/PATTERN_LEARNING.md +++ b/documentation/docs/PATTERN_LEARNING.md @@ -12,6 +12,7 @@ Refactron's Pattern Learning System learns from your feedback on refactoring sug - **Automatic Learning** - Learns from every refactoring decision you make - **Smart Ranking** - Ranks suggestions by historical acceptance rates +- **AI-Powered Insights** - Integrates with LLMs (v1.0.15+) to provide context-aware pattern recognition and suggestions. - **Project-Specific** - Adapts to your project's coding style and preferences - **Persistent Storage** - Patterns persist across sessions - **Configurable** - Enable/disable features as needed diff --git a/docs/PERFORMANCE_OPTIMIZATION.md b/documentation/docs/PERFORMANCE_OPTIMIZATION.md similarity index 99% rename from docs/PERFORMANCE_OPTIMIZATION.md rename to documentation/docs/PERFORMANCE_OPTIMIZATION.md index 291dd78..bd74c1c 100644 --- a/docs/PERFORMANCE_OPTIMIZATION.md +++ b/documentation/docs/PERFORMANCE_OPTIMIZATION.md @@ -203,7 +203,7 @@ config = RefactronConfig(max_parallel_workers=cpu_count) - ✅ Large codebases (1000+ files) - ✅ Multi-core systems - ✅ I/O-bound operations -- ❌ Small codebases (<10 files) +- ❌ Small codebases (<10 files) - ❌ Single-core systems - ❌ Memory-constrained environments @@ -288,7 +288,7 @@ print(f"Results: {latest['results']}") ## Best Practices -### For Small Projects (<1000 files) +### For Small Projects (<1000 files) ```python config = RefactronConfig( diff --git a/docs/QUICK_REFERENCE.md b/documentation/docs/QUICK_REFERENCE.md similarity index 83% rename from docs/QUICK_REFERENCE.md rename to 
documentation/docs/QUICK_REFERENCE.md index bc27776..5e7efb2 100644 --- a/docs/QUICK_REFERENCE.md +++ b/documentation/docs/QUICK_REFERENCE.md @@ -26,6 +26,31 @@ refactron refactor # Generate report refactron report --format json -o report.json + +# AI-Powered Commands (v1.0.15) +refactron suggest [--line N] [--apply] +refactron document [--apply] +refactron feedback --action accepted +refactron rag index [--summarize] +refactron rag search "query" [--rerank] +refactron rag status + +# Repository Management (v1.0.15) +refactron repo list +refactron repo connect [--path PATH] +refactron repo disconnect [--delete-files] + +# CI/CD & Integration (v1.0.15) +refactron ci + +# Authentication & Status (v1.0.15) +refactron auth status +refactron auth logout + +# Observability (v1.0.15) +refactron metrics [--format json] +refactron serve-metrics [--port N] +refactron telemetry ``` ### Python API @@ -185,6 +210,11 @@ export REFACTRON_LOG_LEVEL=DEBUG --type TYPE # Filter by type (can use multiple) --no-backup # Don't create backups --risk-level LEVEL # safe|low|moderate|high + +# AI (v1.0.15) +--ai # Use AI reasoning for refactoring +--summarize # Generate AI summaries for RAG chunks +--rerank # AI reranking for semantic search ``` ## Common Issues @@ -222,6 +252,8 @@ refactron patterns --help # Pattern analysis and tuning commands - 📚 [Full Documentation](https://refactron-ai.github.io/Refactron_lib/) - 🚀 [Tutorial](TUTORIAL.md) +- 🤖 [LLM & RAG Guide](LLM_RAG_INTEGRATION.md) +- 📜 [v1.0.15 Release Notes](v1.0.15_RELEASE_NOTES.md) - 🏗️ [Architecture](../ARCHITECTURE.md) - 🤝 [Contributing](../CONTRIBUTING.md) - 🔒 [Security](../SECURITY.md) diff --git a/docs/README.md b/documentation/docs/README.md similarity index 100% rename from docs/README.md rename to documentation/docs/README.md diff --git a/documentation/docs/SECURITY.md b/documentation/docs/SECURITY.md new file mode 100644 index 0000000..e90ab1b --- /dev/null +++ b/documentation/docs/SECURITY.md @@ -0,0 +1,131 @@ +# 
Security Policy + +## Supported Versions + +We actively support the following versions of Refactron with security updates: + +| Version | Supported | +| ------- | ------------------ | +| 1.x.x | :white_check_mark: | +| < 1.0 | :x: | + +## Reporting a Vulnerability + +We take the security of Refactron seriously. If you discover a security vulnerability, please follow these steps: + +### 1. **Do Not** Open a Public Issue + +Please do not open a public GitHub issue for security vulnerabilities. This helps prevent exploitation before a fix is available. + +### 2. Report Privately + +Report security vulnerabilities by emailing: **security@refactron.us.kg** + +Include the following information: +- Description of the vulnerability +- Steps to reproduce the issue +- Affected versions +- Potential impact +- Suggested fix (if any) + +### 3. Response Timeline + +- **Initial Response**: Within 48 hours of submission +- **Status Update**: Within 7 days with assessment +- **Fix Timeline**: Critical issues within 14 days, others within 30 days + +### 4. Disclosure Policy + +We follow responsible disclosure: +- We will work with you to understand and address the issue +- We will credit you in the security advisory (unless you prefer to remain anonymous) +- We will publicly disclose the vulnerability only after a fix is released +- We typically wait 90 days before full disclosure + +## Security Best Practices + +When using Refactron, follow these security best practices: + +### 1. **Code Execution** +- Refactron analyzes code using AST parsing and **never executes** analyzed code +- It's safe to analyze untrusted code + +### 2. **File Permissions** +- Ensure Refactron has appropriate file permissions +- Review suggested refactorings before applying them +- Always backup your code before applying automated refactorings + +### 3. 
**Dependencies** +- Keep Refactron and its dependencies up to date +- We use Dependabot to monitor and update dependencies +- Review dependency updates in our release notes + +### 4. **Configuration Files** +- Protect your `.refactron.yaml` configuration files +- Don't commit sensitive information to configuration files +- Use environment variables for sensitive settings + +### 5. **CI/CD Integration** +- When using Refactron in CI/CD pipelines: + - Use read-only mode for analysis + - Review changes before merging + - Limit file system access appropriately + +## Known Security Considerations + +### Static Analysis Only +Refactron performs static analysis and does not: +- Execute analyzed code +- Make network requests (except for updates) +- Access system resources beyond the specified project directory + +### Refactoring Safety +- All refactorings are previewed before application +- Risk scores are provided for each refactoring (0.0 = safe, 1.0 = high risk) +- You must explicitly approve changes before they are applied + +## Security Scanning + +We use multiple tools to ensure code security: + +- **Bandit**: Python security linting +- **Safety**: Dependency vulnerability scanning +- **CodeQL**: Advanced semantic code analysis +- **Dependabot**: Automated dependency updates + +## Third-Party Dependencies + +We carefully vet all dependencies: + +- **libcst**: Concrete syntax tree manipulation (maintained by Instagram/Meta) +- **astroid**: AST analysis (maintained by PyCQA) +- **radon**: Code metrics (well-established tool) +- **click**: CLI framework (maintained by Pallets) +- **rich**: Terminal formatting (actively maintained) +- **pyyaml**: YAML parsing (standard library quality) + +All dependencies are regularly updated and monitored for vulnerabilities. 
+ +## Security Audit History + +| Date | Type | Findings | Status | +|------------|----------------|----------|----------| +| 2024-10-31 | Code Review | 0 | ✅ Clean | +| 2024-10-31 | Dependency | 0 | ✅ Clean | + +## Hall of Fame + +We recognize security researchers who help keep Refactron secure: + +*No security issues reported yet. Be the first!* + +## Questions? + +If you have security questions or concerns that are not vulnerabilities, please: +- Open a discussion on GitHub +- Email: support@refactron.us.kg +- Review our [Contributing Guidelines](CONTRIBUTING.md) + +--- + +**Thank you for helping keep Refactron and its users safe!** 🔒 diff --git a/docs/TUTORIAL.md b/documentation/docs/TUTORIAL.md similarity index 92% rename from docs/TUTORIAL.md rename to documentation/docs/TUTORIAL.md index d4e1516..9c6dff6 100644 --- a/docs/TUTORIAL.md +++ b/documentation/docs/TUTORIAL.md @@ -15,7 +15,8 @@ pip install refactron 3. [Previewing Refactorings](#previewing-refactorings) 4. [Applying Changes](#applying-changes) 5. [Configuration](#configuration) -6. [Advanced Usage](#advanced-usage) +6. [AI-Powered Features (v1.0.15)](#ai-powered-features) +7. [Advanced Usage](#advanced-usage) --- @@ -222,6 +223,30 @@ analysis = refactron.analyze("example.py") --- +## AI-Powered Features (v1.0.15) + +Version v1.0.15 introduces semantic intelligence using LLMs and RAG. 
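
Under the hood, indexing embeds each code chunk as a vector, and the AI commands retrieve the nearest chunks by similarity. Refactron uses ChromaDB with sentence-transformer embeddings for this; the toy sketch below only illustrates the retrieval idea, substituting a bag-of-words vector and cosine similarity for a real embedding model (all names here are illustrative):

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# "Index" a few code chunks, then retrieve the closest match for a query.
index = {
    "parse_config": "def parse_config(path): read yaml config file",
    "send_email": "def send_email(to, body): smtp send message",
}
query = embed("load yaml configuration from a file")
best = max(index, key=lambda name: cosine(query, embed(index[name])))
# best is the chunk whose text overlaps the query most: "parse_config"
```

A real embedding model matches on meaning rather than shared tokens, which is what makes the retrieved context useful even when your query uses different vocabulary than the code.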
+ +### Initializing the RAG Index +To give the AI context about your project, you must first index it: +```bash +refactron rag index +``` + +### AI Refactoring Suggestions +Use the `suggest` command for smarter, multi-line refactorings: +```bash +refactron suggest example.py --line 5 +``` + +### Automated Documentation +Generate comprehensive docstrings for your file: +```bash +refactron document example.py --apply +``` + +--- + ## Advanced Usage ### Analyze Multiple Files diff --git a/documentation/docs/api/analyzers.md b/documentation/docs/api/analyzers.md new file mode 100644 index 0000000..cf39613 --- /dev/null +++ b/documentation/docs/api/analyzers.md @@ -0,0 +1,488 @@ +# refactron.analyzers + +Analyzers for detecting code issues and patterns. + +## Classes + +## Functions + + +--- + +# refactron.analyzers.base_analyzer + +Base analyzer class. + +## Classes + +### BaseAnalyzer + +```python +BaseAnalyzer(config: refactron.core.config.RefactronConfig) +``` + +Base class for all analyzers. + +#### BaseAnalyzer.__init__ + +```python +BaseAnalyzer.__init__(self, config: refactron.core.config.RefactronConfig) +``` + +Initialize the analyzer. + +Args: + config: Refactron configuration + +#### BaseAnalyzer.analyze + +```python +BaseAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze source code and return detected issues. + +Args: + file_path: Path to the file being analyzed + source_code: Source code content + +Returns: + List of detected code issues + +#### BaseAnalyzer.parse_astroid + +```python +BaseAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. 
+ +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + + +--- + +# refactron.analyzers.code_smell_analyzer + +Analyzer for code smells and anti-patterns. + +## Classes + +### CodeSmellAnalyzer + +```python +CodeSmellAnalyzer(config: refactron.core.config.RefactronConfig) +``` + +Detects common code smells and anti-patterns. + +#### CodeSmellAnalyzer.__init__ + +```python +CodeSmellAnalyzer.__init__(self, config: refactron.core.config.RefactronConfig) +``` + +Initialize the analyzer. + +Args: + config: Refactron configuration + +#### CodeSmellAnalyzer.analyze + +```python +CodeSmellAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze code for smells and anti-patterns. + +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of detected code smell issues + +#### CodeSmellAnalyzer.parse_astroid + +```python +CodeSmellAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. + +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + + +--- + +# refactron.analyzers.complexity_analyzer + +Analyzer for code complexity metrics. + +## Classes + +### ComplexityAnalyzer + +```python +ComplexityAnalyzer(config: refactron.core.config.RefactronConfig) +``` + +Analyzes code complexity using cyclomatic complexity and other metrics. + +#### ComplexityAnalyzer.__init__ + +```python +ComplexityAnalyzer.__init__(self, config: refactron.core.config.RefactronConfig) +``` + +Initialize the analyzer. 
+ +Args: + config: Refactron configuration + +#### ComplexityAnalyzer.analyze + +```python +ComplexityAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze complexity of the source code. + +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of complexity-related issues + +#### ComplexityAnalyzer.parse_astroid + +```python +ComplexityAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. + +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + + +--- + +# refactron.analyzers.dead_code_analyzer + +Analyzer for dead code - unused functions, variables, and imports. + +## Classes + +### DeadCodeAnalyzer + +```python +DeadCodeAnalyzer(config: refactron.core.config.RefactronConfig) +``` + +Detects unused code that can be safely removed. + +#### DeadCodeAnalyzer.__init__ + +```python +DeadCodeAnalyzer.__init__(self, config: refactron.core.config.RefactronConfig) +``` + +Initialize the analyzer. + +Args: + config: Refactron configuration + +#### DeadCodeAnalyzer.analyze + +```python +DeadCodeAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze code for unused elements. + +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of dead code issues + +#### DeadCodeAnalyzer.parse_astroid + +```python +DeadCodeAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. 
+ +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + + +--- + +# refactron.analyzers.dependency_analyzer + +Analyzer for import dependencies and module relationships. + +## Classes + +### DependencyAnalyzer + +```python +DependencyAnalyzer(config: 'RefactronConfig') -> None +``` + +Analyzes import statements and dependencies. + +#### DependencyAnalyzer.__init__ + +```python +DependencyAnalyzer.__init__(self, config: 'RefactronConfig') -> None +``` + +Initialize the analyzer. + +Args: + config: Refactron configuration + +#### DependencyAnalyzer.analyze + +```python +DependencyAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze imports and dependencies. + +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of dependency-related issues + +#### DependencyAnalyzer.parse_astroid + +```python +DependencyAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. + +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + + +--- + +# refactron.analyzers.performance_analyzer + +Analyzer for performance antipatterns. + +## Classes + +### PerformanceAnalyzer + +```python +PerformanceAnalyzer(config: refactron.core.config.RefactronConfig) +``` + +Detects common performance antipatterns and inefficiencies. + +#### PerformanceAnalyzer.__init__ + +```python +PerformanceAnalyzer.__init__(self, config: refactron.core.config.RefactronConfig) +``` + +Initialize the analyzer. 
+ +Args: + config: Refactron configuration + +#### PerformanceAnalyzer.analyze + +```python +PerformanceAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze code for performance antipatterns. + +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of performance-related issues + +#### PerformanceAnalyzer.parse_astroid + +```python +PerformanceAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. + +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + + +--- + +# refactron.analyzers.security_analyzer + +Analyzer for security vulnerabilities and unsafe patterns. + +## Classes + +### SecurityAnalyzer + +```python +SecurityAnalyzer(config: refactron.core.config.RefactronConfig) +``` + +Detects common security vulnerabilities and unsafe code patterns. + +#### SecurityAnalyzer.__init__ + +```python +SecurityAnalyzer.__init__(self, config: refactron.core.config.RefactronConfig) +``` + +Initialize the analyzer. + +Args: + config: Refactron configuration + +#### SecurityAnalyzer.analyze + +```python +SecurityAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze code for security vulnerabilities. + +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of security-related issues + +#### SecurityAnalyzer.parse_astroid + +```python +SecurityAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. 
+ +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + + +--- + +# refactron.analyzers.type_hint_analyzer + +Analyzer for type hints and type annotations. + +## Classes + +### TypeHintAnalyzer + +```python +TypeHintAnalyzer(config: refactron.core.config.RefactronConfig) +``` + +Analyzes type hint usage and suggests improvements. + +#### TypeHintAnalyzer.__init__ + +```python +TypeHintAnalyzer.__init__(self, config: refactron.core.config.RefactronConfig) +``` + +Initialize the analyzer. + +Args: + config: Refactron configuration + +#### TypeHintAnalyzer.analyze + +```python +TypeHintAnalyzer.analyze(self, file_path: pathlib._local.Path, source_code: str) -> List[refactron.core.models.CodeIssue] +``` + +Analyze type hints in code. + +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of type hint issues + +#### TypeHintAnalyzer.parse_astroid + +```python +TypeHintAnalyzer.parse_astroid(self, source_code: str, file_path: Optional[pathlib._local.Path] = None) -> Any +``` + +Helper to parse code into an astroid tree. + +Args: + source_code: The code to parse + file_path: Optional path for module naming context + +Returns: + astroid.nodes.Module + +## Functions + diff --git a/documentation/docs/api/autofix.md b/documentation/docs/api/autofix.md new file mode 100644 index 0000000..0cd2213 --- /dev/null +++ b/documentation/docs/api/autofix.md @@ -0,0 +1,802 @@ +# refactron.autofix + +Auto-fix module for automatically fixing code issues. + +This module provides rule-based code fixes without requiring expensive AI APIs. +All fixers use AST analysis and pattern matching for fast, reliable transformations. + +## Classes + +## Functions + + +--- + +# refactron.autofix.engine + +Auto-fix engine for applying rule-based code transformations. 
+ 

This engine uses AST analysis and pattern matching to apply safe
automatic fixes without requiring expensive AI APIs.

## Classes

### AutoFixEngine

```python
AutoFixEngine(safety_level: refactron.autofix.models.FixRiskLevel = ...)
```

Engine for applying automatic fixes to code issues.

All fixes use rule-based AST transformations for reliability
and performance. No expensive AI APIs required!

#### AutoFixEngine.__init__

```python
AutoFixEngine.__init__(self, safety_level: refactron.autofix.models.FixRiskLevel = ...)
```

Initialize the auto-fix engine.

Args:
    safety_level: Maximum risk level to apply automatically

#### AutoFixEngine.can_fix

```python
AutoFixEngine.can_fix(self, issue: refactron.core.models.CodeIssue) -> bool
```

Check if an issue can be auto-fixed.

Args:
    issue: The issue to check

Returns:
    True if a fixer is available, False otherwise

#### AutoFixEngine.fix

```python
AutoFixEngine.fix(self, issue: refactron.core.models.CodeIssue, code: str, preview: bool = True) -> refactron.autofix.models.FixResult
```

Apply automatic fix to an issue.

Args:
    issue: The issue to fix
    code: The original code
    preview: If True, only preview changes (don't apply)

Returns:
    FixResult with success status and details

#### AutoFixEngine.fix_all

```python
AutoFixEngine.fix_all(self, issues: list, code: str, preview: bool = True) -> Dict[int, refactron.autofix.models.FixResult]
```

Apply fixes to multiple issues.

Args:
    issues: List of issues to fix
    code: The original code
    preview: If True, only preview changes

Returns:
    Dictionary mapping issue index to fix result

### BaseFixer

```python
BaseFixer(name: str, risk_score: float = 0.0)
```

Base class for all automatic fixers.

#### BaseFixer.__init__

```python
BaseFixer.__init__(self, name: str, risk_score: float = 0.0)
```

Initialize a fixer. 
+ +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### BaseFixer.apply + +```python +BaseFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply the fix. + +Args: + issue: The issue to fix + code: The original code + +Returns: + FixResult with fixed code + +#### BaseFixer.preview + +```python +BaseFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview the fix without applying it. + +Args: + issue: The issue to fix + code: The original code + +Returns: + FixResult with diff showing proposed changes + +## Functions + + +--- + +# refactron.autofix.file_ops + +File operations for auto-fix system with backup and rollback support. + +## Classes + +### FileOperations + +```python +FileOperations(backup_dir: Optional[pathlib._local.Path] = None) +``` + +Handle file operations with safety guarantees. + +#### FileOperations.__init__ + +```python +FileOperations.__init__(self, backup_dir: Optional[pathlib._local.Path] = None) +``` + +Initialize file operations. + +Args: + backup_dir: Directory for backups (default: .refactron_backups) + +#### FileOperations.backup_file + +```python +FileOperations.backup_file(self, filepath: pathlib._local.Path) -> pathlib._local.Path +``` + +Create a backup of a file. + +Args: + filepath: Path to file to backup + +Returns: + Path to backup file + +#### FileOperations.clear_backups + +```python +FileOperations.clear_backups(self) -> int +``` + +Clear all backups. + +Returns: + Number of backups cleared + +#### FileOperations.list_backups + +```python +FileOperations.list_backups(self) -> List[Any] +``` + +List all backups. + +Returns: + List of backup information + +#### FileOperations.rollback_all + +```python +FileOperations.rollback_all(self) -> int +``` + +Rollback all backed up files. 
+ +Returns: + Number of files rolled back + +#### FileOperations.rollback_file + +```python +FileOperations.rollback_file(self, filepath: pathlib._local.Path) -> bool +``` + +Rollback a file to its last backup. + +Args: + filepath: Path to file to rollback + +Returns: + True if successful, False otherwise + +#### FileOperations.write_with_backup + +```python +FileOperations.write_with_backup(self, filepath: pathlib._local.Path, content: str) -> Dict +``` + +Write content to file with automatic backup. + +Args: + filepath: Path to file to write + content: Content to write + +Returns: + Dictionary with operation details + +## Functions + + +--- + +# refactron.autofix.fixers + +Concrete fixer implementations for common code issues. + +All fixers use AST-based transformations for reliability and speed. +No expensive AI APIs required! + +## Classes + +### AddDocstringsFixer + +```python +AddDocstringsFixer() -> None +``` + +Adds missing docstrings to functions and classes. + +#### AddDocstringsFixer.__init__ + +```python +AddDocstringsFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### AddDocstringsFixer.apply + +```python +AddDocstringsFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply docstring addition. + +#### AddDocstringsFixer.preview + +```python +AddDocstringsFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview docstring addition. + +### AddMissingCommasFixer + +```python +AddMissingCommasFixer() -> None +``` + +Add missing trailing commas in lists/dicts. + +#### AddMissingCommasFixer.__init__ + +```python +AddMissingCommasFixer.__init__(self) -> None +``` + +Initialize a fixer. 
+ +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### AddMissingCommasFixer.apply + +```python +AddMissingCommasFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply comma addition. + +#### AddMissingCommasFixer.preview + +```python +AddMissingCommasFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview comma addition. + +### ConvertToFStringFixer + +```python +ConvertToFStringFixer() -> None +``` + +Convert old-style format strings to f-strings. + +#### ConvertToFStringFixer.__init__ + +```python +ConvertToFStringFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### ConvertToFStringFixer.apply + +```python +ConvertToFStringFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply f-string conversion. + +#### ConvertToFStringFixer.preview + +```python +ConvertToFStringFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview f-string conversion. + +### ExtractMagicNumbersFixer + +```python +ExtractMagicNumbersFixer() -> None +``` + +Extracts magic numbers into named constants. + +#### ExtractMagicNumbersFixer.__init__ + +```python +ExtractMagicNumbersFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### ExtractMagicNumbersFixer.apply + +```python +ExtractMagicNumbersFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply magic number extraction. 
+ +#### ExtractMagicNumbersFixer.preview + +```python +ExtractMagicNumbersFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview magic number extraction. + +### FixIndentationFixer + +```python +FixIndentationFixer(spaces: int = 4) +``` + +Fix inconsistent indentation. + +#### FixIndentationFixer.__init__ + +```python +FixIndentationFixer.__init__(self, spaces: int = 4) +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### FixIndentationFixer.apply + +```python +FixIndentationFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply indentation fix. + +#### FixIndentationFixer.preview + +```python +FixIndentationFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview indentation fix. + +### FixTypeHintsFixer + +```python +FixTypeHintsFixer() -> None +``` + +Adds or fixes type hints. + +#### FixTypeHintsFixer.__init__ + +```python +FixTypeHintsFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### FixTypeHintsFixer.apply + +```python +FixTypeHintsFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply type hint fix. + +#### FixTypeHintsFixer.preview + +```python +FixTypeHintsFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview type hint fix. + +### NormalizeQuotesFixer + +```python +NormalizeQuotesFixer(prefer_double: bool = True) +``` + +Normalize string quotes (single → double or vice versa). + +#### NormalizeQuotesFixer.__init__ + +```python +NormalizeQuotesFixer.__init__(self, prefer_double: bool = True) +``` + +Initialize a fixer. 
+ +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### NormalizeQuotesFixer.apply + +```python +NormalizeQuotesFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply quote normalization. + +#### NormalizeQuotesFixer.preview + +```python +NormalizeQuotesFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview quote normalization. + +### RemoveDeadCodeFixer + +```python +RemoveDeadCodeFixer() -> None +``` + +Removes dead/unreachable code. + +#### RemoveDeadCodeFixer.__init__ + +```python +RemoveDeadCodeFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### RemoveDeadCodeFixer.apply + +```python +RemoveDeadCodeFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply dead code removal. + +#### RemoveDeadCodeFixer.preview + +```python +RemoveDeadCodeFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview dead code removal. + +### RemovePrintStatementsFixer + +```python +RemovePrintStatementsFixer(convert_to_logging: bool = False) +``` + +Remove or convert print statements to logging. + +#### RemovePrintStatementsFixer.__init__ + +```python +RemovePrintStatementsFixer.__init__(self, convert_to_logging: bool = False) +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### RemovePrintStatementsFixer.apply + +```python +RemovePrintStatementsFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply print statement removal/conversion. 
+ +#### RemovePrintStatementsFixer.preview + +```python +RemovePrintStatementsFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview print statement removal/conversion. + +### RemoveTrailingWhitespaceFixer + +```python +RemoveTrailingWhitespaceFixer() -> None +``` + +Remove trailing whitespace from lines. + +#### RemoveTrailingWhitespaceFixer.__init__ + +```python +RemoveTrailingWhitespaceFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### RemoveTrailingWhitespaceFixer.apply + +```python +RemoveTrailingWhitespaceFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply whitespace removal. + +#### RemoveTrailingWhitespaceFixer.preview + +```python +RemoveTrailingWhitespaceFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview whitespace removal. + +### RemoveUnusedImportsFixer + +```python +RemoveUnusedImportsFixer() -> None +``` + +Removes unused import statements. + +#### RemoveUnusedImportsFixer.__init__ + +```python +RemoveUnusedImportsFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### RemoveUnusedImportsFixer.apply + +```python +RemoveUnusedImportsFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply the fix to remove unused imports. + +#### RemoveUnusedImportsFixer.preview + +```python +RemoveUnusedImportsFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview the removal of unused imports. + +### RemoveUnusedVariablesFixer + +```python +RemoveUnusedVariablesFixer() -> None +``` + +Remove unused variables. 
+ +#### RemoveUnusedVariablesFixer.__init__ + +```python +RemoveUnusedVariablesFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### RemoveUnusedVariablesFixer.apply + +```python +RemoveUnusedVariablesFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply unused variable removal. + +#### RemoveUnusedVariablesFixer.preview + +```python +RemoveUnusedVariablesFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview unused variable removal. + +### SimplifyBooleanFixer + +```python +SimplifyBooleanFixer() -> None +``` + +Simplify boolean expressions. + +#### SimplifyBooleanFixer.__init__ + +```python +SimplifyBooleanFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### SimplifyBooleanFixer.apply + +```python +SimplifyBooleanFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply boolean simplification. + +#### SimplifyBooleanFixer.preview + +```python +SimplifyBooleanFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview boolean simplification. + +### SortImportsFixer + +```python +SortImportsFixer() -> None +``` + +Sort and organize imports using isort. + +#### SortImportsFixer.__init__ + +```python +SortImportsFixer.__init__(self) -> None +``` + +Initialize a fixer. + +Args: + name: Name of the fixer + risk_score: Risk level (0.0 = safe, 1.0 = dangerous) + +#### SortImportsFixer.apply + +```python +SortImportsFixer.apply(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Apply import sorting. 
+ +#### SortImportsFixer.preview + +```python +SortImportsFixer.preview(self, issue: refactron.core.models.CodeIssue, code: str) -> refactron.autofix.models.FixResult +``` + +Preview import sorting. + +## Functions + + +--- + +# refactron.autofix.models + +Models for auto-fix system. + +## Classes + +### FixResult + +```python +FixResult(success: bool, reason: str = '', diff: Optional[str] = None, original: Optional[str] = None, fixed: Optional[str] = None, risk_score: float = 1.0, files_affected: List[str] = None) -> None +``` + +Result of an automatic fix. + +#### FixResult.__init__ + +```python +FixResult.__init__(self, success: bool, reason: str = '', diff: Optional[str] = None, original: Optional[str] = None, fixed: Optional[str] = None, risk_score: float = 1.0, files_affected: List[str] = None) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +### FixRiskLevel + +```python +FixRiskLevel(*values) +``` + +Risk levels for automatic fixes. + +## Functions + diff --git a/documentation/docs/api/cicd.md b/documentation/docs/api/cicd.md new file mode 100644 index 0000000..c8ff4ee --- /dev/null +++ b/documentation/docs/api/cicd.md @@ -0,0 +1,470 @@ +# refactron.cicd + +CI/CD integration templates and utilities for Refactron. + +## Classes + +## Functions + + +--- + +# refactron.cicd.github_actions + +GitHub Actions workflow template generation. + +## Classes + +### GitHubActionsGenerator + +```python +GitHubActionsGenerator() +``` + +Generate GitHub Actions workflow templates for Refactron. + +#### GitHubActionsGenerator.generate_analysis_workflow + +```python +GitHubActionsGenerator.generate_analysis_workflow(python_versions: Optional[List[str]] = None, trigger_on: Optional[List[str]] = None, quality_gate: Optional[Dict[str, Any]] = None, cache_enabled: bool = True, upload_artifacts: bool = True) -> str +``` + +Generate GitHub Actions workflow for code analysis. 
+ +Args: + python_versions: Python versions to test + (default: ['3.8', '3.9', '3.10', '3.11', '3.12']) + trigger_on: Events to trigger on + (default: ['push', 'pull_request']) + quality_gate: Quality gate configuration + cache_enabled: Enable dependency caching + upload_artifacts: Upload analysis reports as artifacts + +Returns: + YAML workflow content + +#### GitHubActionsGenerator.generate_pre_commit_workflow + +```python +GitHubActionsGenerator.generate_pre_commit_workflow(python_version: str = '3.11', trigger_on: Optional[List[str]] = None) -> str +``` + +Generate GitHub Actions workflow for pre-commit analysis. + +Args: + python_version: Python version to use + trigger_on: Events to trigger on + +Returns: + YAML workflow content + +#### GitHubActionsGenerator.save_workflow + +```python +GitHubActionsGenerator.save_workflow(workflow_content: str, output_path: pathlib._local.Path) -> None +``` + +Save workflow to file. + +Args: + workflow_content: Workflow YAML content + output_path: Path to save workflow file + +Raises: + IOError: If file cannot be written + +## Functions + + +--- + +# refactron.cicd.gitlab_ci + +GitLab CI pipeline configuration generation. + +## Classes + +### GitLabCIGenerator + +```python +GitLabCIGenerator() +``` + +Generate GitLab CI pipeline configurations for Refactron. + +#### GitLabCIGenerator.generate_analysis_pipeline + +```python +GitLabCIGenerator.generate_analysis_pipeline(python_versions: Optional[List[str]] = None, quality_gate: Optional[Dict[str, Any]] = None, cache_enabled: bool = True, artifacts_enabled: bool = True) -> str +``` + +Generate GitLab CI pipeline for code analysis. 
+ +Args: + python_versions: Python versions to test + quality_gate: Quality gate configuration + cache_enabled: Enable dependency caching + artifacts_enabled: Save analysis reports as artifacts + +Returns: + YAML pipeline content + +#### GitLabCIGenerator.generate_pre_commit_pipeline + +```python +GitLabCIGenerator.generate_pre_commit_pipeline(python_version: str = '3.11') -> str +``` + +Generate GitLab CI pipeline for pre-commit analysis. + +Args: + python_version: Python version to use + +Returns: + YAML pipeline content + +#### GitLabCIGenerator.save_pipeline + +```python +GitLabCIGenerator.save_pipeline(pipeline_content: str, output_path: pathlib._local.Path) -> None +``` + +Save pipeline configuration to file. + +Args: + pipeline_content: Pipeline YAML content + output_path: Path to save pipeline file + +Raises: + IOError: If file cannot be written + +## Functions + + +--- + +# refactron.cicd.pr_integration + +Pull Request integration utilities for posting comments and suggestions. + +## Classes + +### PRComment + +```python +PRComment(file_path: str, line: int, message: str, level: str, rule_id: Optional[str] = None, suggestion: Optional[str] = None) -> None +``` + +Represents a PR comment. + +#### PRComment.__init__ + +```python +PRComment.__init__(self, file_path: str, line: int, message: str, level: str, rule_id: Optional[str] = None, suggestion: Optional[str] = None) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### PRComment.to_markdown + +```python +PRComment.to_markdown(self) -> str +``` + +Convert comment to markdown format. + +Returns: + Markdown formatted comment + +### PRIntegration + +```python +PRIntegration() +``` + +Utilities for PR integration and inline comments. + +#### PRIntegration.format_comment_for_github_api + +```python +PRIntegration.format_comment_for_github_api(comment: refactron.cicd.pr_integration.PRComment) -> Dict +``` + +Format comment for GitHub API. 
+ +Args: + comment: PR comment + +Returns: + GitHub API comment format + +#### PRIntegration.generate_github_comment_body + +```python +PRIntegration.generate_github_comment_body(result: refactron.core.analysis_result.AnalysisResult) -> str +``` + +Generate GitHub PR comment body. + +Args: + result: Analysis result + +Returns: + Markdown comment body + +#### PRIntegration.generate_inline_comments + +```python +PRIntegration.generate_inline_comments(result: refactron.core.analysis_result.AnalysisResult, file_path: pathlib._local.Path) -> List[refactron.cicd.pr_integration.PRComment] +``` + +Generate inline comments for a specific file. + +Args: + result: Analysis result + file_path: File to generate comments for + +Returns: + List of PR comments + +#### PRIntegration.generate_pr_summary + +```python +PRIntegration.generate_pr_summary(result: refactron.core.analysis_result.AnalysisResult) -> str +``` + +Generate PR summary from analysis result. + +Args: + result: Analysis result + +Returns: + Markdown formatted summary + +#### PRIntegration.save_comments_json + +```python +PRIntegration.save_comments_json(comments: List[refactron.cicd.pr_integration.PRComment], output_path: pathlib._local.Path) -> None +``` + +Save comments to JSON file for CI/CD integration. + +Args: + comments: List of PR comments + output_path: Path to save JSON file + +Raises: + IOError: If file cannot be written + +## Functions + + +--- + +# refactron.cicd.pre_commit + +Pre-commit hook template generation. + +## Classes + +### PreCommitGenerator + +```python +PreCommitGenerator() +``` + +Generate pre-commit hook templates for Refactron. + +#### PreCommitGenerator.generate_pre_commit_config + +```python +PreCommitGenerator.generate_pre_commit_config(stages: Optional[List[str]] = None, fail_on_critical: bool = True, fail_on_errors: bool = False, max_critical: int = 0, max_errors: int = 10) -> str +``` + +Generate pre-commit hook configuration. 
+ +Args: + stages: Pre-commit stages to run on (default: ['commit', 'push']) + fail_on_critical: Fail commit if critical issues found + fail_on_errors: Fail commit if error-level issues found + max_critical: Maximum allowed critical issues + max_errors: Maximum allowed error-level issues + +Returns: + YAML configuration content + +#### PreCommitGenerator.generate_simple_hook + +```python +PreCommitGenerator.generate_simple_hook() -> str +``` + +Generate simple pre-commit hook script. + +Returns: + Bash script content + +#### PreCommitGenerator.save_config + +```python +PreCommitGenerator.save_config(config_content: str, output_path: pathlib._local.Path) -> None +``` + +Save pre-commit configuration to file. + +Args: + config_content: YAML configuration content + output_path: Path to save configuration file + +Raises: + IOError: If file cannot be written + +#### PreCommitGenerator.save_hook + +```python +PreCommitGenerator.save_hook(hook_content: str, output_path: pathlib._local.Path) -> None +``` + +Save pre-commit hook script to file. + +Args: + hook_content: Hook script content + output_path: Path to save hook file + +Raises: + IOError: If file cannot be written + +## Functions + + +--- + +# refactron.cicd.quality_gates + +Quality gate parsing and enforcement for CI/CD pipelines. + +## Classes + +### QualityGate + +```python +QualityGate(max_critical: int = 0, max_errors: int = 10, max_warnings: int = 50, max_total: Optional[int] = None, fail_on_critical: bool = True, fail_on_errors: bool = False, fail_on_warnings: bool = False, min_success_rate: float = 0.95) -> None +``` + +Configuration for quality gate thresholds. + +#### QualityGate.__init__ + +```python +QualityGate.__init__(self, max_critical: int = 0, max_errors: int = 10, max_warnings: int = 50, max_total: Optional[int] = None, fail_on_critical: bool = True, fail_on_errors: bool = False, fail_on_warnings: bool = False, min_success_rate: float = 0.95) -> None +``` + +Initialize self. 
See help(type(self)) for accurate signature. + +#### QualityGate.check + +```python +QualityGate.check(self, result: refactron.core.analysis_result.AnalysisResult) -> Tuple[bool, str] +``` + +Check if quality gate passes. + +Args: + result: Analysis result to check + +Returns: + Tuple of (passed, message) + +#### QualityGate.to_dict + +```python +QualityGate.to_dict(self) -> Dict +``` + +Convert quality gate to dictionary. + +### QualityGateParser + +```python +QualityGateParser() +``` + +Parse CLI output and enforce quality gates. + +#### QualityGateParser.enforce_gate + +```python +QualityGateParser.enforce_gate(result: refactron.core.analysis_result.AnalysisResult, gate: refactron.cicd.quality_gates.QualityGate) -> Tuple[bool, str, int] +``` + +Enforce quality gate on analysis result. + +Args: + result: Analysis result + gate: Quality gate configuration + +Returns: + Tuple of (passed, message, exit_code) + +#### QualityGateParser.generate_summary + +```python +QualityGateParser.generate_summary(result: refactron.core.analysis_result.AnalysisResult) -> str +``` + +Generate quality gate summary for CI/CD. + +Args: + result: Analysis result + +Returns: + Formatted summary string + +#### QualityGateParser.parse_exit_code + +```python +QualityGateParser.parse_exit_code(exit_code: int) -> Dict[str, int] +``` + +Parse exit code from refactron analyze. + +Args: + exit_code: Process exit code + +Returns: + Dictionary indicating if build should fail + +#### QualityGateParser.parse_json_output + +```python +QualityGateParser.parse_json_output(json_path: pathlib._local.Path) -> Dict +``` + +Parse JSON output from refactron analyze --format json. 
+
+Args:
+    json_path: Path to JSON output file
+
+Returns:
+    Parsed JSON dictionary
+
+Raises:
+    ValueError: If JSON is invalid
+    FileNotFoundError: If file doesn't exist
+
+#### QualityGateParser.parse_text_output
+
+```python
+QualityGateParser.parse_text_output(text: str) -> Dict[str, int]
+```
+
+Parse text output from refactron analyze command.
+
+Args:
+    text: Text output from CLI
+
+Returns:
+    Dictionary with issue counts
+
+## Functions
+
diff --git a/documentation/docs/api/core.md b/documentation/docs/api/core.md
new file mode 100644
index 0000000..c71b646
--- /dev/null
+++ b/documentation/docs/api/core.md
@@ -0,0 +1,2881 @@
+# refactron.core
+
+Core functionality for Refactron.
+
+## Classes
+
+## Functions
+
+
+---
+
+# refactron.core.analysis_result
+
+Analysis result representation.
+
+## Classes
+
+### AnalysisResult
+
+```python
+AnalysisResult(file_metrics: List[refactron.core.models.FileMetrics] = <factory>, total_files: int = 0, total_issues: int = 0, failed_files: List[refactron.core.analysis_result.FileAnalysisError] = <factory>) -> None
+```
+
+Result of code analysis.
+
+#### AnalysisResult.__init__
+
+```python
+AnalysisResult.__init__(self, file_metrics: List[refactron.core.models.FileMetrics] = <factory>, total_files: int = 0, total_issues: int = 0, failed_files: List[refactron.core.analysis_result.FileAnalysisError] = <factory>) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+#### AnalysisResult.issues_by_file
+
+```python
+AnalysisResult.issues_by_file(self, file_path: pathlib.Path) -> List[refactron.core.models.CodeIssue]
+```
+
+Get issues for a specific file.
+
+#### AnalysisResult.issues_by_level
+
+```python
+AnalysisResult.issues_by_level(self, level: refactron.core.models.IssueLevel) -> List[refactron.core.models.CodeIssue]
+```
+
+Get issues filtered by severity level.
+
+#### AnalysisResult.report
+
+```python
+AnalysisResult.report(self, detailed: bool = True) -> str
+```
+
+Generate a text report of the analysis.
+ +#### AnalysisResult.summary + +```python +AnalysisResult.summary(self) -> Dict[str, int] +``` + +Get a summary of the analysis. + +### FileAnalysisError + +```python +FileAnalysisError(file_path: pathlib._local.Path, error_message: str, error_type: str, recovery_suggestion: Optional[str] = None) -> None +``` + +Represents an error that occurred while analyzing a file. + +#### FileAnalysisError.__init__ + +```python +FileAnalysisError.__init__(self, file_path: pathlib._local.Path, error_message: str, error_type: str, recovery_suggestion: Optional[str] = None) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +## Functions + + +--- + +# refactron.core.backup + +Backup and Rollback System for Refactron. + +Provides functionality to: +- Auto-create .refactron-backup/ directory with original files before changes +- Git integration for automatic commits before major refactoring +- Rollback capability to restore original files + +## Classes + +### BackupManager + +```python +BackupManager(root_dir: Optional[pathlib._local.Path] = None) +``` + +Manage backups and rollbacks for refactoring operations. + +#### BackupManager.__init__ + +```python +BackupManager.__init__(self, root_dir: Optional[pathlib._local.Path] = None) +``` + +Initialize the backup manager. + +Args: + root_dir: Root directory for backups. Defaults to current working directory. + +#### BackupManager.backup_file + +```python +BackupManager.backup_file(self, file_path: pathlib._local.Path, session_id: str) -> pathlib._local.Path +``` + +Backup a single file to the backup directory. + +Args: + file_path: Path to the file to backup. + session_id: Session ID for this backup operation. + +Returns: + Path to the backup file. + +#### BackupManager.backup_files + +```python +BackupManager.backup_files(self, file_paths: List[pathlib._local.Path], session_id: str) -> Tuple[List[pathlib._local.Path], List[pathlib._local.Path]] +``` + +Backup multiple files. 
+ +Args: + file_paths: List of file paths to backup. + session_id: Session ID for this backup operation. + +Returns: + Tuple of (successful backup paths, failed file paths). + +#### BackupManager.clear_all_sessions + +```python +BackupManager.clear_all_sessions(self) -> int +``` + +Clear all backup sessions. + +Returns: + Number of sessions cleared. + +#### BackupManager.clear_session + +```python +BackupManager.clear_session(self, session_id: str) -> bool +``` + +Clear a specific backup session. + +Args: + session_id: Session ID to clear. + +Returns: + True if successful, False otherwise. + +#### BackupManager.create_backup_session + +```python +BackupManager.create_backup_session(self, description: str = '') -> str +``` + +Create a new backup session. + +Args: + description: Description of the operation being performed. + +Returns: + Session ID for the backup session. + +#### BackupManager.get_latest_session + +```python +BackupManager.get_latest_session(self) -> Optional[Dict[str, Any]] +``` + +Get the latest backup session. + +Returns: + Latest session information or None if no sessions exist. + +#### BackupManager.get_session + +```python +BackupManager.get_session(self, session_id: str) -> Optional[Dict[str, Any]] +``` + +Get information about a specific session. + +Args: + session_id: Session ID to look up. + +Returns: + Session information or None if not found. + +#### BackupManager.list_sessions + +```python +BackupManager.list_sessions(self) -> List[Dict[str, Any]] +``` + +List all backup sessions. + +Returns: + List of session information dictionaries. + +#### BackupManager.rollback_session + +```python +BackupManager.rollback_session(self, session_id: Optional[str] = None) -> Tuple[int, List[str]] +``` + +Rollback files from a backup session. + +Args: + session_id: Session ID to rollback. If None, uses the latest session. + +Returns: + Tuple of (number of files restored, list of failed file paths). 
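The backup-then-rollback flow these methods manage can be sketched with the standard library alone. This is a simplified stand-in, not `BackupManager` itself; only the `.refactron-backup/` directory name is taken from the docs above, the session naming is illustrative:

```python
import shutil
import tempfile
from pathlib import Path

# Simplified stand-in for the backup/rollback flow; session naming here
# is illustrative, not Refactron's actual scheme.
workdir = Path(tempfile.mkdtemp())
backup_dir = workdir / ".refactron-backup" / "session-001"
backup_dir.mkdir(parents=True)

target = workdir / "module.py"
target.write_text("x = 1\n")

# "backup_file": copy the original before changing it.
shutil.copy2(target, backup_dir / target.name)

# A refactoring operation then modifies the file...
target.write_text("x = 2\n")

# "rollback_session": restore the saved copy.
shutil.copy2(backup_dir / target.name, target)
```

After the final copy, `target` again holds the original content, which is exactly what a successful rollback reports in its restored-files count.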
+ +#### BackupManager.update_session_git_commit + +```python +BackupManager.update_session_git_commit(self, session_id: str, commit_hash: Optional[str]) -> bool +``` + +Update the Git commit hash for a session. + +Args: + session_id: Session ID to update. + commit_hash: Git commit hash to associate with the session. + +Returns: + True if successful, False if session not found. + +### BackupRollbackSystem + +```python +BackupRollbackSystem(root_dir: Optional[pathlib._local.Path] = None) +``` + +Combined backup and rollback system that integrates file backups with Git. + +#### BackupRollbackSystem.__init__ + +```python +BackupRollbackSystem.__init__(self, root_dir: Optional[pathlib._local.Path] = None) +``` + +Initialize the backup and rollback system. + +Args: + root_dir: Root directory for operations. + +#### BackupRollbackSystem.clear_all + +```python +BackupRollbackSystem.clear_all(self) -> int +``` + +Clear all backup sessions. + +#### BackupRollbackSystem.list_sessions + +```python +BackupRollbackSystem.list_sessions(self) -> List[Dict[str, Any]] +``` + +List all backup sessions. + +#### BackupRollbackSystem.prepare_for_refactoring + +```python +BackupRollbackSystem.prepare_for_refactoring(self, files: List[pathlib._local.Path], description: str = 'refactoring operation', create_git_commit: bool = True) -> Tuple[str, List[pathlib._local.Path]] +``` + +Prepare for a refactoring operation by creating backups and optionally a Git commit. + +Args: + files: List of files to be refactored. + description: Description of the refactoring operation. + create_git_commit: Whether to create a Git commit before refactoring. + +Returns: + Tuple of (session ID, list of files that failed to backup). + +#### BackupRollbackSystem.rollback + +```python +BackupRollbackSystem.rollback(self, session_id: Optional[str] = None, use_git: bool = False) -> Dict[str, Any] +``` + +Rollback changes from a refactoring session. + +Args: + session_id: Session ID to rollback. 
If None, uses the latest session. + use_git: Whether to use Git rollback instead of file backup. + +Returns: + Dictionary with rollback results. + +### GitIntegration + +```python +GitIntegration(repo_path: Optional[pathlib._local.Path] = None) +``` + +Git integration for automatic commits before refactoring. + +#### GitIntegration.__init__ + +```python +GitIntegration.__init__(self, repo_path: Optional[pathlib._local.Path] = None) +``` + +Initialize Git integration. + +Args: + repo_path: Path to the Git repository. Defaults to current directory. + +#### GitIntegration.create_pre_refactor_commit + +```python +GitIntegration.create_pre_refactor_commit(self, message: Optional[str] = None, files: Optional[List[pathlib._local.Path]] = None) -> Optional[str] +``` + +Create a commit before refactoring. + +Args: + message: Commit message. Defaults to auto-generated message. + files: Specific files to commit. If None, stages and commits all + uncommitted changes (git add -A). Note: This may include + unintended files like temporary files or build artifacts. + +Returns: + Commit hash if successful, None otherwise. + +#### GitIntegration.get_current_branch + +```python +GitIntegration.get_current_branch(self) -> Optional[str] +``` + +Get the current Git branch name. + +#### GitIntegration.get_current_commit + +```python +GitIntegration.get_current_commit(self) -> Optional[str] +``` + +Get the current commit hash. + +#### GitIntegration.git_rollback_to_commit + +```python +GitIntegration.git_rollback_to_commit(self, commit_hash: str) -> bool +``` + +Rollback to a specific commit (soft reset). + +Args: + commit_hash: Commit hash to rollback to. + +Returns: + True if successful, False otherwise. + +#### GitIntegration.has_uncommitted_changes + +```python +GitIntegration.has_uncommitted_changes(self) -> bool +``` + +Check if there are uncommitted changes. 
+ +#### GitIntegration.is_git_repo + +```python +GitIntegration.is_git_repo(self) -> bool +``` + +Check if the current directory is a Git repository. + +## Functions + + +--- + +# refactron.core.cache + +AST caching layer for performance optimization. + +## Classes + +### ASTCache + +```python +ASTCache(cache_dir: Optional[pathlib._local.Path] = None, enabled: bool = True, max_cache_size_mb: int = 100, cleanup_threshold_percent: float = 0.8) +``` + +Cache for parsed AST trees to avoid re-parsing identical files. + +Uses file content hashing to determine cache validity. + +#### ASTCache.__init__ + +```python +ASTCache.__init__(self, cache_dir: Optional[pathlib._local.Path] = None, enabled: bool = True, max_cache_size_mb: int = 100, cleanup_threshold_percent: float = 0.8) +``` + +Initialize the AST cache. + +Args: + cache_dir: Directory to store cache files. If None, uses temporary directory. + enabled: Whether caching is enabled. + max_cache_size_mb: Maximum cache size in megabytes. + cleanup_threshold_percent: Cleanup to this percentage of max when limit exceeded. + +#### ASTCache.clear + +```python +ASTCache.clear(self) -> None +``` + +Clear all cached data. + +#### ASTCache.get + +```python +ASTCache.get(self, file_path: pathlib._local.Path, content: str) -> Optional[Tuple[libcst._nodes.module.Module, Dict[str, Any]]] +``` + +Get cached AST and metadata for a file. + +Args: + file_path: Path to the file. + content: Current content of the file. + +Returns: + Tuple of (AST module, metadata) if cached, None otherwise. + +#### ASTCache.get_stats + +```python +ASTCache.get_stats(self) -> Dict[str, Any] +``` + +Get cache statistics. + +Returns: + Dictionary containing cache statistics. + +#### ASTCache.put + +```python +ASTCache.put(self, file_path: pathlib._local.Path, content: str, ast_module: libcst._nodes.module.Module, metadata: Optional[Dict[str, Any]] = None) -> None +``` + +Store AST and metadata in cache. + +Args: + file_path: Path to the file. 
+ content: Content of the file. + ast_module: Parsed AST module. + metadata: Optional metadata to cache alongside the AST. + +## Functions + + +--- + +# refactron.core.config + +Configuration management for Refactron. + +## Classes + +### RefactronConfig + +```python +RefactronConfig(version: str = , environment: Optional[str] = None, enabled_analyzers: List[str] = , enabled_refactorers: List[str] = , max_function_complexity: int = 10, max_function_length: int = 50, max_file_length: int = 500, max_parameters: int = 5, report_format: str = 'text', show_details: bool = True, require_preview: bool = True, backup_enabled: bool = True, include_patterns: List[str] = , exclude_patterns: List[str] = , custom_rules: Dict[str, Any] = , security_ignore_patterns: List[str] = , security_rule_whitelist: Dict[str, List[str]] = , security_min_confidence: float = 0.5, enable_ast_cache: bool = True, ast_cache_dir: Optional[pathlib._local.Path] = None, max_ast_cache_size_mb: int = 100, enable_incremental_analysis: bool = True, incremental_state_file: Optional[pathlib._local.Path] = None, enable_parallel_processing: bool = True, max_parallel_workers: Optional[int] = None, use_multiprocessing: bool = False, enable_memory_profiling: bool = False, memory_optimization_threshold_mb: float = 5.0, memory_pressure_threshold_percent: float = 80.0, memory_pressure_threshold_available_mb: float = 500.0, cache_cleanup_threshold_percent: float = 0.8, log_level: str = 'INFO', log_format: str = 'text', log_file: Optional[pathlib._local.Path] = None, log_max_bytes: int = 10485760, log_backup_count: int = 5, enable_console_logging: bool = True, enable_file_logging: bool = True, enable_metrics: bool = True, metrics_detailed: bool = True, enable_prometheus: bool = False, prometheus_host: str = '127.0.0.1', prometheus_port: int = 9090, enable_telemetry: bool = False, telemetry_endpoint: Optional[str] = None, enable_pattern_learning: bool = True, pattern_storage_dir: Optional[pathlib._local.Path] = None, 
pattern_learning_enabled: bool = True, pattern_ranking_enabled: bool = True) -> None +``` + +Configuration for Refactron analysis and refactoring. + +#### RefactronConfig.__init__ + +```python +RefactronConfig.__init__(self, version: str = , environment: Optional[str] = None, enabled_analyzers: List[str] = , enabled_refactorers: List[str] = , max_function_complexity: int = 10, max_function_length: int = 50, max_file_length: int = 500, max_parameters: int = 5, report_format: str = 'text', show_details: bool = True, require_preview: bool = True, backup_enabled: bool = True, include_patterns: List[str] = , exclude_patterns: List[str] = , custom_rules: Dict[str, Any] = , security_ignore_patterns: List[str] = , security_rule_whitelist: Dict[str, List[str]] = , security_min_confidence: float = 0.5, enable_ast_cache: bool = True, ast_cache_dir: Optional[pathlib._local.Path] = None, max_ast_cache_size_mb: int = 100, enable_incremental_analysis: bool = True, incremental_state_file: Optional[pathlib._local.Path] = None, enable_parallel_processing: bool = True, max_parallel_workers: Optional[int] = None, use_multiprocessing: bool = False, enable_memory_profiling: bool = False, memory_optimization_threshold_mb: float = 5.0, memory_pressure_threshold_percent: float = 80.0, memory_pressure_threshold_available_mb: float = 500.0, cache_cleanup_threshold_percent: float = 0.8, log_level: str = 'INFO', log_format: str = 'text', log_file: Optional[pathlib._local.Path] = None, log_max_bytes: int = 10485760, log_backup_count: int = 5, enable_console_logging: bool = True, enable_file_logging: bool = True, enable_metrics: bool = True, metrics_detailed: bool = True, enable_prometheus: bool = False, prometheus_host: str = '127.0.0.1', prometheus_port: int = 9090, enable_telemetry: bool = False, telemetry_endpoint: Optional[str] = None, enable_pattern_learning: bool = True, pattern_storage_dir: Optional[pathlib._local.Path] = None, pattern_learning_enabled: bool = True, 
pattern_ranking_enabled: bool = True) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+#### RefactronConfig.to_file
+
+```python
+RefactronConfig.to_file(self, config_path: pathlib.Path) -> None
+```
+
+Save configuration to a YAML file.
+
+Args:
+ config_path: Path where configuration should be saved
+
+Raises:
+ ConfigError: If config file cannot be written
+
+## Functions
+
+
+---
+
+# refactron.core.config_loader
+
+Enhanced configuration loader with profiles, inheritance, and versioning.
+
+## Classes
+
+### ConfigLoader
+
+```python
+ConfigLoader()
+```
+
+Loads and merges configuration with support for profiles and inheritance.
+
+## Functions
+
+
+---
+
+# refactron.core.config_templates
+
+Configuration templates for common Python frameworks.
+
+## Classes
+
+### ConfigTemplates
+
+```python
+ConfigTemplates()
+```
+
+Pre-configured templates for common Python frameworks.
+
+#### ConfigTemplates.get_base_template
+
+```python
+ConfigTemplates.get_base_template() -> Dict
+```
+
+Get base configuration template.
+
+#### ConfigTemplates.get_django_template
+
+```python
+ConfigTemplates.get_django_template() -> Dict
+```
+
+Get Django-specific configuration template.
+
+#### ConfigTemplates.get_fastapi_template
+
+```python
+ConfigTemplates.get_fastapi_template() -> Dict
+```
+
+Get FastAPI-specific configuration template.
+
+#### ConfigTemplates.get_flask_template
+
+```python
+ConfigTemplates.get_flask_template() -> Dict
+```
+
+Get Flask-specific configuration template.
+
+#### ConfigTemplates.get_template
+
+```python
+ConfigTemplates.get_template(framework: str) -> Dict
+```
+
+Get configuration template for a specific framework.
+
+Args:
+ framework: Framework name (django, fastapi, flask, base)
+
+Returns:
+ Configuration template dictionary
+
+Raises:
+ ValueError: If framework is not supported
+
+## Functions
+
+
+---
+
+# refactron.core.config_validator
+
+Configuration schema validation for Refactron. 
+ +## Classes + +### ConfigValidator + +```python +ConfigValidator() +``` + +Validates Refactron configuration against schema. + +## Functions + + +--- + +# refactron.core.credentials + +Local credential storage for Refactron CLI. + +This is intentionally minimal: credentials are stored in a user-only readable file +under ~/.refactron/. For production hardening, an OS keychain integration can be +added later. + +## Classes + +### RefactronCredentials + +```python +RefactronCredentials(api_base_url: 'str', access_token: 'str', token_type: 'str', expires_at: 'Optional[datetime]' = None, email: 'Optional[str]' = None, plan: 'Optional[str]' = None, api_key: 'Optional[str]' = None) -> None +``` + +Stored CLI credentials. + +#### RefactronCredentials.__init__ + +```python +RefactronCredentials.__init__(self, api_base_url: 'str', access_token: 'str', token_type: 'str', expires_at: 'Optional[datetime]' = None, email: 'Optional[str]' = None, plan: 'Optional[str]' = None, api_key: 'Optional[str]' = None) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### RefactronCredentials.to_dict + +```python +RefactronCredentials.to_dict(self) -> 'Dict[str, Any]' +``` + +No documentation available. + +## Functions + +### credentials_path + +```python +credentials_path() -> 'Path' +``` + +Default credentials file path. + +### delete_credentials + +```python +delete_credentials(path: 'Optional[Path]' = None) -> 'bool' +``` + +Delete stored credentials. Returns True if deleted, False if not present. + +### load_credentials + +```python +load_credentials(path: 'Optional[Path]' = None) -> 'Optional[RefactronCredentials]' +``` + +Load credentials from disk. Returns None if missing or invalid. + +### save_credentials + +```python +save_credentials(creds: 'RefactronCredentials', path: 'Optional[Path]' = None) -> 'None' +``` + +Save credentials to disk (0600 permissions where supported). 
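The load/save round trip described above can be sketched with the standard library alone. The helper names, file name, and credential fields below are illustrative stand-ins, not the module's actual internals:

```python
import json
import os
import stat
from pathlib import Path


def save_creds(path: Path, creds: dict) -> None:
    """Write credentials as JSON, then restrict the file to user-only
    read/write (0600), mirroring the behaviour save_credentials documents."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(creds))
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)


def load_creds(path: Path):
    """Return the parsed credentials, or None if missing or invalid,
    matching the contract load_credentials documents."""
    try:
        return json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        return None
```

On filesystems without POSIX permission bits the `chmod` is a best effort, which is why the docstring above says "0600 permissions where supported".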
+ + +--- + +# refactron.core.device_auth + +Device-code authentication helpers for Refactron CLI. + +Implements a minimal Device Authorization Grant-like flow against the Refactron API: +- POST /oauth/device to get (device_code, user_code, verification_uri) +- POST /oauth/token to poll until authorized and receive tokens + +## Classes + +### DeviceAuthorization + +```python +DeviceAuthorization(device_code: 'str', user_code: 'str', verification_uri: 'str', expires_in: 'int', interval: 'int') -> None +``` + +DeviceAuthorization(device_code: 'str', user_code: 'str', verification_uri: 'str', expires_in: 'int', interval: 'int') + +#### DeviceAuthorization.__init__ + +```python +DeviceAuthorization.__init__(self, device_code: 'str', user_code: 'str', verification_uri: 'str', expires_in: 'int', interval: 'int') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +### TokenResponse + +```python +TokenResponse(access_token: 'str', token_type: 'str', expires_in: 'int', email: 'Optional[str]' = None, plan: 'Optional[str]' = None, api_key: 'Optional[str]' = None) -> None +``` + +TokenResponse(access_token: 'str', token_type: 'str', expires_in: 'int', email: 'Optional[str]' = None, plan: 'Optional[str]' = None, api_key: 'Optional[str]' = None) + +#### TokenResponse.__init__ + +```python +TokenResponse.__init__(self, access_token: 'str', token_type: 'str', expires_in: 'int', email: 'Optional[str]' = None, plan: 'Optional[str]' = None, api_key: 'Optional[str]' = None) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### TokenResponse.expires_at + +```python +TokenResponse.expires_at(self) -> 'datetime' +``` + +No documentation available. + +## Functions + +### _normalize_base_url + +```python +_normalize_base_url(api_base_url: 'str') -> 'str' +``` + +No documentation available. 
+
+### _post_json
+
+```python
+_post_json(url: 'str', payload: 'Dict[str, Any]', timeout_seconds: 'int' = 10) -> 'Dict[str, Any]'
+```
+
+No documentation available.
+
+### poll_for_token
+
+```python
+poll_for_token(device_code: 'str', api_base_url: 'str' = 'https://api.refactron.dev', client_id: 'str' = 'refactron-cli', interval_seconds: 'int' = 5, expires_in_seconds: 'int' = 900, timeout_seconds: 'int' = 10, sleep_fn: 'Callable[[float], None]' = ) -> 'TokenResponse'
+```
+
+No documentation available.
+
+### start_device_authorization
+
+```python
+start_device_authorization(api_base_url: 'str' = 'https://api.refactron.dev', client_id: 'str' = 'refactron-cli', timeout_seconds: 'int' = 10) -> 'DeviceAuthorization'
+```
+
+No documentation available.
+
+
+---
+
+# refactron.core.exceptions
+
+Custom exception types for Refactron.
+
+This module defines granular exception types for different failure scenarios,
+enabling better error handling and recovery strategies.
+
+## Classes
+
+### AnalysisError
+
+```python
+AnalysisError(message: str, file_path: Optional[pathlib.Path] = None, analyzer_name: Optional[str] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Raised when code analysis fails.
+
+This exception is raised when an analyzer encounters an error
+while processing source code. Common causes include:
+- Syntax errors in the source code
+- Unsupported Python language features
+- AST parsing failures
+- File encoding issues
+
+#### AnalysisError.__init__
+
+```python
+AnalysisError.__init__(self, message: str, file_path: Optional[pathlib.Path] = None, analyzer_name: Optional[str] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Initialize the exception. 
+
+Args:
+ message: Error message describing what went wrong
+ file_path: Optional path to the file being analyzed
+ analyzer_name: Name of the analyzer that failed
+ recovery_suggestion: Optional suggestion for how to recover
+
+### ConfigError
+
+```python
+ConfigError(message: str, config_path: Optional[pathlib.Path] = None, config_key: Optional[str] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Raised when configuration is invalid or cannot be loaded.
+
+This exception is raised when there are problems with the
+configuration. Common causes include:
+- Invalid YAML syntax in config file
+- Missing required configuration options
+- Invalid configuration values (e.g., negative thresholds)
+- Configuration file not found or not readable
+
+#### ConfigError.__init__
+
+```python
+ConfigError.__init__(self, message: str, config_path: Optional[pathlib.Path] = None, config_key: Optional[str] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Initialize the exception.
+
+Args:
+ message: Error message describing what went wrong
+ config_path: Optional path to the config file
+ config_key: Optional specific config key that caused the error
+ recovery_suggestion: Optional suggestion for how to recover
+
+### RefactoringError
+
+```python
+RefactoringError(message: str, file_path: Optional[pathlib.Path] = None, operation_type: Optional[str] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Raised when code refactoring fails.
+
+This exception is raised when a refactoring operation cannot be
+completed successfully. 
Common causes include:
+- Unable to parse the source code
+- Refactoring would break code semantics
+- File write permission issues
+- Backup creation failures
+
+#### RefactoringError.__init__
+
+```python
+RefactoringError.__init__(self, message: str, file_path: Optional[pathlib.Path] = None, operation_type: Optional[str] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Initialize the exception.
+
+Args:
+ message: Error message describing what went wrong
+ file_path: Optional path to the file being refactored
+ operation_type: Type of refactoring operation that failed
+ recovery_suggestion: Optional suggestion for how to recover
+
+### RefactronError
+
+```python
+RefactronError(message: str, file_path: Optional[pathlib.Path] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Base exception for all Refactron errors.
+
+All custom exceptions in Refactron inherit from this class,
+allowing for easy catching of all Refactron-specific errors.
+
+#### RefactronError.__init__
+
+```python
+RefactronError.__init__(self, message: str, file_path: Optional[pathlib.Path] = None, recovery_suggestion: Optional[str] = None)
+```
+
+Initialize the exception.
+
+Args:
+ message: Error message describing what went wrong
+ file_path: Optional path to the file that caused the error
+ recovery_suggestion: Optional suggestion for how to recover from the error
+
+## Functions
+
+
+---
+
+# refactron.core.false_positive_tracker
+
+False positive tracking system for security rules.
+
+## Classes
+
+### FalsePositiveTracker
+
+```python
+FalsePositiveTracker(storage_path: pathlib.Path = None)
+```
+
+Tracks and learns from false positive patterns.
+
+#### FalsePositiveTracker.__init__
+
+```python
+FalsePositiveTracker.__init__(self, storage_path: pathlib.Path = None)
+```
+
+Initialize the false positive tracker. 
+
+Args:
+ storage_path: Path to store false positive data
+
+#### FalsePositiveTracker.clear_all
+
+```python
+FalsePositiveTracker.clear_all(self) -> None
+```
+
+Clear all false positive data.
+
+#### FalsePositiveTracker.clear_rule
+
+```python
+FalsePositiveTracker.clear_rule(self, rule_id: str) -> None
+```
+
+Clear all false positives for a specific rule.
+
+Args:
+ rule_id: The rule to clear
+
+#### FalsePositiveTracker.get_false_positive_patterns
+
+```python
+FalsePositiveTracker.get_false_positive_patterns(self, rule_id: str) -> List[str]
+```
+
+Get all false positive patterns for a rule.
+
+Args:
+ rule_id: The rule ID
+
+Returns:
+ List of false positive patterns
+
+#### FalsePositiveTracker.is_false_positive
+
+```python
+FalsePositiveTracker.is_false_positive(self, rule_id: str, pattern: str) -> bool
+```
+
+Check if a pattern is marked as a false positive.
+
+Args:
+ rule_id: The rule to check
+ pattern: The pattern to check
+
+Returns:
+ True if the pattern is a known false positive
+
+#### FalsePositiveTracker.load
+
+```python
+FalsePositiveTracker.load(self) -> None
+```
+
+Load false positive data from storage.
+
+#### FalsePositiveTracker.mark_false_positive
+
+```python
+FalsePositiveTracker.mark_false_positive(self, rule_id: str, pattern: str) -> None
+```
+
+Mark a pattern as a false positive for a specific rule.
+
+Args:
+ rule_id: The rule that produced the false positive
+ pattern: The pattern that was incorrectly flagged
+
+#### FalsePositiveTracker.save
+
+```python
+FalsePositiveTracker.save(self) -> None
+```
+
+Save false positive data to storage.
+
+## Functions
+
+
+---
+
+# refactron.core.incremental
+
+Incremental analysis tracking for performance optimization.
+
+## Classes
+
+### IncrementalAnalysisTracker
+
+```python
+IncrementalAnalysisTracker(state_file: Optional[pathlib.Path] = None, enabled: bool = True)
+```
+
+Track file changes to enable incremental analysis. 
+
+Only analyzes files that have changed since the last run.
+Thread-safe for concurrent updates.
+
+#### IncrementalAnalysisTracker.__init__
+
+```python
+IncrementalAnalysisTracker.__init__(self, state_file: Optional[pathlib.Path] = None, enabled: bool = True)
+```
+
+Initialize the incremental analysis tracker.
+
+Args:
+ state_file: Path to the state file. If None, uses default location.
+ enabled: Whether incremental analysis is enabled.
+
+#### IncrementalAnalysisTracker.cleanup_missing_files
+
+```python
+IncrementalAnalysisTracker.cleanup_missing_files(self, valid_file_paths: Set[pathlib.Path]) -> None
+```
+
+Remove files from state that no longer exist or are not in the valid set.
+
+Args:
+ valid_file_paths: Set of file paths that are still valid.
+
+#### IncrementalAnalysisTracker.clear
+
+```python
+IncrementalAnalysisTracker.clear(self) -> None
+```
+
+Clear all state data.
+
+#### IncrementalAnalysisTracker.get_changed_files
+
+```python
+IncrementalAnalysisTracker.get_changed_files(self, file_paths: List[pathlib.Path]) -> List[pathlib.Path]
+```
+
+Filter list of files to only those that have changed.
+
+Args:
+ file_paths: List of file paths to check.
+
+Returns:
+ List of files that have changed or are new.
+
+#### IncrementalAnalysisTracker.get_stats
+
+```python
+IncrementalAnalysisTracker.get_stats(self) -> Dict[str, int]
+```
+
+Get statistics about the tracked state.
+
+Returns:
+ Dictionary containing statistics.
+
+#### IncrementalAnalysisTracker.has_file_changed
+
+```python
+IncrementalAnalysisTracker.has_file_changed(self, file_path: pathlib.Path) -> bool
+```
+
+Check if a file has changed since the last analysis.
+
+Args:
+ file_path: Path to the file to check.
+
+Returns:
+ True if the file has changed or is new, False otherwise. 
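As a toy illustration of the bookkeeping such a tracker needs (the real implementation may also hash file contents; the class below is a self-contained stand-in, not Refactron's):

```python
import os
from pathlib import Path


class ToyChangeTracker:
    """Stand-in change tracker: a file counts as changed when its size or
    mtime differs from the last recorded state; unseen files always count."""

    def __init__(self) -> None:
        self._state: dict = {}  # path -> (size, mtime_ns)

    def has_file_changed(self, file_path: Path) -> bool:
        st = os.stat(file_path)
        return self._state.get(str(file_path)) != (st.st_size, st.st_mtime_ns)

    def update_file_state(self, file_path: Path) -> None:
        st = os.stat(file_path)
        self._state[str(file_path)] = (st.st_size, st.st_mtime_ns)
```

The check-then-update split mirrors the documented API: `has_file_changed` is read-only, and the caller records the new state via `update_file_state` only after a successful analysis.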
+
+#### IncrementalAnalysisTracker.remove_file_state
+
+```python
+IncrementalAnalysisTracker.remove_file_state(self, file_path: pathlib.Path) -> None
+```
+
+Remove a file from the state tracking.
+
+Args:
+ file_path: Path to the file to remove.
+
+#### IncrementalAnalysisTracker.save
+
+```python
+IncrementalAnalysisTracker.save(self) -> None
+```
+
+Save the current state to disk.
+
+#### IncrementalAnalysisTracker.update_file_state
+
+```python
+IncrementalAnalysisTracker.update_file_state(self, file_path: pathlib.Path) -> None
+```
+
+Update the state for a file after analysis.
+
+Args:
+ file_path: Path to the file that was analyzed.
+
+## Functions
+
+
+---
+
+# refactron.core.inference
+
+Inference engine wrapping astroid for semantic analysis.
+Provides capabilities to infer types, values, and resolve symbols.
+
+## Classes
+
+### InferenceEngine
+
+```python
+InferenceEngine()
+```
+
+Wrapper around astroid to provide high-level semantic analysis capabilities.
+
+#### InferenceEngine.get_node_type_name
+
+```python
+InferenceEngine.get_node_type_name(node: astroid.nodes.node_ng.NodeNG) -> str
+```
+
+Get the string representation of the inferred type.
+
+#### InferenceEngine.infer_node
+
+```python
+InferenceEngine.infer_node(node: astroid.nodes.node_ng.NodeNG, context: Optional[astroid.context.InferenceContext] = None) -> List[Any]
+```
+
+Attempt to infer the value/type of a given node.
+Returns a list of potential values (astroid nodes).
+
+#### InferenceEngine.is_subtype_of
+
+```python
+InferenceEngine.is_subtype_of(node: astroid.nodes.node_ng.NodeNG, type_name: str) -> bool
+```
+
+Check if node infers to a specific type name (e.g. 'str', 'int', 'MyClass').
+
+#### InferenceEngine.parse_file
+
+```python
+InferenceEngine.parse_file(file_path: str) -> astroid.nodes.scoped_nodes.scoped_nodes.Module
+```
+
+Parse a file into an astroid node tree. 
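astroid's inference is far richer than the standard library's `ast`, but the flavour of value inference can be approximated for literals. The helper below is a hedged stdlib sketch of the idea, not part of Refactron:

```python
import ast


def infer_literal(source: str, name: str):
    """Return the literal value last assigned to `name` in `source`,
    or None when it is never assigned a plain literal."""
    value = None
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == name:
                    try:
                        value = ast.literal_eval(node.value)
                    except ValueError:
                        pass  # not a literal expression; skip it
    return value
```

Where this sketch gives up on anything non-literal, astroid follows assignments, calls, and imports across scopes, which is what `infer_node` exposes.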
+ +#### InferenceEngine.parse_string + +```python +InferenceEngine.parse_string(code: str, module_name: str = '') -> astroid.nodes.scoped_nodes.scoped_nodes.Module +``` + +Parse source code string into an astroid node tree. + +## Functions + + +--- + +# refactron.core.logging_config + +Structured logging configuration for Refactron. + +This module provides JSON-formatted logging for CI/CD and log aggregation systems, +with configurable log levels and rotation support. + +## Classes + +### JSONFormatter + +```python +JSONFormatter(fmt=None, datefmt=None, style='%', validate=True, *, defaults=None) +``` + +Custom JSON formatter for structured logging. + +#### JSONFormatter.__init__ + +```python +JSONFormatter.__init__(self, fmt=None, datefmt=None, style='%', validate=True, *, defaults=None) +``` + +Initialize the formatter with specified format strings. + +Initialize the formatter either with the specified format string, or a +default as described above. Allow for specialized date formatting with +the optional datefmt argument. If datefmt is omitted, you get an +ISO8601-like (or RFC 3339-like) format. + +Use a style parameter of '%', '\{' or '$' to specify that you want to +use one of %-formatting, :meth:`str.format` (``{}``) formatting or +:class:`string.Template` formatting in your format string. + +.. versionchanged:: 3.2 + Added the ``style`` parameter. + +#### JSONFormatter.format + +```python +JSONFormatter.format(self, record: logging.LogRecord) -> str +``` + +Format log record as JSON. + +Args: + record: The log record to format + +Returns: + JSON-formatted log string + +#### JSONFormatter.formatException + +```python +JSONFormatter.formatException(self, ei) +``` + +Format and return the specified exception information as a string. + +This default implementation just uses +traceback.print_exception() + +#### JSONFormatter.formatMessage + +```python +JSONFormatter.formatMessage(self, record) +``` + +No documentation available. 
+
+#### JSONFormatter.formatStack
+
+```python
+JSONFormatter.formatStack(self, stack_info)
+```
+
+This method is provided as an extension point for specialized
+formatting of stack information.
+
+The input data is a string as returned from a call to
+:func:`traceback.print_stack`, but with the last trailing newline
+removed.
+
+The base implementation just returns the value passed in.
+
+#### JSONFormatter.formatTime
+
+```python
+JSONFormatter.formatTime(self, record, datefmt=None)
+```
+
+Return the creation time of the specified LogRecord as formatted text.
+
+This method should be called from format() by a formatter which
+wants to make use of a formatted time. This method can be overridden
+in formatters to provide for any specific requirement, but the
+basic behaviour is as follows: if datefmt (a string) is specified,
+it is used with time.strftime() to format the creation time of the
+record. Otherwise, an ISO8601-like (or RFC 3339-like) format is used.
+The resulting string is returned. This function uses a user-configurable
+function to convert the creation time to a tuple. By default,
+time.localtime() is used; to change this for a particular formatter
+instance, set the 'converter' attribute to a function with the same
+signature as time.localtime() or time.gmtime(). To change it for all
+formatters, for example if you want all logging times to be shown in GMT,
+set the 'converter' attribute in the Formatter class.
+
+#### JSONFormatter.usesTime
+
+```python
+JSONFormatter.usesTime(self)
+```
+
+Check if the format uses the creation time of the record.
+
+### StructuredLogger
+
+```python
+StructuredLogger(name: str = 'refactron', level: str = 'INFO', log_file: Optional[pathlib.Path] = None, log_format: str = 'json', max_bytes: int = 10485760, backup_count: int = 5, enable_console: bool = True, enable_file: bool = True)
+```
+
+Structured logger with JSON formatting and rotation support. 
+
+#### StructuredLogger.__init__
+
+```python
+StructuredLogger.__init__(self, name: str = 'refactron', level: str = 'INFO', log_file: Optional[pathlib.Path] = None, log_format: str = 'json', max_bytes: int = 10485760, backup_count: int = 5, enable_console: bool = True, enable_file: bool = True)
+```
+
+Initialize structured logger.
+
+Args:
+ name: Logger name
+ level: Log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
+ log_file: Path to log file (if None, uses default location)
+ log_format: Log format ('json' or 'text')
+ max_bytes: Maximum log file size before rotation
+ backup_count: Number of backup files to keep
+ enable_console: Enable console logging
+ enable_file: Enable file logging
+
+#### StructuredLogger.get_logger
+
+```python
+StructuredLogger.get_logger(self) -> logging.Logger
+```
+
+Get the configured logger instance.
+
+Returns:
+ Configured logger instance
+
+#### StructuredLogger.log_with_context
+
+```python
+StructuredLogger.log_with_context(self, level: str, message: str, extra_data: Optional[Dict[str, Any]] = None) -> None
+```
+
+Log message with additional context data.
+
+Args:
+ level: Log level (debug, info, warning, error, critical)
+ message: Log message
+ extra_data: Additional context data to include in log
+
+## Functions
+
+### setup_logging
+
+```python
+setup_logging(level: str = 'INFO', log_file: Optional[pathlib.Path] = None, log_format: str = 'json', max_bytes: int = 10485760, backup_count: int = 5, enable_console: bool = True, enable_file: bool = True) -> refactron.core.logging_config.StructuredLogger
+```
+
+Setup structured logging for Refactron. 
+ +Args: + level: Log level (DEBUG, INFO, WARNING, ERROR, CRITICAL) + log_file: Path to log file + log_format: Log format ('json' or 'text') + max_bytes: Maximum log file size before rotation + backup_count: Number of backup files to keep + enable_console: Enable console logging + enable_file: Enable file logging + +Returns: + Configured StructuredLogger instance + + +--- + +# refactron.core.memory_profiler + +Memory profiling and optimization utilities. + +## Classes + +### MemoryProfiler + +```python +MemoryProfiler(enabled: bool = True, pressure_threshold_percent: float = 80.0, pressure_threshold_available_mb: float = 500.0) +``` + +Memory profiling and optimization utilities. + +Helps track and optimize memory usage for large codebases. + +#### MemoryProfiler.__init__ + +```python +MemoryProfiler.__init__(self, enabled: bool = True, pressure_threshold_percent: float = 80.0, pressure_threshold_available_mb: float = 500.0) +``` + +Initialize the memory profiler. + +Args: + enabled: Whether memory profiling is enabled. + pressure_threshold_percent: Percent threshold for high memory pressure. + pressure_threshold_available_mb: Available memory threshold in MB. + +#### MemoryProfiler.check_memory_pressure + +```python +MemoryProfiler.check_memory_pressure(self) -> bool +``` + +Check if the system is under memory pressure. + +Returns: + True if memory pressure is high (>80% usage). + +#### MemoryProfiler.clear_snapshots + +```python +MemoryProfiler.clear_snapshots(self) -> None +``` + +Clear all stored snapshots. + +#### MemoryProfiler.compare + +```python +MemoryProfiler.compare(self, start_label: str, end_label: str) -> Dict[str, float] +``` + +Compare two memory snapshots. + +Args: + start_label: Label of the starting snapshot. + end_label: Label of the ending snapshot. + +Returns: + Dictionary with memory differences. 
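The profiler above reports process-level RSS/VMS numbers; as a self-contained stand-in, the same snapshot-then-compare workflow can be shown with the standard library's `tracemalloc`, which tracks Python-level allocations only (field names here are illustrative):

```python
import tracemalloc


def measure_allocation_delta() -> dict:
    """Snapshot traced memory before and after an allocation and report
    the difference, mirroring the snapshot()/compare() workflow."""
    tracemalloc.start()
    start, _ = tracemalloc.get_traced_memory()
    payload = [bytes(1024) for _ in range(1000)]  # hold ~1 MB alive
    end, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "start_bytes": start,
        "end_bytes": end,
        "diff_bytes": end - start,
        "objects_kept": len(payload),
    }
```

Labelled snapshots plus a delta are usually all a caller needs to decide whether a pass is worth optimizing, which is the role `compare` plays here.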
+ +#### MemoryProfiler.get_current_memory + +```python +MemoryProfiler.get_current_memory(self) -> refactron.core.memory_profiler.MemorySnapshot +``` + +Get current memory usage snapshot. + +Returns: + MemorySnapshot with current memory usage. + +#### MemoryProfiler.get_stats + +```python +MemoryProfiler.get_stats(self) -> Dict[str, Any] +``` + +Get memory profiling statistics. + +Returns: + Dictionary containing statistics. + +#### MemoryProfiler.optimize_for_large_files + +```python +MemoryProfiler.optimize_for_large_files(self, file_size_mb: float, threshold_mb: Optional[float] = None) -> bool +``` + +Determine if special optimization is needed for a large file. + +Args: + file_size_mb: File size in megabytes. + threshold_mb: Optional threshold override. If None, uses default of 5.0 MB. + +Returns: + True if optimization is recommended. + +#### MemoryProfiler.profile_function + +```python +MemoryProfiler.profile_function(self, func: Callable[..., ~T], *args: Any, label: Optional[str] = None, **kwargs: Any) -> Tuple[~T, Dict[str, Any]] +``` + +Profile memory usage of a function call. + +Args: + func: Function to profile. + *args: Positional arguments for the function. + label: Optional label for logging. + **kwargs: Keyword arguments for the function. + +Returns: + Tuple of (function result, profiling info). + +#### MemoryProfiler.snapshot + +```python +MemoryProfiler.snapshot(self, label: str) -> refactron.core.memory_profiler.MemorySnapshot +``` + +Take a memory snapshot with a label. + +Args: + label: Label for this snapshot. + +Returns: + MemorySnapshot with current memory usage. + +#### MemoryProfiler.suggest_gc + +```python +MemoryProfiler.suggest_gc(self) -> None +``` + +Suggest garbage collection if memory pressure is high. + +### MemorySnapshot + +```python +MemorySnapshot(rss_mb: float, vms_mb: float, percent: float, available_mb: float) -> None +``` + +Snapshot of memory usage at a point in time. 
+ +#### MemorySnapshot.__init__ + +```python +MemorySnapshot.__init__(self, rss_mb: float, vms_mb: float, percent: float, available_mb: float) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +## Functions + +### estimate_file_size_mb + +```python +estimate_file_size_mb(file_path: str) -> float +``` + +Estimate file size in megabytes. + +Args: + file_path: Path to the file. + +Returns: + File size in MB. + +### stream_large_file + +```python +stream_large_file(file_path: str, chunk_size: int = 8192) -> Any +``` + +Stream a large file in chunks instead of reading all at once. + +Args: + file_path: Path to the file. + chunk_size: Size of each chunk in bytes. + +Yields: + Chunks of file content. + + +--- + +# refactron.core.metrics + +Metrics collection and tracking for Refactron. + +This module provides execution metrics tracking including: +- Analysis time per file and total run time +- Refactoring success/failure rates +- Rule hit counts per analyzer/refactorer + +## Classes + +### FileMetric + +```python +FileMetric(file_path: str, analysis_time_ms: float, lines_of_code: int, issues_found: int, analyzers_run: List[str] = , timestamp: str = , success: bool = True, error_message: Optional[str] = None) -> None +``` + +Metrics for a single file analysis. + +#### FileMetric.__init__ + +```python +FileMetric.__init__(self, file_path: str, analysis_time_ms: float, lines_of_code: int, issues_found: int, analyzers_run: List[str] = , timestamp: str = , success: bool = True, error_message: Optional[str] = None) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +### MetricsCollector + +```python +MetricsCollector() -> None +``` + +Centralized metrics collection for Refactron operations. + +#### MetricsCollector.__init__ + +```python +MetricsCollector.__init__(self) -> None +``` + +Initialize metrics collector. 
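The rule-hit side of the collector reduces to counters keyed by (analyzer, issue type). A minimal stand-in (not Refactron's implementation; the summary key format is illustrative) looks like:

```python
from collections import Counter


class TinyMetricsCollector:
    """Stand-in for the hit-count portion of MetricsCollector."""

    def __init__(self) -> None:
        self.hits: Counter = Counter()

    def record_analyzer_hit(self, analyzer_name: str, issue_type: str) -> None:
        # One increment per issue an analyzer reports.
        self.hits[(analyzer_name, issue_type)] += 1

    def get_analysis_summary(self) -> dict:
        return {f"{a}/{t}": n for (a, t), n in self.hits.items()}
```

Keeping the key a tuple rather than a joined string until summary time makes it cheap to aggregate per analyzer or per issue type later.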
+ +#### MetricsCollector.end_analysis + +```python +MetricsCollector.end_analysis(self) -> None +``` + +Mark the end of an analysis session. + +#### MetricsCollector.end_refactoring + +```python +MetricsCollector.end_refactoring(self) -> None +``` + +Mark the end of a refactoring session. + +#### MetricsCollector.get_analysis_summary + +```python +MetricsCollector.get_analysis_summary(self) -> Dict[str, Any] +``` + +Get summary of analysis metrics. + +Returns: + Dictionary containing analysis summary metrics + +#### MetricsCollector.get_combined_summary + +```python +MetricsCollector.get_combined_summary(self) -> Dict[str, Any] +``` + +Get combined summary of all metrics. + +Returns: + Dictionary containing all metrics summaries + +#### MetricsCollector.get_refactoring_summary + +```python +MetricsCollector.get_refactoring_summary(self) -> Dict[str, Any] +``` + +Get summary of refactoring metrics. + +Returns: + Dictionary containing refactoring summary metrics + +#### MetricsCollector.record_analyzer_hit + +```python +MetricsCollector.record_analyzer_hit(self, analyzer_name: str, issue_type: str) -> None +``` + +Record that an analyzer found an issue. + +Args: + analyzer_name: Name of the analyzer + issue_type: Type of issue found + +#### MetricsCollector.record_file_analysis + +```python +MetricsCollector.record_file_analysis(self, file_path: str, analysis_time_ms: float, lines_of_code: int, issues_found: int, analyzers_run: List[str], success: bool = True, error_message: Optional[str] = None) -> None +``` + +Record metrics for a single file analysis. 
+ +Args: + file_path: Path to the analyzed file + analysis_time_ms: Time taken to analyze the file in milliseconds + lines_of_code: Number of lines of code in the file + issues_found: Number of issues found in the file + analyzers_run: List of analyzer names that were run + success: Whether the analysis succeeded + error_message: Error message if analysis failed + +#### MetricsCollector.record_refactoring + +```python +MetricsCollector.record_refactoring(self, operation_type: str, file_path: str, execution_time_ms: float, success: bool, risk_level: str = 'safe', error_message: Optional[str] = None) -> None +``` + +Record metrics for a single refactoring operation. + +Args: + operation_type: Type of refactoring operation + file_path: Path to the refactored file + execution_time_ms: Time taken to perform refactoring in milliseconds + success: Whether the refactoring succeeded + risk_level: Risk level of the refactoring + error_message: Error message if refactoring failed + +#### MetricsCollector.reset + +```python +MetricsCollector.reset(self) -> None +``` + +Reset all metrics to initial state. + +#### MetricsCollector.start_analysis + +```python +MetricsCollector.start_analysis(self) -> None +``` + +Mark the start of an analysis session. + +#### MetricsCollector.start_refactoring + +```python +MetricsCollector.start_refactoring(self) -> None +``` + +Mark the start of a refactoring session. + +### RefactoringMetric + +```python +RefactoringMetric(operation_type: str, file_path: str, execution_time_ms: float, success: bool, risk_level: str, timestamp: str = , error_message: Optional[str] = None) -> None +``` + +Metrics for a single refactoring operation. + +#### RefactoringMetric.__init__ + +```python +RefactoringMetric.__init__(self, operation_type: str, file_path: str, execution_time_ms: float, success: bool, risk_level: str, timestamp: str = , error_message: Optional[str] = None) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. 
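The module-level getter/reset pair documented for this module follows the common lazy-singleton pattern; sketched below with a plain dict standing in for the collector (the dict's keys are illustrative):

```python
from typing import Optional

_collector: Optional[dict] = None


def get_metrics_collector() -> dict:
    """Create the shared collector on first use, then return the same
    instance on every later call."""
    global _collector
    if _collector is None:
        _collector = {"file_metrics": [], "refactoring_metrics": []}
    return _collector


def reset_metrics_collector() -> None:
    """Drop the shared instance so the next call starts fresh."""
    global _collector
    _collector = None
```

A global collector lets analyzers record hits without threading a collector argument through every call site; the reset hook exists mainly so tests can isolate themselves.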
+
+## Functions
+
+### get_metrics_collector
+
+```python
+get_metrics_collector() -> refactron.core.metrics.MetricsCollector
+```
+
+Get the global metrics collector instance.
+
+Returns:
+ Global MetricsCollector instance
+
+### reset_metrics_collector
+
+```python
+reset_metrics_collector() -> None
+```
+
+Reset the global metrics collector.
+
+
+---
+
+# refactron.core.models
+
+Data models for Refactron.
+
+## Classes
+
+### CodeIssue
+
+```python
+CodeIssue(category: refactron.core.models.IssueCategory, level: refactron.core.models.IssueLevel, message: str, file_path: pathlib.Path, line_number: int, column: int = 0, end_line: Optional[int] = None, code_snippet: Optional[str] = None, suggestion: Optional[str] = None, rule_id: Optional[str] = None, confidence: float = 1.0, metadata: Dict[str, Any] = ) -> None
+```
+
+Represents a detected code issue.
+
+#### CodeIssue.__init__
+
+```python
+CodeIssue.__init__(self, category: refactron.core.models.IssueCategory, level: refactron.core.models.IssueLevel, message: str, file_path: pathlib.Path, line_number: int, column: int = 0, end_line: Optional[int] = None, code_snippet: Optional[str] = None, suggestion: Optional[str] = None, rule_id: Optional[str] = None, confidence: float = 1.0, metadata: Dict[str, Any] = ) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+### FileMetrics
+
+```python
+FileMetrics(file_path: pathlib.Path, lines_of_code: int, comment_lines: int, blank_lines: int, complexity: float, maintainability_index: float, functions: int, classes: int, issues: List[refactron.core.models.CodeIssue] = ) -> None
+```
+
+Metrics for a single file. 
+
+#### FileMetrics.__init__
+
+```python
+FileMetrics.__init__(self, file_path: pathlib._local.Path, lines_of_code: int, comment_lines: int, blank_lines: int, complexity: float, maintainability_index: float, functions: int, classes: int, issues: List[refactron.core.models.CodeIssue] = <factory>) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+### IssueCategory
+
+```python
+IssueCategory(*values)
+```
+
+Categories of code issues.
+
+### IssueLevel
+
+```python
+IssueLevel(*values)
+```
+
+Severity level of code issues.
+
+### RefactoringOperation
+
+```python
+RefactoringOperation(operation_type: str, file_path: pathlib._local.Path, line_number: int, description: str, old_code: str, new_code: str, risk_score: float, operation_id: str = <factory>, reasoning: Optional[str] = None, metadata: Dict[str, Any] = <factory>) -> None
+```
+
+Represents a refactoring operation to be applied.
+
+#### RefactoringOperation.__init__
+
+```python
+RefactoringOperation.__init__(self, operation_type: str, file_path: pathlib._local.Path, line_number: int, description: str, old_code: str, new_code: str, risk_score: float, operation_id: str = <factory>, reasoning: Optional[str] = None, metadata: Dict[str, Any] = <factory>) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+## Functions
+
+
+---
+
+# refactron.core.parallel
+
+Parallel processing utilities for performance optimization.
+
+## Classes
+
+### ParallelProcessor
+
+```python
+ParallelProcessor(max_workers: Optional[int] = None, use_processes: bool = True, enabled: bool = True)
+```
+
+Parallel processing manager for analyzing multiple files concurrently.
+
+Supports both multiprocessing and threading based on the task type.
+
+#### ParallelProcessor.__init__
+
+```python
+ParallelProcessor.__init__(self, max_workers: Optional[int] = None, use_processes: bool = True, enabled: bool = True)
+```
+
+Initialize the parallel processor.
+
+Args:
+    max_workers: Maximum number of worker processes/threads.
+ If None, uses CPU count capped at 8 workers to avoid resource exhaustion. + use_processes: If True, uses multiprocessing; if False, uses threading. + enabled: Whether parallel processing is enabled. + +#### ParallelProcessor.get_config + +```python +ParallelProcessor.get_config(self) -> Dict[str, Any] +``` + +Get the current configuration. + +Returns: + Dictionary containing configuration details. + +#### ParallelProcessor.process_files + +```python +ParallelProcessor.process_files(self, files: List[pathlib._local.Path], process_func: Callable[[pathlib._local.Path], Tuple[Optional[refactron.core.models.FileMetrics], Optional[refactron.core.analysis_result.FileAnalysisError]]], progress_callback: Optional[Callable[[int, int], NoneType]] = None) -> Tuple[List[refactron.core.models.FileMetrics], List[refactron.core.analysis_result.FileAnalysisError]] +``` + +Process multiple files in parallel. + +Args: + files: List of file paths to process. + process_func: Function to process a single file. Should return + (FileMetrics, None) on success or (None, FileAnalysisError) on error. + progress_callback: Optional callback for progress updates (completed, total). + +Returns: + Tuple of (successful results, failed files). + +## Functions + + +--- + +# refactron.core.prometheus_metrics + +Prometheus metrics exporter for Refactron. + +This module provides Prometheus-compatible metrics endpoint for monitoring +Refactron's performance and usage in production environments. + +## Classes + +### MetricsHTTPHandler + +```python +MetricsHTTPHandler(request, client_address, server) +``` + +HTTP handler for Prometheus metrics endpoint. + +#### MetricsHTTPHandler.__init__ + +```python +MetricsHTTPHandler.__init__(self, request, client_address, server) +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### MetricsHTTPHandler.address_string + +```python +MetricsHTTPHandler.address_string(self) +``` + +Return the client address. 
+ +#### MetricsHTTPHandler.date_time_string + +```python +MetricsHTTPHandler.date_time_string(self, timestamp=None) +``` + +Return the current date and time formatted for a message header. + +#### MetricsHTTPHandler.do_GET + +```python +MetricsHTTPHandler.do_GET(self) -> None +``` + +Handle GET requests to /metrics endpoint. + +#### MetricsHTTPHandler.end_headers + +```python +MetricsHTTPHandler.end_headers(self) +``` + +Send the blank line ending the MIME headers. + +#### MetricsHTTPHandler.finish + +```python +MetricsHTTPHandler.finish(self) +``` + +No documentation available. + +#### MetricsHTTPHandler.flush_headers + +```python +MetricsHTTPHandler.flush_headers(self) +``` + +No documentation available. + +#### MetricsHTTPHandler.handle + +```python +MetricsHTTPHandler.handle(self) +``` + +Handle multiple requests if necessary. + +#### MetricsHTTPHandler.handle_expect_100 + +```python +MetricsHTTPHandler.handle_expect_100(self) +``` + +Decide what to do with an "Expect: 100-continue" header. + +If the client is expecting a 100 Continue response, we must +respond with either a 100 Continue or a final response before +waiting for the request body. The default is to always respond +with a 100 Continue. You can behave differently (for example, +reject unauthorized requests) by overriding this method. + +This method should either return True (possibly after sending +a 100 Continue response) or send an error response and return +False. + +#### MetricsHTTPHandler.handle_one_request + +```python +MetricsHTTPHandler.handle_one_request(self) +``` + +Handle a single HTTP request. + +You normally don't need to override this method; see the class +__doc__ string for information on how to handle specific HTTP +commands such as GET and POST. + +#### MetricsHTTPHandler.log_date_time_string + +```python +MetricsHTTPHandler.log_date_time_string(self) +``` + +Return the current time formatted for logging. 
+ +#### MetricsHTTPHandler.log_error + +```python +MetricsHTTPHandler.log_error(self, format, *args) +``` + +Log an error. + +This is called when a request cannot be fulfilled. By +default it passes the message on to log_message(). + +Arguments are the same as for log_message(). + +XXX This should go to the separate error log. + +#### MetricsHTTPHandler.log_message + +```python +MetricsHTTPHandler.log_message(self, format: str, *args: Any) -> None +``` + +Suppress default logging. + +#### MetricsHTTPHandler.log_request + +```python +MetricsHTTPHandler.log_request(self, code='-', size='-') +``` + +Log an accepted request. + +This is called by send_response(). + +#### MetricsHTTPHandler.parse_request + +```python +MetricsHTTPHandler.parse_request(self) +``` + +Parse a request (internal). + +The request should be stored in self.raw_requestline; the results +are in self.command, self.path, self.request_version and +self.headers. + +Return True for success, False for failure; on failure, any relevant +error response has already been sent back. + +#### MetricsHTTPHandler.send_error + +```python +MetricsHTTPHandler.send_error(self, code, message=None, explain=None) +``` + +Send and log an error reply. + +Arguments are +* code: an HTTP error code + 3 digits +* message: a simple optional 1 line reason phrase. + *( HTAB / SP / VCHAR / %x80-FF ) + defaults to short entry matching the response code +* explain: a detailed message defaults to the long entry + matching the response code. + +This sends an error response (so it must be called before any +output has been generated), logs the error, and finally sends +a piece of HTML explaining the error to the user. + +#### MetricsHTTPHandler.send_header + +```python +MetricsHTTPHandler.send_header(self, keyword, value) +``` + +Send a MIME header to the headers buffer. 
+ +#### MetricsHTTPHandler.send_response + +```python +MetricsHTTPHandler.send_response(self, code, message=None) +``` + +Add the response header to the headers buffer and log the +response code. + +Also send two standard headers with the server software +version and the current date. + +#### MetricsHTTPHandler.send_response_only + +```python +MetricsHTTPHandler.send_response_only(self, code, message=None) +``` + +Send the response header only. + +#### MetricsHTTPHandler.setup + +```python +MetricsHTTPHandler.setup(self) +``` + +No documentation available. + +#### MetricsHTTPHandler.version_string + +```python +MetricsHTTPHandler.version_string(self) +``` + +Return the server software version string. + +### PrometheusMetrics + +```python +PrometheusMetrics() -> None +``` + +Prometheus metrics formatter and exporter. + +#### PrometheusMetrics.__init__ + +```python +PrometheusMetrics.__init__(self) -> None +``` + +Initialize Prometheus metrics. + +#### PrometheusMetrics.format_metrics + +```python +PrometheusMetrics.format_metrics(self) -> str +``` + +Format metrics in Prometheus exposition format. + +Returns: + String containing Prometheus-formatted metrics + +### PrometheusMetricsServer + +```python +PrometheusMetricsServer(host: str = '127.0.0.1', port: int = 9090) -> None +``` + +HTTP server for exposing Prometheus metrics. + +#### PrometheusMetricsServer.__init__ + +```python +PrometheusMetricsServer.__init__(self, host: str = '127.0.0.1', port: int = 9090) -> None +``` + +Initialize Prometheus metrics server. + +Args: + host: Host to bind to (default: 127.0.0.1 for localhost-only access) + port: Port to listen on + +#### PrometheusMetricsServer.is_running + +```python +PrometheusMetricsServer.is_running(self) -> bool +``` + +Check if the metrics server is running. 
+
+Returns:
+    True if server is running, False otherwise
+
+#### PrometheusMetricsServer.start
+
+```python
+PrometheusMetricsServer.start(self) -> None
+```
+
+Start the metrics server in a background thread.
+
+#### PrometheusMetricsServer.stop
+
+```python
+PrometheusMetricsServer.stop(self) -> None
+```
+
+Stop the metrics server.
+
+## Functions
+
+### get_metrics_server
+
+```python
+get_metrics_server() -> Optional[refactron.core.prometheus_metrics.PrometheusMetricsServer]
+```
+
+Get the global metrics server instance.
+
+Returns:
+    PrometheusMetricsServer instance or None if not started
+
+### start_metrics_server
+
+```python
+start_metrics_server(host: str = '127.0.0.1', port: int = 9090) -> refactron.core.prometheus_metrics.PrometheusMetricsServer
+```
+
+Start the global Prometheus metrics server.
+
+Args:
+    host: Host to bind to (default: 127.0.0.1 for localhost-only access)
+    port: Port to listen on
+
+Returns:
+    PrometheusMetricsServer instance
+
+### stop_metrics_server
+
+```python
+stop_metrics_server() -> None
+```
+
+Stop the global Prometheus metrics server.
+
+
+---
+
+# refactron.core.refactor_result
+
+Refactoring result representation.
+
+## Classes
+
+### RefactorResult
+
+```python
+RefactorResult(operations: List[refactron.core.models.RefactoringOperation] = <factory>, applied: bool = False, preview_mode: bool = True) -> None
+```
+
+Result of refactoring operations.
+
+#### RefactorResult.__init__
+
+```python
+RefactorResult.__init__(self, operations: List[refactron.core.models.RefactoringOperation] = <factory>, applied: bool = False, preview_mode: bool = True) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+#### RefactorResult.apply
+
+```python
+RefactorResult.apply(self) -> bool
+```
+
+Apply the refactoring operations to the files.
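+
+Because `RefactorResult` defaults to preview mode, a common workflow is to inspect diffs before calling `apply`. As a hedged sketch (not the actual `show_diff` implementation), a unified diff for one operation's `old_code`/`new_code` pair can be rendered with the standard library:
+
+```python
+# Illustrative preview rendering; refactron's show_diff may format differently.
+import difflib
+
+old_code = "def add(a, b):\n    return a+b\n"
+new_code = "def add(a: int, b: int) -> int:\n    return a + b\n"
+
+diff = "\n".join(difflib.unified_diff(
+    old_code.splitlines(),
+    new_code.splitlines(),
+    fromfile="app.py (before)",
+    tofile="app.py (after)",
+    lineterm="",
+))
+print(diff)
+```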
+ +#### RefactorResult.get_ranking_score + +```python +RefactorResult.get_ranking_score(self, operation: refactron.core.models.RefactoringOperation) -> float +``` + +Get ranking score for an operation (0.0 if not ranked). + +#### RefactorResult.operations_by_file + +```python +RefactorResult.operations_by_file(self, file_path: pathlib._local.Path) -> List[refactron.core.models.RefactoringOperation] +``` + +Get operations for a specific file. + +#### RefactorResult.operations_by_type + +```python +RefactorResult.operations_by_type(self, operation_type: str) -> List[refactron.core.models.RefactoringOperation] +``` + +Get operations of a specific type. + +#### RefactorResult.show_diff + +```python +RefactorResult.show_diff(self) -> str +``` + +Show a diff of all operations. + +#### RefactorResult.summary + +```python +RefactorResult.summary(self) -> Dict[str, int] +``` + +Get a summary of refactoring operations. + +#### RefactorResult.top_ranked_operations + +```python +RefactorResult.top_ranked_operations(self, top_n: int = 10) -> List[refactron.core.models.RefactoringOperation] +``` + +Get top N ranked operations by ranking score. + +## Functions + + +--- + +# refactron.core.refactron + +Main Refactron class - the entry point for all operations. + +## Classes + +### Refactron + +```python +Refactron(config: Optional[refactron.core.config.RefactronConfig] = None) +``` + +Main Refactron class for code analysis and refactoring. + +Example: + >>> refactron = Refactron() + >>> result = refactron.analyze("mycode.py") + >>> print(result.report()) + +#### Refactron.__init__ + +```python +Refactron.__init__(self, config: Optional[refactron.core.config.RefactronConfig] = None) +``` + +Initialize Refactron. + +Args: + config: Configuration object. If None, uses default config. + +#### Refactron.analyze + +```python +Refactron.analyze(self, target: Union[str, pathlib._local.Path]) -> refactron.core.analysis_result.AnalysisResult +``` + +Analyze a file or directory. 
+ +Args: + target: Path to file or directory to analyze + +Returns: + AnalysisResult containing all detected issues and any errors encountered + +Note: + This method implements graceful degradation - if individual files fail + to analyze, they are logged and skipped, allowing analysis to continue + on remaining files. + +#### Refactron.clear_caches + +```python +Refactron.clear_caches(self) -> None +``` + +Clear all performance-related caches. + +#### Refactron.detect_project_root + +```python +Refactron.detect_project_root(self, file_path: pathlib._local.Path) -> pathlib._local.Path +``` + +Detect project root by looking for common markers in parent directories. + +The search walks up the directory tree from the file's parent directory, +checking for common project markers up to a fixed maximum depth. + +Args: + file_path: Path to a file in the project. + +Returns: + The path to the project root if any of the known markers are found + within the search depth limit, or the file's parent directory if no + markers are detected. + +#### Refactron.get_performance_stats + +```python +Refactron.get_performance_stats(self) -> dict +``` + +Get performance statistics from all optimization components. + +Returns: + Dictionary containing performance statistics. + +#### Refactron.get_python_files + +```python +Refactron.get_python_files(self, directory: pathlib._local.Path) -> List[pathlib._local.Path] +``` + +Get all Python files in a directory, respecting exclude patterns. + +#### Refactron.record_feedback + +```python +Refactron.record_feedback(self, operation_id: str, action: str, reason: Optional[str] = None, operation: Optional[refactron.core.models.RefactoringOperation] = None) -> None +``` + +Record developer feedback on a refactoring suggestion. 
+ +Args: + operation_id: Unique identifier for the refactoring operation + action: Feedback action - "accepted", "rejected", or "ignored" + reason: Optional reason for the feedback + operation: Optional RefactoringOperation object (used to extract metadata) + +Note: + If pattern storage is not initialized, this method will silently fail. + +#### Refactron.refactor + +```python +Refactron.refactor(self, target: Union[str, pathlib._local.Path], preview: bool = True, operation_types: Optional[List[str]] = None) -> refactron.core.refactor_result.RefactorResult +``` + +Refactor a file or directory. + +Args: + target: Path to file or directory to refactor + preview: If True, show changes without applying them + operation_types: Specific refactoring operations to apply (None = all) + +Returns: + RefactorResult containing all proposed operations + +Note: + This method implements graceful degradation - if individual files fail + to refactor, they are logged and skipped, allowing refactoring to continue + on remaining files. + +## Functions + + +--- + +# refactron.core.repositories + +GitHub repository integration for Refactron CLI. + +This module provides functionality to interact with the Refactron backend API +to fetch GitHub repositories connected to the user's account. + +## Classes + +### Repository + +```python +Repository(id: 'int', name: 'str', full_name: 'str', description: 'Optional[str]', private: 'bool', html_url: 'str', clone_url: 'str', ssh_url: 'str', default_branch: 'str', language: 'Optional[str]', updated_at: 'str') -> None +``` + +Represents a GitHub repository. + +#### Repository.__init__ + +```python +Repository.__init__(self, id: 'int', name: 'str', full_name: 'str', description: 'Optional[str]', private: 'bool', html_url: 'str', clone_url: 'str', ssh_url: 'str', default_branch: 'str', language: 'Optional[str]', updated_at: 'str') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. 
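+
+As a hedged illustration, a Repository-style record could be populated from a GitHub-API-shaped payload like the one below. The field mapping is assumed from the signature above; `RepoRecord` is a hypothetical stand-in, and the real backend response format may differ.
+
+```python
+# Hypothetical stand-in for refactron.core.repositories.Repository.
+from dataclasses import dataclass
+from typing import List, Optional
+
+
+@dataclass
+class RepoRecord:
+    id: int
+    name: str
+    full_name: str
+    private: bool
+    default_branch: str
+    language: Optional[str]
+
+
+# Payload shape is an assumption for illustration.
+payload: List[dict] = [
+    {"id": 1, "name": "Refactron_lib",
+     "full_name": "Refactron-ai/Refactron_lib",
+     "private": False, "default_branch": "main", "language": "Python"},
+]
+
+repos = [RepoRecord(**item) for item in payload]
+print(repos[0].full_name)  # Refactron-ai/Refactron_lib
+```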
+ +## Functions + +### list_repositories + +```python +list_repositories(api_base_url: 'str', timeout_seconds: 'int' = 10) -> 'List[Repository]' +``` + +Fetch all GitHub repositories connected to the user's account. + +Args: + api_base_url: The Refactron API base URL + timeout_seconds: Request timeout in seconds + +Returns: + List of Repository objects + +Raises: + RuntimeError: If the request fails or user is not authenticated + + +--- + +# refactron.core.telemetry + +Opt-in telemetry system for Refactron. + +This module provides anonymous usage data collection to understand real-world +usage patterns and performance characteristics. All telemetry is opt-in and +respects user privacy. + +## Classes + +### TelemetryCollector + +```python +TelemetryCollector(enabled: bool = False, anonymous_id: Optional[str] = None, telemetry_file: Optional[pathlib._local.Path] = None) +``` + +Collects and manages telemetry data with privacy considerations. + +#### TelemetryCollector.__init__ + +```python +TelemetryCollector.__init__(self, enabled: bool = False, anonymous_id: Optional[str] = None, telemetry_file: Optional[pathlib._local.Path] = None) +``` + +Initialize telemetry collector. + +Args: + enabled: Whether telemetry collection is enabled + anonymous_id: Anonymous identifier for this installation + telemetry_file: Path to file where telemetry data is stored + +#### TelemetryCollector.flush + +```python +TelemetryCollector.flush(self) -> None +``` + +Write collected events to disk. + +#### TelemetryCollector.get_summary + +```python +TelemetryCollector.get_summary(self) -> Dict[str, Any] +``` + +Get a summary of collected telemetry events. + +Returns: + Dictionary containing telemetry summary + +#### TelemetryCollector.record_analysis_completed + +```python +TelemetryCollector.record_analysis_completed(self, files_analyzed: int, total_time_ms: float, issues_found: int, analyzers_used: List[str]) -> None +``` + +Record an analysis completion event. 
+ +Args: + files_analyzed: Number of files analyzed + total_time_ms: Total analysis time in milliseconds + issues_found: Number of issues found + analyzers_used: List of analyzers that were used + +#### TelemetryCollector.record_error + +```python +TelemetryCollector.record_error(self, error_type: str, error_category: str, context: Optional[str] = None) -> None +``` + +Record an error event. + +Args: + error_type: Type of error (generic, no specific error messages) + error_category: Category of error (e.g., 'analysis', 'refactoring') + context: Optional context information (should not contain PII) + +#### TelemetryCollector.record_event + +```python +TelemetryCollector.record_event(self, event_type: str, data: Optional[Dict[str, Any]] = None) -> None +``` + +Record a telemetry event. + +Args: + event_type: Type of event (e.g., 'analysis_completed', 'refactoring_applied') + data: Additional event data (should not contain PII) + +#### TelemetryCollector.record_feature_usage + +```python +TelemetryCollector.record_feature_usage(self, feature_name: str, metadata: Optional[Dict[str, Any]] = None) -> None +``` + +Record a feature usage event. + +Args: + feature_name: Name of the feature used + metadata: Optional metadata about feature usage + +#### TelemetryCollector.record_refactoring_applied + +```python +TelemetryCollector.record_refactoring_applied(self, operation_type: str, files_affected: int, total_time_ms: float, success: bool) -> None +``` + +Record a refactoring operation event. + +Args: + operation_type: Type of refactoring operation + files_affected: Number of files affected + total_time_ms: Total refactoring time in milliseconds + success: Whether the refactoring succeeded + +### TelemetryConfig + +```python +TelemetryConfig(config_file: Optional[pathlib._local.Path] = None) +``` + +Configuration for telemetry system. 
+
+#### TelemetryConfig.__init__
+
+```python
+TelemetryConfig.__init__(self, config_file: Optional[pathlib._local.Path] = None)
+```
+
+Initialize telemetry configuration.
+
+Args:
+    config_file: Path to telemetry configuration file
+
+#### TelemetryConfig.disable
+
+```python
+TelemetryConfig.disable(self) -> None
+```
+
+Disable telemetry collection.
+
+#### TelemetryConfig.enable
+
+```python
+TelemetryConfig.enable(self, anonymous_id: Optional[str] = None) -> None
+```
+
+Enable telemetry collection.
+
+Args:
+    anonymous_id: Optional anonymous identifier (generated if not provided)
+
+#### TelemetryConfig.save_config
+
+```python
+TelemetryConfig.save_config(self) -> None
+```
+
+Save telemetry configuration to file.
+
+### TelemetryEvent
+
+```python
+TelemetryEvent(event_type: str, timestamp: str = <factory>, session_id: str = <factory>, data: Dict[str, Any] = <factory>) -> None
+```
+
+Represents a single telemetry event.
+
+#### TelemetryEvent.__init__
+
+```python
+TelemetryEvent.__init__(self, event_type: str, timestamp: str = <factory>, session_id: str = <factory>, data: Dict[str, Any] = <factory>) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+## Functions
+
+### disable_telemetry
+
+```python
+disable_telemetry() -> None
+```
+
+Disable telemetry collection globally.
+
+### enable_telemetry
+
+```python
+enable_telemetry() -> None
+```
+
+Enable telemetry collection globally.
+
+### get_telemetry_collector
+
+```python
+get_telemetry_collector(enabled: Optional[bool] = None) -> refactron.core.telemetry.TelemetryCollector
+```
+
+Get the global telemetry collector instance.
+
+Args:
+    enabled: Override enabled status (uses config if None)
+
+Returns:
+    Global TelemetryCollector instance
+
+
+---
+
+# refactron.core.workspace
+
+Workspace management for Refactron CLI.
+
+This module handles the mapping between remote GitHub repositories and local
+directory paths, enabling seamless navigation and context switching.
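+
+A minimal sketch of such a repository-to-path mapping, persisted as JSON the way a `workspaces.json` file might be. The schema shown here is an assumption for illustration, not the exact file format refactron writes:
+
+```python
+# Illustrative repo -> local-path mapping; not the real workspaces.json schema.
+import json
+import tempfile
+from pathlib import Path
+
+mapping = {
+    "user/repo": {
+        "repo_id": 42,
+        "local_path": "/home/dev/projects/repo",
+        "connected_at": "2024-01-01T00:00:00Z",
+    }
+}
+
+with tempfile.TemporaryDirectory() as tmp:
+    config = Path(tmp) / "workspaces.json"
+    # Round-trip the mapping through disk, as a workspace manager would.
+    config.write_text(json.dumps(mapping, indent=2))
+    loaded = json.loads(config.read_text())
+
+print(loaded["user/repo"]["local_path"])  # /home/dev/projects/repo
+```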
+ +## Classes + +### WorkspaceManager + +```python +WorkspaceManager(config_path: 'Optional[Path]' = None) -> 'None' +``` + +Manages workspace mappings between repositories and local paths. + +#### WorkspaceManager.__init__ + +```python +WorkspaceManager.__init__(self, config_path: 'Optional[Path]' = None) -> 'None' +``` + +Initialize the workspace manager. + +Args: + config_path: Path to the workspaces.json file (default: ~/.refactron/workspaces.json) + +#### WorkspaceManager.add_workspace + +```python +WorkspaceManager.add_workspace(self, mapping: 'WorkspaceMapping') -> 'None' +``` + +Add or update a workspace mapping. + +Args: + mapping: The workspace mapping to add + +#### WorkspaceManager.detect_repository + +```python +WorkspaceManager.detect_repository(self, directory: 'Optional[Path]' = None) -> 'Optional[str]' +``` + +Attempt to detect the GitHub repository from the .git config. + +Args: + directory: Directory to search (default: current directory) + +Returns: + The repository full name (e.g., "user/repo"), or None if not detected + +#### WorkspaceManager.get_workspace + +```python +WorkspaceManager.get_workspace(self, repo_name: 'str') -> 'Optional[WorkspaceMapping]' +``` + +Get a workspace mapping by repository name. + +Args: + repo_name: The repository name (e.g., "repo" or "user/repo") + +Returns: + The workspace mapping, or None if not found + +#### WorkspaceManager.get_workspace_by_path + +```python +WorkspaceManager.get_workspace_by_path(self, local_path: 'str') -> 'Optional[WorkspaceMapping]' +``` + +Get a workspace mapping by local path. + +Args: + local_path: The local directory path + +Returns: + The workspace mapping, or None if not found + +#### WorkspaceManager.list_workspaces + +```python +WorkspaceManager.list_workspaces(self) -> 'list[WorkspaceMapping]' +``` + +List all workspace mappings. 
+ +Returns: + List of all workspace mappings + +#### WorkspaceManager.remove_workspace + +```python +WorkspaceManager.remove_workspace(self, repo_full_name: 'str') -> 'bool' +``` + +Remove a workspace mapping. + +Args: + repo_full_name: The full name of the repository + +Returns: + True if removed, False if not found + +### WorkspaceMapping + +```python +WorkspaceMapping(repo_id: 'int', repo_name: 'str', repo_full_name: 'str', local_path: 'str', connected_at: 'str') -> None +``` + +Represents a mapping between a remote repository and a local path. + +#### WorkspaceMapping.__init__ + +```python +WorkspaceMapping.__init__(self, repo_id: 'int', repo_name: 'str', repo_full_name: 'str', local_path: 'str', connected_at: 'str') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### WorkspaceMapping.to_dict + +```python +WorkspaceMapping.to_dict(self) -> 'Dict[str, Any]' +``` + +Convert to dictionary for JSON serialization. + +## Functions + diff --git a/documentation/docs/api/llm.md b/documentation/docs/api/llm.md new file mode 100644 index 0000000..a57f9b6 --- /dev/null +++ b/documentation/docs/api/llm.md @@ -0,0 +1,290 @@ +# refactron.llm + +LLM integration for intelligent code suggestions using free cloud APIs. + +## Classes + +## Functions + + +--- + +# refactron.llm.backend_client + +Client for Refactron backend LLM proxy. + +## Classes + +### BackendLLMClient + +```python +BackendLLMClient(backend_url: 'Optional[str]' = None, model: 'str' = 'llama-3.3-70b-versatile', temperature: 'float' = 0.2, max_tokens: 'int' = 2000) +``` + +Client that proxies LLM requests through Refactron backend. + +#### BackendLLMClient.__init__ + +```python +BackendLLMClient.__init__(self, backend_url: 'Optional[str]' = None, model: 'str' = 'llama-3.3-70b-versatile', temperature: 'float' = 0.2, max_tokens: 'int' = 2000) +``` + +Initialize backend client. 
+ +Args: + backend_url: Refactron backend URL + model: Model name to use + temperature: Sampling temperature + max_tokens: Maximum tokens to generate + +#### BackendLLMClient.check_health + +```python +BackendLLMClient.check_health(self) -> 'bool' +``` + +Check if the backend API is accessible. + +Returns: + True if API is accessible, False otherwise + +#### BackendLLMClient.generate + +```python +BackendLLMClient.generate(self, prompt: 'str', system: 'Optional[str]' = None, temperature: 'Optional[float]' = None, max_tokens: 'Optional[int]' = None) -> 'str' +``` + +Generate text using backend API. + +Args: + prompt: The user prompt + system: Optional system prompt + temperature: Override default temperature + max_tokens: Override default max tokens + +Returns: + Generated text + +## Functions + + +--- + +# refactron.llm.client + +Groq cloud API client for free LLM inference. + +## Classes + +### GroqClient + +```python +GroqClient(api_key: 'Optional[str]' = None, model: 'str' = 'llama-3.3-70b-versatile', temperature: 'float' = 0.2, max_tokens: 'int' = 2000) +``` + +Client for Groq cloud API (free LLM inference). + +#### GroqClient.__init__ + +```python +GroqClient.__init__(self, api_key: 'Optional[str]' = None, model: 'str' = 'llama-3.3-70b-versatile', temperature: 'float' = 0.2, max_tokens: 'int' = 2000) +``` + +Initialize Groq client. + +Args: + api_key: Groq API key (defaults to GROQ_API_KEY env var) + model: Model name to use + temperature: Sampling temperature + max_tokens: Maximum tokens to generate + +#### GroqClient.check_health + +```python +GroqClient.check_health(self) -> 'bool' +``` + +Check if the Groq API is accessible. + +Returns: + True if API is accessible, False otherwise + +#### GroqClient.generate + +```python +GroqClient.generate(self, prompt: 'str', system: 'Optional[str]' = None, temperature: 'Optional[float]' = None, max_tokens: 'Optional[int]' = None) -> 'str' +``` + +Generate text using Groq. 
+
+Args:
+    prompt: The user prompt
+    system: Optional system prompt
+    temperature: Override default temperature
+    max_tokens: Override default max tokens
+
+Returns:
+    Generated text
+
+## Functions
+
+
+---
+
+# refactron.llm.models
+
+Data models for LLM integration.
+
+## Classes
+
+### RefactoringSuggestion
+
+```python
+RefactoringSuggestion(issue: refactron.core.models.CodeIssue, original_code: str, context_files: List[str], proposed_code: str, explanation: str, reasoning: str, model_name: str, confidence_score: float, llm_confidence: float = 0.5, status: refactron.llm.models.SuggestionStatus = <factory>, safety_result: Optional[refactron.llm.models.SafetyCheckResult] = None, suggestion_id: str = <factory>, timestamp: float = <factory>) -> None
+```
+
+A refactoring suggestion generated by the LLM.
+
+#### RefactoringSuggestion.__init__
+
+```python
+RefactoringSuggestion.__init__(self, issue: refactron.core.models.CodeIssue, original_code: str, context_files: List[str], proposed_code: str, explanation: str, reasoning: str, model_name: str, confidence_score: float, llm_confidence: float = 0.5, status: refactron.llm.models.SuggestionStatus = <factory>, safety_result: Optional[refactron.llm.models.SafetyCheckResult] = None, suggestion_id: str = <factory>, timestamp: float = <factory>) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+### SafetyCheckResult
+
+```python
+SafetyCheckResult(passed: bool, score: float, issues: List[str], syntax_valid: bool = False, side_effects: List[str] = <factory>) -> None
+```
+
+Result of a safety gate validation.
+
+#### SafetyCheckResult.__init__
+
+```python
+SafetyCheckResult.__init__(self, passed: bool, score: float, issues: List[str], syntax_valid: bool = False, side_effects: List[str] = <factory>) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+### SuggestionStatus
+
+```python
+SuggestionStatus(*values)
+```
+
+Status of a refactoring suggestion.
+ +## Functions + + +--- + +# refactron.llm.orchestrator + +Orchestrator for LLM-based refactoring suggestions. + +## Classes + +### LLMOrchestrator + +```python +LLMOrchestrator(retriever: Optional[refactron.rag.retriever.ContextRetriever] = None, llm_client: Union[refactron.llm.client.GroqClient, refactron.llm.backend_client.BackendLLMClient, NoneType] = None, safety_gate: Optional[refactron.llm.safety.SafetyGate] = None) +``` + +Coordinates RAG context retrieval and LLM generation. + +#### LLMOrchestrator.__init__ + +```python +LLMOrchestrator.__init__(self, retriever: Optional[refactron.rag.retriever.ContextRetriever] = None, llm_client: Union[refactron.llm.client.GroqClient, refactron.llm.backend_client.BackendLLMClient, NoneType] = None, safety_gate: Optional[refactron.llm.safety.SafetyGate] = None) +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### LLMOrchestrator.generate_documentation + +```python +LLMOrchestrator.generate_documentation(self, code: str, file_path: str = 'unknown') -> refactron.llm.models.RefactoringSuggestion +``` + +Generate documentation for the provided code. + +Args: + code: The code to document + file_path: Optional file path for context + +Returns: + A suggestion containing the documented code + +#### LLMOrchestrator.generate_suggestion + +```python +LLMOrchestrator.generate_suggestion(self, issue: refactron.core.models.CodeIssue, original_code: str) -> refactron.llm.models.RefactoringSuggestion +``` + +Generate a refactoring suggestion for a code issue. + +Args: + issue: The code issue to fix + original_code: The failing code snippet + +Returns: + A validated refactoring suggestion + +## Functions + + +--- + +# refactron.llm.prompts + +Prompt templates for LLM code suggestions. + +## Classes + +## Functions + + +--- + +# refactron.llm.safety + +Safety gate for validating LLM-generated code. 
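+
+As a hedged sketch of what such a gate can check, the function below combines a syntax check (via `ast.parse`) with a confidence threshold, mirroring the `passed`/`syntax_valid`/`issues` fields of `SafetyCheckResult`. It is not the actual `SafetyGate` implementation, which applies additional validation:
+
+```python
+# Illustrative safety check; not the real SafetyGate.validate.
+import ast
+
+
+def check_suggestion(proposed_code: str, confidence: float,
+                     min_confidence: float = 0.7) -> dict:
+    issues = []
+    try:
+        # Reject suggestions that are not even valid Python.
+        ast.parse(proposed_code)
+        syntax_valid = True
+    except SyntaxError as exc:
+        syntax_valid = False
+        issues.append(f"syntax error: {exc.msg}")
+    if confidence < min_confidence:
+        issues.append(
+            f"confidence {confidence:.2f} below threshold {min_confidence}")
+    return {
+        "passed": syntax_valid and not issues,
+        "syntax_valid": syntax_valid,
+        "issues": issues,
+    }
+
+
+print(check_suggestion("def ok():\n    return 1\n", confidence=0.9))
+print(check_suggestion("def broken(:", confidence=0.9))
+```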
+ +## Classes + +### SafetyGate + +```python +SafetyGate(min_confidence: float = 0.7) +``` + +Validates code suggestions for safety and correctness. + +#### SafetyGate.__init__ + +```python +SafetyGate.__init__(self, min_confidence: float = 0.7) +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### SafetyGate.validate + +```python +SafetyGate.validate(self, suggestion: refactron.llm.models.RefactoringSuggestion) -> refactron.llm.models.SafetyCheckResult +``` + +Validate a refactoring suggestion. + +Args: + suggestion: The suggestion to validate + +Returns: + Safety check result + +## Functions + diff --git a/documentation/docs/api/patterns.md b/documentation/docs/api/patterns.md new file mode 100644 index 0000000..3e4bccf --- /dev/null +++ b/documentation/docs/api/patterns.md @@ -0,0 +1,914 @@ +# refactron.patterns + +Pattern Learning System for Refactron. + +## Classes + +## Functions + + +--- + +# refactron.patterns.fingerprint + +Pattern fingerprinting for code pattern identification. + +## Classes + +### PatternFingerprinter + +```python +PatternFingerprinter() -> None +``` + +Generates fingerprints for code patterns using AST-based hashing. + +#### PatternFingerprinter.__init__ + +```python +PatternFingerprinter.__init__(self) -> None +``` + +Initialize the pattern fingerprinter. + +#### PatternFingerprinter.fingerprint_code + +```python +PatternFingerprinter.fingerprint_code(self, code_snippet: str) -> str +``` + +Generate hash fingerprint for a code snippet. + +Args: + code_snippet: Source code to fingerprint + +Returns: + SHA256 hash of the normalized code pattern + +#### PatternFingerprinter.fingerprint_issue_context + +```python +PatternFingerprinter.fingerprint_issue_context(self, issue: refactron.core.models.CodeIssue, source_code: str, context_lines: int = 3) -> str +``` + +Generate fingerprint for issue context. 
+ +Args: + issue: CodeIssue to fingerprint + source_code: Full source code of the file + context_lines: Number of lines before/after to include (default: 3) + +Returns: + SHA256 hash of the normalized issue context pattern + +#### PatternFingerprinter.fingerprint_refactoring + +```python +PatternFingerprinter.fingerprint_refactoring(self, operation: refactron.core.models.RefactoringOperation) -> str +``` + +Generate fingerprint for refactoring operation. + +Args: + operation: RefactoringOperation to fingerprint + +Returns: + SHA256 hash of the normalized refactoring pattern + +## Functions + + +--- + +# refactron.patterns.learner + +Pattern learning engine that learns from feedback and refactoring history. + +## Classes + +### PatternLearner + +```python +PatternLearner(storage: refactron.patterns.storage.PatternStorage, fingerprinter: refactron.patterns.fingerprint.PatternFingerprinter) -> None +``` + +Learns patterns from feedback and refactoring history. + +#### PatternLearner.__init__ + +```python +PatternLearner.__init__(self, storage: refactron.patterns.storage.PatternStorage, fingerprinter: refactron.patterns.fingerprint.PatternFingerprinter) -> None +``` + +Initialize pattern learner. + +Args: + storage: PatternStorage instance for loading/saving patterns + fingerprinter: PatternFingerprinter for generating code fingerprints + +Raises: + ValueError: If storage or fingerprinter is None + +#### PatternLearner.batch_learn + +```python +PatternLearner.batch_learn(self, feedback_list: List[Tuple[refactron.core.models.RefactoringOperation, refactron.patterns.models.RefactoringFeedback]]) -> Dict[str, int] +``` + +Process multiple feedback records efficiently. 
+ +Args: + feedback_list: List of (operation, feedback) tuples to process + +Returns: + Dictionary with statistics: \{'processed': int, 'created': int, + 'updated': int, 'failed': int\} + +Raises: + ValueError: If feedback_list is None or contains None values + +#### PatternLearner.learn_from_feedback + +```python +PatternLearner.learn_from_feedback(self, operation: refactron.core.models.RefactoringOperation, feedback: refactron.patterns.models.RefactoringFeedback) -> Optional[str] +``` + +Learn from a single feedback record. + +Args: + operation: RefactoringOperation that was evaluated + feedback: Feedback record containing developer decision + +Returns: + Pattern ID if pattern was created/updated, None if skipped + +Raises: + ValueError: If operation or feedback is None + RuntimeError: If pattern storage operations fail + +#### PatternLearner.update_pattern_metrics + +```python +PatternLearner.update_pattern_metrics(self, pattern_id: str, before_metrics: refactron.core.models.FileMetrics, after_metrics: refactron.core.models.FileMetrics) -> None +``` + +Update metrics for a pattern based on before/after comparison. + +Args: + pattern_id: ID of the pattern to update + before_metrics: FileMetrics before refactoring + after_metrics: FileMetrics after refactoring + +Raises: + ValueError: If pattern_id is empty or metrics are None + RuntimeError: If pattern not found or update fails + +## Functions + + +--- + +# refactron.patterns.learning_service + +Background service for pattern learning and maintenance. + +## Classes + +### LearningService + +```python +LearningService(storage: refactron.patterns.storage.PatternStorage, learner: Optional[refactron.patterns.learner.PatternLearner] = None) -> None +``` + +Background service for pattern learning and maintenance. 
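The maintenance behaviour documented below for `cleanup_old_patterns` — dropping patterns not seen within a window and reporting `{'removed': int, 'total': int}` — reduces to a date filter. A self-contained sketch, with plain dicts standing in for stored patterns (not the library's implementation):

```python
from datetime import datetime, timedelta
from typing import Dict


def cleanup_old_patterns(patterns: Dict[str, datetime], now: datetime, days: int = 90) -> Dict[str, int]:
    """Remove entries whose last_seen is older than `days`; report statistics."""
    if days < 0:
        raise ValueError("days must be non-negative")
    cutoff = now - timedelta(days=days)
    total = len(patterns)
    stale = [pid for pid, last_seen in patterns.items() if last_seen < cutoff]
    for pid in stale:
        del patterns[pid]
    return {"removed": len(stale), "total": total}


now = datetime(2025, 6, 1)
store = {"p1": now - timedelta(days=5), "p2": now - timedelta(days=120)}
stats = cleanup_old_patterns(store, now)  # p2 is past the 90-day cutoff
```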
+ +#### LearningService.__init__ + +```python +LearningService.__init__(self, storage: refactron.patterns.storage.PatternStorage, learner: Optional[refactron.patterns.learner.PatternLearner] = None) -> None +``` + +Initialize learning service. + +Args: + storage: PatternStorage instance for data access + learner: PatternLearner instance (created if None) + +Raises: + ValueError: If storage is None + +#### LearningService.cleanup_old_patterns + +```python +LearningService.cleanup_old_patterns(self, days: int = 90) -> Dict[str, int] +``` + +Remove patterns that haven't been seen recently. + +Args: + days: Number of days of inactivity before removal (default: 90) + +Returns: + Dictionary with cleanup statistics: \{'removed': int, 'total': int\} + +Raises: + ValueError: If days is negative + RuntimeError: If cleanup fails + +#### LearningService.process_pending_feedback + +```python +LearningService.process_pending_feedback(self, limit: Optional[int] = None) -> Dict[str, int] +``` + +Process any pending feedback records that haven't been learned from yet. + +Args: + limit: Maximum number of feedback records to process (None = all) + +Returns: + Dictionary with processing statistics + +Raises: + RuntimeError: If processing fails critically + +#### LearningService.update_pattern_scores + +```python +LearningService.update_pattern_scores(self) -> Dict[str, int] +``` + +Recalculate scores for all patterns. + +This updates acceptance rates and benefit scores based on current feedback. + +Returns: + Dictionary with update statistics: \{'updated': int, 'total': int\} + +Raises: + RuntimeError: If update fails + +## Functions + + +--- + +# refactron.patterns.matcher + +Pattern matching for finding similar code patterns. + +## Classes + +### PatternMatcher + +```python +PatternMatcher(storage: refactron.patterns.storage.PatternStorage, cache_ttl_seconds: int = 300) +``` + +Matches code patterns against learned patterns with scoring. 
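The multiplicative scoring documented below under `calculate_pattern_score` compounds several bonuses and then clips back into [0.0, 1.0]. A numeric sketch of that shape — the bonus caps (1.2x, 1.2x, 1.3x, 1.15x) come from the docstring, but the exact recency/frequency curves and the choice of acceptance rate × project weight as the base score are assumptions:

```python
import math


def pattern_score(acceptance_rate: float, project_weight: float, enabled: bool,
                  days_since_seen: int, occurrences: int, benefit: float) -> float:
    """Compound the documented bonuses multiplicatively, then clip to [0, 1]."""
    if project_weight <= 0.0:  # disabled patterns get weight 0.0 and score 0.0
        return 0.0
    score = acceptance_rate * project_weight
    if enabled:
        score *= 1.2                                        # enabled-pattern bonus
    if days_since_seen <= 30:                               # recency bonus, up to 1.2x
        score *= 1.0 + 0.2 * (1 - days_since_seen / 30)
    score *= min(1.3, 1.0 + 0.1 * math.log1p(occurrences))  # frequency bonus, up to 1.3x
    score *= 1.0 + 0.15 * max(0.0, min(1.0, benefit))       # benefit bonus, up to 1.15x
    return max(0.0, min(1.0, score))                        # normalize by clipping
```

Because the bonuses compound, a frequently accepted recent pattern saturates at 1.0 after clipping, while a disabled pattern always scores 0.0.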
+ +#### PatternMatcher.__init__ + +```python +PatternMatcher.__init__(self, storage: refactron.patterns.storage.PatternStorage, cache_ttl_seconds: int = 300) +``` + +Initialize pattern matcher. + +Args: + storage: PatternStorage instance for loading patterns + cache_ttl_seconds: Cache time-to-live in seconds (default: 300 seconds / 5 minutes) + +#### PatternMatcher.calculate_pattern_score + +```python +PatternMatcher.calculate_pattern_score(self, pattern: refactron.patterns.models.RefactoringPattern, project_profile: Optional[refactron.patterns.models.ProjectPatternProfile] = None) -> float +``` + +Calculate score for pattern suggestion. + +The scoring algorithm applies multiple bonuses multiplicatively: +- Project weight: 0.0-1.0 (disabled patterns get 0.0) +- Enabled pattern bonus: 1.2x (20% bonus) +- Recency bonus: up to 1.2x (20% bonus for patterns seen in last 30 days) +- Frequency bonus: up to 1.3x (30% bonus based on log scale of occurrences) +- Benefit bonus: up to 1.15x (15% bonus based on average_benefit_score) + +These bonuses can compound to exceed 1.0, but the final score is normalized +to the range [0.0, 1.0] using min/max clipping. + +Args: + pattern: Pattern to score + project_profile: Optional project-specific profile for weighting + +Returns: + Score between 0.0 and 1.0 (higher = better suggestion) + +#### PatternMatcher.clear_cache + +```python +PatternMatcher.clear_cache(self) -> None +``` + +Clear the pattern cache (force reload on next access). + +#### PatternMatcher.find_best_matches + +```python +PatternMatcher.find_best_matches(self, code_hash: str, operation_type: Optional[str] = None, project_profile: Optional[refactron.patterns.models.ProjectPatternProfile] = None, limit: int = 10) -> List[Tuple[refactron.patterns.models.RefactoringPattern, float]] +``` + +Find best matching patterns with scores. 
+ +Args: + code_hash: Hash of the code pattern to match + operation_type: Optional operation type to filter by + project_profile: Optional project-specific profile for weighting + limit: Maximum number of results to return + +Returns: + List of tuples (pattern, score) sorted by score (highest first) + +#### PatternMatcher.find_similar_patterns + +```python +PatternMatcher.find_similar_patterns(self, code_hash: str, operation_type: Optional[str] = None, limit: Optional[int] = None) -> List[refactron.patterns.models.RefactoringPattern] +``` + +Find patterns similar to given code hash. + +Optimized with O(1) hash-based lookup instead of O(n) linear search. + +Args: + code_hash: Hash of the code pattern to match + operation_type: Optional operation type to filter by + limit: Optional maximum number of results to return + +Returns: + List of similar patterns, sorted by acceptance rate + +## Functions + + +--- + +# refactron.patterns.models + +Data models for Pattern Learning System. + +## Classes + +### PatternMetric + +```python +PatternMetric(pattern_id: str, complexity_reduction: float = 0.0, maintainability_improvement: float = 0.0, lines_of_code_change: int = 0, issue_resolution_count: int = 0, before_metrics: Dict[str, float] = , after_metrics: Dict[str, float] = , total_evaluations: int = 0) -> None +``` + +Metrics for evaluating pattern effectiveness. + +#### PatternMetric.__init__ + +```python +PatternMetric.__init__(self, pattern_id: str, complexity_reduction: float = 0.0, maintainability_improvement: float = 0.0, lines_of_code_change: int = 0, issue_resolution_count: int = 0, before_metrics: Dict[str, float] = , after_metrics: Dict[str, float] = , total_evaluations: int = 0) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### PatternMetric.to_dict + +```python +PatternMetric.to_dict(self) -> Dict[str, Any] +``` + +Convert metrics to dictionary for serialization. 
+
+#### PatternMetric.update
+
+```python
+PatternMetric.update(self, complexity_reduction: float, maintainability_improvement: float, lines_of_code_change: int, issue_resolution_count: int, before_metrics: Dict[str, float], after_metrics: Dict[str, float]) -> 'PatternMetric'
+```
+
+Update metrics with new evaluation data (in-place mutation).
+
+Returns:
+    self to enable method chaining
+
+Note:
+    This method modifies the object in-place. The return value
+    is provided to enable method chaining.
+
+### ProjectPatternProfile
+
+```python
+ProjectPatternProfile(project_id: str, project_path: pathlib.Path, enabled_patterns: Set[str] = , disabled_patterns: Set[str] = , pattern_weights: Dict[str, float] = , rule_thresholds: Dict[str, float] = , last_updated: datetime.datetime = , metadata: Dict[str, Any] = ) -> None
+```
+
+Project-specific pattern tuning and rules.
+
+#### ProjectPatternProfile.__init__
+
+```python
+ProjectPatternProfile.__init__(self, project_id: str, project_path: pathlib.Path, enabled_patterns: Set[str] = , disabled_patterns: Set[str] = , pattern_weights: Dict[str, float] = , rule_thresholds: Dict[str, float] = , last_updated: datetime.datetime = , metadata: Dict[str, Any] = ) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+#### ProjectPatternProfile.disable_pattern
+
+```python
+ProjectPatternProfile.disable_pattern(self, pattern_id: str) -> None
+```
+
+Disable a pattern for this project.
+
+#### ProjectPatternProfile.enable_pattern
+
+```python
+ProjectPatternProfile.enable_pattern(self, pattern_id: str) -> None
+```
+
+Enable a pattern for this project.
+
+#### ProjectPatternProfile.get_pattern_weight
+
+```python
+ProjectPatternProfile.get_pattern_weight(self, pattern_id: str, default: float = 1.0) -> float
+```
+
+Get weight for a pattern, returning default if not set. 
+
+#### ProjectPatternProfile.is_pattern_enabled
+
+```python
+ProjectPatternProfile.is_pattern_enabled(self, pattern_id: str) -> bool
+```
+
+Check if a pattern is enabled for this project.
+
+#### ProjectPatternProfile.set_pattern_weight
+
+```python
+ProjectPatternProfile.set_pattern_weight(self, pattern_id: str, weight: float) -> None
+```
+
+Set custom weight for a pattern.
+
+#### ProjectPatternProfile.set_rule_threshold
+
+```python
+ProjectPatternProfile.set_rule_threshold(self, rule_id: str, threshold: float) -> None
+```
+
+Set custom threshold for a rule.
+
+#### ProjectPatternProfile.to_dict
+
+```python
+ProjectPatternProfile.to_dict(self) -> Dict[str, Any]
+```
+
+Convert profile to dictionary for serialization.
+
+### RefactoringFeedback
+
+```python
+RefactoringFeedback(operation_id: str, operation_type: str, file_path: pathlib.Path, timestamp: datetime.datetime, action: str, reason: Optional[str] = None, code_pattern_hash: Optional[str] = None, project_path: Optional[pathlib.Path] = None, metadata: Dict[str, Any] = ) -> None
+```
+
+Tracks developer acceptance/rejection of refactoring suggestions.
+
+#### RefactoringFeedback.__init__
+
+```python
+RefactoringFeedback.__init__(self, operation_id: str, operation_type: str, file_path: pathlib.Path, timestamp: datetime.datetime, action: str, reason: Optional[str] = None, code_pattern_hash: Optional[str] = None, project_path: Optional[pathlib.Path] = None, metadata: Dict[str, Any] = ) -> None
+```
+
+Initialize self. See help(type(self)) for accurate signature.
+
+#### RefactoringFeedback.to_dict
+
+```python
+RefactoringFeedback.to_dict(self) -> Dict[str, Any]
+```
+
+Convert feedback to dictionary for serialization. 
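Folding a feedback record's `action` into a pattern's counters, as `RefactoringPattern.update_from_feedback` (next section) does, amounts to bookkeeping like the following. This sketch uses a plain dict rather than the library's class, and whether "ignored" feedback counts toward the acceptance rate is an assumption (here it does not):

```python
from typing import Dict


def update_from_feedback(stats: Dict[str, int], action: str) -> float:
    """Bump the counter for `action` and return the recomputed acceptance rate."""
    if action not in ("accepted", "rejected", "ignored"):
        raise ValueError(f"unknown action: {action!r}")
    stats[f"{action}_count"] += 1
    stats["total_occurrences"] += 1
    # Assumption: acceptance rate is accepted / (accepted + rejected).
    decided = stats["accepted_count"] + stats["rejected_count"]
    return stats["accepted_count"] / decided if decided else 0.0


stats = {"accepted_count": 0, "rejected_count": 0, "ignored_count": 0, "total_occurrences": 0}
for action in ["accepted", "accepted", "rejected", "ignored"]:
    rate = update_from_feedback(stats, action)
```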
+ +### RefactoringPattern + +```python +RefactoringPattern(pattern_id: str, pattern_hash: str, operation_type: str, code_snippet_before: str, code_snippet_after: str, acceptance_rate: float = 0.0, total_occurrences: int = 0, accepted_count: int = 0, rejected_count: int = 0, ignored_count: int = 0, average_benefit_score: float = 0.0, first_seen: datetime.datetime = , last_seen: datetime.datetime = , project_context: Dict[str, Any] = , metadata: Dict[str, Any] = ) -> None +``` + +Represents a learned pattern from successful refactorings. + +#### RefactoringPattern.__init__ + +```python +RefactoringPattern.__init__(self, pattern_id: str, pattern_hash: str, operation_type: str, code_snippet_before: str, code_snippet_after: str, acceptance_rate: float = 0.0, total_occurrences: int = 0, accepted_count: int = 0, rejected_count: int = 0, ignored_count: int = 0, average_benefit_score: float = 0.0, first_seen: datetime.datetime = , last_seen: datetime.datetime = , project_context: Dict[str, Any] = , metadata: Dict[str, Any] = ) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +#### RefactoringPattern.calculate_benefit_score + +```python +RefactoringPattern.calculate_benefit_score(self, metric: Optional[refactron.patterns.models.PatternMetric] = None) -> float +``` + +Calculate overall benefit score for this pattern. + +#### RefactoringPattern.to_dict + +```python +RefactoringPattern.to_dict(self) -> Dict[str, Any] +``` + +Convert pattern to dictionary for serialization. + +#### RefactoringPattern.update_from_feedback + +```python +RefactoringPattern.update_from_feedback(self, action: str) -> None +``` + +Update pattern statistics from feedback. + +## Functions + + +--- + +# refactron.patterns.ranker + +Ranking engine for refactoring suggestions based on learned patterns. 
+
+## Classes
+
+### RefactoringRanker
+
+```python
+RefactoringRanker(storage: refactron.patterns.storage.PatternStorage, matcher: refactron.patterns.matcher.PatternMatcher, fingerprinter: refactron.patterns.fingerprint.PatternFingerprinter)
+```
+
+Ranks refactoring suggestions based on learned patterns and project context.
+
+#### RefactoringRanker.__init__
+
+```python
+RefactoringRanker.__init__(self, storage: refactron.patterns.storage.PatternStorage, matcher: refactron.patterns.matcher.PatternMatcher, fingerprinter: refactron.patterns.fingerprint.PatternFingerprinter)
+```
+
+Initialize refactoring ranker.
+
+Args:
+    storage: PatternStorage instance for accessing patterns and profiles
+    matcher: PatternMatcher instance for finding similar patterns
+    fingerprinter: PatternFingerprinter instance for generating code hashes
+
+#### RefactoringRanker.get_ranked_with_scores
+
+```python
+RefactoringRanker.get_ranked_with_scores(self, operations: List[refactron.core.models.RefactoringOperation], project_path: Optional[pathlib.Path] = None) -> List[Tuple[refactron.core.models.RefactoringOperation, float, Optional[refactron.patterns.models.RefactoringPattern]]]
+```
+
+Get ranked operations with scores and matching patterns.
+
+Args:
+    operations: List of refactoring operations to rank
+    project_path: Optional project path for project-specific scoring
+
+Returns:
+    List of tuples (operation, score, best_matching_pattern)
+    sorted by score descending
+
+#### RefactoringRanker.get_top_suggestions
+
+```python
+RefactoringRanker.get_top_suggestions(self, operations: List[refactron.core.models.RefactoringOperation], project_path: Optional[pathlib.Path] = None, top_n: int = 10) -> List[refactron.core.models.RefactoringOperation]
+```
+
+Get top N ranked suggestions. 
+
+Args:
+    operations: List of refactoring operations to rank
+    project_path: Optional project path for project-specific scoring
+    top_n: Number of top suggestions to return (default: 10)
+
+Returns:
+    List of top N RefactoringOperation instances, sorted by score descending
+
+#### RefactoringRanker.rank_operations
+
+```python
+RefactoringRanker.rank_operations(self, operations: List[refactron.core.models.RefactoringOperation], project_path: Optional[pathlib.Path] = None) -> List[Tuple[refactron.core.models.RefactoringOperation, float]]
+```
+
+Rank refactoring operations by predicted value based on learned patterns.
+
+Scoring factors:
+1. Pattern acceptance rate (base score)
+2. Project-specific weights (from ProjectPatternProfile)
+3. Pattern recency (recent patterns weighted higher)
+4. Pattern frequency (more occurrences = more reliable)
+5. Metrics improvement (complexity reduction, maintainability)
+6. Risk penalty (higher risk = lower score)
+
+Args:
+    operations: List of refactoring operations to rank
+    project_path: Optional project path for project-specific scoring
+
+Returns:
+    List of tuples (operation, score) sorted by score descending
+    Score range: 0.0 (lowest priority) to 1.0 (highest priority)
+
+## Functions
+
+
+---
+
+# refactron.patterns.storage
+
+Storage management for Pattern Learning System.
+
+## Classes
+
+### PatternStorage
+
+```python
+PatternStorage(storage_dir: Optional[pathlib.Path] = None)
+```
+
+Manages persistent storage for pattern learning data.
+
+#### PatternStorage.__init__
+
+```python
+PatternStorage.__init__(self, storage_dir: Optional[pathlib.Path] = None)
+```
+
+Initialize pattern storage.
+
+Args:
+    storage_dir: Directory to store pattern data. 
If None, uses default:
+        - First checks project root (.refactron/patterns/)
+        - Falls back to ~/.refactron/patterns/
+
+#### PatternStorage.clear_cache
+
+```python
+PatternStorage.clear_cache(self) -> None
+```
+
+Clear in-memory caches (force reload from disk).
+
+#### PatternStorage.get_pattern
+
+```python
+PatternStorage.get_pattern(self, pattern_id: str) -> Optional[refactron.patterns.models.RefactoringPattern]
+```
+
+Get a specific pattern by ID.
+
+Args:
+    pattern_id: Pattern ID to retrieve
+
+Returns:
+    Pattern if found, None otherwise
+
+#### PatternStorage.get_pattern_metric
+
+```python
+PatternStorage.get_pattern_metric(self, pattern_id: str) -> Optional[refactron.patterns.models.PatternMetric]
+```
+
+Get metric for a specific pattern.
+
+Args:
+    pattern_id: Pattern ID to retrieve metric for
+
+Returns:
+    Metric if found, None otherwise
+
+#### PatternStorage.get_project_profile
+
+```python
+PatternStorage.get_project_profile(self, project_path: pathlib.Path) -> refactron.patterns.models.ProjectPatternProfile
+```
+
+Get or create project profile for a project.
+
+Args:
+    project_path: Path to project root
+
+Returns:
+    Project profile (created if doesn't exist)
+
+#### PatternStorage.load_feedback
+
+```python
+PatternStorage.load_feedback(self, pattern_id: Optional[str] = None, project_path: Optional[pathlib.Path] = None) -> List[refactron.patterns.models.RefactoringFeedback]
+```
+
+Load feedback records from storage.
+
+Note: For large feedback datasets, this filters in Python after loading
+all records. Consider implementing pagination or separate indices if
+performance becomes an issue with very large datasets. 
+ +Args: + pattern_id: Optional pattern ID to filter by + project_path: Optional project path to filter by + +Returns: + List of feedback records matching filters + +#### PatternStorage.load_pattern_metrics + +```python +PatternStorage.load_pattern_metrics(self) -> Dict[str, refactron.patterns.models.PatternMetric] +``` + +Load all pattern metrics from storage. + +Returns: + Dictionary mapping pattern_id to PatternMetric + Note: Returns a copy to prevent external modifications to cache. + +#### PatternStorage.load_patterns + +```python +PatternStorage.load_patterns(self) -> Dict[str, refactron.patterns.models.RefactoringPattern] +``` + +Load all patterns from storage. + +Returns: + Dictionary mapping pattern_id to RefactoringPattern + Note: Returns a copy to prevent external modifications to cache. + +#### PatternStorage.load_project_profiles + +```python +PatternStorage.load_project_profiles(self) -> Dict[str, refactron.patterns.models.ProjectPatternProfile] +``` + +Load all project profiles from storage. + +Returns: + Dictionary mapping project_id to ProjectPatternProfile + Note: Returns a copy to prevent external modifications to cache. + +#### PatternStorage.replace_patterns + +```python +PatternStorage.replace_patterns(self, patterns: Dict[str, refactron.patterns.models.RefactoringPattern]) -> None +``` + +Replace all patterns in storage with the provided dictionary. + +This method completely replaces the pattern storage, useful for cleanup +operations where patterns need to be removed. + +Args: + patterns: Dictionary mapping pattern_id to RefactoringPattern + +#### PatternStorage.save_feedback + +```python +PatternStorage.save_feedback(self, feedback: refactron.patterns.models.RefactoringFeedback) -> None +``` + +Save feedback record to storage. 
+ +Args: + feedback: Feedback record to save + +#### PatternStorage.save_pattern + +```python +PatternStorage.save_pattern(self, pattern: refactron.patterns.models.RefactoringPattern) -> None +``` + +Save pattern to storage. + +Args: + pattern: Pattern to save + +#### PatternStorage.save_pattern_metric + +```python +PatternStorage.save_pattern_metric(self, metric: refactron.patterns.models.PatternMetric) -> None +``` + +Save pattern metric to storage. + +Args: + metric: Metric to save + +#### PatternStorage.save_project_profile + +```python +PatternStorage.save_project_profile(self, profile: refactron.patterns.models.ProjectPatternProfile) -> None +``` + +Save project profile to storage. + +Args: + profile: Profile to save + +#### PatternStorage.update_pattern_stats + +```python +PatternStorage.update_pattern_stats(self, pattern_id: str, action: str) -> None +``` + +Update pattern statistics from feedback. + +Args: + pattern_id: Pattern ID to update + action: Action taken ("accepted", "rejected", "ignored") + +## Functions + + +--- + +# refactron.patterns.tuner + +Project-specific rule tuning based on pattern learning history. + +## Classes + +### PatternStats + +```python +PatternStats(pattern_id: 'str', pattern_hash: 'str', operation_type: 'str', accepted_count: 'int' = 0, rejected_count: 'int' = 0, ignored_count: 'int' = 0) -> None +``` + +Aggregated statistics for a pattern within a specific project. + +#### PatternStats.__init__ + +```python +PatternStats.__init__(self, pattern_id: 'str', pattern_hash: 'str', operation_type: 'str', accepted_count: 'int' = 0, rejected_count: 'int' = 0, ignored_count: 'int' = 0) -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +### RuleTuner + +```python +RuleTuner(storage: 'PatternStorage') -> 'None' +``` + +Tunes rules based on project-specific pattern history. + +#### RuleTuner.__init__ + +```python +RuleTuner.__init__(self, storage: 'PatternStorage') -> 'None' +``` + +Initialize self. 
See help(type(self)) for accurate signature. + +#### RuleTuner.analyze_project_patterns + +```python +RuleTuner.analyze_project_patterns(self, project_path: 'Path') -> 'Dict[str, Any]' +``` + +Analyze patterns for a specific project. + +Returns a dictionary with: +- project_id +- project_path +- patterns: list of per-pattern statistics combining project and global data + +#### RuleTuner.apply_tuning + +```python +RuleTuner.apply_tuning(self, project_path: 'Path', recommendations: 'Dict[str, Any]') -> 'ProjectPatternProfile' +``` + +Apply tuning recommendations to project profile. + +recommendations is expected to have keys: +- "to_disable": List[str] of pattern_ids to disable +- "to_enable": List[str] of pattern_ids to enable +- "weights": Dict[str, float] of pattern_id -> weight + +#### RuleTuner.generate_recommendations + +```python +RuleTuner.generate_recommendations(self, project_path: 'Path') -> 'Dict[str, Any]' +``` + +Generate rule tuning recommendations for a project. + +Heuristics: +- Disable patterns with sufficient feedback and low acceptance. +- Enable patterns with high acceptance. +- Adjust pattern weights based on project acceptance. + +## Functions + diff --git a/documentation/docs/api/rag.md b/documentation/docs/api/rag.md new file mode 100644 index 0000000..930892e --- /dev/null +++ b/documentation/docs/api/rag.md @@ -0,0 +1,351 @@ +# refactron.rag + +RAG (Retrieval-Augmented Generation) infrastructure for code indexing and retrieval. + +## Classes + +## Functions + + +--- + +# refactron.rag.chunker + +Code chunking strategies for RAG indexing. + +## Classes + +### CodeChunk + +```python +CodeChunk(content: 'str', chunk_type: 'str', file_path: 'str', line_range: 'Tuple[int, int]', name: 'str', dependencies: 'List[str]', metadata: 'Dict[str, Any]') -> None +``` + +Represents a semantic chunk of code. 
+ +#### CodeChunk.__init__ + +```python +CodeChunk.__init__(self, content: 'str', chunk_type: 'str', file_path: 'str', line_range: 'Tuple[int, int]', name: 'str', dependencies: 'List[str]', metadata: 'Dict[str, Any]') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +### CodeChunker + +```python +CodeChunker(parser: 'CodeParser') +``` + +Chunks parsed code into semantic units for embedding. + +#### CodeChunker.__init__ + +```python +CodeChunker.__init__(self, parser: 'CodeParser') +``` + +Initialize the chunker. + +Args: + parser: CodeParser instance for parsing files + +#### CodeChunker.chunk_file + +```python +CodeChunker.chunk_file(self, file_path: 'Path') -> 'List[CodeChunk]' +``` + +Chunk a file into semantic units. + +Args: + file_path: Path to the Python file + +Returns: + List of code chunks + +## Functions + + +--- + +# refactron.rag.indexer + +Vector index management using ChromaDB. + +## Classes + +### IndexStats + +```python +IndexStats(total_chunks: 'int', total_files: 'int', chunk_types: 'dict', embedding_model: 'str', index_path: 'str') -> None +``` + +Statistics about the RAG index. + +#### IndexStats.__init__ + +```python +IndexStats.__init__(self, total_chunks: 'int', total_files: 'int', chunk_types: 'dict', embedding_model: 'str', index_path: 'str') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +### RAGIndexer + +```python +RAGIndexer(workspace_path: 'Path', embedding_model: 'str' = 'all-MiniLM-L6-v2', collection_name: 'str' = 'code_chunks', llm_client: 'Optional[GroqClient]' = None) +``` + +Manages code indexing for RAG retrieval. + +#### RAGIndexer.__init__ + +```python +RAGIndexer.__init__(self, workspace_path: 'Path', embedding_model: 'str' = 'all-MiniLM-L6-v2', collection_name: 'str' = 'code_chunks', llm_client: 'Optional[GroqClient]' = None) +``` + +Initialize the RAG indexer. 
+ +Args: + workspace_path: Path to the workspace directory + embedding_model: Name of the sentence-transformers model + collection_name: Name of the ChromaDB collection + llm_client: Optional LLM client for code summarization + +#### RAGIndexer.add_chunks + +```python +RAGIndexer.add_chunks(self, chunks: 'List[CodeChunk]') -> 'None' +``` + +Add code chunks to the vector index. + +Args: + chunks: List of code chunks to add + +#### RAGIndexer.get_stats + +```python +RAGIndexer.get_stats(self) -> 'IndexStats' +``` + +Get statistics about the current index. + +Returns: + Index statistics + +#### RAGIndexer.index_repository + +```python +RAGIndexer.index_repository(self, repo_path: 'Optional[Path]' = None, summarize: 'bool' = False) -> 'IndexStats' +``` + +Index an entire repository. + +Args: + repo_path: Path to repository (defaults to workspace_path) + summarize: Whether to use AI to summarize code for better retrieval + +Returns: + Statistics about the indexed content + +## Functions + + +--- + +# refactron.rag.parser + +Code parser using tree-sitter for AST-aware code analysis. + +## Classes + +### CodeParser + +```python +CodeParser() +``` + +AST-aware code parser using tree-sitter. + +#### CodeParser.__init__ + +```python +CodeParser.__init__(self) +``` + +Initialize the parser. + +#### CodeParser.parse_file + +```python +CodeParser.parse_file(self, file_path: 'Path') -> 'ParsedFile' +``` + +Parse a Python file. + +Args: + file_path: Path to the Python file + +Returns: + ParsedFile object containing all parsed elements + +### ParsedClass + +```python +ParsedClass(name: 'str', body: 'str', docstring: 'Optional[str]', line_range: 'Tuple[int, int]', methods: 'List[ParsedFunction]') -> None +``` + +Represents a parsed class. + +#### ParsedClass.__init__ + +```python +ParsedClass.__init__(self, name: 'str', body: 'str', docstring: 'Optional[str]', line_range: 'Tuple[int, int]', methods: 'List[ParsedFunction]') -> None +``` + +Initialize self. 
See help(type(self)) for accurate signature. + +### ParsedFile + +```python +ParsedFile(file_path: 'str', imports: 'List[str]', functions: 'List[ParsedFunction]', classes: 'List[ParsedClass]', module_docstring: 'Optional[str]') -> None +``` + +Represents a parsed Python file. + +#### ParsedFile.__init__ + +```python +ParsedFile.__init__(self, file_path: 'str', imports: 'List[str]', functions: 'List[ParsedFunction]', classes: 'List[ParsedClass]', module_docstring: 'Optional[str]') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +### ParsedFunction + +```python +ParsedFunction(name: 'str', body: 'str', docstring: 'Optional[str]', line_range: 'Tuple[int, int]', params: 'List[str]') -> None +``` + +Represents a parsed function. + +#### ParsedFunction.__init__ + +```python +ParsedFunction.__init__(self, name: 'str', body: 'str', docstring: 'Optional[str]', line_range: 'Tuple[int, int]', params: 'List[str]') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +## Functions + + +--- + +# refactron.rag.retriever + +Context retrieval from the RAG index. + +## Classes + +### ContextRetriever + +```python +ContextRetriever(workspace_path: 'Path', embedding_model: 'str' = 'all-MiniLM-L6-v2', collection_name: 'str' = 'code_chunks') +``` + +Retrieves relevant code context from the RAG index. + +#### ContextRetriever.__init__ + +```python +ContextRetriever.__init__(self, workspace_path: 'Path', embedding_model: 'str' = 'all-MiniLM-L6-v2', collection_name: 'str' = 'code_chunks') +``` + +Initialize the context retriever. + +Args: + workspace_path: Path to the workspace directory + embedding_model: Name of the sentence-transformers model + collection_name: Name of the ChromaDB collection + +#### ContextRetriever.retrieve_by_file + +```python +ContextRetriever.retrieve_by_file(self, file_path: 'str') -> 'List[RetrievedContext]' +``` + +Retrieve all chunks from a specific file. 
+ +Args: + file_path: Path to the file + +Returns: + List of all chunks from the file + +#### ContextRetriever.retrieve_classes + +```python +ContextRetriever.retrieve_classes(self, query: 'str', top_k: 'int' = 5) -> 'List[RetrievedContext]' +``` + +Retrieve similar classes. + +Args: + query: The search query + top_k: Number of results to return + +Returns: + List of similar class chunks + +#### ContextRetriever.retrieve_functions + +```python +ContextRetriever.retrieve_functions(self, query: 'str', top_k: 'int' = 5) -> 'List[RetrievedContext]' +``` + +Retrieve similar functions. + +Args: + query: The search query + top_k: Number of results to return + +Returns: + List of similar function chunks + +#### ContextRetriever.retrieve_similar + +```python +ContextRetriever.retrieve_similar(self, query: 'str', top_k: 'int' = 5, chunk_type: 'Optional[str]' = None) -> 'List[RetrievedContext]' +``` + +Retrieve similar code chunks. + +Args: + query: The search query + top_k: Number of results to return + chunk_type: Optional filter by chunk type (function/class/module) + +Returns: + List of retrieved contexts sorted by relevance + +### RetrievedContext + +```python +RetrievedContext(content: 'str', file_path: 'str', chunk_type: 'str', name: 'str', line_range: 'tuple', distance: 'float', metadata: 'dict') -> None +``` + +Represents a retrieved code context. + +#### RetrievedContext.__init__ + +```python +RetrievedContext.__init__(self, content: 'str', file_path: 'str', chunk_type: 'str', name: 'str', line_range: 'tuple', distance: 'float', metadata: 'dict') -> None +``` + +Initialize self. See help(type(self)) for accurate signature. + +## Functions + diff --git a/documentation/docs/api/refactorers.md b/documentation/docs/api/refactorers.md new file mode 100644 index 0000000..696ca5d --- /dev/null +++ b/documentation/docs/api/refactorers.md @@ -0,0 +1,278 @@ +# refactron.refactorers + +Refactorers for automated code transformations. 
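
Every refactorer documented below follows the same contract: construct it with a `RefactronConfig`, call `refactor(file_path, source_code)`, and receive a list of suggested operations. A minimal, self-contained sketch of that shape (the `Operation` dataclass and the docstring check are illustrative stand-ins, not Refactron's implementation):

```python
import ast
from dataclasses import dataclass
from pathlib import Path
from typing import List


@dataclass
class Operation:
    """Illustrative stand-in for refactron.core.models.RefactoringOperation."""
    kind: str
    line: int
    message: str


class DocstringCheck:
    """Toy refactorer mirroring the refactor(file_path, source_code) contract."""

    def refactor(self, file_path: Path, source_code: str) -> List[Operation]:
        ops = []
        for node in ast.walk(ast.parse(source_code)):
            # Flag any function or class defined without a docstring.
            if isinstance(node, (ast.FunctionDef, ast.ClassDef)) and ast.get_docstring(node) is None:
                ops.append(Operation("add_docstring", node.lineno, f"{node.name} lacks a docstring"))
        return ops


ops = DocstringCheck().refactor(Path("example.py"), "def f(x):\n    return x * 2\n")
print([op.message for op in ops])  # ['f lacks a docstring']
```

Real refactorers return `RefactoringOperation` objects and read their thresholds from the `RefactronConfig` passed to `__init__`.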

## Classes

## Functions

---

# refactron.refactorers.add_docstring_refactorer

Refactorer for adding docstrings to functions and classes.

## Classes

### AddDocstringRefactorer

```python
AddDocstringRefactorer(config: refactron.core.config.RefactronConfig)
```

Suggests adding docstrings to undocumented functions and classes.

#### AddDocstringRefactorer.__init__

```python
AddDocstringRefactorer.__init__(self, config: refactron.core.config.RefactronConfig)
```

Initialize the refactorer.

Args:
    config: Refactron configuration

#### AddDocstringRefactorer.refactor

```python
AddDocstringRefactorer.refactor(self, file_path: pathlib.Path, source_code: str) -> List[refactron.core.models.RefactoringOperation]
```

Find functions/classes without docstrings and suggest adding them.

Args:
    file_path: Path to the file
    source_code: Source code content

Returns:
    List of add docstring operations

## Functions

---

# refactron.refactorers.base_refactorer

Base refactorer class.

## Classes

### BaseRefactorer

```python
BaseRefactorer(config: refactron.core.config.RefactronConfig)
```

Base class for all refactorers.

#### BaseRefactorer.__init__

```python
BaseRefactorer.__init__(self, config: refactron.core.config.RefactronConfig)
```

Initialize the refactorer.

Args:
    config: Refactron configuration

#### BaseRefactorer.refactor

```python
BaseRefactorer.refactor(self, file_path: pathlib.Path, source_code: str) -> List[refactron.core.models.RefactoringOperation]
```

Analyze source code and return refactoring operations.

Args:
    file_path: Path to the file being refactored
    source_code: Source code content

Returns:
    List of refactoring operations

## Functions

---

# refactron.refactorers.extract_method_refactorer

Refactorer for extracting methods from complex functions.
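
`ExtractMethodRefactorer` flags overly complex functions as extraction candidates. A rough, stdlib-only illustration of one way such complexity can be screened (the branch-counting heuristic is an assumption for illustration, not Refactron's actual metric):

```python
import ast


def branch_count(fn: ast.FunctionDef) -> int:
    """Count branching statements inside a function body (a rough complexity proxy)."""
    return sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try)) for n in ast.walk(fn))


src = """
def busy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
fn = ast.parse(src).body[0]
print(branch_count(fn))  # 3: the outer if, the for loop, the nested if
```

A function scoring above a configured threshold would be reported with a suggestion to extract the nested block into its own method.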

## Classes

### ExtractMethodRefactorer

```python
ExtractMethodRefactorer(config: refactron.core.config.RefactronConfig)
```

Suggests extracting methods from overly complex functions.

#### ExtractMethodRefactorer.__init__

```python
ExtractMethodRefactorer.__init__(self, config: refactron.core.config.RefactronConfig)
```

Initialize the refactorer.

Args:
    config: Refactron configuration

#### ExtractMethodRefactorer.refactor

```python
ExtractMethodRefactorer.refactor(self, file_path: pathlib.Path, source_code: str) -> List[refactron.core.models.RefactoringOperation]
```

Find opportunities to extract methods.

Args:
    file_path: Path to the file
    source_code: Source code content

Returns:
    List of extract method operations

## Functions

---

# refactron.refactorers.magic_number_refactorer

Refactorer for extracting magic numbers into named constants.

## Classes

### MagicNumberRefactorer

```python
MagicNumberRefactorer(config: refactron.core.config.RefactronConfig)
```

Suggests extracting magic numbers into named constants.

#### MagicNumberRefactorer.__init__

```python
MagicNumberRefactorer.__init__(self, config: refactron.core.config.RefactronConfig)
```

Initialize the refactorer.

Args:
    config: Refactron configuration

#### MagicNumberRefactorer.refactor

```python
MagicNumberRefactorer.refactor(self, file_path: pathlib.Path, source_code: str) -> List[refactron.core.models.RefactoringOperation]
```

Find magic numbers and suggest extracting them to constants.

Args:
    file_path: Path to the file
    source_code: Source code content

Returns:
    List of extract constant operations

## Functions

---

# refactron.refactorers.reduce_parameters_refactorer

Refactorer for reducing function parameters using configuration objects.
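
`ReduceParametersRefactorer` looks for functions whose parameter lists have grown too long. A self-contained sketch of that kind of check (the threshold of 5 is illustrative; Refactron takes its real limits from `RefactronConfig`):

```python
import ast

MAX_PARAMS = 5  # illustrative threshold, not Refactron's default


def long_signatures(source: str) -> list:
    """Return the names of functions taking more than MAX_PARAMS positional parameters."""
    return [node.name for node in ast.walk(ast.parse(source))
            if isinstance(node, ast.FunctionDef) and len(node.args.args) > MAX_PARAMS]


src = "def render(x, y, w, h, color, border, shadow):\n    pass\n"
print(long_signatures(src))  # ['render']: a candidate for a config object
```

For a flagged function, the suggested operation is to group the parameters into a single configuration object.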

## Classes

### ReduceParametersRefactorer

```python
ReduceParametersRefactorer(config: refactron.core.config.RefactronConfig)
```

Suggests using configuration objects for functions with many parameters.

#### ReduceParametersRefactorer.__init__

```python
ReduceParametersRefactorer.__init__(self, config: refactron.core.config.RefactronConfig)
```

Initialize the refactorer.

Args:
    config: Refactron configuration

#### ReduceParametersRefactorer.refactor

```python
ReduceParametersRefactorer.refactor(self, file_path: pathlib.Path, source_code: str) -> List[refactron.core.models.RefactoringOperation]
```

Find functions with too many parameters and suggest config objects.

Args:
    file_path: Path to the file
    source_code: Source code content

Returns:
    List of parameter reduction operations

## Functions

---

# refactron.refactorers.simplify_conditionals_refactorer

Refactorer for simplifying complex conditional statements.

## Classes

### SimplifyConditionalsRefactorer

```python
SimplifyConditionalsRefactorer(config: refactron.core.config.RefactronConfig)
```

Suggests simplifying deeply nested conditionals.

#### SimplifyConditionalsRefactorer.__init__

```python
SimplifyConditionalsRefactorer.__init__(self, config: refactron.core.config.RefactronConfig)
```

Initialize the refactorer.

Args:
    config: Refactron configuration

#### SimplifyConditionalsRefactorer.refactor

```python
SimplifyConditionalsRefactorer.refactor(self, file_path: pathlib.Path, source_code: str) -> List[refactron.core.models.RefactoringOperation]
```

Find deeply nested conditionals and suggest simplifications.
+ +Args: + file_path: Path to the file + source_code: Source code content + +Returns: + List of simplification operations + +## Functions + diff --git a/documentation/docs/images/Refactron-logo-TM.png b/documentation/docs/images/Refactron-logo-TM.png new file mode 100644 index 0000000..94adc70 Binary files /dev/null and b/documentation/docs/images/Refactron-logo-TM.png differ diff --git a/documentation/docs/images/Screenshot 2026-01-29 at 17.45.00.png b/documentation/docs/images/Screenshot 2026-01-29 at 17.45.00.png new file mode 100644 index 0000000..d6252b5 Binary files /dev/null and b/documentation/docs/images/Screenshot 2026-01-29 at 17.45.00.png differ diff --git a/documentation/docs/images/logo.png b/documentation/docs/images/logo.png new file mode 100644 index 0000000..d89f4cc Binary files /dev/null and b/documentation/docs/images/logo.png differ diff --git a/docs/v1.0.13_RELEASE_GUIDE.md b/documentation/docs/v1.0.13_RELEASE_GUIDE.md similarity index 100% rename from docs/v1.0.13_RELEASE_GUIDE.md rename to documentation/docs/v1.0.13_RELEASE_GUIDE.md diff --git a/documentation/docs/v1.0.15_DETAILED_CHANGELOG.md b/documentation/docs/v1.0.15_DETAILED_CHANGELOG.md new file mode 100644 index 0000000..81d75c2 --- /dev/null +++ b/documentation/docs/v1.0.15_DETAILED_CHANGELOG.md @@ -0,0 +1,76 @@ +# Refactron v1.0.15 - Detailed Changelog & Architecture Update + +## 🚀 Major Architectural Change: CLI Modularization + +Refactron v1.0.15 introduces a significant architectural improvement to the Command-Line Interface (CLI). The previously monolithic `cli.py` (over 3,500 lines) has been refactored into a modular package structure `refactron.cli`. + +### Motivation +- **Maintainability**: The single file had become unwieldy, making navigation and updates difficult. +- **Scalability**: As new commands (RAG, Repo, AI) were added, the file size exploded. +- **Separation of Concerns**: UI logic, authentication, and business logic were intertwined. 

### New CLI Structure (`refactron/cli/`)
The CLI is now organized into logical modules based on functionality:

| Module | Description | Commands |
|--------|-------------|----------|
| `main.py` | Entry point, global options, and subcommand registration. | `main` (group) |
| `auth.py` | Authentication and telemetry management. | `login`, `logout`, `auth`, `telemetry` |
| `analysis.py` | Core code analysis and reporting. | `analyze`, `report`, `metrics`, `serve-metrics`, `suggest` |
| `refactor.py` | Refactoring operations and automation. | `refactor`, `autofix`, `rollback`, `document` |
| `patterns.py` | Pattern learning and rule tuning. | `patterns` (group) |
| `repo.py` | Repository and workspace management. | `repo` (group) |
| `rag.py` | RAG (Retrieval-Augmented Generation) index management. | `rag` (group) |
| `cicd.py` | CI/CD template generation and feedback. | `generate-cicd`, `feedback`, `init` |
| `ui.py` | Shared Rich-based UI components (Theme, Console, Banners). | N/A (Internal) |
| `utils.py` | Shared utilities (Logging, Config loading, Validation). | N/A (Internal) |

### Backward Compatibility
- The entry point `refactron.cli:main` remains valid: the package's `__init__.py` exposes `main`, so existing `pyproject.toml` entries continue to work.
- All command arguments and flags remain unchanged for end users.

---

## ✨ New Features (v1.0.15)

### 🤖 Semantic Code Intelligence (AI & RAG)
Refactron now leverages LLMs (Llama 3 via Groq) and RAG (ChromaDB) to understand your code's intent.

- **`refactron suggest [path]`**: Generates context-aware refactoring proposals. It analyzes the code using RAG to understand dependencies and proposes safe, idiomatic improvements.
- **`refactron document [path]`**: Automatically generates comprehensive Google-style docstrings for functions and classes, inferring context from usage.

- **Background Indexing**: Connecting a repository automatically triggers a background process to index your codebase for RAG retrieval.

### 📂 Repository & Workspace Management
Bridge the gap between local development and cloud capabilities.

- **`refactron repo connect <url>`**: Connects a GitHub repository. If the repo is not found locally, it clones it to `~/.refactron/workspaces/`.
- **`refactron repo list`**: Displays all connected repositories and their status.
- **Workspaces**: Manages mappings between remote repositories and local directories.

### 📊 Observability & Metrics
Deep insights into Refactron's performance and usage.

- **`refactron metrics`**: Shows detailed statistics:
  - Analysis time per file.
  - Success/Failure rates.
  - Hit counts for specific analyzers and refactorers.
- **`refactron serve-metrics`**: Starts a Prometheus-compatible HTTP server (default: port 9090) to export metrics to your monitoring stack.
- **Telemetry**: Optional, anonymous usage tracking to help improve the tool (manage with `refactron telemetry`).

### 🚀 CI/CD Integration
- **`refactron generate-cicd`**: Instantly generate workflow configuration files.
  - Supports: **GitHub Actions**, **GitLab CI**, and **Pre-commit** hooks.
  - Configurable failure thresholds (fail on critical issues, etc.).

### 🛠️ Improved Developer Experience
- **Interactive Dashboard**: Running `refactron` without arguments now launches an interactive dashboard.
- **Interactive File Selector**: When running commands like `analyze` without a target, an interactive file picker helps you select files from your workspace.
- **Styled Output**: Enhanced Rich-based output with a premium theme (Indigo/Mint/Amber palette).

---

## 🔒 Security & Reliability

- **Security Fix**: Resolved a potential URL injection vulnerability in workspace management by implementing strict URL parsing and validation.

- **Compatibility**: Restored full Python 3.8 compatibility for RAG parsing libraries.
- **Dependencies**: Added `groq` and `chromadb` as core dependencies for AI features.
diff --git a/documentation/docs/v1.0.15_RELEASE_NOTES.md b/documentation/docs/v1.0.15_RELEASE_NOTES.md
new file mode 100644
index 0000000..3f9528b
--- /dev/null
+++ b/documentation/docs/v1.0.15_RELEASE_NOTES.md
@@ -0,0 +1,47 @@
# Refactron v1.0.15 Release Notes

## Overview
Refactron v1.0.15 is a major release introducing semantic code intelligence through **LLM Orchestration** and **Retrieval-Augmented Generation (RAG)**. This version enables Refactron to understand the intent of your code and provide highly accurate, context-aware suggestions and documentation.

## New Features

### 📂 Repository Management (`refactron repo`)
Bridge the gap between your local code and the cloud.
- **`refactron repo connect <url>`**: Links a GitHub repository to a local directory or auto-clones it.
- **Auto-Indexing**: Connecting a repo triggers a background RAG indexing process so Refactron is ready for AI suggestions immediately.
- **Workspaces**: Manages local mappings in `~/.refactron/workspaces/`.

### 🤖 AI-Powered Commands
- **`refactron suggest [path]`**: Contextual refactoring proposals that understand your project's logic.
- **`refactron document [path]`**: Automated Google-style docstrings for your entire codebase.
- **Safety Gate**: Built-in syntax and safety verification for all AI-generated code.
- **Feedback Loop**: Use `refactron feedback` to rate AI suggestions and improve pattern accuracy.

### 📊 Observability & Metrics
- **`refactron metrics`**: Real-time statistics on analysis performance and analyzer "hits".
- **`refactron serve-metrics`**: Spin up a Prometheus-compatible metrics server.
- **Telemetry**: Opt-in anonymous usage data helps us improve Refactron for everyone.
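
The metrics server speaks the plain-text Prometheus exposition format, so its output can be consumed without a full monitoring stack. A minimal parser for the simple `name value` case (the metric names in the sample are invented for illustration, not Refactron's actual schema):

```python
def parse_metrics(text: str) -> dict:
    """Parse simple Prometheus exposition lines ('name value'); '#' lines are comments."""
    metrics = {}
    for line in text.splitlines():
        if line and not line.startswith("#"):
            name, value = line.rsplit(" ", 1)
            metrics[name] = float(value)
    return metrics


# Sample payload in the shape a /metrics endpoint returns (metric names are invented).
sample = """\
# HELP refactron_files_analyzed_total Files analyzed
# TYPE refactron_files_analyzed_total counter
refactron_files_analyzed_total 42
refactron_analysis_seconds_sum 3.5
"""
print(parse_metrics(sample))  # {'refactron_files_analyzed_total': 42.0, 'refactron_analysis_seconds_sum': 3.5}
```

In practice you would point a Prometheus scrape job at the serve-metrics port rather than parse the text yourself.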

### 🚀 CI/CD Integration
- **`refactron ci <platform>`**: Instantly generate workflow files for GitHub Actions, GitLab CI, or pre-commit hooks.
- **Automated Gates**: Configure builds to fail on critical code debt or security issues.

### ✨ Improved CLI Experience
- **Stylized Dashboard**: A new interactive startup screen.
- **Interactive Selectors**: Navigate and select files/folders directly within the CLI.

## Security Fixes
- **Critical Fix**: Resolved an incomplete URL sanitization vulnerability in workspace management. All URL hostname checks now use robust parsing to prevent injection attacks.

## Compatibility Improvements
- **Python 3.8 Support**: Restored full compatibility for older Python 3 environments, particularly for RAG parsing and fingerprinting.
- **Tree-sitter API**: Adaptive layer for stability across different Tree-sitter versions.

---

**Get Started with v1.0.15:**
```bash
pip install refactron --upgrade
refactron rag index
refactron suggest myfile.py
```