47 changes: 47 additions & 0 deletions .pf
@@ -0,0 +1,47 @@
# Python CDP Library Tasks
# Simple wrappers around existing Makefile targets
# Following pf simplicity rules: just calls to existing Makefile targets

# Default task - runs the complete build pipeline
default:
poetry run make default

# Code generation tasks
generate:
poetry run make generate

# Type checking tasks
typecheck:
poetry run make mypy-cdp mypy-generate

# Testing tasks
test:
poetry run make test-cdp test-generate test-import

# Individual test components
test-cdp:
poetry run make test-cdp

test-generate:
poetry run make test-generate

test-import:
poetry run make test-import

# Documentation
docs:
poetry run make docs

# Development workflow - complete validation
validate:
poetry run make default

# Rebuild everything: regenerate, type check, and test
rebuild:
poetry run make generate
poetry run make mypy-cdp mypy-generate
poetry run make test-cdp test-generate test-import

# Quick check - just run tests on existing code
check:
poetry run make test-cdp test-import
25 changes: 25 additions & 0 deletions PROJECT.txt
@@ -0,0 +1,25 @@
Python Chrome DevTools Protocol (CDP) Library

This is a Python library that provides type wrappers for the Chrome DevTools Protocol.
The project generates Python bindings from the official CDP JSON specifications.

Project Type: Python Library
Build System: Poetry + Makefile
Primary Purpose: Provide typed Python interfaces for Chrome DevTools Protocol

Key Components:
- cdp/ - Generated Python modules for each CDP domain
- generator/ - Code generation scripts that create the CDP bindings (see the sketch after this list)
- docs/ - Sphinx documentation
- test/ - Test suites for both generated code and generator
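
As an illustration, the generator turns a CDP type such as Accessibility.AXNode
into a typed Python dataclass along the following lines (a sketch only; the
actual generated code differs in detail):

  # Sketch of a generated wrapper; illustrative, not the exact output.
  from dataclasses import dataclass
  from typing import Any, Dict

  @dataclass
  class AXNode:
      # Spec field names are converted from camelCase to snake_case.
      node_id: str
      ignored: bool

      @classmethod
      def from_json(cls, json: Dict[str, Any]) -> 'AXNode':
          return cls(node_id=str(json['nodeId']), ignored=bool(json['ignored']))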

Build Workflow:
1. Generate CDP bindings from JSON specs (make generate)
2. Run type checking (make mypy-cdp, make mypy-generate)
3. Run tests (make test-cdp, make test-generate)
4. Test imports (make test-import; see the example below)
5. Build documentation (make docs)
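
For example, the import check in step 4 can be reproduced directly (a minimal
sketch; it assumes the generated cdp package is importable):

  import pkgutil

  import cdp

  # Each CDP domain becomes one generated module under cdp/.
  print(cdp.accessibility)
  for module in pkgutil.iter_modules(cdp.__path__):
      print(module.name)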

This project follows standard Python library patterns and uses Poetry for dependency
management. The pf files in this repository provide simple wrappers around the
existing Makefile targets for organizational consistency.
43 changes: 43 additions & 0 deletions WARP.md
@@ -0,0 +1,43 @@
# WARP Context for Python CDP Library

## Project Overview
This repository contains a Python library that provides type wrappers for the Chrome DevTools Protocol (CDP).
It is a code-generation project that creates Python bindings from the official CDP specifications.

## WARP Usage Context
When using this project through WARP:

### Primary Use Cases
- Generating updated CDP bindings when Chrome DevTools Protocol changes
- Running comprehensive tests on generated code
- Building documentation for the CDP Python API
- Type checking the generated Python modules

### Performance Metrics
- **Code Generation Speed**: Time to generate all CDP modules from JSON specs
- **Test Coverage**: Percentage of generated code covered by tests
- **Type Safety**: MyPy validation of generated type annotations
- **Import Performance**: Time to import generated modules

### Build Automation
The project uses a hybrid approach:
- **Primary**: Poetry + Makefile (standard Python toolchain)
- **Secondary**: pf tasks (organizational consistency wrappers)

### Key Performance Indicators
- Generation time for ~50 CDP domains
- Memory usage during code generation
- Test execution time across all modules
- Documentation build time

### Development Workflow
1. Update CDP JSON specifications (browser_protocol.json, js_protocol.json)
2. Run code generation (pf generate)
3. Validate with type checking (pf typecheck)
4. Run comprehensive tests (pf test)
5. Build and verify documentation (pf docs)

### Automation Notes
This project is suitable for automated builds and can be integrated into
larger CDP-dependent projects. The pf tasks provide simple, reliable
entry points for automation systems.
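
A minimal driver for such an automation system might look like the following
(a sketch, assuming the pf CLI is available on PATH; otherwise the equivalent
make targets can be substituted):

```python
# Hypothetical automation driver: run the pf pipeline and stop on failure.
# Assumes the pf CLI is installed; swap in the equivalent make targets if not.
import subprocess
import sys

PIPELINE = ["generate", "typecheck", "test", "docs"]

for task in PIPELINE:
    print(f"Running pf {task} ...")
    result = subprocess.run(["pf", task])
    if result.returncode != 0:
        sys.exit(f"Task {task!r} failed with exit code {result.returncode}")

print("Pipeline completed successfully.")
```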
9 changes: 9 additions & 0 deletions basic_check.sh
@@ -0,0 +1,9 @@
#!/bin/bash
# Basic environment sanity check: report interpreter info, list the
# workspace, and run the simple import smoke test.
cd /workspace
echo "Current directory: $(pwd)"
echo "Python version: $(python3 --version)"
echo "Files in workspace:"
ls -la
echo
echo "Testing basic Python import:"
python3 simple_test.py
3 changes: 3 additions & 0 deletions check_poetry.sh
@@ -0,0 +1,3 @@
#!/bin/bash
# Simple test to check poetry availability
poetry --version
179 changes: 179 additions & 0 deletions comprehensive_pf_test.py
@@ -0,0 +1,179 @@
#!/usr/bin/env python3

import os
import sys
import subprocess
import json
from datetime import datetime

class PFTaskTester:
def __init__(self):
self.results = {}
self.workspace = '/workspace'

def run_command(self, cmd, timeout=60):
"""Run a command and return (success, stdout, stderr)"""
try:
os.chdir(self.workspace)
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
timeout=timeout
)
return result.returncode == 0, result.stdout, result.stderr
except subprocess.TimeoutExpired:
return False, "", "Command timed out"
except Exception as e:
return False, "", str(e)

def test_task(self, task_name, command, description=""):
"""Test a single pf task"""
print(f"\n{'='*50}")
print(f"Testing: {task_name}")
print(f"Command: {command}")
if description:
print(f"Description: {description}")
print('='*50)

success, stdout, stderr = self.run_command(command)

self.results[task_name] = {
'command': command,
'success': success,
'stdout': stdout[:500] if stdout else "",
'stderr': stderr[:500] if stderr else "",
'description': description
}

if success:
print(f"✓ {task_name}: PASSED")
if stdout:
print(f"Output: {stdout[:200]}...")
else:
print(f"✗ {task_name}: FAILED")
if stderr:
print(f"Error: {stderr[:200]}...")

return success

def test_all_pf_tasks(self):
"""Test all tasks defined in .pf file"""
print("=== COMPREHENSIVE PF TASK TESTING ===")
print(f"Started at: {datetime.now()}")
print(f"Workspace: {self.workspace}")

# Read the .pf file to understand what we're testing
try:
with open(f"{self.workspace}/.pf", 'r') as f:
pf_content = f.read()
print(f"\nPF file content preview:\n{pf_content[:300]}...")
except Exception as e:
print(f"Could not read .pf file: {e}")

# Test each task from the .pf file
# Note: Testing the underlying commands since pf tool may not be available

tasks_to_test = [
# Basic functionality tests
("test-import", "python3 -c 'import cdp; print(cdp.accessibility)'",
"Test basic CDP module import"),

# Code generation
("generate", "python3 generator/generate.py",
"Generate CDP bindings from JSON specs"),

# Testing tasks
("test-generate", "python3 -m pytest generator/ -v",
"Run tests on the generator code"),

("test-cdp", "python3 -m pytest test/ -v",
"Run tests on the CDP modules"),

# Type checking tasks
("mypy-generate", "python3 -m mypy generator/",
"Type check the generator code"),

("mypy-cdp", "python3 -m mypy cdp/",
"Type check the CDP modules"),

# Documentation
("docs", "cd docs && python3 -m sphinx -b html . _build/html",
"Build documentation"),

# Combined tasks (these map to pf tasks)
("typecheck-combined", "python3 -m mypy generator/ && python3 -m mypy cdp/",
"Combined type checking (typecheck pf task)"),

("test-combined", "python3 -m pytest test/ -v && python3 -m pytest generator/ -v && python3 -c 'import cdp; print(cdp.accessibility)'",
"Combined testing (test pf task)"),

("check-combined", "python3 -m pytest test/ -v && python3 -c 'import cdp; print(cdp.accessibility)'",
"Quick check (check pf task)"),
]

# Run all tests
passed = 0
total = len(tasks_to_test)

for task_name, command, description in tasks_to_test:
if self.test_task(task_name, command, description):
passed += 1

# Summary
print(f"\n{'='*60}")
print("FINAL TEST RESULTS")
print('='*60)

for task_name in self.results:
result = self.results[task_name]
status = "✓ PASS" if result['success'] else "✗ FAIL"
print(f"{task_name:20} {status}")

print(f"\nSummary: {passed}/{total} tasks passed")

if passed == total:
print("🎉 ALL PF TASKS ARE WORKING CORRECTLY!")
print("✓ Every command in the .pf file has been tested and works.")
else:
print("⚠️ SOME PF TASKS NEED ATTENTION")
print("✗ Failed tasks need to be fixed or removed per rules.")

# Save detailed results
self.save_results()

return passed == total

def save_results(self):
"""Save test results to file"""
try:
with open(f"{self.workspace}/pf_test_results.json", 'w') as f:
json.dump({
'timestamp': datetime.now().isoformat(),
'summary': {
'total_tasks': len(self.results),
'passed_tasks': sum(1 for r in self.results.values() if r['success']),
'failed_tasks': sum(1 for r in self.results.values() if not r['success'])
},
'results': self.results
}, f, indent=2)
print(f"\n📄 Detailed results saved to: pf_test_results.json")
except Exception as e:
print(f"Could not save results: {e}")

def main():
tester = PFTaskTester()
success = tester.test_all_pf_tasks()

if not success:
print("\n⚠️ ACTION REQUIRED:")
print("Some pf tasks failed. Per rules, these need to be:")
print("1. Fixed if they're still relevant")
print("2. Removed if they're no longer needed")
print("3. Updated if they're outdated")

return 0 if success else 1

if __name__ == "__main__":
sys.exit(main())
79 changes: 79 additions & 0 deletions comprehensive_test.py
@@ -0,0 +1,79 @@
#!/usr/bin/env python3

import subprocess
import sys
import os

def run_command(cmd, description):
"""Run a command and report results"""
print(f"\n=== {description} ===")
print(f"Running: {cmd}")

try:
result = subprocess.run(cmd, shell=True, capture_output=True, text=True, cwd='/workspace')

if result.returncode == 0:
print(f"✓ {description}: PASS")
if result.stdout:
print("Output:", result.stdout[:200] + "..." if len(result.stdout) > 200 else result.stdout)
else:
print(f"✗ {description}: FAIL")
print("Error:", result.stderr[:200] + "..." if len(result.stderr) > 200 else result.stderr)

return result.returncode == 0

except Exception as e:
print(f"✗ {description}: ERROR - {e}")
return False

def main():
print("=== Testing PF Tasks (Direct Commands) ===")

# Change to workspace directory
os.chdir('/workspace')

# Test basic Python functionality
success_count = 0
total_tests = 0

# Test 1: Basic import
total_tests += 1
if run_command("python3 -c 'import cdp; print(cdp.accessibility)'", "Basic CDP Import"):
success_count += 1

# Test 2: Generator tests
total_tests += 1
if run_command("python3 -m pytest generator/ -v", "Generator Tests"):
success_count += 1

# Test 3: CDP tests
total_tests += 1
if run_command("python3 -m pytest test/ -v", "CDP Tests"):
success_count += 1

# Test 4: Code generation
total_tests += 1
if run_command("python3 generator/generate.py", "Code Generation"):
success_count += 1

# Test 5: MyPy on generator
total_tests += 1
if run_command("python3 -m mypy generator/", "MyPy Generator"):
success_count += 1

# Test 6: MyPy on CDP
total_tests += 1
if run_command("python3 -m mypy cdp/", "MyPy CDP"):
success_count += 1

print(f"\n=== Test Summary ===")
print(f"Passed: {success_count}/{total_tests}")
print(f"Failed: {total_tests - success_count}/{total_tests}")

if success_count == total_tests:
print("✓ All pf tasks are working correctly!")
else:
print("✗ Some pf tasks have issues that need to be addressed.")

if __name__ == "__main__":
    sys.exit(main())