feat(resolver): add dependency conflict resolver with visual tree #636
Conversation
Add comprehensive dependency analysis system with:

- Visual dependency tree rendering (ASCII/Rich)
- Conflict prediction with known conflicts database
- Plain-language impact communication
- Alternative package suggestions
- Orphan package detection and cleanup

Includes full test suite (26 tests). Closes #442
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

The pull request substantially expands the dependency resolver module with a feature-rich dependency analysis system. It introduces multiple new data models, classes for conflict prediction, impact communication, visual tree rendering, and orphan package management, alongside a comprehensive test suite validating the new functionality.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant DependencyResolver
    participant ConflictPredictor
    participant ImpactCommunicator
    participant VisualTreeRenderer
    participant ImpactAssessment
    User->>DependencyResolver: analyze_installation(package_name)
    DependencyResolver->>DependencyResolver: get_apt_dependencies()
    DependencyResolver->>DependencyResolver: resolve_dependencies()
    DependencyResolver->>ConflictPredictor: predict_conflicts(dependencies)
    ConflictPredictor-->>DependencyResolver: Conflict[] detected
    DependencyResolver->>ImpactCommunicator: generate_summary(graph, conflicts)
    ImpactCommunicator-->>DependencyResolver: plain-language summary
    DependencyResolver->>VisualTreeRenderer: render_tree(graph)
    VisualTreeRenderer-->>DependencyResolver: ASCII/Rich tree view
    DependencyResolver->>ImpactCommunicator: explain_conflict(conflicts)
    ImpactCommunicator-->>DependencyResolver: conflict explanations
    DependencyResolver-->>ImpactAssessment: structured analysis result
    DependencyResolver-->>User: JSON-serializable analysis with dependencies, conflicts, summary, commands
```

```mermaid
sequenceDiagram
    participant User
    participant DependencyResolver
    participant OrphanPackageManager
    participant ImpactCommunicator
    User->>DependencyResolver: find_orphans()
    DependencyResolver->>OrphanPackageManager: find_orphans()
    OrphanPackageManager->>OrphanPackageManager: detect via deborphan, rdeps checks
    OrphanPackageManager->>OrphanPackageManager: validate PPA status
    OrphanPackageManager-->>DependencyResolver: OrphanPackage[] list
    DependencyResolver->>OrphanPackageManager: get_orphan_cleanup_commands(dry_run=True)
    OrphanPackageManager->>OrphanPackageManager: estimate space savings
    OrphanPackageManager-->>DependencyResolver: cleanup commands and recommendations
    DependencyResolver->>ImpactCommunicator: generate_summary(orphans)
    ImpactCommunicator-->>DependencyResolver: plain-language cleanup guidance
    DependencyResolver-->>User: orphan analysis with commands, space savings, recommendations
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:

How to Sign

This check runs automatically. Maintainers can update
Summary of Changes

Hello @SolariSystems, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly overhauls the dependency resolution system, transforming it into a more intelligent and user-friendly tool.

Highlights
Code Review
This pull request introduces a comprehensive rewrite of the dependency resolver, adding many valuable features like a visual dependency tree, conflict prediction, impact analysis, and orphan package management. The code is well-structured into distinct classes for different responsibilities. The addition of a test suite is also a great improvement. My review focuses on enhancing correctness, performance, and test robustness. Specifically, I've identified an issue with the installation order logic, opportunities to improve performance in orphan detection, and ways to make the new tests more portable and reliable.
```python
def _calculate_installation_order(
    self,
    package_name: str,
    dependencies: list[Dependency],
) -> list[str]:
    """Calculate optimal installation order."""
    # Simple topological sort based on dependency levels
    # Packages with no dependencies first
    no_deps = []
    has_deps = []

    for dep in dependencies:
        if not dep.is_satisfied:
            # Simple heuristic: system packages first, then others
            if "lib" in dep.name or dep.name in [
                "ca-certificates",
                "curl",
                "gnupg",
            ]:
                no_deps.append(dep.name)
            else:
                has_deps.append(dep.name)

    # Build installation order
    order = no_deps + has_deps

    # Add main package last
    if package_name not in order:
        order.append(package_name)

    return order
```
The current implementation of _calculate_installation_order is based on a simple heuristic (checking for "lib" in the name) rather than a true topological sort. This can lead to an incorrect installation order for packages with complex dependency chains, potentially causing installation failures. For a more robust solution, you should implement a proper topological sort algorithm (like Kahn's algorithm or a DFS-based approach) that builds a dependency graph and processes nodes in the correct order.
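A proper topological sort is short to implement. The sketch below is a minimal Kahn's-algorithm example, not the PR's code: it assumes a plain `edges` mapping from each package to the packages that must be installed before it, rather than the reviewed `Dependency` model.

```python
from collections import deque


def installation_order(edges: dict[str, list[str]]) -> list[str]:
    """Topologically sort packages so each dependency precedes its dependents.

    edges maps a package to the packages it depends on (install-first).
    Raises ValueError if the graph contains a dependency cycle.
    """
    # Collect every node mentioned on either side of an edge
    nodes = set(edges) | {d for deps in edges.values() for d in deps}
    # indegree counts how many unmet dependencies each package still has
    indegree = {n: 0 for n in nodes}
    dependents: dict[str, list[str]] = {n: [] for n in nodes}
    for pkg, deps in edges.items():
        for dep in deps:
            indegree[pkg] += 1
            dependents[dep].append(pkg)

    # Start from packages with no unmet dependencies (sorted for determinism)
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("dependency cycle detected")
    return order
```

For the resolver, `edges` would be built from the resolved dependency graph; alphabetical tie-breaking here only keeps the output deterministic.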
```python
result = subprocess.run(
    ["python3", "cortex/dependency_resolver.py", "--help"],
    capture_output=True,
    text=True,
    cwd="/tmp/cortex"
)
```
The tests in TestCLI use a hardcoded cwd="/tmp/cortex". This makes the tests non-portable, as they will fail on any machine where the project is not located at that exact path. Tests should not rely on a specific directory structure outside of the repository.
You should determine the project's root directory dynamically within the test and use that as the cwd. This change should be applied to all tests within the TestCLI class.
Suggested change:

```python
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
result = subprocess.run(
    ["python3", "cortex/dependency_resolver.py", "--help"],
    capture_output=True,
    text=True,
    cwd=project_root
)
```
```python
def estimate_space_savings(self, orphans: list[OrphanPackage]) -> int:
    """
    Estimate disk space that would be freed.

    Args:
        orphans: List of orphan packages

    Returns:
        Estimated bytes that would be freed
    """
    total_bytes = 0

    for orphan in orphans:
        try:
            result = subprocess.run(
                [
                    "dpkg-query",
                    "-W",
                    "-f=${Installed-Size}",
                    orphan.name,
                ],
                capture_output=True,
                text=True,
                timeout=5,
            )
            if result.returncode == 0 and result.stdout.strip():
                # Size is in KB
                total_bytes += int(result.stdout.strip()) * 1024
        except Exception:
            pass

    return total_bytes
```
The estimate_space_savings method calls dpkg-query inside a loop for each orphan package. This can be inefficient and slow if there is a large number of orphans. You can significantly improve performance by making a single call to dpkg-query with all orphan package names at once.
```python
def estimate_space_savings(self, orphans: list[OrphanPackage]) -> int:
    """
    Estimate disk space that would be freed.

    Args:
        orphans: List of orphan packages

    Returns:
        Estimated bytes that would be freed
    """
    if not orphans:
        return 0

    total_bytes = 0
    orphan_names = [o.name for o in orphans]
    try:
        # Query all packages at once for better performance
        result = subprocess.run(
            [
                "dpkg-query",
                "-W",
                "-f=${Installed-Size}\n",
                *orphan_names,
            ],
            capture_output=True,
            text=True,
            timeout=15,
            check=True,
        )
        for size_kb in result.stdout.strip().splitlines():
            if size_kb.isdigit():
                # Size is in KB
                total_bytes += int(size_kb) * 1024
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError) as e:
        logger.debug(f"Could not estimate space savings: {e}")
    except Exception as e:
        logger.warning(f"Unexpected error estimating space savings: {e}")
    return total_bytes
```

```python
def test_find_orphans_method(self):
    """Test orphan detection method exists."""
    self.assertTrue(hasattr(self.manager, 'find_orphans'))
    # Method should return a list of OrphanPackage
    result = self.manager.find_orphans()
    self.assertIsInstance(result, list)
```
The tests in TestOrphanPackageManager and other classes like TestDependencyResolver are integration tests that rely on the actual state of the system's package manager (apt, dpkg). This makes the tests non-deterministic (they can pass or fail depending on what's installed on the machine running the tests), slow, and brittle.
To improve test quality, these should be converted to unit tests. You can use unittest.mock.patch to mock subprocess.run and provide controlled, predictable output for the shell commands. This will isolate the tests from the system environment, making them faster and more reliable.
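As a sketch of that approach, the test below mocks `subprocess.run` so no real `dpkg-query` is invoked; `installed_size_kb` is a hypothetical stand-in for the resolver's shell-out code, not a function from this PR.

```python
import subprocess
import unittest
from unittest.mock import patch


def installed_size_kb(package: str) -> int:
    """Hypothetical stand-in for resolver code that shells out to dpkg-query."""
    result = subprocess.run(
        ["dpkg-query", "-W", "-f=${Installed-Size}", package],
        capture_output=True,
        text=True,
        timeout=5,
    )
    return int(result.stdout.strip()) if result.returncode == 0 else 0


class TestInstalledSize(unittest.TestCase):
    @patch("subprocess.run")
    def test_parses_dpkg_output(self, mock_run):
        # Fake CompletedProcess: deterministic on any machine, no dpkg required
        mock_run.return_value = subprocess.CompletedProcess(
            args=[], returncode=0, stdout="2048", stderr=""
        )
        self.assertEqual(installed_size_kb("curl"), 2048)
        mock_run.assert_called_once()
```

The same pattern applies to the resolver tests: patch wherever `subprocess.run` is looked up and return controlled fake output per command.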
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@cortex/dependency_resolver.py`:
- Around line 801-833: The orphan scan currently truncates auto_packages with
list(auto_packages)[:50], so find_orphans() silently misses most orphans; remove
the hard-coded slice or replace it with a configurable limit parameter (e.g.,
max_scan) on the find_orphans method/class and iterate over the full
auto_packages when max_scan is None (or set a reasonable default), updating call
sites and tests accordingly; keep the OrphanPackage construction and use of
self.resolver.get_installed_version intact so behavior is preserved while
ensuring all auto_packages are scanned unless a caller explicitly supplies a
limit.
- Around line 185-280: The renderers ignore Dependency.depth so transitive
dependencies are shown as siblings; update _render_ascii_tree and
_render_rich_tree to build a hierarchical structure using Dependency.depth (or a
parent→children map) and render nested nodes accordingly: in _render_ascii_tree,
group/sort dependencies by depth and emit branch prefixes per depth level (using
self.BRANCH / self.LAST_BRANCH logic repeated per depth) so children are
indented under their parent; in _render_rich_tree, create a parent->children
mapping (or sort by depth) and recursively add nodes via _add_dep_to_tree (or
extend _add_dep_to_tree to accept children/depth) so Rich Tree reflects true
parent→child nesting rather than a flat list. Ensure conflict_packages, status
symbol logic, version_str and reason_str are preserved when placing each
dependency into the hierarchical tree.
- Around line 476-507: The _check_apt_conflicts function currently only flags
conflicts when the conflicting package is already installed (uses
self.resolver.is_package_installed), so it misses conflicts between packages in
the current install set; update _check_apt_conflicts to also consider the input
packages list when matching the parsed conflicting package names (i.e., treat
any match in packages as a conflict), normalizing names (strip versions/arch)
and consulting resolver.is_package_installed only for packages not in the
install set; keep building Conflict objects the same (Conflict,
ConflictType.MUTUALLY_EXCLUSIVE) and include both installed and-to-be-installed
cases in resolution_steps/description.
- Around line 1404-1428: The find_orphans function is annotated as returning
list[dict] but actually returns a dict with keys "orphans", "total_count",
"estimated_space_mb", and "cleanup_commands"; update the return type annotation
from list[dict] to a more accurate type (e.g., dict[str, Any] or a
TypedDict/NamedTuple describing those keys) and update the docstring to describe
the returned mapping and each key; locate the function by name find_orphans and
references to self.orphan_manager (estimate_space_savings,
get_orphan_cleanup_commands) when making the change.
In `@tests/test_dependency_resolver.py`:
- Around line 129-220: The tests call real apt/dpkg commands via
DependencyResolver (constructor and analyze_installation/analyze_removal) making
CI flaky; modify the tests to either mock the system calls (patch
DependencyResolver._run_command or subprocess.run using unittest.mock.patch for
TestDependencyResolver and TestEndToEnd) so they return controlled fake outputs,
or add a runtime guard in setUp to skip tests when dpkg/apt-cache is not
available (check for shutil.which("dpkg") or similar) and call self.skipTest;
target symbols: DependencyResolver, _run_command, analyze_installation,
analyze_removal, and the failing test classes to keep tests hermetic.
- Around line 286-331: Tests hard-code cwd="/tmp/cortex" which breaks CI; change
each subprocess.run call in tests/test_dependency_resolver.py (e.g., in
test_cli_help, test_cli_tree, test_cli_analyze, test_cli_json) to compute the
repo root relative to this test file and pass that as cwd (use
Path(__file__).resolve().parents[1] or equivalent) and add the necessary import
of pathlib.Path at the top or inside the test functions so the cwd is derived
from the test file location instead of a fixed path.
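The configurable-limit idea from the orphan-scan prompt above could look like the following sketch; `packages_to_scan` and `max_scan` are hypothetical names introduced here for illustration, not part of the PR.

```python
from typing import Optional


def packages_to_scan(auto_packages: list[str], max_scan: Optional[int] = None) -> list[str]:
    """Select which auto-installed packages to check for orphan status.

    max_scan=None (the default) scans everything; callers who want the old
    fast path can pass an explicit limit instead of a hard-coded [:50].
    """
    pkgs = [p for p in auto_packages if p]  # skip empty names up front
    return pkgs if max_scan is None else pkgs[:max_scan]
```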
```python
def _render_ascii_tree(
    self,
    package_name: str,
    dependencies: list[Dependency],
    conflicts: list[Conflict] = None,
    show_versions: bool = True,
) -> str:
    """Render tree using ASCII characters."""
    lines = []
    conflict_packages = set()

    if conflicts:
        for c in conflicts:
            conflict_packages.add(c.package_a)
            conflict_packages.add(c.package_b)

    # Root package
    lines.append(f"📦 {package_name}")

    # Sort dependencies: unsatisfied first, then by name
    sorted_deps = sorted(
        dependencies, key=lambda d: (d.is_satisfied, d.name)
    )

    for i, dep in enumerate(sorted_deps):
        is_last = i == len(sorted_deps) - 1
        prefix = self.LAST_BRANCH if is_last else self.BRANCH

        # Status symbol
        if dep.name in conflict_packages:
            status = self.CONFLICT
        elif dep.is_optional:
            status = self.OPTIONAL
        elif dep.is_satisfied:
            status = self.INSTALLED
        else:
            status = self.MISSING

        # Version string
        version_str = ""
        if show_versions and dep.installed_version:
            version_str = f" ({dep.installed_version})"
        elif show_versions and dep.version:
            version_str = f" (needs {dep.version})"

        # Reason
        reason_str = f" - {dep.reason}" if dep.reason else ""

        lines.append(f"{prefix}{status} {dep.name}{version_str}{reason_str}")

    return "\n".join(lines)

def _render_rich_tree(
    self,
    package_name: str,
    dependencies: list[Dependency],
    conflicts: list[Conflict] = None,
    show_versions: bool = True,
) -> str:
    """Render tree using Rich library."""
    from rich.tree import Tree
    from io import StringIO

    conflict_packages = set()
    if conflicts:
        for c in conflicts:
            conflict_packages.add(c.package_a)
            conflict_packages.add(c.package_b)

    tree = Tree(f"📦 [bold blue]{package_name}[/bold blue]")

    # Group dependencies
    required = [d for d in dependencies if not d.is_optional]
    optional = [d for d in dependencies if d.is_optional]

    # Add required dependencies
    if required:
        req_branch = tree.add("[bold]Required[/bold]")
        for dep in sorted(required, key=lambda d: (d.is_satisfied, d.name)):
            self._add_dep_to_tree(
                req_branch, dep, conflict_packages, show_versions
            )

    # Add optional dependencies
    if optional:
        opt_branch = tree.add("[dim]Optional[/dim]")
        for dep in sorted(optional, key=lambda d: d.name):
            self._add_dep_to_tree(
                opt_branch, dep, conflict_packages, show_versions
            )

    # Capture output to string
    output = StringIO()
    console = self._console.__class__(file=output, force_terminal=True)
    console.print(tree)
    return output.getvalue()
```
Tree rendering is flat because dependency depth is ignored.

Dependency.depth is populated for transitive deps, but neither renderer uses it, so nested deps appear as siblings and the visual tree can be misleading.

💡 Minimal ASCII indentation using depth:

```diff
 for i, dep in enumerate(sorted_deps):
     is_last = i == len(sorted_deps) - 1
-    prefix = self.LAST_BRANCH if is_last else self.BRANCH
+    indent = self.EMPTY * dep.depth
+    prefix = indent + (self.LAST_BRANCH if is_last else self.BRANCH)
```

Consider also nesting in _render_rich_tree (e.g., build a parent→children map) for true hierarchy.
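One way to build that parent→children map is to walk the depth-annotated list with a stack, assuming dependencies arrive in pre-order (each entry directly after its parent). This is a sketch with hypothetical `Node`/`nest_by_depth` names, not code from the PR.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    children: list["Node"] = field(default_factory=list)


def nest_by_depth(items: list[tuple[str, int]]) -> Node:
    """Rebuild a tree from (name, depth) pairs listed in pre-order.

    Depth 1 means 'direct dependency of the root'; each item is attached
    to the most recent item one level shallower (a simple stack walk).
    """
    root = Node("root")
    stack = [(0, root)]  # (depth, node)
    for name, depth in items:
        node = Node(name)
        # Pop back to this item's parent level
        while stack and stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1].children.append(node)
        stack.append((depth, node))
    return root
```

The resulting `Node` tree could then be added to Rich's `Tree` recursively instead of the flat Required/Optional lists.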
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
def _render_ascii_tree(
    self,
    package_name: str,
    dependencies: list[Dependency],
    conflicts: list[Conflict] = None,
    show_versions: bool = True,
) -> str:
    """Render tree using ASCII characters."""
    lines = []
    conflict_packages = set()
    if conflicts:
        for c in conflicts:
            conflict_packages.add(c.package_a)
            conflict_packages.add(c.package_b)
    # Root package
    lines.append(f"📦 {package_name}")
    # Sort dependencies: unsatisfied first, then by name
    sorted_deps = sorted(
        dependencies, key=lambda d: (d.is_satisfied, d.name)
    )
    for i, dep in enumerate(sorted_deps):
        is_last = i == len(sorted_deps) - 1
        indent = self.EMPTY * dep.depth
        prefix = indent + (self.LAST_BRANCH if is_last else self.BRANCH)
        # Status symbol
        if dep.name in conflict_packages:
            status = self.CONFLICT
        elif dep.is_optional:
            status = self.OPTIONAL
        elif dep.is_satisfied:
            status = self.INSTALLED
        else:
            status = self.MISSING
        # Version string
        version_str = ""
        if show_versions and dep.installed_version:
            version_str = f" ({dep.installed_version})"
        elif show_versions and dep.version:
            version_str = f" (needs {dep.version})"
        # Reason
        reason_str = f" - {dep.reason}" if dep.reason else ""
        lines.append(f"{prefix}{status} {dep.name}{version_str}{reason_str}")
    return "\n".join(lines)
```
```python
def _check_apt_conflicts(self, packages: list[str]) -> list[Conflict]:
    """Check conflicts using apt-cache."""
    conflicts = []

    for pkg in packages:
        try:
            result = subprocess.run(
                ["apt-cache", "show", pkg],
                capture_output=True,
                text=True,
                timeout=10,
            )
            if result.returncode == 0:
                for line in result.stdout.split("\n"):
                    if line.startswith("Conflicts:"):
                        conflicting = line.split(":", 1)[1].strip()
                        for c in conflicting.split(","):
                            c = c.strip().split()[0]  # Remove version
                            if self.resolver.is_package_installed(c):
                                conflicts.append(
                                    Conflict(
                                        package_a=pkg,
                                        package_b=c,
                                        conflict_type=ConflictType.MUTUALLY_EXCLUSIVE,
                                        description="Declared conflict in package metadata",
                                        alternatives=[],
                                        resolution_steps=[
                                            f"sudo apt remove {c}",
                                            f"sudo apt install {pkg}",
                                        ],
                                    )
                                )
```
Apt-conflict scan only checks installed packages.

_check_apt_conflicts ignores conflicts between packages being installed together, so predicted conflicts can be missed.

✅ Include install-set in conflict detection:

```diff
-conflicts.extend(self._check_apt_conflicts(packages_to_install))
+conflicts.extend(self._check_apt_conflicts(packages_to_install, all_packages))
```

```diff
-def _check_apt_conflicts(self, packages: list[str]) -> list[Conflict]:
+def _check_apt_conflicts(
+    self, packages: list[str], all_packages: set[str]
+) -> list[Conflict]:
@@
-            if self.resolver.is_package_installed(c):
+            if c in all_packages:
```
```python
# Check each for reverse dependencies
for pkg in list(auto_packages)[:50]:  # Limit for performance
    if not pkg:
        continue

    rdeps_result = subprocess.run(
        ["apt-cache", "rdepends", "--installed", pkg],
        capture_output=True,
        text=True,
        timeout=10,
    )

    if rdeps_result.returncode == 0:
        lines = rdeps_result.stdout.strip().split("\n")
        # First line is package name, then "Reverse Depends:"
        # Actual deps start at line 2
        rdeps = [
            l.strip()
            for l in lines[2:]
            if l.strip() and not l.strip().startswith("|")
        ]

        if len(rdeps) == 0:
            orphans.append(
                OrphanPackage(
                    name=pkg,
                    installed_version=self.resolver.get_installed_version(pkg)
                    or "unknown",
                    reason="Automatically installed with no dependents",
                )
            )
```
Orphan scan silently drops most auto-installed packages.

Limiting to the first 50 auto packages means find_orphans() won't return all orphans, contradicting its docstring. Make the limit configurable or remove it.

🔧 Consider scanning all auto packages (or make limit configurable):

```diff
-for pkg in list(auto_packages)[:50]:  # Limit for performance
+for pkg in auto_packages:
```
Suggested replacement:

```python
# Check each for reverse dependencies
for pkg in auto_packages:
    if not pkg:
        continue

    rdeps_result = subprocess.run(
        ["apt-cache", "rdepends", "--installed", pkg],
        capture_output=True,
        text=True,
        timeout=10,
    )

    if rdeps_result.returncode == 0:
        lines = rdeps_result.stdout.strip().split("\n")
        # First line is package name, then "Reverse Depends:"
        # Actual deps start at line 2
        rdeps = [
            l.strip()
            for l in lines[2:]
            if l.strip() and not l.strip().startswith("|")
        ]

        if len(rdeps) == 0:
            orphans.append(
                OrphanPackage(
                    name=pkg,
                    installed_version=self.resolver.get_installed_version(pkg)
                    or "unknown",
                    reason="Automatically installed with no dependents",
                )
            )
```
🤖 Prompt for AI Agents
In `@cortex/dependency_resolver.py` around lines 801 - 833, The orphan scan
currently truncates auto_packages with list(auto_packages)[:50], so
find_orphans() silently misses most orphans; remove the hard-coded slice or
replace it with a configurable limit parameter (e.g., max_scan) on the
find_orphans method/class and iterate over the full auto_packages when max_scan
is None (or set a reasonable default), updating call sites and tests
accordingly; keep the OrphanPackage construction and use of
self.resolver.get_installed_version intact so behavior is preserved while
ensuring all auto_packages are scanned unless a caller explicitly supplies a
limit.
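The configurable limit the prompt describes could look like the sketch below. Everything here is illustrative, not the PR's actual code: the `max_scan` parameter and the injected `has_rdeps` callback (standing in for a real `apt-cache rdepends` lookup) are assumptions made so the idea is testable without apt.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class OrphanPackage:
    name: str
    installed_version: str = "unknown"
    reason: str = ""


def find_orphans(
    auto_packages: list[str],
    has_rdeps: Callable[[str], bool],
    max_scan: Optional[int] = None,
) -> list[OrphanPackage]:
    """Scan auto-installed packages for orphans; max_scan=None scans them all."""
    to_scan = auto_packages if max_scan is None else auto_packages[:max_scan]
    return [
        OrphanPackage(name=pkg, reason="Automatically installed with no dependents")
        for pkg in to_scan
        if pkg and not has_rdeps(pkg)
    ]


# Fake reverse-dependency map standing in for apt-cache rdepends
rdeps = {"libfoo": [], "libbar": ["someapp"], "libbaz": []}
orphans = find_orphans(list(rdeps), lambda p: bool(rdeps[p]))
print([o.name for o in orphans])  # → ['libfoo', 'libbaz']
```

With this shape, callers that want the old performance cap can pass `max_scan=50` explicitly, while the default honors the docstring and scans everything.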
```python
        for dep in graph.direct_dependencies:
            dep_prefix = "  " * (indent + 1)
            dep_status = "✅" if dep.is_satisfied else "❌"
            version_str = f" ({dep.installed_version})" if dep.installed_version else ""
            print(f"{dep_prefix}{dep_status} {dep.name}{version_str} - {dep.reason}")
```

```python
    def find_orphans(self) -> list[dict]:
        """
        Find orphaned packages on the system.

        Returns:
            List of orphan package information
        """
        orphans = self.orphan_manager.find_orphans()
        space_savings = self.orphan_manager.estimate_space_savings(orphans)

        return {
            "orphans": [
                {
                    "name": o.name,
                    "version": o.installed_version,
                    "reason": o.reason,
                }
                for o in orphans
            ],
            "total_count": len(orphans),
            "estimated_space_mb": space_savings / (1024 * 1024),
            "cleanup_commands": self.orphan_manager.get_orphan_cleanup_commands(
                orphans, dry_run=self.dry_run
            ),
        }
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Check the actual function definition and return type annotation
head -1500 cortex/dependency_resolver.py | tail -150 | cat -n
```

Repository: cortexlinux/cortex
Length of output: 6365

🏁 Script executed:

```shell
# Also check for Any import and the imports section
head -50 cortex/dependency_resolver.py | cat -n
```

Repository: cortexlinux/cortex
Length of output: 1884

🏁 Script executed:

```shell
# Verify find_orphans calls and usages
rg -n "find_orphans\(" cortex/
```

Repository: cortexlinux/cortex
Length of output: 387
Return type annotation doesn't match the actual dict result.

`find_orphans()` is annotated as `list[dict]` but returns a dict with `orphans`, `total_count`, `estimated_space_mb`, and `cleanup_commands` keys. Update the type hint and docstring to reflect the actual return type.

🔧 Align return type annotation

```diff
-from typing import Optional
+from typing import Any, Optional
@@
-    def find_orphans(self) -> list[dict]:
+    def find_orphans(self) -> dict[str, Any]:
         """
         Find orphaned packages on the system.

         Returns:
-            List of orphan package information
+            Dict containing orphans list, total count, estimated space savings, and cleanup commands
         """
```

🤖 Prompt for AI Agents
In `@cortex/dependency_resolver.py` around lines 1404 - 1428, The find_orphans
function is annotated as returning list[dict] but actually returns a dict with
keys "orphans", "total_count", "estimated_space_mb", and "cleanup_commands";
update the return type annotation from list[dict] to a more accurate type (e.g.,
dict[str, Any] or a TypedDict/NamedTuple describing those keys) and update the
docstring to describe the returned mapping and each key; locate the function by
name find_orphans and references to self.orphan_manager (estimate_space_savings,
get_orphan_cleanup_commands) when making the change.
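The `TypedDict` route the prompt mentions could be sketched like this. The `OrphanInfo`/`OrphanReport` names and the `build_report` helper are hypothetical illustrations of the return shape, not code from the PR:

```python
from typing import TypedDict


class OrphanInfo(TypedDict):
    name: str
    version: str
    reason: str


class OrphanReport(TypedDict):
    orphans: list[OrphanInfo]
    total_count: int
    estimated_space_mb: float
    cleanup_commands: list[str]


def build_report(
    orphans: list[OrphanInfo], space_bytes: int, commands: list[str]
) -> OrphanReport:
    # Mirrors the dict that find_orphans() actually returns,
    # but with a type checkers can verify.
    return {
        "orphans": orphans,
        "total_count": len(orphans),
        "estimated_space_mb": space_bytes / (1024 * 1024),
        "cleanup_commands": commands,
    }


report = build_report(
    [{"name": "libfoo", "version": "1.0", "reason": "no dependents"}],
    5 * 1024 * 1024,
    ["sudo apt autoremove"],
)
print(report["total_count"], report["estimated_space_mb"])  # → 1 5.0
```

A `TypedDict` keeps the JSON-friendly dict at runtime while giving mypy the per-key types that `dict[str, Any]` cannot express.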
```python
    def setUp(self):
        self.resolver = DependencyResolver()
        self.manager = OrphanPackageManager(self.resolver)

    def test_find_orphans_method(self):
        """Test orphan detection method exists."""
        self.assertTrue(hasattr(self.manager, 'find_orphans'))
        # Method should return a list of OrphanPackage
        result = self.manager.find_orphans()
        self.assertIsInstance(result, list)

    def test_get_cleanup_commands(self):
        """Test cleanup command generation."""
        orphans = [
            OrphanPackage(name="libfoo", installed_version="1.0"),
            OrphanPackage(name="libbar", installed_version="2.0"),
        ]
        commands = self.manager.get_orphan_cleanup_commands(orphans)
        self.assertTrue(len(commands) > 0)
        # Should suggest apt remove
        self.assertTrue(any("apt" in cmd for cmd in commands))

    def test_estimate_space_savings(self):
        """Test space savings estimation."""
        orphans = [
            OrphanPackage(name="libfoo", installed_version="1.0"),
            OrphanPackage(name="libbar", installed_version="2.0"),
        ]
        # Method should exist and return a value
        savings = self.manager.estimate_space_savings(orphans)
        self.assertIsInstance(savings, (int, float, str))


class TestDependencyResolver(unittest.TestCase):
    """Test the main DependencyResolver class integration."""

    def setUp(self):
        self.resolver = DependencyResolver()

    def test_analyze_installation(self):
        """Test full installation analysis."""
        result = self.resolver.analyze_installation("wget")
        self.assertIn("tree", result)
        self.assertIn("summary", result)
        self.assertIn("conflicts", result)
        self.assertIn("dependencies", result)

    def test_analyze_removal(self):
        """Test removal impact analysis."""
        result = self.resolver.analyze_removal("wget")
        # Severity is nested under impact
        self.assertIn("impact", result)
        self.assertIn("severity", result["impact"])

    def test_find_orphans(self):
        """Test orphan finding."""
        result = self.resolver.find_orphans()
        self.assertIsInstance(result, dict)


class TestEndToEnd(unittest.TestCase):
    """End-to-end integration tests."""

    def test_full_workflow_install(self):
        """Test complete installation analysis workflow."""
        resolver = DependencyResolver()
        result = resolver.analyze_installation("curl")

        # Should have all required fields
        self.assertIn("tree", result)
        self.assertIn("summary", result)
        self.assertIn("dependencies", result)
        self.assertIn("conflicts", result)
        self.assertIn("impact", result)

    def test_full_workflow_removal(self):
        """Test complete removal analysis workflow."""
        resolver = DependencyResolver()
        result = resolver.analyze_removal("curl")

        # Should have impact information
        self.assertIn("impact", result)
        self.assertIn("severity", result["impact"])

    def test_json_export(self):
        """Test JSON export format."""
        resolver = DependencyResolver()
        result = resolver.analyze_installation("wget")

        # Should be JSON serializable
        json_str = json.dumps(result, default=str)
        parsed = json.loads(json_str)
```
Tests invoke real apt/dpkg commands, making CI flaky.

DependencyResolver runs dpkg/apt-cache on init and in the analyze_* methods, which makes these unit tests environment-dependent and slow. Consider mocking `_run_command`/`subprocess.run`, or skip these tests unless the apt tools are present.

🧪 Example skip guard for system-dependent tests

```diff
+import shutil
@@
+APT_AVAILABLE = shutil.which("apt-cache") and shutil.which("dpkg")
+
+@unittest.skipUnless(APT_AVAILABLE, "apt/dpkg tools not available")
 class TestDependencyResolver(unittest.TestCase):
```

🤖 Prompt for AI Agents
In `@tests/test_dependency_resolver.py` around lines 129 - 220, The tests call
real apt/dpkg commands via DependencyResolver (constructor and
analyze_installation/analyze_removal) making CI flaky; modify the tests to
either mock the system calls (patch DependencyResolver._run_command or
subprocess.run using unittest.mock.patch for TestDependencyResolver and
TestEndToEnd) so they return controlled fake outputs, or add a runtime guard in
setUp to skip tests when dpkg/apt-cache is not available (check for
shutil.which("dpkg") or similar) and call self.skipTest; target symbols:
DependencyResolver, _run_command, analyze_installation, analyze_removal, and the
failing test classes to keep tests hermetic.
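The mocking route the prompt suggests might look like the sketch below. `FakeResolver` is an illustrative stand-in for the PR's `DependencyResolver` (which I haven't reproduced here); the point is the `unittest.mock.patch` pattern that keeps the test hermetic by faking `subprocess.run` instead of calling apt:

```python
import subprocess
import unittest
from unittest.mock import MagicMock, patch


class FakeResolver:
    """Stand-in for DependencyResolver: shells out to apt-cache."""

    def direct_dependencies(self, package: str) -> list[str]:
        result = subprocess.run(
            ["apt-cache", "depends", package],
            capture_output=True, text=True, timeout=10,
        )
        return [
            line.split("Depends:")[1].strip()
            for line in result.stdout.splitlines()
            if "Depends:" in line
        ]


class TestResolverMocked(unittest.TestCase):
    @patch("subprocess.run")
    def test_direct_dependencies(self, mock_run):
        # Controlled fake apt-cache output; no apt tools needed
        mock_run.return_value = MagicMock(
            returncode=0,
            stdout="wget\n  Depends: libc6\n  Depends: libssl3\n",
        )
        deps = FakeResolver().direct_dependencies("wget")
        self.assertEqual(deps, ["libc6", "libssl3"])
        mock_run.assert_called_once()


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestResolverMocked)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

The same `@patch` decorator would apply to the real test classes, with the fake stdout tailored to whatever each `analyze_*` method parses.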
```python
    def test_cli_help(self):
        """Test CLI help output."""
        import subprocess
        result = subprocess.run(
            ["python3", "cortex/dependency_resolver.py", "--help"],
            capture_output=True,
            text=True,
            cwd="/tmp/cortex"
        )
        self.assertEqual(result.returncode, 0)
        self.assertIn("--tree", result.stdout)
        self.assertIn("--analyze", result.stdout)
        self.assertIn("--orphans", result.stdout)

    def test_cli_tree(self):
        """Test CLI tree output."""
        import subprocess
        result = subprocess.run(
            ["python3", "cortex/dependency_resolver.py", "wget", "--tree"],
            capture_output=True,
            text=True,
            cwd="/tmp/cortex"
        )
        self.assertEqual(result.returncode, 0)
        self.assertIn("wget", result.stdout.lower())

    def test_cli_analyze(self):
        """Test CLI analyze output."""
        import subprocess
        result = subprocess.run(
            ["python3", "cortex/dependency_resolver.py", "wget", "--analyze"],
            capture_output=True,
            text=True,
            cwd="/tmp/cortex"
        )
        self.assertEqual(result.returncode, 0)

    def test_cli_json(self):
        """Test CLI JSON output."""
        import subprocess
        result = subprocess.run(
            ["python3", "cortex/dependency_resolver.py", "wget", "--analyze", "--json"],
            capture_output=True,
            text=True,
            cwd="/tmp/cortex"
        )
```
CLI tests assume the repo is located at /tmp/cortex.

Hard-coding the working directory makes these tests fail in most CI setups. Derive the repo root from the test file's location instead.

📁 Make cwd portable

```diff
+from pathlib import Path
@@
+REPO_ROOT = Path(__file__).resolve().parents[1]
@@
-        cwd="/tmp/cortex"
+        cwd=str(REPO_ROOT)
```

🤖 Prompt for AI Agents
In `@tests/test_dependency_resolver.py` around lines 286 - 331, Tests hard-code
cwd="/tmp/cortex" which breaks CI; change each subprocess.run call in
tests/test_dependency_resolver.py (e.g., in test_cli_help, test_cli_tree,
test_cli_analyze, test_cli_json) to compute the repo root relative to this test
file and pass that as cwd (use Path(__file__).resolve().parents[1] or
equivalent) and add the necessary import of pathlib.Path at the top or inside
the test functions so the cwd is derived from the test file location instead of
a fixed path.
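A self-contained version of that fix is sketched below; the `run_cli` helper is illustrative (the PR's tests inline the `subprocess.run` calls instead), and `parents[1]` assumes `tests/` sits one level below the repo root:

```python
import subprocess
import sys
from pathlib import Path

# Derive the repo root from this file's location instead of
# hard-coding /tmp/cortex; parents[1] assumes tests/ is one
# level below the repo root.
REPO_ROOT = Path(__file__).resolve().parents[1]


def run_cli(*args: str) -> subprocess.CompletedProcess:
    """Run the resolver CLI from the repo root, wherever tests execute."""
    return subprocess.run(
        [sys.executable, "cortex/dependency_resolver.py", *args],
        capture_output=True, text=True, cwd=str(REPO_ROOT),
    )


print(REPO_ROOT.is_absolute())  # → True
```

Using `sys.executable` rather than a bare `python3` also keeps the subprocess on the same interpreter the test suite runs under.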
recheck
Anshgrover23 left a comment

Follow the contributing.md guidelines (i.e. add a before/after video in the PR description, an AI Usage section, a documentation file in code, and tests-passing status, etc.). Closing this one; kindly create a new PR following all the guidelines.
Addresses issue cortexlinux/cortex-distro#50.

Changes
- `cortex/dependency_resolver.py`: Complete rewrite with visual tree, conflict prediction, impact communication, orphan management
- `tests/test_dependency_resolver.py`: Comprehensive test suite covering all 5 bounty requirements

How I verified
Ran tests locally. All tests passed.

Closes cortexlinux/cortex-distro#50