
Conversation


@SolariSystems SolariSystems commented Jan 18, 2026

Addresses issue cortexlinux/cortex-distro#50.

Changes

  • cortex/dependency_resolver.py: Complete rewrite with visual tree rendering, conflict prediction, impact communication, and orphan management
  • tests/test_dependency_resolver.py: Comprehensive test suite covering all 5 bounty requirements

How I verified

Ran tests locally:

python3 -m unittest tests.test_dependency_resolver

All tests passed.

Closes cortexlinux/cortex-distro#50

Summary by CodeRabbit

Release Notes

  • New Features
    • Visualize dependency trees in ASCII or rich formatted output
    • Predict and analyze conflicts with plain-language explanations
    • Detect orphaned packages and generate cleanup recommendations
    • Analyze installation/removal impact with detailed summaries
    • Enhanced CLI with tree viewing, JSON export, and orphan management capabilities


Add comprehensive dependency analysis system with:
- Visual dependency tree rendering (ASCII/Rich)
- Conflict prediction with known conflicts database
- Plain-language impact communication
- Alternative package suggestions
- Orphan package detection and cleanup

Includes full test suite (26 tests).

Closes #442

coderabbitai bot commented Jan 18, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

The pull request substantially expands the dependency resolver module with a feature-rich dependency analysis system. It introduces multiple new data models, classes for conflict prediction, impact communication, visual tree rendering, and orphan package management, alongside a comprehensive test suite validating the new functionality.

Changes

  • Core Dependency Resolver Refactor (cortex/dependency_resolver.py): Added 7 new public data models (ConflictType, ImpactSeverity, Dependency, Conflict, ImpactAssessment, OrphanPackage, DependencyGraph). Introduced 4 new major classes (VisualTreeRenderer, ConflictPredictor, ImpactCommunicator, OrphanPackageManager). Expanded DependencyResolver with new public methods (analyze_installation, analyze_removal, print_visual_tree, find_orphans, get_reverse_dependencies, get_apt_dependencies, resolve_dependencies), updated the constructor, added history logging via SQLite, and integrated conflict prediction into the resolution flow. Modified DependencyGraph to include reverse_dependencies and properly typed Conflict fields.
  • Comprehensive Test Suite (tests/test_dependency_resolver.py): New test suite with 340+ lines covering all new components: VisualTreeRenderer rendering logic, ImpactCommunicator explanations and summaries, OrphanPackageManager detection and cleanup, DependencyResolver analysis workflows, end-to-end installation/removal flows, and CLI integration validation. Verifies bounty requirements including visual tree structure, conflict prediction, plain-language summaries, and orphan management.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant DependencyResolver
    participant ConflictPredictor
    participant ImpactCommunicator
    participant VisualTreeRenderer
    participant ImpactAssessment

    User->>DependencyResolver: analyze_installation(package_name)
    DependencyResolver->>DependencyResolver: get_apt_dependencies()
    DependencyResolver->>DependencyResolver: resolve_dependencies()
    DependencyResolver->>ConflictPredictor: predict_conflicts(dependencies)
    ConflictPredictor-->>DependencyResolver: Conflict[] detected
    DependencyResolver->>ImpactCommunicator: generate_summary(graph, conflicts)
    ImpactCommunicator-->>DependencyResolver: plain-language summary
    DependencyResolver->>VisualTreeRenderer: render_tree(graph)
    VisualTreeRenderer-->>DependencyResolver: ASCII/Rich tree view
    DependencyResolver->>ImpactCommunicator: explain_conflict(conflicts)
    ImpactCommunicator-->>DependencyResolver: conflict explanations
    DependencyResolver-->>ImpactAssessment: structured analysis result
    DependencyResolver-->>User: JSON-serializable analysis with dependencies, conflicts, summary, commands
sequenceDiagram
    participant User
    participant DependencyResolver
    participant OrphanPackageManager
    participant ImpactCommunicator

    User->>DependencyResolver: find_orphans()
    DependencyResolver->>OrphanPackageManager: find_orphans()
    OrphanPackageManager->>OrphanPackageManager: detect via deborphan, rdeps checks
    OrphanPackageManager->>OrphanPackageManager: validate PPA status
    OrphanPackageManager-->>DependencyResolver: OrphanPackage[] list
    DependencyResolver->>OrphanPackageManager: get_orphan_cleanup_commands(dry_run=True)
    OrphanPackageManager->>OrphanPackageManager: estimate space savings
    OrphanPackageManager-->>DependencyResolver: cleanup commands and recommendations
    DependencyResolver->>ImpactCommunicator: generate_summary(orphans)
    ImpactCommunicator-->>DependencyResolver: plain-language cleanup guidance
    DependencyResolver-->>User: orphan analysis with commands, space savings, recommendations

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


🐰 A whisker-twitch of joy!
Dependencies dance in ASCII trees,
Conflicts predicted with graph expertise,
Orphans found and cleaned with ease,
In plain language that all can please! 🌳✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
  • Description check ⚠️ Warning: The description is missing the required checklist items and AI disclosure section from the template, though it covers the main changes and includes a related issue reference. Resolution: add the AI Disclosure section with checkbox and the complete Checklist section from the template to ensure full compliance with the repository's PR requirements.

✅ Passed checks (4 passed)
  • Linked Issues check ✅ Passed: The implementation comprehensively addresses all five bounty requirements: visual tree rendering (#442), conflict prediction, plain-language explanations, alternative suggestions, and orphan tracking/cleanup.
  • Out of Scope Changes check ✅ Passed: All changes are directly aligned with the five bounty requirements from issue #442; no out-of-scope modifications detected beyond the specified dependency resolver functionality.
  • Docstring Coverage ✅ Passed: Docstring coverage is 93.94%, which is sufficient; the required threshold is 80.00%.
  • Title check ✅ Passed: The title accurately summarizes the main change: it introduces a dependency conflict resolver with visual tree rendering, which is the primary feature of this substantial refactor.



@github-actions

CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:

How to Sign

  1. Read the CLA document
  2. Open a CLA signature request
  3. A maintainer will add you to the signers list
  4. Comment recheck on this PR to re-run verification

This check runs automatically. Maintainers can update .github/cla-signers.json to add signers.

@gemini-code-assist

Summary of Changes

Hello @SolariSystems, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly overhauls the dependency resolution system, transforming it into a more intelligent and user-friendly tool. The core DependencyResolver has been refactored to integrate new sub-components that provide visual dependency trees, predict and explain conflicts, communicate impact in plain language, and manage orphaned packages. These enhancements aim to provide users with clearer insights into their system's dependencies, enabling safer and more informed package management decisions.

Highlights

  • Comprehensive Rewrite and New Architecture: The cortex/dependency_resolver.py file has been completely rewritten, expanding significantly to introduce a modular architecture with dedicated classes for various dependency management aspects.
  • Visual Dependency Tree: A new VisualTreeRenderer class provides clear, formatted dependency tree visualizations, supporting both ASCII and Rich library output for enhanced readability.
  • Advanced Conflict Prediction: The ConflictPredictor identifies potential package conflicts, including mutually exclusive packages, port conflicts, and version mismatches, offering actionable alternatives and resolution steps.
  • Plain-Language Impact Communication: An ImpactCommunicator translates technical dependency changes and conflicts into user-friendly, plain-language explanations, making complex information accessible.
  • Orphan Package Management: The OrphanPackageManager detects and manages unneeded 'orphaned' packages, providing reasons for their orphan status, estimating potential space savings, and generating cleanup commands.
  • Enhanced Command-Line Interface (CLI): The CLI has been revamped to expose all new functionalities, offering commands for visual tree display, comprehensive installation/removal analysis, orphan management, and JSON output/export.
  • Comprehensive Test Suite: A new, extensive test suite (tests/test_dependency_resolver.py) has been added, ensuring coverage of all five bounty requirements and the new CLI functionalities.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@sonarqubecloud

Quality Gate failed

Failed conditions
4 Security Hotspots
C Reliability Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a comprehensive rewrite of the dependency resolver, adding many valuable features like a visual dependency tree, conflict prediction, impact analysis, and orphan package management. The code is well-structured into distinct classes for different responsibilities. The addition of a test suite is also a great improvement. My review focuses on enhancing correctness, performance, and test robustness. Specifically, I've identified an issue with the installation order logic, opportunities to improve performance in orphan detection, and ways to make the new tests more portable and reliable.

Comment on lines 1238 to 1263
    def _calculate_installation_order(
        self,
        package_name: str,
        dependencies: list[Dependency],
    ) -> list[str]:
        """Calculate optimal installation order."""
        # Simple topological sort based on dependency levels

        # Packages with no dependencies first
        no_deps = []
        has_deps = []

        for dep in dependencies:
            if not dep.is_satisfied:
                # Simple heuristic: system packages first, then others
                if "lib" in dep.name or dep.name in [
                    "ca-certificates",
                    "curl",
                    "gnupg",
                ]:
                    no_deps.append(dep.name)
                else:
                    has_deps.append(dep.name)

        # Build installation order
        order = no_deps + has_deps

        # Add main package last
        if package_name not in order:
            order.append(package_name)

        return order

high

The current implementation of _calculate_installation_order is based on a simple heuristic (checking for "lib" in the name) rather than a true topological sort. This can lead to an incorrect installation order for packages with complex dependency chains, potentially causing installation failures. For a more robust solution, you should implement a proper topological sort algorithm (like Kahn's algorithm or a DFS-based approach) that builds a dependency graph and processes nodes in the correct order.
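For illustration, a proper ordering along the lines the reviewer suggests could look like the sketch below: Kahn's algorithm over a name-to-direct-dependencies map. The `dep_map` shape and the function name are illustrative assumptions, not code from this PR.

```python
from collections import deque

def topological_install_order(dep_map: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm: every package appears after all of its dependencies.

    dep_map maps each package name to the packages it depends on.
    """
    # in_degree[p] = number of p's dependencies not yet placed in the order
    in_degree = {pkg: 0 for pkg in dep_map}
    dependents: dict[str, list[str]] = {pkg: [] for pkg in dep_map}
    for pkg, deps in dep_map.items():
        for dep in deps:
            if dep in dep_map:  # ignore deps outside the graph
                in_degree[pkg] += 1
                dependents[dep].append(pkg)

    queue = deque(p for p, d in in_degree.items() if d == 0)
    order: list[str] = []
    while queue:
        pkg = queue.popleft()
        order.append(pkg)
        for dependent in dependents[pkg]:
            in_degree[dependent] -= 1
            if in_degree[dependent] == 0:
                queue.append(dependent)

    if len(order) != len(dep_map):
        raise ValueError("dependency cycle detected")
    return order

deps = {"app": ["libfoo", "curl"], "curl": ["libfoo"], "libfoo": []}
order = topological_install_order(deps)
assert order.index("libfoo") < order.index("curl") < order.index("app")
```

On Python 3.9+ the stdlib `graphlib.TopologicalSorter` offers the same ordering without hand-rolling the algorithm.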

Comment on lines +289 to +294
        result = subprocess.run(
            ["python3", "cortex/dependency_resolver.py", "--help"],
            capture_output=True,
            text=True,
            cwd="/tmp/cortex"
        )

high

The tests in TestCLI use a hardcoded cwd="/tmp/cortex". This makes the tests non-portable, as they will fail on any machine where the project is not located at that exact path. Tests should not rely on a specific directory structure outside of the repository.

You should determine the project's root directory dynamically within the test and use that as the cwd. This change should be applied to all tests within the TestCLI class.

Suggested change

        project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        result = subprocess.run(
            ["python3", "cortex/dependency_resolver.py", "--help"],
            capture_output=True,
            text=True,
            cwd=project_root
        )

Comment on lines +902 to +933
    def estimate_space_savings(self, orphans: list[OrphanPackage]) -> int:
        """
        Estimate disk space that would be freed.

        Args:
            orphans: List of orphan packages

        Returns:
            Estimated bytes that would be freed
        """
        total_bytes = 0

        for orphan in orphans:
            try:
                result = subprocess.run(
                    [
                        "dpkg-query",
                        "-W",
                        "-f=${Installed-Size}",
                        orphan.name,
                    ],
                    capture_output=True,
                    text=True,
                    timeout=5,
                )
                if result.returncode == 0 and result.stdout.strip():
                    # Size is in KB
                    total_bytes += int(result.stdout.strip()) * 1024
            except Exception:
                pass

        return total_bytes

medium

The estimate_space_savings method calls dpkg-query inside a loop for each orphan package. This can be inefficient and slow if there is a large number of orphans. You can significantly improve performance by making a single call to dpkg-query with all orphan package names at once.

    def estimate_space_savings(self, orphans: list[OrphanPackage]) -> int:
        """
        Estimate disk space that would be freed.

        Args:
            orphans: List of orphan packages

        Returns:
            Estimated bytes that would be freed
        """
        if not orphans:
            return 0

        total_bytes = 0
        orphan_names = [o.name for o in orphans]

        try:
            # Query all packages at once for better performance
            result = subprocess.run(
                [
                    "dpkg-query",
                    "-W",
                    "-f=${Installed-Size}\n",
                    *orphan_names,
                ],
                capture_output=True,
                text=True,
                timeout=15,
                check=True,
            )
            for size_kb in result.stdout.strip().splitlines():
                if size_kb.isdigit():
                    # Size is in KB
                    total_bytes += int(size_kb) * 1024
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError) as e:
            logger.debug(f"Could not estimate space savings: {e}")
        except Exception as e:
            logger.warning(f"Unexpected error estimating space savings: {e}")

        return total_bytes

Comment on lines +133 to +139
    def test_find_orphans_method(self):
        """Test orphan detection method exists."""
        self.assertTrue(hasattr(self.manager, 'find_orphans'))
        # Method should return a list of OrphanPackage
        result = self.manager.find_orphans()
        self.assertIsInstance(result, list)


medium

The tests in TestOrphanPackageManager and other classes like TestDependencyResolver are integration tests that rely on the actual state of the system's package manager (apt, dpkg). This makes the tests non-deterministic (they can pass or fail depending on what's installed on the machine running the tests), slow, and brittle.

To improve test quality, these should be converted to unit tests. You can use unittest.mock.patch to mock subprocess.run and provide controlled, predictable output for the shell commands. This will isolate the tests from the system environment, making them faster and more reliable.
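As a sketch of that mocking approach, the pattern below replaces `subprocess.run` with a canned `CompletedProcess` so the test never touches the real package database. `count_fake_orphans` is a hypothetical stand-in for the resolver method under test, not the PR's actual code.

```python
import subprocess
from unittest.mock import patch

def count_fake_orphans() -> int:
    """Stand-in for a resolver method that shells out to dpkg-query."""
    result = subprocess.run(
        ["dpkg-query", "-W", "-f=${Package}\n"],
        capture_output=True,
        text=True,
    )
    return len(result.stdout.splitlines())

# In a unit test, patch subprocess.run to return controlled output
fake = subprocess.CompletedProcess(
    args=[], returncode=0, stdout="liboldfoo1\nliboldbar2\n", stderr=""
)
with patch("subprocess.run", return_value=fake):
    assert count_fake_orphans() == 2  # counts the two canned package names
```

In the real suite the patch target would be the module that looks up `subprocess.run` (e.g. `cortex.dependency_resolver.subprocess.run`), per the "where to patch" rule in `unittest.mock`.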

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

🤖 Fix all issues with AI agents
In `@cortex/dependency_resolver.py`:
- Around line 801-833: The orphan scan currently truncates auto_packages with
list(auto_packages)[:50], so find_orphans() silently misses most orphans; remove
the hard-coded slice or replace it with a configurable limit parameter (e.g.,
max_scan) on the find_orphans method/class and iterate over the full
auto_packages when max_scan is None (or set a reasonable default), updating call
sites and tests accordingly; keep the OrphanPackage construction and use of
self.resolver.get_installed_version intact so behavior is preserved while
ensuring all auto_packages are scanned unless a caller explicitly supplies a
limit.
- Around line 185-280: The renderers ignore Dependency.depth so transitive
dependencies are shown as siblings; update _render_ascii_tree and
_render_rich_tree to build a hierarchical structure using Dependency.depth (or a
parent→children map) and render nested nodes accordingly: in _render_ascii_tree,
group/sort dependencies by depth and emit branch prefixes per depth level (using
self.BRANCH / self.LAST_BRANCH logic repeated per depth) so children are
indented under their parent; in _render_rich_tree, create a parent->children
mapping (or sort by depth) and recursively add nodes via _add_dep_to_tree (or
extend _add_dep_to_tree to accept children/depth) so Rich Tree reflects true
parent→child nesting rather than a flat list. Ensure conflict_packages, status
symbol logic, version_str and reason_str are preserved when placing each
dependency into the hierarchical tree.
- Around line 476-507: The _check_apt_conflicts function currently only flags
conflicts when the conflicting package is already installed (uses
self.resolver.is_package_installed), so it misses conflicts between packages in
the current install set; update _check_apt_conflicts to also consider the input
packages list when matching the parsed conflicting package names (i.e., treat
any match in packages as a conflict), normalizing names (strip versions/arch)
and consulting resolver.is_package_installed only for packages not in the
install set; keep building Conflict objects the same (Conflict,
ConflictType.MUTUALLY_EXCLUSIVE) and include both installed and to-be-installed
cases in resolution_steps/description.
- Around line 1404-1428: The find_orphans function is annotated as returning
list[dict] but actually returns a dict with keys "orphans", "total_count",
"estimated_space_mb", and "cleanup_commands"; update the return type annotation
from list[dict] to a more accurate type (e.g., dict[str, Any] or a
TypedDict/NamedTuple describing those keys) and update the docstring to describe
the returned mapping and each key; locate the function by name find_orphans and
references to self.orphan_manager (estimate_space_savings,
get_orphan_cleanup_commands) when making the change.

In `@tests/test_dependency_resolver.py`:
- Around line 129-220: The tests call real apt/dpkg commands via
DependencyResolver (constructor and analyze_installation/analyze_removal) making
CI flaky; modify the tests to either mock the system calls (patch
DependencyResolver._run_command or subprocess.run using unittest.mock.patch for
TestDependencyResolver and TestEndToEnd) so they return controlled fake outputs,
or add a runtime guard in setUp to skip tests when dpkg/apt-cache is not
available (check for shutil.which("dpkg") or similar) and call self.skipTest;
target symbols: DependencyResolver, _run_command, analyze_installation,
analyze_removal, and the failing test classes to keep tests hermetic.
- Around line 286-331: Tests hard-code cwd="/tmp/cortex" which breaks CI; change
each subprocess.run call in tests/test_dependency_resolver.py (e.g., in
test_cli_help, test_cli_tree, test_cli_analyze, test_cli_json) to compute the
repo root relative to this test file and pass that as cwd (use
Path(__file__).resolve().parents[1] or equivalent) and add the necessary import
of pathlib.Path at the top or inside the test functions so the cwd is derived
from the test file location instead of a fixed path.
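One way to act on the `find_orphans` return-type prompt above is a `TypedDict`. The key names come from that prompt; the `OrphanReport` name and the `summarize` helper are illustrative, not part of the PR.

```python
from typing import TypedDict

class OrphanReport(TypedDict):
    """Shape of the mapping find_orphans() actually returns, per the review."""
    orphans: list[dict]
    total_count: int
    estimated_space_mb: float
    cleanup_commands: list[str]

def summarize(report: OrphanReport) -> str:
    """Render a one-line plain-language summary of an orphan report."""
    return (
        f"{report['total_count']} orphan(s), "
        f"~{report['estimated_space_mb']:.1f} MB reclaimable"
    )

report: OrphanReport = {
    "orphans": [],
    "total_count": 3,
    "estimated_space_mb": 42.5,
    "cleanup_commands": ["sudo apt autoremove"],
}
assert summarize(report) == "3 orphan(s), ~42.5 MB reclaimable"
```

With the annotation corrected to `OrphanReport` (or `dict[str, Any]`), type checkers can flag callers that still expect a `list[dict]`.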

Comment on lines +185 to +280
    def _render_ascii_tree(
        self,
        package_name: str,
        dependencies: list[Dependency],
        conflicts: list[Conflict] = None,
        show_versions: bool = True,
    ) -> str:
        """Render tree using ASCII characters."""
        lines = []
        conflict_packages = set()

        if conflicts:
            for c in conflicts:
                conflict_packages.add(c.package_a)
                conflict_packages.add(c.package_b)

        # Root package
        lines.append(f"📦 {package_name}")

        # Sort dependencies: unsatisfied first, then by name
        sorted_deps = sorted(
            dependencies, key=lambda d: (d.is_satisfied, d.name)
        )

        for i, dep in enumerate(sorted_deps):
            is_last = i == len(sorted_deps) - 1
            prefix = self.LAST_BRANCH if is_last else self.BRANCH

            # Status symbol
            if dep.name in conflict_packages:
                status = self.CONFLICT
            elif dep.is_optional:
                status = self.OPTIONAL
            elif dep.is_satisfied:
                status = self.INSTALLED
            else:
                status = self.MISSING

            # Version string
            version_str = ""
            if show_versions and dep.installed_version:
                version_str = f" ({dep.installed_version})"
            elif show_versions and dep.version:
                version_str = f" (needs {dep.version})"

            # Reason
            reason_str = f" - {dep.reason}" if dep.reason else ""

            lines.append(f"{prefix}{status} {dep.name}{version_str}{reason_str}")

        return "\n".join(lines)

    def _render_rich_tree(
        self,
        package_name: str,
        dependencies: list[Dependency],
        conflicts: list[Conflict] = None,
        show_versions: bool = True,
    ) -> str:
        """Render tree using Rich library."""
        from rich.tree import Tree
        from io import StringIO

        conflict_packages = set()
        if conflicts:
            for c in conflicts:
                conflict_packages.add(c.package_a)
                conflict_packages.add(c.package_b)

        tree = Tree(f"📦 [bold blue]{package_name}[/bold blue]")

        # Group dependencies
        required = [d for d in dependencies if not d.is_optional]
        optional = [d for d in dependencies if d.is_optional]

        # Add required dependencies
        if required:
            req_branch = tree.add("[bold]Required[/bold]")
            for dep in sorted(required, key=lambda d: (d.is_satisfied, d.name)):
                self._add_dep_to_tree(
                    req_branch, dep, conflict_packages, show_versions
                )

        # Add optional dependencies
        if optional:
            opt_branch = tree.add("[dim]Optional[/dim]")
            for dep in sorted(optional, key=lambda d: d.name):
                self._add_dep_to_tree(
                    opt_branch, dep, conflict_packages, show_versions
                )

        # Capture output to string
        output = StringIO()
        console = self._console.__class__(file=output, force_terminal=True)
        console.print(tree)
        return output.getvalue()

⚠️ Potential issue | 🟠 Major

Tree rendering is flat because dependency depth is ignored.

Dependency.depth is populated for transitive deps but neither renderer uses it, so nested deps appear as siblings and the visual tree can be misleading.

💡 Minimal ASCII indentation using depth
-        for i, dep in enumerate(sorted_deps):
+        for i, dep in enumerate(sorted_deps):
             is_last = i == len(sorted_deps) - 1
-            prefix = self.LAST_BRANCH if is_last else self.BRANCH
+            indent = self.EMPTY * dep.depth
+            prefix = indent + (self.LAST_BRANCH if is_last else self.BRANCH)

Consider also nesting in _render_rich_tree (e.g., build a parent→children map) for true hierarchy.
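A minimal sketch of such a parent→children map, assuming each dependency carries a `depth` field and the flat list arrives in discovery order (so a dep at depth d+1 belongs to the most recent dep seen at depth d). The `Dep` dataclass here is a stand-in for the PR's `Dependency` model.

```python
from dataclasses import dataclass

@dataclass
class Dep:
    name: str
    depth: int  # 0 = direct dependency of the root, 1 = its dependency, ...

def nest_by_depth(deps: list[Dep]) -> dict[str, list[str]]:
    """Group a depth-annotated flat list into a parent -> children map."""
    children: dict[str, list[str]] = {}
    last_at_depth: dict[int, str] = {}
    for dep in deps:
        if dep.depth > 0:
            parent = last_at_depth.get(dep.depth - 1)
            if parent is not None:
                children.setdefault(parent, []).append(dep.name)
        last_at_depth[dep.depth] = dep.name
    return children

flat = [
    Dep("curl", 0),
    Dep("libcurl4", 1),
    Dep("openssl", 2),
    Dep("ca-certificates", 0),
]
assert nest_by_depth(flat) == {"curl": ["libcurl4"], "libcurl4": ["openssl"]}
```

The resulting map can then drive recursive `tree.add(...)` calls so the Rich tree mirrors the real hierarchy.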

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    def _render_ascii_tree(
        self,
        package_name: str,
        dependencies: list[Dependency],
        conflicts: list[Conflict] = None,
        show_versions: bool = True,
    ) -> str:
        """Render tree using ASCII characters."""
        lines = []
        conflict_packages = set()

        if conflicts:
            for c in conflicts:
                conflict_packages.add(c.package_a)
                conflict_packages.add(c.package_b)

        # Root package
        lines.append(f"📦 {package_name}")

        # Sort dependencies: unsatisfied first, then by name
        sorted_deps = sorted(
            dependencies, key=lambda d: (d.is_satisfied, d.name)
        )

        for i, dep in enumerate(sorted_deps):
            is_last = i == len(sorted_deps) - 1
            indent = self.EMPTY * dep.depth
            prefix = indent + (self.LAST_BRANCH if is_last else self.BRANCH)

            # Status symbol
            if dep.name in conflict_packages:
                status = self.CONFLICT
            elif dep.is_optional:
                status = self.OPTIONAL
            elif dep.is_satisfied:
                status = self.INSTALLED
            else:
                status = self.MISSING

            # Version string
            version_str = ""
            if show_versions and dep.installed_version:
                version_str = f" ({dep.installed_version})"
            elif show_versions and dep.version:
                version_str = f" (needs {dep.version})"

            # Reason
            reason_str = f" - {dep.reason}" if dep.reason else ""

            lines.append(f"{prefix}{status} {dep.name}{version_str}{reason_str}")

        return "\n".join(lines)

    def _render_rich_tree(
        self,
        package_name: str,
        dependencies: list[Dependency],
        conflicts: list[Conflict] = None,
        show_versions: bool = True,
    ) -> str:
        """Render tree using Rich library."""
        from rich.tree import Tree
        from io import StringIO

        conflict_packages = set()
        if conflicts:
            for c in conflicts:
                conflict_packages.add(c.package_a)
                conflict_packages.add(c.package_b)

        tree = Tree(f"📦 [bold blue]{package_name}[/bold blue]")

        # Group dependencies
        required = [d for d in dependencies if not d.is_optional]
        optional = [d for d in dependencies if d.is_optional]

        # Add required dependencies
        if required:
            req_branch = tree.add("[bold]Required[/bold]")
            for dep in sorted(required, key=lambda d: (d.is_satisfied, d.name)):
                self._add_dep_to_tree(
                    req_branch, dep, conflict_packages, show_versions
                )

        # Add optional dependencies
        if optional:
            opt_branch = tree.add("[dim]Optional[/dim]")
            for dep in sorted(optional, key=lambda d: d.name):
                self._add_dep_to_tree(
                    opt_branch, dep, conflict_packages, show_versions
                )

        # Capture output to string
        output = StringIO()
        console = self._console.__class__(file=output, force_terminal=True)
        console.print(tree)
        return output.getvalue()
🤖 Prompt for AI Agents
In `@cortex/dependency_resolver.py` around lines 185 - 280, The renderers ignore
Dependency.depth so transitive dependencies are shown as siblings; update
_render_ascii_tree and _render_rich_tree to build a hierarchical structure using
Dependency.depth (or a parent→children map) and render nested nodes accordingly:
in _render_ascii_tree, group/sort dependencies by depth and emit branch prefixes
per depth level (using self.BRANCH / self.LAST_BRANCH logic repeated per depth)
so children are indented under their parent; in _render_rich_tree, create a
parent->children mapping (or sort by depth) and recursively add nodes via
_add_dep_to_tree (or extend _add_dep_to_tree to accept children/depth) so Rich
Tree reflects true parent→child nesting rather than a flat list. Ensure
conflict_packages, status symbol logic, version_str and reason_str are preserved
when placing each dependency into the hierarchical tree.
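The depth-based nesting the prompt asks for can be sketched as follows. `Dep` is a hypothetical stand-in for the PR's `Dependency` model (only `name` and `depth` are assumed), and the prefix logic is simplified: it indents by depth but does not draw the `│` continuation bars of a full tree.

```python
from dataclasses import dataclass


@dataclass
class Dep:
    name: str
    depth: int  # 0 = direct dependency, 1 = child of the preceding depth-0 node, etc.


def render_ascii(package: str, deps: list[Dep]) -> str:
    """Render a pre-ordered, depth-annotated list as a nested ASCII tree."""
    lines = [package]
    for i, dep in enumerate(deps):
        # A node is the last sibling if no later node shares its depth
        # before the list returns to a shallower depth.
        is_last = True
        for later in deps[i + 1:]:
            if later.depth < dep.depth:
                break
            if later.depth == dep.depth:
                is_last = False
                break
        prefix = "    " * dep.depth
        branch = "└── " if is_last else "├── "
        lines.append(f"{prefix}{branch}{dep.name}")
    return "\n".join(lines)
```

The same pre-ordered traversal with a parent→children map would let `_render_rich_tree` recurse instead, since Rich's `Tree.add` already returns the child branch to attach to.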

Comment on lines +476 to +507
def _check_apt_conflicts(self, packages: list[str]) -> list[Conflict]:
"""Check conflicts using apt-cache."""
conflicts = []

for pkg in packages:
try:
result = subprocess.run(
["apt-cache", "show", pkg],
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0:
for line in result.stdout.split("\n"):
if line.startswith("Conflicts:"):
conflicting = line.split(":", 1)[1].strip()
for c in conflicting.split(","):
c = c.strip().split()[0] # Remove version
if self.resolver.is_package_installed(c):
conflicts.append(
Conflict(
package_a=pkg,
package_b=c,
conflict_type=ConflictType.MUTUALLY_EXCLUSIVE,
description=f"Declared conflict in package metadata",
alternatives=[],
resolution_steps=[
f"sudo apt remove {c}",
f"sudo apt install {pkg}",
],
)
)

⚠️ Potential issue | 🟠 Major

Apt-conflict scan only checks installed packages.

_check_apt_conflicts ignores conflicts between packages being installed together, so predicted conflicts can be missed.

✅ Include install-set in conflict detection
-        conflicts.extend(self._check_apt_conflicts(packages_to_install))
+        conflicts.extend(self._check_apt_conflicts(packages_to_install, all_packages))
-    def _check_apt_conflicts(self, packages: list[str]) -> list[Conflict]:
+    def _check_apt_conflicts(
+        self, packages: list[str], all_packages: set[str]
+    ) -> list[Conflict]:
@@
-                                if self.resolver.is_package_installed(c):
+                                if c in all_packages:
🤖 Prompt for AI Agents
In `@cortex/dependency_resolver.py` around lines 476 - 507, The
_check_apt_conflicts function currently only flags conflicts when the
conflicting package is already installed (uses
self.resolver.is_package_installed), so it misses conflicts between packages in
the current install set; update _check_apt_conflicts to also consider the input
packages list when matching the parsed conflicting package names (i.e., treat
any match in packages as a conflict), normalizing names (strip versions/arch)
and consulting resolver.is_package_installed only for packages not in the
install set; keep building Conflict objects the same (Conflict,
ConflictType.MUTUALLY_EXCLUSIVE) and include both installed and-to-be-installed
cases in resolution_steps/description.
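The install-set check the suggestion describes can be sketched independently of apt. Here `is_installed` is a hypothetical predicate standing in for `resolver.is_package_installed`, and the parsing mirrors the `Conflicts:` handling quoted above:

```python
from typing import Callable


def conflicts_from_field(
    pkg: str,
    conflicts_field: str,
    install_set: set[str],
    is_installed: Callable[[str], bool],
) -> list[tuple[str, str]]:
    """Flag each entry of an apt 'Conflicts:' value that is already
    installed *or* part of the same install set."""
    found = []
    for entry in conflicts_field.split(","):
        entry = entry.strip()
        if not entry:
            continue
        name = entry.split()[0]  # drop version constraints like '(<< 2.0)'
        if name in install_set or is_installed(name):
            found.append((pkg, name))
    return found
```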

Comment on lines +801 to +833
# Check each for reverse dependencies
for pkg in list(auto_packages)[:50]: # Limit for performance
if not pkg:
continue

rdeps_result = subprocess.run(
["apt-cache", "rdepends", "--installed", pkg],
capture_output=True,
text=True,
timeout=10,
)

if rdeps_result.returncode == 0:
lines = rdeps_result.stdout.strip().split("\n")
# First line is package name, then "Reverse Depends:"
# Actual deps start at line 2
rdeps = [
l.strip()
for l in lines[2:]
if l.strip() and not l.strip().startswith("|")
]

if len(rdeps) == 0:
orphans.append(
OrphanPackage(
name=pkg,
installed_version=self.resolver.get_installed_version(
pkg
)
or "unknown",
reason="Automatically installed with no dependents",
)
)

⚠️ Potential issue | 🟠 Major

Orphan scan silently drops most auto-installed packages.

Limiting to the first 50 auto packages means find_orphans() won’t return all orphans, contradicting its docstring. Make the limit configurable or remove it.

🔧 Consider scanning all auto packages (or make limit configurable)
-                for pkg in list(auto_packages)[:50]:  # Limit for performance
+                for pkg in auto_packages:
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Check each for reverse dependencies
for pkg in list(auto_packages)[:50]: # Limit for performance
if not pkg:
continue
rdeps_result = subprocess.run(
["apt-cache", "rdepends", "--installed", pkg],
capture_output=True,
text=True,
timeout=10,
)
if rdeps_result.returncode == 0:
lines = rdeps_result.stdout.strip().split("\n")
# First line is package name, then "Reverse Depends:"
# Actual deps start at line 2
rdeps = [
l.strip()
for l in lines[2:]
if l.strip() and not l.strip().startswith("|")
]
if len(rdeps) == 0:
orphans.append(
OrphanPackage(
name=pkg,
installed_version=self.resolver.get_installed_version(
pkg
)
or "unknown",
reason="Automatically installed with no dependents",
)
)
# Check each for reverse dependencies
for pkg in auto_packages:
if not pkg:
continue
rdeps_result = subprocess.run(
["apt-cache", "rdepends", "--installed", pkg],
capture_output=True,
text=True,
timeout=10,
)
if rdeps_result.returncode == 0:
lines = rdeps_result.stdout.strip().split("\n")
# First line is package name, then "Reverse Depends:"
# Actual deps start at line 2
rdeps = [
l.strip()
for l in lines[2:]
if l.strip() and not l.strip().startswith("|")
]
if len(rdeps) == 0:
orphans.append(
OrphanPackage(
name=pkg,
installed_version=self.resolver.get_installed_version(
pkg
)
or "unknown",
reason="Automatically installed with no dependents",
)
)
🤖 Prompt for AI Agents
In `@cortex/dependency_resolver.py` around lines 801 - 833, The orphan scan
currently truncates auto_packages with list(auto_packages)[:50], so
find_orphans() silently misses most orphans; remove the hard-coded slice or
replace it with a configurable limit parameter (e.g., max_scan) on the
find_orphans method/class and iterate over the full auto_packages when max_scan
is None (or set a reasonable default), updating call sites and tests
accordingly; keep the OrphanPackage construction and use of
self.resolver.get_installed_version intact so behavior is preserved while
ensuring all auto_packages are scanned unless a caller explicitly supplies a
limit.
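A minimal sketch of the configurable limit, with `has_rdeps` as a hypothetical predicate standing in for the `apt-cache rdepends --installed` query:

```python
from typing import Callable, Optional


def find_orphans(
    auto_packages: list[str],
    has_rdeps: Callable[[str], bool],
    max_scan: Optional[int] = None,
) -> list[str]:
    """Return auto-installed packages with no reverse dependencies.

    max_scan=None scans the full set; passing an int restores the old
    truncating behavior as an explicit caller choice rather than a
    silent hard-coded slice.
    """
    to_scan = auto_packages if max_scan is None else auto_packages[:max_scan]
    return [pkg for pkg in to_scan if pkg and not has_rdeps(pkg)]
```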

Comment on lines +1404 to +1428
def find_orphans(self) -> list[dict]:
"""
Find orphaned packages on the system.
Returns:
List of orphan package information
"""
orphans = self.orphan_manager.find_orphans()
space_savings = self.orphan_manager.estimate_space_savings(orphans)

return {
"orphans": [
{
"name": o.name,
"version": o.installed_version,
"reason": o.reason,
}
for o in orphans
],
"total_count": len(orphans),
"estimated_space_mb": space_savings / (1024 * 1024),
"cleanup_commands": self.orphan_manager.get_orphan_cleanup_commands(
orphans, dry_run=self.dry_run
),
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Check the actual function definition and return type annotation
head -1500 cortex/dependency_resolver.py | tail -150 | cat -n

Repository: cortexlinux/cortex

Length of output: 6365


🏁 Script executed:

# Also check for Any import and the imports section
head -50 cortex/dependency_resolver.py | cat -n

Repository: cortexlinux/cortex

Length of output: 1884


🏁 Script executed:

# Verify find_orphans calls and usages
rg -n "find_orphans\(" cortex/

Repository: cortexlinux/cortex

Length of output: 387


Return type annotation doesn't match actual dict result.

find_orphans() is annotated as list[dict] but returns a dict with orphans, total_count, estimated_space_mb, and cleanup_commands keys. Update the type hint and docstring to reflect the actual return type.

🔧 Align return type annotation
-from typing import Optional
+from typing import Any, Optional
@@
-    def find_orphans(self) -> list[dict]:
+    def find_orphans(self) -> dict[str, Any]:
         """
-        Find orphaned packages on the system.
+        Find orphaned packages on the system.

         Returns:
-            List of orphan package information
+            Dict containing orphans list, total count, estimated space savings, and cleanup commands
         """
🤖 Prompt for AI Agents
In `@cortex/dependency_resolver.py` around lines 1404 - 1428, The find_orphans
function is annotated as returning list[dict] but actually returns a dict with
keys "orphans", "total_count", "estimated_space_mb", and "cleanup_commands";
update the return type annotation from list[dict] to a more accurate type (e.g.,
dict[str, Any] or a TypedDict/NamedTuple describing those keys) and update the
docstring to describe the returned mapping and each key; locate the function by
name find_orphans and references to self.orphan_manager (estimate_space_savings,
get_orphan_cleanup_commands) when making the change.
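One way to encode the suggested annotation is a pair of `TypedDict`s describing the mapping. The key names match the dict built in the quoted code; the helper and everything else here are assumptions:

```python
from typing import TypedDict


class OrphanInfo(TypedDict):
    name: str
    version: str
    reason: str


class OrphanReport(TypedDict):
    orphans: list[OrphanInfo]
    total_count: int
    estimated_space_mb: float
    cleanup_commands: list[str]


def make_report(
    orphans: list[OrphanInfo], space_bytes: int, commands: list[str]
) -> OrphanReport:
    """Assemble the report shape that find_orphans() actually returns."""
    return {
        "orphans": orphans,
        "total_count": len(orphans),
        "estimated_space_mb": space_bytes / (1024 * 1024),
        "cleanup_commands": commands,
    }
```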

Comment on lines +129 to +220
def setUp(self):
self.resolver = DependencyResolver()
self.manager = OrphanPackageManager(self.resolver)

def test_find_orphans_method(self):
"""Test orphan detection method exists."""
self.assertTrue(hasattr(self.manager, 'find_orphans'))
# Method should return a list of OrphanPackage
result = self.manager.find_orphans()
self.assertIsInstance(result, list)

def test_get_cleanup_commands(self):
"""Test cleanup command generation."""
orphans = [
OrphanPackage(name="libfoo", installed_version="1.0"),
OrphanPackage(name="libbar", installed_version="2.0"),
]
commands = self.manager.get_orphan_cleanup_commands(orphans)
self.assertTrue(len(commands) > 0)
# Should suggest apt remove
self.assertTrue(any("apt" in cmd for cmd in commands))

def test_estimate_space_savings(self):
"""Test space savings estimation."""
orphans = [
OrphanPackage(name="libfoo", installed_version="1.0"),
OrphanPackage(name="libbar", installed_version="2.0"),
]
# Method should exist and return a value
savings = self.manager.estimate_space_savings(orphans)
self.assertIsInstance(savings, (int, float, str))


class TestDependencyResolver(unittest.TestCase):
"""Test the main DependencyResolver class integration."""

def setUp(self):
self.resolver = DependencyResolver()

def test_analyze_installation(self):
"""Test full installation analysis."""
result = self.resolver.analyze_installation("wget")
self.assertIn("tree", result)
self.assertIn("summary", result)
self.assertIn("conflicts", result)
self.assertIn("dependencies", result)

def test_analyze_removal(self):
"""Test removal impact analysis."""
result = self.resolver.analyze_removal("wget")
# Severity is nested under impact
self.assertIn("impact", result)
self.assertIn("severity", result["impact"])

def test_find_orphans(self):
"""Test orphan finding."""
result = self.resolver.find_orphans()
self.assertIsInstance(result, dict)


class TestEndToEnd(unittest.TestCase):
"""End-to-end integration tests."""

def test_full_workflow_install(self):
"""Test complete installation analysis workflow."""
resolver = DependencyResolver()
result = resolver.analyze_installation("curl")

# Should have all required fields
self.assertIn("tree", result)
self.assertIn("summary", result)
self.assertIn("dependencies", result)
self.assertIn("conflicts", result)
self.assertIn("impact", result)

def test_full_workflow_removal(self):
"""Test complete removal analysis workflow."""
resolver = DependencyResolver()
result = resolver.analyze_removal("curl")

# Should have impact information
self.assertIn("impact", result)
self.assertIn("severity", result["impact"])

def test_json_export(self):
"""Test JSON export format."""
resolver = DependencyResolver()
result = resolver.analyze_installation("wget")

# Should be JSON serializable
json_str = json.dumps(result, default=str)
parsed = json.loads(json_str)

⚠️ Potential issue | 🟠 Major

Tests invoke real apt/dpkg commands, making CI flaky.

DependencyResolver runs dpkg/apt-cache on init and in analyze_*, which makes unit tests environment-dependent and slow. Consider mocking _run_command/subprocess.run, or skip these tests unless apt tools are present.

🧪 Example skip guard for system-dependent tests
+import shutil
@@
+APT_AVAILABLE = shutil.which("apt-cache") and shutil.which("dpkg")
+
+@unittest.skipUnless(APT_AVAILABLE, "apt/dpkg tools not available")
 class TestDependencyResolver(unittest.TestCase):
🤖 Prompt for AI Agents
In `@tests/test_dependency_resolver.py` around lines 129 - 220, The tests call
real apt/dpkg commands via DependencyResolver (constructor and
analyze_installation/analyze_removal) making CI flaky; modify the tests to
either mock the system calls (patch DependencyResolver._run_command or
subprocess.run using unittest.mock.patch for TestDependencyResolver and
TestEndToEnd) so they return controlled fake outputs, or add a runtime guard in
setUp to skip tests when dpkg/apt-cache is not available (check for
shutil.which("dpkg") or similar) and call self.skipTest; target symbols:
DependencyResolver, _run_command, analyze_installation, analyze_removal, and the
failing test classes to keep tests hermetic.
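A hermetic version of such a test can patch `subprocess.run` so no real dpkg is needed. `list_installed` below is a toy stand-in for the resolver's `_run_command` wrapper:

```python
import subprocess
import unittest
from unittest.mock import patch


def list_installed() -> list[str]:
    """Toy wrapper that shells out to dpkg (stand-in for _run_command)."""
    result = subprocess.run(
        ["dpkg", "--get-selections"],
        capture_output=True, text=True, timeout=10,
    )
    return [line.split()[0] for line in result.stdout.splitlines() if line]


class TestListInstalled(unittest.TestCase):
    @patch("subprocess.run")
    def test_parses_dpkg_output(self, mock_run):
        # The mocked CompletedProcess means dpkg never actually runs.
        mock_run.return_value = subprocess.CompletedProcess(
            args=[], returncode=0, stdout="wget\tinstall\ncurl\tinstall\n"
        )
        self.assertEqual(list_installed(), ["wget", "curl"])
```

The same pattern applied to `DependencyResolver._run_command` would keep `analyze_installation`/`analyze_removal` tests fast and deterministic in CI.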

Comment on lines +286 to +331
def test_cli_help(self):
"""Test CLI help output."""
import subprocess
result = subprocess.run(
["python3", "cortex/dependency_resolver.py", "--help"],
capture_output=True,
text=True,
cwd="/tmp/cortex"
)
self.assertEqual(result.returncode, 0)
self.assertIn("--tree", result.stdout)
self.assertIn("--analyze", result.stdout)
self.assertIn("--orphans", result.stdout)

def test_cli_tree(self):
"""Test CLI tree output."""
import subprocess
result = subprocess.run(
["python3", "cortex/dependency_resolver.py", "wget", "--tree"],
capture_output=True,
text=True,
cwd="/tmp/cortex"
)
self.assertEqual(result.returncode, 0)
self.assertIn("wget", result.stdout.lower())

def test_cli_analyze(self):
"""Test CLI analyze output."""
import subprocess
result = subprocess.run(
["python3", "cortex/dependency_resolver.py", "wget", "--analyze"],
capture_output=True,
text=True,
cwd="/tmp/cortex"
)
self.assertEqual(result.returncode, 0)

def test_cli_json(self):
"""Test CLI JSON output."""
import subprocess
result = subprocess.run(
["python3", "cortex/dependency_resolver.py", "wget", "--analyze", "--json"],
capture_output=True,
text=True,
cwd="/tmp/cortex"
)

⚠️ Potential issue | 🟠 Major

CLI tests assume repo is located at /tmp/cortex.

Hard-coding the working directory makes tests fail in most CI setups. Derive repo root from the test file location.

📁 Make cwd portable
+from pathlib import Path
@@
+REPO_ROOT = Path(__file__).resolve().parents[1]
@@
-            cwd="/tmp/cortex"
+            cwd=str(REPO_ROOT)
🤖 Prompt for AI Agents
In `@tests/test_dependency_resolver.py` around lines 286 - 331, Tests hard-code
cwd="/tmp/cortex" which breaks CI; change each subprocess.run call in
tests/test_dependency_resolver.py (e.g., in test_cli_help, test_cli_tree,
test_cli_analyze, test_cli_json) to compute the repo root relative to this test
file and pass that as cwd (use Path(__file__).resolve().parents[1] or
equivalent) and add the necessary import of pathlib.Path at the top or inside
the test functions so the cwd is derived from the test file location instead of
a fixed path.
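The portable-cwd fix can be factored into a small helper. `repo_root` assumes the test file lives at `<repo>/tests/`, as in this PR; `run_cli` is a hypothetical wrapper the tests could share:

```python
import subprocess
import sys
from pathlib import Path


def repo_root(test_file: str) -> Path:
    """Repo root derived from a test file's path, assuming tests live
    in <repo>/tests/."""
    return Path(test_file).resolve().parents[1]


def run_cli(*args: str) -> subprocess.CompletedProcess:
    """Invoke the resolver CLI from the derived repo root instead of a
    hard-coded /tmp/cortex."""
    return subprocess.run(
        [sys.executable, "cortex/dependency_resolver.py", *args],
        capture_output=True, text=True, cwd=repo_root(__file__),
    )
```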

@SolariSystems SolariSystems changed the title Dependency Conflict Resolver with Visual Tree feat(resolver): add dependency conflict resolver with visual tree Jan 18, 2026
@SolariSystems (Author)

recheck

@Anshgrover23 (Collaborator) left a comment

Follow the contributing.md guidelines (i.e., add a before/after video in the PR description, an AI usage note, a documentation file in the code, tests passing status, etc.). Closing this one; kindly create a new PR following all the guidelines.


Development

Successfully merging this pull request may close these issues.

Dependency Conflict Resolver with Visual Tree
