45 changes: 38 additions & 7 deletions src/linker.rs
@@ -6,9 +6,10 @@
 use anyhow::{Context, Result};
 use colored::Colorize;
 use std::cell::RefCell;
-use std::collections::HashMap;
+use std::collections::{HashMap, HashSet};
 use std::fs;
 use std::path::{Path, PathBuf};
+use std::rc::Rc;
 use walkdir::WalkDir;
 
 use crate::config::{Config, SyncType, TargetConfig};
@@ -52,6 +53,11 @@ pub struct Linker {
     project_root: PathBuf,
     source_dir: PathBuf,
     path_cache: RefCell<HashMap<PathBuf, PathBuf>>,
+    /// Cache of compressed content for AGENTS.md files to avoid redundant processing.
+    /// Uses Rc to share the same heap-allocated string across multiple agents.
+    compression_cache: RefCell<HashMap<PathBuf, Rc<String>>>,
+    /// Tracks which compressed files have already been ensured to be up-to-date.
+    ensured_outputs: RefCell<HashSet<PathBuf>>,
Comment on lines +56 to +60
Contributor
🧹 Nitpick | πŸ”΅ Trivial

🧩 Analysis chain

🏁 Scripts executed (repository: dallay/agentsync):

# First, let's check if Linker has explicit Send/Sync bounds or implementations
rg -n --type=rust 'impl.*Send.*Linker|impl.*Sync.*Linker|Linker.*Send|Linker.*Sync' src/

# Check for usage patterns that would require Linker to be Send
rg -n --type=rust 'Arc<Linker>|thread::spawn|Linker.*crossbeam|Linker.*tokio|Linker.*async' src/ tests/

# Examine the Linker struct definition itself
rg -A 20 'pub struct Linker' src/linker.rs | head -60

# Check if there are any tests asserting Send/Sync trait bounds on Linker
rg -n --type=rust 'assert_.*Send|assert_.*Sync|fn assert_send|fn assert_sync' tests/ src/

# Check all fields of Linker struct to understand pre-PR Send status
rg -A 10 'pub struct Linker' src/linker.rs | head -30

# Check how Linker is used in main.rs - does it get moved between threads?
cat -n src/main.rs | head -80

# Look for any documentation or comments about Send/Sync
rg -n --type=rust 'Send|Sync|thread|Arc<' src/linker.rs | head -20

# Check if there are any public async functions that take Linker
rg -n --type=rust 'async fn.*Linker|pub.*async.*fn' src/linker.rs | head -20

# Check if Config is Send (to verify pre-PR Linker was indeed Send)
rg -n --type=rust 'pub struct Config' src/config.rs | head -5

# Check actual usage of Linker in main.rs - is it ever Arc'd or spawned?
cat -n src/main.rs | tail -150

# Look for any library documentation or CHANGELOG mentioning Send/thread-safety
fd -e md -e toml -e txt | xargs grep -l -i 'send\|thread\|concurrent\|async' 2>/dev/null | head -10

# Check if Rc/Arc are used elsewhere in the codebase to understand project's threading posture
rg -n 'use std::rc::Rc|use std::sync::Arc' src/

# Final check: Does Config struct export Send guarantee or any trait bounds?
rg -A 5 'pub struct Config' src/config.rs | head -20

# Check if there's any documentation in lib.rs about Linker's thread safety
rg -B 5 -A 5 'pub use linker::Linker' src/lib.rs

# Verify: are there any comments in the PR or commit history suggesting intentional single-threaded design?
git log --oneline --all | grep -i 'rc\|cache\|compress' | head -10

# Check the commit message for the Rc introduction to understand intent
git show e3cbd79 --stat | head -30

# Check if there's any public documentation or README mentioning thread-safety
cat README.md 2>/dev/null | head -50

# Verify: does any test or code explicitly construct Linker in a way that would break if it's not Send?
rg -n --type=rust 'fn.*Linker.*\->' src/ tests/


Document the intentional single-threaded design of Linker

The compression_cache: RefCell<HashMap<PathBuf, Rc<String>>> field uses Rc, which does not implement Send, making Linker: !Send. While this is technically a narrowing of the struct's trait implementations, it reflects a deliberate performance choice documented in the commit message: to reduce memory usage and CPU time during compression by avoiding deep string clones across agents.

Since this is a single-threaded CLI tool with no async or threading patterns, and Linker never crosses a thread boundary, the impact is minimal. However, to prevent confusion for future library users, add a comment documenting the intentional single-threaded design:

/// Intentionally uses Rc for single-run compression caching.
/// Linker is not Send; designed for single-threaded synchronization operations.
compression_cache: RefCell<HashMap<PathBuf, Rc<String>>>,

Using Arc would add unnecessary atomic overhead without benefit in this context.
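The !Send consequence is easy to see with a compile-time probe. The sketch below is illustrative only (the CompressionCache struct and assert_send helper are hypothetical, not part of the repository); it shows both the restriction and what Rc buys in return:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::path::PathBuf;
use std::rc::Rc;

// Hypothetical stand-in for the cache field; not the repository's type.
#[allow(dead_code)]
struct CompressionCache {
    inner: RefCell<HashMap<PathBuf, Rc<String>>>,
}

fn assert_send<T: Send>() {}

fn main() {
    // Uncommenting the next line fails to compile: Rc<String> cannot be
    // sent between threads, so any struct containing it is !Send.
    // assert_send::<CompressionCache>();
    assert_send::<String>(); // a plain String is Send; the Rc wrapper narrows it

    // What Rc buys in exchange: clones are pointer copies, not deep copies.
    let compressed = Rc::new(String::from("compressed AGENTS.md content"));
    let shared = Rc::clone(&compressed);
    assert!(Rc::ptr_eq(&compressed, &shared));
    assert_eq!(Rc::strong_count(&compressed), 2);
}
```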

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/linker.rs` around lines 56 - 60, add a short explanatory comment above the compression_cache field documenting that using Rc inside RefCell is intentional: it keeps Linker single-threaded and avoids Arc's atomic overhead. Mention that Rc<String> makes Linker non-Send, and that this is acceptable because the CLI is single-threaded and Linker never crosses a thread boundary in practice. Apply the comment to the compression_cache: RefCell<HashMap<PathBuf, Rc<String>>> declaration near ensured_outputs to make the intent explicit.

}

impl Linker {
@@ -66,6 +72,8 @@ impl Linker {
             project_root,
             source_dir,
             path_cache: RefCell::new(HashMap::new()),
+            compression_cache: RefCell::new(HashMap::new()),
+            ensured_outputs: RefCell::new(HashSet::new()),
         }
     }

@@ -242,13 +250,32 @@ impl Linker {
     }
 
     fn write_compressed_agents_md(&self, source: &Path, dest: &Path) -> Result<()> {
-        let content = fs::read_to_string(source)
-            .with_context(|| format!("Failed to read AGENTS.md: {}", source.display()))?;
-        let compressed = compress_agents_md_content(&content);
+        // 1. Skip if this specific destination has already been ensured to be up-to-date.
+        if self.ensured_outputs.borrow().contains(dest) {
+            return Ok(());
+        }
Comment on lines 252 to +256
Contributor
⚠️ Potential issue | 🟑 Minor

Stale ensured_outputs / compression_cache if sync() is called more than once on the same Linker

Neither cache is cleared between sync() invocations. If a caller reuses a Linker instance after a source file changes (e.g., in a long-running process or a test harness), ensured_outputs will short-circuit write_compressed_agents_md (Line 254) and compression_cache will return the now-stale compressed content, silently leaving the destination file out of date.

The intended single-run lifetime should be either documented on the struct or enforced by clearing the caches at the top of sync():

πŸ›‘οΈ Option A β€” Clear caches at the start of each `sync()` call
 pub fn sync(&self, options: &SyncOptions) -> Result<SyncResult> {
+    self.compression_cache.borrow_mut().clear();
+    self.ensured_outputs.borrow_mut().clear();
     let mut result = SyncResult::default();
πŸ“„ Option B β€” Document the single-run contract on the struct
+/// **Note:** The internal `compression_cache` and `ensured_outputs` caches are
+/// populated during a single `sync()` run and are never cleared. Reusing a
+/// `Linker` instance after source files change will produce stale results.
 pub struct Linker {
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/linker.rs` around lines 252 - 256, the caches ensured_outputs and compression_cache on the Linker persist across multiple sync() calls, so write_compressed_agents_md and the compression lookups can return stale results. To fix, clear both caches at the start of Linker::sync() (e.g., borrow_mut().clear() on ensured_outputs and compression_cache) so each sync run recomputes its outputs. Alternatively, if you choose not to clear them, add clear documentation on the Linker struct and the sync() method stating the single-run lifetime contract.
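Option A can be exercised end-to-end in a few lines. This is a minimal sketch, not the repository's code: the simplified Linker and its sync signature are hypothetical, and compression is faked with to_uppercase. It shows that clearing per-run state makes a reused instance pick up changed sources:

```rust
use std::cell::RefCell;
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;
use std::rc::Rc;

struct Linker {
    compression_cache: RefCell<HashMap<PathBuf, Rc<String>>>,
    ensured_outputs: RefCell<HashSet<PathBuf>>,
}

impl Linker {
    // Hypothetical, simplified stand-in for the real sync(): takes the source
    // content directly instead of reading it from disk.
    fn sync(&self, source: &PathBuf, content: &str) -> Rc<String> {
        // Option A: drop all per-run state before doing any work.
        self.compression_cache.borrow_mut().clear();
        self.ensured_outputs.borrow_mut().clear();

        let rc = Rc::new(content.to_uppercase()); // stand-in for compression
        self.compression_cache
            .borrow_mut()
            .insert(source.clone(), Rc::clone(&rc));
        rc
    }
}

fn main() {
    let linker = Linker {
        compression_cache: RefCell::new(HashMap::new()),
        ensured_outputs: RefCell::new(HashSet::new()),
    };
    let path = PathBuf::from("AGENTS.md");
    let first = linker.sync(&path, "old");
    let second = linker.sync(&path, "new"); // source changed between runs
    assert_eq!(first.as_str(), "OLD");
    assert_eq!(second.as_str(), "NEW"); // not stale: the cache was cleared
}
```

Without the two clear() calls, a long-running caller would get "OLD" back on the second run, which is exactly the staleness described above.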


+        // 2. Get or compute the compressed content for this source.
+        // This avoids redundant re-reading and re-compressing of the same AGENTS.md.
+        let compressed = {
+            let mut cache = self.compression_cache.borrow_mut();
+            if let Some(cached) = cache.get(source) {
+                cached.clone()
+            } else {
+                let content = fs::read_to_string(source)
+                    .with_context(|| format!("Failed to read AGENTS.md: {}", source.display()))?;
+                let compressed = compress_agents_md_content(&content);
+                let rc = Rc::new(compressed);
+                cache.insert(source.to_path_buf(), rc.clone());
+                rc
+            }
+        };
+
+        // 3. Check if existing file already matches the compressed content to avoid unnecessary writes.
         if let Ok(existing) = fs::read_to_string(dest)
-            && existing == compressed
+            && existing == *compressed
         {
+            self.ensured_outputs.borrow_mut().insert(dest.to_path_buf());
             return Ok(());
         }

@@ -259,8 +286,12 @@ impl Linker {
                 .with_context(|| format!("Failed to create directory: {}", parent.display()))?;
         }
 
-        fs::write(dest, compressed)
-            .with_context(|| format!("Failed to write compressed AGENTS.md: {}", dest.display()))
+        fs::write(dest, &*compressed)
+            .with_context(|| format!("Failed to write compressed AGENTS.md: {}", dest.display()))?;
+
+        // 4. Mark this destination as ensured.
+        self.ensured_outputs.borrow_mut().insert(dest.to_path_buf());
+        Ok(())
     }

/// Create a single symlink
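As an aside, the get-or-compute pattern in write_compressed_agents_md could also be written with HashMap's entry API; the catch is that or_insert_with cannot propagate the fs::read_to_string error out of its closure, which is likely why the PR uses an explicit get/insert pair. A self-contained sketch under that assumption (the compress function and in-memory read_source closure are stand-ins, not the repository's code):

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::rc::Rc;

// Stand-in for compress_agents_md_content: collapse runs of whitespace.
fn compress(content: &str) -> String {
    content.split_whitespace().collect::<Vec<_>>().join(" ")
}

fn main() {
    let mut cache: HashMap<PathBuf, Rc<String>> = HashMap::new();
    let source = PathBuf::from("AGENTS.md");
    // Stand-in for fs::read_to_string; note it is infallible, which is what
    // makes or_insert_with viable here but not in the real, fallible code.
    let read_source = || "hello   world".to_string();

    let compressed = Rc::clone(
        cache
            .entry(source.clone())
            .or_insert_with(|| Rc::new(compress(&read_source()))),
    );
    assert_eq!(compressed.as_str(), "hello world");

    // A second lookup hits the cache: the closure never runs again and the
    // returned Rc points at the same allocation.
    let again = Rc::clone(cache.entry(source).or_insert_with(|| unreachable!()));
    assert!(Rc::ptr_eq(&compressed, &again));
}
```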