
V2.5 #51

Merged

sopaco merged 21 commits into main from v2.5 on Mar 11, 2026
Conversation

sopaco (Owner) commented Mar 11, 2026

No description provided.

sopaco and others added 21 commits March 8, 2026 20:49
The automation configuration has been moved from TOML files to code-level configuration through the AutomationConfig struct. Documentation was updated to reflect the new configuration approach, and parameters were added for max_concurrent_llm_tasks and generate_layers_every_n_messages.
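A code-level config of this shape might look as follows. The struct and parameter names come from the commit; the field types, defaults, and doc comments are illustrative assumptions, not the actual definitions.

```rust
/// Code-level automation settings, replacing the old TOML-based config.
/// Field names are from the commit; types and defaults are illustrative.
#[derive(Debug, Clone)]
pub struct AutomationConfig {
    /// Upper bound on LLM calls running concurrently.
    pub max_concurrent_llm_tasks: usize,
    /// Regenerate memory layers after this many new messages.
    pub generate_layers_every_n_messages: usize,
}

impl Default for AutomationConfig {
    fn default() -> Self {
        Self {
            max_concurrent_llm_tasks: 4,
            generate_layers_every_n_messages: 10,
        }
    }
}

fn main() {
    let cfg = AutomationConfig::default();
    println!("{} {}", cfg.max_concurrent_llm_tasks, cfg.generate_layers_every_n_messages);
}
```

Moving the config into code like this trades runtime editability for compile-time checking of every field.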
The `log` crate has been removed from core modules and all logging
calls have been updated to use the `tracing` crate instead, with
appropriate log level adjustments.
Supports workspace-managed versions in Cargo.toml and adds an --auto-confirm flag. Refactors version checking to verify that specific versions are published.
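Workspace-managed versions use Cargo's standard package-inheritance syntax; a minimal sketch (the crate name and version number below are placeholders, not the project's actual values):

```toml
# Root Cargo.toml: single source of truth for the version.
[workspace.package]
version = "2.5.0"          # illustrative version number

# A member crate's Cargo.toml: inherit instead of hard-coding.
[package]
name = "example-member"    # hypothetical crate name
version.workspace = true
```

With this in place, the release tooling only has to bump one number.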
Switch from using curl commands to native Node.js HTTPS module for
checking crate availability on crates.io. This removes the dependency
on curl and improves cross-platform support, especially on Windows.
capabilities

The changes introduce entity preservation in abstract generation by adding a known_entities parameter to the generate_with_llm method. This ensures named entities are retained verbatim in generated abstracts. The abstract generation prompt was updated to handle entity preservation, and all callers were updated to pass an empty entity list by default.
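One cheap way to enforce "retained verbatim" is a post-generation check against the known-entity list. This is a sketch under that assumption; the helper name missing_entities is hypothetical and not from the codebase.

```rust
/// Hypothetical post-generation guard: list every known entity that
/// did NOT survive verbatim in the LLM-produced abstract.
fn missing_entities<'a>(abstract_text: &str, known_entities: &'a [&'a str]) -> Vec<&'a str> {
    known_entities
        .iter()
        .copied()
        .filter(|entity| !abstract_text.contains(entity))
        .collect()
}

fn main() {
    let known = ["MemoryEventCoordinator", "SHA-256"];
    let draft = "The coordinator hashes content with SHA-256.";
    // "MemoryEventCoordinator" was paraphrased away, so it is flagged.
    assert_eq!(missing_entities(draft, &known), vec!["MemoryEventCoordinator"]);
    println!("ok");
}
```

A caller could retry generation, or append the flagged entities, whenever the returned list is non-empty.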

Additionally, the search functionality was enhanced with intent-based query analysis that supports multiple language types and query patterns. A new unified query analysis prompt was added that returns rewritten queries, keywords, entities, intent types, and time constraints in a single LLM call.
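The single-call result could be deserialized into a struct of roughly this shape. All field names, variants, and types below are assumptions inferred from the commit summary, not the actual API.

```rust
/// Assumed shape of the unified query-analysis result.
#[derive(Debug, Default)]
pub struct QueryAnalysis {
    pub rewritten_query: String,
    pub keywords: Vec<String>,
    pub entities: Vec<String>,
    pub intent: IntentType,
    /// Optional (start, end) bound parsed from the query, e.g. "last week".
    pub time_constraint: Option<(String, String)>,
}

#[derive(Debug, Default, PartialEq)]
pub enum IntentType {
    #[default]
    KeywordLookup,
    SemanticSearch,
    TimeRange,
}

fn main() {
    let analysis = QueryAnalysis {
        rewritten_query: "rust tracing setup".into(),
        keywords: vec!["tracing".into()],
        ..Default::default()
    };
    assert_eq!(analysis.intent, IntentType::KeywordLookup);
    println!("{}", analysis.rewritten_query);
}
```

Returning everything in one structured response is what collapses several LLM round-trips into one.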

The embedding client now includes built-in rate limiting and caching to improve API usage efficiency. Several configuration structs were updated to include cache and rate-limiting parameters, and the embedding cache module was removed in favor of the new built-in caching.
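A minimal std-only sketch of a client carrying both concerns, assuming a simple in-memory cache and a minimum-interval rate limiter (the real client's names, blocking strategy, and API call are all assumptions):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Sketch of an embedding client with built-in cache + rate limiter.
struct EmbeddingClient {
    cache: HashMap<String, Vec<f32>>,
    min_interval: Duration,
    last_call: Option<Instant>,
}

impl EmbeddingClient {
    fn new(min_interval: Duration) -> Self {
        Self { cache: HashMap::new(), min_interval, last_call: None }
    }

    fn embed(&mut self, text: &str) -> Vec<f32> {
        if let Some(hit) = self.cache.get(text) {
            return hit.clone(); // cache hit: no API call, no rate-limit wait
        }
        if let Some(last) = self.last_call {
            let elapsed = last.elapsed();
            if elapsed < self.min_interval {
                std::thread::sleep(self.min_interval - elapsed);
            }
        }
        self.last_call = Some(Instant::now());
        let vector = vec![text.len() as f32]; // stand-in for the real API call
        self.cache.insert(text.to_string(), vector.clone());
        vector
    }
}

fn main() {
    let mut client = EmbeddingClient::new(Duration::from_millis(10));
    let first = client.embed("hello");
    let second = client.embed("hello"); // served from cache, no wait
    assert_eq!(first, second);
    println!("ok");
}
```

Folding both into the client is what lets the separate embedding cache module be deleted: every caller gets caching for free.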
Replace the old asynchronous AutoExtractor with MemoryEventCoordinator
for synchronous memory extraction during session close. This provides
better guarantees that memory processing completes before the CLI exits.
Introduce MemoryEventCoordinator for unified event processing and
ensure stable content hashing with SHA-256. Enhance TARS UI with
better error notifications.
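Stable content hashing usually needs a normalization step so the same logical text always yields the same digest. The sketch below shows only that step, std-only; the commit's actual hashing uses SHA-256, which would come from a crate such as sha2 (an assumed dependency, not shown).

```rust
/// Normalize content before hashing so equivalent text hashes equally.
/// The normalized string would then be fed to SHA-256 (e.g. via the
/// `sha2` crate) to produce the stable digest the commit describes.
fn normalize_for_hash(content: &str) -> String {
    content
        .replace("\r\n", "\n")      // unify Windows/Unix line endings
        .lines()
        .map(str::trim_end)         // drop trailing whitespace per line
        .collect::<Vec<_>>()
        .join("\n")
        .trim()                     // drop leading/trailing blank runs
        .to_string()
}

fn main() {
    let windows = "memory entry\r\nline two  \r\n";
    let unix = "memory entry\nline two\n";
    assert_eq!(normalize_for_hash(windows), normalize_for_hash(unix));
    println!("ok");
}
```

Without normalization, a CRLF checkout or an editor's trailing space would change the hash and spuriously re-trigger event processing.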
language rules for memory extraction

Ensure L0/L1 files are generated synchronously after memory extraction in on_session_closed to prevent missed updates on process exit. Also add critical language rules to preserve original language and technical terms in memory extraction.
Migrate from env_logger to tracing for better structured logging
support. The new setup uses tracing-subscriber with a custom
FileLayer that writes logs to both file and memory buffer,
while maintaining backward compatibility through tracing-log
bridge for existing log:: macros.
Skip timeline layer generation when L0/L1 files already exist to
prevent redundant operations. Also optimize vector database indexing
to only process current session files instead of all historical data.
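The skip-if-exists guard can be as small as a file-existence check; a sketch with hypothetical file names (the real L0/L1 paths and the surrounding call site are not shown in the commit):

```rust
use std::path::Path;

/// Regenerate timeline layers only when the L0/L1 files are not
/// already on disk, avoiding the redundant work the commit describes.
fn needs_layer_generation(l0: &Path, l1: &Path) -> bool {
    !(l0.exists() && l1.exists())
}

fn main() {
    let dir = std::env::temp_dir();
    let l0 = dir.join("demo_l0.md"); // hypothetical layer file names
    let l1 = dir.join("demo_l1.md");
    let _ = std::fs::remove_file(&l0);
    let _ = std::fs::remove_file(&l1);
    assert!(needs_layer_generation(&l0, &l1)); // missing files: generate
    std::fs::write(&l0, "l0").unwrap();
    std::fs::write(&l1, "l1").unwrap();
    assert!(!needs_layer_generation(&l0, &l1)); // both exist: skip
    let _ = std::fs::remove_file(&l0);
    let _ = std::fs::remove_file(&l1);
    println!("ok");
}
```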
Prevent duplicate layer cascade updates when session closure triggers
update_all_layers simultaneously with background event processing of
MemoryCreated/MemoryUpdated events.
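One common shape for this kind of dedup is a set of in-flight session ids that both paths consult before starting work. The type and method names below are illustrative, not the commit's actual implementation.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

/// Guard preventing the session-close path and the background event
/// path from cascading the same layer update twice.
struct CascadeGuard {
    in_flight: Mutex<HashSet<String>>,
}

impl CascadeGuard {
    fn new() -> Self {
        Self { in_flight: Mutex::new(HashSet::new()) }
    }

    /// Returns true only for the first caller; duplicates are rejected.
    fn try_begin(&self, session_id: &str) -> bool {
        self.in_flight.lock().unwrap().insert(session_id.to_string())
    }

    fn finish(&self, session_id: &str) {
        self.in_flight.lock().unwrap().remove(session_id);
    }
}

fn main() {
    let guard = CascadeGuard::new();
    assert!(guard.try_begin("s1"));  // session-close path wins
    assert!(!guard.try_begin("s1")); // concurrent background event skips
    guard.finish("s1");
    assert!(guard.try_begin("s1"));  // next update is allowed again
    println!("ok");
}
```

HashSet::insert returning false for an existing key gives the check-and-mark atomically under one lock, so there is no window between "is it running?" and "mark it running".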
- Change tool definitions from inputSchema/handler to parameters/execute
- Update PluginAPI interface to match OpenClaw's pluginConfig
- Make config properties optional with default values
- Remove unused entry point and tools array from plugin manifest
@sopaco sopaco merged commit 377597a into main Mar 11, 2026
1 check failed
