Changes from all commits
120 commits
ec93173
memory plans
leshy Mar 4, 2026
5c23e89
spec iteration
leshy Mar 5, 2026
dc0f946
spec iteration
leshy Mar 5, 2026
69540aa
query objects spec
leshy Mar 5, 2026
cc5939b
mem3 iteration
leshy Mar 5, 2026
a5d3f3c
live/passive transforms
leshy Mar 5, 2026
c87d955
initial pass on memory
leshy Mar 5, 2026
76fbea1
transform materialize
leshy Mar 5, 2026
ef7fe1d
sqlite schema: decomposed pose columns, separate payload table, R*Tre…
leshy Mar 5, 2026
936e2ce
JpegCodec for Image storage (43x smaller), ingest helpers, QualityWin…
leshy Mar 5, 2026
852a7e9
Wire parent_id lineage through transforms for automatic source data p…
leshy Mar 5, 2026
d44aaaf
Wire parent_stream into _streams registry, add tasks.md gap analysis
leshy Mar 5, 2026
fa31471
Implement project_to() for cross-stream lineage projection
leshy Mar 5, 2026
41a06f0
Make search_embedding auto-project to source stream
leshy Mar 5, 2026
bce7586
CaptionTransformer + Florence2 batch fix
leshy Mar 5, 2026
8ad469e
ObservationSet: fetch() returns list-like + stream-like result set
leshy Mar 5, 2026
2163179
search_embedding accepts str/image with auto-embedding
leshy Mar 5, 2026
1bdd496
Add sqlite_vec to mypy ignore list (no type stubs available)
leshy Mar 5, 2026
5130511
Fix mypy + pytest errors across memory and memory_old modules
leshy Mar 5, 2026
219341f
Improve similarity heatmap with normalized values and distance spread
leshy Mar 5, 2026
e7f3fcd
Remove plans/ from tracking (kept locally)
leshy Mar 5, 2026
92ee380
Address Greptile review: SQL injection guards, distance ordering, stubs
leshy Mar 5, 2026
de0efb9
Add memory Rerun visualization, fix stream iteration, update docs
leshy Mar 5, 2026
314b4d3
Rename run_e2e_export → test_e2e_export, delete viz.py + run_viz_demo…
leshy Mar 5, 2026
0b2983c
added docs
leshy Mar 5, 2026
ec499cd
removed tasks.md
leshy Mar 5, 2026
e344498
Optimize memory pipeline: TurboJPEG codec, sharpness downsample, thre…
leshy Mar 5, 2026
bf0b79a
text embedding transformer
leshy Mar 6, 2026
b4f9f96
cleanup
leshy Mar 6, 2026
19a8db3
Use Codec protocol type instead of concrete union, remove dead _pose_…
leshy Mar 6, 2026
2cbf162
correct db sessions
leshy Mar 6, 2026
74df2de
record module cleanup
leshy Mar 6, 2026
e039f90
memory elements are now Resource, simplification of memory Module
leshy Mar 7, 2026
bd3a572
Rename stream.appended to stream.observable()/subscribe()
leshy Mar 7, 2026
24a13de
repr, embedding fetch simplification
leshy Mar 7, 2026
d5db010
Make Observation generic: Observation[T] with full type safety
leshy Mar 7, 2026
2d0bedc
Simplify Stream._clone with copy.copy, remove subclass overrides
leshy Mar 7, 2026
ce9f5e8
loader refactor
leshy Mar 7, 2026
33ad5e1
Extract backend.load_data(), add stream.load_data(obs) public API
leshy Mar 7, 2026
1b67959
Add rich colored __str__ to Stream and Filter types
leshy Mar 7, 2026
2f66bc0
Unify __repr__ and __str__ via _rich_text().plain, remove duplicate r…
leshy Mar 7, 2026
e9078a9
renamed types to type
leshy Mar 7, 2026
2b79919
one -> first, time range
leshy Mar 7, 2026
dfd06c4
getitem for streams
leshy Mar 8, 2026
4734acf
readme sketch
leshy Mar 8, 2026
d6e5efc
bigoffice db in lfs, sqlite accepts Path
leshy Mar 8, 2026
31cf8a8
projection transformers
leshy Mar 8, 2026
04337db
stream info removed, stream accessor helper, TS unique per stream
leshy Mar 8, 2026
6fc6e8d
Add colored summary() output and model= param to search_embedding
leshy Mar 8, 2026
a6a06e1
stream delete
leshy Mar 8, 2026
b9af997
florence model detail settings and prefix filter
leshy Mar 8, 2026
1e42408
extracted formatting to a separate file
leshy Mar 8, 2026
0c09d49
extract rich text rendering to formatting.py, add Stream.name, fix st…
leshy Mar 8, 2026
a954f79
matching based on streams
leshy Mar 8, 2026
ab48171
projection experiments
leshy Mar 8, 2026
a80bbb9
projection bugfix
leshy Mar 8, 2026
c7522d3
observationset typing fix
leshy Mar 8, 2026
decd090
detections, cleanup
leshy Mar 8, 2026
f51923d
mini adjustments
leshy Mar 8, 2026
9edcbef
transform chaining
leshy Mar 9, 2026
24c708d
memory2: lazy pull-based stream system
leshy Mar 9, 2026
363f094
memory2: fix typing — zero type:ignore, proper generics
leshy Mar 9, 2026
90a636a
memory2: fix .live() on transform streams — reject with clear error
leshy Mar 9, 2026
2f029e4
memory2: replace custom Disposable with rxpy DisposableBase
leshy Mar 9, 2026
4061b8f
memory2: extract filters and StreamQuery from type.py into filter.py
leshy Mar 9, 2026
a44d870
memory2: store transform on Stream node, not as source tuple
leshy Mar 9, 2026
09ada62
memory2: move live logic from Stream into Backend via StreamQuery
leshy Mar 9, 2026
9ef10ab
memory2: extract impl/ layer with MemoryStore and SqliteStore scaffold
leshy Mar 10, 2026
87b94ad
memory2: add buffer.py docstring and extract buffer tests to test_buf…
leshy Mar 11, 2026
8070379
memory2: add Codec protocol and grid test for store implementations
leshy Mar 11, 2026
dde8017
memory2: add codec implementations (pickle, lcm, jpeg) with grid tests
leshy Mar 11, 2026
7ce2364
resource: add context manager to Resource; make Store/Session Resources
leshy Mar 11, 2026
d5dde81
resource: add CompositeResource with owned disposables
leshy Mar 11, 2026
9d37f1d
memory2: add BlobStore ABC with File and SQLite implementations
leshy Mar 11, 2026
a83d7a2
memory2: move blobstore.md into blobstore/ as module readme
leshy Mar 11, 2026
b6c9543
memory2: add embedding layer, vector/text search, live safety guards
leshy Mar 11, 2026
94aa659
memory2: add documentation for streaming model, codecs, and backends
leshy Mar 11, 2026
1dc68b7
query application refactor
leshy Mar 11, 2026
4d31779
memory2: replace LiveBackend with pluggable LiveChannel, add Configur…
leshy Mar 11, 2026
690c5ec
memory2: make backends Configurable, add session→stream config propag…
leshy Mar 11, 2026
f73d8d4
memory2: wire VectorStore into ListBackend, add MemoryVectorStore
leshy Mar 11, 2026
c655739
memory2: wire BlobStore into ListBackend with lazy/eager blob loading
leshy Mar 11, 2026
5b565db
memory2: allow bare generator functions as stream transforms
leshy Mar 11, 2026
da676f6
memory2: update docs to reflect current API
leshy Mar 11, 2026
a0c9c70
memory2: implement full SqliteBackend with vec0 vector search, JSONB …
leshy Mar 11, 2026
0b09404
memory2: stream rows via cursor pagination instead of fetchall()
leshy Mar 11, 2026
df076ce
memory2: add lazy/eager blob tests and spy store delegation grid tests
leshy Mar 11, 2026
bcb98bd
memory2: add R*Tree spatial index for NearFilter SQL pushdown, add e2…
leshy Mar 11, 2026
3c01a6e
auto index tags
leshy Mar 11, 2026
f368297
memory/stream str, and observables
leshy Mar 12, 2026
f89ad3f
live stream is a resource
leshy Mar 12, 2026
a32b44d
readme work
leshy Mar 12, 2026
db23275
streams and intro
leshy Mar 12, 2026
9b14894
renamed readme to arch
leshy Mar 12, 2026
67a6a83
Rename memory2 → memory, fix all imports and type errors
leshy Mar 12, 2026
f35cfe5
Merge remote-tracking branch 'origin/dev' into feat/memory/embedding
leshy Mar 12, 2026
1a6c8a1
Revert memory rename: restore memory/ from dev, new code lives in mem…
leshy Mar 12, 2026
2076ba4
Remove stray old memory module references
leshy Mar 12, 2026
05c091d
Remove LFS test databases from PR
leshy Mar 12, 2026
0570bc3
Address review findings: SQL injection guards, type fixes, cleanup
leshy Mar 12, 2026
2dcfcd9
Revert detection type changes: keep image as required field
leshy Mar 12, 2026
e88e0e5
add libturbojpeg to docker image
leshy Mar 12, 2026
f29f766
Make turbojpeg import lazy so tests skip gracefully in CI
leshy Mar 12, 2026
c56e283
Give each SqliteBackend its own connection for WAL-mode concurrency
leshy Mar 12, 2026
93d6afe
Block search_text on SqliteBackend to prevent full table scans
leshy Mar 12, 2026
317562c
Catch RuntimeError from missing turbojpeg native library in codec tests
leshy Mar 12, 2026
5a418c6
pr comments
leshy Mar 12, 2026
99c3f3e
occupancy change undo
leshy Mar 13, 2026
1103e3d
tests cleanup
leshy Mar 13, 2026
32d75d8
compression codec added, new bigoffice db uploaded
leshy Mar 13, 2026
b7e25a9
correct jpeg codec
leshy Mar 13, 2026
c2e91d8
PR comments cleanup
leshy Mar 13, 2026
8be106a
blobstore stream -> stream_name
leshy Mar 13, 2026
1e28b50
vectorstore stream -> stream_name
leshy Mar 13, 2026
6f3ef51
resource typing fixes
leshy Mar 13, 2026
30959af
move type definitions into dimos/memory2/type/ subpackage
leshy Mar 13, 2026
367fa4e
lz4 codec included, utils/ cleanup
leshy Mar 13, 2026
a0becc6
Merge remote-tracking branch 'origin/dev' into feat/memory2
leshy Mar 13, 2026
02a2332
migrated stores to a new config system
leshy Mar 13, 2026
b3e7236
config fix
leshy Mar 13, 2026
3 changes: 3 additions & 0 deletions data/.lfs/go2_bigoffice.db.tar.gz
Git LFS file not shown
45 changes: 43 additions & 2 deletions dimos/core/resource.py
@@ -12,10 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from abc import ABC, abstractmethod
from __future__ import annotations

from abc import abstractmethod
from typing import TYPE_CHECKING, Self

class Resource(ABC):
if TYPE_CHECKING:
from types import TracebackType

from reactivex.abc import DisposableBase
from reactivex.disposable import CompositeDisposable


class Resource(DisposableBase):
@abstractmethod
def start(self) -> None: ...

@@ -43,3 +52,35 @@ def dispose(self) -> None:

"""
self.stop()

def __enter__(self) -> Self:
self.start()
return self

def __exit__(
self,
exctype: type[BaseException] | None,
excinst: BaseException | None,
exctb: TracebackType | None,
) -> None:
self.stop()


class CompositeResource(Resource):
"""Resource that owns child disposables, disposed on stop()."""

_disposables: CompositeDisposable

def __init__(self) -> None:
self._disposables = CompositeDisposable()

def register_disposables(self, *disposables: DisposableBase) -> None:
"""Register child disposables to be disposed when this resource stops."""
for d in disposables:
self._disposables.add(d)

def start(self) -> None:
pass

def stop(self) -> None:
self._disposables.dispose()
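
The diff above gives `Resource` a context-manager protocol and adds `CompositeResource`. A minimal self-contained sketch of the pattern (the `Camera` class is hypothetical, and `Resource` is reduced to the contract shown in the diff, without the reactivex base):

```python
class Resource:
    """Sketch of the Resource contract from the diff above."""

    def start(self) -> None: ...
    def stop(self) -> None: ...

    # Context-manager protocol added in this PR: `with r:` starts/stops it.
    def __enter__(self):
        self.start()
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        self.stop()


class Camera(Resource):
    """Hypothetical resource used only to illustrate the lifecycle."""

    def __init__(self) -> None:
        self.running = False

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False


with Camera() as cam:
    active = cam.running   # True inside the block
after = cam.running        # False once the block exits, even on exceptions
```

The win over explicit `start()`/`stop()` calls is that `stop()` runs even when the body raises, which is exactly what `CompositeResource` relies on to dispose its children.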
14 changes: 13 additions & 1 deletion dimos/core/worker.py
@@ -291,7 +291,19 @@ def _suppress_console_output() -> None:
]


def _worker_entrypoint(conn: Connection, worker_id: int) -> None:
def _worker_entrypoint(
conn: Connection,
worker_id: int,
) -> None:
# Limit OpenCV internal threads to avoid idle thread contention.
# Modules that need parallel cv2 ops can call cv2.setNumThreads() in start().
try:
import cv2

cv2.setNumThreads(2)
except ImportError:
pass

Contributor:

I don't think opencv stuff belongs in worker.py. Maybe add a bootstrap_worker function in a different file and call it here.

@leshy (Contributor Author), Mar 13, 2026:

Yeah, me neither. This was actually consuming a bunch of resources, so I quickly prefixed DimOS startup with this setting, but it obviously doesn't belong here. Should we have some "general lib config" file that's preloaded for all DimOS runs? Where should this be run from?

Contributor:

I think just configuring this in a different file should be sufficient.
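
The reviewer's suggestion could look like the hedged sketch below: a hypothetical `bootstrap_worker()` in a separate module that workers call on startup. Only `cv2` and the thread count come from the diff; the function name, signature, and return value are assumptions:

```python
def bootstrap_worker(cv2_threads: int = 2) -> bool:
    """Apply process-wide library configuration for a worker process.

    Returns True if OpenCV was found and configured, False otherwise.
    Keeping this out of worker.py puts the policy in one place.
    """
    try:
        import cv2  # optional dependency; workers without it just skip
    except ImportError:
        return False
    # Limit OpenCV's internal thread pool to avoid idle thread contention;
    # modules that need parallel cv2 ops can raise this in their start().
    cv2.setNumThreads(cv2_threads)
    return True


configured = bootstrap_worker()
```

`_worker_entrypoint` would then reduce to a single `bootstrap_worker()` call before building `instances`.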

instances: dict[int, Any] = {}

try:
6 changes: 5 additions & 1 deletion dimos/mapping/occupancy/gradient.py
@@ -12,6 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Any

import numpy as np
from scipy import ndimage # type: ignore[import-untyped]

@@ -50,7 +52,9 @@ def gradient(

# Compute distance transform (distance to nearest obstacle in cells)
# Unknown cells are treated as if they don't exist for distance calculation
distance_cells = ndimage.distance_transform_edt(1 - obstacle_map)
distance_cells: np.ndarray[Any, np.dtype[np.float64]] = ndimage.distance_transform_edt(
1 - obstacle_map
) # type: ignore[assignment]

# Convert to meters and clip to max distance
distance_meters = np.clip(distance_cells * occupancy_grid.resolution, 0, max_distance)
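
The inversion in `distance_transform_edt(1 - obstacle_map)` is the key move: flipping the map makes obstacles the zero cells the transform measures distance to, so each free cell gets its distance to the nearest obstacle. A dependency-free sketch of the same idea — note it uses 4-connected BFS, so it yields grid-step (Manhattan-style) distances rather than scipy's true Euclidean ones:

```python
from collections import deque


def distance_to_obstacles(obstacle_map: list[list[int]]) -> list[list[float]]:
    """Distance (in cells) from every cell to the nearest obstacle.

    Multi-source BFS: seed the queue with all obstacle cells at distance 0,
    then expand outward, mirroring what the EDT computes on 1 - obstacle_map.
    """
    rows, cols = len(obstacle_map), len(obstacle_map[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    queue: deque[tuple[int, int]] = deque()
    for r in range(rows):
        for c in range(cols):
            if obstacle_map[r][c]:  # obstacle cell: distance 0
                dist[r][c] = 0.0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] > dist[r][c] + 1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist


grid = [[0, 0, 1],
        [0, 0, 0],
        [0, 0, 0]]
d = distance_to_obstacles(grid)
# d[0][2] == 0.0 (on the obstacle), d[2][0] == 4.0 (four grid steps away)
```

Multiplying by `occupancy_grid.resolution` and clipping, as the diff does, then converts cell distances to meters.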
70 changes: 70 additions & 0 deletions dimos/memory2/__init__.py
@@ -0,0 +1,70 @@
from dimos.memory2.buffer import (
BackpressureBuffer,
Bounded,
ClosedError,
DropNew,
KeepLast,
Unbounded,
)
from dimos.memory2.embed import EmbedImages, EmbedText
from dimos.memory2.impl.memory import ListBackend, MemorySession, MemoryStore
from dimos.memory2.impl.sqlite import SqliteBackend, SqliteSession, SqliteStore, SqliteStoreConfig
from dimos.memory2.livechannel import SubjectChannel
from dimos.memory2.store import Session, SessionConfig, Store, StoreConfig, StreamNamespace
from dimos.memory2.stream import Stream
from dimos.memory2.transform import FnTransformer, QualityWindow, Transformer
from dimos.memory2.type.backend import Backend, LiveChannel, VectorStore
from dimos.memory2.type.filter import (
AfterFilter,
AtFilter,
BeforeFilter,
Filter,
NearFilter,
PredicateFilter,
StreamQuery,
TagsFilter,
TimeRangeFilter,
)
from dimos.memory2.type.observation import EmbeddedObservation, Observation

__all__ = [
"AfterFilter",
"AtFilter",
"Backend",
"BackpressureBuffer",
"BeforeFilter",
"Bounded",
"ClosedError",
"DropNew",
"EmbedImages",
"EmbedText",
"EmbeddedObservation",
"Filter",
"FnTransformer",
"KeepLast",
"ListBackend",
"LiveChannel",
"MemorySession",
"MemoryStore",
"NearFilter",
"Observation",
"PredicateFilter",
"QualityWindow",
"Session",
"SessionConfig",
"SqliteBackend",
"SqliteSession",
"SqliteStore",
"SqliteStoreConfig",
"Store",
"StoreConfig",
"Stream",
"StreamNamespace",
"StreamQuery",
"SubjectChannel",
"TagsFilter",
"TimeRangeFilter",
"Transformer",
"Unbounded",
"VectorStore",
]
115 changes: 115 additions & 0 deletions dimos/memory2/architecture.md
@@ -0,0 +1,115 @@
# memory

Observation storage and streaming layer for DimOS. Pull-based, lazy, composable.

## Architecture

```
Live Sensor Data
Store → Session → Stream → [filters / transforms / terminals] → Stream → [filters / transforms / terminals] → Stream → Live hooks
↓ ↓ ↓
Backend (ListBackend, SqliteBackend) Backend In Memory
```

**Store** owns a storage location (file, in-memory). **Session** manages named streams over a shared connection. **Stream** is the query/iteration surface — lazy until a terminal is called.


Supporting Systems:

- BlobStore — separates large payloads from metadata. FileBlobStore (files on disk) and SqliteBlobStore (blob table per stream). Supports lazy loading.
- Codecs — codec_for() auto-selects: JpegCodec for images (TurboJPEG, ~10-20x compression), LcmCodec for DimOS messages, PickleCodec fallback.
- Transformers — Transformer[T,R] ABC wrapping iterator-to-iterator. EmbedImages/EmbedText enrich observations with embeddings. QualityWindow keeps best per time window.
- Backpressure Buffers — KeepLast, Bounded, DropNew, Unbounded — bridge push/pull for live mode.
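
The backpressure buffers bridge push-based live data into pull-based iteration. A minimal single-slot sketch of the assumed KeepLast semantics (a fast producer overwrites the pending item so the consumer always pulls the freshest value; this is not the dimos implementation):

```python
import threading


class KeepLast:
    """One-slot push→pull bridge: only the most recent item survives."""

    def __init__(self) -> None:
        self._cond = threading.Condition()
        self._item = None
        self._has_item = False
        self._closed = False

    def push(self, item) -> None:  # producer side (push, never blocks)
        with self._cond:
            self._item = item  # overwrite: only the last value is kept
            self._has_item = True
            self._cond.notify()

    def close(self) -> None:
        with self._cond:
            self._closed = True
            self._cond.notify_all()

    def __iter__(self):  # consumer side (pull)
        while True:
            with self._cond:
                while not self._has_item and not self._closed:
                    self._cond.wait()
                if not self._has_item:  # closed and drained
                    return
                item, self._has_item = self._item, False
            yield item  # yield outside the lock so producers aren't blocked


buf = KeepLast()
for i in range(5):
    buf.push(i)
buf.close()
drained = list(buf)  # a slow consumer sees only the last pushed item
```

Bounded and DropNew would swap the single slot for a capacity-limited deque with blocking or dropping on overflow, respectively.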


## Modules

| Module | What |
|----------------|-------------------------------------------------------------------|
| `stream.py` | Stream node — filters, transforms, terminals |
| `backend.py` | Backend protocol, LiveChannel / VectorStore / BlobStore ABCs |
| `filter.py` | StreamQuery dataclass, filter types, Python query execution |
| `transform.py` | Transformer ABC, FnTransformer, FnIterTransformer, QualityWindow |
| `buffer.py` | Backpressure buffers for live mode (KeepLast, Bounded, Unbounded) |
| `store.py` | Store / Session (Configurable), StoreConfig / SessionConfig |
| `type.py` | Observation, EmbeddedObservation dataclasses |
| `embed.py` | EmbedImages / EmbedText transformers |

## Subpackages

| Package | What | Docs |
|--------------|------------------------------------------------------|--------------------------------------------------|
| `impl/` | Backend implementations (ListBackend, SqliteBackend) | [impl/README.md](impl/README.md) |
| `livechannel/` | Live notification channels (SubjectChannel) | |
| `blobstore/` | Pluggable blob storage (file, sqlite) | [blobstore/blobstore.md](blobstore/blobstore.md) |
| `codecs/` | Encode/decode for storage (pickle, JPEG, LCM) | [codecs/README.md](codecs/README.md) |

## Docs

| Doc | What |
|-----|------|
| [streaming.md](streaming.md) | Lazy vs materializing vs terminal — evaluation model, live safety |
| [embeddings.md](embeddings.md) | Embedding layer design — EmbeddedObservation, vector search, EmbedImages/EmbedText |
| [blobstore/blobstore.md](blobstore/blobstore.md) | BlobStore architecture — separate payload storage from metadata |

## Query execution

`StreamQuery` holds the full query spec (filters, text search, vector search, ordering, offset/limit). It also provides `apply(iterator)` — a Python-side execution path that runs all operations as in-memory predicates, brute-force cosine, and list sorts.

This is the **default fallback**. Backends are free to push down operations using store-specific strategies instead:

| Operation | Python fallback (`StreamQuery.apply`) | Store push-down (example) |
|----------------|---------------------------------------|----------------------------------|
| Filters | `filter.matches()` predicates | SQL WHERE clauses |
| Text search | Case-insensitive substring | FTS5 full-text index |
| Vector search | Brute-force cosine similarity | vec0 / FAISS ANN index |
| Ordering | `sorted()` materialization | SQL ORDER BY |
| Offset / limit | `islice()` | SQL OFFSET / LIMIT |

`ListBackend` delegates entirely to `StreamQuery.apply()`. `SqliteBackend` translates the query into SQL and only falls back to Python for operations it can't express natively.

Transform-sourced streams (post `.transform()`) always use `StreamQuery.apply()` since there's no backend to push down to.

## Quick start

```python
from dimos.memory2 import MemoryStore

store = MemoryStore()
with store.session() as session:
images = session.stream("images")

# Write
images.append(frame, ts=time.time(), pose=(x, y, z), tags={"camera": "front"})

# Query
recent = images.after(t).limit(10).fetch()
nearest = images.near(pose, radius=2.0).fetch()
latest = images.last()

# Transform (class or bare generator function)
edges = images.transform(Canny()).save(session.stream("edges"))

def running_avg(upstream):
total, n = 0.0, 0
for obs in upstream:
total += obs.data; n += 1
yield obs.derive(data=total / n)
avgs = images.transform(running_avg).fetch()

# Live
for obs in images.live().transform(process):
handle(obs)

# Embed + search
images.transform(EmbedImages(clip)).save(session.stream("embedded"))
results = session.stream("embedded").search(query_vec, k=5).fetch()
```

## Implementations

| Backend | Status | Storage |
|-----------------|----------|----------------------------------------|
| `ListBackend` | Complete | In-memory (lists + brute-force search) |
| `SqliteBackend` | Complete | SQLite (WAL, FTS5, vec0) |
19 changes: 19 additions & 0 deletions dimos/memory2/blobstore/__init__.py
@@ -0,0 +1,19 @@
# Copyright 2026 Dimensional Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from dimos.memory2.blobstore.file import FileBlobStore
from dimos.memory2.blobstore.sqlite import SqliteBlobStore
from dimos.memory2.type.backend import BlobStore

__all__ = ["BlobStore", "FileBlobStore", "SqliteBlobStore"]
86 changes: 86 additions & 0 deletions dimos/memory2/blobstore/blobstore.md
@@ -0,0 +1,86 @@
# blobstore/

Separates payload blob storage from metadata indexing. Observation payloads vary hugely in size — a `Vector3` is 24 bytes, a camera frame is megabytes. Storing everything inline penalizes metadata queries. BlobStore lets large payloads live elsewhere.

## ABC (`backend.py`)

```python
class BlobStore(Resource, ABC):
def put(self, stream_name: str, key: int, data: bytes) -> None: ...
def get(self, stream_name: str, key: int) -> bytes: ... # raises KeyError if missing
def delete(self, stream_name: str, key: int) -> None: ... # silent if missing
```

- `stream_name` — stream name (used to organize storage: directories, tables)
- `key` — observation id
- `data` — encoded payload bytes (codec handles serialization, blob store handles persistence)
- Extends `Resource` (start/stop) but does NOT own its dependencies' lifecycle

## Implementations

### `file.py` — FileBlobStore

Stores blobs as files on disk, one directory per stream.

```
{root}/{stream}/{key}.bin
```

`__init__(root: str | os.PathLike[str])` — `start()` creates the root directory.

### `sqlite.py` — SqliteBlobStore

Stores blobs in a separate SQLite table per stream.

```sql
CREATE TABLE "{stream}_blob" (id INTEGER PRIMARY KEY, data BLOB NOT NULL)
```

`__init__(conn: sqlite3.Connection)` — does NOT own the connection.

**Internal use** (same db as metadata): `SqliteStore.session()` creates one connection, passes it to both the metadata backend and the blob store.

**External use** (separate db): user creates a separate connection and passes it. User manages that connection's lifecycle.

**JOIN optimization** (future): when `lazy=False` and the blob store shares the same connection as the metadata backend, `SqliteBackend` can optimize with a JOIN instead of separate queries:

```sql
SELECT m.id, m.ts, m.pose, m.tags, b.data
FROM "images" m JOIN "images_blob" b ON m.id = b.id
WHERE m.ts > ?
```

## Lazy loading

`lazy` is a stream-level flag, orthogonal to blob store choice. It controls WHEN data is loaded:

- `lazy=False` → backend loads payload during iteration (eager)
- `lazy=True` → backend sets `Observation._loader`, payload loaded on `.data` access

| lazy | blob store | loading strategy |
|------|-----------|-----------------|
| False | SqliteBlobStore (same conn) | JOIN — one round trip |
| False | any other | iterate meta, `blob_store.get()` per row |
| True | any | iterate meta only, `_loader = lambda: codec.decode(blob_store.get(...))` |
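
The `lazy=True` row above hinges on the `_loader` callback. An illustrative sketch of that mechanism — the payload is fetched on first `.data` access and cached; field names beyond `_loader` are assumptions:

```python
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class LazyObservation:
    """Backend stores only a loader callback; .data defers the blob fetch."""

    _loader: Callable[[], Any]
    _data: Any = field(default=None, init=False)
    _loaded: bool = field(default=False, init=False)

    @property
    def data(self) -> Any:
        if not self._loaded:  # one blob_store.get()/codec.decode round trip,
            self._data = self._loader()  # deferred until actually needed
            self._loaded = True
        return self._data


calls = []
obs = LazyObservation(_loader=lambda: calls.append(1) or b"payload")
# nothing is loaded at construction; first access runs the loader exactly once
first, second = obs.data, obs.data
```

This is why `lazy` can stay orthogonal to blob store choice: any store works, since the closure captures the store and key.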

## Usage

```python
# Per-stream blob store choice
with store.session() as session:
poses = session.stream("poses", PoseStamped) # default, eager
images = session.stream("images", Image, lazy=True) # default, lazy
images = session.stream("images", Image, blob_store=file_blobs) # override
```

## Files

```
backend.py BlobStore ABC (alongside Backend, LiveBackend)
blobstore/
blobstore.md this file
__init__.py re-exports BlobStore, FileBlobStore, SqliteBlobStore
file.py FileBlobStore
sqlite.py SqliteBlobStore
test_blobstore.py grid tests across implementations
```