Personal AI agents are exploding in popularity, but nearly all of them still route intelligence through cloud APIs. Your "personal" AI continues to depend on someone else's server. At the same time, our Intelligence Per Watt research showed that local language models already handle 88.7% of single-turn chat and reasoning queries, with intelligence efficiency improving 5.3× from 2023 to 2025. The models and hardware are increasingly ready. What has been missing is the software stack to make local-first personal AI practical.
OpenJarvis is that stack. It is an opinionated framework for local-first personal AI, built around three core ideas: shared primitives for building on-device agents; evaluations that treat energy, FLOPs, latency, and dollar cost as first-class constraints alongside accuracy; and a learning loop that improves models using local trace data. The goal is simple: make it possible to build personal AI agents that run locally by default, calling the cloud only when truly necessary. OpenJarvis aims to be both a research platform and a production foundation for local AI, in the spirit of PyTorch.
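The local-by-default, cloud-only-when-necessary idea can be sketched in a few lines. This is an illustrative toy, not the OpenJarvis API: the function names (`route`, `estimate_difficulty`) and the word-count heuristic are hypothetical stand-ins for a real router, which would score query difficulty with a lightweight local model and track energy and latency budgets.

```python
def estimate_difficulty(query: str) -> float:
    """Stand-in heuristic: treat longer queries as harder (0.0 easy .. 1.0 hard)."""
    return min(len(query.split()) / 200.0, 1.0)

def route(query: str, local_threshold: float = 0.85) -> str:
    """Keep a query on-device unless its estimated difficulty exceeds the threshold."""
    difficulty = estimate_difficulty(query)
    return "local" if difficulty <= local_threshold else "cloud"

if __name__ == "__main__":
    print(route("What is the capital of France?"))  # short query stays local
```

Under this sketch, simple single-turn questions stay on-device; only queries scored above the threshold escalate.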
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync # core framework
uv sync --extra server # + FastAPI server
You also need a local inference backend: Ollama, vLLM, SGLang, or llama.cpp.
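If you are unsure whether a backend is already running, you can probe the common default ports (Ollama's is 11434; the others below assume each server's standard default). This is a standalone check, not part of OpenJarvis:

```python
import socket

# Default ports for common local inference backends (standard server defaults).
BACKENDS = {
    "ollama": 11434,
    "vllm": 8000,
    "llama.cpp": 8080,
    "sglang": 30000,
}

def reachable(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in BACKENDS.items():
        status = "up" if reachable(port) else "not detected"
        print(f"{name:10s} (port {port}): {status}")
```

A port being open only shows that something is listening there; jarvis doctor (below) does a fuller health check.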
The fastest path is Ollama on any machine with Python 3.10+:
# 1. Install OpenJarvis
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync
# 2. Detect hardware and generate config
uv run jarvis init
# 3. Install and start Ollama (https://ollama.com)
curl -fsSL https://ollama.com/install.sh | sh
ollama serve # start the Ollama server
# 4. Pull a model
ollama pull qwen3:8b
# 5. Ask a question
uv run jarvis ask "What is the capital of France?"
# 6. Verify your setup
uv run jarvis doctor
jarvis init auto-detects your hardware and recommends the best engine. After init, it prints engine-specific next steps. Run uv run jarvis doctor at any time to diagnose configuration or connectivity issues.
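You can also sanity-check the Ollama endpoint directly, independent of the jarvis CLI. Ollama exposes an HTTP API on localhost:11434; the sketch below uses its /api/generate endpoint with streaming disabled (standard Ollama API fields, not OpenJarvis code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming Ollama generate request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_ollama(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` running and the model pulled (step 4 above).
    print(ask_ollama("qwen3:8b", "What is the capital of France?"))
```

If this works but jarvis ask does not, the problem is in your OpenJarvis configuration rather than the backend.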
To build from source, first make sure Rust is installed on your system:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Then build the Rust extension, which is required for full functionality (security, tools, agents, etc.):
# 1. Clone and install Python deps
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync --extra dev
# 2. Build and install the Rust extension (requires Rust toolchain)
uv run maturin develop -m rust/crates/openjarvis-python/Cargo.toml
# 3. Run tests
uv run pytest tests/ -v
See Contributing for more.
OpenJarvis is part of Intelligence Per Watt, a research initiative studying the efficiency of on-device AI systems. The project is developed at Hazy Research and the Scaling Intelligence Lab at Stanford SAIL.
Laude Institute • Stanford Marlowe • Google Cloud Platform • Lambda Labs • Ollama • IBM Research • Stanford HAI
