KestrelAI

Under construction

About

Kestrel is a local-first research assistant that self-schedules and executes support tasks over long horizons, so you can stay focused on the key, complex, and novel work. It handles the mechanical parts of research (searching, filtering, extracting, and organizing evidence) in the background, preparing the results for your review and analysis.

Designed for small local models. Kestrel is built to run with small LLMs (~1–7B parameters) via Ollama, so it is laptop- and desktop-friendly (macOS, Windows, Linux; CPU-only works, GPU optional). No server cluster required.

What it does

  • Task Exploration: Uses gathered evidence to iterate on tasks, identifying and pursuing new branches and next steps.
  • Source Provenance: Takes a deep-research approach while maintaining a clear record of the searches consulted and how they shaped exploration and findings.
  • Long-running Support: Keeps tasks moving in the background and surfaces auditable checkpoints for review and redirection, rather than a single final report.

Example: As you interpret a frontier paper and plan a novel study, Kestrel runs in the background to resolve a tooling question (Polars vs. pandas vs. Spark): gathering real-world usage, extracting cited performance claims, and assembling a reviewable comparison.

How it works (at a glance)

  • Research Agent — Executes concrete micro-tasks (search → retrieve → filter → extract → narrow summary) and proposes specific follow-ups when evidence warrants it.
  • Orchestrator — Plans/schedules work, tracks progress and blockers, pivots when stalled, and emits checkpoints you can accept or adjust. This keeps efforts coherent over hours or days.
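The Research Agent's micro-task cycle can be sketched as a simple pipeline. All function names below are hypothetical stand-ins for illustration, not Kestrel's actual API:

```python
# Illustrative sketch of one research-agent micro-task cycle:
# search -> retrieve -> filter -> extract -> narrow summary.
# Every function here is a hypothetical stand-in, not Kestrel's real API.

def search(query: str) -> list[str]:
    # Stand-in: would query a search backend; here, fixed candidate URLs.
    return [f"https://example.org/{query.replace(' ', '-')}/{i}" for i in range(3)]

def retrieve(url: str) -> str:
    # Stand-in: would fetch and clean the page text.
    return f"text of {url}"

def filter_relevant(docs: list[str], query: str) -> list[str]:
    # Stand-in: would rank documents against the task with a small LLM;
    # here, simply keep the first two.
    return docs[:2]

def extract_claims(doc: str) -> str:
    # Stand-in: would pull cited claims out of the document.
    return f"claim from ({doc})"

def summarize(claims: list[str]) -> str:
    # Stand-in: would produce the narrow summary handed to the orchestrator.
    return "; ".join(claims)

def micro_task(query: str) -> str:
    urls = search(query)
    docs = [retrieve(u) for u in urls]
    evidence = [extract_claims(d) for d in filter_relevant(docs, query)]
    return summarize(evidence)
```

The orchestrator's job is then to decide which `micro_task` calls to schedule next, and when to emit a checkpoint instead of continuing.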

Orchestrator profiles

| Profile | Scope & behavior | Typical runtime |
| --- | --- | --- |
| Hummingbird | Quick breadth-first sweeps; targeted lookups. | Minutes to ~2 hours |
| Kestrel | Medium-horizon exploration with closely related branches. | Multi-hour |
| Albatross | Long-horizon, open-ended exploration with reviews. | Day-scale and beyond |

(Names are mnemonic. Community-authored profiles with clear behaviors are welcome—e.g., a Parrot profile for multi-agent dialogue.)
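One way to think about the profiles above is as a small table of scheduling parameters. The field names and values below are illustrative only, not Kestrel's actual configuration schema:

```python
# Hypothetical profile table: the names mirror the README;
# the fields and values are illustrative, not Kestrel's real config.
PROFILES = {
    "hummingbird": {"horizon_hours": 2,  "branching": "breadth-first", "reviews": False},
    "kestrel":     {"horizon_hours": 8,  "branching": "related",       "reviews": True},
    "albatross":   {"horizon_hours": 24, "branching": "open-ended",    "reviews": True},
}

def pick_profile(budget_hours: float) -> str:
    """Choose the smallest profile whose horizon covers the time budget."""
    for name, cfg in PROFILES.items():
        if budget_hours <= cfg["horizon_hours"]:
            return name
    return "albatross"  # longest horizon is the fallback
```

A community-authored profile would then just be another entry in this table with clearly documented behavior.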

Screenshots

Home page

Adding task

Design

Built With

  • Backend: FastAPI
  • Frontend: React
  • Middleware: Redis
  • Model Runner: Ollama

More model engines and integrations are under development!

Architecture


Data Contracts

Redis Queues and Channels:

REST API:
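The REST contract itself is not documented here. As an illustration only, a task-creation payload matching the Usage section (a description plus a time-box) might look like the following; every field name is hypothetical, not Kestrel's documented schema:

```python
import json

# Hypothetical task-creation payload; field names are illustrative,
# not Kestrel's documented REST schema.
task = {
    "name": "Polars vs. pandas vs. Spark",
    "description": "Gather real-world usage and cited performance claims, "
                   "and assemble a reviewable comparison.",
    "time_box_minutes": 120,
    "profile": "kestrel",
}

payload = json.dumps(task)  # body for a task-creation request
```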

Getting Started

Prerequisites

Ensure the following are installed:

Installation

Clone the repository from GitHub:

```shell
git clone git@github.com:dankeg/KestrelAI.git
```

Ensure that Docker Desktop is running (an easy way to check is to run `docker info` in a terminal).

Navigate to the root of the repo and run the following command to build and launch the application:

```shell
docker compose up --build
```

Alternatively, rather than building locally, you can fetch prebuilt images (published via GitHub Actions) with `docker pull` first.

After a few minutes, the build will finish. By default, the application is served at http://localhost:5173/; open it in a browser and it's ready to use!

Usage

Using Kestrel is fairly straightforward: create a new task, write a description for the research agents to base their exploration on, and provide a time-box. Examples of tasks Kestrel can perform are provided.

From there, the dashboard lets you monitor progress: tracking searches and the resulting sources, viewing checkpoints and reports, and exporting results.

Roadmap

See the open issues for a list of proposed features (and known issues).

Support

Reach out to the maintainer at one of the following places:

Project assistance

If you want to say thank you and/or support active development of Kestrel:

  • Add a GitHub Star to the project.
  • Tweet about Kestrel.
  • Write interesting articles about the project on Dev.to, Medium, or your personal blog.

Together, we can make Kestrel better!

Contributing

All contributions are greatly appreciated. Kestrel is meant to be a practical, community-focused project supporting real-world use cases. New feature suggestions are welcome, especially new orchestrator profiles tailored to particular tasks or requirements, such as observability or structure.

Please read our contribution guidelines, and thank you for being involved!

Authors & contributors

The original setup of this repository is by Ganesh Danke.

For a full list of all authors and contributors, see the contributors page.

License

This project is licensed under the MIT license.

See LICENSE for more information.
