llmctl is a minimal, Unix-style command-line tool for interacting with Large Language Models (LLMs).
It is designed for ask-only workflows, where LLM output is treated like any other CLI program output:
plain text via stdout, suitable for piping, redirection, and version control.
No agents.
No hidden state.
No browser UI.
No IDE lock-in.
llmctl follows classic Unix principles:
- Do one thing well
- Text in, text out
- Explicit context instead of implicit memory
- User stays in control
LLMs are treated as stateless text processors, not autonomous agents.
Features:

- Ask LLMs directly from the command line
- Streamed output to stdout
- Explicit context via files
- Works with OpenAI-compatible APIs
- Easy integration with editors, pipes, and scripts
- No automatic code execution
- No filesystem side effects unless you explicitly redirect output
- Estimated token counter
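The token counter is described as an estimate; a common rule of thumb for English text is roughly four characters per token. A minimal sketch of that heuristic (an assumption for illustration, not necessarily the algorithm llmctl uses):

```sh
# Rough token estimate: ~4 characters per token (a common heuristic,
# not necessarily what llmctl implements).
prompt="explain systemd timers"
chars=$(printf '%s' "$prompt" | wc -c)
echo $((chars / 4))   # prints 5
```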
Build from source (Go):

```sh
git clone https://github.com/sebidev/llmctl.git
cd llmctl
go build -o llmctl
```

Place the binary somewhere in your PATH.
llmctl reads its configuration from environment variables:

```sh
export OPENAI_API_KEY="your-api-key"
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_MODEL="gpt-4.1-mini"
```

This makes it easy to switch between cloud providers and local OpenAI-compatible servers (e.g. LM Studio or llama.cpp servers).
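For example, pointing llmctl at a local LM Studio instance might look like this (1234 is LM Studio's default port; the model name here is purely illustrative):

```sh
# Local OpenAI-compatible server (LM Studio listens on port 1234 by default).
export OPENAI_BASE_URL="http://localhost:1234/v1"
export OPENAI_API_KEY="unused-locally"   # many local servers ignore the key
export OPENAI_MODEL="local-model"        # whatever model the local server exposes
```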
Simple prompt:

```sh
llmctl "explain systemd timers"
```

Redirect output to a file:

```sh
llmctl "hello world program in C++" > hello.cpp
```

Append to an existing document:

```sh
llmctl "add a summary section" >> notes.md
```

Provide explicit context using files:

```sh
llmctl --context notes.md "expand section 5"
```

Append the result back into the same file:

```sh
llmctl --context notes.md "expand section 5" >> notes.md
```

This makes the file itself the source of truth and the conversation memory.
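The append-driven memory loop can be sketched with a stand-in command (here `echo` plays the role of llmctl, since this sketch does not assume llmctl is installed):

```sh
# File-as-memory sketch: each run reads the file as context and appends output.
doc=/tmp/llmctl_memory_demo.md
printf 'Section 5: TODO\n' > "$doc"
# In practice: llmctl --context "$doc" "expand section 5" >> "$doc"
echo 'Section 5, expanded.' >> "$doc"
cat "$doc"
# prints:
# Section 5: TODO
# Section 5, expanded.
```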
To avoid sending entire large files, limit context size:
```sh
llmctl --context notes.md --tail 2500 "continue this section"
```

Only the last N characters (or tokens, depending on the implementation) are sent.
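The effect of such a limit can be approximated with coreutils: `tail -c` keeps only the last N bytes of a file, which is roughly what a character-based `--tail` would send (assuming truncation from the end; bytes equal characters only for ASCII text):

```sh
# Approximate a character-based --tail with coreutils: keep the last N bytes.
printf '0123456789' > /tmp/llmctl_tail_demo.txt
tail -c 4 /tmp/llmctl_tail_demo.txt   # prints 6789
```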
Because llmctl uses stdout, it works naturally with pipes:
```sh
cat error.log | llmctl "explain this error"
llmctl "list common nginx systemd units" | grep nginx
llmctl "generate markdown outline" >> draft.md
```

Typical use cases:

- Writing and extending Markdown documents
- Code generation and explanation
- Infrastructure notes and documentation
- Learning and research
- Shell-based workflows without browser or IDE integration
By design, llmctl is:

- Not an autonomous agent
- Not an IDE plugin
- Not a chat application
- Not a code executor
If you want automation, use scripts.
If you want memory, use files.
If you want control, use llmctl.
MIT License
llmctl is intentionally boring.
That is its strength.