Infrastructure/process layer for running self-hosted AI capabilities in pursuit of Shared Goals.
Thunder Forge focuses on operating the compute and automation stack that can work with private data (finance, healthcare, family/child privacy) without relying on third-party hosted agents.
- GitHub org: https://github.com/shared-goals/
- This repo: https://github.com/shared-goals/thunder-forge
A personal image of Joy and Happiness is treated as the base source of motives.
Details (RU): Shared Goals — use case and concept
- A motive is a reason to act (rooted in what brings joy/happiness).
- A goal is a direction or outcome shaped by one or more motives.
- When goals are shared among coauthors, their motives combine and the overall momentum grows.
Shared Goals is developed as a living Text that anyone can fork and rewrite into their own.
- Concept Text (evolving): https://github.com/bongiozzo/whattodo
- Common build submodule: https://github.com/shared-goals/text-forge
text-forge transforms a Text repository into:
- a website (with link-sharing functionality in the publishing format)
- an EPUB book
- a combined Markdown corpus suitable for AI usage (RAG/MCP agents and skills)
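The combined-corpus output is the simplest of the three to picture. Below is a minimal sketch (not text-forge's actual build, which lives in its own repo) that concatenates a Text repository's Markdown into one file for RAG/MCP ingestion; the paths and output name are illustrative assumptions.

```python
# Minimal sketch, not text-forge itself: concatenate a Text repo's Markdown
# files into one corpus file that a RAG pipeline or MCP agent can ingest.
from pathlib import Path

def build_corpus(text_repo: Path, out_file: Path) -> None:
    parts = []
    for md in sorted(text_repo.rglob("*.md")):
        # Keep the source path as a comment so retrieved chunks stay traceable.
        parts.append(f"<!-- source: {md.relative_to(text_repo)} -->\n{md.read_text(encoding='utf-8')}")
    out_file.write_text("\n\n".join(parts), encoding="utf-8")

# Hypothetical local checkout of the Concept Text and an output name.
build_corpus(Path("whattodo"), Path("corpus.md"))
```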
Thunder Forge is the infrastructure/process layer for self-hosted execution of agents and skills.
Typical managed parts:
- Nodes: machines in a self-hosted cluster (e.g., several Mac Studios)
- LLMs on nodes: models served locally via Ollama (https://github.com/ollama/ollama)
- Automation/agents runtime: workflows and long-running processes (often via n8n)
- Skills: reusable tool capabilities agents can invoke
- Skills catalog (agent skills): https://github.com/agentskills/agentskills
Note: n8n and skills catalogs are intended integration points. This repo is the operational “glue” and runbooks/specs layer to run them self-hosted.
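As a concrete example of the "LLMs on nodes" part: Ollama exposes a small HTTP API on each node (port 11434 by default), so listing a node's local models is a one-call check. The host name `msm1` is borrowed from the example inventory later in this README; the snippet is a sketch, not part of this repo.

```python
# Sketch: confirm a node's local Ollama daemon is up and list the models it
# serves, using Ollama's standard REST API (GET /api/tags on port 11434).
import json
from urllib.request import urlopen

def list_local_models(host: str = "127.0.0.1", port: int = 11434) -> list[str]:
    with urlopen(f"http://{host}:{port}/api/tags", timeout=2) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

print(list_local_models("msm1"))  # "msm1" is an example node name
```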
```mermaid
flowchart LR
  SG["Shared Goals<br/>joy/happiness -> motives"] --> G["Goals<br/>(shared among coauthors)"]
  T["Text<br/>(forkable Markdown)"] --> TF["text-forge<br/>site + EPUB + combined corpus"]
  TF --> K["AI-ready corpus<br/>(RAG/MCP input)"]
  G --> A["Agents<br/>(plan and act)"]
  K --> A
  A --> S["Skills<br/>(callable capabilities)"]
  S --> N["Self-hosted nodes<br/>(cluster machines)"]
  N --> O["Local LLMs<br/>(Ollama)"]
  W["Orchestrator (optional)<br/>(n8n)"] --> A
  W --> S
```
Shared Goals activities can involve highly sensitive data.
- Prefer self-hosted nodes and self-hosted agents for private domains.
- Keep data access least-privilege (skills should request only what they need).
- Treat secrets and tokens as production-grade (no plaintext in repos).
- Make agent activity auditable (logs, runs, and permissions).
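One possible shape for the auditability point above is an append-only JSONL record per agent/skill invocation. The field names and log path below are illustrative only, not a format this repo defines.

```python
# Sketch of an append-only audit trail for agent/skill activity.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("artifacts/agent-audit.jsonl")  # hypothetical location

def audit(agent: str, skill: str, allowed: bool, detail: str = "") -> None:
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "skill": skill,
        "allowed": allowed,  # record denials as well as successful runs
        "detail": detail,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

audit("planner", "read_finance_sheet", allowed=False, detail="outside least-privilege scope")
```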
This repository is currently at an early scaffolding stage (see LICENSE).
Near-term intended contents include:
- node inventory/specs and bootstrap runbooks
- model/LLM deployment conventions (Ollama-managed)
- agent/workflow conventions (including optional n8n patterns)
- a skills registry format + examples
This repo currently contains a minimal vertical slice:
- `GET /health` (public)
- `GET /mini-app/` (static Mini App)
- `POST /api/mini-app/me` (Telegram initData auth + admin allowlist)
- `POST /api/mini-app/status` (live reachability checks from inventory)
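For context on the `/api/mini-app/me` check: Telegram's documented Mini App scheme validates `initData` with an HMAC derived from the bot token. The sketch below follows that public scheme; the function names and the way the allowlist is applied are assumptions, not this repo's actual code.

```python
# Sketch of Telegram Mini App initData validation plus an admin allowlist check.
import hashlib, hmac, json
from urllib.parse import parse_qsl

def is_valid_init_data(init_data: str, bot_token: str) -> bool:
    fields = dict(parse_qsl(init_data, keep_blank_values=True))
    received_hash = fields.pop("hash", "")
    # Data-check-string: sorted key=value pairs, hash excluded, newline-joined.
    data_check_string = "\n".join(f"{k}={v}" for k, v in sorted(fields.items()))
    secret_key = hmac.new(b"WebAppData", bot_token.encode(), hashlib.sha256).digest()
    expected = hmac.new(secret_key, data_check_string.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_hash)

def is_admin(init_data: str, admin_ids: set[int]) -> bool:
    user = json.loads(dict(parse_qsl(init_data)).get("user", "{}"))
    return user.get("id") in admin_ids  # otherwise the API answers 403
```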
- Create `tf.yml` (kept out of git; it's ignored by default):
```yaml
server:
  bind: 127.0.0.1
  port: 8000
  reload: true

telegram:
  bot_token: "..."

access:
  admin_telegram_ids:
    - 123

mini_app_url: http://127.0.0.1:8000/mini-app/

settings:
  ssh:
    connect_timeout_seconds: 1.0
    batch_mode: true
  monitor:
    ssh_port: 22
    ollama_port: 11434
  hosts_sync:
    managed_block_start: "# BEGIN thunder-forge"
    managed_block_end: "# END thunder-forge"

nodes:
  defaults:
    ssh_user: you
    service_manager: brew
  items:
    - name: msm1
      mgmt_ip: 192.168.1.101

fabricnet:
  service_name: "Thunderbolt Bridge"
  ipv4_defaults:
    netmask: 255.255.255.252
    router: ""
  nodes:
    - name: msm1
      address: 172.16.10.2
```

- Run `make sync`, then `make serve`.
The server binds to 127.0.0.1:8000 by default.
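Once `make serve` is running, the public endpoint gives a quick smoke test. The response body is not specified here, so this sketch only checks the status code.

```python
# Quick local check: /health is public, so a plain GET with no auth
# should return 200 once the server is up on 127.0.0.1:8000.
from urllib.request import urlopen

with urlopen("http://127.0.0.1:8000/health", timeout=2) as resp:
    print(resp.status)  # expect 200
```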
- This is intentionally restricted: if your Telegram user ID is not listed in `tf.yml` (`access.admin_telegram_ids`), the Mini App API returns `403`.
- No DB is used; all state is computed on-demand from `tf.yml`.
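A sketch of that on-demand model, assuming PyYAML and the key layout of the example `tf.yml` above (the real `/api/mini-app/status` handler may differ): read the inventory and probe the monitored ports with plain TCP connects.

```python
# Sketch: no database, just tf.yml plus live TCP reachability probes.
import socket
from pathlib import Path
import yaml  # PyYAML, assumed available

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

cfg = yaml.safe_load(Path("tf.yml").read_text(encoding="utf-8"))
monitor = cfg["settings"]["monitor"]
for node in cfg["nodes"]["items"]:
    host = node["mgmt_ip"]
    print(node["name"], {
        "ssh": port_open(host, monitor["ssh_port"]),
        "ollama": port_open(host, monitor["ollama_port"]),
    })
```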
This repo includes a KISS setup script that uses `tf.yml` to:
- configure the fabric network IPv4 on each node (via SSH + `networksetup` on macOS; typically "Thunderbolt Bridge")
- generate a managed `/etc/hosts` block and push it to all nodes
Commands:
- `make setup-env` (configures fabric IPs; requires `fabricnet.nodes` entries)
- `make hosts` (writes `artifacts/hosts.block`)
- `make push-hosts` (writes `artifacts/hosts.block` and updates `/etc/hosts` on all nodes)
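For illustration, the block that `make hosts` writes to `artifacts/hosts.block` can be rendered straight from `tf.yml`. The sketch below assumes PyYAML and the key layout shown above; it does not cover how `make push-hosts` splices the block into each node's `/etc/hosts`.

```python
# Sketch: render the managed hosts block from the fabricnet inventory,
# bounded by the marker comments configured in settings.hosts_sync.
from pathlib import Path
import yaml  # PyYAML, assumed available

cfg = yaml.safe_load(Path("tf.yml").read_text(encoding="utf-8"))
sync = cfg["settings"]["hosts_sync"]

lines = [sync["managed_block_start"]]
lines += [f'{n["address"]} {n["name"]}' for n in cfg["fabricnet"]["nodes"]]
lines.append(sync["managed_block_end"])

out = Path("artifacts/hosts.block")
out.parent.mkdir(exist_ok=True)
out.write_text("\n".join(lines) + "\n", encoding="utf-8")
```

With the single-node example inventory above, the rendered block is just:

```
# BEGIN thunder-forge
172.16.10.2 msm1
# END thunder-forge
```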