
ModelDock (All in One LLM Model)


Meet ModelDock – the local AI cockpit for power users. One clean chat UI, every major LLM: ChatGPT, Claude, Gemini, Grok, Qwen, Mistral, Z.ai, even local Ollama – all docked in a single interface and driven by real Chrome sessions, not fragile APIs. Forget juggling tabs, logins, and keys: ModelDock reuses your own browser sessions via Puppeteer, sidesteps typical bot detection, and streams responses back into a unified chat with voice support and full local privacy. A built‑in developer API, documented at http://localhost:3000/docs/api, lets you script, automate, and wire every provider into your own tools without sending a single token to a third‑party server.

Features

  • Unified Chat Interface: Interact with multiple LLMs from a single UI.
  • No API Keys Required: Uses your existing browser sessions/accounts to communicate with models.
  • Cross-Platform: Supports Windows, macOS, and Linux out of the box with auto-detected Chrome installations.
  • Secure Credentials: Stores session cookies and data securely using keytar and SQLite.
  • Voice Support: Built-in voice features and customizable styling.
  • Privacy-First: Runs entirely locally on your machine.

Supported Providers

  • ChatGPT (OpenAI)
  • Claude (Anthropic)
  • Gemini (Google)
  • Grok (xAI)
  • Z.ai
  • Qwen
  • Mistral
  • Ollama (Local Models)

Prerequisites

  • Node.js: v18 or newer
  • Google Chrome: Installed on your system (paths auto-detected for Windows, macOS, and Linux).

Installation

  1. Clone the repository:

    git clone https://github.com/vkop007/ModelDock.git
    cd ModelDock
  2. Install dependencies:

    npm install

    (Note: the project uses npm by default, but a bun.lock file is included if you prefer Bun.)

  3. Start the development server:

    npm run dev
  4. Access the Application: Open http://localhost:3000 in your web browser.

How It Works

ModelDock uses puppeteer-real-browser to launch a headless or visible Chrome instance. When you send a message to a provider, the backend navigates to that provider's web interface, injects the session cookies you configured in the settings, interacts with the chat DOM entirely inside the browser, and scrapes the AI's response, piping it back to your unified chat window.
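The flow above can be sketched in a few lines of Node. This is a minimal illustration, not ModelDock's actual implementation: the provider URLs, the DOM selectors, and the askProvider helper are all hypothetical stand-ins.

```javascript
// Hypothetical sketch of the browser-driven flow: open a real Chrome
// session, load the provider's web UI, inject saved cookies, and scrape
// the reply from the DOM. URLs and selectors below are assumptions.
const PROVIDER_URLS = {
  chatgpt: 'https://chatgpt.com',
  claude: 'https://claude.ai',
  gemini: 'https://gemini.google.com',
};

async function askProvider(provider, message, cookies = []) {
  // Lazy-require so this file loads even if the dependency isn't installed.
  const { connect } = require('puppeteer-real-browser');
  const { browser, page } = await connect({ headless: false });
  try {
    if (cookies.length) await page.setCookie(...cookies); // reuse your session
    await page.goto(PROVIDER_URLS[provider]);
    await page.type('textarea', message);                 // selector: assumption
    await page.keyboard.press('Enter');
    await page.waitForSelector('.assistant-message');     // selector: assumption
    return await page.$eval('.assistant-message', (el) => el.innerText);
  } finally {
    await browser.close();
  }
}

module.exports = { PROVIDER_URLS, askProvider };
```

In the real app, a selector map per provider (plus streaming of partial output) replaces the single hard-coded selectors shown here.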

Configuration

Navigate to the settings icon in the sidebar to configure individual providers, manage cookies, and adjust your theme/voice preferences.

Developer API

ModelDock now includes a developer-facing local API so you can talk to every supported provider without using the chat UI directly.

  • Docs page: Open http://localhost:3000/docs/api
  • Provider discovery: GET /api/v1/providers
  • Session setup: POST /api/v1/providers/:provider/session
  • Chat endpoint: POST /api/v1/chat

The public API is designed for local scripts, CLIs, automations, and companion tools. Browser-backed providers can reuse your existing web sessions by importing cookies from a local browser or by posting cookies directly to the session endpoint. If you want to protect the API, set MODELDOCK_API_KEY in your environment and send it as a Bearer token or x-modeldock-api-key header.
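A local script might call the chat endpoint like this. The endpoint path and the auth headers come from the docs above, but the request body fields ("provider", "message") and the response shape are assumptions; check http://localhost:3000/docs/api for the real schema.

```javascript
// Sketch of calling the local ModelDock API from a Node 18+ script
// (global fetch). Body fields and response shape are assumptions.
const BASE = 'http://localhost:3000';

function buildChatRequest(provider, message, apiKey) {
  const headers = { 'Content-Type': 'application/json' };
  // The README allows either a Bearer token or an x-modeldock-api-key header.
  if (apiKey) headers['Authorization'] = `Bearer ${apiKey}`;
  return {
    url: `${BASE}/api/v1/chat`,
    options: {
      method: 'POST',
      headers,
      body: JSON.stringify({ provider, message }),
    },
  };
}

async function chat(provider, message) {
  const { url, options } = buildChatRequest(
    provider,
    message,
    process.env.MODELDOCK_API_KEY,
  );
  const res = await fetch(url, options);
  return res.json();
}

module.exports = { buildChatRequest, chat };
```

Usage: `chat('chatgpt', 'Hello!').then(console.log)` — assuming the dev server is running and a session has been set up via POST /api/v1/providers/:provider/session.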

Contributing

We welcome contributions! Please see the CONTRIBUTING.md file for guidelines on how to get started.

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

About

ModelDock is a simple local web app that lets you chat with many AI models (like ChatGPT, Claude, Gemini, Grok, Qwen, Mistral, Z.ai, and local Ollama) from one clean interface using your existing browser sessions instead of API keys.
