Send prompts to multiple LLM providers simultaneously. A privacy-focused desktop application built with Tauri and Rust.
- 🤖 Multi-Provider Support: Send prompts to ChatGPT, Gemini, and Claude simultaneously
- 🔒 Privacy-First: No credential storage, no telemetry, all data stays local
- 💾 Persistent Sessions: Provider webviews with persistent authentication sessions
- 📊 Real-Time Status: Track submission progress with retry logic and timeout handling
- 🔐 Authentication Detection: Automatic detection of login requirements
- 🎨 Split-Screen Layout: Responsive grid layout for viewing multiple providers
- ⚡ Fast & Lightweight: Built with Rust and Tauri 2.0
- Rust (latest stable)
- Node.js (v18+)
- npm or pnpm
```bash
# Clone the repository
git clone https://github.com/your-org/chenchen.git
cd chenchen

# Install frontend dependencies
npm install

# Install Tauri CLI
cargo install tauri-cli

# Run in development mode
npm run tauri dev
```

```bash
# Build optimized binary
npm run tauri build

# Binary will be in src-tauri/target/release/
```
1. Launch ChenChen
   - The app opens with all three providers (ChatGPT, Gemini, Claude) available
2. Select Providers
   - Check the providers you want to send prompts to (1-3 providers)
   - Login indicators show which providers need authentication
3. Authenticate
   - Click "Login Required" buttons to authenticate with providers
   - Sessions are persisted across app restarts
4. Send Prompts
   - Enter your prompt in the text area
   - Click "Send Prompt" or press Enter
   - View real-time submission status for each provider
```
chenchen/
├── src/                    # Svelte frontend
│   ├── components/         # UI components
│   ├── routes/             # Svelte routes
│   ├── services/           # TypeScript services
│   ├── app.css             # Global styles
│   ├── app.html            # App entry point
│   └── types.ts            # TypeScript types
├── src-tauri/              # Rust backend
│   ├── src/
│   │   ├── injection/      # JavaScript injection
│   │   ├── layout/         # Layout calculation
│   │   ├── providers/      # Provider management
│   │   ├── status/         # Submission tracking
│   │   ├── commands.rs     # Tauri commands (IPC)
│   │   ├── lib.rs          # Library entry point
│   │   ├── logging.rs      # Structured logging
│   │   ├── main.rs         # Binary entry point
│   │   ├── state.rs        # App state management
│   │   └── types.rs        # Shared types
│   └── tests/              # Rust tests
└── docs/                   # Documentation
    ├── privacy-policy.md
    ├── testing-guide.md
    └── linux-webview-positioning-fix.md
```
```bash
# Backend tests (Rust)
cd src-tauri
cargo test

# Frontend tests (TypeScript/Svelte)
npm test

# All tests
npm run test:all
```

```bash
# Rust linting
cargo clippy

# Rust formatting
cargo fmt

# TypeScript checking
npm run check
```

- Commands: Tauri IPC interface for frontend communication
- Provider Manager: Manages provider selection state (1-3 providers)
- Layout Calculator: Computes split-screen dimensions based on selected providers
- Webview Manager: Platform-specific session persistence
- Injector: JavaScript injection for prompt submission
- Status Tracker: Submission state machine with retry logic
- Structured Logging: Dual-format (JSON + human-readable) logging
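The Layout Calculator's job can be illustrated with a small sketch. Names like `Pane` and `split_columns` are hypothetical (not the project's actual API), and equal-width columns are assumed as one plausible split-screen strategy:

```rust
/// A webview pane's position and size in logical pixels (hypothetical type).
#[derive(Debug, PartialEq)]
pub struct Pane {
    pub x: u32,
    pub y: u32,
    pub width: u32,
    pub height: u32,
}

/// Divide the window into equal-width columns, one per selected provider.
/// A sketch of the idea only; the real layout module may handle gaps,
/// rounding remainders, and alternative arrangements.
pub fn split_columns(window_w: u32, window_h: u32, providers: u32) -> Vec<Pane> {
    assert!((1..=3).contains(&providers), "1-3 providers supported");
    let col_w = window_w / providers;
    (0..providers)
        .map(|i| Pane { x: i * col_w, y: 0, width: col_w, height: window_h })
        .collect()
}

fn main() {
    // Three providers in a 1200x800 window -> three 400px-wide columns.
    let panes = split_columns(1200, 800, 3);
    assert_eq!(panes.len(), 3);
    assert_eq!(panes[2].x, 800);
}
```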
- ProviderSelector: Provider selection and authentication UI
- PromptInput: Prompt text area with validation
- StatusDisplay: Real-time submission status tracking
- Provider Panels: Split-screen webview containers
```
User Input → PromptInput → submit_prompt command
                 ↓
     StatusTracker creates submissions
                 ↓
      Injector generates scripts
                 ↓
    Execute in provider webviews
                 ↓
   Status updates → StatusDisplay
```
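The Status Tracker's retry behavior can be sketched as a small state machine. The states and the retry limit here are illustrative assumptions, not the project's exact implementation:

```rust
/// Illustrative per-provider submission states; the real tracker may differ.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Submission {
    Pending,
    InProgress { attempt: u8 },
    Succeeded,
    Failed,
}

/// Assumed retry limit, for illustration only.
const MAX_ATTEMPTS: u8 = 3;

/// Advance a submission after an injection attempt completes.
pub fn on_attempt_done(s: Submission, ok: bool) -> Submission {
    match s {
        Submission::InProgress { .. } if ok => Submission::Succeeded,
        Submission::InProgress { attempt } if attempt < MAX_ATTEMPTS => {
            // Failed attempt with retries left: bump the counter and try again.
            Submission::InProgress { attempt: attempt + 1 }
        }
        Submission::InProgress { .. } => Submission::Failed,
        other => other, // Pending and terminal states are unchanged here.
    }
}

fn main() {
    let mut s = Submission::InProgress { attempt: 1 };
    s = on_attempt_done(s, false); // first failure -> retry
    assert_eq!(s, Submission::InProgress { attempt: 2 });
    s = on_attempt_done(s, true); // success on retry
    assert_eq!(s, Submission::Succeeded);
}
```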
ChenChen is committed to user privacy:
- ✅ No credential storage - Passwords never leave webview sandboxes
- ✅ No prompt history - All data in-memory only
- ✅ No telemetry - Zero analytics or tracking
- ✅ Local-only operation - No backend servers
- ✅ Provider domains only - Network requests limited to LLM providers
See `docs/privacy-policy.md` for full details.
```bash
# Run privacy tests
cd src-tauri
cargo test privacy_test

# Manual network monitoring (see docs/testing-guide.md)
```

- Unit Tests: 26 tests (providers, layout, injection, status, logging)
- Contract Tests: 19 tests (public API interfaces)
- Integration Tests: 24 tests (privacy, success rate, logging formats)
- Total: 69 automated tests
- ✅ >=95% success rate for prompt submissions (SC-002)
- ✅ <10 second submission time to 3 LLMs (SC-001)
- ✅ Zero data collection verified by privacy tests
Edit `src-tauri/config/providers.json`:

```json
{
  "config_version": "1.0.0",
  "providers": {
    "ChatGPT": {
      "url": "https://chat.openai.com/",
      "input_selectors": ["textarea", "#prompt-textarea"],
      "submit_selectors": ["button[data-testid='send-button']"],
      "auth_check_selectors": ["button[data-testid='login-button']"]
    }
  }
}
```

- Ensure `src-tauri/config/providers.json` exists
- Check JSON syntax validity
- Clear webview data: delete `~/.local/share/com.chenchen.app/webviews/`
- Restart the application
- Check network connection
- Verify provider websites are accessible
- Review timeout settings (30s default)
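The `input_selectors` lists in the provider configuration act as fallbacks: the injected script tries each selector in order until one matches. A minimal sketch of how such a script might be assembled (`build_injection` is a hypothetical helper; real escaping and submission logic would be more involved):

```rust
/// Build a JS snippet that fills the first element matching any selector.
/// Hypothetical helper: Rust's Debug formatting ({:?}) is used as a crude
/// way to quote-and-escape strings for embedding into JavaScript.
pub fn build_injection(input_selectors: &[&str], prompt: &str) -> String {
    let selectors = input_selectors
        .iter()
        .map(|s| format!("{s:?}"))
        .collect::<Vec<_>>()
        .join(", ");
    format!(
        "const el = [{selectors}].map(s => document.querySelector(s)).find(Boolean);\n\
         if (el) {{ el.value = {prompt:?}; el.dispatchEvent(new Event('input', {{ bubbles: true }})); }}"
    )
}

fn main() {
    let js = build_injection(&["textarea", "#prompt-textarea"], "Hello");
    assert!(js.contains("\"#prompt-textarea\""));
    assert!(js.contains("\"Hello\""));
    println!("{js}");
}
```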
This is a known GTK issue that affects all Tauri multiwebview applications on Linux. The project includes patches to fix this:
Symptoms:
- Multiple webviews stack vertically
- `set_webview_position()` and `set_webview_size()` are ignored
- Works fine on Windows/macOS but fails on Linux
Solution:
The patches are already applied in `Cargo.toml`. If you cloned the repository, they should work automatically. To verify:
```bash
# Check that patches are applied
cargo tree -p tao --depth 0
cargo tree -p tauri-runtime-wry --depth 0

# Should show GitHub URLs from Benjaminlooi forks
```

For more details, see `docs/linux-webview-positioning-fix.md`
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Follow TDD: write tests first
4. Ensure all tests pass (`cargo test && npm test`)
5. Run linters (`cargo clippy && npm run check`)
6. Commit with descriptive messages
7. Push and create a Pull Request
- TDD First: Write tests before implementation
- Privacy by Design: No data collection ever
- Dual Logging: JSON + human-readable formats
- Type Safety: Rust + TypeScript with strict typing
Version: 0.1.1 | Status: MVP Complete | Platform: Windows, macOS, Linux (with positioning patches)
