8 changes: 6 additions & 2 deletions apps/web/content/docs/faq/10.ai-models-and-privacy.mdx
@@ -245,14 +245,18 @@ These are the supported local models:

### App updates

- Char checks for updates using the Tauri updater system.
+ Char checks for updates automatically in release builds and downloads them in the background when one is available.

**What is sent:**
- Your current app version and platform

**Where it goes:**
- [CrabNebula](https://crabnebula.dev) — release hosting and update distribution

**How often:**
- An update check runs roughly every 30 minutes while the app is open
- If an update is found, the app downloads it and installs it on the next restart
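
For illustration, the comparison behind an update check like this can be sketched in Python (a hypothetical sketch, not Char's or Tauri's actual code; it assumes plain dotted numeric version strings):

```python
CHECK_INTERVAL_SECONDS = 30 * 60  # roughly every 30 minutes while the app is open


def parse_version(version: str) -> tuple[int, ...]:
    """'1.4.2' -> (1, 4, 2). Assumes plain dotted numeric versions."""
    return tuple(int(part) for part in version.split("."))


def update_available(current: str, latest: str) -> bool:
    """True when the release server advertises a newer version than ours."""
    return parse_version(latest) > parse_version(current)
```

When the check finds a newer version, the app downloads the release in the background and applies it on the next restart.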

### Authentication (when signed in)

When you sign in for Pro or cloud features, Char authenticates via Supabase.
@@ -274,7 +278,7 @@
- Send your meeting content in analytics — analytics only includes event names and app metadata
- Lock your data in a proprietary database — everything is plain Markdown and JSON files you can open, move, and use with other tools

- **To use Char completely offline:** Use a local STT model for transcription and a local LLM (LM Studio or Ollama) for AI features. The only background network requests are the connectivity check and update checks.
+ **To keep meeting content fully local:** Use a local STT model for transcription and a local LLM (LM Studio or Ollama) for AI features. Your audio, transcripts, notes, and prompts stay on your device. Background network traffic can still come from the connectivity check, update checks/downloads, analytics if enabled, and crash reporting in release builds.

**To maximize privacy:**
1. Use a local STT model for transcription — see [Local Models](/docs/developers/local-models)
19 changes: 19 additions & 0 deletions apps/web/content/docs/faq/11.troubleshooting.mdx
@@ -30,3 +30,22 @@ If issues persist, you may need to restart your Mac to refresh the keychain state
### Why this happens

When Char updates, sometimes the code signing identity changes, which can cause macOS to treat the new version as a different application. This triggers keychain access prompts because macOS is protecting your stored credentials from unauthorized access.

## Why does Char show network activity in Activity Monitor when I'm not using it?

This is usually expected background traffic, not your meeting content being uploaded.

The biggest source is Char's connectivity check. The desktop app sends a bare `HEAD` request to `https://www.google.com/generate_204` every 2 seconds so it can tell whether cloud-only features should be treated as online or offline.
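
As a rough illustration (a hypothetical Python sketch, not Char's actual implementation), a `generate_204`-style probe works like this: the endpoint answers with an empty HTTP 204, so anything else — an error, a timeout, or a rewritten response — is treated as offline:

```python
import urllib.request

PROBE_URL = "https://www.google.com/generate_204"


def is_online(url: str = PROBE_URL, timeout: float = 2.0) -> bool:
    """Return True if the connectivity probe answers with HTTP 204."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            # An empty 204 means the request was not intercepted or filtered.
            return resp.status == 204
    except OSError:
        # DNS failure, timeout, refused connection, etc. -> treat as offline.
        return False
```

An app typically runs a probe like this on a timer and flips cloud-only features between online and offline based on the result.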

In release builds, Char also checks for updates in the background every 30 minutes and automatically downloads an update when one is available. If you have analytics enabled, Char may also send anonymous usage events to PostHog and Outlit. Release builds also include Sentry for crash reporting.

If you are using local STT plus a local LLM like LM Studio or Ollama, your audio, transcripts, notes, and prompts still stay on your device. Local AI does **not** disable every background network request, but it does keep your meeting content local.

If you want the smallest possible network footprint:

1. Use a local STT model and a local LLM
2. Disable analytics in Settings
3. Stay signed out unless you need Pro or other cloud features
4. Avoid cloud sync, cloud transcription, and cloud LLM providers

For the complete data-flow breakdown, see [AI Models & Data Privacy](/docs/faq/ai-models-and-privacy).