Welcome to DeepChat! This guide will walk you through installing DeepChat, configuring its features, and using it to interact with various Large Language Models (LLMs).
- Installation and Setup
- Configuring LLM Providers
- Basic Chat Functionalities
- Advanced Chat Functionalities
- Using Search Enhancement
- Using Tool Calling (MCP)
- Privacy and Security Features
- DeepLink Support
To get started with DeepChat, download the latest version for your operating system from the GitHub Releases page.
- Windows: Download the `.exe` installer.
- macOS: Download the `.dmg` installation file.
- Linux: Download the `.AppImage` or `.deb` installation file.
After downloading, run the installer and follow the on-screen instructions to complete the installation.
DeepChat supports a wide range of LLM providers, both cloud-based and local.
- Launch the DeepChat application.
- Click the Settings icon (often a gear or cogwheel symbol).
- Navigate to the "Model Providers" tab.
DeepChat supports various cloud LLMs, including:
- DeepSeek
- OpenAI (including Azure OpenAI)
- Silicon Flow
- Grok
- Gemini
- Anthropic
- DashScope (Alibaba Cloud)
- Doubao (Volcano Engine)
- MiniMax
- Fireworks AI
- PPIO
- GitHub Models
- Moonshot
- OpenRouter
- Qiniu
- Zhipu AI
- Hunyuan (Tencent Cloud)
- And any provider compatible with OpenAI, Gemini, or Anthropic API formats.
To configure a cloud provider:
- Select the provider from the list.
- Enter your API key and any other required credentials.
- Save the configuration.
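To make "any provider compatible with OpenAI, Gemini, or Anthropic API formats" concrete, the sketch below builds the JSON body that an OpenAI-compatible chat endpoint expects. The base URL and API key are placeholders, not real credentials; the exact fields DeepChat sends may differ.

```python
import json

# Placeholders for illustration only -- substitute your provider's values.
BASE_URL = "https://api.example.com/v1"   # hypothetical OpenAI-compatible endpoint
API_KEY = "sk-your-key-here"              # placeholder API key

def build_chat_request(model, user_message):
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

body = build_chat_request("gpt-4o-mini", "Hello!")
print(json.dumps(body))
```

Any provider that accepts a request shaped like this (with the key sent as a `Bearer` token) can typically be added to DeepChat as a custom OpenAI-compatible provider.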
DeepChat has integrated support for Ollama, allowing you to manage and use local models without command-line operations.
- In the "Model Providers" tab, select Ollama.
- DeepChat allows you to:
  - Download Ollama models directly within the application.
  - Manage your existing Ollama models (deploy, run, remove).
- Once configured, you can select your local models when starting a new chat.
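For context on what this integration manages for you: Ollama's local REST API (by default at `http://localhost:11434`) exposes `GET /api/tags`, which lists the models installed on your machine. The sketch below parses a sample response offline; a real client would fetch it over HTTP. Whether DeepChat uses exactly this endpoint is an assumption, but it is the standard way Ollama clients enumerate local models.

```python
import json

# Illustrative /api/tags response body; model names and sizes are examples.
sample_response = json.dumps({
    "models": [
        {"name": "llama3.1:8b", "size": 4920753328},
        {"name": "qwen2.5:7b", "size": 4683087332},
    ]
})

def local_model_names(api_tags_json):
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in json.loads(api_tags_json)["models"]]

print(local_model_names(sample_response))  # ['llama3.1:8b', 'qwen2.5:7b']
```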
- Click the "+" button (usually prominently displayed) to start a new chat session.
- You will typically be prompted to select the LLM you wish to use for this conversation from your configured providers.
- Type your message into the input field at the bottom of the chat window.
- Press Enter or click the send button to send your message to the LLM.
- If you want to explore a different line of thought or ask a follow-up question without altering the current conversation flow, you can fork the conversation.
- Look for a "Fork" option on a specific message or for the entire conversation. This will create a new, separate chat session that branches off from the point you selected.
- If you're not satisfied with a response or if an error occurred, you can retry sending your message or ask the LLM to generate a new response.
- This feature often allows you to get multiple variations of an answer.
DeepChat supports a multi-window and multi-tab architecture, similar to a web browser. This allows for:
- Parallel multi-session operations.
- Non-blocking experience, improving efficiency when working with multiple models or conversations simultaneously.
- DeepChat provides complete Markdown rendering for chat messages.
- This includes support for headings, lists, bold/italic text, links, and code blocks.
- Code blocks are rendered using CodeMirror for syntax highlighting and clarity.
- DeepChat supports displaying multi-modal content within chats.
- This means you can view images generated by models (e.g., using GPT-4o, Gemini, Grok text-to-image capabilities).
- Support for Mermaid diagrams allows for rendering complex diagrams directly in the chat.
- DeepChat supports Artifacts rendering, which provides diverse ways to present results from LLMs, especially when using Tool Calling (MCP).
- This can significantly save token consumption and present complex data more effectively than plain text.
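As an illustration of the Mermaid support mentioned above, a fenced `mermaid` code block like the following renders as a diagram directly in the chat (the flow shown is a generic example, not DeepChat's internal architecture):

```mermaid
flowchart LR
    User -->|prompt| DeepChat
    DeepChat -->|API call| LLM
    LLM -->|response| DeepChat
```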
DeepChat enhances LLM responses by integrating with search engines. This provides more accurate, timely, and verifiable information.
- Automatic Search (MCP Mode): When using MCP-enabled models, the LLM can intelligently decide when to perform a web search to answer your query. It can use built-in integrations with BoSearch or Brave Search.
- Simulated Web Browsing: DeepChat can simulate user web browsing for mainstream search engines like Google, Bing, Baidu, and Sogou Official Accounts. This allows the LLM to "read" search engine results like a human.
- Custom Search Engines: You can configure DeepChat to use virtually any search engine, including internal corporate networks or specialized vertical domain search engines, by setting up a search assistant model.
- Search results and other external information sources are often highlighted within the LLM's response for clarity.
DeepChat features excellent Model Context Protocol (MCP) support, allowing LLMs to use tools and access external resources.
- Configuration: MCP services can be configured through a user-friendly interface. DeepLink support allows for one-click installation of MCP services.
- Capabilities: MCP enables:
- Code Execution: Run code snippets in a built-in Node.js environment.
- Web Information Retrieval: Fetch content from web pages.
- File Operations: Interact with local files.
- Custom Tools: Integrate other custom or third-party tools.
- Display and Debugging:
- Tool calls are displayed clearly and aesthetically within the chat.
- A detailed tool call debugging window shows parameters and return data, with automatic formatting.
- Built-in Services: Many common use cases are supported out-of-the-box with built-in utilities, requiring no secondary installation. Visual model capabilities can also be converted into universally usable functions via MCP.
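For reference, MCP clients commonly describe external servers with a JSON configuration like the sketch below. The `mcpServers` layout follows the convention used across MCP clients; DeepChat's own configuration UI may expose these fields differently, and the allowed directory path is a placeholder.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"],
      "env": {}
    }
  }
}
```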
DeepChat prioritizes user privacy and data security.
- To prevent sensitive information from being accidentally displayed during screen sharing or projections, DeepChat offers a screen projection hiding feature. This typically obscures the chat content when active.
- You can configure DeepChat to use network proxies, adding an extra layer of privacy and potentially bypassing network restrictions.
- Chat and configuration data storage reserves encryption interfaces, laying the groundwork for encrypted local storage.
- DeepChat focuses on local data storage where possible to reduce the risk of information leakage.
DeepChat utilizes rich DeepLink support, which allows:
- Initiating Conversations: Start new chat sessions or interact with specific parts of the application via external links. This enables seamless integration with other applications or workflows.
- One-Click MCP Service Installation: Simplify the setup of MCP services by installing them through a single click on a DeepLink.
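A DeepLink is just a custom-scheme URL that the operating system hands to DeepChat. The sketch below shows how an external tool might assemble one; the `deepchat://` scheme is implied by the feature name, but the `start` route and `msg` parameter are assumptions for illustration — consult the project's DeepLink documentation for the real format.

```python
from urllib.parse import urlencode

# The route and parameter names here are hypothetical examples.
def build_deeplink(route, **params):
    """Assemble a deepchat:// URL with percent/plus-encoded query parameters."""
    query = urlencode(params)
    return f"deepchat://{route}?{query}" if query else f"deepchat://{route}"

link = build_deeplink("start", msg="Summarize this page")
print(link)  # deepchat://start?msg=Summarize+this+page
```

Opening such a URL from another application (or a plain `<a href>` link) would then launch or focus DeepChat with the encoded context.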
This user guide should help you get the most out of DeepChat. For more detailed information on specific features or for troubleshooting, please refer to the project's GitHub repository or the community forums.