HyperChat
by: BigSweetPotatoStudio
HyperChat is an open chat client that uses APIs from various LLMs to deliver the best chat experience, and implements productivity tools through the MCP protocol.
📌Overview
Purpose:
To provide an open-source, cross-platform chat client that leverages various Large Language Model (LLM) APIs and supports MCP for an enhanced chat experience and productivity tools.
Overview:
HyperChat is a versatile chat application designed for seamless integration with multiple LLM providers (including OpenAI, Claude, Qwen, Gemini, among others), enabling advanced conversational features and automation. Built to run on Windows, MacOS, and Linux, HyperChat supports both command-line and web interfaces, Docker deployment, and integrates tightly with MCP for extended functionalities. It emphasizes productivity, multi-language support, and an extensible plugin system for resource, prompt, and tool management.
Key Features:
- Multi-Platform & Multi-Model Support: Runs on Windows, MacOS, and Linux; supports a wide range of OpenAI-style LLM APIs, enabling flexible backend choices and deployment options.
- Advanced Productivity Tools & Extensibility: Features include incremental WebDAV synchronization, variable-powered HyperPrompt, scheduled tasks, Agents with preset prompts, RAG knowledge base integration, multi-chat workspaces, dark mode, and an extensible system for resources, prompts, and tools, empowering both individual efficiency and team collaboration.
Introduction
- Supports OpenAI-style LLMs: OpenAI, Claude, Claude (OpenAI), Qwen, Deepseek, GLM, Ollama, xAI, Gemini.
- Fully supports MCP.
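Because all of these providers speak the same OpenAI-style protocol, one client can target any of them by swapping the base URL and model name. A minimal TypeScript sketch using the `openai` npm package; the Deepseek endpoint and model below are illustrative examples, not HyperChat configuration:

```typescript
// Sketch: "OpenAI-style" means the same client code works against any
// compatible provider. Endpoint and model here are examples (Deepseek's
// published OpenAI-compatible API), not HyperChat internals.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.LLM_API_KEY,
  baseURL: "https://api.deepseek.com/v1", // swap for any compatible provider
});

async function main() {
  const reply = await client.chat.completions.create({
    model: "deepseek-chat", // provider-specific model name
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0].message.content);
}
main();
```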
Features
- Cross-platform: Windows, MacOS, Linux
- Command-line run: `npx -y @dadigua/hyper-chat` (default port 16100)
- Docker supported
- WebDAV: incremental synchronization via content hashes (see the sketch after this list)
- HyperPrompt syntax for variables (including JS code), syntax checking, live preview
- MCP extension support
- Dark mode
- Resources, Prompts, Tools supported
- Multilingual: English and Chinese
- Artifacts, SVG, HTML, Mermaid rendering
- Agent definition, preset prompts, and approved MCP selection
- Scheduled tasks with agent assignment and status tracking
- KaTeX for math formulas; enhanced code highlighting and quick copying
- RAG implementation based on MCP knowledge base
- ChatSpace concept: concurrent multi-chat
- Chat model comparison selection
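As mentioned in the WebDAV bullet above, synchronization is incremental and hash-based. An illustrative TypeScript sketch of the general idea, using the `webdav` npm package; this is a sketch of the technique, not HyperChat's actual implementation, and the server URL and manifest shape are assumptions:

```typescript
// Hash-based incremental sync (illustrative): upload only files whose
// content hash differs from what the remote manifest recorded last time.
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";
import { createClient } from "webdav"; // npm: webdav

const dav = createClient("https://dav.example.com/hyperchat", {
  username: process.env.DAV_USER!,
  password: process.env.DAV_PASS!,
});

async function syncFile(
  localPath: string,
  remotePath: string,
  manifest: Record<string, string>, // remotePath -> last-synced content hash
) {
  const data = await readFile(localPath);
  const hash = createHash("sha256").update(data).digest("hex");
  if (manifest[remotePath] === hash) return; // unchanged: skip the upload
  await dav.putFileContents(remotePath, data);
  manifest[remotePath] = hash; // record the new hash for the next run
}
```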
Roadmap
- Implement a multi-Agent interaction system
Supported LLMs
| LLM | Usability | Remarks |
|---|---|---|
| Claude | ⭐⭐⭐⭐⭐⭐ | |
| OpenAI | ⭐⭐⭐⭐⭐ | Multi-step function calls; works with gpt-4o-mini too |
| Gemini | ⭐⭐⭐⭐⭐ | |
| Qwen | ⭐⭐⭐⭐ | |
| Doubao | ⭐⭐⭐ | |
| Deepseek | ⭐⭐⭐⭐ | Recently improved |
Getting Started
1. Configure your LLM API Key
Ensure your LLM service is OpenAI-style compatible.
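A quick way to verify compatibility is to probe the standard chat completions route. The base URL and model name below are placeholders for your own service:

```typescript
// Probe an OpenAI-style endpoint (sketch; URL and model are placeholders).
async function probe() {
  const res = await fetch("https://your-llm-host/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.LLM_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "your-model",
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  console.log(res.status, await res.json()); // expect 200 and a choices array
}
probe();
```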
2. Install dependencies
You need UV and Node.js; Docker is optional.
UV Installation
Check the official UV repository for details.
```bash
# MacOS
brew install uv

# Windows
winget install --id=astral-sh.uv -e
```
Node.js Installation
See the official Node.js website for details.

```bash
# MacOS
brew install node

# Windows
winget install OpenJS.NodeJS.LTS
```
Development
```bash
# Run from the repository root: install dependencies for each package
cd electron && npm install && cd ..
cd web && npm install && cd ..
npm install

# Start the development build
npm run dev
```
Disclaimer
This project is for learning and exchange purposes only. The developers are not responsible for actions taken by users (such as web scraping).