MCP Web UI
MCP Web UI is a web-based user interface that serves as a Host within the Model Context Protocol (MCP) architecture. It provides a powerful and user-friendly interface for interacting with Large Language Models (LLMs) while managing context aggregation and coordination between clients and servers.
Overview
MCP Web UI is designed to simplify and enhance interactions with AI language models by providing:
- A unified interface for multiple LLM providers
- Real-time, streaming chat experiences
- Flexible configuration and model management
- Robust context handling using the MCP protocol
Features
- Multi-Provider LLM Integration:
- Anthropic (Claude models)
- OpenAI (GPT models)
- Ollama (local models)
- OpenRouter (multiple providers)
- Intuitive Chat Interface
- Real-time Response Streaming via Server-Sent Events (SSE)
- Dynamic Configuration Management
- Advanced Context Aggregation
- Persistent Chat History using BoltDB
- Flexible Model Selection
Prerequisites
- Go 1.23+
- Docker (optional)
- API keys for desired LLM providers
Installation
Quick Start
1. Clone the repository:
git clone https://github.com/MegaGrindStone/mcp-web-ui.git
cd mcp-web-ui
2. Configure your environment:
mkdir -p $HOME/.config/mcpwebui
cp config.example.yaml $HOME/.config/mcpwebui/config.yaml
3. Set up API keys:
export ANTHROPIC_API_KEY=your_anthropic_key
export OPENAI_API_KEY=your_openai_key
export OPENROUTER_API_KEY=your_openrouter_key
Running the Application
Local Development
go mod download
go run ./cmd/server/main.go
Docker Deployment
docker build -t mcp-web-ui .
docker run -p 8080:8080 \
-v $HOME/.config/mcpwebui/config.yaml:/app/config.yaml \
-e ANTHROPIC_API_KEY \
-e OPENAI_API_KEY \
-e OPENROUTER_API_KEY \
mcp-web-ui
Configuration
The configuration file (config.yaml) provides comprehensive settings for customizing the MCP Web UI.
Server Configuration
- port: The port on which the server will run (default: 8080)
- logLevel: Logging verbosity (debug, info, warn, error; default: info)
- logMode: Log output format (json, text; default: text)
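For example, a minimal server block using the defaults listed above:
port: 8080
logLevel: info
logMode: text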
Prompt Configuration
- systemPrompt: Default system prompt for the AI assistant
- titleGeneratorPrompt: Prompt used to generate chat titles
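A short sketch of both prompt keys (the title-generator prompt text here is an illustrative placeholder, not the project's default):
systemPrompt: You are a helpful assistant.
titleGeneratorPrompt: Generate a concise title for this conversation.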
LLM (Language Model) Configuration
The llm section supports multiple providers with provider-specific configurations.
Common LLM Parameters
- provider: Choose from ollama, anthropic, openai, openrouter
- model: Specific model name (e.g., 'claude-3-5-sonnet-20241022')
- parameters: Fine-tune model behavior:
  - temperature: Randomness of responses (0.0-1.0)
  - topP: Nucleus sampling threshold
  - topK: Number of highest probability tokens to keep
  - frequencyPenalty: Reduce repetition
  - presencePenalty: Encourage new topics
  - maxTokens: Maximum response length
  - stop: Sequences to stop generation
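A sketch of an llm block combining a few of these parameters (the values are illustrative, not tuned recommendations):
llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  parameters:
    temperature: 0.7
    topP: 0.9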
Provider-Specific Configurations
- Ollama:
  - host: Ollama server URL (default: http://localhost:11434)
- Anthropic:
  - apiKey: Anthropic API key (can use ANTHROPIC_API_KEY env variable)
  - maxTokens: Maximum token limit
  - Stop sequences consisting only of whitespace are ignored; whitespace is trimmed.
- OpenAI:
  - apiKey: OpenAI API key (can use OPENAI_API_KEY env variable)
  - endpoint: OpenAI API endpoint (default: https://api.openai.com/v1)
  - For alternative OpenAI-compatible APIs, see the project's discussions.
- OpenRouter:
  - apiKey: OpenRouter API key (can use OPENROUTER_API_KEY env variable)
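For instance, a local Ollama setup might look like the sketch below (this assumes provider-specific keys such as host sit alongside the common keys in the llm block; the model name is illustrative):
llm:
  provider: ollama
  host: http://localhost:11434
  model: llama3
  parameters:
    temperature: 0.7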
Title Generator Configuration
The genTitleLLM section allows separate configuration for title generation, defaulting to the main LLM if not specified.
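For example, title generation could be routed to a smaller local model while the main llm block uses a hosted provider (the model choice here is illustrative):
genTitleLLM:
  provider: ollama
  model: llama3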
MCP Server Configurations
- mcpSSEServers: Configure Server-Sent Events (SSE) servers
  - url: SSE server URL
  - maxPayloadSize: Maximum payload size
- mcpStdIOServers: Configure Standard Input/Output servers
  - command: Command to run the server
  - args: Arguments for the server command
Example MCP Server Configurations
SSE Server Example:
mcpSSEServers:
filesystem:
url: https://yoursseserver.com
maxPayloadSize: 1048576 # 1MB
StdIO Server Examples:
- Using the official filesystem MCP server:
mcpStdIOServers:
filesystem:
command: npx
args:
- -y
- "@modelcontextprotocol/server-filesystem"
- "/path/to/your/files"
- Using the go-mcp filesystem MCP server:
mcpStdIOServers:
filesystem:
command: go
args:
- run
- github.com/your_username/your_app # Replace with your app
- -path
- "/data/mcp/filesystem"
Example Configuration Snippet
port: 8080
logLevel: info
systemPrompt: You are a helpful assistant.
llm:
provider: anthropic
model: claude-3-5-sonnet-20241022
maxTokens: 1000
parameters:
temperature: 0.7
genTitleLLM:
provider: openai
model: gpt-3.5-turbo
Project Structure
- cmd/: Application entry point
- internal/handlers/: Web request handlers
- internal/models/: Data models
- internal/services/: LLM provider integrations
- static/: Static assets (CSS)
- templates/: HTML templates
Contributing
- Fork the repository
- Create a feature branch
- Commit your changes
- Push and create a Pull Request
License
MIT License