MCPHost πŸ€–

A CLI host application that enables Large Language Models (LLMs) to interact with external tools through the Model Context Protocol (MCP). Currently supports Claude, Ollama, Google Gemini, and OpenAI-compatible models.

Discuss the Project on Discord

Overview 🌟

MCPHost acts as a host in the MCP client-server architecture, where:

  • Hosts (like MCPHost) are LLM applications that manage connections and interactions
  • Clients maintain 1:1 connections with MCP servers
  • Servers provide context, tools, and capabilities to the LLMs

This architecture allows language models to:

  • Access external tools and data sources πŸ› οΈ
  • Maintain consistent context across interactions πŸ”„
  • Execute commands and retrieve information safely πŸ”’

Currently supports:

  • Claude 3.5 Sonnet (claude-3-5-sonnet-20240620)
  • Any Ollama-compatible model with function calling support
  • Google Gemini models
  • Any OpenAI-compatible local or online model with function calling support

Features ✨

  • Interactive conversations with supported models
  • Support for multiple concurrent MCP servers
  • Dynamic tool discovery and integration
  • Tool calling capabilities across all supported model types
  • Configurable MCP server locations and arguments
  • Consistent command interface across model types
  • Configurable message history window for context management

Requirements πŸ“‹

  • Go 1.23 or later
  • Anthropic API key (for Claude)
  • Local Ollama installation with desired models (for Ollama)
  • Google API key (for Google/Gemini models)
  • One or more MCP-compatible tool servers

Environment Setup πŸ”§

  1. Anthropic API Key (for Claude):
export ANTHROPIC_API_KEY='your-api-key'
  2. Ollama Setup:
ollama pull mistral
  • Ensure Ollama is running:
ollama serve
You can also configure the Ollama client using standard environment variables, such as OLLAMA_HOST for the Ollama base URL.
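For example, to point MCPHost at an Ollama instance running on another machine (the address below is illustrative; adjust to your setup):

# Hypothetical remote Ollama endpoint
export OLLAMA_HOST='http://192.168.1.10:11434'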

  3. Google API Key (for Gemini):
export GOOGLE_API_KEY='your-api-key'
  4. OpenAI-compatible setup:
  • Obtain your API server base URL, API key, and model name.
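For example, the API key can be supplied via the environment variable MCPHost recognizes (see Flags below), with the base URL and model name passed as flags at run time:

export OPENAI_API_KEY='your-api-key'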

Installation πŸ“¦

go install github.com/mark3labs/mcphost@latest
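To confirm the binary is available, assuming Go's bin directory (typically $HOME/go/bin) is on your PATH:

# Should print usage information if the install succeeded
mcphost --help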

Configuration βš™οΈ

MCPHost creates a configuration file at ~/.mcp.json if it doesn't exist. You can specify a custom location using the --config flag.
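For example, to launch with a config file outside the default location (the path below is illustrative):

mcphost --config /path/to/custom-mcp.json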

Example configuration:

{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "/tmp/foo.db"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/tmp"
      ]
    }
  }
}

Each MCP server entry requires:

  • command: The command to run (e.g., uvx, npx)
  • args: Array of arguments for the command (e.g., database path or directory)

Usage πŸš€

MCPHost is a CLI tool that provides a unified interface for interacting with different AI models, with tool support supplied by the MCP servers defined in your configuration.

Available Models

Specify models with the --model (-m) flag:

  • Anthropic Claude (default): anthropic:claude-3-5-sonnet-latest
  • OpenAI or compatible: openai:gpt-4
  • Ollama models: ollama:modelname
  • Google: google:gemini-2.0-flash

Examples

# Use Ollama with Qwen model
mcphost -m ollama:qwen2.5:3b

# Use OpenAI's GPT-4
mcphost -m openai:gpt-4

# Use OpenAI-compatible model with custom API
mcphost --model openai:<your-model-name> \
  --openai-url <your-base-url> \
  --openai-api-key <your-api-key>
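The remaining documented providers follow the same provider:model pattern, for example:

# Use Google Gemini
mcphost -m google:gemini-2.0-flash

# Use Anthropic Claude explicitly (also the default)
mcphost -m anthropic:claude-3-5-sonnet-latest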

Flags

  • --anthropic-url string: Base URL for Anthropic API (default: api.anthropic.com)
  • --anthropic-api-key string: Anthropic API key (or via ANTHROPIC_API_KEY)
  • --config string: Config file location (default: $HOME/.mcp.json)
  • --debug: Enable debug logging
  • --message-window int: Number of messages to keep in context (default: 10)
  • -m, --model string: Model to use (format: provider:model) (default: anthropic:claude-3-5-sonnet-latest)
  • --openai-url string: Base URL for OpenAI API (default: api.openai.com)
  • --openai-api-key string: OpenAI API key (or via OPENAI_API_KEY)
  • --google-api-key string: Google API key (or via GOOGLE_API_KEY)
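As a sketch of how these flags compose, the following combines a model choice with a larger context window and debug logging (values are illustrative):

mcphost -m ollama:qwen2.5:3b --message-window 20 --debug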

Interactive Commands

While chatting, use these commands:

  • /help: Show available commands
  • /tools: List all available tools
  • /servers: List configured MCP servers
  • /history: Display conversation history
  • /quit: Exit the application
  • Ctrl+C: Exit at any time

Global Flags

  • --config: Specify custom config file location
  • --message-window: Set number of messages to keep in context (default: 10)

MCP Server Compatibility πŸ”Œ

MCPHost works with any MCP-compliant server. Reference implementations are available at the MCP Servers Repository.
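For example, a reference server from that repository can be added as another entry in ~/.mcp.json alongside the ones shown above (a sketch, assuming the server is published as mcp-server-fetch and runnable via uvx):

{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}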

Contributing 🀝

Contributions are welcome! You can:

  • Submit bug reports or feature requests
  • Create pull requests for improvements
  • Share custom MCP servers
  • Improve documentation

Please follow good coding practices and include tests.

License πŸ“„

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments πŸ™

  • Thanks to the Anthropic team for Claude and the MCP specification
  • Thanks to the Ollama team for their local LLM runtime
  • Thanks to all contributors who helped improve this tool