
mcp_omni_connect

by: Abiorh001

MCPOmni Connect is a versatile command-line interface (CLI) client designed to connect to various Model Context Protocol (MCP) servers using stdio transport. It provides seamless integration with OpenAI models and supports dynamic tool and resource management across multiple servers.

Created: 19/03/2025
Tags: CLI, OpenAI

πŸ“ŒOverview

Purpose: MCPOmni Connect aims to provide a unified command-line interface for seamless integration and interaction with multiple Model Context Protocol (MCP) servers and AI models.

Overview: MCPOmni Connect is a versatile CLI framework designed to connect various MCP servers, enabling efficient communication through multiple transport protocols while harnessing the power of AI models for intelligent operations and user interactions.

Key Features:

  • Universal Connectivity: Supports standard input/output (stdio), Server-Sent Events (SSE), Docker integration, and NPX execution for versatile server connections.

  • AI-Powered Intelligence: Integrates advanced models from OpenAI, OpenRouter, and Groq, enabling dynamic system prompts, intelligent context management, and automatic tool orchestration based on user requests.

  • Security & Privacy: Ensures explicit user control over tool execution, strict data isolation, encryption for secure communication, and a privacy-first approach with minimal data collection.

  • Dynamic Tool Management: Facilitates automatic discovery and execution of tools across servers, with real-time updates on tool availability and intelligent selection based on context.


πŸš€ MCPOmni Connect - Universal Gateway to MCP Servers

MCPOmni Connect is a powerful, universal command-line interface (CLI) that serves as your gateway to the Model Context Protocol (MCP) ecosystem. It seamlessly integrates multiple MCP servers, AI models, and various transport protocols into a unified, intelligent interface.

✨ Key Features

πŸ”Œ Universal Connectivity

  • Multi-Protocol Support
    • Native support for stdio transport
    • Server-Sent Events (SSE) for real-time communication
    • Docker container integration
    • NPX package execution
    • Extensible transport layer for future protocols
  • Agentic Mode
    • Autonomous task execution without human intervention
    • Advanced reasoning and decision-making capabilities
    • Seamless switching between chat and agentic modes
    • Self-guided tool selection and execution
    • Complex task decomposition and handling
  • Orchestrator Agent Mode
    • Advanced planning for complex multi-step tasks
    • Intelligent task delegation across multiple MCP servers
    • Dynamic agent coordination and communication
    • Automated subtask management and execution

🧠 AI-Powered Intelligence

  • Advanced LLM Integration
    • Support for OpenAI, OpenRouter, Groq, Gemini, and DeepSeek models
    • Dynamic system prompts based on available capabilities
    • Intelligent context management
    • Automatic tool selection and chaining
    • Universal model support via custom ReAct Agent
      • Handles models without native function calling
      • Dynamic function execution based on user requests
      • Intelligent tool orchestration
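
For models without native function calling, a ReAct-style loop can be sketched roughly as follows. This is an illustrative assumption about how such an agent works, not the project's actual implementation; the `Action:` output convention and function names are hypothetical.

```python
import json
import re

# Hypothetical ReAct-style step: the model emits "Action: <tool> <json-args>"
# when it wants a tool, or plain text for a final answer. The client parses
# the action, executes the tool, and would feed the result back as an
# observation on the next turn.
def react_step(model_output: str, tools: dict):
    """Execute the tool call in one model turn, or return None for a final answer."""
    match = re.search(r"Action:\s*(\w+)\s*(\{.*\})", model_output)
    if not match:
        return None  # no tool call -> treat the output as the final answer
    name, raw_args = match.group(1), match.group(2)
    args = json.loads(raw_args)
    return tools[name](**args)

tools = {"add": lambda a, b: a + b}
print(react_step('Action: add {"a": 2, "b": 3}', tools))  # 5
```

In the real client the loop would repeat until the model stops emitting actions, appending each observation to the conversation context.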

πŸ”’ Security & Privacy

  • Explicit User Control
    • All tool executions require explicit user approval in chat mode
    • Clear explanation of tool actions before execution
    • Transparent disclosure of data access and usage
  • Data Protection
    • Strict data access controls
    • Server-specific data isolation
    • No unauthorized data exposure
  • Privacy-First Approach
    • Minimal data collection
    • User data remains on specified servers
    • No cross-server data sharing without consent
  • Secure Communication
    • Encrypted transport protocols
    • Secure API key management
    • Environment variable protection

πŸ’Ύ Memory Management

  • Redis-Powered Persistence
    • Long-term conversation memory storage
    • Session persistence across restarts
    • Configurable memory retention
    • Easy memory toggle with commands
  • Chat History File Storage
    • Save and load complete chat conversations
    • Continue conversations from where you left off
    • Persistent chat history across sessions
    • File-based backup and restoration
  • Intelligent Context Management
    • Automatic context pruning
    • Relevant information retrieval
    • Memory-aware responses
    • Cross-session context maintenance
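
The file-based chat history described above can be sketched minimally as a JSON file of role/content messages; the function names and file layout here are illustrative assumptions, not the project's actual format.

```python
import json
from pathlib import Path

# Hypothetical sketch of file-based chat history: each session is saved as
# a JSON list of {"role", "content"} messages so a conversation can be
# resumed after a restart.
def save_history(path: Path, messages: list[dict]) -> None:
    path.write_text(json.dumps(messages, indent=2))

def load_history(path: Path) -> list[dict]:
    # Missing file -> fresh session with empty history
    return json.loads(path.read_text()) if path.exists() else []

history_file = Path("chat_history.json")
save_history(history_file, [{"role": "user", "content": "hello"}])
resumed = load_history(history_file)
print(resumed[0]["content"])  # hello
```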

πŸ’¬ Prompt Management

  • Advanced Prompt Handling
    • Dynamic prompt discovery across servers
    • Flexible argument parsing (JSON and key-value formats)
    • Cross-server prompt coordination
    • Intelligent prompt validation and context-aware execution
    • Support for complex nested arguments with automatic type conversion
  • Client-Side Sampling Support
    • Dynamic sampling configuration
    • Flexible LLM response generation
    • Customizable sampling parameters with real-time adjustments
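
The two argument formats shown later in the usage section (key=value segments and a JSON payload) could be parsed along these lines. The function name is an illustrative assumption, not the project's actual parser.

```python
import json

# Hypothetical parser for the two prompt-argument formats:
#   /prompt:weather/location=tokyo          (key=value segments)
#   /prompt:analyze-data/{"dataset": ...}   (JSON payload)
def parse_prompt_command(command: str) -> tuple[str, dict]:
    body = command.removeprefix("/prompt:")
    name, _, rest = body.partition("/")
    if rest.startswith("{"):
        return name, json.loads(rest)          # JSON format
    args = {}
    for segment in filter(None, rest.split("/")):
        key, _, value = segment.partition("=")
        args[key] = value                      # key=value format
    return name, args

print(parse_prompt_command("/prompt:weather/location=tokyo"))
# ('weather', {'location': 'tokyo'})
```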

πŸ› οΈ Tool Orchestration

  • Dynamic Tool Discovery & Management
    • Automatic tool capability detection
    • Cross-server tool coordination
    • Intelligent tool selection based on context
    • Real-time tool availability updates
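
Cross-server tool routing can be pictured as a registry mapping each discovered tool name to the server that provides it; the class and method names here are assumptions for illustration only.

```python
# Illustrative sketch of cross-server tool routing: each connected server
# advertises its tools, and the client resolves a tool name to its owning
# server before dispatching the call.
class ToolRegistry:
    def __init__(self):
        self._tools = {}  # tool name -> server name

    def register(self, server: str, tool_names: list[str]) -> None:
        for name in tool_names:
            self._tools[name] = server

    def route(self, tool_name: str) -> str:
        return self._tools[tool_name]  # KeyError if no server provides it

registry = ToolRegistry()
registry.register("filesystem-server", ["read_file", "write_file"])
registry.register("sse-server", ["search"])
print(registry.route("search"))  # sse-server
```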

πŸ“¦ Resource Management

  • Universal Resource Access
    • Cross-server resource discovery
    • Unified resource addressing and automatic resource type detection
    • Smart content summarization

πŸ”„ Server Management

  • Advanced Server Handling
    • Multiple simultaneous server connections
    • Automatic server health monitoring
    • Graceful connection management
    • Dynamic capability updates

πŸ—οΈ Architecture

Core Components

MCPOmni Connect
β”œβ”€β”€ Transport Layer
β”‚   β”œβ”€β”€ Stdio Transport
β”‚   β”œβ”€β”€ SSE Transport
β”‚   └── Docker Integration
β”œβ”€β”€ Session Management
β”‚   β”œβ”€β”€ Multi-Server Orchestration
β”‚   └── Connection Lifecycle Management
β”œβ”€β”€ Tool Management
β”‚   β”œβ”€β”€ Dynamic Tool Discovery
β”‚   β”œβ”€β”€ Cross-Server Tool Routing
β”‚   └── Tool Execution Engine
└── AI Integration
    β”œβ”€β”€ LLM Processing
    β”œβ”€β”€ Context Management
    └── Response Generation
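
The extensible transport layer in the diagram above could be modeled as a small abstract interface that stdio, SSE, and future transports implement. Class names and the stubbed behavior are assumptions, not the actual implementation.

```python
from abc import ABC, abstractmethod

# Illustrative transport abstraction: concrete transports (stdio, SSE,
# Docker) implement the same send/receive interface, so new protocols
# can be added without touching session management.
class Transport(ABC):
    @abstractmethod
    def send(self, message: dict) -> None: ...

    @abstractmethod
    def receive(self) -> dict: ...

class StdioTransport(Transport):
    def __init__(self):
        self.outbox = []

    def send(self, message: dict) -> None:
        self.outbox.append(message)  # real code would write to the server's stdin

    def receive(self) -> dict:
        return {"ok": True}          # real code would read from the server's stdout

t = StdioTransport()
t.send({"method": "tools/list"})
print(t.receive())
```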

πŸš€ Getting Started

Prerequisites

  • Python 3.10+
  • LLM API key
  • UV package manager (recommended)
  • Redis server (optional, for persistent memory)

Installation

# with uv recommended
uv add mcpomni-connect

# or using pip
pip install mcpomni-connect

Configuration

# Set environment variables
echo "LLM_API_KEY=your_key_here" > .env

# Optional: Redis configuration
echo "REDIS_HOST=localhost" >> .env
echo "REDIS_PORT=6379" >> .env
echo "REDIS_DB=0" >> .env

# Configure your MCP servers in servers_config.json

Start CLI

mcpomni_connect

πŸ§ͺ Testing

Run Tests

pytest tests/ -v           # Run all tests with verbose output
pytest tests/test_specific_file.py -v  # Run specific test file
pytest tests/ --cov=src --cov-report=term-missing  # Run with coverage report

Test Structure

tests/
β”œβ”€β”€ unit/           # Unit tests for individual components

πŸ› οΈ Development Quick Start

  1. Clone repository and enter directory
    git clone https://github.com/Abiorh001/mcp_omni_connect.git
    cd mcp_omni_connect
    
  2. Create and activate virtual environment
    uv venv
    source .venv/bin/activate
    
  3. Install dependencies
    uv sync
    
  4. Configuration
    echo "LLM_API_KEY=your_key_here" > .env
    # Configure your servers in servers_config.json
    
  5. Start client
    uv run src/main.py
    # or: python src/main.py
    

Server Configuration Example

{
    "LLM": {
        "provider": "openai",
        "model": "gpt-4",
        "temperature": 0.5,
        "max_tokens": 5000,
        "max_context_length": 30000,
        "top_p": 0
    },
    "mcpServers": {
        "filesystem-server": {
            "command": "npx",
            "args": [
                "@modelcontextprotocol/server-filesystem",
                "/path/to/files"
            ]
        },
        "sse-server": {
            "type": "sse",
            "url": "http://localhost:3000/mcp",
            "headers": {
                "Authorization": "Bearer token"
            }
        },
        "docker-server": {
            "command": "docker",
            "args": ["run", "-i", "--rm", "mcp/server"]
        }
    }
}
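
A client reading this config would typically distinguish stdio-style servers (those with a `command`) from SSE servers (those with `"type": "sse"` and a `url`). This sketch of that split is an assumption about the loading logic, not the project's actual code.

```python
import json

# Illustrative sketch of loading servers_config.json and separating
# stdio-style servers (launched via "command") from SSE servers
# (connected via "url").
def load_server_config(raw: str) -> dict:
    config = json.loads(raw)
    servers = config["mcpServers"]
    stdio = {n: s for n, s in servers.items() if "command" in s}
    sse = {n: s for n, s in servers.items() if s.get("type") == "sse"}
    return {"stdio": stdio, "sse": sse}

raw = ('{"mcpServers": {'
       '"fs": {"command": "npx", "args": []}, '
       '"web": {"type": "sse", "url": "http://localhost:3000/mcp"}}}')
print(load_server_config(raw))
```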

🎯 Usage

Interactive Commands

  • /tools - List available tools
  • /prompts - View available prompts
  • /prompt:<name>/<args> - Execute prompt with arguments
  • /resources - List available resources
  • /resource:<uri> - Access and analyze resource
  • /debug - Toggle debug mode
  • /refresh - Update server capabilities
  • /memory - Toggle Redis memory persistence
  • /mode:auto - Switch to autonomous agentic mode
  • /mode:chat - Switch back to interactive chat mode
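
Commands of this shape (`/name` or `/name:arg`) lend themselves to a simple dispatcher; the handlers below are placeholders and the function is an illustrative assumption, not the client's actual command loop.

```python
# Hypothetical dispatcher for the slash commands listed above: split the
# command into a name and an optional argument after ":", then look up
# a handler. Handler bodies are placeholders.
def dispatch(command: str) -> str:
    name, _, arg = command.lstrip("/").partition(":")
    handlers = {
        "tools": lambda _: "listing tools",
        "debug": lambda _: "debug toggled",
        "mode": lambda m: f"switched to {m} mode",
    }
    handler = handlers.get(name)
    return handler(arg) if handler else f"unknown command: {name}"

print(dispatch("/mode:auto"))  # switched to auto mode
```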

Memory and Chat History

/memory            # Toggle memory persistence on/off

# Example status output:
Memory persistence is now ENABLED using Redis
Memory persistence is now DISABLED

Operation Modes

/mode:auto         # Switch to autonomous mode

# System response:
Now operating in AUTONOMOUS mode. I will execute tasks independently.

/mode:chat         # Switch back to chat mode

# System response:
Now operating in CHAT mode. I will ask for approval before executing tasks.

Mode Descriptions

  • Chat Mode (Default)
    • Requires approval for tool execution
    • Interactive, step-by-step conversation
    • Detailed action explanations
  • Autonomous Mode
    • Independent task execution
    • Self-guided decision making
    • Automatic tool selection and chaining
    • Updates on progress and results
    • Complex task handling and recovery
  • Orchestrator Mode
    • Planning complex tasks across servers
    • Intelligent delegation and communication
    • Parallel execution and dynamic resource allocation
    • Workflow management and progress monitoring

Prompt Usage Examples

/prompts                             # List all prompts
/prompt:weather/location=tokyo      # Single argument prompt
/prompt:travel-planner/from=london/to=paris/date=2024-03-25  # Multiple arguments
/prompt:analyze-data/{
    "dataset": "sales_2024",
    "metrics": ["revenue", "growth"],
    "filters": {
        "region": "europe",
        "period": "q1"
    }
}
/prompt:market-research/target=smartphones/criteria={
    "price_range": {"min": 500, "max": 1000},
    "features": ["5G", "wireless-charging"],
    "markets": ["US", "EU", "Asia"]
}

Advanced Prompt Features

  • Automatic argument validation and type checking
  • Smart handling of optional/default values
  • Access past conversation context
  • Cross-server prompt execution
  • Graceful error handling with messages
  • Detailed dynamic help for prompts

AI-Powered Interactions

  • Tool chaining
  • Context-aware responses
  • Automatic tool selection
  • Error handling
  • Maintains conversation context

Supported Models

  • OpenAI (with native function calling and ReAct Agent fallback)
  • OpenRouter (automatic capability detection)
  • Groq (ultra-fast inference)
  • Universal model support via custom ReAct Agent

πŸ”§ Advanced Features

Tool Orchestration Example

User: "Find charging stations near Silicon Valley and check their current status"

Client automatically:

  1. Uses Google Maps API to locate Silicon Valley
  2. Searches for charging stations
  3. Checks station status via EV network API
  4. Formats and presents results

Resource Analysis Example

User: "Analyze the contents of /path/to/document.pdf"

Client automatically:

  1. Identifies resource type
  2. Extracts content
  3. Processes through LLM
  4. Provides intelligent summary

πŸ” Troubleshooting

Common Issues

  1. Connection Issues

    • Ensure MCP servers are running
    • Verify server configuration
    • Check network connectivity and logs
  2. API Key Problems

    • Confirm API key in .env
    • Check permissions and environment correctness
  3. Redis Connection Problems

    • Check Redis server status
    • Verify connection settings and password
  4. Tool Execution Failures

    • Confirm tool availability and permissions
    • Check correctness of arguments

Debug Mode

Enable detailed logging with:

/debug

For support, check the GitHub Issues page or open a new issue.

🀝 Contributing

Contributions are welcome! See the Contributing Guide for details.

πŸ“„ License

This project is licensed under the MIT License.

πŸ“¬ Contact & Support


Built with ❀️ by the MCPOmni Connect Team