mcp_omni_connect
by: Abiorh001
MCPOmni Connect is a versatile command-line interface (CLI) client for connecting to Model Context Protocol (MCP) servers over stdio and other transports. It provides seamless integration with OpenAI and other LLM providers and supports dynamic tool and resource management across multiple servers.
Overview
Purpose: MCPOmni Connect aims to provide a unified command-line interface for seamless integration and interaction with multiple Model Context Protocol (MCP) servers and AI models.
Overview: MCPOmni Connect is a versatile CLI framework designed to connect to various MCP servers, enabling efficient communication over multiple transport protocols while harnessing AI models for intelligent operations and user interactions.
Key Features:
- Universal Connectivity: Supports standard input/output (stdio), Server-Sent Events (SSE), Docker integration, and NPX execution for versatile server connections.
- AI-Powered Intelligence: Integrates advanced models from OpenAI, OpenRouter, and Groq, enabling dynamic system prompts, intelligent context management, and automatic tool orchestration based on user requests.
- Security & Privacy: Ensures explicit user control over tool execution, strict data isolation, encryption for secure communication, and a privacy-first approach with minimal data collection.
- Dynamic Tool Management: Facilitates automatic discovery and execution of tools across servers, with real-time updates on tool availability and intelligent selection based on context.
MCPOmni Connect - Universal Gateway to MCP Servers
MCPOmni Connect is a powerful, universal command-line interface (CLI) that serves as your gateway to the Model Context Protocol (MCP) ecosystem. It seamlessly integrates multiple MCP servers, AI models, and various transport protocols into a unified, intelligent interface.
Key Features
Universal Connectivity
- Multi-Protocol Support
- Native support for stdio transport
- Server-Sent Events (SSE) for real-time communication
- Docker container integration
- NPX package execution
- Extensible transport layer for future protocols
- Agentic Mode
- Autonomous task execution without human intervention
- Advanced reasoning and decision-making capabilities
- Seamless switching between chat and agentic modes
- Self-guided tool selection and execution
- Complex task decomposition and handling
- Orchestrator Agent Mode
- Advanced planning for complex multi-step tasks
- Intelligent task delegation across multiple MCP servers
- Dynamic agent coordination and communication
- Automated subtask management and execution
AI-Powered Intelligence
- Advanced LLM Integration
- Support for OpenAI, OpenRouter, Groq, Gemini, and DeepSeek models
- Dynamic system prompts based on available capabilities
- Intelligent context management
- Automatic tool selection and chaining
- Universal model support via custom ReAct Agent (see the sketch after this list)
- Handles models without native function calling
- Dynamic function execution based on user requests
- Intelligent tool orchestration
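The sketch below outlines how a ReAct-style loop can drive a model that lacks native function calling: the model is prompted to emit a JSON action, the client executes the named tool, and the observation is fed back. All names here (call_llm, TOOLS, the prompt format) are illustrative assumptions, not MCPOmni Connect's actual internals.

import json

TOOLS = {
    "add": lambda args: args["a"] + args["b"],  # toy stand-in for an MCP tool
}

SYSTEM = (
    "Answer the question. To use a tool, reply with JSON only: "
    '{"tool": "<name>", "args": {...}}. When finished, reply with '
    '{"answer": "<text>"}.'
)

def call_llm(messages: list[dict]) -> str:
    """Placeholder for any chat-completion client (OpenAI, Groq, ...)."""
    raise NotImplementedError

def react(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = json.loads(call_llm(messages))
        if "answer" in reply:  # the model decided it is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])  # act on the chosen tool
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": f"Observation: {result}"})
    return "Step limit reached"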
Security & Privacy
- Explicit User Control
- All tool executions require explicit user approval in chat mode
- Clear explanation of tool actions before execution
- Transparent disclosure of data access and usage
- Data Protection
- Strict data access controls
- Server-specific data isolation
- No unauthorized data exposure
- Privacy-First Approach
- Minimal data collection
- User data remains on specified servers
- No cross-server data sharing without consent
- Secure Communication
- Encrypted transport protocols
- Secure API key management
- Environment variable protection
Memory Management
- Redis-Powered Persistence (sketched after this list)
- Long-term conversation memory storage
- Session persistence across restarts
- Configurable memory retention
- Easy memory toggle with commands
- Chat History File Storage
- Save and load complete chat conversations
- Continue conversations from where you left off
- Persistent chat history across sessions
- File-based backup and restoration
- Intelligent Context Management
- Automatic context pruning
- Relevant information retrieval
- Memory-aware responses
- Cross-session context maintenance
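As a rough illustration of the Redis-backed persistence above, the sketch below appends each message to a per-session Redis list and restores it after a restart. It reuses the REDIS_* variables from the Configuration section; the "chat:" key scheme and function names are assumptions, not the client's actual storage layout.

import json
import os
import redis

r = redis.Redis(
    host=os.getenv("REDIS_HOST", "localhost"),
    port=int(os.getenv("REDIS_PORT", "6379")),
    db=int(os.getenv("REDIS_DB", "0")),
    decode_responses=True,
)

def save_message(session_id: str, role: str, content: str) -> None:
    # Append one message to the session's history list.
    r.rpush(f"chat:{session_id}", json.dumps({"role": role, "content": content}))

def load_history(session_id: str) -> list[dict]:
    # Restore the full conversation for a session, e.g. after a restart.
    return [json.loads(m) for m in r.lrange(f"chat:{session_id}", 0, -1)]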
Prompt Management
- Advanced Prompt Handling
- Dynamic prompt discovery across servers
- Flexible argument parsing (JSON and key-value formats; see the sketch after this list)
- Cross-server prompt coordination
- Intelligent prompt validation and context-aware execution
- Support for complex nested arguments with automatic type conversion
- Client-Side Sampling Support
- Dynamic sampling configuration
- Flexible LLM response generation
- Customizable sampling parameters with real-time adjustments
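As a minimal sketch of the two argument styles (key-value and JSON), the parser below splits a /prompt command into a name and an argument dict. It is illustrative only, not the client's actual parser, and skips the mixed form shown later under "Prompt Usage Examples".

import json

def parse_prompt_command(command: str) -> tuple[str, dict]:
    # "/prompt:weather/location=tokyo" -> ("weather", {"location": "tokyo"})
    body = command.removeprefix("/prompt:")
    name, _, rest = body.partition("/")
    if rest.startswith("{"):          # JSON form, supports nested arguments
        return name, json.loads(rest)
    args = {}
    for pair in filter(None, rest.split("/")):  # key=value form
        key, _, value = pair.partition("=")
        args[key] = value
    return name, args

print(parse_prompt_command("/prompt:travel-planner/from=london/to=paris"))
# ('travel-planner', {'from': 'london', 'to': 'paris'})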
Tool Orchestration
- Dynamic Tool Discovery & Management
- Automatic tool capability detection
- Cross-server tool coordination
- Intelligent tool selection based on context
- Real-time tool availability updates
Resource Management
- Universal Resource Access
- Cross-server resource discovery
- Unified resource addressing and automatic resource type detection
- Smart content summarization
Server Management
- Advanced Server Handling
- Multiple simultaneous server connections
- Automatic server health monitoring
- Graceful connection management
- Dynamic capability updates
Architecture
Core Components
MCPOmni Connect
├── Transport Layer
│   ├── Stdio Transport
│   ├── SSE Transport
│   └── Docker Integration
├── Session Management
│   ├── Multi-Server Orchestration
│   └── Connection Lifecycle Management
├── Tool Management
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   └── Tool Execution Engine
└── AI Integration
    ├── LLM Processing
    ├── Context Management
    └── Response Generation
Getting Started
Prerequisites
- Python 3.10+
- LLM API key
- UV package manager (recommended)
- Redis server (optional, for persistent memory)
Installation
# with uv (recommended)
uv add mcpomni-connect
# or using pip
pip install mcpomni-connect
Configuration
# Set environment variables
echo "LLM_API_KEY=your_key_here" > .env
# Optional: Redis configuration
echo "REDIS_HOST=localhost" >> .env
echo "REDIS_PORT=6379" >> .env
echo "REDIS_DB=0" >> .env
# Configure your MCP servers in servers_config.json
Start CLI
mcpomni_connect
Testing
Run Tests
pytest tests/ -v # Run all tests with verbose output
pytest tests/test_specific_file.py -v # Run specific test file
pytest tests/ --cov=src --cov-report=term-missing # Run with coverage report
Test Structure
tests/
└── unit/ # Unit tests for individual components
Development Quick Start
- Clone repository and enter directory
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect
- Create and activate virtual environment
uv venv
source .venv/bin/activate
- Install dependencies
uv sync
- Configuration
echo "LLM_API_KEY=your_key_here" > .env # Configure your servers in servers_config.json
- Start client
uv run src/main.py
# or
python src/main.py
Server Configuration Example
{
"LLM": {
"provider": "openai",
"model": "gpt-4",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 30000,
"top_p": 0
},
"mcpServers": {
"filesystem-server": {
"command": "npx",
"args": [
"@modelcontextprotocol/server-filesystem",
"/path/to/files"
]
},
"sse-server": {
"type": "sse",
"url": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer token"
}
},
"docker-server": {
"command": "docker",
"args": ["run", "-i", "--rm", "mcp/server"]
}
}
}
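For orientation, the snippet below loads the example configuration and walks both sections; the file and field names follow the example above, while the logic is a simplified assumption rather than the client's actual loader.

import json
from pathlib import Path

config = json.loads(Path("servers_config.json").read_text())

llm = config["LLM"]
print(f"LLM: {llm['provider']} / {llm['model']}")

for name, server in config["mcpServers"].items():
    if server.get("type") == "sse":
        print(f"{name}: SSE endpoint at {server['url']}")
    else:
        # stdio-style servers (npx, docker, ...) are launched as subprocesses
        args = " ".join(server.get("args", []))
        print(f"{name}: runs `{server['command']} {args}`")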
Usage
Interactive Commands
- /tools - List available tools
- /prompts - View available prompts
- /prompt:<name>/<args> - Execute prompt with arguments
- /resources - List available resources
- /resource:<uri> - Access and analyze resource
- /debug - Toggle debug mode
- /refresh - Update server capabilities
- /memory - Toggle Redis memory persistence
- /mode:auto - Switch to autonomous agentic mode
- /mode:chat - Switch back to interactive chat mode
Memory and Chat History
/memory # Toggle memory persistence on/off
# Example status output:
Memory persistence is now ENABLED using Redis
Memory persistence is now DISABLED
Operation Modes
/mode:auto # Switch to autonomous mode
# System response:
Now operating in AUTONOMOUS mode. I will execute tasks independently.
/mode:chat # Switch back to chat mode
# System response:
Now operating in CHAT mode. I will ask for approval before executing tasks.
Mode Descriptions
- Chat Mode (Default)
- Requires approval for tool execution
- Interactive, step-by-step conversation
- Detailed action explanations
- Autonomous Mode
- Independent task execution
- Self-guided decision making
- Automatic tool selection and chaining
- Updates on progress and results
- Complex task handling and recovery
- Orchestrator Mode
- Planning complex tasks across servers
- Intelligent delegation and communication
- Parallel execution and dynamic resource allocation
- Workflow management and progress monitoring
Prompt Usage Examples
/prompts # List all prompts
/prompt:weather/location=tokyo # Single argument prompt
/prompt:travel-planner/from=london/to=paris/date=2024-03-25 # Multiple arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
Advanced Prompt Features
- Automatic argument validation and type checking
- Smart handling of optional/default values
- Access past conversation context
- Cross-server prompt execution
- Graceful error handling with messages
- Detailed dynamic help for prompts
AI-Powered Interactions
- Tool chaining
- Context-aware responses
- Automatic tool selection
- Error handling
- Maintains conversation context
Supported Models
- OpenAI (with native function calling and ReAct Agent fallback)
- OpenRouter (automatic capability detection)
- Groq (ultra-fast inference)
- Universal model support via custom ReAct Agent
π§ Advanced Features
Tool Orchestration Example
User: "Find charging stations near Silicon Valley and check their current status"
Client automatically:
- Uses Google Maps API to locate Silicon Valley
- Searches for charging stations
- Checks station status via EV network API
- Formats and presents results
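A hypothetical version of that chain in code, where each tool result feeds the next call. The server names, tool names, and call_tool helper are invented for illustration and do not correspond to real MCP servers.

def call_tool(server: str, tool: str, args: dict) -> dict:
    """Placeholder for a tool call routed to the named MCP server."""
    raise NotImplementedError

def charging_station_report(area: str) -> list[dict]:
    location = call_tool("maps-server", "geocode", {"query": area})
    stations = call_tool(
        "maps-server", "search_nearby",
        {"lat": location["lat"], "lng": location["lng"], "category": "ev_charging"},
    )
    # Each intermediate result feeds the next tool call in the chain.
    return [
        call_tool("ev-network-server", "station_status", {"id": s["id"]})
        for s in stations["results"]
    ]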
Resource Analysis Example
User: "Analyze the contents of /path/to/document.pdf"
Client automatically:
- Identifies resource type
- Extracts content
- Processes through LLM
- Provides intelligent summary
Troubleshooting
Common Issues
- Connection Issues
- Ensure MCP servers are running
- Verify server configuration
- Check network connectivity and logs
- API Key Problems
- Confirm the API key is set in .env
- Check permissions and environment correctness
- Redis Connection Problems
- Check Redis server status
- Verify connection settings and password
- Tool Execution Failures
- Confirm tool availability and permissions
- Check correctness of arguments
Debug Mode
Enable detailed logging with:
/debug
For support, check the GitHub Issues page or open a new issue.
Contributing
Contributions are welcome! See the Contributing Guide for details.
License
This project is licensed under the MIT License.
Contact & Support
- Author: Abiola Adeshina
- Email: abioladedayo1993@gmail.com
- GitHub Issues: https://github.com/Abiorh001/mcp_omni_connect/issues
Built with ❤️ by the MCPOmni Connect Team