openai-mcp
by: arthurcolle
OpenAI Code Assistant Model Context Protocol (MCP) Server
📌 Overview
Purpose: This framework serves as a powerful coding assistant that facilitates software development tasks through a natural language interface, with support for multiple large language model (LLM) providers.
Overview: The MCP Coding Assistant is a Python-based tool designed to enhance developer productivity through real-time code assistance, visualization, and cost management. It integrates the Model Context Protocol (MCP) for interoperability with a variety of clients and agents, making it a versatile solution for developers.
Key Features:
- Multi-Provider Support: Compatible with LLM providers such as OpenAI and Anthropic, letting users choose their preferred model for coding assistance.
- Model Context Protocol Integration: Functions as both an MCP server and client, enabling seamless communication with multiple agents for complex problem-solving and collaboration on development tasks.
MCP Coding Assistant with Support for OpenAI and Other LLM Providers
A powerful Python recreation of Claude Code with enhanced real-time visualization, cost management, and Model Context Protocol (MCP) server capabilities. This tool provides a natural language interface for software development tasks with support for multiple LLM providers.
Key Features
- Multi-Provider Support: Works with OpenAI, Anthropic, and other LLM providers.
- Model Context Protocol Integration:
- Run as an MCP server for use with Claude Desktop and other clients.
- Connect to any MCP server with the built-in MCP client.
- Multi-agent synchronization for complex problem solving.
- Real-Time Tool Visualization: Observe tool execution progress and results in real-time.
- Cost Management: Track token usage and expenses with budget controls.
- Comprehensive Tool Suite: File operations, search, command execution, and more.
- Enhanced UI: Rich terminal interface with progress indicators and syntax highlighting.
- Context Optimization: Smart conversation compaction and memory management.
- Agent Coordination: Specialized agents with different roles can collaborate on tasks.
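The provider layer is what makes multi-provider support work. As a rough illustration (the class and method names below are assumptions for this sketch, not the repo's actual internals), a provider abstraction might look like:

```python
# Hypothetical provider abstraction; names are illustrative assumptions,
# not this repo's actual API.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """One interface so the assistant can swap OpenAI, Anthropic, etc."""

    @abstractmethod
    def complete(self, messages: list[dict], model: str) -> str:
        """Send a chat history, return the assistant's reply text."""


class OpenAIProvider(LLMProvider):
    def complete(self, messages: list[dict], model: str = "gpt-4o") -> str:
        from openai import OpenAI  # pip install openai
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(model=model, messages=messages)
        return resp.choices[0].message.content
```

Each provider adapts its vendor SDK to one common call, so the rest of the tool never has to branch on the vendor.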
Installation
1. Clone the repository.
2. Install dependencies:
   pip install -r requirements.txt
3. Create a .env file with your API keys:
   # Choose one or more providers
   OPENAI_API_KEY=your_openai_api_key_here
   ANTHROPIC_API_KEY=your_anthropic_api_key_here

   # Optional model selection
   OPENAI_MODEL=gpt-4o
   ANTHROPIC_MODEL=claude-3-opus-20240229
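As a minimal sketch of how those variables might drive provider selection (this assumes the python-dotenv package; claude.py may resolve providers differently):

```python
# Sketch of provider selection from the .env above; assumes python-dotenv.
import os
from dotenv import load_dotenv

load_dotenv()  # pulls .env from the current directory

if os.getenv("OPENAI_API_KEY"):
    provider, model = "openai", os.getenv("OPENAI_MODEL", "gpt-4o")
elif os.getenv("ANTHROPIC_API_KEY"):
    provider, model = "anthropic", os.getenv("ANTHROPIC_MODEL", "claude-3-opus-20240229")
else:
    raise SystemExit("Set OPENAI_API_KEY or ANTHROPIC_API_KEY in .env")

print(f"Using {provider} with {model}")
```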
Usage
CLI Mode
Run the CLI with the default provider determined from available API keys:
python claude.py chat
Specify a provider and model:
python claude.py chat --provider openai --model gpt-4o
Set a budget limit to manage costs:
python claude.py chat --budget 5.00
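The budget flag implies per-turn cost accounting. A minimal sketch of that arithmetic follows; the prices are placeholders for illustration, not current provider pricing, and the real tracker lives under /monitoring/:

```python
# Illustrative budget arithmetic; prices are assumed placeholders.
PRICES_PER_1K_TOKENS = {"prompt": 0.005, "completion": 0.015}  # USD, assumed

def turn_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return (prompt_tokens / 1000) * PRICES_PER_1K_TOKENS["prompt"] + (
        completion_tokens / 1000
    ) * PRICES_PER_1K_TOKENS["completion"]

budget, spent = 5.00, 0.0
spent += turn_cost(prompt_tokens=1200, completion_tokens=400)  # one exchange
if spent >= budget:
    print("Budget exhausted; refusing further requests.")
```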
MCP Server Mode
Run as a Model Context Protocol server:
python claude.py serve
Start in development mode with the MCP Inspector:
python claude.py serve --dev
Configure host and port:
python claude.py serve --host 0.0.0.0 --port 8000
Specify additional dependencies:
python claude.py serve --dependencies pandas numpy
Load environment variables from a file:
python claude.py serve --env-file .env
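For a feel of what an MCP server looks like, here is a minimal standalone example using the official mcp Python SDK (an assumption about the SDK in use; this is illustrative, not the project's server code):

```python
# Minimal standalone MCP server using the official `mcp` SDK
# (pip install mcp); illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("coding-assistant-demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```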
MCP Client Mode
Connect to an MCP server using Claude as the reasoning engine:
python claude.py mcp-client path/to/server.py
Specify a Claude model:
python claude.py mcp-client path/to/server.py --model claude-3-5-sonnet-20241022
Try the included example server:
# In terminal 1 - start the server
python examples/echo_server.py
# In terminal 2 - connect with the client
python claude.py mcp-client examples/echo_server.py
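Under the hood, an MCP client session follows the SDK's initialize/list/call flow. A standalone sketch using the official mcp SDK (the `echo` tool name is a guess about what examples/echo_server.py exposes):

```python
# Standalone MCP client sketch using the official `mcp` SDK.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["examples/echo_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()           # MCP handshake
            tools = await session.list_tools()   # discover server tools
            print([t.name for t in tools.tools])
            result = await session.call_tool("echo", {"text": "hello"})
            print(result.content)

asyncio.run(main())
```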
Multi-Agent MCP Mode
Launch a multi-agent client with synchronized agents:
python claude.py mcp-multi-agent path/to/server.py
Use a custom agent configuration file:
python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
Example with the echo server:
# In terminal 1 - start the server
python examples/echo_server.py
# In terminal 2 - launch the multi-agent client
python claude.py mcp-multi-agent examples/echo_server.py --config examples/agents_config.json
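The schema of agents_config.json is not documented here; the shape below is a guess written out for illustration, so you can adapt it to the real example file:

```python
# Guessed shape for an agent configuration file; field names are
# assumptions, not the documented schema of examples/agents_config.json.
import json

agents_config = {
    "agents": [
        {"name": "Architect", "role": "plans the overall approach"},
        {"name": "Coder", "role": "writes and edits code"},
        {"name": "Reviewer", "role": "reviews changes for defects"},
    ]
}

with open("my_agents_config.json", "w") as f:
    json.dump(agents_config, f, indent=2)
# Then: python claude.py mcp-multi-agent examples/echo_server.py --config my_agents_config.json
```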
Available Tools
- View: Read files with optional line limits.
- Edit: Modify files with precise text replacement.
- Replace: Create or overwrite files.
- GlobTool: Find files by pattern matching.
- GrepTool: Search file contents using regex.
- LS: List directory contents.
- Bash: Execute shell commands.
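A tool suite like this is typically a registry mapping tool names to functions, so the model's function calls can be dispatched by name. A hypothetical sketch of that pattern (the real implementations live under /lib/tools/):

```python
# Hypothetical tool registry and dispatch; illustrative, not the repo's code.
from pathlib import Path

TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("View")
def view(path: str, limit: int | None = None) -> str:
    lines = Path(path).read_text().splitlines()
    return "\n".join(lines[:limit] if limit else lines)

@tool("LS")
def ls(path: str = ".") -> str:
    return "\n".join(sorted(p.name for p in Path(path).iterdir()))

# An LLM function call arrives as a name plus arguments and is dispatched:
print(TOOLS["LS"](path="."))
```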
Chat Commands
- /help: Show available commands.
- /compact: Compress conversation history to save tokens.
- /version: Show version information.
- /providers: List available LLM providers.
- /cost: Show cost and usage information.
- /budget [amount]: Set a budget limit.
- /quit, /exit: Exit the application.
Architecture
Claude Code Python Edition is built with a modular architecture:
/claude_code/
  /lib/
    /providers/   # LLM provider implementations
    /tools/       # Tool implementations
    /context/     # Context management
    /ui/          # UI components
    /monitoring/  # Cost tracking & metrics
  /commands/      # CLI commands
  /config/        # Configuration management
  /util/          # Utility functions
claude.py         # Main CLI entry point
mcp_server.py     # Model Context Protocol server
Using with Model Context Protocol
Using Claude Code as an MCP Server
Once the MCP server is running, you can connect to it from Claude Desktop or other MCP-compatible clients:
1. Install and run the MCP server:
   python claude.py serve
2. Open the configuration page in your browser at http://localhost:8000.
3. Follow the instructions to configure Claude Desktop, including copying or downloading the auto-generated configuration JSON and completing the setup steps.
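For orientation, a Claude Desktop entry for a stdio-launched MCP server generally follows the mcpServers shape below; the server name and args here are placeholders, so prefer the auto-generated file from the configuration page:

```python
# Placeholder Claude Desktop entry; name and args are assumptions.
import json

entry = {
    "mcpServers": {
        "claude-code-python": {
            "command": "python",
            "args": ["/absolute/path/to/claude.py", "serve"],
        }
    }
}
print(json.dumps(entry, indent=2))  # merge into claude_desktop_config.json
```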
Using Claude Code as an MCP Client
To connect to any MCP server using Claude Code:
1. Ensure your Anthropic API key is set in the environment or in your .env file.
2. Start the MCP server you want to connect to.
3. Connect using the MCP client:
   python claude.py mcp-client path/to/server.py
4. Use the interactive chat interface to type queries.
Using Multi-Agent Mode
For complex tasks, the multi-agent mode allows multiple specialized agents to collaborate:
1. Create an agent configuration file or use the provided example.
2. Start your MCP server.
3. Launch the multi-agent client:
   python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
4. Use commands to interact with the agents:
   - Type a message to broadcast it to all agents.
   - Use /talk Agent_Name message for direct communication.
   - Use /agents to list all available agents.
   - Use /history to view the conversation history.
Contributing
- Fork the repository.
- Create a feature branch.
- Implement your changes with tests.
- Submit a pull request.
License
MIT
Acknowledgments
This project is inspired by Anthropic's Claude Code CLI tool, reimplemented in Python with additional features for enhanced visibility, cost management, and MCP server capabilities.
OpenAI Code Assistant
A powerful command-line and API-based coding assistant that uses OpenAI APIs with function calling and streaming.
Features
- Interactive CLI for coding assistance.
- Web API for integration with other applications.
- Model Context Protocol (MCP) server implementation.
- Replication support for high availability.
- Tool-based architecture for extensibility.
- Reinforcement learning for tool optimization (see the sketch after this list).
- Web client for browser-based interaction.
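The README does not specify the RL algorithm used for tool optimization; one common lightweight choice is an epsilon-greedy bandit over the available tools, sketched below purely as an illustration:

```python
# Epsilon-greedy bandit over tools: one plausible mechanism for RL-based
# tool optimization, assumed for illustration rather than documented here.
import random

class ToolBandit:
    def __init__(self, tools, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {t: 0 for t in tools}
        self.values = {t: 0.0 for t in tools}  # running mean reward

    def pick(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, tool: str, reward: float) -> None:
        self.counts[tool] += 1
        self.values[tool] += (reward - self.values[tool]) / self.counts[tool]

bandit = ToolBandit(["GrepTool", "GlobTool", "Bash"])
choice = bandit.pick()
bandit.update(choice, reward=1.0)  # e.g. 1.0 when the tool call succeeded
```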
Installation
1. Clone the repository.
2. Install dependencies:
   pip install -r requirements.txt
3. Set your OpenAI API key:
   export OPENAI_API_KEY=your_api_key
Usage
CLI Mode
Run the assistant in interactive CLI mode:
python cli.py
Options:
- --model, -m: Specify the model to use (default: gpt-4o).
- --temperature, -t: Set the temperature for response generation (default: 0).
- --verbose, -v: Enable verbose output with additional information.
- --enable-rl / --disable-rl: Enable or disable reinforcement learning for tool optimization.
- --rl-update: Manually trigger an update of the RL model.
API Server Mode
Run the assistant as an API server:
python cli.py serve
Options:
- --host: Host address to bind to (default: 127.0.0.1).
- --port, -p: Port to listen on (default: 8000).
- --workers, -w: Number of worker processes (default: 1).
- --enable-replication: Enable replication across instances.
- --primary / --secondary: Run as the primary or a secondary instance.
- --peer: Peer instance for replication in host:port format; may be specified multiple times.
MCP Server Mode
Run the assistant as a Model Context Protocol (MCP) server:
python cli.py mcp-serve
Options:
- --host: Host address to bind to (default: 127.0.0.1).
- --port, -p: Port to listen on (default: 8000).
- --dev: Enable development mode with additional logging.
- --dependencies: Additional Python dependencies to install.
- --env-file: Path to a .env file with environment variables.
MCP Client Mode
Connect to an MCP server using the assistant as the reasoning engine:
python cli.py mcp-client path/to/server.py
Options:
- --model, -m: Model to use for reasoning (default: gpt-4o).
- --host: MCP server host address (default: 127.0.0.1).
- --port, -p: MCP server port (default: 8000).
Deployment Script
For easier deployment, use the provided script:
./deploy.sh --host 0.0.0.0 --port 8000 --workers 4
To enable replication:
# Primary instance
./deploy.sh --enable-replication --port 8000
# Secondary instance
./deploy.sh --enable-replication --secondary --port 8001 --peer 127.0.0.1:8000
Web Client
Open web-client.html in your browser. Make sure the API server is running first.
API Endpoints
Standard API Endpoints
- POST /conversation: Create a new conversation.
- POST /conversation/{conversation_id}/message: Send a message to a conversation.
- POST /conversation/{conversation_id}/message/stream: Stream a message response.
- GET /conversation/{conversation_id}: Get conversation details.
- DELETE /conversation/{conversation_id}: Delete a conversation.
- GET /health: Health check endpoint.
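Only the routes are documented here, so the JSON field names in the client sketch below ("message", "conversation_id") are assumptions about the payload shape:

```python
# Hypothetical client for the routes above; pip install requests.
import requests

BASE = "http://127.0.0.1:8000"

conv = requests.post(f"{BASE}/conversation").json()
conv_id = conv.get("conversation_id") or conv.get("id")  # shape is a guess

reply = requests.post(
    f"{BASE}/conversation/{conv_id}/message",
    json={"message": "Write a unit test for utils.py"},
)
print(reply.json())

requests.delete(f"{BASE}/conversation/{conv_id}")  # clean up
```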
MCP Protocol Endpoints
- GET /: Health check (MCP protocol).
- POST /context: Get context for a prompt template.
- GET /prompts: List available prompt templates.
- GET /prompts/{prompt_id}: Get a specific prompt template.
- POST /prompts: Create a new prompt template.
- PUT /prompts/{prompt_id}: Update an existing prompt template.
- DELETE /prompts/{prompt_id}: Delete a prompt template.
Replication
Replication enables multiple instances with synchronized state for:
- High availability.
- Load balancing.
- Fault tolerance.
Setup:
1. Start a primary instance with --enable-replication.
2. Start secondary instances with --enable-replication --secondary --peer [primary-host:port].
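The replication protocol itself is not described in this README. Purely as a conceptual sketch, a secondary could mirror conversation state through the documented GET endpoint:

```python
# Conceptual sketch only: the real sync mechanism (event streaming,
# conflict handling) is not documented here.
import time
import requests

PRIMARY = "http://127.0.0.1:8000"

def mirror(conversation_id: str, interval: float = 5.0) -> None:
    snapshot = None
    while True:
        try:
            snapshot = requests.get(
                f"{PRIMARY}/conversation/{conversation_id}", timeout=2
            ).json()
            print(f"synced {conversation_id}")
        except requests.RequestException:
            print("primary unreachable; keeping last snapshot")
        time.sleep(interval)
```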
Tools
Included tools:
- Weather: Get current weather for a location.
- View: Read files.
- Edit: Edit files.
- Replace: Write files.
- Bash: Execute bash commands.
- GlobTool: File pattern matching.
- GrepTool: Content search.
- LS: List directory contents.
- JinaSearch: Web search using Jina.ai.
- JinaFactCheck: Fact checking using Jina.ai.
- JinaReadURL: Read and summarize webpages.
CLI Commands
- /help: Show help message.
- /compact: Compact the conversation to reduce token usage.
- /status: Show token usage and session info.
- /config: Show current configuration.
- /rl-status: Show RL tool optimizer status (if enabled).
- /rl-update: Update the RL model manually (if enabled).
- /rl-stats: Show tool usage statistics (if enabled).