mcp-llm-bridge
by: bartolli
MCP implementation that enables communication between MCP servers and OpenAI-compatible LLMs
📌 Overview
Purpose: To provide a seamless connection between Model Context Protocol (MCP) servers and OpenAI-compatible language models (LLMs) through a protocol translation layer.
Overview: The MCP LLM Bridge acts as a bidirectional interface: it translates MCP tool specifications into OpenAI function schemas and maps function invocations back to MCP tool executions. This allows any OpenAI-compatible model, whether cloud-based or running locally, to use MCP-compliant tools through a standardized method.
Key Features:
- Bidirectional Protocol Translation: Facilitates real-time communication between MCP servers and OpenAI's function-calling interface, enabling smooth tool execution and function invocation.
- Compatibility with Various Endpoints: Supports the primary OpenAI API as well as local endpoints that implement the OpenAI API specification, such as Ollama and LM Studio.
MCP LLM Bridge
A bridge connecting Model Context Protocol (MCP) servers to OpenAI-compatible LLMs. It offers primary support for the OpenAI API and compatibility with local endpoints implementing the OpenAI API specification.
The implementation provides a bidirectional protocol translation layer between MCP and OpenAI's function-calling interface. It converts MCP tool specifications into OpenAI function schemas and handles the mapping of function invocations back to MCP tool executions, allowing any OpenAI-compatible language model to utilize MCP-compliant tools.
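To make the translation concrete, here is a minimal sketch of both directions using the two protocols' wire formats; the helper names and the sample "query" tool are hypothetical, and the bridge's actual internals may differ:

import json

def mcp_tool_to_openai_function(tool: dict) -> dict:
    # An MCP tools/list entry exposes name, description, and a JSON
    # Schema under "inputSchema"; OpenAI function calling expects the
    # same schema under "parameters".
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

def openai_tool_call_to_mcp(tool_call: dict) -> tuple[str, dict]:
    # OpenAI returns the chosen function with JSON-encoded arguments;
    # MCP's tools/call takes the tool name and a decoded argument dict.
    fn = tool_call["function"]
    return fn["name"], json.loads(fn["arguments"])

# A hypothetical SQL tool as an MCP server might advertise it:
tool = {
    "name": "query",
    "description": "Run a read-only SQL query",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}
print(mcp_tool_to_openai_function(tool))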
Quick Start
# Install
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .
# Create test database
python -m mcp_llm_bridge.create_test_db
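To confirm the script worked, you can inspect the result with Python's standard sqlite3 module (a quick sketch, assuming the database lands at test.db in the repository root, which the configuration below also expects):

import sqlite3

# List the tables the setup script created in test.db.
with sqlite3.connect("test.db") as conn:
    print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())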
Configuration
OpenAI (Primary)
Create a .env file:
OPENAI_API_KEY=your_key
OPENAI_MODEL=gpt-4o # or any other OpenAI model that supports tools
Reactivate the environment if needed: source .venv/bin/activate
Then configure the bridge in src/mcp_llm_bridge/main.py:
config = BridgeConfig(
    mcp_server_params=StdioServerParameters(
        command="uvx",
        args=["mcp-server-sqlite", "--db-path", "test.db"],
        env=None
    ),
    llm_config=LLMConfig(
        api_key=os.getenv("OPENAI_API_KEY"),
        model=os.getenv("OPENAI_MODEL", "gpt-4o"),
        base_url=None  # None targets the default OpenAI endpoint
    )
)
Additional Endpoint Support
The bridge also works with any endpoint implementing the OpenAI API specification.
Ollama
llm_config=LLMConfig(
    api_key="not-needed",
    model="mistral-nemo:12b-instruct-2407-q8_0",
    base_url="http://localhost:11434/v1"
)
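If the model is not already present locally, fetch it first (e.g., ollama pull mistral-nemo:12b-instruct-2407-q8_0); any other tool-capable model served through Ollama's OpenAI-compatible endpoint should work the same way.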
LM Studio
llm_config=LLMConfig(
    api_key="not-needed",
    model="local-model",
    base_url="http://localhost:1234/v1"
)
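LM Studio serves whichever model is currently loaded in the app, so the model value is effectively a placeholder; make sure the local server is running and that base_url matches the port it reports (1234 by default).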
Usage
python -m mcp_llm_bridge.main
# Try: "What are the most expensive products in the database?"
# Exit with 'quit' or Ctrl+C
Running Tests
Install the package with test dependencies:
uv pip install -e ".[test]"
Then run the tests:
python -m pytest -v tests/
License
Contributing
PRs welcome.