mcp-client-cli
by: adhikasp
A simple CLI to run LLM prompts and act as an MCP client.
📌 Overview
Purpose:
Provide a simple command-line interface (CLI) for interacting with Large Language Models (LLMs) using the Model Context Protocol (MCP), supporting multiple LLM providers and MCP-compatible tool servers from the terminal.
Overview:
MCP CLI client is a lightweight tool that runs LLM prompts and integrates with any MCP-compatible server, giving users access to advanced AI capabilities from the terminal. It serves as an alternative to graphical clients, supports a diverse range of LLM providers (including OpenAI, Groq, and local models), and can be configured flexibly via a user-friendly JSON file.
Key Features:
- Multi-provider LLM support: Seamlessly connects to various LLM providers (OpenAI, Groq, local llama.cpp, etc.) through a unified configuration, letting users work with the model of their choice.
- MCP protocol integration: Works with any MCP-compatible server to trigger external tools (e.g., web search, YouTube summarization), enhancing AI responses with real-time data and automation from the terminal.
- Flexible and extensible configuration: Supports JSON-based config files with environment variable overrides, customizable prompts, and tool management, enabling users to tailor the CLI to their workflows and security preferences.
- Rich input and prompt features: Allows piped input (including images), predefined prompt templates, clipboard support (for both text and images), and conversation continuation, making the CLI versatile for a range of tasks and scripting scenarios.
- Convenient output and scripting options: Offers flags for controlling output format, tool usage, and confirmation requirements, and for integration into shell pipelines and automation scripts.
MCP CLI Client
A simple command-line program for running LLM prompts and acting as a Model Context Protocol (MCP) client. You can use any MCP-compatible server conveniently from your terminal. This tool provides an alternative to other clients, such as Claude Desktop, and supports multiple LLM providers like OpenAI, Groq, or a local LLM model via llama.cpp.
Installation and Configuration
1. Install
```bash
pip install mcp-client-cli
```
2. Configure
Create a `~/.llm/config.json` file to set up your LLM and MCP servers. Example:
```json
{
  "systemPrompt": "You are an AI assistant helping a software engineer...",
  "llm": {
    "provider": "openai",
    "model": "gpt-4",
    "api_key": "your-openai-api-key",
    "temperature": 0.7,
    "base_url": "https://api.openai.com/v1"
  },
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
      "requires_confirmation": ["fetch"],
      "enabled": true,
      "exclude_tools": []
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your-brave-api-key"
      },
      "requires_confirmation": ["brave_web_search"]
    },
    "youtube": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/adhikasp/mcp-youtube", "mcp-youtube"]
    }
  }
}
```
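Since the README also mentions local models via llama.cpp, here is a hedged sketch of an alternative `llm` block pointing at a locally running, OpenAI-compatible llama.cpp server. The port, model name, and `api_key` value are assumptions for a typical local setup, not values from this project:

```json
{
  "llm": {
    // Hypothetical local setup: llama.cpp's server speaks an OpenAI-compatible API,
    // so the same provider settings can point at it
    "provider": "openai",
    "model": "local-model",
    "api_key": "unused-for-local-servers",
    "base_url": "http://localhost:8080/v1"
  }
}
```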
- Use `requires_confirmation` for tools needing user approval before execution.
- LLM API keys can be set via the config file or the `LLM_API_KEY` or `OPENAI_API_KEY` environment variables.
- The config file can live in `~/.llm/config.json` or `$PWD/.llm/config.json`.
- Comments are supported in the config file using `//`.
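For example, a minimal sketch (the key value is a placeholder) that keeps the API key out of the config file entirely:

```bash
# Provide the API key through the environment instead of config.json
export LLM_API_KEY="your-openai-api-key"
llm "What is the capital city of North Sumatra?"
```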
3. Run
llm "What is the capital city of North Sumatra?"
Usage
Basic Queries
```bash
llm What is the capital city of North Sumatra?
```
You can omit quotes unless special shell characters are involved.
Piping from files or commands is supported:
echo "What is the capital city of North Sumatra?" | llm
echo "Given a location, tell me its capital city." > instructions.txt
cat instruction.txt | llm "West Java"
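Because input can be piped in and output flags (described under Additional Options below) strip everything but the final answer, the CLI composes into shell pipelines. An illustrative sketch, with placeholder file names:

```bash
# Summarize the end of a log file and keep only the final plain-text answer
tail -n 100 app.log | llm --text-only --no-intermediates "Summarize any errors in this log" > summary.txt
```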
Image Input
Analyze images by piping them:
```bash
cat image.jpg | llm "What do you see in this image?"
cat screenshot.png | llm "Is there any error in this screenshot?"
```
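As a quick illustration on macOS (using the built-in `screencapture` utility; the temp path is arbitrary), a fresh screenshot can be piped straight in:

```bash
# Capture the screen silently to a PNG, then ask about it
screencapture -x /tmp/shot.png
cat /tmp/shot.png | llm "Is there any error in this screenshot?"
```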
Prompt Templates
Use prompt templates with the `p` prefix:
```bash
llm --list-prompts                      # List prompt templates
llm p review                            # Review git changes
llm p commit                            # Generate commit message
llm p yt url=https://youtube.com/...    # Summarize a YouTube video
```
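A typical workflow (illustrative; assumes a git repository with pending changes) chains these templates:

```bash
# Stage changes, get an AI review, then draft a commit message
git add .
llm p review
llm p commit
```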
Using Tools
Example showing a tool-triggering interaction:
```bash
llm What is the top article on hackernews today?
```
You may be prompted to confirm actions:

```
Confirm tool call? [y/n]: y
```
To bypass confirmations, use:
```bash
llm --no-confirmations "What is the top article on hackernews today?"
```
To output only the final message in scripts:
```bash
llm --no-intermediates "What is the time in Tokyo right now?"
```
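For fully unattended scripts, both flags can be combined (an illustrative pairing of the options shown above):

```bash
# No confirmation prompts, and only the final answer is printed
llm --no-confirmations --no-intermediates "What is the top article on hackernews today?"
```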
Conversation Continuation
Continue previous conversations with the `c` prefix:
```bash
llm asldkfjasdfkl
llm c what did i say previously?
```
Clipboard Support
Process clipboard content using the `cb` command:
- For text: `llm cb` or `llm cb "What language is this code written in?"`
- For images: `llm cb "What do you see in this image?"`
- Combine with continuation: `llm cb c "Tell me more about what you see"`

Clipboard feature compatibility:
- Windows: Uses PowerShell.
- macOS: Uses `pbpaste` for text and `pngpaste` for images (optional; install with `brew install pngpaste`).
- Linux: Requires `xclip` (`sudo apt install xclip`).
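As an illustration on Linux (the file name is a placeholder), `xclip` can load a file into the clipboard before querying it:

```bash
# Copy a source file to the clipboard, then ask about it
xclip -selection clipboard < script.py
llm cb "What does this code do?"
```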
Additional Options
```bash
llm --list-tools       # List all tools
llm --list-prompts     # List prompt templates
llm --no-tools         # Run without tools
llm --force-refresh    # Refresh tool capabilities
llm --text-only        # Output raw text only
llm --show-memories    # Show user memories
llm --model gpt-4      # Override default model
```
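These flags compose with regular queries; for instance, an illustrative one-off that pins the model and disables tool calls:

```bash
# Query a specific model with tools disabled, printing raw text
llm --model gpt-4 --no-tools --text-only "Explain what the Model Context Protocol is"
```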
Contributing
Contributions and bug reports are welcome!