MCP-server-Qwen_Max
by: 66julienmartin
MCP server for Qwen Max model
Overview
Purpose: To provide a reliable implementation of the Model Context Protocol (MCP) server for the Qwen Max language model, optimized for integration with Claude Desktop.
Overview: The Qwen Max MCP Server is built with Node.js and TypeScript, offering a robust solution for text generation through various Qwen models. With a focus on stability, it supports a range of model configurations and is designed to handle complex, multi-step tasks effectively.
Key Features:
- Text Generation with Qwen Models: Supports multiple Qwen commercial models (Max, Plus, Turbo) for versatile text generation.
- Configurable Parameters: Allows users to set parameters such as max_tokens and temperature to customize output for specific needs.
- Error Handling: Comprehensive error management for issues such as API authentication failures, invalid parameters, and network problems.
- MCP Protocol Support: Fully adheres to the Model Context Protocol, ensuring compatibility and efficient communication between components.
- Claude Desktop Integration: Seamless integration lets users leverage the server's capabilities within the Claude Desktop environment.
- Extensive Token Context Windows: Offers large context windows tailored to each model, improving the quality of generated outputs.
Qwen Max MCP Server
A Model Context Protocol (MCP) server implementation for the Qwen Max language model using Node.js/TypeScript for stable and reliable integration.
Why Node.js?
Node.js provides a stable and reliable integration with MCP servers compared to other languages such as Python. Combined with TypeScript, the Node.js MCP SDK offers strong type safety, structured error handling, and good compatibility with Claude Desktop.
Prerequisites
- Node.js (v18 or higher)
- npm
- Claude Desktop
- Dashscope API key
Installation
Installing via Smithery
To install Qwen Max MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @66julienmartin/mcp-server-qwen_max --client claude
Manual Installation
git clone https://github.com/66julienmartin/mcp-server-qwen-max.git
cd mcp-server-qwen-max
npm install
Model Selection
By default, this server uses the Qwen-Max model. The Qwen series offers several commercial models with different capabilities:
Qwen-Max
Best inference performance, especially for complex and multi-step tasks.
- Context window: 32,768 tokens
- Max input: 30,720 tokens
- Max output: 8,192 tokens
- Pricing: $0.0016/1K tokens (input), $0.0064/1K tokens (output)
- Free quota: 1 million tokens
Available versions:
- qwen-max (Stable)
- qwen-max-latest (Latest)
- qwen-max-2025-01-25 (Snapshot, aka qwen-max-0125 or Qwen2.5-Max)
Qwen-Plus
Balanced performance, speed, and cost for moderately complex tasks.
- Context window: 131,072 tokens
- Max input: 129,024 tokens
- Max output: 8,192 tokens
- Pricing: $0.0004/1K tokens (input), $0.0012/1K tokens (output)
- Free quota: 1 million tokens
Available versions:
- qwen-plus (Stable)
- qwen-plus-latest (Latest)
- qwen-plus-2025-01-25 (Snapshot, aka qwen-plus-0125)
Qwen-Turbo
Fast and low cost for simple tasks.
- Context window: 1,000,000 tokens
- Max input: 1,000,000 tokens
- Max output: 8,192 tokens
- Pricing: $0.00005/1K tokens (input), $0.0002/1K tokens (output)
- Free quota: 1 million tokens
Available versions:
- qwen-turbo (Stable)
- qwen-turbo-latest (Latest)
- qwen-turbo-2024-11-01 (Snapshot, aka qwen-turbo-1101)
To modify the model, update the model name in src/index.ts:
// For Qwen-Max (default)
model: "qwen-max"
// For Qwen-Plus
model: "qwen-plus"
// For Qwen-Turbo
model: "qwen-turbo"
For more information about available models, visit the Alibaba Cloud Model Documentation:
https://www.alibabacloud.com/help/en/model-studio/getting-started/models?spm=a3c0i.23458820.2359477120.1.446c7d3f9LT0FY
Project Structure
qwen-max-mcp/
├── src/
│   └── index.ts          # Main server implementation
├── build/                # Compiled files
│   └── index.js
├── LICENSE
├── README.md
├── package.json
├── package-lock.json
└── tsconfig.json
Configuration
- Create a .env file in the project root:
DASHSCOPE_API_KEY=your-api-key-here
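For illustration, here is a minimal sketch of how a server like this might read the key and fail fast when it is missing; requireApiKey is a hypothetical helper, not part of the actual server code (the real server would typically load .env into process.env with a package such as dotenv):

```typescript
// Hypothetical helper (not the server's actual code): fail fast when the
// Dashscope key is missing instead of failing on the first API call.
export function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.DASHSCOPE_API_KEY;
  if (!key) {
    throw new Error("DASHSCOPE_API_KEY is not set; add it to your .env file");
  }
  return key;
}
```

Failing at startup gives a clear error message rather than an opaque authentication failure later on.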
- Update Claude Desktop configuration:
{
"mcpServers": {
"qwen_max": {
"command": "node",
"args": ["/path/to/mcp-server-qwen-max/build/index.js"],
"env": {
"DASHSCOPE_API_KEY": "your-api-key-here"
}
}
}
}
Development
npm run dev # Watch mode
npm run build # Build
npm run start # Start server
Features
- Text generation with Qwen models
- Configurable parameters (max_tokens, temperature)
- Error handling
- MCP protocol support
- Claude Desktop integration
- Support for all Qwen commercial models (Max, Plus, Turbo)
- Extensive token context windows
API Usage
// Example tool call
{
"name": "qwen_max",
"arguments": {
"prompt": "Your prompt here",
"max_tokens": 8192,
"temperature": 0.7
}
}
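As a sketch of how the arguments above could be checked before a request is sent, the hypothetical helper below applies the defaults and enforces qwen-max's 8,192-token output cap (the real server's validation may differ):

```typescript
// Hypothetical pre-flight validation mirroring the example tool call.
interface QwenMaxArgs {
  prompt: string;
  max_tokens?: number;
  temperature?: number;
}

function validateArgs(args: QwenMaxArgs): Required<QwenMaxArgs> {
  const max_tokens = args.max_tokens ?? 8192; // qwen-max max output
  if (max_tokens < 1 || max_tokens > 8192) {
    throw new Error("max_tokens must be between 1 and 8192 for qwen-max");
  }
  const temperature = args.temperature ?? 0.7; // server default
  if (temperature < 0 || temperature > 1) {
    throw new Error("temperature must be between 0.0 and 1.0");
  }
  return { prompt: args.prompt, max_tokens, temperature };
}
```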
Temperature Parameter
Controls the randomness of the model's output:
- Lower values (0.0-0.7): More focused and deterministic outputs
- Higher values (0.7-1.0): More creative and varied outputs
Recommended settings by task:
- Code generation: 0.0-0.3
- Technical writing: 0.3-0.5
- General tasks: 0.7 (default)
- Creative writing: 0.8-1.0
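The recommendations above can be expressed as a small lookup; the names here are illustrative only and not part of the server's API:

```typescript
// Illustrative mapping from task type to a temperature in the
// recommended range above; not part of the actual server code.
type TaskType = "code" | "technical-writing" | "general" | "creative";

function recommendedTemperature(task: TaskType): number {
  switch (task) {
    case "code": return 0.2;              // 0.0-0.3 range
    case "technical-writing": return 0.4; // 0.3-0.5 range
    case "general": return 0.7;           // server default
    case "creative": return 0.9;          // 0.8-1.0 range
  }
}
```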
Error Handling
Provides detailed error messages for:
- API authentication errors
- Invalid parameters
- Rate limiting
- Network issues
- Token limit exceeded
- Model availability issues
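As a rough sketch of how errors like those above might be grouped, the hypothetical classifier below maps conventional HTTP status codes to the categories listed; the actual server's error handling is more detailed, and the specific codes are assumptions, not Dashscope guarantees:

```typescript
// Hypothetical error classification by HTTP status code; the real
// server inspects the API response in more detail.
function classifyError(status: number): string {
  if (status === 401 || status === 403) return "API authentication error";
  if (status === 400) return "Invalid parameters";
  if (status === 429) return "Rate limiting";
  if (status >= 500) return "Model availability issue";
  return "Network or unknown issue";
}
```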
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT