# mcp-perplexity-search

by: spences10

🔍 A Model Context Protocol (MCP) server for integrating Perplexity's AI API with LLMs.
## ⚠️ Notice

**This repository is no longer maintained.**

The functionality of this tool is now available in mcp-omnisearch, which combines multiple MCP tools in one unified package.

Please use mcp-omnisearch instead.
A Model Context Protocol (MCP) server for integrating Perplexity's AI API with LLMs. This server provides advanced chat completion capabilities with specialized prompt templates for various use cases.
## Features

- 🤖 Advanced chat completion using Perplexity's AI models
- 📝 Predefined prompt templates for common scenarios:
  - Technical documentation generation
  - Security best practices analysis
  - Code review and improvements
  - API documentation in structured format
- 🎯 Custom template support for specialized use cases
- 📄 Multiple output formats (text, markdown, JSON)
- 🔗 Optional source URL inclusion in responses
- ⚙️ Configurable model parameters (temperature, max tokens)
- 🚀 Support for various Perplexity models including Sonar and LLaMA
## Configuration

This server requires configuration through your MCP client. Examples for different environments:

### Cline Configuration

Add this to your Cline MCP settings:
```json
{
  "mcpServers": {
    "mcp-perplexity-search": {
      "command": "npx",
      "args": ["-y", "mcp-perplexity-search"],
      "env": {
        "PERPLEXITY_API_KEY": "your-perplexity-api-key"
      }
    }
  }
}
```
### Claude Desktop with WSL Configuration

For WSL environments, add this to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "mcp-perplexity-search": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "source ~/.nvm/nvm.sh && PERPLEXITY_API_KEY=your-perplexity-api-key /home/username/.nvm/versions/node/v20.12.1/bin/npx mcp-perplexity-search"
      ]
    }
  }
}
```
### Environment Variables

The server requires the following environment variable:

- `PERPLEXITY_API_KEY`: Your Perplexity API key (required)
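For local testing, the variable can be set in the shell before launching the server. This is a minimal sketch; the key value is a placeholder, and the `npx` invocation mirrors the Cline configuration above:

```shell
# Export the API key for the current shell session (placeholder value):
export PERPLEXITY_API_KEY="your-perplexity-api-key"

# The server reads the key from its environment when started, e.g.:
# npx -y mcp-perplexity-search
```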
## API

The server implements a single MCP tool with configurable parameters:

### chat_completion

Generate chat completions using the Perplexity API with support for specialized prompt templates.
Parameters:

- `messages` (array, required): Array of message objects with:
  - `role` (string): 'system', 'user', or 'assistant'
  - `content` (string): The message content
- `prompt_template` (string, optional): Predefined template to use:
  - `technical_docs`: Technical documentation with code examples
  - `security_practices`: Security implementation guidelines
  - `code_review`: Code analysis and improvements
  - `api_docs`: API documentation in JSON format
- `custom_template` (object, optional): Custom prompt template with:
  - `system` (string): System message for assistant behaviour
  - `format` (string): Output format preference
  - `include_sources` (boolean): Whether to include sources
- `format` (string, optional): 'text', 'markdown', or 'json' (default: 'text')
- `include_sources` (boolean, optional): Include source URLs (default: false)
- `model` (string, optional): Perplexity model to use (default: 'sonar')
- `temperature` (number, optional): Output randomness (0-1, default: 0.7)
- `max_tokens` (number, optional): Maximum response length (default: 1024)
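As an illustration, a `chat_completion` tool call built from the parameters above might pass arguments like the following (the message contents are placeholders, not part of the server's API):

```json
{
  "messages": [
    { "role": "system", "content": "You are a concise technical assistant." },
    { "role": "user", "content": "Summarise the trade-offs between REST and GraphQL." }
  ],
  "prompt_template": "technical_docs",
  "format": "markdown",
  "include_sources": true,
  "model": "sonar",
  "temperature": 0.2,
  "max_tokens": 512
}
```

Omitted optional fields fall back to the defaults listed above.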
## Development

### Setup

- Clone the repository
- Install dependencies: `pnpm install`
- Build the project: `pnpm build`
- Run in development mode: `pnpm dev`
### Publishing

The project uses changesets for version management. To publish:

- Create a changeset: `pnpm changeset`
- Version the package: `pnpm changeset version`
- Publish to npm: `pnpm release`
## Contributing

Contributions are welcome! Feel free to submit a Pull Request.

## License

MIT License - see the LICENSE file for details.
## Acknowledgments

- Built on the Model Context Protocol
- Powered by the Perplexity Sonar API