# comfy-mcp-server
by: lalanikarim

A server using the FastMCP framework to generate images based on prompts via a remote Comfy server.

## 📌 Overview

**Purpose:** The Comfy MCP Server generates images from text prompts by leveraging a remote Comfy server through the FastMCP framework.

**Overview:** This server interacts with a Comfy server to handle image generation: users submit prompts through a defined workflow and receive the resulting images.
**Key Features:**

- **Remote Interaction:** Connects seamlessly to a remote Comfy server to process image generation prompts.
- **Custom Workflow Support:** Uses a user-defined workflow file exported from Comfy UI, allowing image generation to be tailored to specific needs.
- **Flexible Output Options:** Generated images can be returned as URLs or file downloads, suiting different applications.
- **Environment Configuration:** Behavior is controlled through environment variables, making it easy to point at different Comfy servers and workflows.
# Comfy MCP Server
## Overview
This script sets up a server using the FastMCP framework to generate images based on prompts using a specified workflow. It interacts with a remote Comfy server to submit prompts and retrieve generated images.
## Prerequisites

- `uv` package and project manager for Python.
- Workflow file exported from Comfy UI. This code includes a sample `Flux-Dev-ComfyUI-Workflow.json` for reference. You need to export your own workflow and set the environment variables accordingly.

Install the required packages for local development:

```sh
uvx mcp[cli]
```
## Configuration

Set the following environment variables:

- `COMFY_URL`: URL of your Comfy server.
- `COMFY_WORKFLOW_JSON_FILE`: Absolute path to the API export JSON file for the ComfyUI workflow.
- `PROMPT_NODE_ID`: ID of the text prompt node.
- `OUTPUT_NODE_ID`: ID of the output node with the final image.
- `OUTPUT_MODE`: `url` or `file` to select the desired output format.
Optionally, connect to an Ollama server for prompt generation:

- `OLLAMA_API_BASE`: URL where Ollama is running.
- `PROMPT_LLM`: Name of the model hosted on Ollama for prompt generation.
Example:

```sh
export COMFY_URL=http://your-comfy-server-url:port
export COMFY_WORKFLOW_JSON_FILE=/path/to/the/comfyui_workflow_export.json
export PROMPT_NODE_ID=6
export OUTPUT_NODE_ID=9
export OUTPUT_MODE=file
```
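If you use the optional Ollama-backed prompt generation, also export the two Ollama variables. The host and model name below are placeholders, not defaults shipped with this project; `11434` is Ollama's standard port:

```sh
# Placeholder values -- substitute your own Ollama host and model.
export OLLAMA_API_BASE=http://your-ollama-server:11434
export PROMPT_LLM=llama3
```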
## Usage

Launch Comfy MCP Server with:

```sh
uvx comfy-mcp-server
```
## Example Claude Desktop Config

```json
{
  "mcpServers": {
    "Comfy MCP Server": {
      "command": "/path/to/uvx",
      "args": [
        "comfy-mcp-server"
      ],
      "env": {
        "COMFY_URL": "http://your-comfy-server-url:port",
        "COMFY_WORKFLOW_JSON_FILE": "/path/to/the/comfyui_workflow_export.json",
        "PROMPT_NODE_ID": "6",
        "OUTPUT_NODE_ID": "9",
        "OUTPUT_MODE": "file"
      }
    }
  }
}
```
## Functionality

### `generate_image(prompt: str, ctx: Context) -> Image | str`

Generates an image from a prompt:

- Checks that the required environment variables are set.
- Loads the prompt template from a JSON file.
- Submits the prompt to the Comfy server.
- Polls the server for processing status.
- Retrieves and returns the generated image.
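The steps above can be sketched as follows. This is a minimal illustration assuming ComfyUI's standard HTTP API (`POST /prompt`, `GET /history/<id>`, `GET /view`), not the server's exact implementation; the real tool also validates the environment, reports progress via `ctx`, and honors `OUTPUT_MODE`:

```python
import json
import os
import time
import urllib.parse
import urllib.request


def inject_prompt(workflow: dict, node_id: str, prompt: str) -> dict:
    """Return a copy of an API-export workflow with the prompt node's text replaced."""
    workflow = json.loads(json.dumps(workflow))  # cheap deep copy
    workflow[node_id]["inputs"]["text"] = prompt
    return workflow


def generate_image(prompt: str) -> bytes:
    comfy_url = os.environ["COMFY_URL"]
    with open(os.environ["COMFY_WORKFLOW_JSON_FILE"]) as f:
        workflow = json.load(f)
    workflow = inject_prompt(workflow, os.environ["PROMPT_NODE_ID"], prompt)

    # Submit the workflow; ComfyUI responds with a prompt_id for the queued job.
    req = urllib.request.Request(
        f"{comfy_url}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        prompt_id = json.load(resp)["prompt_id"]

    # Poll the history endpoint until the job appears as finished.
    while True:
        with urllib.request.urlopen(f"{comfy_url}/history/{prompt_id}") as resp:
            history = json.load(resp)
        if prompt_id in history:
            break
        time.sleep(1)

    # Fetch the first image produced by the configured output node.
    image = history[prompt_id]["outputs"][os.environ["OUTPUT_NODE_ID"]]["images"][0]
    params = urllib.parse.urlencode({
        "filename": image["filename"],
        "subfolder": image.get("subfolder", ""),
        "type": image.get("type", "output"),
    })
    with urllib.request.urlopen(f"{comfy_url}/view?{params}") as resp:
        return resp.read()
```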
### `generate_prompt(topic: str, ctx: Context) -> str`

Generates a detailed image generation prompt from a specified topic.
## Dependencies

- `mcp`: FastMCP server framework.
- `json`: JSON data handling.
- `urllib`: HTTP requests.
- `time`: Delays for polling.
- `os`: Environment variable access.
- `langchain`: LLM prompt chain creation.
- `langchain-ollama`: Ollama modules for LangChain.
## License
This project is licensed under the MIT License - see the LICENSE file for details.