# LLMling
by: phil65
Easy MCP (Model Context Protocol) servers and AI agents, defined as YAML.
## 📌 Overview
Purpose: LLMling is designed for declarative development of LLM applications, emphasizing efficient resource management, prompt creation, and tool execution.
Overview: LLMling provides a framework that uses a YAML-based configuration system to define LLM environments and interactions. Users set up and manage resources, prompts, and tools, which are exposed to clients over the standardized Model Context Protocol (MCP).
Key Features:

- Static Declaration: Define the LLM environment in YAML configurations without writing code, improving accessibility and ease of use.
- MCP Protocol: Implements the Model Context Protocol (MCP), giving clients a standardized way to interact with the configured environment.
- Resource Management: Supports ingesting and managing content from various sources, including files and CLI output, so LLMs can dynamically access diverse data.
- Prompt System: Lets users create both static and dynamic prompt templates, improving the consistency of LLM interactions.
- Tool Integration: Exposes Python functions as tools the LLM can call, extending its capabilities.
A framework for declarative LLM application development focused on resource management, prompt templates, and tool execution. This package provides the backend for two consumers: an MCP server and a pydantic-AI based agent.
## Core Concepts
LLMLing provides a YAML-based configuration system for LLM applications. It allows setting up custom MCP servers serving content defined in YAML files.
- Static Declaration: Define your LLM's environment in YAML - no code required
- MCP Protocol: Built on the Model Context Protocol (MCP) for standardized LLM interaction
- Component Types:
  - Resources: Content providers (files, text, CLI output, etc.)
  - Prompts: Message templates with arguments
  - Tools: Python functions callable by the LLM
The YAML configuration creates a complete environment that provides the LLM with:

- Access to content via resources
- Structured prompts for consistent interaction
- Tools for extending capabilities

The library itself is:

- Written from the ground up in modern Python (3.12 or later required)
- 100% typed
- pydantic(-ai) based
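Putting these together, a minimal configuration with one resource, one prompt, and one tool might look like this (names and paths are illustrative):

```yaml
resources:
  guidelines:
    type: path
    path: "./docs/guidelines.md"

prompts:
  summarize:
    messages:
      - role: user
        content: "Summarize the following text: {text}"
    arguments:
      - name: text
        required: true

tools:
  open_url:
    import_path: "webbrowser.open"
```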
## Usage

### 1. CLI Usage

Create a basic configuration file:

```bash
# Create a new config file with basic settings
llmling config init my_config.yml

# Add it to your stored configs
llmling config add myconfig my_config.yml
llmling config set myconfig  # Make it active
```
Basic CLI commands:

```bash
# List available resources
llmling resource list

# Load a resource
llmling resource load python_files

# Execute a tool
llmling tool call open_url url=https://github.com

# Show a prompt
llmling prompt show greet
```
### 2. Agent Usage (powered by pydantic-AI)

Create a configuration file (`config.yml`):

```yaml
tools:
  open_url:
    import_path: "webbrowser.open"

resources:
  bookmarks:
    type: text
    description: "Common Python URLs"
    content: |
      Python Website: https://python.org
```
Use the agent with this configuration:

```python
import asyncio

from llmling import RuntimeConfig
from llmling_agent import LLMlingAgent
from pydantic import BaseModel


class WebResult(BaseModel):
    opened_url: str
    success: bool


async def main() -> None:
    async with RuntimeConfig.open("config.yml") as runtime:
        agent = LLMlingAgent[WebResult](runtime)
        result = await agent.run(
            "Load the bookmarks resource and open the Python website URL"
        )
        print(f"Opened: {result.data.opened_url}")


asyncio.run(main())
```
The agent will:

- Load the bookmarks resource
- Extract the Python website URL
- Use the `open_url` tool to open it
- Return the structured result
### 3. Server Usage

#### With Zed Editor

Add LLMLing as a context server in your `settings.json`:

```json
{
  "context_servers": {
    "llmling": {
      "command": {
        "env": {},
        "label": "llmling",
        "path": "uvx",
        "args": [
          "mcp-server-llmling@latest",
          "start",
          "path/to/your/config.yml",
          "--zed-mode"
        ]
      },
      "settings": {}
    }
  }
}
```
#### With Claude Desktop

Configure LLMLing in your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "llmling": {
      "command": "uvx",
      "args": [
        "mcp-server-llmling@latest",
        "start",
        "path/to/your/config.yml"
      ],
      "env": {}
    }
  }
}
```
#### Manual Server Start

Start the server directly from the command line:

```bash
# Latest version
uvx mcp-server-llmling@latest start path/to/your/config.yml
```
## Resources

Resources are content providers that load and pre-process data from various sources.

### Basic Resource Types
```yaml
# Global settings (dependencies and helper scripts)
global_config:
  requirements: ["myapp"]
  scripts:
    - "https://gist.githubusercontent.com/.../get_readme.py"

resources:
  # Load files matching a glob pattern, with file watching
  python_files:
    type: path
    path: "./src/**/*.py"
    watch:
      enabled: true
      patterns:
        - "*.py"
        - "!**/__pycache__/**"
    processors:
      - name: format_python
      - name: add_header
        required: false

  # Static text content
  system_prompt:
    type: text
    content: |
      You are a code reviewer specialized in Python.
      Focus on these aspects:
      - Code style (PEP8)
      - Best practices
      - Performance
      - Security

  # Output of a CLI command
  git_changes:
    type: cli
    command: "git diff HEAD~1"
    shell: true
    cwd: "./src"
    timeout: 5.0

  # Source code of a Python module
  utils_module:
    type: source
    import_path: myapp.utils
    recursive: true
    include_tests: false

  # Result of a Python callable
  system_info:
    type: callable
    import_path: platform.uname
    keyword_args:
      aliased: true
```
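Once defined, resources can be loaded by name, either from the CLI (`llmling resource load python_files`) or programmatically. A minimal sketch, assuming `RuntimeConfig` exposes an async `load_resource` method (check the API reference for the exact call):

```python
import asyncio

from llmling import RuntimeConfig


async def main() -> None:
    async with RuntimeConfig.open("config.yml") as runtime:
        # Assumed API: load the `python_files` resource declared above.
        resource = await runtime.load_resource("python_files")
        print(resource)


asyncio.run(main())
```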
### Resource Groups

Group related resources for easier access:

```yaml
resource_groups:
  code_review:
    - python_files
    - git_changes
    - system_prompt

  documentation:
    - architecture
    - utils_module
```
### File Watching

Resources that support file watching (`path`, `image`) can detect changes:

```yaml
resources:
  config_files:
    type: path
    path: "./config"
    watch:
      enabled: true
      patterns:
        - "*.yml"
        - "*.yaml"
        - "!.private/**"
      ignore_file: ".gitignore"
```
### Resource Processing

Resources can be processed through a pipeline of processors:

```yaml
context_processors:
  uppercase:
    type: function
    import_path: myapp.processors.to_upper
    async_execution: false

resources:
  processed_file:
    type: path
    path: "./input.txt"
    processors:
      - name: uppercase
```
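The `myapp.processors.to_upper` processor referenced above is ordinary user code. A minimal sketch, assuming a function processor receives the resource content as text and returns the transformed text (the exact signature LLMling calls it with may differ):

```python
# myapp/processors.py (illustrative)
def to_upper(text: str) -> str:
    """Uppercase resource content before it reaches the LLM."""
    return text.upper()
```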
## Prompts

Prompts are message templates formatted with arguments. LLMLing supports declarative YAML prompts and function-based prompts.

### YAML-Based Prompts
```yaml
prompts:
  code_review:
    description: "Review Python code changes"
    messages:
      - role: system
        content: |
          You are a Python code reviewer. Focus on:
          - Code style (PEP8)
          - Best practices
          - Performance
          - Security

          Always structure your review as:
          1. Summary
          2. Issues Found
          3. Suggestions
      - role: user
        content: |
          Review the following code changes:

          {code}

          Focus areas: {focus_areas}
    arguments:
      - name: code
        description: "Code to review"
        required: true
      - name: focus_areas
        description: "Specific areas to focus on (one of: style, security, performance)"
        required: false
        default: "style"
```
### Function-Based Prompts

Function-based prompts provide more control and enable auto-completion:
```yaml
prompts:
  analyze_code:
    import_path: myapp.prompts.code_analysis
    name: "Code Analysis"
    description: "Analyze Python code structure and complexity"
    template: |
      Analyze this code: {code}
      Focus on: {focus}
    completions:
      focus: myapp.prompts.get_analysis_focus_options
```
```python
# myapp/prompts/code_analysis.py
from typing import Literal

FocusArea = Literal["complexity", "dependencies", "typing"]


def code_analysis(
    code: str,
    focus: FocusArea = "complexity",
    include_metrics: bool = True,
) -> list[dict[str, str]]:
    """Analyze Python code structure and complexity.

    Args:
        code: Python source code to analyze
        focus: Analysis focus area (one of: complexity, dependencies, typing)
        include_metrics: Whether to include numeric metrics
    """
    ...


def get_analysis_focus_options(current: str) -> list[str]:
    """Provide auto-completion for the focus argument."""
    options = ["complexity", "dependencies", "typing"]
    return [opt for opt in options if opt.startswith(current)]
```
### Message Content Types

Prompts support different content types:
```yaml
prompts:
  document_review:
    messages:
      - role: system
        content: "You are a document reviewer..."
      - role: user
        content:
          type: resource
          content: "document://main.pdf"
          alt_text: "Main document content"
      - role: user
        content:
          type: image_url
          content: "https://example.com/diagram.png"
          alt_text: "System architecture diagram"
```
### Argument Validation

Prompts validate arguments before formatting:
```yaml
prompts:
  analyze:
    messages:
      - role: user
        content: "Analyze with level {level}"
    arguments:
      - name: level
        description: "Analysis depth (one of: basic, detailed, full)"
        required: true
        type_hint: Literal["basic", "detailed", "full"]
```
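How a `type_hint` like this can be enforced is easy to see with plain `typing` machinery; the following sketches the idea and is not LLMling's internal implementation:

```python
from typing import Literal, get_args

LevelHint = Literal["basic", "detailed", "full"]


def validate_level(value: str) -> str:
    """Reject values outside the declared Literal choices."""
    allowed = get_args(LevelHint)
    if value not in allowed:
        raise ValueError(f"level must be one of {allowed}, got {value!r}")
    return value
```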
## Tools

Tools are Python functions or classes callable by the LLM, providing an extension mechanism.

### Basic Tool Configuration
```yaml
tools:
  # Function-based tool
  analyze_code:
    import_path: myapp.tools.code.analyze
    description: "Analyze Python code structure and metrics"

  # Class-based tool
  browser:
    import_path: llmling.tools.browser.BrowserTool
    description: "Control web browser for research"

  # Tool with a custom display name
  code_metrics:
    import_path: myapp.tools.analyze_complexity
    name: "Analyze Code Complexity"
    description: "Calculate code complexity metrics"

# Include pre-built tool collections
toolsets:
  - llmling.code
  - llmling.web
```
### Toolsets

Toolsets are collections of tools. They support entry-point extensions, OpenAPI endpoints, and class-based tools.

### Function-Based Tools

Example:
```python
# myapp/tools/code.py
import ast
from typing import Any


def _calculate_complexity(tree: ast.AST) -> int:
    # Simplified placeholder metric so the example runs: count branching nodes.
    branches = (ast.If, ast.For, ast.While, ast.Try)
    return sum(isinstance(node, branches) for node in ast.walk(tree))


async def analyze(
    code: str,
    include_metrics: bool = True,
) -> dict[str, Any]:
    """Analyze Python code structure and complexity.

    Args:
        code: Python source code to analyze
        include_metrics: Whether to include numeric metrics

    Returns:
        Dictionary with analysis results
    """
    tree = ast.parse(code)
    return {
        "classes": len([n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]),
        "functions": len([n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]),
        "complexity": _calculate_complexity(tree) if include_metrics else None,
    }
```
### Class-Based Tools

Example:

```python
# myapp/tools/browser.py
from typing import Literal

from playwright.async_api import Page

from llmling.tools.base import BaseTool


class BrowserTool(BaseTool):
    """Tool for web browser automation."""

    name = "browser"
    description = "Control web browser to navigate and interact with web pages"

    async def open_url(self, url: str) -> str:
        """Navigate the browser to a URL (body elided in this example)."""
        ...

    async def click_button(self, selector: str) -> str:
        """Click an element matching a CSS selector (body elided)."""
        ...

    def get_tools(self):
        return [self.open_url, self.click_button]
```
### Tool Collections (Toolsets)

Example:
```python
# myapp/toolsets.py
from typing import Any, Callable


def get_mcp_tools() -> list[Callable[..., Any]]:
    """Entry point exposing tools to LLMling."""
    from myapp.tools import analyze_code, check_style, count_tokens

    return [analyze_code, check_style, count_tokens]
```
In `pyproject.toml`:

```toml
[project.entry-points.llmling]
tools = "myapp.toolsets:get_mcp_tools"
```
### Tool Progress Reporting

Tools can report progress to the client:

```python
from pathlib import Path
from typing import Any

from llmling.tools.base import BaseTool


class AnalysisTool(BaseTool):
    name = "analyze"
    description = "Analyze large codebase"

    async def execute(
        self,
        path: str,
        _meta: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        files = list(Path(path).glob("**/*.py"))
        results = []
        for i, file in enumerate(files):
            # Report progress when the client supplied a progress token
            if _meta and "progressToken" in _meta:
                self.notify_progress(
                    token=_meta["progressToken"],
                    progress=i,
                    total=len(files),
                    description=f"Analyzing {file.name}",
                )
            # _analyze_file is user code, omitted here
            results.append(await self._analyze_file(file))
        return {"results": results}
```
### Complete Tool Example

```yaml
tools:
  analyze:
    import_path: myapp.tools.code.analyze
  browser:
    import_path: myapp.tools.browser.BrowserTool
  batch_analysis:
    import_path: myapp.tools.AnalysisTool

toolsets:
  - llmling.code
  - myapp.tools
```

```python
from pathlib import Path
from typing import Any

from llmling.tools.base import BaseTool


class AnalysisTool(BaseTool):
    """Tool for batch code analysis with progress reporting."""

    name = "batch_analysis"
    description = "Analyze multiple Python files"

    async def startup(self) -> None:
        # _create_analyzer is user code, omitted here
        self.analyzer = await self._create_analyzer()

    async def execute(
        self,
        path: str,
        recursive: bool = True,
        _meta: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        files = list(Path(path).glob("**/*.py" if recursive else "*.py"))
        results = []
        for i, file in enumerate(files, 1):
            if _meta and "progressToken" in _meta:
                self.notify_progress(
                    token=_meta["progressToken"],
                    progress=i,
                    total=len(files),
                    description=f"Analyzing {file.name}",
                )
            try:
                result = await self.analyzer.analyze_file(file)
                results.append({"file": str(file), "metrics": result})
            except Exception as e:
                results.append({"file": str(file), "error": str(e)})
        return {
            "total_files": len(files),
            "successful": len([r for r in results if "metrics" in r]),
            "failed": len([r for r in results if "error" in r]),
            "results": results,
        }

    async def shutdown(self) -> None:
        await self.analyzer.close()
```
More information is available in the documentation.