labs-ai-tools-for-devs
by: docker
An MCP server & prompt runner for all of Docker. Simple Markdown. BYO LLM.
📌 Overview
Purpose: To provide a framework that leverages Docker containers to create and manage complex AI workflows through markdown prompts.
Overview: This framework enables developers to utilize Dockerized tools and large language models (LLMs) for crafting innovative workflows. By integrating markdown as a universal language for both humans and LLMs, it facilitates seamless execution across various environments.
Key Features:
- Markdown Workflows: Define intricate workflows in markdown files and execute them with your chosen LLM, for ease of use and flexibility.
- Multi-Model Support: Configure prompts to run with multiple LLMs, so you can pick the most suitable model for each task and build multi-agent workflows.
- Project Context Extraction: Docker images extract contextual information from projects, enabling more informed interactions and assistance tailored to each project's requirements.
- Versioned Prompts Repository: Store prompts in a Git repository so they can be versioned and shared, facilitating collaboration among developers.
AI Tools for Developers
Agentic AI workflows enabled by Docker containers.
Just Docker. Just Markdown. BYOLLM.
MCP
Any prompts you write and their tools can now be used as MCP servers through the Model Context Protocol.
Use serve mode with the `--mcp` flag, then register prompts by git ref or local path with `--register <ref>`:

```sh
# ...
serve \
  --mcp \
  --register github:docker/labs-ai-tools-for-devs?path=prompts/examples/generate_dockerfile.md \
  --register /Users/ai-overlordz/some/local/prompt.md
# ...
```
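As an illustration, an MCP client could launch this server as a subprocess via Docker. The sketch below shows a hypothetical client configuration (the JSON layout follows the common `mcpServers` convention used by MCP clients; the exact mounts and flags your setup needs may differ):

```json
{
  "mcpServers": {
    "docker-prompts": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "/var/run/docker.sock:/var/run/docker.sock",
        "vonwig/prompts:latest",
        "serve", "--mcp",
        "--register", "github:docker/labs-ai-tools-for-devs?path=prompts/examples/generate_dockerfile.md"
      ]
    }
  }
}
```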
Source for many experiments in our LinkedIn newsletter.
What is this?
This is a simple Docker image which enables infinite possibilities for novel workflows by combining Dockerized Tools, Markdown, and the LLM of your choice.
Markdown is the language
Humans already speak it. So do LLMs. This software lets you write complex workflows in markdown files and run them with your own LLM in your editor, terminal, or any other environment, thanks to Docker.
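For a sense of what such a workflow looks like, here is a minimal prompt-file sketch. The front-matter keys (`model`, `tools`) and the `# prompt user` heading convention are illustrative assumptions, not a definitive schema:

```markdown
---
model: gpt-4
tools:
  - name: findutils-by-name
---

# prompt user

Find all Dockerfiles in my project and summarize what each one builds.
```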
Dockerized Tools
OpenAI-API-compatible LLMs already support tool calling. These tools can simply be Docker images. Our research shows that running tools in Docker enables the LLM to:
- take more complex actions
- get more context with fewer tokens
- work across a wider range of environments
- operate in a sandboxed environment
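For instance, a tool might be declared in a prompt's front-matter as a Docker image plus a command template. The schema sketched here (a `container` key with `image` and `command`, and a JSON-Schema-style `parameters` block as in OpenAI tool calling) is an assumption for illustration:

```yaml
tools:
  - name: findutils-by-name
    description: Find files in the project by name
    parameters:
      type: object
      properties:
        glob:
          type: string
          description: the glob pattern for files to find
    container:
      image: vonwig/findutils:latest
      command:
        - find
        - .
        - -name
```

Because the tool is just an image, the LLM's tool call runs in a sandboxed container against the mounted project, rather than directly on the host.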
Conversation Loop
The conversation loop is the core of each workflow. Tool results, agent responses, and markdown prompts all pass through the loop. If an agent encounters an error, it retries the tool with different parameters, or tries other tools, until it gets the right result.
Multi-Model Agents
Each prompt can be configured to run with different LLM models or even different model families. This allows you to use the best tool for the job. When combined, multi-agent workflows can have each agent run with the model best suited for its task.
With Docker, it is possible to have frontier models plan while lightweight local models execute.
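As a sketch of how this could look, two prompts in one workflow might each pin their own model. The `model` and `url` front-matter keys shown here, and the local Ollama-style endpoint, are illustrative assumptions:

```yaml
---
# planner prompt: a frontier model does the high-level reasoning
model: gpt-4
---
```

```yaml
---
# executor prompt: a lightweight local model behind an
# OpenAI-compatible endpoint carries out the plan
model: llama3
url: http://localhost:11434/v1
---
```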
Project-First Design
To get help from an assistant in your software development loop, the only necessary context is the project you are working on.
Extracting project context
An extractor is a Docker image that runs against a project and extracts information into a JSON context.
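For example, an extractor run against a Node.js project might emit a JSON context along these lines; the shape shown is purely illustrative, not a fixed schema:

```json
{
  "project": {
    "languages": ["javascript"],
    "package_manager": "npm",
    "files": ["package.json", "src/index.js"]
  }
}
```

The assistant can then consume this compact context instead of re-reading the whole project, which is how extraction delivers more context with fewer tokens.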
Prompts as a Trackable Artifact
Prompts are stored in a git repository and can be versioned, tracked, and shared for anyone to run in their own environment.
Get Started
We highly recommend using the VSCode extension to get started. It helps you create prompts and run them with your own LLM.
Running your first loop
VSCode
Install Extension
Get the latest release and install with:

```sh
code --install-extension 'labs-ai-tools-vscode-<version>.vsix'
```
Running:
- Open an existing markdown file, or create a new one, in VSCode.
- Run the command `>Docker AI: Set OpenAI API Key` to set an OpenAI API key, or use a dummy value for local models.
- Run the command `>Docker AI: Select target project` to select a project to run the prompt against.
- Run the command `>Docker AI: Run Prompt` to start the conversation loop.
CLI
Assuming you have a terminal open and Docker Desktop running:
- Set your OpenAI key:

  ```sh
  echo $OPENAI_API_KEY > $HOME/.openai-api-key
  ```

  Note: You must set a dummy value for local models.
- Run the container in your project directory:

  ```sh
  docker run \
    --rm \
    --pull=always \
    -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --mount type=volume,source=docker-prompts,target=/prompts \
    --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
    vonwig/prompts:latest \
    run \
    --host-dir $PWD \
    --user $USER \
    --platform "$(uname -o)" \
    --prompts "github:docker/labs-githooks?ref=main&path=prompts/git_hooks"
  ```
See the docs for more details on how to run the conversation loop.
Building
```sh
docker build -t vonwig/prompts:local -f Dockerfile .
```