console-chat-gpt
by: amidabuddha
Python CLI for AI Chat API
📌 Overview

Purpose: Enable seamless and efficient interaction with leading AI language models directly from the command line.

Overview: console-chat-gpt v6 is a command-line interface (CLI) tool for conversational AI, supporting popular model providers including OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, and locally hosted Ollama models. It focuses on a user-friendly, highly customizable, and efficient chat experience entirely within your terminal on Linux or macOS.
Key Features:
- Multi-Provider Support: Seamlessly interact with major AI providers and open APIs, with easy integration of new or custom models via simple configuration.
- Unified Chat Interface: A single, consistent interface and chat completion function across all supported providers, simplifying cross-provider usage and integration.
- Advanced Configuration & Customization: Full control over settings and models through a convenient `config.toml` file and in-app menu, including role selection, temperature control, and command handling.
- Feature Enhancements: Streaming, conversation history, prompt caching (Anthropic), AI-managed mode (automatic model selection), error handling, image inputs (for select models), and more for an improved user experience.
console-chat-gpt v6
Your Ultimate CLI Companion for Chatting with AI Models
Enjoy seamless interactions with OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, or Ollama-hosted models directly from your command line. Elevate your chat experience with efficiency and ease.
Table of Contents
- Features
- Installation and Usage
- Examples
- Contributing
DISCLAIMER:
This code and project are not connected or affiliated with OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception or any related companies.
Features
- Run any OpenAI SDK-compatible model by editing the `config.toml` file.
- Run Ollama-hosted models locally.
- Anthropic prompt caching fully supported.
- Model Context Protocol (MCP) supported. Copy your `claude_desktop_config.json` as `mcp_config.json` in the root directory to use with any model.
- Unified chat completion function available as an independent library: Python | TypeScript.
- Streaming with all supported models (can be enabled in the `settings` menu).
- OpenAI Assistants Beta fully supported.
- AI Managed mode for automatic model selection based on task complexity.
- Easy configuration through the `config.toml` file or in-app via the `settings` command.
- Role selection for custom chat experiences.
- Temperature control for response creativity and randomness.
- User-friendly commands.
- Image input (with selected models).
- Clear and helpful error handling.
- Conversation history and conversation saving.
- Graceful exit to save progress.
- Actively maintained and open for contributions.
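Much of the customization above lives in `config.toml`. As a loose illustration only — the key and section names below are hypothetical, not the project's actual schema (see `config.toml.sample` in the repository for the real layout) — a custom OpenAI SDK-compatible model entry could look something like this:

```toml
# Hypothetical sketch -- the authoritative schema is in config.toml.sample.
[chat.defaults]
model = "gpt-4o-mini"      # default model selected at startup
temperature = 0.7          # controls response creativity/randomness

[chat.models.my-local-model]              # a custom OpenAI SDK-compatible endpoint
base_url = "http://localhost:11434/v1"    # e.g. a local Ollama server
api_key = "not-needed-locally"
```

The point is simply that any endpoint speaking the OpenAI SDK wire format can be wired in through configuration alone, without code changes.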
Installation and Usage
Supported on Linux and macOS terminals. For Windows, use WSL.
1. Clone the repository: `git clone https://github.com/amidabuddha/console-chat-gpt.git`
2. Enter the directory: `cd console-chat-gpt`
3. Install dependencies: `python3 -m pip install -r requirements.txt`
4. Obtain API keys as needed. On first run, `config.toml.sample` will be copied to `config.toml` and you'll be prompted for API keys.
5. Run the application: `python3 main.py`

   Tip: Create an alias for easy access.

6. Use the `help` command within the chat for more options.
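The alias tip above can be set up as follows; the clone path is an example and should be adjusted to wherever you checked out the repository, and the alias name is your choice:

```shell
# Add this line to ~/.bashrc or ~/.zshrc, then reload your shell.
# Path and alias name below are examples, not project conventions.
alias chatgpt='python3 ~/console-chat-gpt/main.py'
```

After reloading your shell configuration (e.g. `source ~/.bashrc`), typing `chatgpt` starts the application from any directory.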
Examples
More examples are available on the Examples page.
Contributing
Contributions are welcome! If you find bugs, have feature requests, or want to contribute, please open an issue or submit a pull request.