console-chat-gpt

by: amidabuddha

Python CLI for AI Chat API

created 05/08/2023
📌 Overview

Purpose:
Enable seamless and efficient interaction with leading AI language models directly from the command line.

Overview:
console-chat-gpt v6 is a command-line interface (CLI) tool for conversational AI across popular model providers, including OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, and locally hosted Ollama models. It focuses on a user-friendly, highly customizable, and efficient chat experience entirely within your terminal on Linux or macOS.

Key Features:

  • Multi-Provider Support:
    Seamlessly interact with major AI providers and open APIs, supporting easy integration with new or custom models via simple configuration.

  • Unified Chat Interface:
    Offers a single, consistent interface and chat completion function across all supported providers, simplifying cross-platform usage and integration.

  • Advanced Configuration & Customization:
    Provides full control over settings and models through a convenient config.toml file and in-app menu, allowing for role selection, temperature control, and command handling.

  • Feature Enhancements:
    Supports streaming, conversation history, prompt caching (Anthropic), AI-managed mode (automatic model selection), error handling, image inputs (for select models), and more for an improved user experience.
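As an illustration, adding a custom OpenAI-compatible model takes only a few lines of TOML. The section and key names below are hypothetical; consult the shipped config.toml.sample for the exact schema:

```toml
# Hypothetical sketch: section and key names may differ from config.toml.sample.
[chat.defaults]
temperature = 0.7          # response creativity/randomness
system_role = "assistant"  # default chat role

# Any OpenAI SDK-compatible endpoint can be wired in by pointing at its base URL.
[chat.models.my-local-model]
base_url = "http://localhost:8000/v1"
api_key = "sk-placeholder"
model_name = "my-custom-model"
```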


console-chat-gpt v6

Your Ultimate CLI Companion for Chatting with AI Models

Enjoy seamless interactions with OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, or Ollama-hosted models directly from your command line. Elevate your chat experience with efficiency and ease.

Homepage


Table of Contents

  • Features
  • Installation and Usage
  • Examples
  • Contributing

DISCLAIMER:
This code and project are not connected or affiliated with OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, or any related companies.


Features

  • Run any OpenAI SDK-compatible model by editing the config.toml file.
  • Run Ollama-hosted models locally.
  • Anthropic prompt caching fully supported.
  • Model Context Protocol (MCP) supported. Copy your claude_desktop_config.json as mcp_config.json in the root directory to use with any model.
  • Unified chat completion function available as an independent library: Python | TypeScript.
  • Streaming with all supported models (can be enabled in the settings menu).
  • OpenAI Assistants Beta fully supported.
  • AI Managed mode for automatic model selection based on task complexity.
  • Easy configuration through the config.toml file or in-app via the settings command.
  • Role selection for custom chat experiences.
  • Temperature control for response creativity and randomness.
  • User-friendly commands.
  • Image input (with selected models).
  • Clear and helpful error handling.
  • Conversation history and conversation saving.
  • Graceful exit to save progress.
  • Actively maintained and open for contributions.
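The MCP configuration reuses the Claude Desktop format, so an existing claude_desktop_config.json can be dropped in unchanged as mcp_config.json. A minimal sketch might look like this (the filesystem server and the allowed path are placeholders for whatever servers you actually run):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```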

Installation and Usage

Supported on Linux and macOS terminals. For Windows, use WSL.

  1. Clone the repository:

    git clone https://github.com/amidabuddha/console-chat-gpt.git
    
  2. Enter the directory:

    cd console-chat-gpt
    
  3. Install dependencies:

    python3 -m pip install -r requirements.txt
    
  4. Obtain API keys from the providers you plan to use.

  5. On first run, config.toml.sample will be copied to config.toml and you’ll be prompted for API keys.

  6. Run the application:

    python3 main.py
    

    Tip: Create an alias for easy access.

  7. Use the help command within the chat for more options.
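The alias suggested in step 6 can be set up once in your shell profile, for example (adjust the path to wherever you cloned the repository):

```
# Add to ~/.bashrc or ~/.zshrc; the clone path below is an example
alias ccg='python3 ~/console-chat-gpt/main.py'
```

After reloading your shell (e.g. `source ~/.bashrc`), running `ccg` starts the chat from any directory.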


Examples

More examples available on the Examples page.


Contributing

Contributions are welcome! If you find bugs, have feature requests, or want to contribute, please open an issue or submit a pull request.