Your Ultimate CLI Companion for Chatting with AI Models
Enjoy seamless interactions with OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, Moonshot AI, OpenRouter or Ollama-hosted models directly from your command line.
Elevate your chat experience with efficiency and ease.
DISCLAIMER: The intention and implementation of this code are entirely unconnected and unrelated to OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, Moonshot AI, OpenRouter or any other related parties. There is no affiliation or relationship with OpenAI, MistralAI, Anthropic, xAI, Google, DeepSeek, Alibaba, Inception, Moonshot AI, OpenRouter or their subsidiaries in any form.
- 🆕 OpenAI image generation via Responses API. 🆕
- ⭐ OpenAI Responses API supported. ⭐
- ⭐ Run any OpenAI SDK-compatible model - just add the model structure with the relevant `model_name` and `base_url` to the `config.toml` file (see the sketch below this list). ⭐
- ⭐ Run Ollama-hosted models locally. Ollama should be installed and the selected models already downloaded. ⭐
- ⭐ Anthropic prompt caching fully supported ⭐
- ⭐ Model Context Protocol (MCP) supported! If you are already using MCP servers, just copy your `claude_desktop_config.json` to the root directory and rename it to `mcp_config.json` to start using them with any model (see the sketch below this list)! ⭐
- Unified chat completion function separated as an independent library, to be used in any application for a seamless cross-provider API experience. The source code is available in Python and TypeScript.
- Streaming with all supported models; disabled by default, may be enabled in the `settings` menu
- OpenAI Assistants Beta supported
- AI Managed mode: Based on the complexity of the task, automatically determines which model to use.
- Configuration File: Easily customize the app's settings through the `config.toml` file for complete control over how the app works. Also supported in-app via the `settings` command.
- Role selection: Users can define the role of the AI in the conversation, allowing for a more personalized and interactive experience.
- Temperature Control: Adjust the temperature of generated responses to control creativity and randomness in the conversation.
- Command Handling: The app responds to various commands entered by the user for easy and intuitive interaction.
- Image input: Supported with selected models.
- Error Handling: Clear and helpful error messages to easily understand and resolve any issues.
- Conversation History: Review previous interactions and save conversations for future reference, providing context and continuity.
- Graceful Exit: Smoothly handle interruptions, ensuring conversations are saved before exiting to avoid loss of progress.
- A nice team: Actively adding features, open to ideas and fixing bugs.
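
For the OpenAI SDK-compatible feature above, here is a minimal sketch of what such an entry in `config.toml` might look like. The table path, entry name, and values are illustrative assumptions; only `model_name` and `base_url` come from the feature description:

```toml
# Illustrative sketch only - the entry name "my-custom-model" and the values
# are placeholders; model_name and base_url are the two fields the feature
# above calls for.
[chat.models.my-custom-model]
model_name = "llama-3.1-8b-instruct"   # the model ID your OpenAI-compatible server expects
base_url = "http://localhost:8000/v1"  # endpoint of the OpenAI SDK-compatible API
```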
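For the MCP feature, `mcp_config.json` uses the same layout as Claude Desktop's `claude_desktop_config.json`. A minimal sketch with a single filesystem server; the server name, command, and directory path are placeholders, not shipped defaults:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```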
Overall, this app focuses on providing a user-friendly and customizable experience with features that enhance personalization, control, and convenience.
The script works fine on Linux and macOS terminals. For Windows, it's recommended to use WSL.
- Clone the repository: `git clone https://github.com/amidabuddha/console-chat-gpt.git`
- Go inside the folder: `cd console-chat-gpt`
- Install the necessary dependencies: `python3 -m pip install -r requirements.txt`
- Get your API key from OpenAI, MistralAI, Anthropic, xAI, Google AI Studio, DeepSeek, Alibaba, Inception, Moonshot AI or OpenRouter, depending on your selected LLM.
- The `config.toml.sample` will be automatically copied into `config.toml` upon first run, with a prompt to enter your API key(s). Feel free to change any of the other defaults that are not available in the `settings` in-app menu as per your needs.
- Run the executable: `python3 main.py`. Pro-tip: Create an alias for the executable to run it from anywhere.
- Use the `help` command within the chat to check the available options.
- Enjoy!
| [chat.defaults] | Main properties to generate a chat completion/response. |
|---|---|
| `temperature` | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused. May be set for each new chat session if `adjust_temperature` in `[chat.features]` is true. |
| `system_role` | A system (or developer) message inserted into the model's context. Should be one of those listed in the `[chat.roles]` section. May be set for each new chat session if `role_selector` in `[chat.features]` is true. |
| `model` | Model ID used to generate the chat completion/response, like `gpt-4o` or `o3`. Should be listed in the `[chat.models]` section, with relevant parameters. May be set for each new chat session if `model_selector` in `[chat.features]` is true. |
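
Put together, a `[chat.defaults]` block in `config.toml` might look like the sketch below; the values are illustrative, not the shipped defaults:

```toml
[chat.defaults]
temperature = 1.0          # 0 to 2; lower is more focused, higher more random
system_role = "assistant"  # must match a role defined in [chat.roles]
model = "gpt-4o"           # must be listed in [chat.models]
```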
| [chat.features] | Configurable options of the chat application. Some are accessible from within a chat session via the `settings` command. |
|---|---|
| `model_selector` | A selection list of the models available in the `[chat.models]` section of `config.toml`. When true, the model may be selected at the beginning of each new chat session. |
| `adjust_temperature` | A prompt to change the temperature for each chat session. When true, the temperature value may be modified at the beginning of each new chat session. |
| `role_selector` | A selection list of the roles available in the `[chat.roles]` section of `config.toml`. When true, the role may be selected at the beginning of each new chat session. |
| `save_chat_on_exit` | When true, automatically saves the chat session upon using the `exit` command in chat. |
| `continue_chat` | When true, offers a list of previously saved chat sessions to be continued in a new session. The list may be modified from within a chat session via the `chats` command. |
| | Application logging - not yet implemented. |
| `disable_intro_help_message` | All chat commands available in `help` are printed upon chat initialization. This is targeted at new users and may be disabled by setting this option to true. |
| `assistant_mode` | Enables the OpenAI Assistants API as an available selection upon chat initialization. |
| `ai_managed` | Enables AI Managed mode to allow a model to automatically select the best model according to your prompt. Detailed settings below. |
| `streaming` | If set to true, the model response data will be streamed to the client. |
| `mcp_client` | Setting this to false will prevent the default initialization of MCP servers for each chat if not needed. |
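
As a sketch, a `[chat.features]` block using the options above might look like this; the boolean values are illustrative, except that streaming being off by default is stated earlier in this README:

```toml
[chat.features]
model_selector = true
adjust_temperature = true
role_selector = true
save_chat_on_exit = true
continue_chat = true
disable_intro_help_message = false  # set to true to skip the intro help text
assistant_mode = false
ai_managed = false
streaming = false                   # streaming is disabled by default
mcp_client = true
```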
| [chat.managed] | Settings dedicated to the AI Managed mode. Not available to be edited from within a chat session. |
|---|---|
| `assistant` | The preferred model that will evaluate your prompt and select the best available model out of the four configured below to handle it. Should be listed in the `[chat.models]` section, with relevant parameters. |
| `assistant_role` | Custom instruction to the evaluation model. Change this only if you know exactly what you are doing! |
| `assistant_generalist` | Your preferred general-purpose model, typically the one you use the most for any type of query. Should be listed in the `[chat.models]` section, with relevant parameters. |
| `assistant_fast` | Used when speed is preferred over accuracy. Should be listed in the `[chat.models]` section, with relevant parameters. |
| `assistant_thinker` | A reasoning model for complex tasks. Should be listed in the `[chat.models]` section, with relevant parameters. |
| `assistant_coder` | Your preferred model to handle coding and math questions. Should be listed in the `[chat.models]` section, with relevant parameters. |
| `prompt` | When AI Managed mode is used frequently, the Y/N confirmation prompt may be disabled by changing this to false. |
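
A sketch of a `[chat.managed]` block follows. All model names are placeholders and must match entries in `[chat.models]`; `assistant_role` is elided because its shipped value is not documented here:

```toml
[chat.managed]
assistant = "gpt-4o"            # evaluates the prompt and picks one of the four below
assistant_role = "..."          # keep the shipped instruction unless you know exactly what you are doing
assistant_generalist = "gpt-4o" # general-purpose default
assistant_fast = "gpt-4o-mini"  # when speed is preferred over accuracy
assistant_thinker = "o3"        # reasoning model for complex tasks
assistant_coder = "gpt-4o"      # coding and math questions
prompt = true                   # set to false to skip the Y/N confirmation
```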
You can find more examples on our Examples page.
Contributions are welcome! If you find any bugs, have feature requests, or want to contribute improvements, please open an issue or submit a pull request.