A Python-based console application for interacting with AI models via API.
This project lets you send messages to an AI model server from a command-line interface. You can switch between available models, toggle streaming on and off, and manage the system prompt.
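Under the hood, the client talks to the model server over HTTP (`requests` and its dependencies are the only entries in `requirements.txt`). As a rough illustration, here is a minimal sketch of the kind of request such a client might send; the URL, port, and payload shape assume an OpenAI-compatible `/v1/chat/completions` endpoint and are not taken from this project's code:

```python
# Minimal sketch of a chat request to a local model server.
# The URL, port, and payload fields are assumptions (OpenAI-compatible
# /v1/chat/completions); adjust them to match your server.
import requests

API_URL = "http://localhost:1234/v1/chat/completions"  # assumed local endpoint

payload = {
    "model": "llama-3.2-1b-instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"},
    ],
    "stream": False,
}

response = requests.post(API_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```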
Supported commands include `/help`, `/models`, `/usemodel`, `/assistant`, `/system`, and more.

## Requirements

The dependencies are listed in `requirements.txt`:
```
certifi==2024.8.30
charset-normalizer==3.4.0
idna==3.10
requests==2.32.3
urllib3==2.2.3
```
## Installation

Clone the repository:

```
git clone https://github.com/DJJJNabba/AIinteract.git
cd AIinteract
```
Create and activate a virtual environment (optional but recommended).

On Windows:

```
python -m venv AIenv
AIenv\Scripts\activate
```

On macOS/Linux:

```
python3 -m venv AIenv
source AIenv/bin/activate
```
Install the dependencies:

```
pip install -r requirements.txt
```
## Usage

To run the main application after activating the virtual environment, use:

```
python AIconsoleClient.py
```
Once the application is running, you can use the following commands:

- `/help` : View available commands
- `/models` : List available models
- `/usemodel [number]` : Change the current model by number
- `/system [prompt]` : Set a new system prompt
- `/assistant [message]` : Send a message as the assistant
- `/stream` : Toggle streaming on and off (see the streaming sketch after this list)
- `/clear` : Clear the terminal
- `/reset` : Clear both the terminal and conversation context
- `/info` : View current settings (streaming, system prompt, current model)
- `/exit` : Exit the chat
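For a sense of what the `/stream` toggle changes, here is a minimal sketch of consuming a streamed reply with `requests`. The endpoint, the `"stream": true` payload field, and the `data: ...` server-sent-events format are assumptions about an OpenAI-compatible server, not details taken from `AIconsoleClient.py`:

```python
# Hedged sketch: streaming a response from an assumed OpenAI-compatible
# endpoint. The URL and event format ("data: {...}" lines ending with
# "data: [DONE]") are assumptions, not this project's actual code.
import json
import requests

API_URL = "http://localhost:1234/v1/chat/completions"  # assumed endpoint

payload = {
    "model": "llama-3.2-1b-instruct",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": True,
}

with requests.post(API_URL, json=payload, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        # Each chunk carries an incremental piece of the reply.
        delta = chunk["choices"][0]["delta"].get("content", "")
        print(delta, end="", flush=True)
print()
```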
### Example Session

```
You: /models
Available Models:
1. llama-3.2-1b-instruct
2. l3-evil-stheno-v3.2-8b
You: /usemodel 1
Current model set to: llama-3.2-1b-instruct
You: Hello, how are you?
Llama 3.2: I'm doing great, how can I assist you today?
```
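For readers curious how a command interface like this can be wired up, below is an illustrative sketch of a console loop over a few of the commands; the handler names and structure are hypothetical and do not mirror the project's actual code:

```python
# Illustrative sketch of a console command loop in the spirit of the
# client's command set; handler names and wiring are hypothetical and
# do not reflect AIconsoleClient.py.
def console_loop(send_message, list_models, set_model):
    streaming = False
    while True:
        user_input = input("You: ").strip()
        if user_input == "/exit":
            break
        elif user_input == "/models":
            for i, name in enumerate(list_models(), start=1):
                print(f"{i}. {name}")
        elif user_input.startswith("/usemodel "):
            set_model(int(user_input.split()[1]))
        elif user_input == "/stream":
            streaming = not streaming
            print(f"Streaming {'on' if streaming else 'off'}")
        else:
            # Everything else is treated as a chat message.
            print(send_message(user_input, stream=streaming))
```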
## Troubleshooting

If you encounter issues with dependencies, ensure you are using the correct Python version and that all dependencies in `requirements.txt` are installed correctly.
## Contributing

Feel free to submit pull requests or open issues to improve this project. Contributions are welcome!