everything-ai

Your fully proficient, AI-powered and local chatbot assistant🤖

Flowchart

[Figure: flowchart for everything-ai]

Quickstart

1. Clone this repository

git clone https://github.com/AstraBert/everything-ai.git
cd everything-ai

2. Set your .env file

Modify the following variables:

VOLUME: the local directory to mount inside the Docker containers, in the form local_path:container_path
MODELS_PATH: the directory where llama.cpp should look for GGUF models
MODEL: the file name of the GGUF model that llama.cpp should load
MAX_TOKENS: the maximum number of new tokens llama.cpp can generate in its output

An example of a .env file could be:

VOLUME="c:/Users/User/:/User/"
MODELS_PATH="c:/Users/User/.cache/llama.cpp/"
MODEL="stories260K.gguf"
MAX_TOKENS="512"

This maps everything under “c:/Users/User/” on your local machine to “/User/” inside the Docker container, tells llama.cpp where to look for models and which model to load, and caps its output at 512 new tokens.
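
Before starting the containers, it can help to confirm that the model file actually exists where llama.cpp will look for it. A minimal shell sketch, assuming the example .env above (adjust paths and file names to your own setup):

# load the .env values into the current shell
set -a; source .env; set +a
# check that the configured GGUF model is present (assumes MODELS_PATH ends with a trailing slash, as in the example)
ls "${MODELS_PATH}${MODEL}" || echo "Model not found: check MODELS_PATH and MODEL in .env"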

3. Pull the necessary images

docker pull astrabert/everything-ai:latest
docker pull qdrant/qdrant:latest
docker pull ghcr.io/ggerganov/llama.cpp:server
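
If you want to double-check that all three images are available locally before starting the stack, you can list them (an optional sanity check, not required by the setup):

docker images | grep -E "everything-ai|qdrant|llama.cpp"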

4. Run the multi-container app

docker compose up
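
If you prefer to keep your terminal free, you can run the stack in the background and follow the logs of a single service. The service name below is an assumption based on the image names; check the compose file in the repository for the actual names:

docker compose up -d
docker compose ps
docker compose logs -f everything-ai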

5. Go to localhost:8670 and choose your assistant

You will see something like this:

[Screenshot: task selection interface]

Choose the task you want your assistant to perform. Image-based tasks, for example, expect an image database organized like this:

./
├── test/
|   ├── label1/
|   └── label2/
└── train/
    ├── label1/
    └── label2/

You can query the database starting from your own pictures.
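
If you are building such a database from scratch, a minimal sketch for creating the expected layout (label1 and label2 are placeholder class names; replace them with your own labels and copy your images into the corresponding folders):

mkdir -p test/label1 test/label2 train/label1 train/label2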

6. Go to localhost:7860 and start using your assistant

Once everything is ready, you can head over to localhost:7860 and start using your assistant:

[Screenshot: chat interface]