Get Babber running on your machine in minutes. Everything from requirements
to connecting local models and cloud APIs.
Works on both Intel and Apple Silicon Macs. Apple Silicon is recommended for better local LLM performance.
Download and install Docker Desktop for Mac. After installation, launch it and wait for the Docker engine to start.
# Verify Docker is running
docker --version
docker compose version
Download Ollama from the official site. It runs as a native macOS app and serves models on port 11434.
# After installing, pull a model
ollama pull llama3.1:8b
# Verify Ollama is running
curl http://localhost:11434/api/tags
Clone the repository and run the install command. This copies the default config, starts Docker services, and pulls the default model.
git clone https://github.com/okhmat-anton/babber.git
cd babber
make install
Start Babber in production mode, or in dev mode with hot reload.
# Production mode
make run
# Dev mode (hot reload for backend + frontend)
make run-dev
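Before opening the UI, it's worth confirming the stack actually came up. A quick check, assuming the Makefile wraps docker compose (as the manual dev setup later in this guide shows it does):
# List running services
docker compose ps
# Tail recent logs if something looks off
docker compose logs -f --tail=50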
Navigate to http://localhost:4200 and log in with the default credentials.
Username: admin
Password: admin123
⚠️ Change the default password after first login in Settings → Authentication.
Ubuntu, Debian, Fedora, Arch — any distro with Docker support.
# Ubuntu / Debian
sudo apt update && sudo apt install -y docker.io docker-compose-v2
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
# Log out and back in for group changes
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1:8b
git clone https://github.com/okhmat-anton/babber.git
cd babber
make install
make run
Open http://localhost:4200 — login: admin / admin123
Run models on your own hardware — completely free, no API keys, full privacy. Ollama manages model downloads, serving, and memory.
Download from ollama.ai and install. On macOS it runs as a native app in the menu bar; on Linux it installs as a system service. Ollama serves models on port 11434.
On Linux, run curl -fsSL https://ollama.ai/install.sh | sh in the terminal.
In the Babber UI, click "Ollama" in the left sidebar. This page shows the connection status, installed models, and lets you pull new ones.
Enter a model name (e.g. llama3.1:8b) in the input field and click "Pull". A progress bar shows the download status. Once complete, the model appears in the installed list below.
Babber auto-syncs Ollama models to its model registry. Once you pull a model on the Ollama page, it immediately becomes available in the model dropdown across all agents and chat sessions.
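Since Babber syncs from Ollama's own registry, models pulled from the terminal should show up too. A quick sketch using the standard Ollama CLI and API (mistral:7b is just an example name):
# Pull a model from the command line instead of the UI
ollama pull mistral:7b
# Confirm Ollama now lists it
curl http://localhost:11434/api/tags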
For advanced configuration, go to Settings → Models. Here you can see all registered models (both local Ollama and cloud), adjust parameters (temperature, top_p, max tokens), and set a default model for new agents.
Claude, Anthropic's powerful model family, is exceptional at analysis, writing, coding, and reasoning tasks.
Sign up at console.anthropic.com and create an API key in the Dashboard → API Keys section.
Go to Settings → API Keys in the Babber UI. Click "Add API Key" and enter your Anthropic API key.
Go to Settings → Models and click "Add Model". Enter the model ID from Anthropic's docs (e.g. the latest Claude model), select openai_compatible as the provider, and pick your API key from the dropdown.
Edit any agent and select the Claude model from the dropdown. You can also use it in multi-model chat to compare responses from different providers side by side.
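Before wiring the key into Babber, you can sanity-check it directly against Anthropic's API; the models-list call below is one way to do that (it needs only the key and the standard anthropic-version header):
# Should return a JSON list of available Claude models
curl https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01"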
Use OpenAI models directly. Same setup flow — add your API key and model config.
Sign up at platform.openai.com and create an API key in the API Keys section.
Go to Settings → API Keys, click "Add API Key", and enter your OpenAI API key.
Go to Settings → Models, click "Add Model". Enter any model ID from OpenAI's model list, select openai_compatible as the provider, and pick your API key.
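As with Claude, a quick curl against OpenAI's models endpoint confirms the key works before you add it to Babber:
# Should return a JSON list of models your key can access
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"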
Babber works with any OpenAI-compatible API. Here are some popular options.
Ultra-fast inference for open models. Free tier available.
Run open-source models in the cloud with competitive pricing.
Universal gateway to 100+ models from different providers.
High-quality European models with strong multilingual support.
Run vLLM, TGI, or llama.cpp with OpenAI-compatible API.
Access Gemini models through the OpenAI-compatible endpoint.
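Whichever provider you choose, the wiring is the same: an OpenAI-style base URL plus an API key. A generic smoke test, where BASE_URL, API_KEY, and MODEL_ID are placeholders for your provider's values:
# Substitute your provider's base URL, key, and model ID
curl "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "MODEL_ID", "messages": [{"role": "user", "content": "Hello"}]}'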
Babber is configured via a .env file in the project root. Key settings are explained below.
# ═══════ Core Services ═══════
MONGODB_URL=mongodb://agents:mongo_secret_2026@mongodb:4717/ai_agents
REDIS_URL=redis://redis:4379
CHROMADB_URL=http://chromadb:4800
# ═══════ Ollama (Local LLMs) ═══════
OLLAMA_BASE_URL=http://host.docker.internal:11434
DEFAULT_MODEL=llama3.1:8b
# ═══════ Auth ═══════
JWT_SECRET=change-this-to-random-string
ACCESS_TOKEN_EXPIRE_MINUTES=15
REFRESH_TOKEN_EXPIRE_DAYS=7
# ═══════ Ports ═══════
FRONTEND_PORT=4200
BACKEND_PORT=4700
MONGODB_PORT=4717
REDIS_PORT=4379
CHROMADB_PORT=4800
make install automatically creates .env from .env.example.
Change JWT_SECRET and database passwords before deploying to production.
The .env file is gitignored by default.
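One simple way to generate a strong secret before deploying (openssl is one option among many):
# Generate 32 random bytes as hex
openssl rand -hex 32
# Paste the output into .env as JWT_SECRET=<value>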
Dev mode is designed for AI-assisted development — use GitHub Copilot with Claude Opus to modify, extend, and customize Babber in real time.
Dev mode runs infrastructure (Redis, MongoDB, ChromaDB) in Docker while the backend and frontend run locally with hot reload. This means every code change you (or Copilot) make is reflected instantly — no restarts needed.
make run-dev
This single command starts all infrastructure in Docker, installs dependencies, and launches the backend (uvicorn with --reload) and frontend (Vite dev server) locally.
For the best development experience, configure VS Code with Copilot using Claude Opus as the model, then run make run-dev in the terminal. Copilot has full context of the codebase via CLAUDE.md. If you prefer to run the pieces by hand instead of make run-dev:
# 1. Start infrastructure
docker compose up -d redis chromadb mongodb
# 2. Backend (PYTHONPATH=. is required!)
cd backend
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
PYTHONPATH=. OLLAMA_BASE_URL=http://localhost:11434 \
CHROMADB_URL=http://localhost:4800 \
.venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 4700 --reload
# 3. Frontend (in another terminal)
cd frontend
npm install
VITE_BACKEND_URL=http://localhost:4700 npm run dev -- --host 0.0.0.0 --port 4200
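A quick smoke test once both servers are up. The uvicorn command implies a FastAPI backend, which serves interactive API docs at /docs unless that route has been disabled, so these checks are a reasonable sketch:
# Backend should answer on 4700
curl -I http://localhost:4700/docs
# Frontend (Vite dev server) on 4200
curl -I http://localhost:4200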
Default ports:
4200   Frontend
4700   Backend API
4717   MongoDB
4379   Redis
4800   ChromaDB
11434  Ollama
Always set PYTHONPATH=. when running the backend outside Docker. Without it, Python can't resolve app.* imports.
PYTHONPATH=. .venv/bin/uvicorn app.main:app --port 4700 --reload
Use Python 3.11 specifically. Pydantic's Rust core doesn't compile on Python 3.14 yet.
# macOS with Homebrew
brew install python@3.11
# Create venv with specific version
python3.11 -m venv backend/.venv
If Babber can't reach Ollama in dev mode, it's usually because it is still using the Docker URL (host.docker.internal). Set the override:
OLLAMA_BASE_URL=http://localhost:11434
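You can export the override for the current shell or put it in .env; either way, confirm Ollama answers on localhost:
# Export for this shell session
export OLLAMA_BASE_URL=http://localhost:11434
# Ollama should return its model list
curl "$OLLAMA_BASE_URL/api/tags"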
If a port is already in use, kill the existing process or change the port in .env:
# Find what's using the port
lsof -i :4700
# Kill it
kill -9 <PID>
# Or use make command
make stop-dev
If the Docker services won't start, make sure Docker Desktop is running. Clean up stale data if needed:
# Stop everything
make stop
# Nuclear option — removes all data
make clean
# Fresh start
make install
If a model isn't responding, ensure Ollama is running and accessible. Check the connection in Settings → Ollama. For cloud models, verify that the API key and base URL are correct.
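For local models, the standard Ollama API makes this easy to check from the terminal (the model name below is just an example):
# Is Ollama up and serving?
curl http://localhost:11434/api/tags
# Does the model answer outside Babber?
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Hi", "stream": false}'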
Clone, install, run. Your AI agent platform is waiting.