Docker · macOS · Linux · Windows

Installation & Setup Guide

Get Babber running on your machine in minutes. Everything from requirements
to connecting local models and cloud APIs.

System Requirements

💻 Hardware

  • RAM: 8 GB minimum (16 GB+ recommended for local LLMs)
  • Disk: 10 GB free space (more for large models)
  • CPU: Any modern x86_64 or ARM64 (Apple Silicon works great)
  • GPU: Optional — accelerates Ollama inference, not required

📦 Software

  • Docker Desktop — required for all infrastructure services
  • Docker Compose v2+ (included with Docker Desktop)
  • Git — to clone the repository
  • Make — for build automation (pre-installed on macOS/Linux)

🔧 For Dev Mode

  • Python 3.11 — backend runtime (not 3.14; pydantic-core fails to build on it)
  • Node.js 18+ — frontend build tool
  • npm — package manager (comes with Node.js)
  • Ollama — local LLM runner (installed separately)
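
Before using dev mode, it's worth confirming that all four tools respond (assuming default installs on your PATH):

# Verify the dev toolchain
python3.11 --version   # should print Python 3.11.x
node --version         # should print v18 or newer
npm --version
ollama --version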

Install on macOS

Works on both Intel and Apple Silicon Macs. Apple Silicon is recommended for better local LLM performance.

1. Install Docker Desktop

Download and install Docker Desktop for Mac. After installation, launch it and wait for the Docker engine to start.

# Verify Docker is running
docker --version
docker compose version

2. Install Ollama (for local models)

Download Ollama from the official site. It runs as a native macOS app and serves models on port 11434.

# After installing, pull a model
ollama pull llama3.1:8b

# Verify Ollama is running
curl http://localhost:11434/api/tags
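
The /api/tags response is JSON listing every installed model; if you have jq, you can pull out just the names:

# List installed model names (requires jq)
curl -s http://localhost:11434/api/tags | jq '.models[].name'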

3. Clone & Install Babber

Clone the repository and run the install command. This copies the default config, starts Docker services, and pulls the default model.

git clone https://github.com/okhmat-anton/babber.git
cd babber
make install
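
If you want to see what's happening under the hood, the install roughly corresponds to the steps below (a sketch based on the description above, not the exact Makefile recipe):

# Approximate manual equivalent of make install
cp .env.example .env      # copy the default config
docker compose up -d      # start the Docker services
ollama pull llama3.1:8b   # pull the default model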

4. Start Babber

Run it in production mode, or in dev mode with hot reload.

# Production mode
make run

# Dev mode (hot reload for backend + frontend)
make run-dev

5. Open in Browser

Navigate to http://localhost:4200 and log in with the default credentials.

Username: admin
Password: admin123

⚠️ Change the default password after first login in Settings → Authentication.

Install on Linux

Ubuntu, Debian, Fedora, Arch — any distro with Docker support.

1. Install Docker & Docker Compose

# Ubuntu / Debian
sudo apt update && sudo apt install -y docker.io docker-compose-v2
sudo systemctl enable --now docker
sudo usermod -aG docker $USER

# Log out and back in for group changes
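
To pick up the group change without logging out, and to confirm Docker works without sudo:

# Apply the docker group in the current shell, then test
newgrp docker
docker run --rm hello-world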

2. Install Ollama

curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1:8b
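
On Linux the install script registers Ollama as a systemd service, so it can be checked and managed like any other service:

# Verify and manage the Ollama service
systemctl status ollama
sudo systemctl restart ollama   # if it isn't responding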

3. Clone & Run

git clone https://github.com/okhmat-anton/babber.git
cd babber
make install
make run

Open http://localhost:4200 — login: admin / admin123

Connect Local Models via Ollama

Run models on your own hardware — completely free, no API keys, full privacy. Ollama manages model downloads, serving, and memory.

1. Install Ollama on Your Machine

Download from ollama.ai and install. On macOS it runs as a native app in the menu bar; on Linux it installs as a system service. Ollama serves models on port 11434.

📥
macOS: Download the .dmg from ollama.ai and drag to Applications. Launch it — the llama icon appears in the menu bar.
Linux: Run curl -fsSL https://ollama.ai/install.sh | sh in the terminal.
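
If Babber's backend runs inside Docker (production mode), it reaches Ollama through host.docker.internal rather than localhost. You can test that path from a throwaway container; the --add-host flag is needed on Linux, where host.docker.internal isn't defined by default:

# Test Ollama reachability from inside Docker
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:11434/api/tags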

2. Open Ollama Page in Babber

In the Babber UI, click "Ollama" in the left sidebar. This page shows the connection status, installed models, and lets you pull new ones.

🟢
Check Status: At the top of the page you'll see a green "Running" chip if Ollama is detected, or red "Stopped" if not. Make sure Ollama is running before proceeding.
🔍
Browse Catalog: Click "Browse Models" to expand the model catalog. You'll see popular models organized by category — click "Pull" next to any model to download it.
📥
Pull a Model: Type a model name (e.g. llama3.1:8b) in the input field and click "Pull". A progress bar shows the download status. Once complete, the model appears in the installed list below.
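
The same operations are available from the terminal via the Ollama CLI, if you prefer working outside the UI:

ollama pull llama3.1:8b   # download a model
ollama list               # show installed models
ollama rm llama3.1:8b     # delete a model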

3. Models Appear Automatically

Babber auto-syncs Ollama models to its model registry. Once you pull a model on the Ollama page, it immediately becomes available in the model dropdown across all agents and chat sessions.

💡
Auto-sync: No manual model configuration needed for Ollama models. Pull a model on the Ollama page → it appears in every model selector instantly. You can also manage, delete, and monitor models right from this page.

4. Manage in Settings → Models

For advanced configuration, go to Settings → Models. Here you can see all registered models (both local Ollama and cloud), adjust parameters (temperature, top_p, max tokens), and set a default model for new agents.
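
These are the standard OpenAI-style sampling parameters. For reference, here is where they sit in a raw chat completion request (illustrative values, aimed at Ollama's OpenAI-compatible /v1 endpoint):

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "top_p": 0.9,
    "max_tokens": 256
  }'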

Recommended Models by Use Case

llama3.1:8b
Fast general-purpose model. Good for chat, summarization, and simple tasks.
8B params · ~5 GB · Fast

qwen2.5:32b
Strong reasoning and code generation. Great balance of speed and intelligence.
32B params · ~20 GB · Medium

deepseek-r1:70b
Advanced reasoning with chain-of-thought. Best open model for complex analysis.
70B params · ~40 GB · Slow

mistral:7b
Lightweight and efficient. Good for quick tasks and low-resource environments.
7B params · ~4 GB · Very Fast

Connect Claude

Anthropic's powerful model family. Exceptional at analysis, writing, coding, and reasoning tasks.

1. Get an API Key

Sign up at console.anthropic.com and create an API key in the Dashboard → API Keys section.

2. Add API Key in Babber

Go to Settings → API Keys in the Babber UI. Click "Add API Key" and enter:

Provider: OpenAI-compatible
API Key: sk-ant-api03-... (your Anthropic key)
Base URL: https://api.anthropic.com/v1
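
To confirm the key and base URL work before wiring them into Babber, you can call Anthropic's OpenAI-compatible endpoint directly (replace <model-id> with a current ID from Anthropic's model page; the Bearer header is how the OpenAI-compatible layer authenticates):

# Smoke-test the key against the OpenAI-compatible endpoint
curl https://api.anthropic.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -d '{"model": "<model-id>", "messages": [{"role": "user", "content": "Hi"}]}'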

3. Add Model Configuration

Go to Settings → Models and click "Add Model". Enter the model ID from Anthropic's docs (e.g. the latest Claude model), select openai_compatible as the provider, and pick your API key from the dropdown.

📋
Check Anthropic's model page for current model IDs and pricing. New models appear frequently — just add the model ID in Babber and it works.

4. Assign to Agents

Edit any agent and select the Claude model from the dropdown. You can also use it in multi-model chat to compare responses from different providers side by side.

💡
Multi-model: Assign different models to different roles — use a powerful Claude model for deep analysis, a fast local model for quick tasks, and another cloud model for creative writing.

Connect OpenAI / GPT

Use OpenAI models directly. Same setup flow — add your API key and model config.

1. Get an API Key

Sign up at platform.openai.com and create an API key in the API Keys section.

2. Add API Key in Babber

Go to Settings → API Keys, click "Add API Key" and enter:

Base URL: https://api.openai.com/v1
API Key: sk-... (your OpenAI key)
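
A quick way to verify the key before adding it: the /v1/models endpoint lists every model the key can access:

# List models available to this key
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"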

3. Add Model

Go to Settings → Models, click "Add Model". Enter any model ID from OpenAI's model list, select openai_compatible as the provider, and pick your API key.

📋
Check OpenAI's models page for current model IDs and pricing. Babber works with any OpenAI model — just enter the model ID.

Other LLM Providers

Babber works with any OpenAI-compatible API. Here are some popular options.

🔥 Groq

Ultra-fast inference for open models. Free tier available.

Base URL: https://api.groq.com/openai/v1
Models: llama-3.1-70b, mixtral-8x7b

🌊 Together AI

Run open-source models in the cloud with competitive pricing.

Base URL: https://api.together.xyz/v1
Models: meta-llama/..., mistralai/...

🏠 OpenRouter

Universal gateway to 100+ models from different providers.

Base URL: https://openrouter.ai/api/v1
Models: Any model on OpenRouter

🔮 Mistral AI

High-quality European models with strong multilingual support.

Base URL: https://api.mistral.ai/v1
Models: mistral-large, mistral-medium

🤗 Local / Self-Hosted

Run vLLM, TGI, or llama.cpp with OpenAI-compatible API.

Base URL: http://your-server:8000/v1
Models: Any model you deploy

☁️ Google Gemini

Access Gemini models through the OpenAI-compatible endpoint.

Base URL: https://generativelanguage.googleapis.com/v1beta/openai
Models: gemini-2.0-flash, gemini-pro
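
Because all of these expose the same OpenAI-compatible surface, one curl template covers them: substitute the Base URL and key from the relevant card (most of these providers also implement the /models listing route):

# Generic smoke test for an OpenAI-compatible provider
BASE_URL=https://api.groq.com/openai/v1   # from the card above
API_KEY=your-key-here
curl "$BASE_URL/models" -H "Authorization: Bearer $API_KEY"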

Environment Configuration

Babber is configured via a .env file in the project root. Key settings are explained below.

.env
# ═══════ Core Services ═══════
MONGODB_URL=mongodb://agents:mongo_secret_2026@mongodb:4717/ai_agents
REDIS_URL=redis://redis:4379
CHROMADB_URL=http://chromadb:4800

# ═══════ Ollama (Local LLMs) ═══════
OLLAMA_BASE_URL=http://host.docker.internal:11434
DEFAULT_MODEL=llama3.1:8b

# ═══════ Auth ═══════
JWT_SECRET=change-this-to-random-string
ACCESS_TOKEN_EXPIRE_MINUTES=15
REFRESH_TOKEN_EXPIRE_DAYS=7

# ═══════ Ports ═══════
FRONTEND_PORT=4200
BACKEND_PORT=4700
MONGODB_PORT=4717
REDIS_PORT=4379
CHROMADB_PORT=4800

⚠️
Important: make install automatically creates .env from .env.example. Change JWT_SECRET and database passwords before deploying to production. The .env file is gitignored by default.
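
A simple way to generate a strong JWT_SECRET (openssl ships with macOS and nearly every Linux distro):

# Generate a random 64-character hex secret for .env
openssl rand -hex 32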

Running in Dev Mode

Dev mode is designed for AI-assisted development — use GitHub Copilot with Claude Opus to modify, extend, and customize Babber in real time.

What is Dev Mode?

Dev mode runs infrastructure (Redis, MongoDB, ChromaDB) in Docker while the backend and frontend run locally with hot reload. This means every code change you (or Copilot) make is reflected instantly — no restarts needed.

🤖
AI-Powered Development: Dev mode is optimized for working with GitHub Copilot + Claude Opus in VS Code. Open the project, start dev mode, and let Copilot help you build new features, create addons, modify agents, and extend the platform.

Quick Start

make run-dev

This single command starts all infrastructure in Docker, installs dependencies, and launches the backend (uvicorn with --reload) and frontend (Vite dev server) locally.

Copilot + Claude Opus Setup

For the best development experience, configure VS Code with Copilot using Claude Opus as the model:

1️⃣
Install GitHub Copilot extension in VS Code (Extensions → search "GitHub Copilot").
2️⃣
Select Claude Opus as the Copilot model: open Copilot Chat, click the model selector, choose "Claude Opus 4".
3️⃣
Open the project in VS Code and run make run-dev in the terminal. Copilot has full context of the codebase via CLAUDE.md.
4️⃣
Start building: Ask Copilot to create addons, add API endpoints, modify the UI, fix bugs — changes apply instantly with hot reload.

Manual Start (if make run-dev doesn't work)

# 1. Start infrastructure
docker compose up -d redis chromadb mongodb

# 2. Backend (PYTHONPATH=. is required!)
cd backend
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
PYTHONPATH=. OLLAMA_BASE_URL=http://localhost:11434 \
CHROMADB_URL=http://localhost:4800 \
.venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 4700 --reload

# 3. Frontend (in another terminal)
cd frontend
npm install
VITE_BACKEND_URL=http://localhost:4700 npm run dev -- --host 0.0.0.0 --port 4200
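
Once both processes are up, a quick wiring check (the backend runs under uvicorn; if it's a standard FastAPI app, interactive API docs are usually served at /docs):

# Frontend should answer on 4200
curl -I http://localhost:4200

# Backend on 4700 (FastAPI normally exposes docs at /docs)
curl -I http://localhost:4700/docs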

Default Ports

4200 · Frontend
4700 · Backend API
4717 · MongoDB
4379 · Redis
4800 · ChromaDB
11434 · Ollama

Common Troubleshooting

❌ "PYTHONPATH" error when running backend

Always set PYTHONPATH=. when running the backend outside Docker. Without it, Python can't resolve app.* imports.

PYTHONPATH=. .venv/bin/uvicorn app.main:app --port 4700 --reload

❌ Python 3.14 breaks pydantic-core

Use Python 3.11 specifically. Pydantic's Rust core doesn't compile on Python 3.14 yet.

# macOS with Homebrew
brew install python@3.11

# Create venv with specific version
python3.11 -m venv backend/.venv

❌ Ollama "nodename not known" error

This happens when Babber tries to use the Docker URL (host.docker.internal) in dev mode. Set the override:

OLLAMA_BASE_URL=http://localhost:11434

❌ Port already in use

Kill existing processes or change ports in .env:

# Find what's using the port
lsof -i :4700

# Kill it
kill -9 <PID>

# Or use make command
make stop-dev

❌ Docker services won't start

Make sure Docker Desktop is running. Clean up stale data if needed:

# Stop everything
make stop

# Nuclear option — removes all data
make clean

# Fresh start
make install

❌ Models not appearing in dropdown

Ensure Ollama is running and accessible. Check the connection in Settings → Ollama. For cloud models, verify API key and base URL are correct.
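
Useful terminal checks (the compose service name below is an assumption; confirm it with docker compose ps):

# Is Ollama serving models?
ollama list
curl -s http://localhost:11434/api/tags

# Backend logs often show why a model failed to register
docker compose logs --tail=50 backend   # service name assumed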

Ready to get started?
Three commands away.

Clone, install, run. Your AI agent platform is waiting.