Self-Hosting Guide

Deploy Multica on your own infrastructure.

Architecture Overview

Multica has three components:

| Component | Description | Technology |
| --- | --- | --- |
| Backend | REST API + WebSocket server | Go (single binary) |
| Frontend | Web application | Next.js 16 |
| Database | Primary data store | PostgreSQL 17 with pgvector |

Each user who wants to run AI agents locally also installs the multica CLI and runs the agent daemon on their own machine.

Prerequisites

  • Docker and Docker Compose

Quick Install

Two commands to set up everything:

# Install CLI + provision self-host server
curl -fsSL https://raw.githubusercontent.com/multica-ai/multica/main/scripts/install.sh | bash -s -- --with-server

# Configure CLI, authenticate, and start the daemon
multica setup self-host

This installs the CLI, checks out the latest self-host assets, pulls the official Multica images from GHCR, and configures everything for localhost. Then open http://localhost:3000 and pick a login method: configure RESEND_API_KEY in .env for email-based codes (recommended), or set APP_ENV=development in .env to enable the dev master code 888888. See Step 2 — Log In for details.

If the self-host server is already running and you only need the CLI on a macOS/Linux machine, install it with Homebrew: brew install multica-ai/tap/multica.

For a step-by-step setup, see below.

Step-by-Step Setup

Step 1 — Start the Server

git clone https://github.com/multica-ai/multica.git
cd multica
make selfhost

make selfhost automatically creates .env, generates a random JWT_SECRET, and starts all services via Docker Compose.

By default it pulls the latest stable release images from GHCR. To build the backend and web images from your current checkout instead, run make selfhost-build. If the selected GHCR tag has not been published yet, make selfhost prints a message telling you to fall back to make selfhost-build. make selfhost-build uses local multica-backend:dev / multica-web:dev tags, so it does not overwrite the pulled :latest images.

Once the services are up, continue to Step 2.

If you prefer running the Docker Compose steps manually: cp .env.example .env, edit JWT_SECRET, then docker compose -f docker-compose.selfhost.yml pull && docker compose -f docker-compose.selfhost.yml up -d.

Step 2 — Log In

Open http://localhost:3000. The Docker self-host stack defaults to APP_ENV=production (set in docker-compose.selfhost.yml), so the dev master code is disabled by default for safety on public deployments. Pick one of the following to log in:

  • Recommended (production): configure RESEND_API_KEY in .env, then restart the backend. Real verification codes will be sent to the email address you enter. See Configuration below.
  • Evaluation / private network: set APP_ENV=development in .env and restart the backend. Verification code 888888 will then work for any email address.
  • Without configuring either: the verification code is generated server-side and printed to the backend container logs (look for [DEV] Verification code for ...:). Useful for one-off testing on a single machine.

Changes to ALLOW_SIGNUP and GOOGLE_CLIENT_ID also take effect after restarting the backend / compose stack. The web UI reads both from /api/config at runtime, so no web rebuild is needed.

Warning: do not set APP_ENV=development on a publicly reachable instance — anyone who knows an email address can then log in with 888888.
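If you rely on the server-generated codes from the third option, a quick way to pull them out of the container logs is a grep like the following. The service name backend is an assumption here; check docker-compose.selfhost.yml for the actual service name in your checkout.

```shell
# Show recent dev verification codes from the backend container logs.
# "backend" is an assumed service name; verify it in docker-compose.selfhost.yml.
docker compose -f docker-compose.selfhost.yml logs --tail 100 backend \
  | grep "\[DEV\] Verification code"
```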

Step 3 — Install CLI & Start Daemon

The daemon runs on your local machine (not inside Docker). It detects installed AI agent CLIs, registers them with the server, and executes tasks.

a) Install the CLI and an AI agent

brew install multica-ai/tap/multica

You also need at least one AI agent CLI:

  • Claude Code (claude on PATH)
  • Codex (codex on PATH)
  • Gemini CLI (gemini on PATH)
  • OpenCode (opencode on PATH)
  • OpenClaw (openclaw on PATH)
  • Hermes (hermes on PATH)

b) One-command setup

multica setup self-host

This automatically:

  1. Configures the CLI to connect to localhost
  2. Opens your browser for authentication
  3. Discovers your workspaces
  4. Starts the daemon in the background

For on-premise deployments with custom domains:

multica setup self-host --server-url https://api.example.com --app-url https://app.example.com

Verify the daemon is running:

multica daemon status

Alternatively, configure step by step:

multica config set server_url http://localhost:8080
multica config set app_url http://localhost:3000
multica login
multica daemon start

Step 4 — Verify & Start Using

  1. Open your workspace at http://localhost:3000
  2. Navigate to Settings → Runtimes — you should see your machine listed
  3. Go to Settings → Agents and create a new agent
  4. Create an issue and assign it to your agent

Stopping Services

# Stop Docker Compose services
make selfhost-stop

# Stop the local daemon
multica daemon stop

Switching to Multica Cloud

If you've been self-hosting and want to switch your CLI to Multica Cloud:

multica setup

This reconfigures the CLI for multica.ai, re-authenticates, and restarts the daemon. You will be prompted before overwriting the existing configuration.

Your local Docker services are unaffected. Stop them separately if you no longer need them.

Upgrading

docker compose -f docker-compose.selfhost.yml pull
docker compose -f docker-compose.selfhost.yml up -d

Pin MULTICA_IMAGE_TAG in .env to an exact version like v0.2.4 if you want to stay on a specific release. Migrations run automatically on backend startup. If the selected GHCR tag has not been published yet, fall back to make selfhost-build or docker compose -f docker-compose.selfhost.yml -f docker-compose.selfhost.build.yml up -d --build.
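As a sketch, pinning and upgrading to a specific release could look like this (v0.2.4 is the example version from above; substitute the tag you want, and edit the existing line instead of appending if MULTICA_IMAGE_TAG is already set in .env):

```shell
# Pin the stack to a specific release tag, then roll the services forward.
# Assumes docker-compose.selfhost.yml reads MULTICA_IMAGE_TAG from .env.
echo 'MULTICA_IMAGE_TAG=v0.2.4' >> .env
docker compose -f docker-compose.selfhost.yml pull
docker compose -f docker-compose.selfhost.yml up -d
```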


Configuration

All configuration is done via environment variables. Copy .env.example as a starting point.

Required Variables

| Variable | Description | Example |
| --- | --- | --- |
| DATABASE_URL | PostgreSQL connection string | postgres://multica:multica@localhost:5432/multica?sslmode=disable |
| JWT_SECRET | Must change from default. Secret key for signing JWT tokens. Use a long random string. | Generate with openssl rand -hex 32 |
| FRONTEND_ORIGIN | URL where the frontend is served (used for CORS) | https://app.example.com |
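A minimal sketch of generating JWT_SECRET with the openssl command from the table:

```shell
# 32 random bytes, hex-encoded: a 64-character secret
JWT_SECRET=$(openssl rand -hex 32)
echo "JWT_SECRET=${JWT_SECRET}" >> .env   # or paste it into .env by hand
```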

Email (Required for Authentication)

Multica uses email-based verification codes sent via Resend.

| Variable | Description |
| --- | --- |
| RESEND_API_KEY | Your Resend API key |
| RESEND_FROM_EMAIL | Sender email address (default: noreply@multica.ai) |

Google OAuth (Optional)

| Variable | Description |
| --- | --- |
| GOOGLE_CLIENT_ID | Google OAuth client ID |
| GOOGLE_CLIENT_SECRET | Google OAuth client secret |
| GOOGLE_REDIRECT_URI | OAuth callback URL (e.g. https://app.example.com/auth/callback) |

Changes take effect after restarting the backend / compose stack. The web UI reads GOOGLE_CLIENT_ID from /api/config at runtime, so no web rebuild is needed.

Signup Controls (Optional)

| Variable | Description |
| --- | --- |
| ALLOW_SIGNUP | Set to false to disable new user signups on a private instance |
| ALLOWED_EMAIL_DOMAINS | Optional comma-separated allowlist of email domains |
| ALLOWED_EMAILS | Optional comma-separated allowlist of exact email addresses |

Changes take effect after restarting the backend / compose stack. The web UI reads ALLOW_SIGNUP from /api/config at runtime, so no web rebuild is needed.

File Storage (Optional)

For file uploads and attachments, configure S3 and CloudFront:

| Variable | Description |
| --- | --- |
| S3_BUCKET | S3 bucket name |
| S3_REGION | AWS region (default: us-west-2) |
| CLOUDFRONT_DOMAIN | CloudFront distribution domain |
| CLOUDFRONT_KEY_PAIR_ID | CloudFront key pair ID for signed URLs |
| CLOUDFRONT_PRIVATE_KEY | CloudFront private key (PEM format) |

Cookies

| Variable | Description |
| --- | --- |
| COOKIE_DOMAIN | Optional Domain attribute for session + CloudFront cookies. Leave empty for single-host deployments (localhost, LAN IP, or a single hostname). Only set it when the frontend and backend sit on different subdomains of one registered domain (e.g. .example.com). Do not use an IP literal: RFC 6265 forbids IP addresses in the cookie Domain attribute, and browsers will drop such Set-Cookie headers. |

The Secure flag on session cookies is derived automatically from the scheme of FRONTEND_ORIGIN: HTTPS origins get Secure cookies; plain-HTTP origins (LAN / private-network self-host) get non-secure cookies so the browser can actually store them.

Server

| Variable | Default | Description |
| --- | --- | --- |
| PORT | 8080 | Backend server port |
| FRONTEND_PORT | 3000 | Frontend port |
| CORS_ALLOWED_ORIGINS | Value of FRONTEND_ORIGIN | Comma-separated list of allowed origins |
| LOG_LEVEL | info | Log level: debug, info, warn, error |

CLI / Daemon

These are configured on each user's machine, not on the server:

| Variable | Default | Description |
| --- | --- | --- |
| MULTICA_SERVER_URL | ws://localhost:8080/ws | WebSocket URL for daemon → server connection |
| MULTICA_APP_URL | http://localhost:3000 | Frontend URL for CLI login flow |
| MULTICA_DAEMON_POLL_INTERVAL | 3s | How often the daemon polls for tasks |
| MULTICA_DAEMON_HEARTBEAT_INTERVAL | 15s | Heartbeat frequency |

Agent-specific overrides:

| Variable | Description |
| --- | --- |
| MULTICA_CLAUDE_PATH | Custom path to the claude binary |
| MULTICA_CLAUDE_MODEL | Override the Claude model used |
| MULTICA_CODEX_PATH | Custom path to the codex binary |
| MULTICA_CODEX_MODEL | Override the Codex model used |
| MULTICA_OPENCODE_PATH | Custom path to the opencode binary |
| MULTICA_OPENCODE_MODEL | Override the OpenCode model used |
| MULTICA_OPENCLAW_PATH | Custom path to the openclaw binary |
| MULTICA_OPENCLAW_MODEL | Override the OpenClaw model used |
| MULTICA_HERMES_PATH | Custom path to the hermes binary |
| MULTICA_HERMES_MODEL | Override the Hermes model used |
| MULTICA_GEMINI_PATH | Custom path to the gemini binary |
| MULTICA_GEMINI_MODEL | Override the Gemini model used |
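For example, pointing the daemon at a non-default Claude Code binary might look like this. The path and model name below are placeholders, not defaults; substitute your own values.

```shell
# Placeholder values -- substitute your own binary path and model name
export MULTICA_CLAUDE_PATH="$HOME/tools/claude"
export MULTICA_CLAUDE_MODEL="claude-sonnet-4-5"

# Restart the daemon so it picks up the new environment
multica daemon stop && multica daemon start
```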

Database Setup

Multica requires PostgreSQL 17 with the pgvector extension.

Using the Included Docker Compose

docker compose up -d postgres

This starts a pgvector/pgvector:pg17 container on port 5432 with default credentials (multica/multica).

Using Your Own PostgreSQL

Ensure the pgvector extension is available:

CREATE EXTENSION IF NOT EXISTS vector;
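To confirm the extension is in place, a query like the following should return one version row. The connection string is the compose default from this guide; adjust it for your own server.

```shell
# Expect one row with the installed pgvector version
psql "postgres://multica:multica@localhost:5432/multica" \
  -c "SELECT extversion FROM pg_extension WHERE extname = 'vector';"
```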

Running Migrations

Migrations must be run before starting the server:

# Using the built binary
./server/bin/migrate up

# Or from source
cd server && go run ./cmd/migrate up

Manual Setup (Without Docker Compose)

If you prefer to build and run services manually:

Prerequisites: Go 1.26+, Node.js 20+, pnpm 10.28+, PostgreSQL 17 with pgvector.

# Start your PostgreSQL (or use: docker compose up -d postgres)

# Build the backend
make build

# Run database migrations
DATABASE_URL="your-database-url" ./server/bin/migrate up

# Start the backend server
DATABASE_URL="your-database-url" PORT=8080 JWT_SECRET="your-secret" ./server/bin/server

For the frontend:

pnpm install
pnpm build

# Start the frontend (production mode)
cd apps/web
REMOTE_API_URL=http://localhost:8080 pnpm start

Reverse Proxy

In production, put a reverse proxy in front of both the backend and frontend to handle TLS and routing.

Caddy

app.example.com {
    reverse_proxy localhost:3000
}

api.example.com {
    reverse_proxy localhost:8080
}

Nginx

# Frontend
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Backend API
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WebSocket support
    location /ws {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400;
    }
}

When using separate domains for frontend and backend, set these environment variables accordingly:

# Backend
FRONTEND_ORIGIN=https://app.example.com
CORS_ALLOWED_ORIGINS=https://app.example.com

# Frontend
REMOTE_API_URL=https://api.example.com
NEXT_PUBLIC_API_URL=https://api.example.com
NEXT_PUBLIC_WS_URL=wss://api.example.com/ws

Health Check

The backend exposes public health endpoints:

GET /health
→ {"status":"ok"}

GET /readyz
→ {"status":"ok","checks":{"db":"ok","migrations":"ok"}}

GET /healthz
→ same response as /readyz

Use /health for basic liveness / reachability checks. Use /readyz for dependency-aware readiness probes and external monitoring that should fail when the database is unavailable or migrations are not fully applied. /healthz is kept as an alias for operator familiarity.
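As a sketch, a deploy script could wait on /readyz before proceeding. Host and port are the localhost defaults from this guide.

```shell
#!/bin/sh
# Wait up to 60 seconds for the backend to report ready; exit non-zero on failure.
for _ in $(seq 1 30); do
  if curl -fsS http://localhost:8080/readyz >/dev/null 2>&1; then
    echo "backend ready"
    exit 0
  fi
  sleep 2
done
echo "backend not ready after 60s" >&2
exit 1
```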

Upgrading

  1. Pull the latest code or image
  2. Run migrations: ./server/bin/migrate up
  3. Restart the backend and frontend

Migrations are forward-only and safe to run on a live database. They are also idempotent: running them multiple times has no additional effect.