How Multica works

How the three core components (server / daemon / AI coding tool) coordinate to run an agent's work.

Multica is a distributed platform. The web interface you see is just the front of house — the real work is done by three components: the Multica server owns the data (workspaces, issues, members, the task queue, and so on); the daemon runs on your own machine, picks up tasks, and drives the AI coding tool; and the AI coding tool (Claude Code, Codex, and other local CLIs) is the component that actually writes code. This is the biggest difference between Multica and Linear or Jira — agents don't run on our servers, they run on your machine.

The three core components

[Diagram: Your side: a client (web app / CLI) and the daemon, which polls work from Multica and invokes local AI coding tools (Claude Code, Codex, Cursor, Copilot, + 6 more); your code, your keys, your CPU. Multica server (cloud or self-hosted): workspaces, issues & tasks, agent definitions, realtime (WebSocket); no AI execution here.]
  • Multica server — the workspaces, issue lists, and comment threads you see all live in its database. It's also a WebSocket hub that pushes real-time updates between you and your teammates. It does not execute any agent tasks.
  • Daemon — part of the Multica CLI, running on your own machine. On start it detects which AI coding tools are installed locally, registers with the server, and begins polling for tasks every 3 seconds and sending heartbeats every 15 seconds.
  • AI coding tools — any of the ten supported tools (or several in parallel): Claude Code, Codex, Cursor, Copilot, Gemini, Hermes, Kimi, OpenCode, OpenClaw, and Pi. Once the daemon has picked up a task, it invokes one of these tools to do the actual work.
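The daemon's control loop can be sketched roughly as follows. This is a minimal illustration of the poll/heartbeat cadence described above; the `register`, `poll_task`, and `heartbeat` method names and the task payload shape are hypothetical, not Multica's actual API:

```python
import time

POLL_INTERVAL = 3        # seconds between task polls (per the docs)
HEARTBEAT_INTERVAL = 15  # seconds between heartbeats (per the docs)

def daemon_loop(server, detected_tools, now=time.monotonic, sleep=time.sleep,
                max_cycles=None):
    """Poll the server for queued tasks and send periodic heartbeats.

    `server` is any object with register()/poll_task()/heartbeat() methods
    (hypothetical names, not Multica's real API); `detected_tools` maps tool
    names to local launcher callables. `max_cycles` caps the loop for tests;
    the real daemon would run forever.
    """
    server.register(tools=sorted(detected_tools))  # announce locally detected tools
    last_heartbeat = now()
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        task = server.poll_task()                # None when the queue is empty
        if task is not None:
            detected_tools[task["tool"]](task)   # drive the AI coding tool
        if now() - last_heartbeat >= HEARTBEAT_INTERVAL:
            server.heartbeat()
            last_heartbeat = now()
        sleep(POLL_INTERVAL)
        cycles += 1
```

Injecting `now` and `sleep` keeps the loop testable without real waiting; the important invariant is that polling and heartbeats run on independent clocks.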

Because the toolchain stays local, your API keys, code directories, and authorized tools are only ever used on your machine — the Multica server never sees any of them. This holds whether you self-host or use Cloud.

The lifecycle of a task

Take the most common scenario — you assign an issue to an agent:

  1. You click assign in the web UI. The browser sends an HTTP request to the Multica server.
  2. The server sets the assignee on that issue to the agent and, at the same time, creates an execution task in the task queue with status queued.
  3. The daemon on your machine picks up the task on its next poll (within 3 seconds). Task status becomes dispatched.
  4. The daemon creates an isolated working directory locally and invokes the corresponding AI coding tool. Task status becomes running.
  5. The AI writes code locally, runs tests, and posts comments back to the server.
  6. Execution ends. The daemon reports the result (success / failure) to the server, and task status becomes completed or failed. You see the progress update in real time in the web UI (via WebSocket).
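The status transitions in the steps above form a small state machine. Here is one way to model it; the status names come straight from the lifecycle description, but the `advance` helper is illustrative, not Multica's data model:

```python
from enum import Enum

class TaskStatus(Enum):
    QUEUED = "queued"          # created by the server on assignment
    DISPATCHED = "dispatched"  # picked up by the daemon's poll
    RUNNING = "running"        # AI coding tool invoked in an isolated directory
    COMPLETED = "completed"    # daemon reported success
    FAILED = "failed"          # daemon reported failure

# Legal transitions, following lifecycle steps 2-6.
TRANSITIONS = {
    TaskStatus.QUEUED: {TaskStatus.DISPATCHED},
    TaskStatus.DISPATCHED: {TaskStatus.RUNNING},
    TaskStatus.RUNNING: {TaskStatus.COMPLETED, TaskStatus.FAILED},
}

def advance(current: TaskStatus, new: TaskStatus) -> TaskStatus:
    """Return `new` if the transition is legal, otherwise raise ValueError."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.value} -> {new.value}")
    return new
```

Note that `completed` and `failed` are terminal: they have no outgoing transitions, so a finished task can never re-enter the queue.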

For the detailed mechanics, see Daemon and runtimes and Tasks.

Four ways to get an agent working

Assigning an issue isn't the only way. Multica has four triggers, one per collaboration style:

  • Assign an issue: the most common. Assign an issue to an agent and it starts on its own. (Docs: Assigning issues)
  • @mention an agent in a comment: "take a look at this one for me" without changing the assignee or status, just fire off a comment. (Docs: Mentioning agents)
  • Direct chat: a standalone conversation, not tied to an issue. Ask questions, or have it draft an issue. (Docs: Chat)
  • Autopilots (scheduled): standing instructions, such as "do a standup summary every Monday morning". (Docs: Autopilots)

Runtimes: where it runs, and how many tools

A runtime is one "daemon × workspace × AI coding tool" combination. If the daemon on one machine has both Claude Code and Codex installed and is joined to two workspaces, Multica registers 4 independent runtimes (2 workspaces × 2 tools).
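The counting rule is just a cross-product. A tiny sketch (the workspace and tool names are made up for illustration):

```python
from itertools import product

def registered_runtimes(workspaces, tools):
    """One independent runtime per (workspace, tool) pair on a daemon."""
    return [f"{ws}/{tool}" for ws, tool in product(workspaces, tools)]

# 2 workspaces × 2 tools = 4 runtimes
runtimes = registered_runtimes(["acme", "side-project"], ["claude-code", "codex"])
```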

Only the local daemon runtime model is supported today. Cloud runtimes (where you don't need your own machine running) are coming soon and are currently waitlist-only; sign up on the Downloads page.

Next steps