# Deploy to production
This guide covers running mithai as a long-lived process in production: as a systemd service on Linux, as a Docker container, and the operational concerns that apply to both.
## Before you deploy

Work through this checklist before starting the process:
- Secrets are not in `config.yaml`. All tokens are referenced as `${ENV_VAR}` and set in the environment or a `.env` file outside the repository.
- Approval levels are reviewed. Every tool that can modify infrastructure has `"human": "approve"` or uses `resolve_human()`. Run `mithai skill validate` to catch missing fields.
- The memory directory (`./memory/` by default) and state directory (`./.mithai/state/`) are backed up or snapshotted.
- `mithai doctor` passes with no issues.
- `mithai skill validate` passes for all skills.

```sh
mithai doctor
mithai skill validate
```

## Running as a systemd service
This is the recommended approach for running mithai directly on a Linux host.
### Create a service user

```sh
sudo useradd --system --no-create-home --shell /bin/false mithai
```

### Install mithai
Install into a virtualenv owned by the service user:
```sh
sudo mkdir -p /opt/mithai
sudo python3 -m venv /opt/mithai/venv
sudo /opt/mithai/venv/bin/pip install "mithai[slack]"
```

### Place configuration
```sh
sudo mkdir -p /etc/mithai
sudo cp config.yaml /etc/mithai/config.yaml
sudo cp .env /etc/mithai/.env  # contains secrets
sudo chmod 600 /etc/mithai/.env
sudo chown -R mithai:mithai /etc/mithai
```

### Create data directories
```sh
sudo mkdir -p /var/lib/mithai/memory /var/lib/mithai/state
sudo chown -R mithai:mithai /var/lib/mithai
```

Update `config.yaml` to point at these directories:
```yaml
learning:
  memory:
    backend: filesystem
    filesystem:
      path: /var/lib/mithai/memory

state:
  backend: filesystem
  filesystem:
    path: /var/lib/mithai/state
```

### Write the unit file

Create `/etc/systemd/system/mithai.service`:
```ini
[Unit]
Description=mithai AI agent
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=mithai
Group=mithai
WorkingDirectory=/opt/mithai
ExecStart=/opt/mithai/venv/bin/mithai run --config /etc/mithai/config.yaml
EnvironmentFile=/etc/mithai/.env
Restart=on-failure
RestartSec=10s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=mithai

# Prevent interactive prompts from hanging the process
StandardInput=null

[Install]
WantedBy=multi-user.target
```

### Enable and start

```sh
sudo systemctl daemon-reload
sudo systemctl enable mithai
sudo systemctl start mithai
sudo systemctl status mithai
```

### View logs
Section titled “View logs”# Follow live logssudo journalctl -u mithai -f
# Last 100 linessudo journalctl -u mithai -n 100
# Since last bootsudo journalctl -u mithai -bRunning with Docker
Section titled “Running with Docker”Dockerfile
Section titled “Dockerfile”The project ships a production-ready Dockerfile at deploy/Dockerfile. It uses a two-stage build: a builder stage installs the virtualenv, and a minimal runtime stage contains only what the process needs.
```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11-slim AS builder

WORKDIR /app
COPY pyproject.toml README.md ./
COPY src/ src/

RUN python -m venv /app/.venv && \
    /app/.venv/bin/pip install --upgrade pip --quiet && \
    /app/.venv/bin/pip install ".[slack,mcp]" --quiet

FROM python:3.11-slim AS runtime

WORKDIR /app
COPY --from=builder /app/.venv /app/.venv
COPY skills/ /app/skills/

ENV PATH="/app/.venv/bin:$PATH" \
    PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

RUN mkdir -p /app/.mithai/state /app/memory

ENTRYPOINT ["mithai", "run", "--config", "/config/config.yaml"]
```

Build it:

```sh
docker build -f deploy/Dockerfile -t mithai:latest .
```

### docker-compose.yml
```yaml
services:
  mithai:
    image: mithai:latest
    restart: unless-stopped
    env_file: .env  # secrets, never committed
    volumes:
      - ./config.yaml:/config/config.yaml:ro
      - mithai-memory:/app/memory
      - mithai-state:/app/.mithai/state
    stdin_open: false
    tty: false

volumes:
  mithai-memory:
  mithai-state:
```

Start it:

```sh
docker compose up -d
docker compose logs -f mithai
```

### Passing config to the container
`config.yaml` references `${ENV_VAR}` placeholders. These are resolved at startup from the process environment. Pass them via `env_file`:
`.env` (never commit this file):
```sh
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...
ANTHROPIC_API_KEY=sk-ant-...
```

The `.env` file in `env_file` is loaded by Docker Compose directly into the container environment before mithai starts.
## Environment and secrets

`config.yaml` uses `${VAR}` syntax for any value that should come from the environment:
```yaml
llm:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}

adapter:
  slack:
    bot_token: ${SLACK_BOT_TOKEN}
    app_token: ${SLACK_APP_TOKEN}
```

At startup, `load_config()` calls `_resolve_env_vars()`, which walks the entire config tree and substitutes every `${VAR}` with the corresponding environment variable value. If the variable is not set, the literal string `${VAR}` is kept — which will cause an authentication failure at startup, not a config parse error.
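The substitution behaviour described above can be sketched in a few lines. This is an illustration of the documented semantics, not the actual `_resolve_env_vars()` source; the function name, regex, and demo config keys here are assumptions:

```python
import os
import re

# Matches ${VAR} and ${VAR:-default}. Illustrative, not mithai's real pattern.
_PATTERN = re.compile(r"\$\{([A-Z0-9_]+)(?::-([^}]*))?\}")

def resolve_env_vars(node):
    """Recursively substitute ${VAR} placeholders in a parsed config tree."""
    if isinstance(node, dict):
        return {k: resolve_env_vars(v) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_env_vars(v) for v in node]
    if isinstance(node, str):
        def repl(match):
            value = os.environ.get(match.group(1))
            if value is not None:
                return value
            if match.group(2) is not None:  # ${VAR:-default}: fall back
                return match.group(2)
            return match.group(0)           # unset: keep the literal ${VAR}
        return _PATTERN.sub(repl, node)
    return node

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-demo"
os.environ.pop("SLACK_BOT_TOKEN", None)  # ensure this one is unset

config = {
    "llm": {"anthropic": {"api_key": "${ANTHROPIC_API_KEY}"},
            "max_tokens": "${LLM_MAX_TOKENS:-4096}"},
    "adapter": {"slack": {"bot_token": "${SLACK_BOT_TOKEN}"}},
}
resolved = resolve_env_vars(config)
print(resolved["llm"]["anthropic"]["api_key"])    # sk-ant-demo
print(resolved["llm"]["max_tokens"])              # 4096 (default applied)
print(resolved["adapter"]["slack"]["bot_token"])  # ${SLACK_BOT_TOKEN} kept
```

The key point for operations is the last line: a missing variable survives as a literal placeholder, so the failure shows up downstream as a rejected credential, not at config load.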
You can use default values with ${VAR:-default}:
```yaml
llm:
  max_tokens: ${LLM_MAX_TOKENS:-4096}
```

Rules:
- Never write secret values directly in `config.yaml`.
- Never commit `.env` to your repository. Add it to `.gitignore`.
- For production workloads, use your platform’s secret manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault, Kubernetes secrets) to inject variables into the process environment at runtime. The mechanism is the same: the variable arrives in the environment, and `${VAR}` is substituted by mithai.
## Keeping memory persistent

mithai stores two kinds of persistent data:
| What | Default path | Contains |
|---|---|---|
| Memory | `./memory/` | `MEMORY.md`, daily logs, approval history, channel context |
| State | `./.mithai/state/` | Session history (conversation turns) |
If either directory is lost, the agent loses its accumulated knowledge and conversation history. Users will experience the agent as amnesiac — it will not remember previous interactions or facts it was told.
For Docker: mount both as named volumes or bind mounts:

```yaml
volumes:
  - mithai-memory:/app/memory
  - mithai-state:/app/.mithai/state
```

Named Docker volumes survive container restarts and image updates. Bind mounts (absolute host paths) make the data easier to back up manually.
For systemd: the data directories are at `/var/lib/mithai/memory` and `/var/lib/mithai/state` (as configured above). Back these up with your normal host backup strategy. A nightly rsync or snapshot of `/var/lib/mithai/` is sufficient for most deployments.
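A portable stand-in for that nightly snapshot can be sketched as a copy of the data directory to a date-stamped destination. The `snapshot()` helper and its paths are illustrative, not part of mithai; the demo runs against temporary directories so it is self-contained:

```python
import shutil
import tempfile
from pathlib import Path

def snapshot(data_dir: str, backup_root: str, stamp: str) -> Path:
    """Copy the mithai data directory to a date-stamped snapshot."""
    dest = Path(backup_root) / f"mithai-{stamp}"
    shutil.copytree(data_dir, dest, dirs_exist_ok=True)
    return dest

# Demo: a fake data directory with one memory file, backed up once.
data = Path(tempfile.mkdtemp()) / "mithai"
(data / "memory").mkdir(parents=True)
(data / "memory" / "MEMORY.md").write_text("# notes\n")
backups = tempfile.mkdtemp()

dest = snapshot(data, backups, "2024-01-01")
print(dest / "memory" / "MEMORY.md")
```

In production you would point `data_dir` at `/var/lib/mithai/` and run this (or plain rsync) from cron; keeping a handful of dated snapshots gives you rollback points as well as disaster recovery.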
## Multiple instances

Do not run two mithai processes pointed at the same state directory. Both processes will read and write session files concurrently and corrupt each other’s state.
If you need multiple agents (for example, a DevOps agent and a Tester agent with different skills), use multi-agent mode:
```yaml
agents:
  devops:
    name: "DevOps Agent"
    skills:
      allowed: [shell, kubernetes, memory]
    memory:
      path: ./memory/devops  # separate memory per agent
    adapter:
      slack:
        bot_token: ${DEVOPS_BOT_TOKEN}
        app_token: ${DEVOPS_APP_TOKEN}

  tester:
    name: "Tester Agent"
    skills:
      allowed: [shell, http_checker, memory]
    memory:
      path: ./memory/tester
    adapter:
      slack:
        bot_token: ${TESTER_BOT_TOKEN}
        app_token: ${TESTER_APP_TOKEN}
```

With `agents:` configured, `mithai run` starts all agents in the same process. They share one state backend, but each agent’s sessions are namespaced by `agent_id`, so there are no collisions.
## Health checking

### mithai status

Check what is configured and loaded:

```sh
mithai status
```

This reads `config.yaml` without connecting to any external service. It reports the LLM provider and model, loaded skills and their tool counts, configured adapters, session counts, and the memory backend.
### mithai doctor

Run a full connectivity check:

```sh
mithai doctor
```

This attempts a real LLM API call, verifies Slack and Telegram tokens, tests kubectl connectivity if the kubernetes skill is configured, and checks that data directories are writable. The exit code is 0 when all checks pass and 1 if any fail.

Use `mithai doctor` in a deployment pipeline as a smoke test after deploying a new configuration.
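A pipeline step built on that exit-code contract can be sketched as a small wrapper. This is an illustration, not a mithai API: in a real pipeline the command would be `["mithai", "doctor"]`, but it is a parameter here so the helper can be exercised without mithai installed:

```python
import subprocess
import sys

def smoke_test(command: list[str]) -> bool:
    """Run a health-check command; True when it exits 0."""
    result = subprocess.run(command)
    return result.returncode == 0

# Stand-in commands that exit 0 and 1, mimicking doctor's pass/fail contract.
passed = smoke_test([sys.executable, "-c", "raise SystemExit(0)"])
failed = smoke_test([sys.executable, "-c", "raise SystemExit(1)"])
print(passed, failed)
```

In CI this collapses to `mithai doctor || exit 1` in a shell step; the wrapper form is useful when the deploy script also needs to roll back or page someone on failure.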
### What to monitor

For production monitoring, instrument these:

- Process liveness: the mithai process is running (systemd `ActiveState=active` or Docker container health).
- LLM error rate: `mithai doctor` failures in scheduled checks.
- Log errors: `journalctl -u mithai` for `ERROR` and `Exception` lines.
- Disk space: the memory and state directories grow over time. Session files accumulate at `.mithai/state/sessions/`. Prune old ones if disk becomes a concern.
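The pruning mentioned in the last point can be sketched as an age-based sweep of the sessions directory. The 30-day cutoff and the assumption that stale session files can simply be unlinked are ours, not documented mithai behaviour; stop the agent (or accept that in-flight sessions are excluded by their fresh mtime) before pruning:

```python
import os
import time
import tempfile
from pathlib import Path

def prune_sessions(state_dir: str, max_age_days: int = 30) -> list[str]:
    """Delete session files older than the cutoff; return deleted names."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in (Path(state_dir) / "sessions").glob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f.name)
    return removed

# Demo in a temporary directory: one 40-day-old file, one fresh file.
state = Path(tempfile.mkdtemp())
sessions = state / "sessions"
sessions.mkdir()
old = sessions / "old-session.json"
old.write_text("{}")
os.utime(old, (time.time() - 40 * 86400,) * 2)
(sessions / "fresh-session.json").write_text("{}")

print(prune_sessions(state))  # ['old-session.json']
```

The same effect from cron is `find /var/lib/mithai/state/sessions -type f -mtime +30 -delete`; the Python form is easier to extend with logging or a dry-run flag.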
If you have telemetry configured (`telemetry.enabled: true` in `config.yaml`), mithai emits OpenTelemetry traces for each request and tool call. The `mithai.request` span and per-tool spans give you latency and approval-rate data.
## Upgrading

### Upgrade mithai

For a virtualenv install:

```sh
/opt/mithai/venv/bin/pip install --upgrade "mithai[slack]"
```

For Docker, rebuild the image:

```sh
docker build -f deploy/Dockerfile -t mithai:latest .
docker compose up -d --no-deps mithai
```

### After upgrading
- Run `mithai skill validate` to confirm all skills still pass validation against the new version.
- Run `mithai doctor` to confirm connectivity.
- Restart the service:

```sh
# systemd
sudo systemctl restart mithai

# Docker Compose
docker compose restart mithai
```

### Upgrading individual skills
If you installed optional skills with `mithai skill install`, upgrade them individually:

```sh
mithai skill upgrade kubernetes
mithai skill upgrade github
```

Core skills (shell, memory, sessions, http_checker) are bundled with the mithai binary and upgrade with it.