Self-Hosting
Vivd ships a solo self-host install path for one primary host:
- Public site on /
- Studio on /vivd-studio
- Docker-based Studio machines by default
- Local S3-compatible project storage by default
If you want the architectural picture behind those choices, read How Vivd Works.
Before you run it
Have these ready:
- A Linux host with Docker Engine and Docker Compose v2
- One host or IP you want Vivd to use
- An OPENROUTER_API_KEY
If you want Vivd to manage HTTPS certificates itself, use a real public hostname that already points at your VPS and keep ports 80 and 443 open. localhost and raw IP installs stay on plain HTTP.
Self-hosting currently expects OpenRouter for OpenCode model access so one API key can cover multiple model tiers. Direct multi-provider self-host configuration is planned later.
Install command
curl -fsSL https://docs.vivd.studio/install.sh | bash
Useful variations:
# Write files only, do not start containers yet
curl -fsSL https://docs.vivd.studio/install.sh | bash -s -- --no-start
# Use an upstream TLS terminator such as Dockploy or Traefik
curl -fsSL https://docs.vivd.studio/install.sh | bash -s -- --tls-mode external
# Enable single-project mode
curl -fsSL https://docs.vivd.studio/install.sh | bash -s -- --single-project true
First-run setup path
After the stack boots and you can sign in, the next place to go is Instance Settings.
For a normal solo self-host install, the usual setup flow is:
- Open Instance Settings -> Network.
- Set the Public host to the real domain or IP this install should use.
- Choose HTTPS handled by:
  - Bundled Caddy if you used the install script or the published bundle directly on your own VPS and want Vivd/Caddy to manage certificates.
  - External proxy if you are deploying through Dockploy, Traefik, or another platform that already terminates TLS.
  - Plain HTTP for localhost or internal-only testing.
- If you chose Bundled Caddy, enter the ACME email for certificate issuance and renewal notices.
- Save, then expect the current UI session to become invalid if the public host changed.
- Open the new public URL, sign in again there, and then confirm both / and /vivd-studio open on the intended public URL.
- Open Instance Settings -> Email if you also want branded transactional email, support email details, or deliverability/webhook policy.
That is the main self-host setup loop after install. You do not need to hunt through
advanced env vars first unless you are intentionally overriding the normal solo host
model.
The installer prompts for:
- your primary host or IP
- your OPENROUTER_API_KEY
On a public hostname, it also asks for the ACME email Caddy should use for certificate management.
It derives the normal auth/admin host settings from that one public host, so you do not need to fill a separate cluster of repeated URL env vars for a standard solo install.
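As an illustration of that derivation (hypothetical values; the exact derived settings correspond to the advanced host override variables such as VIVD_APP_URL and BETTER_AUTH_URL mentioned later), a single public host effectively behaves as if the same-origin URLs had been set for you:

```ini
# Assumption for illustration: one public host fans out to same-origin URLs.
DOMAIN=https://example.com
# Effectively equivalent derived values; do not set these for a standard solo install:
# VIVD_APP_URL=https://example.com
# BETTER_AUTH_URL=https://example.com
```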
The installer now defaults to the standard latest image tag on every architecture. If you intentionally publish or maintain an arm64 self-host tag, pass it explicitly with --image-tag or VIVD_SELFHOST_IMAGE_TAG.
Optional install-time flags let you override single-project mode, image tag, and TLS handling.
It then:
- downloads the solo self-host bundle
- writes docker-compose.yml, Caddyfile, and .env
- generates fresh secrets for auth, scraper access, and Postgres
- provisions a local S3-compatible bucket for project storage
- uses that one public host as the canonical DOMAIN for the install
- sets OPENCODE_MODEL_STANDARD=openrouter/google/gemini-3-flash-preview
- sets OPENCODE_MODEL_ADVANCED=openrouter/google/gemini-3.1-pro-preview
- defaults VIVD_SCRATCH_CREATION_MODE=studio_astro
- enables STUDIO_MACHINE_PROVIDER=docker
- starts the stack with docker compose up -d
The bundle files are public and editable:
- docker-compose.yml
- Caddyfile for bundled Caddy-managed HTTPS
- Caddyfile.plain-http for Dockploy, Traefik, or other upstream TLS terminators
Docker Compose path
If you want to self-host from the published Compose bundle without running install.sh,
start from the same files the installer downloads:
mkdir -p ~/vivd
cd ~/vivd
curl -fsSLo docker-compose.yml https://docs.vivd.studio/install/docker-compose.yml
curl -fsSLo Caddyfile https://docs.vivd.studio/install/Caddyfile
For an upstream TLS terminator such as Dockploy or Traefik, use the plain HTTP Caddyfile instead:
curl -fsSLo Caddyfile https://docs.vivd.studio/install/Caddyfile.plain-http
If you want to see the whole self-host stack at a glance before downloading anything, start with this reduced solo Compose view. It keeps the same core services and defaults as the published bundle, but trims the optional passthrough env list down to the small surface most self-host installs actually edit. Fixed internal wiring stays inline in the Compose file instead of being shown as extra env knobs.
# Minimal solo self-host stack.
# Put this next to the published Caddyfile asset and a matching .env file.
# Fixed internal wiring stays in the compose file; the .env only covers values most installs change.
services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    environment:
      VIVD_CADDY_PRIMARY_HOST: ${VIVD_CADDY_PRIMARY_HOST}
      VIVD_CADDY_ACME_EMAIL: ${VIVD_CADDY_ACME_EMAIL:-}
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
      - caddy_sites:/etc/caddy/sites.d
      - caddy_runtime_routes:/etc/caddy/runtime.d:ro
      - published_sites:/srv/published:ro
    depends_on:
      - backend
      - minio
    networks:
      - vivd-network
  backend:
    image: ghcr.io/vivd-studio/vivd-server:${VIVD_SELFHOST_IMAGE_TAG:-latest}
    pull_policy: always
    restart: unless-stopped
    environment:
      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/vivd
      BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET}
      OPENROUTER_API_KEY: ${OPENROUTER_API_KEY}
      OPENCODE_MODEL_STANDARD: openrouter/google/gemini-3-flash-preview
      OPENCODE_MODEL_ADVANCED: openrouter/google/gemini-3.1-pro-preview
      DOMAIN: ${DOMAIN}
      VIVD_CADDY_TLS_MODE: ${VIVD_CADDY_TLS_MODE:-managed}
      STUDIO_MACHINE_PROVIDER: docker
      VIVD_SELFHOST_UPDATE_WORKDIR: /srv/selfhost
      VIVD_SELFHOST_UPDATE_SERVICES: backend frontend scraper
      VIVD_SELFHOST_UPDATE_HELPER_IMAGE: ${VIVD_SELFHOST_UPDATE_HELPER_IMAGE:-docker:28-cli}
      VIVD_BUCKET_MODE: local
      SCRAPER_API_KEY: ${SCRAPER_API_KEY}
      VIVD_LOCAL_S3_ACCESS_KEY: ${VIVD_LOCAL_S3_ACCESS_KEY}
      VIVD_LOCAL_S3_SECRET_KEY: ${VIVD_LOCAL_S3_SECRET_KEY}
      DOCKER_STUDIO_IMAGE: ghcr.io/vivd-studio/vivd-studio:${VIVD_SELFHOST_IMAGE_TAG:-latest}
    volumes:
      - backend_data:/app/projects
      - opencode_data:/root/.local/share/opencode/storage
      - published_sites:/srv/published
      - caddy_sites:/etc/caddy/sites.d
      - caddy_runtime_routes:/etc/caddy/runtime.d
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./:/srv/selfhost:ro
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      postgres:
        condition: service_healthy
      scraper:
        condition: service_healthy
      minio:
        condition: service_started
    networks:
      - vivd-network
  frontend:
    image: ghcr.io/vivd-studio/vivd-ui:${VIVD_SELFHOST_IMAGE_TAG:-latest}
    pull_policy: always
    restart: unless-stopped
    depends_on:
      - backend
    networks:
      - vivd-network
  scraper:
    image: ghcr.io/vivd-studio/vivd-scraper:${VIVD_SELFHOST_IMAGE_TAG:-latest}
    pull_policy: always
    restart: unless-stopped
    environment:
      SCRAPER_API_KEY: ${SCRAPER_API_KEY}
      OPENROUTER_API_KEY: ${OPENROUTER_API_KEY}
    healthcheck:
      test: ["CMD", "node", "-e", "fetch('http://127.0.0.1:3001/health').then((res) => { if (!res.ok) process.exit(1); }).catch(() => process.exit(1))"]
      interval: 15s
      timeout: 10s
      retries: 5
      start_period: 30s
    networks:
      - vivd-network
  postgres:
    image: postgres:17
    restart: unless-stopped
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: vivd
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d vivd"]
      interval: 5s
      timeout: 5s
      retries: 5
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - vivd-network
  minio:
    image: minio/minio:latest
    restart: unless-stopped
    command: server /data --console-address :9001
    environment:
      MINIO_ROOT_USER: ${VIVD_LOCAL_S3_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${VIVD_LOCAL_S3_SECRET_KEY}
    volumes:
      - minio_data:/data
    networks:
      - vivd-network
  minio-init:
    image: minio/mc:latest
    restart: "no"
    profiles:
      - setup
    depends_on:
      - minio
    environment:
      VIVD_LOCAL_S3_ACCESS_KEY: ${VIVD_LOCAL_S3_ACCESS_KEY}
      VIVD_LOCAL_S3_SECRET_KEY: ${VIVD_LOCAL_S3_SECRET_KEY}
    entrypoint: >
      /bin/sh -c '
      until mc alias set vivd http://minio:9000 "$$VIVD_LOCAL_S3_ACCESS_KEY" "$$VIVD_LOCAL_S3_SECRET_KEY" >/dev/null 2>&1; do
        sleep 1
      done &&
      mc mb --ignore-existing "vivd/vivd"
      '
    networks:
      - vivd-network
volumes:
  backend_data:
  opencode_data:
  published_sites:
  caddy_data:
  caddy_config:
  caddy_sites:
  caddy_runtime_routes:
  postgres_data:
  minio_data:
networks:
  vivd-network:
    driver: bridge
Save a matching .env next to that file. If you are not using install.sh, generate
fresh random secrets for the change-me values yourself:
# Required values for the minimal solo stack.
# Generate fresh random secrets for the change-me entries.
DOMAIN=https://example.com
VIVD_CADDY_PRIMARY_HOST=example.com
# VIVD_CADDY_TLS_MODE options:
# managed = bundled Caddy gets and renews certificates
# external = another proxy handles TLS; keep the public origin on https
# off = plain HTTP only
VIVD_CADDY_TLS_MODE=managed
VIVD_CADDY_ACME_EMAIL=ops@example.com
OPENROUTER_API_KEY=sk-or-v1-...
BETTER_AUTH_SECRET=change-me-to-a-long-random-secret
POSTGRES_PASSWORD=change-me-to-a-long-random-password
SCRAPER_API_KEY=change-me-to-another-long-random-secret
VIVD_LOCAL_S3_ACCESS_KEY=vivd-change-me
VIVD_LOCAL_S3_SECRET_KEY=change-me-to-a-long-random-secret
# Optional: pin all Vivd images instead of using the default latest tag.
# VIVD_SELFHOST_IMAGE_TAG=latest
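If you are writing the .env by hand, one way to produce the change-me secrets is openssl's random generator (a sketch; any cryptographically random source works):

```shell
# Generate long random values for the change-me entries.
# openssl rand -hex N prints 2*N hex characters.
BETTER_AUTH_SECRET=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -hex 24)
SCRAPER_API_KEY=$(openssl rand -hex 32)
VIVD_LOCAL_S3_SECRET_KEY=$(openssl rand -hex 32)
# Print them in .env form so they can be pasted in.
printf '%s\n' \
  "BETTER_AUTH_SECRET=$BETTER_AUTH_SECRET" \
  "POSTGRES_PASSWORD=$POSTGRES_PASSWORD" \
  "SCRAPER_API_KEY=$SCRAPER_API_KEY" \
  "VIVD_LOCAL_S3_SECRET_KEY=$VIVD_LOCAL_S3_SECRET_KEY"
```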
For localhost, a raw IP, or an upstream TLS terminator such as Dockploy or Traefik,
switch to Caddyfile.plain-http and set
VIVD_CADDY_TLS_MODE=off. If TLS terminates upstream, keep DOMAIN on the public
https://... origin even though Vivd’s bundled Caddy stays on internal HTTP routing.
This reduced view intentionally omits fixed solo defaults such as the publish/Caddy internal paths, the Docker socket path, the runtime route prefix, and the MinIO internal endpoint details that do not usually need operator edits.
Then start the stack:
docker compose up -d
If you want the exact published bundle instead of the reduced docs view, use docker-compose.yml. For the optional and advanced env groups behind that larger file, use Self-Host Config Reference.
What gets installed
The solo bundle installs:
- Caddy
- frontend
- backend
- scraper
- Postgres
- MinIO
TLS handling
By default:
- a public hostname uses bundled Caddy-managed HTTPS
- localhost and raw IP installs use plain HTTP
Raw-IP installs are workable for quick tests, but they are not as robust as a
real hostname with HTTPS. Modern browsers can still prefer or retry https://
for a raw IP because of browser-side HTTPS-first behavior or stale cached app
shells, and plain HTTP on an IP cannot intercept that. For reliable day-to-day
use, point a real hostname at the server and let bundled Caddy manage HTTPS, or
put Vivd behind an upstream TLS terminator.
If you want an upstream proxy to own certificates instead, run the installer with:
curl -fsSL https://docs.vivd.studio/install.sh | bash -s -- --tls-mode external
That keeps Vivd’s Caddy on plain HTTP and is the better fit when you deploy the bundle inside Dockploy, Traefik, or another platform that already manages TLS.
If you are deploying the published Compose bundle inside another platform, treat the downloaded docker-compose.yml and Caddyfile as the intended edit points. In that setup you will usually:
- switch to the plain HTTP Caddyfile
- remove or adapt the direct 80:80 and 443:443 port bindings
Dockploy / external TLS
For a Dockploy-style setup:
- Run the installer with --tls-mode external --no-start.
- Use the generated ~/vivd/docker-compose.yml, .env, and plain HTTP Caddyfile as your deployment files.
- Let Dockploy expose the app publicly and manage HTTPS certificates.
- If Dockploy expects to own the public ports itself, remove or override the direct 80:80 and 443:443 bindings from the generated Compose file.
In that mode, keep the generated public URLs and auth origin as https://.... The upstream proxy owns TLS, while Vivd’s bundled Caddy stays on internal HTTP routing.
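If the platform needs to own ports 80 and 443 itself, a Compose override file is one way to drop the bundled bindings without editing the generated Compose file directly (a sketch; the !reset merge tag requires a recent Docker Compose v2 release):

```yaml
# docker-compose.override.yml (sketch)
# Clears caddy's host port bindings so an upstream platform proxy can own 80/443.
services:
  caddy:
    ports: !reset []
```

Compose merges this file with docker-compose.yml automatically when both sit in the same directory.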
Result
After the stack starts, open:
- https://your-host/
- https://your-host/vivd-studio
If you are testing on localhost or another local host, use http instead.
Then go straight to Instance Settings -> Network and confirm the install-level host
and HTTPS mode match how you actually deployed it:
- install script / direct bundle on your own server: usually Bundled Caddy
- Dockploy / Traefik / another upstream TLS terminator: usually External proxy
- localhost or internal testing: usually Plain HTTP
If you use Bundled Caddy, make sure the ACME email is set there as well.
If you change the public host from that screen, expect to continue on the new URL and
sign in again there instead of trying to keep using the old origin.
Manage the install
The installer writes the stack into ~/vivd by default.
cd ~/vivd
docker compose ps
docker compose logs -f
Troubleshooting
If backend startup loops on db:migrate:prod with:
password authentication failed for user "postgres"
then the backend is usually reading a newly generated POSTGRES_PASSWORD from
.env while Docker is still reusing an older postgres_data volume from a
previous install.
This most often happens when you delete ~/vivd and run the installer again:
the files are new, but Docker named volumes survive unless you remove them
explicitly.
If this should be a fresh install and you do not need the old database data:
cd ~/vivd
docker compose down -v
docker compose up -d
If you do need the old data, do not remove the volume. Instead, restore the old
POSTGRES_PASSWORD value so the backend and existing Postgres volume match
again.
After install, you can change the main public host and TLS mode in Instance Settings -> Network. For the hosted self-host bundle, Vivd rewrites its bundled Caddy config from there. If you still keep an explicit advanced host override env such as VIVD_APP_URL, BETTER_AUTH_URL, or CONTROL_PLANE_HOST, that deployment-level override remains authoritative.
For most operators, this admin-side Network screen should be the first place to look
when the public URL, HTTPS behavior, or certificate email needs to change later.
If the host actually changes, the old-origin session can stop working immediately; open
the new domain and sign in again there.
See Instance Settings for the admin-side meaning of General,
Network, Capabilities, and Instance Limits.
Use Domains & Publish Targets for the launch-domain
rules behind project publishing.
Updating
The installer is currently for fresh installs. It refuses to overwrite an existing docker-compose.yml, Caddyfile, or .env.
For normal image updates:
cd ~/vivd
docker compose pull
docker compose up -d
If you are using the hosted solo self-host bundle, Instance Settings -> General
also shows the currently running Vivd version in the UI. When that bundle is
mounted with its managed updater path, the same page can trigger the equivalent
bundle update flow for you from the browser.
If a newer release changes the published bundle files themselves, compare your current files with the latest public bundle assets and merge those changes manually.
What you usually change later
Most self-host installs do not need much more env work. You usually only edit .env later if you want:
- email delivery: choose a provider such as SMTP (VIVD_EMAIL_PROVIDER=smtp, VIVD_SMTP_HOST, VIVD_SMTP_PORT, VIVD_SMTP_USER, VIVD_SMTP_PASSWORD) or Resend / SES, plus VIVD_EMAIL_FROM
- transactional email identity/footer: optional fields in Instance Settings -> Email, or env bootstrap values like VIVD_EMAIL_BRAND_DISPLAY_NAME, VIVD_EMAIL_BRAND_SUPPORT_EMAIL, and the legal/footer URLs if you want more than the minimal default email footer
- Turnstile: CLOUDFLARE_ACCOUNT_ID, CLOUDFLARE_API_TOKEN
- GitHub backup/sync: GITHUB_SYNC_ENABLED, GITHUB_ORG, GITHUB_TOKEN
- scraper residential proxying when website import is blocked by datacenter-IP or anti-bot checks: PROXY_HOST, optional PROXY_PORT, PROXY_USERNAME, PROXY_PASSWORD
- a remote S3-compatible object store instead of the bundled local bucket
- an optional OPENCODE_MODEL_PRO
For the install-wide UI surface behind those choices, use Instance Settings. Use Email & Deliverability for the admin-side email identity and suppression/webhook workflows. Use Self-Host Config Reference for the broader optional env surface behind the published Compose bundle.
SMTP example
VIVD_EMAIL_PROVIDER=smtp
VIVD_EMAIL_FROM=noreply@example.com
VIVD_SMTP_HOST=smtp.example.com
VIVD_SMTP_PORT=587
VIVD_SMTP_USER=your-smtp-user
VIVD_SMTP_PASSWORD=your-smtp-password
VIVD_SMTP_SECURE=false
If your provider gives you a single SMTP URL, you can use VIVD_SMTP_URL= instead of the separate host/port/user/password fields.
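For example, the separate fields above could collapse into one URL like this (hypothetical credentials):

```ini
# Single-URL SMTP form; replaces the separate host/port/user/password fields.
VIVD_SMTP_URL=smtp://your-smtp-user:your-smtp-password@smtp.example.com:587
```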
Remote S3-compatible storage example
VIVD_BUCKET_MODE=external
VIVD_S3_BUCKET=vivd
VIVD_S3_ENDPOINT_URL=https://s3.example.com
VIVD_S3_ACCESS_KEY_ID=your-access-key
VIVD_S3_SECRET_ACCESS_KEY=your-secret-key
VIVD_S3_REGION=us-east-1
Optional additions:
- VIVD_S3_PUBLIC_BASE_URL if objects should be read from a CDN or public asset host
- VIVD_S3_DOWNLOAD_ENDPOINT_URL if signed downloads should use a different public object endpoint
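For example, with a CDN in front of the bucket (hypothetical hostnames):

```ini
# Reads go through the CDN host; the S3 API endpoint above stays private.
VIVD_S3_PUBLIC_BASE_URL=https://assets.example.com
# Signed downloads use a separate public object endpoint.
VIVD_S3_DOWNLOAD_ENDPOINT_URL=https://objects.example.com
```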
- This installer is for the solo profile only.
- Project storage defaults to the bundled local S3-compatible bucket. You can later switch to a remote S3-compatible bucket by updating .env.
- Public-hostname installs now let bundled Caddy obtain and renew certificates directly.
- If you terminate TLS upstream instead, use --tls-mode external or the published plain HTTP Caddyfile.