The following instructions will use docker compose to spin up a Trigger.dev instance. Make sure to read the self-hosting overview first.

As self-hosted deployments tend to have unique requirements and configurations, we don’t provide specific advice for securing your deployment, scaling up, or improving reliability.

Should the burden ever get too much, we’d be happy to see you on Trigger.dev cloud where we deal with these concerns for you.

Warning: This guide alone is unlikely to result in a production-ready deployment. Security, scaling, and reliability concerns are not fully addressed here.

What’s new?

Goodbye v3, hello v4! We made quite a few changes:

  • Much simpler setup. Provider + coordinator = supervisor. No more startup scripts. Just docker compose up.
  • Support for multiple worker machines. This is a big one, and we’re very excited about it! You can now scale your workers horizontally as needed.
  • Resource limits enforced by default. This means that tasks will be limited to the total CPU and RAM of the machine preset, preventing noisy neighbours.
  • No direct Docker socket access. The compose file now comes with Docker Socket Proxy by default. Yes, you want this.
  • No host networking. All containers are now running with network isolation, using only the network access they need.
  • No checkpoint support. This was only ever an experimental self-hosting feature and not recommended. It caused a bunch of issues. We decided to focus on the core features instead.
  • Built-in container registry and object storage. You can now deploy and execute tasks without needing third party services for this.
  • Improved CLI commands. You don’t need any additional flags to deploy anymore, and there’s a new switch command to easily switch between profiles.
  • Whitelisting for GitHub OAuth. The email whitelist now also applies to sign-ins via GitHub, unlike v3 where it only applied to magic links.

Requirements

These are the minimum requirements for running the webapp and worker components. They can run on the same machine or on separate machines.

It’s fine to run everything on the same machine for testing. To be able to scale your workers, you will want to run them separately.

Prerequisites

To run the webapp and worker components, you will need Docker and Docker Compose installed, plus hardware that meets the specs below.

Webapp

This will host the webapp, Postgres, Redis, and related services.

  • 2+ vCPU
  • 4+ GB RAM

Worker

This will host the supervisor and all of the runs.

  • 2+ vCPU
  • 4+ GB RAM

How many workers you need, and how much compute each needs, will depend on your workloads and concurrency requirements.

For example:

  • 10 concurrency x small-1x (0.5 vCPU, 0.5 GB RAM) = 5 vCPU and 5 GB RAM
  • 20 concurrency x small-1x (0.5 vCPU, 0.5 GB RAM) = 10 vCPU and 10 GB RAM
  • 100 concurrency x small-1x (0.5 vCPU, 0.5 GB RAM) = 50 vCPU and 50 GB RAM
  • 100 concurrency x small-2x (1 vCPU, 1 GB RAM) = 100 vCPU and 100 GB RAM

You may need to spin up multiple workers to handle peak concurrency. The good news is you don’t have to know the exact numbers upfront. You can start with a single worker and add more as needed.
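
As a rough back-of-the-envelope check, the vCPU (or RAM) you need is simply peak concurrency multiplied by the preset size, spread across your workers. For instance, assuming hypothetical workers with 8 vCPU and 16 GB RAM each:

# peak of 40 concurrent small-2x runs (1 vCPU, 1 GB RAM each) on 8 vCPU workers
# ceiling division; CPU is the limiting factor with this worker size
echo $(( (40 * 1 + 8 - 1) / 8 ))   # => 5 workers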

Setup

Webapp

  1. Clone the repository:
git clone https://github.com/triggerdotdev/trigger.dev
cd trigger.dev/hosting/docker
  2. Create a .env file:
cp .env.example .env
  3. Start the webapp:
cd webapp
docker compose up -d
  4. Optional: add Traefik as a reverse proxy:
docker compose -f ../docker-compose.traefik.yml up -d
  5. Configure the webapp as needed using the environment variables (see the example .env sketch after these steps) and apply the changes:
docker compose up -d
  6. You should now be able to access the webapp at http://localhost:8030. When logging in, check the container logs for the magic link: docker compose logs -f webapp
  7. Optional: to initialize a new project, run the following command:
npx trigger.dev@v4-beta init -p <project-ref> -a http://localhost:8030
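
As a concrete illustration of step 5, here is a minimal .env sketch using only variables covered later in this guide. All values are placeholders, and .env.example shows the full list of options and where each one belongs:

# pin the image version (see "Version locking" below)
TRIGGER_IMAGE_TAG=v4.0.0-v4-beta.21

# email transport for magic links (see "Authentication" below)
EMAIL_TRANSPORT=resend
FROM_EMAIL=trigger@example.com
REPLY_TO_EMAIL=trigger@example.com
RESEND_API_KEY=<your_resend_api_key>

# restrict sign-ups to whitelisted addresses (applies to all auth methods)
WHITELISTED_EMAILS="you@example\.com"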

Worker

  1. Clone the repository:
git clone https://github.com/triggerdotdev/trigger.dev
cd trigger.dev/hosting/docker
  2. Create a .env file:
cp .env.example .env
  3. Start the worker:
cd worker
docker compose up -d
  4. Configure the supervisor as needed using the environment variables and apply the changes:
docker compose up -d

Repeat as needed for additional workers.
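
Once a worker is up, it can be useful to confirm the supervisor container started cleanly and connected to the webapp. The service name below is an assumption based on the default compose file; run docker compose ps first to check the actual names on your setup:

# list the services and confirm the supervisor is running
docker compose ps
# follow its logs (adjust the service name if yours differs)
docker compose logs -f supervisor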

Combined

If you want to run the webapp and worker on the same machine, just replace the up command with the following:

# Run this from the /hosting/docker directory
docker compose -f docker-compose.webapp.yml -f docker-compose.worker.yml up -d

And optionally add Traefik as a reverse proxy:

# Run this from the /hosting/docker directory
docker compose -f docker-compose.webapp.yml -f docker-compose.worker.yml -f docker-compose.traefik.yml up -d

Version locking

There are several reasons to lock the version of your Docker images:

  • Backwards compatibility. We try our best to maintain compatibility with older CLI versions, but it’s not always possible. If you don’t want to update your CLI, you can lock your Docker images to that specific version.
  • Ensuring full feature support. Sometimes, new CLI releases will also require new or updated platform features. Running unlocked images can make any issues difficult to debug. Using a specific tag can help here as well.

By default, the images will point at the latest versioned release via the v4-beta tag. You can override this by specifying a different tag in your .env file. For example:

TRIGGER_IMAGE_TAG=v4.0.0-v4-beta.21
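
After changing the tag, apply it with docker compose up -d. To double-check which image versions the running services are actually using, you can list them:

docker compose images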

Authentication

By default, magic link auth is the only login option. If the EMAIL_TRANSPORT env var is not set, the magic links will be logged by the webapp container and not sent via email.

The specific set of variables required will depend on your choice of email transport.

Resend

EMAIL_TRANSPORT=resend
FROM_EMAIL=
REPLY_TO_EMAIL=
RESEND_API_KEY=<your_resend_api_key>

SMTP

Note that setting SMTP_SECURE=false does not mean the email is sent insecurely. It simply means that the connection is secured using the modern STARTTLS protocol command instead of implicit TLS. You should only set this to true when the SMTP server host directs you to do so (generally when using port 465).

EMAIL_TRANSPORT=smtp
FROM_EMAIL=
REPLY_TO_EMAIL=
SMTP_HOST=<your_smtp_server>
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=<your_smtp_username>
SMTP_PASSWORD=<your_smtp_password>
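
For example, if your provider directs you to use implicit TLS on port 465 as noted above, only these two settings change:

SMTP_PORT=465
SMTP_SECURE=true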

AWS SES

Credentials are supplied the same way as for any other program using the AWS SDK.

In this scenario, you would likely either supply the additional environment variables AWS_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY or, when running on AWS, use credentials supplied by the EC2 IMDS.

EMAIL_TRANSPORT=aws-ses
FROM_EMAIL=
REPLY_TO_EMAIL=
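
For example, to supply static credentials via environment variables as described above, you would add the following alongside the block (values are placeholders):

AWS_REGION=<your_aws_region>
AWS_ACCESS_KEY_ID=<your_access_key_id>
AWS_SECRET_ACCESS_KEY=<your_secret_access_key>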

GitHub OAuth

To authenticate with GitHub, you will need to set up a GitHub OAuth app. It needs a callback URL of https://<your_webapp_domain>/auth/github/callback, and you will have to set the following env vars:

AUTH_GITHUB_CLIENT_ID=<your_client_id>
AUTH_GITHUB_CLIENT_SECRET=<your_client_secret>

Restricting access

By default, any email address can sign up and log in. If you would like to restrict this, you can use the WHITELISTED_EMAILS env var. For example:

# every email that does not match this regex will be rejected
WHITELISTED_EMAILS="authorized@yahoo\.com|authorized@gmail\.com"

This will apply to all auth methods.

Troubleshooting

  • Deployment fails at the push step. The machine running the deploy command needs registry access:
docker login -u <username> <registry>
# this should now succeed
npx trigger.dev@v4-beta deploy

The username and password need to match your registry credentials. The defaults for the built-in localhost:5000 registry are registry-user and very-safe-password. You should change these.
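
For example, to log in to the built-in registry with its default credentials (change these in production):

docker login -u registry-user localhost:5000
# enter the password when prompted (default: very-safe-password)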

  • Magic links don’t arrive. The webapp container needs to be able to send emails. You probably need to set up an email transport. See the authentication section for more details.

You should check the logs of the webapp container to see the magic link: docker logs -f trigger-webapp-1

CLI usage

This section highlights some of the CLI commands and options that are useful when self-hosting. Please check the CLI reference for more in-depth documentation.

Login

To avoid being redirected to Trigger.dev Cloud when using the CLI, you need to specify the URL of your self-hosted instance with the --api-url or -a flag. For example:

npx trigger.dev@v4-beta login -a http://trigger.example.com

Once you’ve logged in, you shouldn’t have to specify the URL again with other commands.

Profiles

You can specify a profile when logging in. This allows you to easily use the CLI with multiple instances of Trigger.dev. For example:

npx trigger.dev@v4-beta login -a http://trigger.example.com \
    --profile self-hosted

Logging in with a new profile will also make it the new default profile.

To use a specific profile, you can use the --profile flag with other commands:

npx trigger.dev@v4-beta dev --profile self-hosted

To list all your profiles, use the list-profiles command:

npx trigger.dev@v4-beta list-profiles

To remove a profile, use the logout command:

npx trigger.dev@v4-beta logout --profile self-hosted

To switch to a different profile, use the switch command:

# To run interactively
npx trigger.dev@v4-beta switch

# To switch to a specific profile
npx trigger.dev@v4-beta switch self-hosted

Whoami

It can be useful to check you are logged into the correct instance. Running this will also show the API URL:

npx trigger.dev@v4-beta whoami

CI / GitHub Actions

When running the CLI in a CI environment, your login profiles won’t be available. Instead, you can use the TRIGGER_API_URL and TRIGGER_ACCESS_TOKEN environment variables to point at your self-hosted instance and authenticate.
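
As a minimal sketch, assuming your token is stored as a repository secret named TRIGGER_ACCESS_TOKEN and your instance is reachable at https://trigger.example.com (both placeholders), a deploy workflow could look like this:

name: Deploy to self-hosted Trigger.dev

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # install project dependencies before deploying
      - run: npm ci
      # point the CLI at the self-hosted instance and authenticate
      - run: npx trigger.dev@v4-beta deploy
        env:
          TRIGGER_API_URL: https://trigger.example.com
          TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}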

For more detailed instructions, see the GitHub Actions guide.

Telemetry

By default, the Trigger.dev webapp sends telemetry data to our servers. This data is used to improve the product and is not shared with third parties. If you would like to opt out, set the TRIGGER_TELEMETRY_DISABLED environment variable on the webapp container. The value doesn't matter; it just can't be empty. For example:

services:
  webapp:
    ...
    environment:
      TRIGGER_TELEMETRY_DISABLED: 1