We Tested Self-Hosted n8n On A VPS: What We Learned

If you follow modern automation, you have seen n8n come up more and more. We wanted to know what it takes to run it properly on a virtual private server, not just for a demo but for real workloads. While researching we found a hands-on n8n hosting guide that matched the approach we like to use, plus an n8n VPS option we could try while benchmarking. We also looked at the usual providers you would shortlist for a small but serious deployment, then picked a host that answered questions quickly when we asked about proxy headers and backups.

What n8n Is For Developers

n8n is an open-source workflow engine with a visual editor. You wire nodes to talk to APIs, webhooks, files, queues and databases, and you can drop into code when a prebuilt node does not cover a special case. The editor runs in the browser. The runtime executes flows on the server. It is friendly for no-code exploration yet does not hide the plumbing, which is why developers and ops teams pick it for production.

Why Self-Host On A VPS

Self-hosting gives you cost predictability and control over data location. Webhooks need a stable public endpoint with TLS. Internal services often sit behind VPNs or private networks. A VPS gives you a fixed address, a clean reverse proxy, room to grow and a place where backups and snapshots make sense. If a flow touches customer data or billing you want boring reliability.

Why We Chose LumaDock For Our n8n Tests

Fast Human Support When We Asked Real Questions

We tested support first. A live chat reply arrived within minutes with clear answers about forwarding headers behind their mitigation layer and how to keep webhook URLs stable. No ticket maze. When you are mid-migration that speed matters more than a small price delta, so for us this was the deciding factor.

Docker And Compose Ready Out Of The Box

Their images gave us Docker and Docker Compose from the start, so we did not spend the first hour installing basics. Sample Compose templates for n8n with a reverse proxy and PostgreSQL were easy to adapt, which cut the yak shaving and let us jump straight to testing queue workers.

Clean Networking And Always-On Protection

Always-on DDoS mitigation is included. Firewall rules are simple to manage from the panel. Each VPS ships with a dedicated IPv4 and unmetered bandwidth so we did not have to juggle limits during webhook bursts. Keeping Postgres and Redis on private networking was straightforward.

Storage And Host Hardware We Could Trust

Triple-replicated NVMe storage kept latency consistent during write-heavy runs. We did not see odd pauses during container pulls or database checkpoints which is what we wanted under load.

Backups And Snapshots That Fit Day-2 Ops

Snapshot options were available from the panel. Performance plans include an automatic backup slot with the option to add more. Taking a snapshot before an upgrade and a nightly database dump gave us a simple rollback plan without extra tooling.

Predictable Pricing And EU Footprint

Entry plans start at a low monthly rate with a 30 day refund period. Regions in Paris, Bucharest, Frankfurt and London kept latency low to common European services and made data residency conversations easier for EU teams.

The Architecture You Actually Run

Under the editor there is a runtime engine that executes nodes. A database stores workflows, credentials and execution history. A reverse proxy terminates TLS and forwards requests. Redis is optional unless you enable queue mode. That is the whole picture you need to reason about scaling and failure modes.

Two Deployment Shapes

  • Single-Node: editor and executions run together. Simple to deploy, easy to back up, great for a start. Heavy runs can make the UI feel slow during spikes.
  • Queue Mode: the editor stays light, jobs go to Redis, workers pull and execute in parallel, PostgreSQL holds state and history. This keeps the UI responsive and lets you scale horizontally.
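
In Compose terms, queue mode means one or more worker services running the same image with the worker command, nested under the services: key alongside the main n8n service. A minimal sketch, assuming the single-node stack shown later in this article plus a redis service; the service name is an example:

  n8n-worker:
    image: n8nio/n8n:latest
    restart: unless-stopped
    command: worker
    environment:
      # same database and encryption key as the main n8n service
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      # queue mode wiring
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
    depends_on:
      - postgres
      - redis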

Gotchas Worth Respecting

  • Binary data: with queue workers, keep the default in-memory mode or use a supported external store; filesystem binary mode fits single-node.
  • Reverse proxy variables: set N8N_HOST, N8N_PROTOCOL and WEBHOOK_URL so links and webhooks use the right scheme and host. If there is more than one proxy hop, set N8N_PROXY_HOPS.
  • Encryption key: generate a strong key before first run. Changing it later without a plan locks stored credentials.

VPS Sizing That Does Not Hurt Later

Good starting points that kept p95 latency in check for us:

  • Starter: 2 vCPU, 4 GB RAM, NVMe storage. Personal projects, light webhooks, small files.
  • Growing: 4 vCPU, 8 GB RAM. Multiple external APIs, occasional bursts, first queue workers.
  • Busy: 8 vCPU, 16 GB RAM or more. Several workers, heavier transforms, larger payloads.

Pick a region near your webhook sources and APIs. Latency shows up as slow triggers and slow retries long before you run out of CPU.

A Clean First Deployment

We prefer Docker for portability. Keep n8n, PostgreSQL and your proxy in separate services. Persist app data and database storage on volumes. Keep secrets out of the repo. Bring it up, confirm TLS then start building flows.

version: "3.9"
services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    environment:
      - N8N_HOST=your.domain.com
      - N8N_PORT=5678
      - WEBHOOK_URL=https://your.domain.com/
      - N8N_PROTOCOL=https
      - N8N_PROXY_HOPS=1
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      # enable these when you switch to queue mode
      # - EXECUTIONS_MODE=queue
      # - QUEUE_BULL_REDIS_HOST=redis
      # - QUEUE_BULL_REDIS_PORT=6379
    depends_on:
      - postgres
    ports:
      - "127.0.0.1:5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=n8n
    volumes:
      - pg_data:/var/lib/postgresql/data

  # add this block when you enable queue mode
  # redis:
  #   image: redis:7-alpine
  #   restart: unless-stopped

volumes:
  n8n_data:
  pg_data:

Generate a 32-byte key and keep it safe:

openssl rand -hex 32

Reverse Proxy And TLS Without Weird URLs

Terminate TLS at Caddy, Traefik or Nginx. Forward X-Forwarded-Proto, X-Forwarded-Host and X-Forwarded-For. Redirect HTTP to HTTPS. Enable HSTS when you are sure renewals work.

With Caddy a minimal site block looks like this:

your.domain.com {
  reverse_proxy 127.0.0.1:5678
}

With Nginx use a modern TLS snippet, set the forwarded headers and tune body size plus timeouts for large uploads or long-running nodes. Keep the n8n port bound to localhost. Expose only 80 and 443 to the public.
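
For Nginx, a minimal server block sketch in that spirit; it assumes certificates are already issued and n8n stays bound to localhost as above, and the paths, sizes and timeouts are examples to tune, not recommendations:

server {
  listen 443 ssl;
  server_name your.domain.com;

  ssl_certificate     /etc/letsencrypt/live/your.domain.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/your.domain.com/privkey.pem;

  client_max_body_size 64m;      # large uploads
  proxy_read_timeout   300s;     # long-running nodes

  location / {
    proxy_pass http://127.0.0.1:5678;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;    # the editor uses websockets
    proxy_set_header Connection "upgrade";
  }
}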

Environment Variables That Matter

Put them in .env, then reference them with env_file or inline; a minimal .env sketch follows the list below. Keep that file out of public repos.

  • Public URL: N8N_HOST, N8N_PROTOCOL, WEBHOOK_URL.
  • Proxy trust: N8N_PROXY_HOPS set to match your proxy hop count.
  • Security: N8N_ENCRYPTION_KEY for credentials, N8N_SECURE_COOKIE=true when you serve HTTPS.
  • Database: DB_TYPE=postgresdb with host, port, user, database, password.
  • Executions: EXECUTIONS_MODE=queue plus Redis host and port when you scale out.
  • Binary data: leave the default for queue workers; set N8N_DEFAULT_BINARY_DATA_MODE=filesystem only on single-node.
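
The .env file itself stays tiny. A sketch matching the Compose file above; both values are placeholders you generate yourself:

# referenced as ${...} by the Compose file; never commit this file
N8N_ENCRYPTION_KEY=paste-the-output-of-openssl-rand-hex-32-here
POSTGRES_PASSWORD=a-long-random-password-of-your-own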

Data Storage: PostgreSQL vs SQLite

SQLite is fine for a test. For production use PostgreSQL so concurrent executions do not block each other and history stays consistent. Put Postgres on NVMe. Set sensible connection limits. Let autovacuum do its work. Back it up like you mean it.

Backups And Disaster Recovery

You need both file-level and platform-level safety nets. A pattern that worked without surprises:

  • Nightly database dumps to off-box storage with encryption.
  • Persistent app data volume for /home/node/.n8n.
  • Filesystem binary path persisted only if you use single-node with filesystem mode.
  • VPS snapshots before risky changes so rollback is minutes not hours.

Practice a restore on a small test server. A backup you never restored is a note not a safety plan.
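
A minimal nightly dump sketch matching the first item in that list, run from the Compose directory; the backup path, passphrase file and rclone remote name are examples, not part of any n8n tooling:

#!/usr/bin/env bash
set -euo pipefail
STAMP=$(date +%F)
DUMP="/var/backups/n8n-${STAMP}.sql.gz"

# dump the n8n database from the Compose stack above and compress it
docker compose exec -T postgres pg_dump -U n8n n8n | gzip > "${DUMP}"

# encrypt before the file leaves the box (symmetric GPG with a local passphrase file)
gpg --batch --yes --pinentry-mode loopback --symmetric \
  --passphrase-file /root/.backup-pass -o "${DUMP}.gpg" "${DUMP}"

# ship it to off-box storage; "backup" is a pre-configured rclone remote
rclone copy "${DUMP}.gpg" backup:n8n/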

Monitoring That Catches The Right Problems

Keep the stack small and useful. We used Uptime Kuma for external checks, Prometheus for metrics and Grafana for dashboards. Watch CPU, memory, disk, Postgres connections, Redis ops, queue depth, SSL expiry and webhook latency. Alert on rate of change so you catch slow drifts not just hard failures.
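
If you also want Prometheus to see n8n itself, n8n can expose a metrics endpoint when N8N_METRICS=true is set on the container. A scrape sketch under that assumption, with Prometheus on the same host and the job name chosen arbitrarily:

scrape_configs:
  - job_name: n8n
    static_configs:
      - targets: ["127.0.0.1:5678"]   # /metrics on the normal n8n port once metrics are enabled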

Performance And Scaling

  • Workers: several small workers usually beat one giant worker.
  • Hot paths: if most time is network wait add workers. If CPU is saturated add vCPU or move heavy steps to a dedicated worker.
  • History: prune old executions on a schedule to keep the database lean (a settings sketch follows this list).
  • Placement: keep the VPS near the APIs you call most.
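
The pruning mentioned above is controlled with environment variables on the n8n service. A hedged sketch in Compose environment form; the retention values are examples, tune them to your audit needs:

      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168            # keep roughly one week of history (hours)
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000  # hard cap on stored executions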

Security Checklist You Will Actually Use

  • Open only 80 and 443 to the world, plus SSH if needed (a minimal ufw sketch follows this list).
  • Use SSH keys. Disable password login.
  • Restrict editor access. Consider basic auth or SSO.
  • Rotate secrets when staff changes. Treat the encryption key as critical.
  • Update the OS and containers on a schedule.
  • Verify third-party webhook signatures when providers support them.
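
The first item as a ufw sketch on a Debian or Ubuntu host; adjust the SSH port and, ideally, restrict it to known source IPs:

ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp     # or your custom SSH port
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable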

Troubleshooting Notes From Real Runs

Webhook links look wrong: fix N8N_HOST, N8N_PROTOCOL and WEBHOOK_URL and confirm the forwarded headers. Restart the containers so changes apply.

Credentials went missing: the app data volume did not persist or the encryption key changed. Restore from backup, then keep the key stable.

Workers idle but jobs queue: verify the queue mode variables, Redis reachability, and that the editor and workers use the same database.

Uploads fail: raise the proxy body size and timeouts. Check free disk space.

Disk filled overnight: prune execution logs and rotate container logs. Move large binaries out of the database path.
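
For the container log side of that, a per-service logging cap in the Compose file keeps json-file logs from eating the disk; the sizes are examples:

    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"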

Five Small Automations Worth Building First

  • Form To Slack Triage: validate a JSON payload, enrich it, post to Slack with a clear layout, add retry with backoff.
  • Failed Payment Loop: listen for failed charges, tag the CRM, send a transactional email, open a lightweight ticket.
  • CSV To API Upsert: fetch a CSV, chunk rows, map fields with a Code node, upsert through a REST node, keep a progress marker.
  • Release Notes Broadcast: on a Git tag, create release notes, publish to a static page, push to email and a private channel.
  • Support Signal Alert: poll the help desk, compute simple moving averages on tags, notify when a tag crosses a threshold.

Cost And Picking A Host

Self-hosted n8n makes sense when workflows grow or when you need custom nodes and private integrations. We checked a handful of VPS providers you probably know. We ended up testing LumaDock because the specs were clear, storage was NVMe and a support reply came back quickly when we asked about webhook headers behind their mitigation layer. You can reach similar outcomes on other providers with the right plan size. Quick and useful support during setup saved us time.

On LumaDock the Bucharest region runs on AMD EPYC, Frankfurt and London run on Intel Xeon Gold. Plans include DDoS protection, firewall management, dedicated IPv4 and unmetered bandwidth. Pricing starts at a low entry point which is useful for pilots. We kept the test neutral then stuck with the plan through queue mode because the latency profile stayed consistent.

Upgrade And Change Management

  • Clone the stack to staging with a test domain. Run a smoke test.
  • Before production changes take a database dump and a snapshot.
  • For single-node, stop the service, pull the image, start, test (commands sketched after this list).
  • For queue mode, drain workers, upgrade one, observe, then roll the rest. Upgrade the editor last.
  • Keep the previous image tag for instant rollback.
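
The single-node path from that list as commands against the Compose file above; the tag you pin is whatever you validated in staging:

docker compose exec -T postgres pg_dump -U n8n n8n > pre-upgrade.sql   # dump first
docker compose pull n8n
docker compose up -d n8n
docker compose logs --tail=100 n8n   # confirm the startup migration finished cleanly
# rollback: put the previous image tag back in the Compose file and run `docker compose up -d n8n` again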

When To Split Services

  • Database pressure: large execution history or many concurrent connections. Move Postgres to its own VM with more RAM and NVMe.
  • Noisy workloads: if one workflow spikes CPU, move a worker to a separate VM.
  • Compliance: if isolation is required, keep Redis and Postgres on private network hosts.

What We Would Repeat Next Time

Set WEBHOOK_URL correctly on day one. Write the encryption key in two safe places. Add monitoring before the first real workflow. Practice a restore before the first upgrade. Move to queue mode when the editor feels sticky, not after. Most avoidable problems were in those five lines.

Closing Thoughts

n8n respects the time you put into it. Treat it like an application not a throwaway container. Use PostgreSQL, keep the proxy simple, back up what matters and test your recovery steps. Start with a clean single-node install then add queue workers when graphs tell you it is time. With that baseline you get a stable automation platform that stays out of the way while your flows do the work.
