
Self-Hosting Everything: Dokploy + Cloudflare Tunnels on a Home Server

My complete self-hosting stack — Dokploy as the deployment platform, Cloudflare Tunnels for zero-open-port exposure, and a wildcard subdomain setup that makes spinning up new services trivial.

The Goal

Run production-grade personal services (Gitea, Plausible, MinIO, n8n, custom apps) on a home server with:

  • Zero open inbound ports on the router
  • Automatic HTTPS via Cloudflare
  • One-command deployments via Dokploy

Stack

| Layer | Tool |
| --- | --- |
| Deployment platform | Dokploy |
| Ingress / DNS | Cloudflare Tunnels + Workers |
| Container runtime | Docker + Compose |
| Reverse proxy | Traefik (managed by Dokploy) |
| Object storage | MinIO |

How It Works

Cloudflare Tunnel (cloudflared) runs as a Docker container on the server. It opens an outbound connection to Cloudflare’s edge — no inbound firewall rules needed. DNS records point to <tunnel-id>.cfargotunnel.com.
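A minimal sketch of the cloudflared service in Compose, assuming a tunnel token created in the Cloudflare Zero Trust dashboard (the image tag and `TUNNEL_TOKEN` variable here are illustrative):

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:2024.6.1  # pin a specific tag; see Gotchas
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      # Token obtained from the Zero Trust dashboard or `cloudflared tunnel token`
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
```

Because the connection is outbound-only, the container needs no published ports at all.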

Dokploy provides the deployment UI: connect a Git repo, set environment variables, and it handles compose file generation, rolling updates, and log streaming.

Wildcard subdomain routing: A single tunnel config routes *.yourdomain.com to the local Traefik instance, which then proxies to the correct container by hostname. Adding a new service = add a Traefik label, push, done.
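To sketch the two halves of that routing (hostnames, service names, and ports below are placeholders, not my exact config): the tunnel side is a catch-all ingress rule pointing at Traefik,

```yaml
# cloudflared config.yml: send every *.yourdomain.com hostname to Traefik
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: "*.yourdomain.com"
    service: http://traefik:80   # local Traefik entrypoint
  - service: http_status:404     # required catch-all as the last rule
```

and the per-service side is just a `Host()` rule label on the container:

```yaml
# New service: one Host() label and Traefik routes to it by hostname
services:
  n8n:
    image: n8nio/n8n:1.45.0
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`n8n.yourdomain.com`)
      - traefik.http.services.n8n.loadbalancer.server.port=5678
```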

Gotchas

  • Cloudflare Tunnel has a ~100 MB WebSocket payload limit — matters if you’re proxying large file uploads directly (use MinIO pre-signed URLs instead).
  • Dokploy’s built-in Traefik conflicts with an external Traefik instance — pick one.
  • cloudflared needs to be pinned to a specific version in compose to avoid surprise breakage on auto-update.