
Caddy + Cloudflare SSL on Proxmox — HTTPS for Every Service (2026)

Tutorial · ~14 min read
Beginner · ~20 min


Tools

  • SSH terminal
  • Web browser
  • Cloudflare dashboard

Software

  • go 1.26.1
  • caddy 2.11.2
  • proxmox-ve 9.1

Right now your services are running on IP addresses and port numbers. PBS is at https://10.1.20.200:8007. Uptime Kuma is at http://10.1.20.102:3001. Pi-hole is at http://10.1.20.100/admin. Every one of them has a self-signed certificate warning or no HTTPS at all.

Caddy fixes all of that. It's a reverse proxy that sits in front of your services and handles HTTPS automatically. You access pbs.hake.rodeo instead of an IP, and the certificate is a real Let's Encrypt cert — no warnings, no port numbers.

The trick is how Caddy gets those certificates. Normally, Let's Encrypt needs to reach your server over the internet to verify you own the domain. But we're not exposing anything to the internet. Instead, we use DNS-01 challenges — Caddy creates a temporary DNS record via the Cloudflare API, Let's Encrypt reads it, and the certificate is issued. No open ports required.

This guide covers building Caddy with the Cloudflare DNS module, configuring it as a reverse proxy for your homelab services, and setting up the Pi-hole DNS records that make it all work. By the end, every service has HTTPS with valid certificates.

Prerequisites

  • A Proxmox VE 9.x host with SSH access
  • Pi-hole running for local DNS
  • A domain you own, managed by Cloudflare (free tier works)
  • A Cloudflare account to create an API token
  • Services to proxy (PBS, Uptime Kuma, Pi-hole — or whatever you have running)

Step 1: Create the Container

Caddy runs in its own container as the central reverse proxy for all services on the network.

On Proxmox host
pct create 101 local:vztmpl/debian-13-standard_13.1-2_amd64.tar.zst \
  --hostname caddy \
  --cores 1 \
  --memory 2048 \
  --rootfs local-lvm:6 \
  --net0 name=eth0,bridge=vmbr0,ip=10.1.20.101/24,gw=10.1.20.1 \
  --nameserver 10.1.20.100 \
  --unprivileged 1 \
  --features nesting=1 \
  --onboot 1 \
  --start 1

WARNING

Replace the IP, gateway, and DNS with your network values. DNS points to Pi-hole since it's already running.

The container has 2048 MB of RAM because the xcaddy build process is memory-hungry. We'll reduce this to 512 MB after the build is done.

Apply the Debian 13 tmpfs fix. Enter the container:

On Proxmox host
pct enter 101
Inside CT 101
systemctl mask tmp.mount
Inside CT 101
exit
On Proxmox host
pct reboot 101

Wait a few seconds for the container to come back, then enter it again to install dependencies:

On Proxmox host
pct enter 101
Inside CT 101
apt-get update && apt-get install -y curl wget tar

Step 2: Install Go

xcaddy needs Go to compile Caddy with custom modules. Caddy v2.11 requires Go 1.26+.

Download and install Go from the official source:

Inside CT 101
wget https://go.dev/dl/go1.26.1.linux-amd64.tar.gz

WARNING

Always download Go from go.dev — never from third-party sources. Verify you're getting the official binary.

Extract it to /usr/local:

Inside CT 101
tar -C /usr/local -xzf go1.26.1.linux-amd64.tar.gz

Add Go to the system PATH so it's available for all users:

Inside CT 101
nano /etc/profile.d/go.sh

Paste the following, then save with Ctrl+X, Y, Enter:

/etc/profile.d/go.sh
export PATH=$PATH:/usr/local/go/bin:~/go/bin

Load the new PATH:

Inside CT 101
source /etc/profile.d/go.sh

Verify Go is installed:

Inside CT 101
go version

You should see go version go1.26.1 linux/amd64. Clean up the download:

Inside CT 101
rm go1.26.1.linux-amd64.tar.gz

Step 3: Build Caddy with the Cloudflare Module

The Caddy package from apt doesn't include the Cloudflare DNS module. We need to build a custom binary that has it baked in. That's what xcaddy does — it compiles Caddy from source with any additional modules you specify.

Install xcaddy:

Inside CT 101
go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest

Build Caddy with the Cloudflare DNS module:

Inside CT 101
~/go/bin/xcaddy build --with github.com/caddy-dns/cloudflare

This takes 1-2 minutes. xcaddy downloads the Caddy source, the Cloudflare module from the official caddy-dns GitHub org, compiles everything together, and outputs a caddy binary in the current directory.

NOTE

Only use modules from the official caddy-dns organization or the caddyserver org. This space has its share of malicious packages — stick to known, trusted sources.

Verify the build:

Inside CT 101
./caddy version

Confirm the Cloudflare module is included:

Inside CT 101
./caddy list-modules | grep cloudflare

You should see dns.providers.cloudflare in the output.

Step 4: Install the Caddy Package and Replace the Binary

Instead of writing our own systemd service file, we'll install the official Caddy package — it comes with a well-tested service file, the caddy user/group, and proper directory structure. Then we replace its binary with our custom build.

Add the official Caddy apt repository:

Inside CT 101
apt-get install -y debian-keyring debian-archive-keyring apt-transport-https
Inside CT 101
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
Inside CT 101
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
Inside CT 101
apt-get update && apt-get install -y caddy

Caddy auto-starts after install. Stop it so we can swap the binary:

Inside CT 101
systemctl stop caddy

Now the key part — use dpkg-divert to prevent future apt upgrade commands from overwriting our custom binary:

Inside CT 101
dpkg-divert --divert /usr/bin/caddy.default --rename /usr/bin/caddy

This tells apt "the real caddy binary lives at caddy.default now — don't touch the caddy path." Move our custom build into place:

Inside CT 101
mv ./caddy /usr/bin/caddy.custom

Set up alternatives so you can switch between the custom and default builds:

Inside CT 101
update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.default 10
Inside CT 101
update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.custom 50

The custom build gets priority 50 (higher), so it's the active one. Verify:

Inside CT 101
caddy version
caddy list-modules | grep cloudflare

You should see your custom Caddy version with the Cloudflare module. To switch between builds later: update-alternatives --config caddy.

TIP

When you need to update Caddy in the future, rebuild with xcaddy (xcaddy build --with github.com/caddy-dns/cloudflare), move the new binary to /usr/bin/caddy.custom, and restart: systemctl restart caddy. The dpkg-divert ensures apt never touches your custom binary.

Step 5: Create a Cloudflare API Token

Caddy needs a Cloudflare API token to create the DNS records that Let's Encrypt uses to verify domain ownership. We'll scope this token to the minimum permissions needed.

In the Cloudflare dashboard:

  1. Go to My Profile > API Tokens > Create Token
  2. Select Custom Token > Get Started
  3. Configure:
    • Token name: caddy-dns
    • Permissions: Zone > DNS > Edit
    • Zone Resources: Include > Specific zone > select your domain
  4. Click Continue to summary > Create Token
  5. Copy the token immediately — it's shown only once

WARNING

This token can only edit DNS records in the one zone you selected. It can't access other Cloudflare settings, other domains, or your account. That's the principle of least privilege — Caddy gets exactly what it needs and nothing more.

Step 6: Configure the Token in Systemd

Never put API tokens directly in the Caddyfile. We'll store the token in an environment file that only root can read, and tell systemd to pass it to Caddy.

Create the environment file:

Inside CT 101
nano /etc/caddy/caddy.env

Paste the following (replace with your actual token), then save with Ctrl+X, Y, Enter:

/etc/caddy/caddy.env
CLOUDFLARE_API_TOKEN=your-actual-token-here

Lock down permissions so only root can read it:

Inside CT 101
chmod 600 /etc/caddy/caddy.env

Create a systemd override to load the environment file:

Inside CT 101
systemctl edit caddy

This opens an editor. Paste the following between the comment lines, then save:

systemd override
[Service]
EnvironmentFile=/etc/caddy/caddy.env

Reload systemd to pick up the override:

Inside CT 101
systemctl daemon-reload

Step 7: Write the Caddyfile

This is the core configuration. Each service gets a site block with its subdomain, the Cloudflare DNS challenge for TLS, and the reverse proxy target. We also add a health check endpoint on port 80 for monitoring.

Inside CT 101
nano /etc/caddy/Caddyfile

Replace the default contents with the following, then save with Ctrl+X, Y, Enter:

/etc/caddy/Caddyfile
:80 {
	respond "Caddy is running" 200
}
 
uptime.hake.rodeo {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy 10.1.20.102:3001
}
 
pihole.hake.rodeo {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy 10.1.20.100
}
 
pbs.hake.rodeo {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy https://10.1.20.200:8007 {
		transport http {
			tls_insecure_skip_verify
		}
	}
}
 
pve.hake.rodeo {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy https://10.1.20.10:8006 {
		transport http {
			tls_insecure_skip_verify
		}
		header_up Host {hostport}
	}
}

WARNING

Replace the subdomains (*.hake.rodeo) with your own domain. Replace the IP addresses with your service IPs.

A few things to understand about this configuration:

The :80 health check block — responds with "Caddy is running" on port 80 for any request that doesn't match a domain. This gives us a simple health check endpoint for Uptime Kuma at http://10.1.20.101.

HTTP backends (Uptime Kuma, Pi-hole) — just reverse_proxy IP:port. Caddy handles HTTPS on the frontend and connects to the backend over plain HTTP. Simple.

HTTPS backends with self-signed certs (PBS, Proxmox VE) — these services have their own HTTPS with self-signed certificates. The tls_insecure_skip_verify tells Caddy to trust the backend's cert regardless. This is acceptable on a trusted LAN where you control both ends.

The PVE Host header fix — Caddy v2.11 changed how it handles HTTPS upstreams. It automatically rewrites the Host header to match the upstream address, which breaks PVE's redirect logic and WebSocket connections (noVNC consoles). Adding header_up Host {hostport} preserves the original hostname. If you're using an older Caddy version, you may not need this line — but it doesn't hurt to include it.
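If the repeated tls blocks bother you, Caddyfile snippets can factor them out. A sketch using the same token variable and one of the site blocks from above — the snippet name cloudflare_tls is an arbitrary choice, not required by Caddy:

```caddyfile
# Reusable snippet: Cloudflare DNS-01 TLS config
(cloudflare_tls) {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
}

uptime.hake.rodeo {
	import cloudflare_tls
	reverse_proxy 10.1.20.102:3001
}
```

Snippets are a standard Caddyfile feature; import pastes the snippet body into each site block. This is purely optional — the explicit version above works identically.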

TIP

This is the per-subdomain approach — each domain gets its own certificate from Let's Encrypt. An alternative is a wildcard certificate (*.hake.rodeo) that covers all subdomains with a single cert. At homelab scale, the practical difference is minimal. Per-subdomain is simpler to set up and what most tutorials use. A wildcard makes fewer API calls and keeps individual subdomains out of certificate transparency logs.
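For reference, a wildcard setup collapses everything into one site block with host matchers. A sketch using the same illustrative subdomains and IPs as above:

```caddyfile
*.hake.rodeo {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}

	@uptime host uptime.hake.rodeo
	handle @uptime {
		reverse_proxy 10.1.20.102:3001
	}

	@pihole host pihole.hake.rodeo
	handle @pihole {
		reverse_proxy 10.1.20.100
	}

	# Subdomains with no matcher fall through here
	handle {
		abort
	}
}
```

Wildcard certs require a DNS challenge, which is exactly what the Cloudflare module provides — so this config works with the same token and no other changes.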

Step 8: Add Pi-hole DNS Records

Every subdomain needs a Pi-hole local DNS record pointing to Caddy's IP. When a device queries pbs.hake.rodeo, Pi-hole returns Caddy's IP (not the service's IP), and Caddy handles the routing.

Exit the Caddy container:

Inside CT 101
exit

In the Pi-hole web UI (http://10.1.20.100/admin), go to Local DNS > DNS Records. Add a record for each service, all pointing to Caddy's IP:

Domain               IP
uptime.hake.rodeo    10.1.20.101
pihole.hake.rodeo    10.1.20.101
pbs.hake.rodeo       10.1.20.101
pve.hake.rodeo       10.1.20.101

All four records point to 10.1.20.101 — Caddy's IP, not the individual services. Caddy reads the hostname from the request and routes to the correct backend.
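Under the hood these are plain hosts-style records (Pi-hole v5 kept them in /etc/pihole/custom.list; v6 stores them in its config file, so the exact path varies by version). The equivalent data is just:

```
# hosts-format equivalent of the four records above
10.1.20.101 uptime.hake.rodeo
10.1.20.101 pihole.hake.rodeo
10.1.20.101 pbs.hake.rodeo
10.1.20.101 pve.hake.rodeo
```

Seeing it this way makes the pattern obvious: every name maps to Caddy, and only Caddy knows where the real services live.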

Verify one resolves:

On Proxmox host
dig pbs.hake.rodeo @10.1.20.100

You should see 10.1.20.101 in the answer section.

Step 9: Start Caddy and Test

Enter the Caddy container and start the service:

On Proxmox host
pct enter 101
Inside CT 101
systemctl start caddy

Check that it's running:

Inside CT 101
systemctl status caddy --no-pager | head -10

Check the logs for certificate issuance:

Inside CT 101
journalctl -u caddy --no-pager | tail -20

You should see Caddy requesting and obtaining certificates for each domain via the Cloudflare DNS challenge.

Inside CT 101
exit

Now open a browser and test each service:

  • https://uptime.hake.rodeo — Uptime Kuma
  • https://pihole.hake.rodeo — Pi-hole admin
  • https://pbs.hake.rodeo — PBS login
  • https://pve.hake.rodeo — Proxmox VE login

Click the lock icon in the browser — the certificate should be from Let's Encrypt, not self-signed. No warnings, no port numbers, just clean HTTPS.

NOTE

The first request for each domain takes a few seconds while Caddy completes the DNS-01 challenge and obtains the certificate. After that, certs are cached and renewed automatically — you never have to think about them again.

Step 10: Adding a New Service

This is the workflow you'll follow for every service you deploy from now on. It takes about 2 minutes:

  1. Add a Caddyfile entry — enter the Caddy container, edit the Caddyfile, copy an existing site block, change the subdomain and upstream IP

  2. Add a Pi-hole DNS record — in the Pi-hole web UI, go to Local DNS > DNS Records and add the new subdomain pointing to Caddy's IP (10.1.20.101)

  3. Reload Caddy — from inside the Caddy container, apply the new config with zero downtime:

Inside CT 101
systemctl reload caddy

That's it. The new service is available at https://newservice.hake.rodeo with a valid certificate.
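The Caddyfile edit in step 1 is mechanical enough to script. A minimal sketch — the site_block function name, the newservice subdomain, and the upstream IP are illustrative, not part of the guide:

```shell
#!/bin/sh
# Sketch: print a Caddyfile site block for a new service.
# $1 = domain, $2 = upstream host:port
site_block() {
	printf '\n%s {\n' "$1"
	printf '\ttls {\n\t\tdns cloudflare {env.CLOUDFLARE_API_TOKEN}\n\t}\n'
	printf '\treverse_proxy %s\n}\n' "$2"
}

# Typical use — append the block, then reload:
#   site_block newservice.hake.rodeo 10.1.20.103:8080 >> /etc/caddy/Caddyfile
#   systemctl reload caddy
site_block newservice.hake.rodeo 10.1.20.103:8080
```

Before reloading, caddy validate --config /etc/caddy/Caddyfile catches syntax errors without touching the running instance.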

Step 11: Reduce Container RAM

Now that Caddy is built, reduce the container's RAM. The xcaddy build needed 2 GB, but running Caddy uses almost nothing.

On Proxmox host
pct set 101 --memory 512

This takes effect on the next container restart. Caddy runs comfortably on 512 MB.

Step 12: Add Monitoring and Backup

Uptime Kuma Monitors

Set up two monitors for Caddy in Uptime Kuma:

Ping monitor — is the container alive?

  • Type: Ping
  • Name: Caddy Ping
  • Hostname: your Caddy IP (e.g., 10.1.20.101)
  • Interval: 60 seconds

HTTP monitor — is Caddy responding?

  • Type: HTTP(s)
  • Name: Caddy HTTP
  • URL: http://10.1.20.101
  • Expected status: 200
  • Interval: 60 seconds

The HTTP monitor hits the :80 health check block we added to the Caddyfile. If Caddy is running, it returns "Caddy is running" with a 200. If the container is up but Caddy crashed, the HTTP check fails while ping still passes — so you know exactly what's wrong.

PBS Backup

Add CT 101 to your PBS backup job. In the PVE web UI, go to Datacenter > Backup, edit the backup job, and add CT 101 to the selection.

Troubleshooting

xcaddy Build Fails with OOM

The build process needs ~1.5-2 GB of RAM. If the container has less than 2048 MB, the build gets OOM-killed. Increase RAM with pct set 101 --memory 2048, reboot, and try again.

"Invalid request headers" During Certificate Issuance

The Cloudflare API token isn't reaching Caddy. Verify: is /etc/caddy/caddy.env readable? Does the systemd override exist? Run systemctl cat caddy to check the full service configuration including overrides.

PVE Redirect Loop or Broken Console

Caddy v2.11 auto-rewrites the Host header for HTTPS upstreams. Add header_up Host {hostport} to the PVE reverse proxy block. Without it, PVE's redirects and noVNC WebSocket connections break.

Certificate Not Trusted in Browser

If Caddy issues an internal cert instead of a Let's Encrypt one, the DNS-01 challenge failed. Check journalctl -u caddy for ACME errors. Common causes: wrong API token, token doesn't have Zone > DNS > Edit permission, or the domain in the Caddyfile doesn't match the zone in the token. Adding resolvers 1.1.1.1 inside the tls block can help with DNS propagation delays.
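The resolvers option slots directly into the existing tls block. Using the PBS site from the Caddyfile above as an example:

```caddyfile
pbs.hake.rodeo {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
		# Check TXT record propagation against Cloudflare's resolver directly,
		# bypassing local DNS (Pi-hole), which won't see the challenge record
		resolvers 1.1.1.1
	}
	reverse_proxy https://10.1.20.200:8007 {
		transport http {
			tls_insecure_skip_verify
		}
	}
}
```

This matters in this setup because the container's DNS points at Pi-hole, which answers for your domain locally and may never see the temporary _acme-challenge TXT record.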

apt upgrade Overwrites Custom Binary

If you skipped the dpkg-divert step, an apt upgrade will replace your custom binary with stock Caddy (no Cloudflare module). Re-run the divert and alternatives setup from Step 4, or rebuild with xcaddy.

Caddy Won't Start — Port 80/443 in Use

Something else is binding port 80 or 443. In a dedicated Caddy container this shouldn't happen. Check with ss -tlnp | grep -E ':80|:443'.

Summary

Every service on your network now has HTTPS with valid Let's Encrypt certificates:

  • Caddy v2.11 with the Cloudflare DNS module, built via xcaddy
  • DNS-01 challenges — no ports exposed to the internet
  • Per-subdomain certificates — each service gets its own cert, automatically renewed
  • Four services proxied — PBS, Uptime Kuma, Pi-hole, and Proxmox VE
  • Health check endpoint on port 80 for monitoring
  • 2-minute workflow for adding new services: Caddyfile entry + Pi-hole DNS record + reload

With this tutorial complete, you now have all four foundational homelab services running:

Service             What it does                          Tutorial
PBS                 Automated backups                     Proxmox Backup Server 4.1 Setup Guide
Pi-hole + Unbound   DNS filtering + recursive resolution  Pi-hole v6 + Unbound on Proxmox
Caddy               HTTPS reverse proxy                   This guide
Uptime Kuma         Uptime monitoring                     Install & First Monitors

Every future service you deploy follows the standard post-deploy workflow: deploy the service, add a Caddy route + Pi-hole DNS record, add an Uptime Kuma monitor, add to the PBS backup job. Infrastructure that runs itself.
