Caddy + Cloudflare SSL on Proxmox — HTTPS for Every Service (2026)

Prerequisites
- Proxmox VE 9.x host
- Pi-hole running for local DNS
- A domain managed by Cloudflare
- Cloudflare account

Tools
- SSH terminal
- Web browser
- Cloudflare dashboard

Software
- go — 1.26.1
- caddy — 2.11.2
- proxmox-ve — 9.1
Right now your services are running on IP addresses and port numbers. PBS is at https://10.1.20.200:8007. Uptime Kuma is at http://10.1.20.102:3001. Pi-hole is at http://10.1.20.100/admin. Every one of them has a self-signed certificate warning or no HTTPS at all.
Caddy fixes all of that. It's a reverse proxy that sits in front of your services and handles HTTPS automatically. You access pbs.hake.rodeo instead of an IP, and the certificate is a real Let's Encrypt cert — no warnings, no port numbers.
The trick is how Caddy gets those certificates. Normally, Let's Encrypt needs to reach your server over the internet to verify you own the domain. But we're not exposing anything to the internet. Instead, we use DNS-01 challenges — Caddy creates a temporary DNS record via the Cloudflare API, Let's Encrypt reads it, and the certificate is issued. No open ports required.
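Concretely, the challenge is a single temporary TXT record at a well-known name under your domain. A sketch of what it looks like while a challenge is in flight (the value is a per-challenge token generated by Let's Encrypt, shown here as a placeholder):

```
_acme-challenge.pbs.hake.rodeo.  120  IN  TXT  "<per-challenge-token>"
```

Caddy creates this record through the Cloudflare API, waits until it is visible, and deletes it once the certificate is issued.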
This guide covers building Caddy with the Cloudflare DNS module, configuring it as a reverse proxy for your homelab services, and setting up the Pi-hole DNS records that make it all work. By the end, every service has HTTPS with valid certificates.
Prerequisites
- A Proxmox VE 9.x host with SSH access
- Pi-hole running for local DNS
- A domain you own, managed by Cloudflare (free tier works)
- A Cloudflare account to create an API token
- Services to proxy (PBS, Uptime Kuma, Pi-hole — or whatever you have running)
Step 1: Create the Container
Caddy runs in its own container as the central reverse proxy for all services on the network.
```shell
pct create 101 local:vztmpl/debian-13-standard_13.1-2_amd64.tar.zst \
  --hostname caddy \
  --cores 1 \
  --memory 2048 \
  --rootfs local-lvm:6 \
  --net0 name=eth0,bridge=vmbr0,ip=10.1.20.101/24,gw=10.1.20.1 \
  --nameserver 10.1.20.100 \
  --unprivileged 1 \
  --features nesting=1 \
  --onboot 1 \
  --start 1
```

WARNING
Replace the IP, gateway, and DNS with your network values. DNS points to Pi-hole since it's already running.
The container has 2048 MB of RAM because the xcaddy build process is memory-hungry. We'll reduce this to 512 MB after the build is done.
Apply the Debian 13 tmpfs fix. Enter the container:

```shell
pct enter 101
systemctl mask tmp.mount
exit
pct reboot 101
```

Wait a few seconds for the container to come back, then enter it again to install dependencies:

```shell
pct enter 101
apt-get update && apt-get install -y curl wget tar
```

Step 2: Install Go
xcaddy needs Go to compile Caddy with custom modules. Caddy v2.11 requires Go 1.26+.
Download and install Go from the official source:
```shell
wget https://go.dev/dl/go1.26.1.linux-amd64.tar.gz
```

WARNING
Always download Go from go.dev — never from third-party sources. Verify you're getting the official binary.

Extract it to /usr/local:

```shell
tar -C /usr/local -xzf go1.26.1.linux-amd64.tar.gz
```

Add Go to the system PATH so it's available for all users:

```shell
nano /etc/profile.d/go.sh
```

Paste the following, then save with Ctrl+X, Y, Enter:

```shell
export PATH=$PATH:/usr/local/go/bin:~/go/bin
```

Load the new PATH:

```shell
source /etc/profile.d/go.sh
```

Verify Go is installed:

```shell
go version
```

You should see go version go1.26.1 linux/amd64. Clean up the download:

```shell
rm go1.26.1.linux-amd64.tar.gz
```

Step 3: Build Caddy with the Cloudflare Module
The Caddy package from apt doesn't include the Cloudflare DNS module. We need to build a custom binary that has it baked in. That's what xcaddy does — it compiles Caddy from source with any additional modules you specify.
Install xcaddy:
```shell
go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
```

Build Caddy with the Cloudflare DNS module:

```shell
~/go/bin/xcaddy build --with github.com/caddy-dns/cloudflare
```

This takes 1-2 minutes. xcaddy downloads the Caddy source, the Cloudflare module from the official caddy-dns GitHub org, compiles everything together, and outputs a caddy binary in the current directory.
NOTE
Only use modules from the official caddy-dns organization or the caddyserver org. This space has its share of malicious packages — stick to known, trusted sources.
Verify the build:
```shell
./caddy version
```

Confirm the Cloudflare module is included:

```shell
./caddy list-modules | grep cloudflare
```

You should see dns.providers.cloudflare in the output.
Step 4: Install the Caddy Package and Replace the Binary
Instead of writing our own systemd service file, we'll install the official Caddy package — it comes with a well-tested service file, the caddy user/group, and proper directory structure. Then we replace its binary with our custom build.
Add the official Caddy apt repository:
```shell
apt-get install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
apt-get update && apt-get install -y caddy
```

Caddy auto-starts after install. Stop it so we can swap the binary:

```shell
systemctl stop caddy
```

Now the key part — use dpkg-divert to prevent future apt upgrade commands from overwriting our custom binary:

```shell
dpkg-divert --divert /usr/bin/caddy.default --rename /usr/bin/caddy
```

This tells apt "the real caddy binary lives at caddy.default now — don't touch the caddy path." Move our custom build into place:

```shell
mv ./caddy /usr/bin/caddy.custom
```

Set up alternatives so you can switch between the custom and default builds:

```shell
update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.default 10
update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.custom 50
```

The custom build gets priority 50 (higher), so it's the active one. Verify:
```shell
caddy version
caddy list-modules | grep cloudflare
```

You should see your custom Caddy version with the Cloudflare module. To switch between builds later: update-alternatives --config caddy.
TIP
When you need to update Caddy in the future, rebuild with xcaddy (xcaddy build --with github.com/caddy-dns/cloudflare), move the new binary to /usr/bin/caddy.custom, and restart: systemctl restart caddy. The dpkg-divert ensures apt never touches your custom binary.
Step 5: Create a Cloudflare API Token
Caddy needs a Cloudflare API token to create the DNS records that Let's Encrypt uses to verify domain ownership. We'll scope this token to the minimum permissions needed.
In the Cloudflare dashboard:
- Go to My Profile > API Tokens > Create Token
- Select Custom Token > Get Started
- Configure:
  - Token name: caddy-dns
  - Permissions: Zone > DNS > Edit
  - Zone Resources: Include > Specific zone > select your domain
- Click Continue to summary > Create Token
- Copy the token immediately — it's shown only once
WARNING
This token can only edit DNS records in the one zone you selected. It can't access other Cloudflare settings, other domains, or your account. That's the principle of least privilege — Caddy gets exactly what it needs and nothing more.
Step 6: Configure the Token in Systemd
Never put API tokens directly in the Caddyfile. We'll store the token in an environment file that only root can read, and tell systemd to pass it to Caddy.
Create the environment file:
```shell
nano /etc/caddy/caddy.env
```

Paste the following (replace with your actual token), then save with Ctrl+X, Y, Enter:

```
CLOUDFLARE_API_TOKEN=your-actual-token-here
```

Lock down permissions so only root can read it:

```shell
chmod 600 /etc/caddy/caddy.env
```
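If you'd rather skip the editor, here is a non-interactive equivalent of the same steps. The token value is a placeholder; substitute your real token:

```shell
# Non-interactive alternative to the nano + chmod steps above.
# 'your-actual-token-here' is a placeholder -- substitute your real token.
mkdir -p /etc/caddy
install -m 600 /dev/null /etc/caddy/caddy.env          # create empty file with 0600 perms
printf 'CLOUDFLARE_API_TOKEN=%s\n' 'your-actual-token-here' > /etc/caddy/caddy.env
stat -c '%a' /etc/caddy/caddy.env                      # prints: 600
```

Using install -m 600 before writing means the file never exists with loose permissions, even for an instant.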
Create a systemd override to load the environment file:

```shell
systemctl edit caddy
```

This opens an editor. Paste the following between the comment lines, then save:

```
[Service]
EnvironmentFile=/etc/caddy/caddy.env
```

Reload systemd to pick up the override:

```shell
systemctl daemon-reload
```

Step 7: Write the Caddyfile
This is the core configuration. Each service gets a site block with its subdomain, the Cloudflare DNS challenge for TLS, and the reverse proxy target. We also add a health check endpoint on port 80 for monitoring.
```shell
nano /etc/caddy/Caddyfile
```

Replace the default contents with the following, then save with Ctrl+X, Y, Enter:

```
:80 {
    respond "Caddy is running" 200
}

uptime.hake.rodeo {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 10.1.20.102:3001
}

pihole.hake.rodeo {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 10.1.20.100
}

pbs.hake.rodeo {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy https://10.1.20.200:8007 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}

pve.hake.rodeo {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy https://10.1.20.10:8006 {
        transport http {
            tls_insecure_skip_verify
        }
        header_up Host {hostport}
    }
}
```

WARNING
Replace the subdomains (*.hake.rodeo) with your own domain. Replace the IP addresses with your service IPs.
A few things to understand about this configuration:
The :80 health check block — responds with "Caddy is running" on port 80 for any request that doesn't match a domain. This gives us a simple health check endpoint for Uptime Kuma at http://10.1.20.101.
HTTP backends (Uptime Kuma, Pi-hole) — just reverse_proxy IP:port. Caddy handles HTTPS on the frontend and connects to the backend over plain HTTP. Simple.
HTTPS backends with self-signed certs (PBS, Proxmox VE) — these services have their own HTTPS with self-signed certificates. The tls_insecure_skip_verify tells Caddy to trust the backend's cert regardless. This is acceptable on a trusted LAN where you control both ends.
The PVE Host header fix — Caddy v2.11 changed how it handles HTTPS upstreams. It automatically rewrites the Host header to match the upstream address, which breaks PVE's redirect logic and WebSocket connections (noVNC consoles). Adding header_up Host {hostport} preserves the original hostname. If you're using an older Caddy version, you may not need this line — but it doesn't hurt to include it.
TIP
This is the per-subdomain approach — each domain gets its own certificate from Let's Encrypt. An alternative is a wildcard certificate (*.hake.rodeo) that covers all subdomains with a single cert. At homelab scale, the practical difference is minimal. Per-subdomain is simpler to set up and what most tutorials use. Wildcard makes fewer API calls and keeps individual subdomains out of certificate transparency logs.
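If you do want to try the wildcard route, here is a minimal sketch of the single-block alternative. It assumes the same Cloudflare token; the matcher name and the abort fallback are conventions, not requirements:

```
*.hake.rodeo {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }

    @uptime host uptime.hake.rodeo
    handle @uptime {
        reverse_proxy 10.1.20.102:3001
    }

    # ...one @matcher/handle pair per service...

    handle {
        abort
    }
}
```

The final handle block rejects any subdomain you haven't explicitly routed, so a stray DNS record can't reach a backend by accident.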
Step 8: Add Pi-hole DNS Records
Every subdomain needs a Pi-hole local DNS record pointing to Caddy's IP. When a device queries pbs.hake.rodeo, Pi-hole returns Caddy's IP (not the service's IP), and Caddy handles the routing.
Exit the Caddy container:
```shell
exit
```

In the Pi-hole web UI (http://10.1.20.100/admin), go to Local DNS > DNS Records. Add a record for each service, all pointing to Caddy's IP:
| Domain | IP |
|---|---|
| uptime.hake.rodeo | 10.1.20.101 |
| pihole.hake.rodeo | 10.1.20.101 |
| pbs.hake.rodeo | 10.1.20.101 |
| pve.hake.rodeo | 10.1.20.101 |
All four records point to 10.1.20.101 — Caddy's IP, not the individual services. Caddy reads the hostname from the request and routes to the correct backend.
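As an alternative to clicking through the web UI, Pi-hole v6 keeps local DNS records in its TOML config file. A hedged sketch — the file path and key name are as of Pi-hole v6, so verify against your installation before editing:

```
# /etc/pihole/pihole.toml (excerpt)
[dns]
  hosts = [
    "10.1.20.101 uptime.hake.rodeo",
    "10.1.20.101 pihole.hake.rodeo",
    "10.1.20.101 pbs.hake.rodeo",
    "10.1.20.101 pve.hake.rodeo"
  ]
```

If you edit the file by hand, restart the resolver afterwards (systemctl restart pihole-FTL) so the records take effect.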
Verify one resolves:
```shell
dig pbs.hake.rodeo @10.1.20.100
```

You should see 10.1.20.101 in the answer section.
Step 9: Start Caddy and Test
Enter the Caddy container and start the service:
```shell
pct enter 101
systemctl start caddy
```

Check that it's running:

```shell
systemctl status caddy --no-pager | head -10
```

Check the logs for certificate issuance:

```shell
journalctl -u caddy --no-pager | tail -20
```

You should see Caddy requesting and obtaining certificates for each domain via the Cloudflare DNS challenge.

```shell
exit
```

Now open a browser and test each service:

- https://uptime.hake.rodeo — Uptime Kuma
- https://pihole.hake.rodeo — Pi-hole admin
- https://pbs.hake.rodeo — PBS login
- https://pve.hake.rodeo — Proxmox VE login
Click the lock icon in the browser — the certificate should be from Let's Encrypt, not self-signed. No warnings, no port numbers, just clean HTTPS.
NOTE
The first request for each domain takes a few seconds while Caddy completes the DNS-01 challenge and obtains the certificate. After that, certs are cached and renewed automatically — you never have to think about them again.
Step 10: Adding a New Service
This is the workflow you'll follow for every service you deploy from now on. It takes about 2 minutes:
1. Add a Caddyfile entry — enter the Caddy container, edit the Caddyfile, copy an existing site block, change the subdomain and upstream IP
2. Add a Pi-hole DNS record — in the Pi-hole web UI, go to Local DNS > DNS Records and add the new subdomain pointing to Caddy's IP (10.1.20.101)
3. Reload Caddy — from inside the Caddy container, apply the new config with zero downtime:

```shell
systemctl reload caddy
```

That's it. The new service is available at https://newservice.hake.rodeo with a valid certificate.
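As a concrete example, suppose the new service is a hypothetical app at 10.1.20.103:8080 (the name, IP, and port are placeholders). Its site block is just a copy of the HTTP-backend pattern from Step 7:

```
newservice.hake.rodeo {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 10.1.20.103:8080
}
```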
Step 11: Reduce Container RAM
Now that Caddy is built, reduce the container's RAM. The xcaddy build needed 2 GB, but running Caddy uses almost nothing.
```shell
pct set 101 --memory 512
```

This takes effect on the next container restart. Caddy runs comfortably on 512 MB.
Step 12: Add Monitoring and Backup
Uptime Kuma Monitors
Set up two monitors for Caddy in Uptime Kuma:
Ping monitor — is the container alive?
- Type: Ping
- Name: Caddy Ping
- Hostname: your Caddy IP (e.g., 10.1.20.101)
- Interval: 60 seconds
HTTP monitor — is Caddy responding?
- Type: HTTP(s)
- Name: Caddy HTTP
- URL: http://10.1.20.101
- Expected status: 200
- Interval: 60 seconds
The HTTP monitor hits the :80 health check block we added to the Caddyfile. If Caddy is running, it returns "Caddy is running" with a 200. If the container is up but Caddy crashed, the HTTP check fails while ping still passes — so you know exactly what's wrong.
PBS Backup
Add CT 101 to your PBS backup job. In the PVE web UI, go to Datacenter > Backup, edit the backup job, and add CT 101 to the selection.
Troubleshooting
xcaddy Build Fails with OOM
The build process needs ~1.5-2 GB of RAM. If the container has less than 2048 MB, the build gets OOM-killed. Increase RAM with pct set 101 --memory 2048, reboot, and try again.
"Invalid request headers" During Certificate Issuance
The Cloudflare API token isn't reaching Caddy. Verify: is /etc/caddy/caddy.env readable? Does the systemd override exist? Run systemctl cat caddy to check the full service configuration including overrides.
PVE Redirect Loop or Broken Console
Caddy v2.11 auto-rewrites the Host header for HTTPS upstreams. Add header_up Host {hostport} to the PVE reverse proxy block. Without it, PVE's redirects and noVNC WebSocket connections break.
Certificate Not Trusted in Browser
If Caddy issues an internal cert instead of a Let's Encrypt one, the DNS-01 challenge failed. Check journalctl -u caddy for ACME errors. Common causes: wrong API token, token doesn't have Zone > DNS > Edit permission, or the domain in the Caddyfile doesn't match the zone in the token. Adding resolvers 1.1.1.1 inside the tls block can help with DNS propagation delays.
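For reference, resolvers sits inside the tls block next to the DNS provider line. A sketch for the PBS site block from Step 7:

```
pbs.hake.rodeo {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
        resolvers 1.1.1.1
    }
    # reverse_proxy block unchanged
}
```

This makes Caddy query Cloudflare's public resolver directly when checking that the challenge TXT record has propagated, bypassing your local Pi-hole.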
apt upgrade Overwrites Custom Binary
If you skipped the dpkg-divert step, an apt upgrade will replace your custom binary with stock Caddy (no Cloudflare module). Re-run the divert and alternatives setup from Step 4, or rebuild with xcaddy.
Caddy Won't Start — Port 80/443 in Use
Something else is binding port 80 or 443. In a dedicated Caddy container this shouldn't happen. Check with ss -tlnp | grep -E ':80|:443'.
Summary
Every service on your network now has HTTPS with valid Let's Encrypt certificates:
- Caddy v2.11 with the Cloudflare DNS module, built via xcaddy
- DNS-01 challenges — no ports exposed to the internet
- Per-subdomain certificates — each service gets its own cert, automatically renewed
- Four services proxied — PBS, Uptime Kuma, Pi-hole, and Proxmox VE
- Health check endpoint on port 80 for monitoring
- 2-minute workflow for adding new services: Caddyfile entry + Pi-hole DNS record + reload
With this tutorial complete, you now have all four foundational homelab services running:
| Service | What it does | Tutorial |
|---|---|---|
| PBS | Automated backups | Proxmox Backup Server 4.1 Setup Guide |
| Pi-hole + Unbound | DNS filtering + recursive resolution | Pi-hole v6 + Unbound on Proxmox |
| Caddy | HTTPS reverse proxy | This guide |
| Uptime Kuma | Uptime monitoring | Install & First Monitors |
Every future service you deploy follows the standard post-deploy workflow: deploy the service, add a Caddy route + Pi-hole DNS record, add an Uptime Kuma monitor, add to the PBS backup job. Infrastructure that runs itself.
Related Products
Crucial P3 Plus 4TB NVMe PCIe Gen4 SSD
GMKtec K12 Mini PC (Ryzen 7 H 255, 32GB)
SanDisk Ultra 64GB USB 3.0 Flash Drive (2-Pack)
GMKtec EVO X2 (AI MAX+ 395, 128GB)
Some links are affiliate links. I may earn a small commission at no extra cost to you.