What is this?
This is a kind of 'partner post' to go alongside this detailed n8n configuration post. I wanted to keep the configuration details a bit separated and the 'networking and connecting' stuff on its own here.
At this point, we will assume that you have been able to get the n8n container going with the postgres database, but perhaps you've not yet set things up so you can hit your DuckDNS domain remotely and land on your n8n container. In this post we'll make sure all those details are covered.
Foundational Review
Here is a quick review and some resources:
- Setting up an ubuntu server for a Podman environment
- Some basic configuration comments and a podman-compose file
- Setting up the networking for a multi-container, multi-pod environment
So before we kind of ‘finalize’ this series, we’ll assume at this point:
- You have, as mentioned above, the n8n container going with postgres. You've confirmed this with the podman ps command and neither container has errors (see the quick check just after this list)
- You have registered and have handy a DuckDNS domain and its token. A lot about that and more can be found here and elsewhere.
- You will be running Caddy as a stand-alone container which will operate in a multi-container / multi-pod setup. Note: the remainder of this post is not for running Caddy packaged up with other services which, of course, can be done. You can find that easily or adjust all my stuff to that setup if you want.
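If you want a quick way to confirm that first point before moving on, podman can print just names and statuses; this lists every running container, whatever you happened to name yours:

podman ps --format "{{.Names}}: {{.Status}}"

Both the n8n and postgres containers should report an 'Up ...' status. Anything restarting or exited needs sorting out before Caddy will do you any good.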
In the next few sections we will discuss the Caddy container and Caddyfile part of all this in detail and provide actual working files which you can try and use or adjust according to your needs.
Caddy Directory Comments
Caddy needs a directory for config and data, and in the top level of that directory we'll also put the podman-compose.yaml file for the container to keep things tidy. I'll use a hidden directory so it stays tucked away in $HOME, like this: $HOME/.caddy.
Here is the directory structure to build:
- .caddy/
  - config/
  - data/
That’s it, really.
Inside the .caddy directory we will put the podman-compose.yaml
Inside the config directory we will put the Caddyfile
Here is a quick block of commands you can copy, paste, and run in your $HOME directory if you want the same structure I have:
mkdir .caddy && \
cd .caddy && \
mkdir config && \
mkdir data && \
cd ~
Final Reminder! You won't see your newly-created structure in a plain listing because the leading dot . makes it hidden. To see it, use ls -al instead of just ls.
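If you'd rather do it in one line, this is equivalent (mkdir -p plus brace expansion builds both subdirectories at once), and the trailing ls -al confirms the hidden directory really is there:

mkdir -p $HOME/.caddy/{config,data} && ls -al $HOME/.caddy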
Caddy Compose Yaml Comments
Before copying and pasting in my compose file verbatim, let’s go through it quickly so you can adjust if required.
- Uses an image (docker.io/serfriz/caddy-duckdns:latest) which should automatically create SSL certificates when the container starts. However, I had endless problems getting DuckDNS to behave for that part of the process and ended up basically turning it off and forcing manual TLS certificates. You can try your luck with automated SSL, and great if it works for you. Before even trying that, here is something you can run on your server to see if it has a chance of working; swap in your actual domain and see if you get any errors:
curl http://your-domain.duckdns.org/.well-known/acme-challenge/test
If it fails, proceed with manual certs, which wasn't so bad in my opinion. If you are gung ho, you can also try this script, replacing yourDomain with your domain name (you may need to sudo apt install dnsutils to get dig):
for server in 1.1.1.1 8.8.8.8 9.9.9.9 208.67.222.222; do echo "$server"; dig TXT _acme-challenge.yourDomain.duckdns.org @$server +short; done
I always got a 'timed out' message, which is why I gave up on automated certificates with DuckDNS and moved to manual. As a point of interest, the same check worked perfectly (at least spat back quick responses) with all the non-DuckDNS domains I own.
- You will see Caddy is only part of one network called ‘proxy_net’ and that it is shouted out as follows at the bottom of the config file:
networks:
  proxy_net:
    external: true
Be sure to give this post a quick read to help understand the why and the how; there is also a quick sketch just after this list for creating the network if you haven't already.
- The rest of the config should be pretty clean and simple, with ports 80 and 443 exposed to the fallen world.
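Since external: true means podman-compose expects the network to already exist rather than creating it for you, here is the one-time creation plus a quick check. The name proxy_net matches the compose file below; if you used a different name in the networking post, swap it in:

podman network create proxy_net    # one-time creation; it just errors harmlessly if it already exists
podman network ls                  # confirm proxy_net is listed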
Caddy Podman-Compose.yaml File
Without further ado, here is the compose file that worked for me, which you can do what you want with:
services:
  caddy:
    image: docker.io/serfriz/caddy-duckdns:latest
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
    environment:
      - DUCKDNS_TOKEN=superSecretTokens
    networks:
      - proxy_net
    volumes:
      - $HOME/.caddy/config/Caddyfile:/etc/caddy/Caddyfile:z
      - $HOME/.caddy/data:/data:z
      - $HOME/.caddy/config:/config:z
    restart: unless-stopped

networks:
  proxy_net:
    external: true
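One optional sanity check before spinning anything up: the public DuckDNS update API will tell you straight away whether your domain/token pair is valid. The domain and token below are my placeholders; note that leaving ip= empty also updates the A record to the public IP of whichever machine runs the command, which is normally what you want when you run it on the server itself:

curl "https://www.duckdns.org/update?domains=firstPod&token=superSecretTokens&verbose=true&ip="

A response starting with OK means the pair is good; KO means the domain or token is wrong.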
Caddyfile Comments
Now that we have the compose file ready to go, let’s focus on the Caddyfile which will direct traffic that hits your domain to your appropriate pods. This is, no surprise, one of the most important parts and can be ‘fun’. I’m going to provide my exact Caddyfile that ended up working in my multi-domain, multi-pod environment so you can copy/paste/adjust accordingly. I’ve set ‘firstPod’ as the fake duck domain that points to my first pod, and ‘secondPod’ as the one that points to the second.
You can see the 'tls block', which points to the manually-created SSL certificates that live inside the Caddy container. Important note! The paths here start at /data and not at $HOME/.caddy/ because this is a container environment. In the Caddyfile we point to the 'container view', not the host view.
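A quick way to sanity-check that container view, once the Caddy container is actually running (spin-up comes a couple of sections below), is to list the paths from inside the container. This assumes you kept my container name of caddy:

podman exec caddy ls -R /data/caddy/certificates    # the per-domain certificate directories should appear here
podman exec caddy cat /etc/caddy/Caddyfile          # the host-side Caddyfile should be visible at the mounted path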
- All the certificate directories were pre-created; see this post for more detail on that part (a rough, generic sketch of one manual route also follows this list, but the post has the steps I actually used).
- This Caddyfile is for Nextcloud and n8n running together, pointing in the appropriate ways with the appropriate setups. You can search my blog for those details if you are interested in those two; otherwise, you'll have to adjust the blocks under the tls lines to whatever you need.
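Purely as a rough, generic sketch of one manual route (not necessarily the exact steps from the linked post; the domain and token here are placeholders and certbot needs to be installed): certbot's manual DNS-01 flow prints a TXT value, you publish it through the DuckDNS API, let certbot finish, then copy the resulting pair into the Caddy data directory so the paths in the Caddyfile below line up:

# 1. start a manual DNS-01 request; certbot pauses and prints a TXT value to publish
sudo certbot certonly --manual --preferred-challenges dns -d firstPod.duckdns.org

# 2. in another terminal, publish that value via DuckDNS, then let certbot continue
curl "https://www.duckdns.org/update?domains=firstPod&token=superSecretTokens&txt=VALUE_CERTBOT_PRINTED&verbose=true"

# 3. copy the issued pair into Caddy's data volume so the tls lines below can find them
sudo mkdir -p $HOME/.caddy/data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/firstPod.duckdns.org
sudo cp /etc/letsencrypt/live/firstPod.duckdns.org/fullchain.pem /etc/letsencrypt/live/firstPod.duckdns.org/privkey.pem \
  $HOME/.caddy/data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/firstPod.duckdns.org/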
Caddyfile
firstPod.duckdns.org {
    tls /data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/firstPod.duckdns.org/fullchain.pem /data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/firstPod.duckdns.org/privkey.pem
    reverse_proxy n8n_app:5678
}

secondPod.duckdns.org {
    tls /data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/secondPod.duckdns.org/fullchain.pem /data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/secondPod.duckdns.org/privkey.pem
    reverse_proxy nextcloud_app:80

    header {
        Strict-Transport-Security "max-age=15552000; includeSubDomains; preload"
    }

    encode gzip

    log {
        output file /data/nextcloud-access.log
    }

    # .htaccess / data / config / ... shouldn't be accessible from outside
    @forbidden {
        path /.htaccess /data/* /config/* /db_structure /.xml /README /3rdparty/* /lib/* /templates/* /occ /console.php
    }
    respond @forbidden 404

    redir /.well-known/carddav /remote.php/dav 301
    redir /.well-known/caldav /remote.php/dav 301
}
Create the Caddyfile
Once you have everything ready to copy/paste for testing, open up the file for editing and paste in your Caddyfile contents as follows, using the directory structure of my example:
sudo nano $HOME/.caddy/config/Caddyfile
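If you'd like to catch Caddyfile typos before starting anything, Caddy has a built-in validate command you can run through the same image. This is a sketch assuming the serfriz image behaves like the official Caddy image (no restrictive entrypoint); mounting the data directory too lets the tls lines find their certificate files:

podman run --rm \
  -v $HOME/.caddy/config/Caddyfile:/etc/caddy/Caddyfile:z \
  -v $HOME/.caddy/data:/data:z \
  docker.io/serfriz/caddy-duckdns:latest \
  caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile

It should report the configuration as valid; any parse error will point you at the offending line.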
Spin it up
Well, it’s that time where one must see if stuff works. Go into your .caddy directory and run the container spin up command:
podman-compose up -d
Check if it’s up: podman ps
Now, check the logs to make sure it's looking relatively healthy: podman logs <containername>. You should see 'normal reverse proxy activity'.
Finally, go to the browser and try loading your thing.
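If the browser result is ambiguous, a couple of quick curl checks (from the server or any other machine) will tell you whether Caddy is answering; firstPod is my placeholder domain from the Caddyfile above:

curl -I https://firstPod.duckdns.org     # expect an HTTP status line from your app, not a connection or TLS error
curl -I http://firstPod.duckdns.org      # Caddy normally answers plain HTTP with a redirect to HTTPS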
Farewell
Well, that's it. I hope my journey into the podman abyss has now brought a few folks in. I like what I see so far with Podman. Hope this has helped someone. Proceed to the bonus section below if your pods refuse to resolve to their domains, or to anything else for that matter.
Bonus Section for Resolution Problems | Disabling the Stub Listener in Ubuntu
I don't think you should actually do this unless you are desperate; however, I did end up wasting a bunch of time learning how to do it with AI, so I thought I would throw it at the end for anyone who is pulling their hair out trying to make their containers resolve with DuckDNS and other stuff. Again, don't actually do this unless everything else has failed and unless you are comfortable with the command line and un-doing what you do, haha. What this does is disable systemd-resolved's built-in stub listener, the thing in Ubuntu that listens on port 53 (at 127.0.0.53) and forwards name lookups to their proper places, which can clash with container DNS.
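Before touching anything, it's worth confirming the stub listener really is what's in the way; these are read-only checks that show the current resolver setup and what owns port 53:

resolvectl status | head -n 15     # summary of how systemd-resolved is currently configured
sudo ss -tulpn | grep ':53 '       # an entry on 127.0.0.53:53 is the stub listener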
- Create a directory called 'resolved.conf.d':
sudo mkdir -p /etc/systemd/resolved.conf.d
- Add the '[Resolve]' section header to a new file called 10-podman.conf inside that directory:
echo '[Resolve]' | sudo tee /etc/systemd/resolved.conf.d/10-podman.conf
- Add the line 'DNSStubListener=no' to the same 10-podman.conf file:
echo 'DNSStubListener=no' | sudo tee -a /etc/systemd/resolved.conf.d/10-podman.conf
- Restart the systemd-resolved service:
sudo systemctl restart systemd-resolved
Alternatively, you can do the whole thing in one shot, writing both lines to a single file called nodnsstub.conf instead:
echo -e '[Resolve]\nDNSStubListener=no' | sudo tee /etc/systemd/resolved.conf.d/nodnsstub.conf
sudo systemctl restart systemd-resolved
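And because I promised this is un-doable: after the restart you can confirm the stub listener is gone, and if anything else on the machine gets upset, removing the drop-in file(s) and restarting puts everything back the way it was (file names match the ones used above):

sudo ss -tulpn | grep ':53 '      # nothing should be left listening on 127.0.0.53 now

# to undo: delete whichever drop-in you created, then restart the service
sudo rm -f /etc/systemd/resolved.conf.d/10-podman.conf /etc/systemd/resolved.conf.d/nodnsstub.conf
sudo systemctl restart systemd-resolved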