Putting it all together - Podman WordPress hosting

Time to hang the mission accomplished banners!

This site is 100% powered by Podman containers!

Just some guy on the internet.

It's been a long, hard road, but we made it! Around 2 months ago, way back at the end of May, I said that I was going to migrate this site, along with the others I host, into containers using the Podman container engine.

As of now (2 weeks ago, really) that work is done. Every part of this site has been shoehorned into containers. However, for better or for worse, and much like that fateful mission accomplished speech on the USS Abraham Lincoln, it turns out that I've still got a lot of work to do before this project is finished... Honestly, it will always be a work in progress, but let's not crush my hopes so early on in this post.

For those who haven't been following along (I know... it's also hard for me to believe that there are people who haven't been awaiting this post with bated breath), here are some links to help you get caught up if you are curious about the run-up to this post.

Some catch-up reading on what I've been doing:

Containers are not easy

If you listen to the people who live and breathe containers, you could understandably begin to think that anything you might ever want is just one Dockerfile away from coming into existence. Many of the container evangelists out there would have you believe that you could run the next Google or Facebook with just two DevOps people and Docker Hub. I do not share that sentiment. And even though that might not be an altogether fair representation of the attitude of the larger container world, I think it's an impression many of the more devoted Docker fans leave in their wake.

I came into this not knowing a whole lot about containers. I was a little put off by them - I've said many times that I don't understand what problem I'm supposed to be solving with containers. But I wanted to learn, so I figured there was no better place to do some resume building than my own blog.

I've come to have an appreciation for what you can do with containers. The portability, and the flexibility of being able to swap in a new container or roll back to an older one if something doesn't work quite right, make containers awesome. But they add a lot of management overhead, and you need a lot of knowledge, time, and experience to get it all working - especially if you want to get it working without turning off firewalls and SELinux.

Without further ado here is the full configuration for how this site is running on Podman. I welcome any feedback you might have.

The WordPress Container(s)

General overview

For my purposes I ended up with three container images, all based on the Fedora 32 base image.

  1. MariaDB - for data storage
  2. Apache - web server
  3. Nginx - reverse proxy

One of my goals was to run the entire operation with non-root containers. I was able to accomplish that with MariaDB and Apache. However, with the Nginx reverse proxy, I needed to make a rootful container in order to get the remote IP address of users hitting the proxy and pass it on to the Apache web server. If anyone has any suggestions on how to get around that, I'm all ears. I may end up building a micro-tier EC2 instance as a dedicated reverse proxy. I'm old, and having all these services on just one server feels icky...

Not much has changed with the database setup since my previous post about setting up MariaDB in Podman, so I'll spare you the details in this post.

NOTE: An issue I ran into after getting everything up and running as a non-root user was hitting a "too many open files" limit - check your open file limits. I ended up adding the following to my /etc/security/limits.conf file:

username     soft    nofile          10240
username     hard    nofile          10240
username     soft    memlock         unlimited
username     hard    memlock         unlimited
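After editing limits.conf, log out and back in as the container user and confirm the new limits actually took effect. A quick sanity check:

```shell
# Current per-process limits for this shell session:
ulimit -Sn   # soft open-file limit
ulimit -Hn   # hard open-file limit
ulimit -a    # full listing, including max locked memory (memlock)
```

If the numbers don't match what you put in limits.conf, make sure you started a fresh login session rather than reusing an old one.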

The reverse proxy - with Let's Encrypt


If you want to read about some of the struggles I ran into while building out the reverse proxy configuration, see my last post. Here I will just be sharing the finalized configuration.

As mentioned previously, this is the only container that I'm running as root. The configuration will work - you can reverse proxy and all that as non-root - but for whatever reason I was not able to pass the real IP of clients back to the Apache web server as non-root, nor could I get the real IP in the Nginx logs. So if you don't care about either of those things, feel free to run this as non-root.

Also, as I mentioned earlier, if you know a way that I can change this over to a non-root container and still get the real IPs, please let me know.

Nginx Containerfile

FROM registry.fedoraproject.org/fedora:32

RUN dnf -y upgrade && dnf -y install nginx certbot python3-certbot-nginx python3-certbot-dns-route53 && dnf clean all
RUN systemctl enable nginx
RUN systemctl enable certbot-renew.timer
RUN systemctl disable systemd-update-utmp.service
EXPOSE 80 443
CMD ["/sbin/init"]

Nginx reverse proxy systemd unit file

[Unit]
Description=Podman container - reverse_proxy

[Service]
ExecStart=/usr/bin/podman run -i --rm -v /srv/proxy/config/nginx/:/etc/nginx/conf.d/:Z -v /srv/proxy/config/certbot/letsencrypt/:/etc/letsencrypt/:Z -v /srv/proxy/data/logs:/var/log/nginx:Z --name nginx --ip -p 80:80 -p 443:443 --hostname nginx /sbin/init
ExecStop=/usr/bin/podman stop -t 3 nginx
ExecStopPost=/usr/bin/podman rm -f nginx

NOTE: The image repo referenced here is a private repo; you will want to change it to one that you control.

When you run a Podman container with systemd, you want to run it in "interactive" mode, that is, with the -i flag.

Nginx directory structure

├── config
│   ├── certbot
│   │   └── letsencrypt
│   │       ├── accounts
│   │       ├── csr
│   │       ├── keys
│   │       ├── live
│   │       │   ├──
│   │       ├── renewal
│   │       └── renewal-hooks
│   │           ├── deploy
│   │           ├── post
│   │           └── pre
│   └── nginx
│       └── sudoedit.conf
└── data
    ├── certs
    │   └── sudoedit
    └── logs

A few things to point out here.

I cheated to get certbot up and running.

Instead of coming up with a fancy way to get a new cert on port 80 and then doing some back-bending to get it into a persistent volume, I just did an rsync -av /etc/letsencrypt /srv/proxy/config/certbot and then exec'd into the container once it was running to execute a renewal... I could do this because I was already using Let's Encrypt, and I was too lazy to set it up a second time. I know that this violates some sacred dogma... but it was a one-time thing, and I don't care.
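Spelled out, the cheat was just two steps, run on the host as root (the container name nginx comes from the unit file; the --dry-run pass is my suggested extra safety check, not something I strictly needed):

```shell
# One-time migration of the existing Let's Encrypt state into the
# persistent volume the container mounts at /etc/letsencrypt:
rsync -av /etc/letsencrypt /srv/proxy/config/certbot

# Once the container is up, run a renewal from inside it.
# A dry run first confirms the nginx plugin is wired up correctly.
podman exec -it nginx certbot renew --dry-run
podman exec -it nginx certbot renew
```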

To do this the way I did it, you have to change a couple of lines in the file /etc/letsencrypt/renewal/<>.

Open /etc/letsencrypt/renewal/<> in your favorite text editor and look for two lines: authenticator and installer. If you were previously using Apache, change both values to nginx.
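Since it's just two key changes, it can also be scripted. Here's the edit demonstrated on a scratch copy of a renewal file (the real file is the one under /etc/letsencrypt/renewal/):

```shell
# Demonstrate swapping the certbot plugin from apache to nginx
# in a scratch copy of a renewal config.
conf=$(mktemp)
printf 'authenticator = apache\ninstaller = apache\n' > "$conf"

# Flip both values from apache to nginx.
sed -i -e 's/^authenticator = apache/authenticator = nginx/' \
       -e 's/^installer = apache/installer = nginx/' "$conf"
cat "$conf"
```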

Also notice I'm keeping my logs in a directory under /srv/proxy/data. If you do not specify a persistent volume for your logs, they will be lost every time you restart the container.

Nginx Configuration

# place in /etc/nginx/conf.d/sudoedit.conf
upstream sudoeditServers {
    server server_ip_address:8080;
}

# HTTP server
# Proxy with no SSL

server {
    listen 80;

    rewrite ^(/.well-known/acme-challenge/.*) $1 break; # managed by Certbot
    location = /.well-known/acme-challenge/ { } # managed by Certbot

    return 301 https://$host$request_uri;
}

# HTTPS server
# Proxy with SSL

server {
    listen 443 ssl;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://sudoeditServers$request_uri;
        deny <ip_address_of_bad_people>;
    }

    ssl_certificate      /etc/letsencrypt/live/;
    ssl_certificate_key  /etc/letsencrypt/live/;
    include /etc/letsencrypt/options-ssl-nginx.conf;
}

This Nginx configuration redirects all traffic to HTTPS. HTTPS is free, so I don't see any reason not to use it; if your web host charges extra for it, you should find a new hosting provider.

Apache webserver backend


Apache Containerfile

Again, a lot of this was covered in my last post. If you have questions about the choices I made here, I would encourage you to read it, as it goes into some depth explaining the problems I ran into and why I chose to solve them the way I did.

FROM registry.fedoraproject.org/fedora:32

RUN dnf -y upgrade && dnf -y install httpd php-intl php-opcache php-soap php-sodium php-json php-pecl-zip php-xml php-mbstring php php-xmlrpc php-gd php-pecl-imagick php-bcmath php-pecl-mcrypt php-pdo php-cli php-fpm php-mysqlnd php-process php-common mod_fcgid && dnf clean all
RUN systemctl enable httpd php-fpm
RUN systemctl disable systemd-update-utmp.service
EXPOSE 80
CMD ["/sbin/init"]

Systemd unit file for the Apache backend

This is a pretty standard systemd unit file. The major difference from the last one is that in the [Service] section I define the user that I want the service to run as (via the User= directive). For my purposes I wanted this to be a user that doesn't have sudo privileges on the server, and ideally it would be a user with /sbin/nologin defined in /etc/passwd. I've done some limited testing and that does seem to work; however, I still like to switch to the Podman user and do some tinkering. As I get more comfortable managing the containers and have more of this pulled into Ansible, I imagine the need to do that sort of thing will fade out.

[Unit]
Description=Podman container -

[Service]
ExecStart=/usr/bin/podman run -i --rm --ip -p 8080:80 --name --read-only --read-only-tmpfs=true -v /srv/sudoedit/config/httpd.conf:/etc/httpd/conf/httpd.conf -v /srv/sudoedit/config/php.conf:/etc/httpd/conf.d/php.conf:ro -v /srv/sudoedit/config/sudoedit.conf:/etc/httpd/conf.d/sudoedit.conf:ro -v /srv/sudoedit/config/wp-config.php:/var/www/sudoedit/wp-config.php:Z -v /srv/sudoedit/web/:/var/www/sudoedit/public_html/:Z -v /srv/sudoedit/logs/:/var/log/httpd:Z -v /srv/sudoedit/logs/:/var/log/php-fpm/:Z --hostname /sbin/init
ExecStop=/usr/bin/podman stop -t 3
ExecStopPost=/usr/bin/podman rm -f


Website container directory tree

For the persistent volumes, I chose a directory structure as outlined below. Each site is kept at the root of /srv. I then break everything down into config, data, logs, and web directories.

├── config
├── data
│   └── db
├── logs
│   ├── journal
│   └── private
└── web
    ├── wp-admin
    ├── wp-content
    └── wp-includes

  1. Config contains the httpd files, virtual host files, and any custom modules.
  2. Data, in this case, just holds the database.
  3. Logs contains the web server logs.
  4. Web holds the WordPress application files.

Some of this could probably be combined. For example, logs and web could both be child directories of data. But I wanted a clear and quick way to know which directories hold which data, and this made the most sense to me. You might make other decisions about how you like your directory tree. The important thing is to separate each piece in a way that makes sense for you, so that in 6 months, when you look at it again, you know where to find everything.
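A quick way to stamp out this skeleton for a new site. This is only a sketch - it builds the tree under a scratch directory so it can run unprivileged; point BASE at /srv/<sitename> (as root) for real use:

```shell
# Create the per-site volume layout described above.
# BASE defaults to a throwaway temp directory for demonstration.
BASE="${BASE:-$(mktemp -d)/sudoedit}"
mkdir -p "$BASE"/config \
         "$BASE"/data/db \
         "$BASE"/logs/journal "$BASE"/logs/private \
         "$BASE"/web/wp-admin "$BASE"/web/wp-content "$BASE"/web/wp-includes
find "$BASE" -type d | sort
```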

Changes to the httpd.conf file

In order to get the real IP address of the users visiting your site, you will need to add these lines to your httpd configuration.

Include conf.modules.d/*.conf
RemoteIPHeader X-Forwarded-For
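A hardening step worth considering (my addition; the proxy address below is a placeholder): with only RemoteIPHeader set, mod_remoteip will believe an X-Forwarded-For header from any client, so visitors can spoof their logged address. RemoteIPInternalProxy restricts the substitution to requests that actually arrive from your proxy:

```apache
RemoteIPHeader X-Forwarded-For
# Only trust X-Forwarded-For when the request comes from our reverse proxy.
RemoteIPInternalProxy 203.0.113.10
```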

Virtual host file

This is a fairly standard vhost file. The only thing to note is that it's only listening on port 80. Since the Nginx proxy is handling SSL/TLS for us we don't need to have this site listening on tcp/443 in the container.

<VirtualHost *:80>

    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/sudoedit/public_html

    ErrorLog /var/log/httpd/sudoedit_error_log
    CustomLog /var/log/httpd/sudoedit_access_log combined

</VirtualHost>

WordPress wp-config.php

Add this bit of code to the top of your wp-config.php file if you are using Jetpack for backups. See this knowledge base article for more details.

if ( !empty( $_SERVER['HTTP_X_FORWARDED_FOR'] ) ) {
    $forwarded_ips = explode( ',', $_SERVER['HTTP_X_FORWARDED_FOR'] );
    $_SERVER['REMOTE_ADDR'] = $forwarded_ips[0];
    unset( $forwarded_ips );
}
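
For context on what that snippet does: X-Forwarded-For can carry a whole chain of addresses (the client, then each proxy that forwarded the request), and the code keeps only the left-most entry. The same extraction, illustrated in shell with a made-up header value:

```shell
# An X-Forwarded-For header as it might arrive through two proxies.
xff="203.0.113.7, 10.0.0.5, 10.0.0.9"
# Keep only the left-most (original client) address, as the PHP does.
client_ip="${xff%%,*}"
echo "$client_ip"   # prints 203.0.113.7
```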

If you end up having issues with CSS or JavaScript not loading, you may need to add this code snippet to your wp-config.php file as well. Make sure you add it above the line that reads: /* That's all, stop editing! Happy publishing. */.

define('FORCE_SSL_ADMIN', true);
define('FORCE_SSL_LOGIN', true);
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    $_SERVER['HTTPS'] = 'on';
/* That's all, stop editing! Happy publishing. */

Phew... that was a lot! But after running through these steps you should have a mostly working WordPress site that is completely running on Podman! If you have a dev environment, you should definitely take advantage of it so that you can work out any of the unique issues that might pertain to your site. I'm too cheap to have that extra infrastructure, so I just took some extended (planned and unplanned) downtime while I figured it out. If you run into something you can't quite figure out, drop me a line and I'll be glad to offer any suggestions I might have.

Final thoughts, containers are simple - but not easy

I learned a lot in this process, and I'm moving into the next stage of this project knowing just a little bit more than the nothing I started with. Getting to this point was not easy, even though the concept by itself is simple enough.

What do I mean by simple but not easy?

What makes containers simple?

What makes containers not easy?


Basically, the third point in the above list is why containers are simple but not easy. The idea is simple; the execution is hard. In order to put an application into a container, you have to understand how the application works, at least at a high level. You need to know things like:

  1. What network ports should be listening?
  2. Where does my app keep its configuration files?
    1. Do these files change often?
  3. What other systems does it interact with?
    1. Are those "systems" other apps on the same server? e.g., a WordPress database
    2. If so, how do they communicate?
  4. Where is its data stored?

All that really just scratches the surface. Under the covers, you need to understand how the OS deals with container runtimes, how systemd will manage the services, how UIDs and GIDs are mapped, and how SELinux works. Then, if you are working with an application that was not designed to be "cloud-native" or container-ready or whatever, you have to figure out how to separate each of those pieces yourself.

Moving an application into containers also means that you have to plan your changes in advance (you should be doing that anyway), which is not always easy. Suppose, for instance, you determine that you do not need to mount a volume for your configuration file - you want to put it directly into the container image. That is fine until you need to make a change to that file. While you could exec into the container and make your changes, you have to understand that those changes will be lost the next time the container stops and restarts. Instead, you have to either: A) build a new container image with the corrected configuration file, or B) move that configuration file into a volume that you can manipulate outside the container to ensure the changes are persistent.

While some people will scoff at those problems, they are real problems that people deal with on a daily basis. Changing your workflows among many different groups with different priorities to align with some of these ideals is much easier said than done.

Next Steps

The last piece of this puzzle is to get the web server and database server containers joined into a "pod". I was going to jump into that head first from this point, but I've got a few other things I'm working on and might have to put it on the back burner for a bit. I've got some Ansible tidbits I want to share, and a few other things.


Special thanks to Scott McCarty for his excellent write-up covering his work migrating to Podman.

This project was deeply inspired by that post, and the adventures that led to this point would not have been possible without being able to piggyback off of his work. Thank you.

Also, thanks to Dan Walsh for retweeting some of my posts, and for all the great work you and your team are doing on Podman. In particular, these two blog posts helped me work through several issues.




#Fedora #Podman #Wordpress