Automating Blog Deployment with NGINX Running as Non-Root
The Goal
I had been doing some reading on further hardening the NGINX installation that runs this blog. I wanted to create as secure a base as my knowledge would allow.
I pulled from a few sources:
- This post covers keeping the NGINX binary up to date and a few basic settings for the nginx.conf file, and it also got me thinking about running the service as non-root. https://help.dreamhost.com/hc/en-us/articles/222784068-The-most-important-steps-to-take-to-make-an-nginx-server-more-secure
- I wanted to incorporate Mozilla’s best-practice config for the server and also to allow only TLS 1.3. That is probably overkill, but most people run up-to-date browsers these days. Allowing only TLS 1.3 protects me just as much as it protects users, at the expense of a few people on outdated software being unable to read the blog. The generated config files are available at https://ssl-config.mozilla.org/#server=nginx&version=1.17.7&config=modern&openssl=1.1.1k&guideline=5.7 (the relevant TLS directives are sketched after this list).
- Lastly, I hunted around and found an excellent service file with added hardening that enables NGINX to run as non-root. Kudos to Stephan13360, thanks for your work. https://github.com/stephan13360/systemd-services/tree/master/nginx
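For reference, the protocol restriction in Mozilla's modern profile comes down to a couple of directives. This is a small illustrative excerpt rather than the full generated config, which you should grab from the link above:

ssl_protocols TLSv1.3;          # modern profile: TLS 1.3 only, older protocols refused
ssl_prefer_server_ciphers off;  # TLS 1.3 cipher suites are fixed, so let the client choose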
So now it was time to pull all this together and automate the deployment. I wanted a script that, from VM turn-on, would pull down the blog and service files and deploy them, generating a known state that is easily reproducible. If anything were to happen to this VM, we would be back up and running in as short a time as possible.
The Script
All the code and service files are available at https://github.com/abl030/infra/tree/main/Blog_Deploy as per usual.
The actual set-up script is Blog_Nginx_Script.sh and I’ll run through it and all the assumptions made.
Firstly the script asks for user input and changes the hostname. This is because we have generated a generic VM with our automated install outlined in https://blog.barrett-lennard.com/posts/designing-the-infra/.
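That step amounts to little more than the following (a sketch; the prompt text and variable name are illustrative, not taken from the script):

read -rp "Enter the new hostname: " NEW_HOSTNAME   # ask the user for this VM's name
sudo hostnamectl set-hostname "$NEW_HOSTNAME"      # apply it system-wide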
The script then installs NGINX from the official PPA; this was based on my reading about keeping the NGINX binary up to date. The Ubuntu default repo version is over two years old, so we install the PPA and grab the freshest bits.
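On Ubuntu that looks roughly like the following (the ondrej/nginx PPA is shown as an example; the exact repository the script adds may differ):

sudo add-apt-repository -y ppa:ondrej/nginx   # add a source that tracks current NGINX releases
sudo apt-get update
sudo apt-get install -y nginx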
Next we install our other packages, then grab user input for the site being deployed and our GitHub access token to clone the blog repo. We also clone the infra repo to grab all our service and conf files.
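The cloning step might look something like this (the blog repository path is a placeholder; only the infra repo URL comes from this post):

read -rp "GitHub access token: " GITHUB_TOKEN
# placeholder repository path - substitute the real private blog repo
git clone "https://${GITHUB_TOKEN}@github.com/<user>/<blog-repo>.git"
git clone https://github.com/abl030/infra.git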
Next the script creates a user and a folder to store the site files. This is a holdover from when we ran NGINX as root, but it still serves a purpose. We then copy our Hugo-generated public folder into the folder that will be served as our NGINX root.
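Roughly speaking, that step does something like this (a sketch assuming the $SITE_USER variable used in the conf below; flags and paths are illustrative):

sudo useradd -m -s /usr/sbin/nologin "$SITE_USER"                     # site user with no login shell
sudo cp -r ./public "/home/${SITE_USER}/public"                       # Hugo output becomes the web root
sudo chown -R "$SITE_USER":"$SITE_USER" "/home/${SITE_USER}/public"   # owned by the site user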
Next we have to initialise a very basic HTTP server to enable the ACME challenge. I had significant problems running this with the full conf files, so I just added a super basic conf file and tee it into the right directory. To be extra safe it might be advisable to run this through another reverse proxy, which is how I run it at home; at this stage we have not strictly isolated our NGINX service.
sudo tee /etc/nginx/conf.d/$SITE_DOMAIN.conf > /dev/null <<EOF
server {
    listen 80;
    listen [::]:80;
    server_name ${SITE_URL} www.${SITE_URL};
    root /home/${SITE_USER}/public;

    location / {
    }
}
EOF
Next the script displays the local IP and waits for user confirmation that you have port-forwarded or proxied in the ACME challenge, to allow certbot to do its thing. The challenge is executed with certonly as we are running our own conf files, and certbot does not like NGINX running as non-root when it comes to certificate renewals, so we'll have to handle those ourselves.
sudo certbot certonly --webroot -w /home/www-data/public/ --staple-ocsp --non-interactive --agree-tos --email $EMAIL --expand -d "$SITE_PREFIX.$SITE_DOMAIN,www.$SITE_PREFIX.$SITE_DOMAIN"
sudo ufw allow 443/tcp
Next we copy over our nginx.conf files, as well as the service file that allows NGINX to run as non-root.
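To give an idea of what such a hardened unit contains, a few representative directives are shown below. This is an illustrative excerpt, not a copy of Stephan13360's file, so check the linked repo for the real thing:

[Service]
User=nginx
Group=nginx
# binding to ports 80/443 normally needs root; grant just that capability instead
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=true
ProtectSystem=strict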
The website conf:
server {
    server_name site_prefix.site_domain www.site_prefix.site_domain;
    root /home/www-data/public;

    location / {
    }

    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    ssl_certificate /home/nginx/fullchain.pem;
    ssl_certificate_key /home/nginx/privkey.pem;
    #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    ssl_trusted_certificate /home/nginx/chain.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
}

server {
    if ($host = www.site_prefix.site_domain) {
        return 301 https://$host$request_uri;
    }

    if ($host = site_prefix.site_domain) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    listen [::]:80;
    server_name site_prefix.site_domain www.site_prefix.site_domain;
    return 404;
}
and our two other files: nginx.conf and nginx.service.
Lastly we copy over our generated certs and a script that automatically copies over new certs when they are renewed, and sed is called to substitute the user-inputted domain names into the config files. NGINX is then stopped; it is currently running as root and needs to be stopped, not restarted. We then start NGINX and it's all complete: NGINX is running as the nginx user and serving /public on 443 via TLS 1.3 only, with some best-practice settings. The whole process takes around five minutes from VM creation to this point.
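The sed step is essentially a pair of in-place substitutions against the placeholders shown in the conf above (the file glob here is illustrative):

# swap the placeholder domains in the copied conf for the user-supplied values
sudo sed -i "s/site_prefix/${SITE_PREFIX}/g; s/site_domain/${SITE_DOMAIN}/g" /etc/nginx/conf.d/*.conf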
Some Issues Found
Making NGINX run as non-root was a non-trivial process for someone of my skill level, though with the benefit of hindsight it is not that difficult to do. The biggest hurdle was working out that the service must be stopped before it can be started again. Normally I just restart services with:
sudo systemctl restart nginx
However, this would result in the service panicking with the error “incorrect PID in pid file”. I am assuming this is because the PID file currently in use is owned by root and we are now going to generate a new PID file under the nginx user's UID. In any case, this tripped me up many times in the creation of this script.
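The workaround is simply to stop and start as two separate operations instead of a restart:

sudo systemctl stop nginx     # let the root-owned instance exit and release its PID file
sudo systemctl start nginx    # start fresh under the non-root unit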
Lastly, by running NGINX as non-root we also break certbot's automatic nginx plugin. Thus we need to handle our TLS config ourselves and also write a little script to move over the generated certificates on renewal. This was also non-trivial, but a bit of googling helped me out. The script is deployed to /etc/letsencrypt/renewal-hooks/deploy and is available at https://github.com/abl030/infra/blob/main/Blog_Deploy/deploy_certs.sh
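As a rough idea of what such a deploy hook does (a sketch only; certbot does expose $RENEWED_LINEAGE to deploy hooks, but the paths and ownership here are assumptions based on the conf above, so refer to the linked deploy_certs.sh for the real version):

#!/bin/bash
# copy the freshly renewed certificates somewhere the non-root NGINX can read them
cp "$RENEWED_LINEAGE/fullchain.pem" /home/nginx/fullchain.pem
cp "$RENEWED_LINEAGE/privkey.pem"   /home/nginx/privkey.pem
cp "$RENEWED_LINEAGE/chain.pem"     /home/nginx/chain.pem
chown nginx:nginx /home/nginx/*.pem
# pick up the new certs with a stop/start rather than a restart
systemctl stop nginx
systemctl start nginx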
Conclusion
This was a fun but time-consuming project that took most of a weekend. However, I also look at it as an investment: we now have a good base for deploying any static website, and doing so will take no time at all in the future. This script is easily transportable to a cloud instance such as a DigitalOcean droplet; it will run and deploy the website there as is. That is the great thing about using GitHub to host all the files: we can deploy anywhere.
I also learnt a lot about basic server hardening and some best practices. Obviously there is always more one could do! But for now I feel we've got a good base to work from.