How I implemented continuous delivery with GitHub Actions
Why use CI/CD?
I created this blog in a few hours using the incredible Astro framework. However, I didn’t initially set up a pipeline to automatically deploy changes.
Continuous integration (CI) is the practice of using a shared repository (such as GitHub, GitLab, or a self-hosted Git server) to commit changes on a regular basis. Merging little and often reduces the risk of merge conflicts.
Continuous deployment (or delivery) (CD) is the automated release of changes to production. A push to the main branch can trigger a workflow that does whatever you need, such as linting and testing your code in the cloud before deploying it.
In this post, I’m going to show you how I set up a relatively straightforward deployment pipeline for this blog, which I’ve since used to push this post!
If you have questions, you can find me on LinkedIn.
Assumptions
Before we get started, I’m making some assumptions:
- You already have an application set up and pushed to GitHub
- You have Docker installed
- You know the basic commands to run your app in production mode, even locally
- You have somewhere to deploy your app to, either locally or on a virtual private server (VPS), and can access it via SSH
- You have nginx, Apache, or some kind of reverse proxy to route traffic to your new site (only required if you need it to be publicly visible)
Step-by-step process
Create a Dockerfile
From the root of your repository, check out a new Git branch and create a file called Dockerfile. Note that there is no file extension.
The Dockerfile holds the steps to build your image and run your app. Sometimes this is all you need, but later we’ll also use a docker-compose.yml to orchestrate the build and manage some additional configuration.
Here’s an example of a Dockerfile. You’ll need to customise this for your needs, with the important parts being:
- FROM selects the base image; in short, ensure that you’re building the image to match the architecture of your VPS (or wherever you’re deploying to)
- WORKDIR is the directory where your application will run within the Docker container; the exact path isn’t too important
- COPY is used to bring in any artefacts you may need; think about package.json, requirements.txt, and any application code you’ll need
- RUN can be used to install your dependencies and build the app, shown below with npm install and npm run build
- EXPOSE <port> will make the port on the Docker container available to the network that we’ll create later; ensure that you expose the same port that your app runs on!
- CMD [<args>] is where you pick the final command that sets your container running, which should launch your application with any necessary arguments (the equivalent of running serve -s dist -l 5173)
# Use the official Node.js image as the base image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package.json and package-lock.json
COPY frontend/package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the app
COPY frontend .
# Build the Vite project
RUN npm run build
# Install a web server to serve the production build
RUN npm install -g serve
# Expose the port the app will run on
EXPOSE 5173
# Serve the production build
CMD ["serve", "-s", "dist", "-l", "5173"]
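If you’d like to sanity-check the image on its own before we introduce Compose, you can build and run it directly from the repository root. This is optional, and the image tag personal-website is just an arbitrary example:

docker build -t personal-website .
docker run --rm -p 5173:5173 personal-website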
Once you’ve customised your script, you can move on.
Create a docker-compose.yml file
From the root of your repository, check out a new Git branch and create a file called docker-compose.yml.
This allows you to specify any environment variables. You can also link services together: for example, you might have a Postgres database, a Redis cache, a frontend, and a backend all held together by this file.
Here’s an example. Remember the following:
- Under services, give your service a useful name
- Your build directs Docker to the Dockerfile, but some images (like Postgres) won’t need this
- Ensure you’re mapping the ports correctly and matching what is in your Dockerfile
- We’ll come to Docker networks later, but ensure that they are internally consistent (you should be able to spot both instances below)
services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5173:5173"
    environment:
      # Reference the backend service by its name in the same network
      - VITE_BACKEND_URL=https://web:8000
    networks:
      - personal-inventory-app

networks:
  personal-inventory-app:
    external: true
Once you’re happy, we can move on.
Test the Docker setup
Don’t forget to commit and push the Dockerfile and docker-compose.yml to GitHub before proceeding.
Navigate to the root of your repository on your local machine and run docker compose up (older installations may use the hyphenated docker-compose). The image should build, but you may see this error:
network <network> declared as external, but could not be found
This is to be expected; we set the network to external in docker-compose.yml, which means that Docker expects the network to already exist.
Let’s run the command to create the network locally, using the same name that we declared in docker-compose.yml:
docker network create personal-inventory-app
Now run docker compose up again and your container should start! Browse to the port and you should see your application running. You can run docker compose down to shut it down.
If your application doesn’t start, it could be that the port is in use, or you’re missing some configuration. You’ll have to troubleshoot that before going further.
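If you suspect a port clash, here are a couple of quick checks. This assumes a Unix-like system and the service name frontend from the Compose example above:

lsof -i :5173                 # show which process is holding the port
docker compose logs frontend  # inspect the container's startup output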
Create public and private SSH keys for access to the VPS
We’ll now create SSH keys that GitHub Actions can use to access your VPS. Once we’ve completed all of the steps, Actions will SSH in, git pull your repository, and then build the image and start the container for you.
Here is GitHub’s own guide to generating an SSH public/private key pair. I’d recommend following this as it’s going to remain more up-to-date than this post.
SSH into your VPS and run this:
ssh-keygen -t ed25519 -C "[email protected]"
Follow the steps to generate the key. Note that the deployment workflow we’ll write runs non-interactively, so it has no way to enter a passphrase; for this use case you’ll likely want to leave the passphrase empty. Make sure that you have your full private key ready for the next step (hint: it’s the one without .pub at the end).
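One step that’s easy to miss: the public half of the key needs to be in the deployment user’s authorized_keys file on the VPS, otherwise GitHub Actions won’t be able to authenticate. Assuming the default OpenSSH paths:

cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys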
Here is a nice video on public and private keys, if you’re interested.
Set up GitHub secrets
Visit your GitHub repository and go to the Settings tab. On the left-hand side, under Security, look for Secrets and variables and then Actions. On the Actions page, choose New repository secret.
Let’s set up the secrets you’ll need to deploy to the VPS. Here’s an example of how it might look; note that it’s recommended to create a separate deployment user rather than using root (shown here for simplicity).
Name | Secret |
---|---|
DEPLOY_KEY | <the private SSH key you created earlier> |
VPS_USER | root |
VPS_HOST | 78.23.212.130 |
Create a GitHub Actions workflow
From the root of your repository, check out a new Git branch and:
- Create a .github folder
- In the .github folder, create a workflows folder
- In the workflows folder, create a deploy.yml file
Using the code below for inspiration, create a script that manages your deployment.
- Give your script a useful name
- Ensure the correct branch name(s) are shown
- We use the helpful actions/checkout@v4, which will check out the branch
- The SSH key is then created using the GitHub secrets you have provided
- Finally, the deployment script is run:
  - SSH into the VPS
  - Navigate to the relevant folder; ensure that cd ~/personal-website is updated to reflect the root of where your repository will sit
  - Pull the latest changes; check that the branch name matches
  - Restart the Docker container using the --build flag, which rebuilds the image
name: Deploy to VPS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H ${{ secrets.VPS_HOST }} >> ~/.ssh/known_hosts

      - name: Deploy via SSH
        run: |
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} << 'EOF'
          set -e
          cd ~/personal-website
          git pull origin main
          docker compose down
          docker compose up --build -d
          EOF
Don’t forget to commit and push these changes to GitHub for them to take effect.
Clone the repository to the VPS
The GitHub Actions script will run git pull on the VPS, but the repository will need to be cloned first.
SSH into the VPS and navigate to the parent folder of where the repository will sit. Remember that this should match the path in your deploy.yml (~/personal-website in the example above).
Perform a git clone <GitHub SSH location>, for example:
git clone [email protected]:parradam/personal-website.git
And don’t forget to set up any environment variables on your VPS! If you have a .env file, it typically isn’t committed to Git, so you’ll likely need to recreate it on the VPS and ensure that it contains your production settings.
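Here’s a purely illustrative sketch. VITE_BACKEND_URL comes from the Compose example above; your app will have its own variables and values:

cd ~/personal-website
# Illustrative values only: substitute your app's real production settings
cat > .env <<'EOF'
VITE_BACKEND_URL=https://api.example.com
EOF
chmod 600 .env  # keep production settings readable only by this user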
Start up the Docker container on the VPS
Ensure you are in the root of the repository on the VPS, alongside your Docker files, and run docker compose up -d. This will build the image and run it on the VPS. The -d flag runs the containers in detached mode, so your terminal is free to work on other things.
You can check in using docker ps, where you’ll see the uptime for the container.
Add the container to the network on the VPS
Ensure that your Docker container is attached to the network we created earlier. If your service declares the network in docker-compose.yml (as in the example above), Compose attaches it automatically; for containers started outside that file, you can do this with docker network connect <network> <container>. If you don’t know the container, use docker ps to find it (assuming it’s running).
Confirm that the container has been attached using docker network inspect <network>; note that docker network ls only lists the networks themselves.
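For example, to list just the names of the attached containers (using the network name from earlier; the --format template is one way to trim the JSON output):

docker network inspect personal-inventory-app --format '{{range .Containers}}{{.Name}} {{end}}'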
Set up the reverse proxy on the VPS
I’m assuming that you have nginx installed as your reverse proxy; in my case it runs in its own Docker container. This part is where you direct web traffic to your container.
I won’t dwell on this as the setup will vary, but remember these points:
- If you have a domain, you’ll want to set that up. You can run without one if necessary, but for a domain you’ll want to set up the DNS records (CNAME and others)
- You should run using HTTPS. This should be handled by your reverse proxy (e.g. nginx), and you should force SSL. Your container can run with HTTP as long as the reverse proxy serves HTTPS
- If your reverse proxy is running in a Docker container as well, ensure that the container is in the network you’ve already defined. You can do this with docker network connect <network> <container>, and confirm it with docker network inspect <network>
- If your reverse proxy isn’t running in Docker, you should be able to use localhost:<port> to access the container. Confirm this with curl http://localhost:<port> and check that you get a response (ping can’t test a specific port)
- You need to forward requests to the container’s service name, which you should have from docker-compose.yml; there’s a sketch of this after the list
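As a rough sketch, here’s what a minimal nginx server block for this setup might look like. It assumes nginx runs in a container on the shared Docker network, that the Compose service is named frontend on port 5173, and that certificates are already in place; the domain and certificate paths are placeholders:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # "frontend" resolves via Docker's embedded DNS on the shared network
        proxy_pass http://frontend:5173;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}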
Perform a test push to trigger the GitHub Actions workflow
Once this has completed, you should be able to push to the branch(es) in your deploy.yml file, and monitor the Actions workflow.
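If you don’t have a code change handy, an empty commit is a low-risk way to trigger the workflow (assuming main is your deployment branch):

git commit --allow-empty -m "Test deployment pipeline"
git push origin main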
![[actions-workflow.png]]
You should be able to monitor the workflow by choosing one of the runs, followed by the name of the job (deploy in this case).
![[actions-workflow-job.png]]
Hopefully, you’ll be able to see the workflow in real-time, with green ticks! If you get a failure, you’ll need to investigate why.
Common issues with deployment scripts are:
- Incorrect GitHub secrets (typos, trailing spaces)
- The VPS is offline
- There is a conflict (e.g. you have rebased, or made changes to the Git repository on the VPS); in which case, be prepared to revert or stash those changes, as shown below
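If you do find local changes on the VPS, here are two options, run from the repository root there. Note that git reset --hard discards the changes permanently:

cd ~/personal-website
git stash        # set local changes aside, recoverable later
# Or, to discard them and match the remote exactly:
git fetch origin
git reset --hard origin/main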
Useful troubleshooting commands
If you need to troubleshoot, refer to the Docker documentation. Here are some commands that you might find helpful.
Command | Explanation |
---|---|
docker ps | List all running containers |
docker network ls | List all networks |
docker network inspect <network> | Inspect a network (use the network name or ID) |
docker network connect <network> <container> | Add a container to a network (use the names or IDs) |
docker exec -it <container_id> sh | Open a shell within the Docker container, allowing you to run commands (for example, browsing the file system or pinging another container) |
Conclusion
Although I’ve set up a few of these before, it was a nice refresher to go from start to finish in a couple of hours.
I already have most of the local setup in place to ensure code quality:
- Pre-commit hooks
- Linting, type checking, and formatting checks
- Unit and integration tests
My next step will be to set up a proper workflow to lint and test future projects prior to deployment.