How I implemented continuous delivery with GitHub Actions

Published on 11 May 2025 by Adam Parr

Why use CI/CD?

I created this blog in a few hours using the incredible Astro framework. However, I didn’t set up a pipeline to automatically deploy changes.

Continuous integration (CI) is the practice of merging changes into a shared repository (such as GitHub, GitLab, or a self-hosted Git server) frequently, often several times a day. Keeping changes small and integrating them often reduces the risk of painful merge conflicts.

Continuous deployment (or delivery) (CD) automates the release of changes to production. A push to the main branch can trigger a workflow that does whatever you need, such as linting and testing your code in the cloud before deploying it.

In this post, I'm going to show you how I set up a relatively straightforward deployment pipeline for this blog, which I've since used to push this very post!

If you have questions, you can find me on LinkedIn.

Assumptions

Before we get started, I'm making some assumptions:

- You have a VPS that you can SSH into, with Docker and Docker Compose installed.
- Your project lives in a GitHub repository.
- You have (or will have) an nginx reverse proxy running in a Docker container on the VPS.
- You're comfortable with Git and the command line.

Step-by-step process

Create a Dockerfile

From the root of your repository, check out a new Git branch and create a file called Dockerfile. Note that there is no file extension.

The Dockerfile holds the steps to build your image and run your app. Sometimes this is all you need, but later we’ll also use a docker-compose.yml to orchestrate the build and manage some additional configuration.

Here's an example of a Dockerfile. You'll need to customise this for your needs; the important parts are the base image, the paths you copy in, the build command, the exposed port, and the final CMD.

# Use the official Node.js image as the base image
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy package.json and package-lock.json
COPY frontend/package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the app
COPY frontend .

# Build the Vite project
RUN npm run build

# Install a web server to serve the production build
RUN npm install -g serve

# Expose the port the app will run on
EXPOSE 5173

# Serve the production build
CMD ["serve", "-s", "dist", "-l", "5173"]

Once you've customised your Dockerfile, you can move on.
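
If you'd like to sanity-check the image on its own before introducing Compose, you can build and run it directly (the image tag here is just an example):

# Build the image from the repository root
docker build -t personal-website-frontend .

# Run it, mapping the exposed port to your machine
docker run --rm -p 5173:5173 personal-website-frontend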

Create a docker-compose.yml file

From the root of your repository, check out a new Git branch and create a file called docker-compose.yml.

This allows you to specify any environment variables. You can also link services together: for example, you might have a Postgres database, a Redis cache, a frontend, and a backend all held together by this file.

Here's an example. Remember the following:

- The service name (frontend here) is how other containers on the same network will reach it.
- The network is declared as external, so Docker expects it to exist already; we'll create it shortly.
- Any environment variables your app needs belong in this file.

services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5173:5173"
    environment:
      # Reference the backend service by its name on the same network
      - VITE_BACKEND_URL=http://web:8000
    networks:
      - personal-inventory-app

networks:
  personal-inventory-app:
    external: true
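
Note that VITE_BACKEND_URL points at a service called web, which isn't defined in the snippet above. In my setup that's the backend; a hypothetical declaration for it (the build context and port are placeholders) would be merged into the same services block:

services:
  # Hypothetical backend service; adjust the context and port to suit
  web:
    build:
      context: .
      dockerfile: backend/Dockerfile
    ports:
      - "8000:8000"
    networks:
      - personal-inventory-app

One caveat: Vite bakes VITE_* variables into the bundle at build time, so a value set here at runtime may not reach an already-built app; you may need to pass it as a build argument instead.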

Once you’re happy, we can move on.

Test the Docker setup

Don’t forget to commit and push the Dockerfile and docker-compose.yml to GitHub before proceeding.

Navigate to the root of your repository on your local machine and run docker-compose up. The image should build but you may see this error:

network <network> declared as external, but could not be found

This is to be expected: we declared the network as external in docker-compose.yml, which means Docker expects it to already exist.

Let's create the network locally. The name must match the external network declared in docker-compose.yml:

docker network create personal-inventory-app

Now run docker-compose up again and your container should start! Browse to the port and you should see your application running. You can run docker-compose down to shut it down.

If your application doesn't start, it could be that the port is already in use, or that you're missing some configuration. You'll have to troubleshoot that before going further.
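
For example, you can check whether another process is already bound to the port (lsof is available on most Linux and macOS systems):

lsof -i :5173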

Create public and private SSH keys for access to the VPS

We’ll now create SSH keys that GitHub Actions can use to access your VPS. Once we’ve completed all of the steps, Actions will SSH in, git pull your repository, and then build the image and start the container for you.

Here is GitHub’s own guide to generating an SSH public/private key pair. I’d recommend following this as it’s going to remain more up-to-date than this post.

SSH into your VPS and run this:

ssh-keygen -t ed25519 -C "your_email@example.com"

Follow the prompts to generate the key pair. Since GitHub Actions will use this key non-interactively, it's simplest to leave the passphrase empty. You'll also need to append the public key (the one ending in .pub) to ~/.ssh/authorized_keys on the VPS so that logins with this key are accepted, and make sure you have your full private key ready for the next step (hint: it's the one without .pub at the end).
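
In practice, that means something like this on the VPS (paths assume the default key location):

# Authorise the new key for SSH logins to this server
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Print the private key so you can copy it into a GitHub secret
cat ~/.ssh/id_ed25519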

Here is a nice video on public and private keys, if you’re interested.

Set up GitHub secrets

Visit your GitHub repository and go to the Settings tab. On the left-hand side, under Security, look for Secrets and variables and then Actions. On the Actions page, choose New repository secret.

Let's set up the secrets you'll need to deploy to the VPS. Here's an example of how they might look. (It's recommended to create a separate, non-root user for deployments; root is shown here only for simplicity.)

| Name       | Secret                                    |
| ---------- | ----------------------------------------- |
| DEPLOY_KEY | <the private SSH key you created earlier> |
| VPS_USER   | root                                      |
| VPS_HOST   | 78.23.212.130                             |
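
If you prefer the command line, the GitHub CLI can set these for you, assuming gh is installed and authenticated and the key file is available locally:

# Read the private key straight from the file into the secret
gh secret set DEPLOY_KEY < ~/.ssh/id_ed25519

gh secret set VPS_USER --body "root"
gh secret set VPS_HOST --body "78.23.212.130"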

Create a GitHub Actions workflow

From the root of your repository, check out a new Git branch and create a workflow file at .github/workflows/deploy.yml.

Using the code below for inspiration, create a workflow that manages your deployment.

name: Deploy to VPS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H ${{ secrets.VPS_HOST }} >> ~/.ssh/known_hosts

      - name: Deploy via SSH
        run: |
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} << 'EOF'
            set -e
            cd ~/personal-website
            git pull origin main
            docker compose down
            docker compose up --build -d
          EOF

Don’t forget to commit and push these changes to GitHub for them to take effect.
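
Once the repository is on the VPS (covered in the next section), you can rehearse the deploy step by hand from your own machine; if this works interactively, the Actions job should behave the same way. Substitute your own user and host:

ssh root@78.23.212.130 << 'EOF'
  set -e
  cd ~/personal-website
  git pull origin main
  docker compose down
  docker compose up --build -d
EOF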

Clone the repository to the VPS

The GitHub Actions script will run git pull on the VPS, but the repository will need to be cloned first.

SSH into the VPS and navigate to the parent folder of where the repository will sit. Remember that this path should match the one used in your deploy.yml.

Perform a git clone <GitHub SSH location>, for example:

git clone git@github.com:parradam/personal-website.git

And don't forget to set up any environment variables on your VPS! If your app relies on a .env file, you'll need to create it there and make sure it contains your production settings.
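
One way to do this, assuming you keep a production env file locally (.env.production is a placeholder name), is to copy it across with scp:

# Copy a local production env file to the repository root on the VPS
scp .env.production root@78.23.212.130:~/personal-website/.env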

Start up the Docker container on the VPS

Ensure you are in the root of the repository on the VPS, alongside your Docker files, and run docker-compose up -d. This will build the image and run it on the VPS. The -d flag runs the container in detached mode, so your terminal is free for other things.

You can check on it using docker ps, which will show the container's uptime.

Add the container to the network on the VPS

Ensure that your Docker container is attached to the network we created earlier. You can do this with docker network connect <network> <container>. If you don't know the container name, use docker ps to find it (assuming it's running).

Confirm that the container has been attached using docker network inspect <network>.
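
For example, to list just the names of the containers attached to the network (using the network name from my compose file):

docker network inspect personal-inventory-app \
  --format '{{range .Containers}}{{.Name}} {{end}}'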

Set up the reverse proxy on the VPS

I'm assuming that you have a Docker container running nginx. This is the part where you direct web traffic to your application's container.

I won't dwell on this as the setup will vary, but remember these points (see the sketch below):

- The nginx container must be attached to the same Docker network as your app's container.
- In your nginx configuration, proxy_pass should reference the service or container name rather than localhost, since each container has its own network namespace.
- Reload or restart nginx after changing its configuration.

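As a minimal sketch, assuming your app answers on port 5173 under the service name frontend, and with example.com standing in for your real domain:

server {
    listen 80;
    server_name example.com;

    location / {
        # "frontend" resolves via Docker's embedded DNS on the shared network
        proxy_pass http://frontend:5173;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
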
Perform a test push to trigger the GitHub Actions workflow

Once this has completed, you should be able to push to the branch(es) in your deploy.yml file and monitor the Actions workflow.

![[actions-workflow.png]]

You should be able to monitor the workflow by choosing one of the runs, followed by the name of the job (deploy in this case).

![[actions-workflow-job.png]]

Hopefully, you'll be able to watch the workflow run in real time, with green ticks! If you get a failure, you'll need to investigate why.

Common issues with deployment scripts are:

- A mistyped or missing secret (DEPLOY_KEY must contain the full private key, including its header and footer lines).
- A repository path on the VPS that doesn't match the one in deploy.yml.
- Missing environment variables or a missing .env file on the VPS.
- A Docker network name in docker-compose.yml that doesn't match the network created on the VPS.

Useful troubleshooting scripts

If you need to troubleshoot, refer to the Docker documentation. Here are some useful commands that you might find helpful.

| Command                                       | Explanation                                                                                |
| --------------------------------------------- | ------------------------------------------------------------------------------------------ |
| docker ps                                     | List all running containers                                                                 |
| docker network ls                             | List all networks                                                                           |
| docker network inspect <network>              | Inspect a network (use the network name or ID)                                              |
| docker network connect <network> <container>  | Add a container to a network (use the names or IDs)                                         |
| docker exec -it <container_id> sh             | Open a shell within the container, e.g. to browse the file system or ping another container |

Conclusion

Although I've set up a few of these pipelines before, it was a nice refresher to go from start to finish in a couple of hours.

I already have most of the local setup in place to ensure code quality.

My next step will be to set up a proper workflow to lint and test future projects prior to deployment.
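
As a rough sketch of what that could look like (the npm scripts here are assumptions; yours may differ):

name: Lint and test

on:
  push:
    branches: [main]
  pull_request:

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 18

      # These scripts are assumed to exist in frontend/package.json
      - run: npm ci
        working-directory: frontend
      - run: npm run lint
        working-directory: frontend
      - run: npm test
        working-directory: frontend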

