Configuration tricks and snippets for CircleCI

[Photo of a computer screen displaying a CircleCI configuration file in a code editor. Photo by Ferenc Almasi / Unsplash]

I've spent a lot of time writing CircleCI configuration files for various projects. Here's a compilation of tricks and snippets that someone else might find useful.

What is CircleCI?

If you've found yourself on this post and have no idea what CircleCI is - don't worry! Here's a short description.

CircleCI is a continuous integration and continuous delivery (CI/CD) platform that automates the process of building, testing and deploying code. It is used by developers and DevOps teams to streamline software development workflows, enabling faster and more reliable code releases.

Basically, it is a tool that automates the process of building and releasing the software you're developing.
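
If you've never seen one before, a minimal .circleci/config.yml looks roughly like this (just a sketch - the Node image and the npm test step are placeholders, not something the rest of this post depends on):

version: 2.1

jobs:
  build:
    docker:
      - image: cimg/node:18.18.0
    steps:
      - checkout
      - run: npm test

workflows:
  main:
    jobs:
      - build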

Job defaults

This one allows you to specify default parameters for jobs in the configuration file. Whenever a parameter that is reused by different jobs changes, you only have to update it in one place.

defaults: &defaults
  resource_class: small
  working_directory: ~/repo/
  docker:
    - image: cimg/node:18.18.0
ℹ️
Anchors and aliases in YAML

This one uses anchors and aliases in YAML to create reusable components throughout the whole configuration file.

We create an anchor called defaults that we can then use in a job definition like so:

jobs:
  checkout_code:
    <<: *defaults
    steps:
      - checkout
      - persist_to_workspace:
          root: ~/repo
          paths:
            - .
  install_dependencies:
    <<: *defaults
    steps:
      - attach_and_restore_cache
      - run: npm install --no-save
      - store_cache

Thanks to that, both the checkout_code and install_dependencies jobs get the same values for the resource_class, working_directory and docker properties.
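
In other words, after YAML resolves the alias, checkout_code is equivalent to writing everything out by hand:

jobs:
  checkout_code:
    resource_class: small
    working_directory: ~/repo/
    docker:
      - image: cimg/node:18.18.0
    steps:
      - checkout
      - persist_to_workspace:
          root: ~/repo
          paths:
            - .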

Aliases for common values

Similarly to the Job defaults, we can create aliases for commonly used values of some properties. I usually use them for things like the list of Docker images my test job needs in order to work properly, together with the environment variables those images require.

aliases:
  - &attach_workspace
    attach_workspace:
      at: ~/repo
  - &test_docker_images
    - image: cimg/node:18.18.0
    - image: cimg/postgres:16.0
      environment:
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: postgres

We can then use them like this:

  test_migrations:
    <<: *defaults
    docker: *test_docker_images
    steps:
      - attach_and_restore_cache
      - run:
          name: Apply all migrations
          command: |
            npx prisma migrate deploy
  test:
    <<: *defaults
    docker: *test_docker_images
    steps:
      - attach_and_restore_cache
      - run: npx prisma migrate deploy
      - run:
          name: Tests
          command: |
            npx nx affected --base=$NX_BASE --head=$NX_HEAD -t test
💡
I usually use aliases in combination with job defaults 😉

Extract reusable steps into commands

The CircleCI configuration allows you to create your own commands (with parameters) that are available only to that project [[1]].

[[1]]: You can take it up a level by creating an Orb - a reusable configuration that can be used across different projects, either yours (private orb) or anyone's (public, open source).
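
For reference, pulling in a public orb boils down to declaring it at the top of the configuration and then using the jobs, commands and executors it exposes - a minimal sketch with the circleci/node orb (the version below is only illustrative):

version: 2.1

orbs:
  node: circleci/node@5.1.0

jobs:
  build:
    executor: node/default
    steps:
      - checkout
      - node/install-packages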

You can place your commands under the commands key in the configuration.

The commands can receive parameters, making them usable in different contexts.
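
A bare-bones command with a single parameter could look like this (the greet command is purely illustrative, not part of my configs):

commands:
  greet:
    parameters:
      to:
        type: string
        default: 'world'
    steps:
      - run: echo "Hello << parameters.to >>"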

Take a look at the commands I usually use.

Store/restore cache commands

I usually create the commands for storing and restoring the cache right next to each other. It allows me to easily keep track of the cache keys and update them if needed.

I also usually persist files to the workspace (using the built-in persist_to_workspace command), which requires attaching the workspace in another job to get the files back, so I combine attaching and restoring into one command.

commands:
  attach_and_restore_cache:
    steps:
      - *attach_workspace
      - restore_cache:
          keys:
            - v1-dependencies-{{ .Environment.CIRCLE_BRANCH }}-{{ checksum "package.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-{{ .Environment.CIRCLE_BRANCH }}
  store_cache:
    steps:
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ .Environment.CIRCLE_BRANCH }}-{{ checksum "package.json" }}
💡
I use the *attach_workspace alias from above as the step that attaches the workspace. I do it that way so I can attach just the workspace when I don't need to restore the cache.
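
For example, a job that only needs the files from the workspace could look like this (a sketch on my part - the lint job and its command are made up for illustration):

  lint:
    <<: *defaults
    steps:
      - *attach_workspace
      - run: npm run lint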

Usage:

jobs:
  install_dependencies:
    <<: *defaults
    steps:
      - attach_and_restore_cache
      - run: npm install --no-save
      - store_cache

Run specified command(s) on a remote server using SSH

Depending on your deployment pipeline, this might come in handy when you have to run some commands on the remote server (e.g. docker compose up).

If there is only one step that needs this, maybe it isn't that useful, but when different jobs need to execute commands on the remote server, consider using this to speed up the process.

This one makes heavy use of command parameters, but most of them are optional (they have default values taken from the project's environment variables).

commands:
  execute_on_server:
    parameters:
      ssh_fingerprint:
        type: string
        default: $SSH_KEY_FINGERPRINT
      host:
        type: string
        default: $SSH_REMOTE_HOST
      user:
        type: string
        default: $SSH_REMOTE_USER
      command:
        type: string
      title:
        type: string
        default: 'Execute on server: << parameters.command >>'
    steps:
      - add_ssh_keys:
          fingerprints:
            - << parameters.ssh_fingerprint >>
      - run:
          name: Add server to known hosts
          command: ssh-keyscan -H << parameters.host >> >> ~/.ssh/known_hosts
      - run:
          name: << parameters.title >>
          command: |
            ssh -T << parameters.user >>@<< parameters.host >> \<<'EOL'
              set -eo pipefail
              << parameters.command >>
            EOL
⚠️
When copying this one to your configuration, make sure that the default environment variables match yours! If not, adjust the command parameters (or the names of the variables 😛)
💡
SSH fingerprint

Make sure you've added an SSH key to the CircleCI project settings and saved the correct SSH fingerprint in the environment variables.

You can also hardcode the fingerprint in the configuration, but then you need to update the configuration whenever the remote server changes for some reason.

The usage is pretty straightforward:

jobs:
  deployment:
    <<: *defaults
    steps:
      - execute_on_server:
          title: Rebuild apps on remote
          command: |
            aws ecr get-login-password | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
            docker compose up -d --pull always

Because the command parameter is of the string type, we can pass a multi-line string, which lets us execute multiple commands on the remote server.

If you need to use a different server, just provide different values for the parameters, like so:

jobs:
  deployment:
    <<: *defaults
    steps:
      - execute_on_server:
          ssh_fingerprint: $SOME_OTHER_FINGERPRINT
          host: $SOME_OTHER_HOST
          user: $SOME_OTHER_USER
          command: |
            echo 'hello from different remote'
⚠️
This command does not support changing the SSH port, but it should be relatively easy to introduce this change. I just didn't need that feature.
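
If you did need it, a port parameter could be wired in roughly like this (my own sketch - the port parameter name and its default are assumptions, the rest mirrors the command above):

commands:
  execute_on_server:
    parameters:
      port:
        type: string
        default: '22'
      # ...the other parameters stay the same as above...
    steps:
      - add_ssh_keys:
          fingerprints:
            - << parameters.ssh_fingerprint >>
      - run:
          name: Add server to known hosts
          command: ssh-keyscan -p << parameters.port >> -H << parameters.host >> >> ~/.ssh/known_hosts
      - run:
          name: << parameters.title >>
          command: |
            ssh -T -p << parameters.port >> << parameters.user >>@<< parameters.host >> \<<'EOL'
              set -eo pipefail
              << parameters.command >>
            EOL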

Wait for <some port> to be ready

I usually use this one to check whether the database is ready to accept connections, so I can execute tests that use the database. Depending on your workflow, sometimes the database is always ready before your test suite starts, but it happened a few times that it wasn't, which resulted in failed tests.

commands:
  wait_for_db:
    steps:
      - run:
          # Our primary container isn't MySQL so run a sleep command until it's ready.
          name: Waiting for MySQL to be ready
          command: |
            for i in `seq 1 30`;
            do
              nc -z 127.0.0.1 3306 && echo Success && exit 0
              echo -n .
              sleep 1
            done
            echo Failed waiting for MySQL && exit 1

The command above checks whether port 3306 is ready to accept connections. If it is, it stops with a successful exit code.
If it's not, it waits 1 second and tries again (up to 30 times).
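
Usage is the same as with any other command - just run it before the steps that need the database (a sketch; the cimg/mysql image and the npm test step are my own placeholders):

jobs:
  test:
    <<: *defaults
    docker:
      - image: cimg/node:18.18.0
      - image: cimg/mysql:8.0
    steps:
      - attach_and_restore_cache
      - wait_for_db
      - run: npm test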

💡
If it's not ready after 10-30 seconds, it usually means there is something wrong with the configuration.