
Pull request review deployments with Docker

By pariskasid on 07 Apr 2025

Over the past eighteen months, we've developed and refined a workflow that has transformed how we review, test and ship code to production. We call it "Pull request review deployments". I recently presented this at Docker Athens, and I wanted to share the core concepts in our blog, as well.

What are pull request review deployments?

In a nutshell, we deploy every pull request on its own dedicated environment. When we open a new pull request, a new deployment happens automatically, and we get a URL where we can preview the work from that particular branch. The key difference from traditional approaches is that there's no centralized staging environment - just isolated pull request deployments and the production deployment.

This approach is primarily a Git- and GitHub-based workflow: deployments are triggered by pushing new commits to a pull request or by commenting on it.

Why not use a staging environment?

Being autonomous is a top priority for us, and it requires both speed and reliability to deliver great results. Traditional staging environments are incompatible with this for a few reasons:

  • They require team coordination, since they are centralized. With pull request review deployments, we can work completely autonomously.
  • Managing a centralized database for the staging environment, where multiple pull requests might need to apply migrations, adds complexity that is not worth it. With a dedicated environment for each pull request and a straightforward way to wipe and rebuild the database (just a GitHub comment), this is not even an issue.
  • Staging environments are not a great option for sharing with external stakeholders, because of their centralized, internal nature. That's why an additional environment (e.g. qa or canary) is often used to share work with external stakeholders. Pull request deployments can be shared with stakeholders as needed, since they are isolated, and we can create additional branches from the same commit for different audiences.

How we implement this

Infrastructure

We've kept our infrastructure remarkably simple:

  • One 64GB RAM bare-metal server with:
    • Self-hosted GitHub Actions Runner
    • Docker Swarm (Kubernetes or plain Docker could also work)
    • Ceryx (our open-source dynamic reverse proxy based on Nginx)
  • Cloudflare for DNS, security and proxy configuration

This simple setup works because these are development environments with no critical user data.

Docker configuration

Our Docker setup follows three principles:

  • Build with one command: Each application should be able to be built with a single command across all environments
  • Deploy with one command: Each application should be able to be deployed full-stack with a single command across all environments
  • Environment name awareness: The Docker configuration is aware of the environment name, which we use later for data filtering.

Here's a simplified example of one of our Docker Swarm configuration files:
services:
  web:
    image: ghcr.io/withlogicco/rwc:${GIT_COMMIT}
    environment:
      ENVIRONMENT: ${ENVIRONMENT}

networks:
  default:
  ceryx:
    external: true
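The first two principles boil down to a single command that substitutes the commit and environment into a stack configuration like the one above. Here is a dry-run sketch of that idea; the helper, the compose file name and the rwc stack name are illustrative, not our actual scripts:

```shell
# Sketch of the "deploy with one command" principle (dry run).
# Derives the environment name from the pull request number and prints the
# resulting docker stack deploy invocation instead of executing it.
deploy_preview() {
  pr_number="$1"
  git_commit="$2"
  environment="pr-${pr_number}"
  echo "GIT_COMMIT=${git_commit} ENVIRONMENT=${environment}" \
    "docker stack deploy --compose-file docker-stack.yml rwc-${environment}"
}

deploy_preview 11 abc1234
```

In a real pipeline the variables would be exported and the docker command executed directly, so the same one-liner works locally, in CI, and for any environment.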

CI/CD

Our CI/CD workflow is built on three key elements:

  • GitHub Actions self-hosted runners: We get faster builds with local Docker layer caching and direct secure connection to Docker Swarm.
  • Multiple triggers: We deploy automatically when commits are pushed to a pull request, or with IssueOps comments, like .recreate to wipe the environment and deploy from scratch.
  • Reusable workflows: We use GitHub Actions' reusable workflows for a consistent deployment process regardless of the trigger.
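The IssueOps side of these triggers can be sketched as a small dispatcher that maps a comment body to a deployment action. Only .recreate appears in our workflow; the other commands and the action names are hypothetical:

```shell
# Hypothetical IssueOps dispatcher: maps a pull request comment body to a
# deployment action. Only .recreate comes from the workflow described in
# this post; .deploy is an illustrative extra.
handle_comment() {
  case "$1" in
    .recreate) echo "wipe-database-and-redeploy" ;;
    .deploy)   echo "deploy" ;;
    *)         echo "ignore" ;;
  esac
}

handle_comment .recreate
```

In practice this branching would live in the GitHub Actions workflow itself, with each action calling the same reusable deployment workflow.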

Observability

For observability we focus on three aspects:

  • Deployment status: We rely on GitHub's deployment feature and visual cues in pull requests to monitor whether a deployment succeeded or not.
  • Error tracking: We use Sentry to monitor exceptions and filter errors by release (based on Git commit) and environment (as seen above).
  • Log monitoring: Dozzle provides simple container log monitoring, and we automatically post a comment with a link to relevant logs after each successful deployment.

Data initialization

To make environments useful from the start, we:

  1. Generate fixtures from our local environment using Django's dumpdata command.
  2. Clean up sensitive information and check fixtures into Git.
  3. Load fixtures and initialize a superuser in the Docker image entry point, when deploying a preview environment.
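The entry-point step can be sketched as a guard on the environment name (this is where the environment awareness from the Docker configuration pays off). This is a dry-run sketch that prints the commands instead of executing them; the fixture path and superuser setup are illustrative:

```shell
# Sketch of data initialization in the image entry point (dry run: commands
# are printed, not executed; fixtures/sample.json is an illustrative path).
init_data() {
  environment="$1"
  if [ "$environment" = "production" ]; then
    echo "skip"
    return
  fi
  # On preview environments, seed the fresh database and create a superuser.
  # With --noinput, Django reads DJANGO_SUPERUSER_* environment variables.
  echo "python manage.py migrate"
  echo "python manage.py loaddata fixtures/sample.json"
  echo "python manage.py createsuperuser --noinput"
}

init_data pr-11
```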

This approach means each environment starts with useful sample data and pre-configured users for testing different roles.

Subdomains and SSL

For each deployment, we need a unique URL secured with SSL. We handle this by:

  • Using Ceryx to dynamically configure subdomain routing on our server (e.g. pr-11.rwc.<development-domain>)
  • Using Cloudflare for SSL certificates and proxying

To avoid managing DNS records at deployment time, we have set up a wildcard DNS record (*.<development-domain>) pointing to our bare-metal server, where all environments are deployed.
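Put together, each deployment's address is a pure function of the project and pull request number. A hypothetical helper, with example.dev standing in for the development domain:

```shell
# Hypothetical helper mapping a project and pull request number to its
# preview URL under the wildcard development domain. example.dev stands in
# for the actual development domain.
preview_url() {
  project="$1"
  pr_number="$2"
  base_domain="$3"
  echo "https://pr-${pr_number}.${project}.${base_domain}"
}

preview_url rwc 11 example.dev
```

Because the hostname is deterministic, the deployment workflow can register the route with the reverse proxy and post the URL back to the pull request without touching DNS.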

Access control

Since these environments are often sensitive, we grant access only to authorised people. We secure these environments with Cloudflare Access, which offers:

  • Multiple access groups for different environments (e.g. LOGIC team and the client stakeholders of each project)
  • Multiple identity providers (Google Workspace for our team, email OTP verification for clients)
  • No need to manage additional credentials for each environment (e.g. basic auth)

Summing up, the key takeaways of pull request review deployments are:

  1. Complete autonomy: Team members can work without worrying about interfering with anyone's work.
  2. Isolated testing: Each feature can be tested in isolation with its own data.
  3. Easy sharing: Deployments can be shared with external stakeholders with proper access controls.
  4. End to end reviews: Reviewers can easily validate the actual implementation, not just code.
  5. Quick reset: If something goes wrong, we can easily recreate the environment with a simple command.

And as a bonus, since this is running on a 64GB bare-metal server, we get to host tens of deployed environments concurrently, without breaking the bank.

Considerations

If you're considering implementing something similar, here are a few things to keep in mind:

  • Works great for projects where each PR represents a meaningful, testable feature
  • Resource usage can add up quickly if you have many active PRs simultaneously, so consider setting limits per environment
  • Data initialization is crucial to getting a working environment from the get-go - invest time in creating good fixtures

Conclusion

Pull request review deployments have been a game-changer for our development workflow at LOGIC. We have been working this way daily on multiple projects for more than eighteen months, and we can never go back. The only thing we look forward to is making this even better with more ideas we have in mind.

If you're interested in learning more or want to discuss implementing a similar approach, feel free to contact us.
