Zack Apiratitham

Migrating from Heroku to Linode

Back in August, Heroku announced that they would no longer offer a free tier for their services starting on 28 November. That was an issue for me since I have a small Spring Boot service with a Postgres database deployed there, and together they would end up costing more per month than I wanted to pay.

In the past few months, I have been hearing more good things about Linode, so I decided to check them out. I don't need anything powerful for this purpose, so their cheapest $5/month shared CPU "Nanode" plan is more than enough. And since this is just a regular Linux server, unlike Heroku's "dynos", I'm not required to use a managed database solution, which keeps costs down. Plus, the cheapest managed Postgres database on Heroku has a limit of 10,000 rows.

So I took on the challenge of migrating my Heroku setup over to Linode, and I'd like to share how I accomplished that here.

Configuring the Server

Linode put together a very helpful guide on the basic initial configurations one should do when setting up a new instance. For the distro, I went with Ubuntu 22.04. I then just followed the guide and ran the system updates, created a limited user account, and set up SSH access from my local machine. I also set up an A record to point my custom domain to the server's IP address.
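As a rough sketch, the steps from that guide boil down to something like the following (the username here is an example, not the one I actually used):

```shell
# Run system updates
sudo apt update && sudo apt upgrade

# Create a limited (non-root) user with sudo access
adduser estel
adduser estel sudo

# From the local machine: copy an existing SSH public key to the server
ssh-copy-id estel@<server IP address>

# Back on the server: harden SSH by setting "PermitRootLogin no" and
# "PasswordAuthentication no" in /etc/ssh/sshd_config, then restart sshd
sudo systemctl restart sshd
```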

Configuring Cloud Firewall

Linode comes with a free firewall service which made it easier than configuring all these rules on the server itself. I set up the following rules to limit access to only SSH and HTTP/HTTPS traffic.
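These rules live on Linode's side, so nothing extra needs to run on the server itself. For reference, roughly the same policy expressed with ufw on the server would look like this (not what I used; Cloud Firewall replaces it):

```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh    # port 22
sudo ufw allow http   # port 80
sudo ufw allow https  # port 443
sudo ufw enable
```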

[Screenshot: Cloud Firewall inbound rules allowing only SSH, HTTP, and HTTPS traffic]

Installing Docker and Docker Compose

To help with deploying this little service, we will use Docker and Docker Compose. Installing these was easily done by following this guide for Docker and this for Docker Compose.

Docker Files

The Dockerfile I had previously for Heroku didn't require a lot of changes: it just copies the JAR from the build directory and sets up the startup command for the image:

```dockerfile
FROM amazoncorretto:11-alpine
VOLUME /tmp
COPY build/libs/*.jar app.jar
CMD ["java", "-Xmx300m", "-Xss512k", "-jar", "/app.jar"]
```

Now we need to create a new docker-compose.yml file to set up both the application image and Postgres database:

```yaml
version: '3'
services:
  estel:
    image: 'estel:latest'
    build:
      context: src/main/docker
    container_name: estel
    depends_on:
      - db
    ports:
      # Binding to 127.0.0.1 keeps the port off the public interface
      - '127.0.0.1:8085:8085'
    restart: unless-stopped
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/estel
      - SPRING_DATASOURCE_USERNAME=${DB_USER}
      - SPRING_DATASOURCE_PASSWORD=${DB_PASSWORD}
  db:
    image: 'postgres:15.0-alpine'
    container_name: db
    ports:
      - '127.0.0.1:8955:5432'
    restart: unless-stopped
    volumes:
      - ${DATA_DIR}:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=estel
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
```
  • The Postgres container is named db and is listed under depends_on in the Spring application (estel) container to make sure the service starts after the database.
  • The SPRING_DATASOURCE_URL environment variable points to the estel database on the db service, as specified by the POSTGRES_DB variable.
  • The username and password are specified via a .env file on the server, and are used in both containers.
  • The DATA_DIR variable points to a directory on the server to ensure data persists between database restarts. Without this, all the data would be lost when we rebuild the database container.
  • I'm binding the container ports to 127.0.0.1 to ensure that they are not exposed outside of the server. I'll take care of proxying the API using Nginx below.
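For reference, the .env file sitting next to docker-compose.yml on the server might look something like this (Docker Compose picks it up automatically; the values below are placeholders, not my actual ones):

```
DB_USER=estel
DB_PASSWORD=<a strong password>
DATA_DIR=/home/estel/postgres-data
```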

Configuring Nginx

For the web server, I opted to use Nginx. Here I'm setting up a subdomain and pointing it to the Spring service I set up previously by using the proxy_pass directive. Requests coming in to that subdomain will now get routed to port 8085 that the service is running on.

```nginx
server {
    # The actual subdomain goes here
    server_name estel.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8085;
        proxy_redirect off;
    }
}
```

Configuring SSL

One thing I couldn't do on Heroku was enabling SSL on my dyno as that was only available with the paid plans. But now that's easily done with our plain old server. I was blown away by how easy setting up SSL was using Certbot. It configured everything correctly with my Nginx configurations, and was up and running in just a few minutes. It's also all free thanks to Let's Encrypt!
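If you'd like to do the same, the Certbot site has the exact steps for each OS and web server, but on Ubuntu with Nginx it comes down to roughly this (based on Certbot's snap-based install instructions):

```shell
# Install Certbot
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

# Detect the server blocks in the Nginx config, obtain certificates
# for the domains you select, and update the config to use them
sudo certbot --nginx

# Confirm that automatic renewal will work
sudo certbot renew --dry-run
```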

Accessing the Database

As you can see in the docker-compose file, the database is only accessible locally on the server. This made it impossible to connect to it from my local machine. I don't want to expose the database to the entire world, so the approach I went with is to connect to it via an SSH tunnel.

I use TablePlus as my database client and it supports connections over SSH. This is what my connection looks like:

[Screenshot: TablePlus connection settings using the "over SSH" option with the server's address and the database credentials]

If you'd like to connect via command line, the SSH tunnel can first be set up by running:

ssh -L 9955:localhost:8955 <SSH User>@<Server IP Address>
  • 9955 is the port on our local machine we will need to connect to.
  • 8955 is the port on the remote server that the database is running on.

We can then connect to the database using psql:

psql -h localhost -p 9955 -U estel estel

Configuring SSH Access with GitHub

As the docker-compose.yml file is part of the project repo, to ensure we use the most up-to-date version of the file for deployments, we need to be able to pull the latest changes from GitHub onto the server. This requires setting up SSH access with GitHub by following this guide. I cloned mine to the ~/estel/ directory.
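In short, that means generating a key on the server, adding the public half to your GitHub account, and cloning over SSH. A sketch of those steps (the repo path is a placeholder):

```shell
# On the server: generate a key and print the public half
ssh-keygen -t ed25519 -C "linode server"
cat ~/.ssh/id_ed25519.pub   # add this under GitHub Settings > SSH and GPG keys

# Verify the connection, then clone the repo
ssh -T git@github.com
git clone git@github.com:<user>/estel.git ~/estel
```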

Creating a GitHub Actions Workflow for Deployments

Now that we have the infrastructure set up, the last thing we need is a strategy for deploying the Spring service. I came up with a GitHub Actions workflow to handle deploying this service with pushes to the main branch.

1. Setting up SSH Access

First off, we need a new SSH key so that our workflow can interact with the server. Navigate to ~/.ssh/ directory on the server and run the following to generate a new SSH key pair:

ssh-keygen -t rsa -b 4096 -C ""

When prompted, name it something different from the default "id_rsa" so as not to conflict with the existing key. I named mine "github-actions".

We need to add the new public key to the authorized_keys file so that the workflow using this key can access our server. In the same directory, run the following to append it to authorized_keys:

cat github-actions.pub >> ~/.ssh/authorized_keys

Now that we have a new SSH key for our workflow, we need to add this to the repository's secrets. In the repo, navigate to "Settings" > "Secrets" > "Actions" and select "New repository secret". I named it "SSH_PRIVATE_KEY" and the value is the private key created earlier (in ~/.ssh/github-actions).

2. Adding Other Secrets

While we're here adding secrets, we need to add a few more to be used by the workflow:

  • HOST: the IP address of the Linode server
  • SSH_PORT: the SSH port on the server
  • SSH_USER: the SSH user

Here's what my Actions secrets look like now:

[Screenshot: GitHub Actions repository secrets HOST, SSH_PORT, SSH_PRIVATE_KEY, and SSH_USER]

These three aren't necessarily secrets but I find it easier to manage them here. You can choose to hardcode these in the workflow file itself as environment variables.
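If you'd rather go that route, the non-secret values could instead be declared at the top of the workflow file and referenced with ${{ env.HOST }} and so on (the values below are examples):

```yaml
env:
  HOST: 203.0.113.10   # example IP, not the real server
  SSH_PORT: 22
  SSH_USER: estel
```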

3. Building the Workflow

This deployment workflow will only have one job called "build-and-deploy" which will take care of building the JAR and Docker image, transferring the image to the server, and rebuilding the container with the latest image.

First we configure the environment to use Ubuntu, actions/checkout@v3 to check out our repo, and actions/setup-java@v3 for our Java environment:

```yaml
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: corretto
          java-version: 11
```

Next we install the SSH key we added to the secrets earlier:

```yaml
steps:
  # ...
  - name: Install SSH key
    uses: shimataro/ssh-key-action@v2
    with:
      key: ${{ secrets.SSH_PRIVATE_KEY }}
      known_hosts: unnecessary
```

We're leaving known_hosts as unnecessary here as we will configure that in the next step:

```yaml
steps:
  # ...
  - name: Add Known Hosts
    run: ssh-keyscan -p ${{ secrets.SSH_PORT }} -H ${{ secrets.HOST }} >> ~/.ssh/known_hosts
```

We're generating this using ssh-keyscan command and appending it to the known_hosts file.

With that setup out of the way, we now run the Gradle build:

```yaml
steps:
  # ...
  - name: Setup Gradle
    uses: gradle/gradle-build-action@v2
  - name: Execute Gradle build
    run: ./gradlew build
```

This will create a JAR file to be used in building our Docker image, which is done with the following step:

```yaml
steps:
  # ...
  - name: Build the Docker image
    run: docker build . --file Dockerfile --tag estel
```

Now to get this image to our server, we could publish it to Docker Hub and have the server pull down the latest image. Unfortunately, the Docker Hub free plan only allows public repositories, and we'd need to upgrade to the $5/month Pro plan to publish private ones. Instead, we're going to save the Docker image as a tar archive to be transferred directly to the server:

```yaml
steps:
  # ...
  - name: Create tar archive
    run: docker save --output estel.tar estel
```

This creates an estel.tar file in our working directory. We then transfer this over to the server using scp:

```yaml
steps:
  # ...
  - name: scp tar archive
    run: scp -P ${{ secrets.SSH_PORT }} estel.tar ${{ secrets.SSH_USER }}@${{ secrets.HOST }}:/home/estel/docker-images/estel.tar
```

Lastly, we execute some commands to deploy this image:

```yaml
steps:
  # ...
  - name: Execute remote commands to deploy the image
    uses: appleboy/ssh-action@master
    with:
      host: ${{ secrets.HOST }}
      username: ${{ secrets.SSH_USER }}
      key: ${{ secrets.SSH_PRIVATE_KEY }}
      port: ${{ secrets.SSH_PORT }}
      script: |
        docker load --input docker-images/estel.tar
        cd /home/estel/estel
        git fetch && git pull
        docker-compose up -d
```
  • The docker load command creates a Docker image using the tar archive we transferred over in the previous step.
  • The git commands ensure that we have the most up-to-date docker-compose.yml file from the repo.
  • docker-compose up -d then rebuilds the estel container using the latest image.
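After a deploy, it's worth a quick check on the server that the new container came up cleanly. For example:

```shell
# Confirm both containers are up
docker-compose ps

# Tail the Spring service logs for startup errors
docker logs --tail 50 estel
```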

And that's it! Now every push to the main branch will trigger this workflow and deploy our Spring service to the server automatically. The final workflow file can be found here.

Hey, I’m Zack. Thanks for reading!

I'm a software developer originally from Krabi, Thailand, currently living and working in the Boulder-Denver area of Colorado, USA. This blog is a place for me to write about my interests and share things I find interesting.

Feel free to shoot me an email or a tweet to say hi!

© 2012-2023 Zack Apiratitham