
Deploying software on a VPS

published: Mar 13, 2026


This is a simple guide to deploying a fullstack web application on a virtual private server.

Goals

Before starting, let's define some goals:

  • Provision and connect to a VPS using SSH.
  • Secure the server with SSH keys, a firewall, and disabled root login.
  • Deploy a web application (frontend, backend, and a Postgres database).
  • Set up a reverse proxy to serve traffic on a domain.
  • Configure SSL/HTTPS with Cloudflare origin certificates.
  • Set up GitHub Actions for automatic deployment.

Provisioning a VPS

This varies depending on the provider you choose (AWS, GCP, DigitalOcean, Hetzner, etc.). I'll create an AWS EC2 instance for demonstration. These are my specs for reference:

  • CPU - 2 vCPU
  • RAM - 1GB
  • Storage - 8GB volume.

Your provider will ask you to create an SSH key pair, so run the following command to generate one.

> ssh-keygen -t ed25519 -C "me@quantinium.dev" # replace the email with yours
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/quantinium/.ssh/id_ed25519): /home/quantinium/.ssh/test_ed25519 # change the path where you wanna store the ssh-key pair.
Enter passphrase for "/home/quantinium/.ssh/test_ed25519" (empty for no passphrase): # add a passphrase for more security
Enter same passphrase again:
Your identification has been saved in /home/quantinium/.ssh/test_ed25519
Your public key has been saved in /home/quantinium/.ssh/test_ed25519.pub
The key fingerprint is:
SHA256:bUV/+3g9EK33RDCkuHEbzjaJv5Rl9ZMt+4J+PykYMWU me@quantinium.dev
The key's randomart image is:
+--[ED25519 256]--+
|            ..+  |
|           o E.o |
|          o B...+|
|         . @ +o+=|
|        S = Xo+=+|
|         . + =o==|
|            *.o.*|
|           o.oo=.|
|           .o..o+|
+----[SHA256]-----+

After generating the key at your chosen path, print the public key and paste it into your provider's dashboard.

> ls ~/.ssh | grep "test_ed25519"
.rw-------. quantinium quantinium 411 B  Wed Mar 11 14:11:08 2026 test_ed25519
.rw-r--r--. quantinium quantinium  99 B  Wed Mar 11 14:11:08 2026 test_ed25519.pub

> cat ~/.ssh/test_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIODM3k6JXmMRjCnrlPzoGYMMwwG2feHvOilYT2In/ruM me@quantinium.dev

Create your instance, grab the server's public IP address, and verify the connection.

> ssh -i ~/.ssh/test_ed25519 root@69.223.84.134 uname -r
6.12.48+deb13-cloud-amd64

If you get similar output (the exact kernel string varies by distribution), you've successfully connected to the server. Congratulations.
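Typing the key path and IP on every connection gets old; an entry in ~/.ssh/config shortens it. A minimal sketch, with a hypothetical host alias vps (update User to your non-root user once root login is disabled later):

```
Host vps
    HostName 69.223.84.134
    User root
    IdentityFile ~/.ssh/test_ed25519
```

After this, ssh vps does the same thing as the full command.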

Securing the server

Before deploying anything, let’s lock down the server which includes:

  • Update system packages
  • Enable automatic security updates
  • Create a non-root user
  • Disable root login
  • Enable and configure a firewall
  • Set the correct timezone

Let’s start by updating system packages.

~$ apt update && apt upgrade -y # works on Debian- or Ubuntu-based distributions

Install unattended-upgrades to automatically apply security updates without manual intervention.

~$ sudo apt install unattended-upgrades
~$ sudo dpkg-reconfigure -plow unattended-upgrades
~$ sudo systemctl status unattended-upgrades

Now that our packages have been updated, check whether the following file exists; if it does, reboot the server.

~$ cat /var/run/reboot-required
~$ reboot # only if the file above exists

If you reboot, your SSH connection will be terminated, so reconnect after a short wait.

Now, let's create a non-root user and copy the SSH keys over to the new user with the following commands:

~$ adduser quantinium # replace with your username
~$ usermod -aG sudo quantinium
~$ mkdir /home/quantinium/.ssh
~$ cp ~/.ssh/authorized_keys /home/quantinium/.ssh/
~$ chown -R quantinium:quantinium /home/quantinium/.ssh
~$ chmod 700 /home/quantinium/.ssh
~$ chmod 600 /home/quantinium/.ssh/authorized_keys

Let's disable root login. Open sshd_config:

~$ sudo vim /etc/ssh/sshd_config

Set PermitRootLogin no and PasswordAuthentication no, then restart the SSH server.
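The two directives in /etc/ssh/sshd_config should end up looking like this:

```
PermitRootLogin no
PasswordAuthentication no
```

Before restarting, run sudo sshd -t to validate the config; a broken sshd_config can lock you out of the server.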

~$ sudo systemctl restart ssh
~$ exit

Verify that root login is blocked, then log in as the non-root user.

> ssh -i ~/.ssh/test_ed25519 root@69.223.84.134 uname -r # this should fail as root login is no longer allowed
> ssh -i ~/.ssh/test_ed25519 quantinium@69.223.84.134 # should work

Let’s install and enable the firewall. I’ll be using UFW (Uncomplicated Firewall). We deny all incoming traffic and allow all outgoing traffic, with the exception of port 22 (the OpenSSH port) to keep our SSH connection alive.

~$ sudo apt install ufw
~$ sudo ufw default deny incoming
~$ sudo ufw default allow outgoing
~$ sudo ufw allow ssh # make sure to allow SSH or you may lose access to your server
~$ sudo ufw enable
~$ sudo ufw status verbose

Finally, set the correct timezone

~$ timedatectl list-timezones # find your timezone
~$ sudo timedatectl set-timezone Asia/Kolkata
~$ timedatectl status
               Local time: Wed 2026-03-11 22:49:24 IST
           Universal time: Wed 2026-03-11 17:19:24 UTC
                 RTC time: Wed 2026-03-11 17:19:24
                Time zone: Asia/Kolkata (IST, +0530)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

There’s a lot more you can do to harden a server, but this is enough to get started.

Installing Docker

Before installing Docker, I recommend reading the official documentation and following the steps for your own distribution. The commands below are for Debian.

# Add Docker's official GPG key:
~$ sudo apt update
~$ sudo apt install ca-certificates curl
~$ sudo install -m 0755 -d /etc/apt/keyrings
~$ sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
~$ sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
~$ sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF

~$ sudo apt update
~$ sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
17eec7bbc9d7: Pull complete
ea52d2000f90: Download complete
Digest: sha256:85404b3c53951c3ff5d40de0972b1bb21fafa2e8daa235355baf44f33db9dbdd
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Run the post-install steps to manage docker as a non-root user.

~$ sudo groupadd docker
~$ sudo usermod -aG docker $USER
~$ newgrp docker
~$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

Building and Pushing Containers

If you’re following along with your own repo, create a Dockerfile for your application. For this guide, I’ll be using my own application, which already has Dockerfiles set up for building and deploying. If you want to learn how to write Dockerfiles, refer to this. Now, choose your preferred container registry and log in with docker login. I’ll be using GHCR.

> docker login ghcr.io
Username:
Password:
Login Succeeded

Build and tag each container with both the commit hash and latest, then push to the registry. Using the commit hash as a tag records exactly which commit an image was built from, which makes debugging easier later.

# build containers
# f842281 is the commit hash when these containers were created
> docker build -t ghcr.io/quantinium3/deployment-demo/server:f842281 -t ghcr.io/quantinium3/deployment-demo/server:latest -f apps/srv/Dockerfile .
> docker build -t ghcr.io/quantinium3/deployment-demo/web:f842281 -t ghcr.io/quantinium3/deployment-demo/web:latest -f apps/web/Dockerfile .

# push containers to a registry
> docker push ghcr.io/quantinium3/deployment-demo/server:f842281
> docker push ghcr.io/quantinium3/deployment-demo/server:latest
> docker push ghcr.io/quantinium3/deployment-demo/web:f842281
> docker push ghcr.io/quantinium3/deployment-demo/web:latest
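The build-and-push commands above follow a fixed pattern, so they can be scripted instead of typed by hand. A minimal sketch; the registry path is the one from this guide, and in a real checkout SHA would come from git rev-parse --short HEAD rather than being hard-coded:

```shell
#!/bin/sh
# Derive the image names once and reuse them for both build and push.
REGISTRY=ghcr.io/quantinium3/deployment-demo
SHA=f842281   # in a repo: SHA=$(git rev-parse --short HEAD)

# Print the four tags that the docker build/push commands above use;
# in practice you would pass these to `docker build -t` and `docker push`.
for app in server web; do
  echo "$REGISTRY/$app:$SHA"
  echo "$REGISTRY/$app:latest"
done
```

Feeding the same variables to both docker build -t and docker push keeps the hash tag and the latest tag in sync.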

Writing Docker Compose

We could pull these containers onto the server and run them individually, but it’s easier to write a docker-compose.yml file that manages everything together.

services:
  web:
    image: ghcr.io/quantinium3/deployment-demo/web:latest
    ports:
      - "3001:3001"
    depends_on:
      server:
        condition: service_healthy
    restart: unless-stopped

  server:
    image: ghcr.io/quantinium3/deployment-demo/server:latest
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy
    environment:
      NODE_ENV: ${NODE_ENV}
      DATABASE_URL: ${DATABASE_URL}
    healthcheck:
      test: ["CMD", "node", "-e", "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    restart: unless-stopped

  db:
    image: postgres:18-alpine
    restart: unless-stopped
    shm_size: 128mb
    environment:
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_USER: ${DATABASE_USER}
      POSTGRES_DB: ${DATABASE_DBNAME}
    expose:
      - "5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DATABASE_USER} -d ${DATABASE_DBNAME}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s

volumes:
  postgres_data:

Three services are defined here:

  • web - Frontend. Pulls from GHCR, maps port 3001, and waits for the server to be healthy before starting.
  • server - Backend. Pulls from GHCR, maps port 3000, and waits for the database to be healthy before starting.
  • db - A standard Postgres container. Data is stored in a named volume so it survives restarts.
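The compose file substitutes ${...} variables from the environment, and a .env file next to it is the usual way to supply them. An example with placeholder values (every value here is made up; use your own):

```
NODE_ENV=production
DATABASE_USER=appuser
DATABASE_PASSWORD=changeme
DATABASE_DBNAME=appdb
DATABASE_URL=postgresql://appuser:changeme@db:5432/appdb
```

Note that DATABASE_URL points at the host db: that is the service name in the compose file, and containers on the same compose network resolve each other by service name.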

Before automating deployment, let’s verify everything works. Copy docker-compose.yml and .env to the server and try running it.

> scp -i ~/.ssh/test_ed25519 .env quantinium@69.223.84.134:/home/quantinium
> scp -i ~/.ssh/test_ed25519 docker-compose.yml quantinium@69.223.84.134:/home/quantinium

SSH into the server and start the containers. There is no need to source the .env file; docker compose reads it automatically from the current directory.

> ssh -i ~/.ssh/test_ed25519 quantinium@69.223.84.134
> docker compose up -d
> docker ps

Configure NGINX

Now that our containers are running, let’s configure nginx to point to them. First, add a DNS record for your domain that points to your server.

Type: A | Name: demo | Content: <server_ip> | Proxy Status: Proxied | TTL: auto

While we wait for the record to propagate, we can install and configure nginx to serve requests.

~$ sudo apt install nginx

I’ll be using Cloudflare to generate an origin server certificate, since my domain is on Cloudflare. If you’re following along exactly, go to SSL > Origin Server and generate a certificate for your domain. You’ll get two files: a .pem certificate and a .key private key. Create a directory to store them in.

~$ sudo mkdir /etc/nginx/certs
~$ sudo vim /etc/nginx/certs/demo.quantinium.dev.pem # paste the origin certificate in this
~$ sudo vim /etc/nginx/certs/demo.quantinium.dev.key # paste the private key in this

Create a new nginx config at /etc/nginx/sites-available/deployment-demo

~$ sudo vim /etc/nginx/sites-available/deployment-demo

and add the following nginx configuration to it

server {
    listen 443 ssl;
    http2 on;
    server_name demo.quantinium.dev;
    ssl_certificate /etc/nginx/certs/demo.quantinium.dev.pem;
    ssl_certificate_key /etc/nginx/certs/demo.quantinium.dev.key;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name demo.quantinium.dev;
    return 301 https://$host$request_uri;

}
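Two more proxy headers are commonly added so the backend can see the full client chain and the original request scheme; if your app needs them, put these inside each location block:

```
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```

This matters mostly when the backend generates absolute URLs or logs client IPs, since it otherwise only sees the proxy.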

Enable the config and reload nginx.

~$ sudo ln -s /etc/nginx/sites-available/deployment-demo /etc/nginx/sites-enabled/deployment-demo
~$ sudo nginx -t # to check if configuration is correct
~$ sudo systemctl reload nginx

Also open port 80 and 443 in the firewall.

~$ sudo ufw allow 'Nginx Full'
~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
80,443/tcp (Nginx Full)    ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
80,443/tcp (Nginx Full (v6)) ALLOW IN    Anywhere (v6)

Now, if the domain has propagated correctly and everything is configured properly, we can visit our domain and the site should load.

Continuous Deployment

Hopefully everything has worked so far and the site is fully up. But every time we change the code, we’d have to repeat all of this: building containers, tagging them, pushing to GHCR, and redeploying. So let’s automate it with GitHub Actions.

Building Container Image

Let’s create a workflow to build our server container. Create a new file at .github/workflows/build-server-container.yml

name: Build And Push Server Container

on:
  push:
    branches:
      - main
    paths:
      - 'apps/srv/**'
  workflow_dispatch:

env:
  IMAGE_PREFIX: ghcr.io/${{ github.repository }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Set short SHA
        id: vars
        run: echo "sha=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./apps/srv/Dockerfile
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/server:${{ steps.vars.outputs.sha }}
            ${{ env.IMAGE_PREFIX }}/server:latest

Here we define a GitHub workflow that does the following:

  1. Runs whenever there is a push to the main branch that touches the server app (or when triggered manually via workflow_dispatch).
  2. Checks out the code, i.e. clones it to the Ubuntu runner.
  3. Logs in to ghcr.io so the built container can be pushed.
  4. Creates a short SHA to tag the container with.
  5. Sets up QEMU and Buildx for multi-platform builds, build caching, etc.
  6. Builds the container and tags it with both the short SHA and latest.

Similarly, we create a workflow for the web container. Create a new file at .github/workflows/build-web-container.yml

name: Build And Push Web Container

on:
  push:
    branches:
      - main
    paths:
      - 'apps/web/**'
  workflow_dispatch:


env:
  IMAGE_PREFIX: ghcr.io/${{ github.repository }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    environment: production
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Set short SHA
        id: vars
        run: echo "sha=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./apps/web/Dockerfile
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/web:${{ steps.vars.outputs.sha }}
            ${{ env.IMAGE_PREFIX }}/web:latest

This does the same as the server workflow, but for the web container.

Deploying to the Server

Finally, a workflow to automatically deploy to our server. Create a new file at .github/workflows/deploy-to-vps.yml.

name: Deploy to VPS

on:
  workflow_run:
    workflows:
      - "Build And Push Web Container"
      - "Build And Push Server Container"
    types:
      - completed

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    if: ${{ github.event.workflow_run.conclusion == 'success' }}

    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Copy docker-compose to server
        uses: appleboy/scp-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SERVER_SSH_KEY }}
          source: "docker-compose.yml"
          target: "~/app"

      - name: Deploy on server
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            cd ~/app
            echo "${{ secrets.ENV_FILE }}" > .env
            docker compose up -d --pull always --remove-orphans
            docker image prune -f

Here we define the workflow that deploys to our VPS. It does the following:

  1. Runs when either of the container build workflows completes successfully.
  2. Checks out the code, i.e. clones it to the runner.
  3. Copies docker-compose.yml to the server using scp-action.
  4. Writes the .env file on the server from the ENV_FILE secret.
  5. Runs docker compose up -d --pull always --remove-orphans to pull and start the newly built containers.
  6. Runs docker image prune -f to delete dangling images.

Push these files to GitHub, then go to Settings > Secrets and Variables > Actions and add the following environment secrets:

  • ENV_FILE - Contents of the .env file
  • SERVER_HOST - Server’s IP address
  • SERVER_SSH_KEY - Private SSH key
  • SERVER_USER - Server username

With the secrets in place, every push to main will automatically build, push, and deploy your containers. That’s the CD part of CI/CD done.

Troubleshooting

  • Failed to push to GHCR with a write_packages permission error - open the package’s settings on GitHub, link the package to your repository, and grant the repository write access.

If you have made it this far, thank you! I hope you liked it, and if you hated it, thank you for caring enough to hate. If you get stuck or hit any error, feel free to message me.

quantinium 2026