De-Googling Part 0: Fundamentals

Preamble
I've had a Google Workspace (formerly G Suite) account forever. I originally had a free account, then started paying for it when Google changed their subscription model because, to be quite honest, the price was reasonable, and at the time G Suite combined a lot of useful cloud functionality I would otherwise have had to pay for elsewhere.
Over time, though, the price has gone up significantly, some features (such as Gemini) have been foisted upon me without my consent, and it's started to feel more like a burden than a benefit. So I decided to start exploring other options.
I have a number of domains connected to Workspace, and two user accounts - my partner's and my own. The key services I wanted to replace are Gmail, Photos, and Drive; replacing those three would let me drop my paid Google Workspace account entirely.
Base System
Hardware
I picked up a secondhand Intel NUC with an i5 processor, 8GB of RAM, and a 128GB SSD for AU$50 on Facebook Marketplace. It has a second M.2 slot for another SSD, but for the moment I am using a 1TB SSD inside a cheap external enclosure connected via USB 3 as the second drive.
I use a Linksys WRT3200ACM as my router, set a fixed IP address within my network for the NUC, and have configured ports 80 and 443 to route to the NUC's IP address. My ISP by default uses CG-NAT, but fortunately I was able to turn this off and have an exclusive dynamic IP associated with my internet connection. I could have paid extra for a static IP address, but I decided to set up dynamic DNS instead.
Operating System
I am running Ubuntu Server 24.04; however, my setup could be replicated on just about any Linux flavour (e.g. Debian, Fedora) with Intel GPU driver and Docker support. I have a pretty bare-bones install, but run sshd so that I can log into the NUC remotely (from within my home network).
I use Docker's official Ubuntu repositories to source the Docker software. I won't go into too much detail about installing Docker itself; the Docker documentation details the process pretty well. Suffice it to say I installed Docker itself, the docker compose plugin, and configured my default non-root user to be able to control docker.
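For reference, once Docker's apt repository has been added (following the Docker docs), the remaining steps boil down to something like this:
# Install the engine, CLI and compose plugin from Docker's repository
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Let the default non-root user control docker (takes effect on the next login)
sudo usermod -aG docker $USER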
The Fundamental Services
I have all of my docker services configured in my primary user's home directory, under the docker subdirectory. I considered building one monolithic docker-compose.yml file for all services, but I prefer to keep things modular, so for each service or rough service group I have another subdirectory under ~/docker.
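By the end of this post the layout looks roughly like this (later posts will add more subdirectories alongside these):
~/docker/
├── ddns/
│   ├── docker-compose.yml
│   └── .env
├── npm/
│   └── docker-compose.yml
├── borg/
│   ├── Dockerfile
│   ├── docker-compose.yml
│   └── .env
└── deck-chores/
    ├── docker-compose.yml
    └── .env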
Dynamic DNS
I'm assuming you have your own domains to hook up to the various services we'll install. I use a primary domain, point it at my internet connection's public IP with an A record, and then CNAME a series of subdomains to that primary domain.
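As a concrete illustration (example.com, the subdomains and the IP below are placeholders only), the records end up looking something like this:
example.com.          A      203.0.113.42    ; kept up to date by dynamic DNS (below)
npm.example.com.      CNAME  example.com.
photos.example.com.   CNAME  example.com.
mail.example.com.     CNAME  example.com.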
I use AWS' Route53 to manage my domains, so to dynamically update my DNS I just needed a dockerised service that could talk to AWS and update Route53. I used crazymax/ddns-route53 and created a basic docker-compose.yml for it in ~/docker/ddns:
services:
  ddns-route53:
    image: crazymax/ddns-route53:latest
    container_name: ddns-route53
    environment:
      TZ: ${TIMEZONE}
      SCHEDULE: "*/30 * * * *"
      LOG_LEVEL: info
      LOG_JSON: false
      DDNSR53_CREDENTIALS_ACCESSKEYID: ${AWS_ACCESS_KEY}
      DDNSR53_CREDENTIALS_SECRETACCESSKEY: ${AWS_SECRET_ACCESS_KEY}
      DDNSR53_ROUTE53_HOSTEDZONEID: ${ROUTE53_ZONE_ID}
      DDNSR53_ROUTE53_RECORDSSET_0_NAME: ${RECORDSET_0_NAME}
      DDNSR53_ROUTE53_RECORDSSET_0_TYPE: ${RECORDSET_0_TYPE}
      DDNSR53_ROUTE53_RECORDSSET_0_TTL: ${RECORDSET_0_TTL}
    restart: always
I use a series of env vars, which I place in ~/docker/ddns/.env. Most of them are self-explanatory; the DDNSR53_ROUTE53_* env vars are explained in the installation instructions for crazymax/ddns-route53.
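For illustration, the .env file ends up with roughly this shape (every value here is a placeholder):
TIMEZONE=Australia/Sydney
AWS_ACCESS_KEY=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ROUTE53_ZONE_ID=ZXXXXXXXXXXXXXX
RECORDSET_0_NAME=example.com
RECORDSET_0_TYPE=A
RECORDSET_0_TTL=300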
This gives me a service that pings Route53 every 30 minutes and updates my domain's A record with my publicly facing IP address.
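With the compose file and .env in place, starting the service and watching its first run is just a matter of:
cd ~/docker/ddns
docker compose up -d
# watch the first run to confirm the A record gets updated
docker compose logs -f ddns-route53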
Reverse Proxy
I use a dockerised NGINX Proxy Manager, which gives me a simple web interface for defining proxy hosts and routing requests to the right internal services. Going forward I'll call NGINX Proxy Manager NPM for short.
I barely make any customisations to the docker-compose file (~/docker/npm/docker-compose.yml). I just add volumes for the configuration and LetsEncrypt data, so that they persist when I restart the container.
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
Once installed and running, I can go to http://my-nuc-ip-address:81 and see the NPM Dashboard. The first step is to create an administrative user for NPM, using an email address and a password. This user will be needed to log into NPM going forward.
If you want to manage your proxies remotely, the next step is to configure a subdomain for NPM to proxy to itself. I won't go into the details of doing this, as every domain provider is different, but it just involves creating a CNAME record for the subdomain and pointing it to your primary domain.
Once the DNS side is done, you can create a proxy in NPM from the top menu - Hosts > Proxy Hosts, Add Proxy Host.

In the Domain Names field, add the subdomain you configured. Point the Forward Hostname to npm, or whatever you named your NPM service when creating your docker-compose.yml file, and use Forward Port 81. This will make NPM accept requests for your subdomain and forward them to your internal docker service named npm on port 81.
Go to the SSL section.

Choose to Request a new SSL certificate, toggle Force SSL on, and click Save. Assuming your DNS and port forwarding are set up correctly, NPM will do three things:
- Set up a route for LetsEncrypt to verify that you own the subdomain
- Install the SSL Certificate that LetsEncrypt generates, and
- Configure itself to redirect any HTTP requests to your subdomain to HTTPS.
If this worked correctly, you should now be able to visit https://your-subdomain, and NPM will prompt you to log into your dashboard.
Going forward, you will be able to use the same process to set up a subdomain for each new service.
Backups
I tried a few different approaches to backups. My first port of call was AWS S3, but I'm starting to move away from S3 simply because it is needlessly expensive for personal storage use.
Instead, I hunted around for some storage providers and hit upon Hetzner, which offers a 'Storage Box' product - essentially a bare-bones chunk of remote storage accessible over a handful of protocols (SFTP, rsync, Samba, Borg and so on).
I initially set up rsync to simply back up my required directories via SSH - however this is a fairly primitive solution as all I could do was sync local to remote.
After researching a little more I discovered Borg. Borg is a deduplicating backup tool: it only transfers the chunks that have changed since the previous backup, which saves a lot of bandwidth. It also provides backup versioning out of the box, so I can keep multiple versions backed up with minimal extra space used (thanks to the deduplication) and prune them regularly. Best of all, Hetzner Storage Boxes support Borg.
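To give a feel for the workflow before wrapping it in docker, the core Borg commands look roughly like this (the repository URL and paths are placeholders, not my actual setup):
# One-off: initialise an encrypted repository on the remote host
borg init --encryption=repokey-blake2 ssh://user@backup-host/./backups/main

# Create a new archive; only chunks that changed since the last backup are transferred
borg create ssh://user@backup-host/./backups/main::'{hostname}-{now}' /home/me/docker

# Keep 7 daily, 4 weekly and 6 monthly archives, discarding the rest
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://user@backup-host/./backups/main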
I decided to roll my own docker container using Alpine Linux as a base. In ~/docker/borg, I created my Dockerfile like so:
# Start with the latest Alpine Linux base image
FROM alpine:latest

# Install Borg Backup and the other packages needed to reach the remote storage over SSH
RUN apk add --no-cache \
    borgbackup \
    openssh-client \
    ca-certificates \
    tzdata

# Set up a work directory
WORKDIR /data

# Keep the container alive so that backup jobs can be run inside it
CMD ["tail", "-f", "/dev/null"]
This installs borgbackup and a few other necessary utilities to back up to Hetzner.
I then created my docker-compose.yml:
services:
  borg-client:
    build: .
    container_name: borg-client
    volumes:
      - ${SSH_PK}:/root/.ssh/id_rsa:ro
      - ${SSH_CONFIG}:/root/.ssh
      - ./borg-config:/root/.config/borg
      - ./borg-cache:/root/.cache/borg
    environment:
      BORG_REPO: ${BORG_REPO}
      BORG_PASSPHRASE: ${BORG_PASSPHRASE}
      SSH_COMMAND: 'ssh -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no'
There are a couple of things to consider here.
First, the SSH mounts. Following the Hetzner instructions for setting up remote access to Borg, I created an SSH key pair and set up my Storage Box to recognise the public key. ${SSH_PK} is the path to the local private key, which I have mounted read-only into the container's filesystem. ${SSH_CONFIG} is a path mounted as a volume so that the container will persist its SSH known_hosts file between invocations.
The other variables should be understandable if you have followed the Hetzner instructions: ${BORG_REPO} is the remote SSH path to the repo set up in those instructions, and ${BORG_PASSPHRASE} is the passphrase entered during the repo initialisation.
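For reference, my ~/docker/borg/.env looks roughly like this (all values are placeholders; the user, host, port and repo path come from your own Storage Box details and the Hetzner Borg instructions):
SSH_PK=/home/me/.ssh/storagebox_key
SSH_CONFIG=/home/me/docker/borg/ssh
BORG_REPO=ssh://uXXXXXX@uXXXXXX.your-storagebox.de:23/./backups/nuc
BORG_PASSPHRASE=a-long-random-passphrase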
This is the bare-bones setup for the Borg container. Actual backup paths and regimes will be set up later, so we will revisit this container down the track.
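That said, even at this stage you can sanity-check that the container can reach the repository (assuming the repo has already been initialised per the Hetzner instructions):
cd ~/docker/borg
docker compose up -d --build
# borg reads BORG_REPO and BORG_PASSPHRASE from the environment, so no arguments are needed
docker compose exec borg-client borg info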
Managing Scheduled Tasks
The main scheduled tasks we'll manage are backups (using the Borg container). We could simply do this using crontab in the host operating system, but I figured it would be nice to have everything contained within the docker ecosystem, so I found a dockerised scheduled job runner called deck-chores.
Deck-chores connects to the host's docker daemon and watches the containers running on it. Other containers declare the jobs they need to run via labels, and deck-chores invokes those jobs inside the relevant containers according to their configured schedule. This is pretty nice, because we can define jobs directly within the docker-compose config for other containers, and deck-chores will just pick them up and run with them.
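As a taste of what that looks like, a job is declared with labels on the target container, along these lines (the backup.sh script is purely hypothetical here - the real backup job is the subject of a later post):
services:
  borg-client:
    # ...existing borg-client configuration...
    labels:
      # deck-chores runs this command inside borg-client once a day
      deck-chores.backup.command: /root/backup.sh
      deck-chores.backup.interval: daily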
My basic configuration for deck-chores is in ~/docker/deck-chores/docker-compose.yml:
services:
  officer:
    container_name: officer
    image: ghcr.io/funkyfuture/deck-chores:1
    restart: unless-stopped
    environment:
      TIMEZONE: ${TIMEZONE}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
The only slightly exotic thing here is that the docker socket is mounted inside the deck-chores container. This allows it to read the job definitions declared by other containers and then control those containers to run the jobs. ${TIMEZONE} is defined in the accompanying .env file.
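Once it's running, the deck-chores logs are a handy way to confirm it has discovered the jobs defined on other containers:
cd ~/docker/deck-chores
docker compose up -d
docker compose logs -f officer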
In future posts, I will explain how to create jobs in containers that deck-chores can run.
Conclusion
This post provides the groundwork to begin replacing Google Workspace with alternative services. We've covered the minimum hardware and software requirements, the basis for configuring remote access, and the scaffolding for backups and job scheduling. In the next post, we will cover replacing Gmail with an alternative email setup.