It’s been a while since I published my last post, but it’s time to start writing again. There are many topics about Cinode internals that I’d like to cover, but before I do another deep dive into cryptography, I’d like to start with something different - Cinode maps.

This post starts a series of blog posts about my recent adventures with Cinode maps. Despite some personal challenges I faced in 2024, I was still able to push this project forward a little bit. Let me share my story then.

Getting bigger

We first learned about the maps project when I introduced Cinobelix - our brave character who uploads map data into Cinode.

The current maps are pretty limited, though - they cover only one country: Poland. Because of some interesting Cinode projects I’d like to pursue in the future, I decided to make the maps system more robust.

What I’d like to see is:

  • The ability to extend the map with data from multiple regions - either entire countries or some smaller areas.
  • More detailed zoom levels:
    • For all areas where we have data, provide a reasonable zoom level - this is already done for Poland (up to zoom level 14), but it should be extended to all regions with data. At that level, we can clearly see major city streets and the outlines of areas.
    • For some small selected areas, such as cities, provide the best zoom level available on OpenStreetMap: 20. At that level, we can clearly see individual buildings and all roads.

I haven’t even considered expanding coverage to the entire globe, as it would far exceed the resources available for this project.
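
A quick back-of-the-envelope calculation shows why: the number of tiles quadruples with each zoom level, so at zoom z there are 4^z (that is, 2^(2z)) tiles covering the globe:

```shell
# Tiles covering the whole globe at zoom z: 4^z = 2^(2z)
echo $((1 << 28))   # zoom 14: 268435456 tiles (~268 million)
echo $((1 << 40))   # zoom 20: 1099511627776 tiles (~1.1 trillion)
```

Even at just a few kilobytes per tile, pre-rendering zoom 20 for the whole globe would run into petabytes - hence the idea of restricting the highest zoom levels to selected areas.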

First attempt

My first thought was that maybe it’s time to try extending the data to all of Europe. I could only make a wild guess as to whether that would work. Map data is currently handled by a small server with a pretty low spec:

Cinode-store server specs

This server contains large but slow spinning disks. To increase my chances of handling the Europe region, I extended it with a 500GB NVMe device acting as a caching layer through lvmcache.

The wait

You may wonder how it went. Well, the whole data import process took about a month 😱. Processing this amount of data saturated my spinning disks’ I/O almost the entire time (even with the NVMe cache), not to mention the load on the CPU. During that month I had to protect the server from unexpected power losses, couldn’t reboot it, and had to minimize other activities. But I finally got the data in.

But the result wasn’t that good. My first attempts to generate tiles ran into database issues. Even with some PostgreSQL configuration updates, I wasn’t able to generate tiles beyond a certain zoom level or in more densely mapped regions. Tile generation was taking too much time, and I assume that some internal routines were cancelled due to timeouts.
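
For context, the kind of postgresql.conf tuning typically recommended for large OSM imports looks something like this (illustrative values only - not the exact configuration I used):

```ini
# postgresql.conf - example tuning for a large osm2pgsql-style import
shared_buffers = 2GB            # keep hot pages in memory
work_mem = 256MB                # per-sort/hash memory
maintenance_work_mem = 4GB      # speeds up index creation
autovacuum = off                # only during the initial import!
checkpoint_timeout = 1h         # fewer, larger checkpoints
max_wal_size = 10GB
```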

Flexibility

After my first unsuccessful attempt, I decided it was time to redesign the tile generation machinery a bit to add some flexibility. The current solution uses a single overv/openstreetmap-tile-server Docker image running everything in that single container:

I briefly considered importing multiple regions separately, each with its own synchronizer (I’ll write about this approach in a future post), so the next task was to split the one container into a few smaller ones. The solution is, of course, still based on the overv/openstreetmap-tile-server Docker image.

Since we’ll be dealing with multiple containers, we’ll need orchestration software to manage those components. Having worked extensively with Kubernetes before, I thought I’d give it a try. Of course, for a single server we don’t need anything complex, so I installed k3s on my tiny box - it’s very easy to install and doesn’t require a lot of resources.

To make it easy to configure multiple regions, detail levels, and more, I decided to use Helm. It’s a great tool: the configuration of an application on Kubernetes can be managed through templating and value files, and it also allows building larger applications from existing third-party Helm charts, greatly simplifying the setup of building blocks such as PostgreSQL.
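
To illustrate where the templating pays off, the per-region setup could eventually be driven by a values file shaped roughly like this (a hypothetical sketch - the keys and extract URLs are made up for illustration):

```yaml
# values.yaml (hypothetical) - one entry per imported region
regions:
  - name: poland
    extract: https://download.geofabrik.de/europe/poland-latest.osm.pbf
    maxZoom: 14
  - name: warsaw
    extract: https://download.geofabrik.de/europe/poland/mazowieckie-latest.osm.pbf
    maxZoom: 20
```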

Identifying components

Since our goal is well defined, let’s try to imagine how it would all work together:

Architecture

In this approach, each region would have its own, dedicated initial importing and synchronization component. At that point, the biggest question was whether this approach would work at all - the first phase of the work focused on answering that question.

You may have noticed the mysterious message bus for changed tiles. This is an improvement I’d also like to implement. While working with previous iterations of the map synchronizer I noticed that the synchronization script can produce the list of map tiles that were affected by recently applied changes. In the overv/openstreetmap-tile-server Docker image, this data is used to invalidate affected tile cache entries. We could adopt a similar approach by uploading modified tiles to Cinode, eliminating the need to periodically regenerate tiles for the entire globe.
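
For the curious, the expiry list is just a plain-text file with one z/x/y tile coordinate per line (the format produced by osm2pgsql’s tile expiry output). A minimal sketch of consuming such a list, assuming a file named expire.list:

```shell
# create a sample expire.list with one z/x/y entry per line
printf '14/9162/5478\n14/9163/5478\n14/9163/5479\n' > expire.list

# count how many tiles were invalidated at each zoom level
awk -F/ '{count[$1]++} END {for (z in count) print z": "count[z]}' expire.list
```

A consumer on the message bus could use exactly this kind of list to decide which tiles to re-render and upload to Cinode.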

Building the Helm chart

We’ll focus today on building some basic components and setting up the project skeleton. The ultimate goal is to create a Helm chart for the entire maps system, containing all the components needed to run the full maps synchronization suite.

Assuming the Helm tool is already installed, creating a new Helm chart is pretty straightforward:

$ helm create osm-machinery
Creating osm-machinery

This sets up a not-so-basic Helm chart with example templates and configuration. I wanted to start with a clean slate, so I removed the templates folder and cleaned up the values.yaml file. I also simplified and tuned the chart description file, Chart.yaml:

apiVersion: v2

name: osm-machinery

description: Toolbox to synchronize cinode maps with OpenStreetMap

type: application

version: "0.0.1"

home: https://maps.cinodenet.org/

sources:
  - https://github.com/cinode/maps

keywords:
  - cinode
  - maps
  - OpenStreetMap

kubeVersion: ">= 1.23.0"  # Support generic ephemeral volumes
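
The kubeVersion constraint is there because the chart will rely on generic ephemeral volumes, which became generally available in Kubernetes 1.23. For reference, such a volume is declared inline in a pod spec like this (an illustrative snippet, not yet part of the chart):

```yaml
# generic ephemeral volume - a PVC created and deleted along with the pod
volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
```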

Now, let’s check whether this Helm chart, even though empty, can be installed. As a Kubernetes environment I used a local minikube, which is a perfect tool for such local tests:

$ minikube start
😄  minikube v1.34.0 on Ubuntu 24.04
     ....
✨  Automatically selected the docker driver. Other choices: kvm2, ssh
📌  Using Docker driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.45 ...
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Let’s install the Helm chart:

$ helm install osm-machinery .
NAME: osm-machinery
LAST DEPLOYED: Wed Jan  1 23:27:47 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

All good so far 🎉. Now let’s extend this simple skeleton a bit. The first component I’ll set up is the PostgreSQL server storing all the map data. Helm charts can declare dependencies on other published Helm charts, and it turns out there’s already a great bitnami/postgresql chart that we can easily use.

To instantiate PostgreSQL through our chart, we first need to declare a dependency in the Chart.yaml file:

...

dependencies:
  - name: postgresql
    version: "^16"
    repository: "https://charts.bitnami.com/bitnami"
    alias: postgres

To use any external Helm chart, we need to add its source to the list of known repositories:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

Finally, we need to lock the version specified in Chart.yaml (^16, which matches the latest 16.x version) to a specific value:

$ helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading postgresql from repo https://charts.bitnami.com/bitnami
Pulled: registry-1.docker.io/bitnamicharts/postgresql:16.3.5
Digest: sha256:ab69f4c4746cc4bee8175bccbbc3ebfbcd07e5502366c6804119b2b7663ad9ea
Deleting outdated charts

This step produced another file called Chart.lock:

1
2
3
4
5
6
dependencies:
- name: postgresql
  repository: https://charts.bitnami.com/bitnami
  version: 16.3.5
digest: sha256:087c3f2d894e4d1c4ee4428c22d3a1464483a080e34ce965953273d64114f82c
generated: "2025-01-05T23:38:40.928330351+01:00"

Now let’s see if we can install our Helm chart with the PostgreSQL dependency:

$ helm upgrade osm-machinery .
Release "osm-machinery" has been upgraded. Happy Helming!
NAME: osm-machinery
LAST DEPLOYED: Wed Jan  1 23:47:20 2025
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

To validate that it went well, let’s see what was instantiated in minikube:

$ kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/osm-machinery-postgres-0   1/1     Running   0          20m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes                  ClusterIP   10.96.0.1      <none>        443/TCP    21m
service/osm-machinery-postgres      ClusterIP   10.110.90.39   <none>        5432/TCP   20m
service/osm-machinery-postgres-hl   ClusterIP   None           <none>        5432/TCP   20m

NAME                                      READY   AGE
statefulset.apps/osm-machinery-postgres   1/1     20m

It’s that simple 😉. With a few lines of code and a few small steps, we have a fully working PostgreSQL server. Of course, it still has to be configured, but that’s something I’ll leave for another post.

Helm chart publishing

To use the PostgreSQL Helm chart as a dependency, it must be published by its author. Publishing a Helm chart requires uploading the packaged chart somewhere and creating an index file listing all available versions.

I would like to do the same with the maps project - provide the chart publicly so that anybody willing to play with it can do so. Fortunately, this can easily be achieved using GitHub Pages with the help of GitHub Actions. Setting up GitHub Pages is straightforward, so I won’t describe it here. As a result, I created a clean URL where the chart repository is available: https://maps-charts.cinodenet.org.

Once GitHub Pages was set up, I automated the publishing of the Helm chart with this GitHub workflow:

name: Release Charts

on:
  push:
    tags:
      - "v*"

jobs:
  release:
    # depending on default permission settings for your org (contents being read-only or read-write for workloads), you will have to add permissions
    # see: https://docs.github.com/en/actions/security-guides/automatic-token-authentication#modifying-the-permissions-for-the-github_token
    permissions:
      contents: write
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"

      - name: Install Helm
        uses: azure/setup-helm@v4
        env:
          GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

      - name: Register external repositories
        run: |
          helm repo add bitnami https://charts.bitnami.com/bitnami          

      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
        with:
          charts_dir: helm

Now, anyone can install the Helm chart using this simple command:

$ helm upgrade --install --wait maps https://github.com/cinode/maps/releases/download/osm-machinery-0.0.1/osm-machinery-0.0.1.tgz 
Release "maps" does not exist. Installing it now.
NAME: maps
LAST DEPLOYED: Tue Dec 31 17:34:09 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

You can find the source code for this blog post in this repository. Keep in mind that this chart is currently a simple PostgreSQL-only setup. More components will be added soon - stay tuned!