Cinode maps - reaching the scale
It’s been a while since I published my last post, but it’s time to start writing again. There are many topics about Cinode internals that I’d like to cover, but before I do another deep-dive post about cryptography, I’d like to start with a different topic - Cinode maps.
This post starts a series of blog posts about my recent adventures with Cinode maps. Despite some personal challenges I faced in 2024, I was still able to push this project forward a little bit. Let me share my story then.
Getting bigger
We first learned about the maps project when I introduced Cinobelix - our brave character that uploads maps data into Cinode.
The current maps are pretty limited though - they only cover one country: Poland. Because of some interesting Cinode projects I’d like to pursue in the future, I decided to make the maps system more robust.
What I’d like to see is:
- The ability to extend the map with data from multiple regions - either entire countries or some smaller areas.
- Get more detailed zoom levels:
  - For all areas where we have data, provide a reasonable zoom level - this is already done for Poland (up to the 14th zoom level), but it should be extended to all regions that have data. At that level, we can clearly see major city streets and the outlines of areas.
  - For some small selected areas, such as cities, have the best possible zoom level that can be seen on OpenStreetMap: the 20th. At that level, we can clearly see specific buildings and all roads.
I haven’t even considered expanding coverage to the entire globe, as it would far exceed the resources available for this project.
First attempt
My first thought was that maybe it’s time to try extending the data to all of Europe. I could only make a wild guess as to whether that would work or not. Maps data is currently handled by a small server with a pretty low spec.
This server contains large but slow spinning disks. To increase my chances of handling the Europe region, I extended it with a 500GB NVMe device acting as a caching layer through lvmcache.
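Setting up lvmcache boils down to adding the fast device to the existing volume group and attaching a cache volume to the slow logical volume. A rough sketch - the volume group, logical volume, and device names here are purely illustrative:

```bash
# Illustrative names: "vg0" is the volume group holding the maps data,
# "data" is the slow logical volume on the spinning disks,
# /dev/nvme0n1 is the 500GB NVMe device.
pvcreate /dev/nvme0n1        # prepare the NVMe as an LVM physical volume
vgextend vg0 /dev/nvme0n1    # add it to the existing volume group

# Create a cache LV on the NVMe and attach it to the slow LV in one step,
# leaving some headroom for cache metadata.
lvcreate --type cache --cachemode writethrough \
  -L 450G -n data_cache vg0/data /dev/nvme0n1
```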
The wait
You may wonder how it went. Well, the whole data import process took about a month 😱. Processing this amount of data was saturating my spinning disks’ IO almost the entire time (even with the NVMe cache), not to mention the load on the CPU. During that month I had to protect the server from unexpected power losses, couldn’t reboot it, and had to minimize other activities. But I finally got the data in.
But the result wasn’t that good. The first attempts to generate tiles ran into database issues. Even with some PostgreSQL configuration updates, I wasn’t able to generate tiles beyond a certain zoom level or in denser regions. Tile generation was taking too much time, and I assume that some internal routines were cancelled due to timeouts.
Flexibility
After my first unsuccessful attempt, I decided it was time to redesign the tile generation machinery a bit to add some flexibility. The current solution uses a single overv/openstreetmap-tile-server Docker image running everything in one container:
- PostgreSQL
- tile server (Apache + mod_tile + renderd)
- initial data importer (osm2pgsql)
- synchronizer with OpenStreetMap changes (osm2pgsql + trim_osc.py)
I briefly considered importing multiple regions separately, each with its own synchronizer (I’ll write about this approach in a future post), so the next task would be to split that one container into a few smaller ones. The solution is, of course, still based on the overv/openstreetmap-tile-server Docker image.
Since we’ll be dealing with multiple containers, we’ll need orchestration software to manage those components. Having worked extensively with Kubernetes before, I thought I’d give it a try. Of course, a single server doesn’t need anything complex, so I installed k3s on my tiny box - it’s very easy to install and does not require a lot of resources.
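Installing a single-node k3s cluster is a one-liner using the upstream installer script:

```bash
# Single-node k3s installation via the official installer script
curl -sfL https://get.k3s.io | sh -

# The kubeconfig used by kubectl/helm ends up in /etc/rancher/k3s/k3s.yaml
```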
To make it easy to configure multiple regions, detail levels, and more, I decided to use Helm. It’s a great tool: the configuration of an application on Kubernetes can be easily managed through templating and values files, and it also allows building larger applications from existing third-party Helm charts, which greatly simplifies setting up building blocks such as PostgreSQL.
Identifying components
Since our goal is well defined, let’s try to imagine how it would all work together. In this approach, each region would have its own dedicated initial import and synchronization components. At that point, the biggest question was whether this approach would work at all - the first phase of the work focused on answering that question.
You may have noticed the mysterious message bus for changed tiles. This is an improvement I’d also like to implement. While working with previous iterations of the map synchronizer I noticed that the synchronization script can produce the list of map tiles that were affected by recently applied changes. In the overv/openstreetmap-tile-server Docker image, this data is used to invalidate affected tile cache entries. We could adopt a similar approach by uploading modified tiles to Cinode, eliminating the need to periodically regenerate tiles for the entire globe.
Building the Helm chart
We’ll focus on building some basic components today and setting up the project skeleton. The ultimate goal is to create a Helm chart for the entire maps system. It will contain all the components needed to run the full maps synchronization suite.
Assuming the Helm tool is already installed, creating a new Helm chart is pretty straightforward.
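In its simplest form it’s a single command; I’ll use `maps` as the chart name throughout this post, purely as an example:

```bash
helm create maps
```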
This sets up a not-so-basic Helm chart with example templates and configuration. I wanted to start with a clean slate, so I removed the `templates` folder and cleaned up the `values.yaml` file. I also simplified and tuned up the chart description file, `Chart.yaml`.
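After the cleanup it’s reduced to little more than basic metadata - roughly along these lines (the name, description, and version below are illustrative):

```yaml
apiVersion: v2
name: maps
description: Helm chart for the Cinode maps tile generation and synchronization stack
type: application
version: 0.1.0
```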
Now, let’s check whether this Helm chart, even though empty, can be installed. As a Kubernetes environment I used a local minikube, which is a perfect tool for such local tests.
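Bringing up the local cluster is a single command:

```bash
minikube start
```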
Let’s install the Helm chart:
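Something along these lines does the job, assuming the release is also called `maps` and the chart sits in the `./maps` directory created earlier.

```bash
# Install the (still empty) chart as a release named "maps"
helm install maps ./maps
```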
All good so far 🎉. Now let’s extend this simple skeleton a bit. The first component that I’ll be setting up is the PostgreSQL server storing all the maps data. Helm charts can declare dependencies on other published Helm charts, and it turns out that there’s already a great bitnami/postgresql chart that we can easily use.
To instantiate PostgreSQL through our chart, we first need to declare a dependency in the `Chart.yaml` file.
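The dependency entry needs the chart name, an acceptable version range, and the repository it comes from:

```yaml
dependencies:
  - name: postgresql
    version: "^16"
    repository: https://charts.bitnami.com/bitnami
```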
To use any external Helm chart, we need to add its source to the list of known repositories:
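For Bitnami charts that means registering the well-known Bitnami repository under a local alias.

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```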
Finally, we need to lock the version specified in `Chart.yaml` (`^16`, which matches the latest `16.x` version) to a specific value.
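That’s what `helm dependency update` does - it resolves the version constraint, downloads the chart archive into the `charts/` directory, and writes the lock file:

```bash
# Run from inside the chart directory (./maps in this example)
helm dependency update
```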
This step produced another file called `Chart.lock`.
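Its shape is roughly the following - the exact version, digest, and timestamp are computed by Helm, so the values below are just placeholders:

```yaml
dependencies:
- name: postgresql
  repository: https://charts.bitnami.com/bitnami
  version: 16.x.x              # resolved from the "^16" constraint
digest: sha256:<computed-by-helm>
generated: "2025-01-01T00:00:00Z"
```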
Now let’s see if we can install our Helm chart with the PostgreSQL dependency in place.
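An upgrade of the existing release (or a fresh install, if it’s not there yet) picks up the new dependency:

```bash
helm upgrade --install maps ./maps
```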
To check whether it went well, let’s see what was instantiated in minikube.
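Listing the created resources should be enough - with the bitnami/postgresql chart pulled in by a release called `maps`, a StatefulSet pod named roughly `maps-postgresql-0` should show up and eventually reach the Running state:

```bash
# List everything created by the release in the default namespace
kubectl get all
```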
It’s that simple 😉. With a few lines of code and a few small steps, we have a fully working PostgreSQL server. Of course, it still has to be configured, but that’s something I’ll leave for another post.
Helm chart publishing
To use the PostgreSQL Helm chart as a dependency, it must be published by its author. Publishing a Helm chart requires uploading the packaged chart somewhere and creating an index file listing all available versions.
I would like to do the same with the maps project - publish the chart publicly so that anybody willing to play with it can do so. Fortunately, this can be easily achieved using GitHub Pages with the help of GitHub Actions. Setting up GitHub Pages is straightforward, so I won’t describe it here. As a result, the chart is available under a clean URL: https://maps-charts.cinodenet.org.
Once GitHub Pages was set up, I automated the publishing of the Helm chart with a GitHub workflow.
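One convenient way to wire this up is the helm/chart-releaser-action: it packages the charts, attaches them to GitHub releases, and maintains the repository index on the gh-pages branch, which GitHub Pages then serves. A simplified sketch - the branch name and charts directory are illustrative:

```yaml
name: Release Charts

on:
  push:
    branches: [main]

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write          # needed to create releases and push the index to gh-pages
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0       # chart-releaser inspects tags and history

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"

      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1
        with:
          charts_dir: charts   # directory containing the Helm chart(s)
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```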
Now, anyone can install the Helm chart using this simple command:
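For example, Helm can be pointed directly at the published repository; the chart and release names (`maps`) below are illustrative.

```bash
helm install maps maps --repo https://maps-charts.cinodenet.org
```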
You can find the source code for this blog post in this repository. Keep in mind, this chart is currently a simple PostgreSQL-only setup. More components will be added soon - stay tuned!
Author BYO
LastMod 2025-01-11