Cinode Maps - PostgreSQL Tuning
In the previous post, I introduced the basic structure of the Cinode Maps Helm chart. This resulted in a simple PostgreSQL installation with default settings. Before it can support Cinode Maps, however, the setup requires slight tweaking. Let me guide you through the changes I made.
The source of information
The original Cinode Maps generator was built using the overv/openstreetmap-tile-server Docker image. It is a perfect solution to quickly spawn a custom map tiles server as a single “batteries-included” Docker container. We will keep it as a basis for the new multi-container architecture - let’s try to recreate the PostgreSQL setup as closely as possible.
Looking into the source code of that Docker image, we can identify the following:
- the most important custom configuration lives in the postgresql.custom.conf.tmpl file
- there’s also some additional setup in the entrypoint script of the image.
Injecting custom PostgreSQL server configuration
Let’s start by recreating the configuration of the PostgreSQL server using the template from the original Docker image. Applying a custom configuration through the Bitnami PostgreSQL chart is pretty straightforward, as it has a dedicated Helm value for that:
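A sketch of the relevant values file fragment: the Bitnami chart’s `primary.extendedConfiguration` value appends raw settings to `postgresql.conf`. The tuning numbers below are illustrative placeholders, not the exact values from the original template:

```yaml
# values.yaml (sketch) - tuning numbers are illustrative, not the exact
# values from postgresql.custom.conf.tmpl
primary:
  extendedConfiguration: |
    shared_buffers = 256MB
    work_mem = 256MB
    maintenance_work_mem = 1GB
    effective_cache_size = 2GB
    # appended by the original image's entrypoint, embedded here directly
    autovacuum = on
```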
This configuration closely mirrors the configuration template in the base tile server Docker image, with one addition: an `autovacuum = on` line at the end. The original image appends it in its entrypoint script, but it can be embedded directly into our configuration.
Switching database name and the main database user
The overv/openstreetmap-tile-server Docker image uses a predefined PostgreSQL user called `renderer` with a default password also set to `renderer`. The password could be adjusted for improved security, but at this stage I opted to retain the defaults to prevent obscure errors caused by authentication issues.
Setting up the user starts at this line of the entrypoint file of the original Docker image. The same thing can easily be configured in the Bitnami PostgreSQL chart through the values file:
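With the Bitnami chart, the user and database are declared in the `auth` section of the values file. A minimal sketch - the `gis` database name matches the one used by the original tile server image:

```yaml
auth:
  username: renderer
  password: renderer   # default from the original image; change for production use
  database: gis
```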
Installing necessary extensions and setting up permissions
The tile server will require two extensions to work properly: postgis and hstore. Those are manually added in the original entrypoint script, which also sets up permissions for the `renderer` user right after. We can express both operations using an initialization script:
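A sketch of such a script, wired in through the chart’s `primary.initdb.scripts` value. The SQL is an approximation of what the original entrypoint runs; connecting as the `postgres` superuser assumes `auth.postgresPassword` is set, and `POSTGRES_POSTGRES_PASSWORD` is (to my knowledge) the environment variable the Bitnami image uses for that password:

```yaml
primary:
  initdb:
    scripts:
      10-setup.sh: |
        #!/bin/bash
        # psql has no password option, so credentials go through PGPASSWORD
        export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"
        psql -U postgres -d gis <<'EOSQL'
          CREATE EXTENSION IF NOT EXISTS postgis;
          CREATE EXTENSION IF NOT EXISTS hstore;
          -- let the renderer user manage the PostGIS metadata tables
          ALTER TABLE geometry_columns OWNER TO renderer;
          ALTER TABLE spatial_ref_sys OWNER TO renderer;
        EOSQL
```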
The initial `export PGPASSWORD=...` is needed to pass proper credentials to the `psql` command. The password can only be passed through the `PGPASSWORD` environment variable - unlike the user, which is specified with the `-U` option, the password has no equivalent command-line option (likely due to security concerns).
Debugging this script was a bit challenging. It only executes once during database setup, and uninstalling the Helm chart doesn’t always remove all data. As a result, simply reinstalling the chart may skip the script due to lingering files; manual cleanup of the data directory resolved the issue.
Configuring the shared memory volume
The PostgreSQL instance used by the tile server is very sensitive to the size of its shared memory (`/dev/shm`) volume. For small regions this is not a big issue, but merging many regions, including some larger ones, quickly exposes the problem. The Bitnami PostgreSQL chart is already prepared for this, and with a small tweak in the values file we get what we want - a large local shared memory volume:
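A sketch of the tweak, using the chart’s `shmVolume` settings; the 1Gi size is an arbitrary example, to be adjusted to the largest planned import:

```yaml
shmVolume:
  enabled: true      # mount a memory-backed emptyDir at /dev/shm
  sizeLimit: 1Gi     # illustrative; size to match your largest import
```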
Persistent storage
Since the maps setup will initially be a single-machine one, I’d like to have control over the storage used for the PostgreSQL server and other data, such as the tiles cache. In my case, this should be a volume with an NVMe device to ensure sufficient I/O speed.
I couldn’t easily figure out how to make the default k3s storage provisioner use a specific folder, so I looked for alternatives.
What worked for me was setting up a custom persistent volume of the `hostPath` type and instructing the Bitnami PostgreSQL chart to use that volume.
The volume itself is pretty straightforward:
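A minimal sketch of such a volume; the name, size, path, and storage class name are placeholders for my actual setup:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: postgresql-nvme   # a deliberately unique class name
  hostPath:
    path: /mnt/nvme/postgresql
    type: DirectoryOrCreate
```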
To bind such a volume to a persistent volume claim, I used a unique `storageClassName` in both the volume and the claim object. The uniqueness of the `storageClassName` property guarantees that the volume won’t accidentally be bound to some other claim that matches the volume’s properties.
To use it in the Bitnami PostgreSQL chart we tweak the values file:
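A sketch of the relevant values, with placeholder names; the `storageClass` must match the class set on the hand-made persistent volume:

```yaml
primary:
  persistence:
    storageClass: postgresql-nvme   # matches the custom persistent volume
    size: 100Gi
# consumed only by my extra persistent-volume template, not by the Bitnami chart
hostPath: /mnt/nvme/postgresql
```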
The presence of the `hostPath` value, absent from the original PostgreSQL chart, might surprise some readers. That entry is only consumed by my additional persistent-volume template; if it is absent, no persistent volume is created. The Bitnami chart itself does not use that value at all.
The manually created persistent volume approach, however, has some drawbacks. It’s limited to a single-host setup and may require manual filesystem permission changes to allow the container to access that space.
Disabling resource control
To mimic the behavior of the initial setup using a single Docker container, I switched off any resource requests or limits:
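A sketch of the override; in recent versions of the Bitnami chart this means clearing the resource preset as well, while older versions only need an empty `primary.resources`:

```yaml
primary:
  resourcesPreset: "none"   # disable the chart's built-in request/limit preset
  resources: {}             # and set no explicit requests or limits
```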
That way, the PostgreSQL server is free to use as many local resources as it needs, similar to running a single Docker container without explicitly specified limits. This was necessary because the Bitnami PostgreSQL chart ships with default values that do not fit our case well.
Conclusion
With all those changes, the spawned PostgreSQL instance mimics the one installed in the original overv/openstreetmap-tile-server Docker image. You can find the code in the v0.0.2 version of the Helm chart.
That’s all for today. Next time, I’ll begin working on the tile server itself - stay tuned!
Author BYO
LastMod 2025-03-03