
Host your own Mapbox GL JS vector tiles map

I’ve done some research recently on how to host my own online map viewer with Mapbox GL JS, an excellent and modern open-source alternative to Google Maps. The server should also serve its own preprocessed map data from OpenStreetMap planet extracts. No external or third-party service is required; see the demo here (higher zoom levels are only available for the city of Karlsruhe).

In the following, I’m going to show how to generate your own vector tiles for the backend map server and how to set up a running demo instance of Mapbox GL JS that uses our own map data.

Generate vector tiles

Usually, we would download the raw OpenStreetMap data extract of a specific region and maybe further “crop” or shrink it to a smaller part to save computation time. For the city of Karlsruhe, I took the geographic boundaries from this page. Generating the vector tiles from this data set would be very easy with the OpenMapTiles framework:
pacaur -S osmconvert
git clone https://github.com/openmaptiles/openmaptiles.git
cd openmaptiles
make
mkdir data
wget "https://download.geofabrik.de/europe/germany/baden-wuerttemberg/karlsruhe-regbez-latest.osm.pbf"
osmconvert karlsruhe-regbez-latest.osm.pbf -b=7.893,48.73,8.816,49.246 -o=data/karlsruhe-latest.osm.pbf
sed -i "s/QUICKSTART_MAX_ZOOM=.*$/QUICKSTART_MAX_ZOOM=14/g" .env
./quickstart.sh karlsruhe-latest

Before starting the quickstart script with my custom extract file name, I also set the maximum level of detail (max zoom = 14). Unfortunately, the quickstart script has a bug with custom extracts and would need a complete rework to handle them again, so the manual workaround is a bit more complex:
git clone https://github.com/openmaptiles/openmaptiles.git
cd openmaptiles
./quickstart.sh karlsruhe-regbez
cp ./data/docker-compose-config.yml karlsruhe-config.yml

Again, I’m using the quickstart script to generate vector tiles for a predefined region. Copy the standard config and modify it according to your BBOX (smaller boundaries), the name of the extract and the zoom level (a higher zoom level means more computation time), as sketched below.
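What the adjusted karlsruhe-config.yml could look like (a minimal sketch; the exact variable names depend on your OpenMapTiles version, so compare with the generated data/docker-compose-config.yml):
version: "2"
services:
  generate-vectortiles:
    environment:
      # Karlsruhe bounding box, the same values as in the osmconvert step above
      BBOX: "7.893,48.73,8.816,49.246"
      OSM_AREA_NAME: "karlsruhe-regbez"
      MIN_ZOOM: "0"
      MAX_ZOOM: "14"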

Start generating the vector tiles using the OpenMapTiles schema:
docker-compose up -d postgres
docker-compose -f docker-compose.yml -f ./karlsruhe-config.yml run --rm generate-vectortiles
docker-compose run --rm openmaptiles-tools generate-metadata ./data/tiles.mbtiles
docker-compose run --rm openmaptiles-tools chmod 666 ./data/tiles.mbtiles
cp ./data/tiles.mbtiles ./data/karlsruhe.mbtiles

The compiled vector tiles can be directly tested with the included tile server:
make start-tileserver

Setup Mapbox GL JS frontend

The frontend example code (see demo link above) is available in our GitLab repository. Everything you need to host this demo is a web server with PHP support and the SQLite module. Then copy the vector tiles file “karlsruhe.mbtiles” into the tileserver subdirectory.
cd /var/www
pacman -S php-sqlite
git clone https://git.project-insanity.org/onny/web-mapbox-gl-js-offline-example.git
mv /tmp/karlsruhe.mbtiles web-mapbox-gl-js-offline-example/tileserver/

Don’t forget to enable the PHP SQLite module in your php.ini file:
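The exact lines depend on your distribution and PHP version; on a typical setup the relevant extensions to uncomment are:
extension=pdo_sqlite
extension=sqlite3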

The following URLs need to be changed according to your domain name (e.g. to localhost or example.com) in the style.json file:
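For illustration only (the actual keys and query parameters are defined by the style.json and tileserver.php shipped in the repository, so take the exact values from there), the relevant entries look roughly like this:
{
  "sources": {
    "openmaptiles": {
      "type": "vector",
      "tiles": ["https://example.com/tileserver/tileserver.php?mbtiles=karlsruhe.mbtiles&z={z}&x={x}&y={y}"],
      "minzoom": 0,
      "maxzoom": 14
    }
  },
  "sprite": "https://example.com/sprite/sprite",
  "glyphs": "https://example.com/fonts/{fontstack}/{range}.pbf"
}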

Note that the tiles URL also contains a reference to the “karlsruhe.mbtiles” file. The min- and maxzoom properties are very important if you want to be able to “overzoom” your map above zoom level 14; they do not define zoom restrictions in the Mapbox GL JS interface. Here, the maxzoom property should not be higher than the maximum zoom level of your generated vector tiles, otherwise the tileserver will serve “empty” tilesets which will result in a blank view in Mapbox GL JS.
I forked the original OSM Bright map style from here.

Easily setup Signal 2FA on Nextcloud 14

Two-factor authentication (2FA for short) is an important security concept to prevent unauthorized access to your web applications. Popular online services like Google Mail, Instagram or Facebook already provide this mechanism to secure user accounts with an additional one-time token. If someone manages to obtain your username and password combination, for example on a public internet terminal in a library or at the airport, he or she won’t be able to gain access from a second device without knowing the additional security token (the second factor). This token is sent to you over a different channel or device.

Starting with version 14 of Nextcloud, there’s a new app called Two-Factor Gateway which can send these additional tokens via Signal Messenger, Threema, e-mail etc. Setting up this infrastructure is a bit more involved since your server must support one of these gateways. In this post I’ll describe how to set up the Signal 2FA gateway on an Arch Linux machine.

Signal 2FA setup

To get started, it is recommended to get a new, temporary “disposable mobile phone number” for the registration process. I ordered a batch of phone numbers on the site getsmscode.com (which is a bit shady …) and was able to register and verify this new number on the Signal servers. First, install the Nextcloud app and the gateway daemon:
pacaur -S nextcloud-app-twofactor-gateway signal-web-gateway

Put your phone number into the configuration file at /etc/webapps/signal-web-gateway/config.yml:
[...]
tel: "+1774****"
[...]

Configure the gateway and verify the phone number (you’ll receive the verification SMS on the merchant website):
cd /usr/share/webapps/nextcloud
sudo -u http ./occ twofactorauth:gateway:configure signal # leave default options (press return)
cd /var/lib/signal-web-gateway
sudo -u signal signal-web-gateway # enter verification
systemctl enable --now signal-web-gateway

Enable the Two-Factor Gateway app in Nextcloud and configure it on your user settings page in the security section (see the following picture).
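Alternatively, the app can be enabled from the command line via occ (assuming the app id is twofactor_gateway):
cd /usr/share/webapps/nextcloud
sudo -u http ./occ app:enable twofactor_gateway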


The next time you log in to Nextcloud you’ll be asked for the token after entering your username and password.

Device and client specific passwords

Other clients accessing your Nextcloud instance might need to be reconfigured after enabling two-factor authentication. For instance, I use the Android app DAVdroid for syncing my contacts and calendar entries and it won’t be able to log in with 2FA enabled. In such cases, you’ll need to generate an app-specific password, as shown in the picture above, which will be used only by this app and won’t require 2FA.

Cloud synchronization performance tests of various Linux clients

I already tried using Nextcloud as a backup solution that syncs my complete home directory into a Nextcloud instance. This is very practical if you want to have access to your files when you don’t have your laptop with you or when it gets lost. Of course there are some drawbacks to this approach: remote files are unencrypted (so that the Nextcloud file explorer can display them), accidental deletions result in data loss on both sides, and file versioning or snapshots may be lacking (depending on your Nextcloud configuration).
Another issue is the performance when syncing many small files. This is a problem for the Nextcloud Desktop client because it creates a separate HTTP (WebDAV) request for every single file. I created a bug report on GitHub to start a discussion about finding a better and faster approach.
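To illustrate what “one request per file” means, each uploaded file roughly corresponds to an individual WebDAV PUT request like the following (a hypothetical single-file example against the test instance used below, assuming the target folder already exists on the server):
curl -u test:test123 -T sync/openssh-7.7p1/ssh.c "https://nextcloud.project-insanity.org/remote.php/webdav/openssh-7.7p1/ssh.c"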

In this post I’m going to compare the performance of some Linux file synchronization clients and further describe how to automate the benchmarking.

The test results

First of all, here are my test results. This graph shows the time in seconds needed to synchronize ~8MB of local-to-remote and ~1MB of remote-to-local files (972 single files):

Since my connection speed is not the bottleneck in this test, Nextcloud needs up to 8 minutes to index and up-/download the files. Considering the very small total file size, this is not a very good result. I’m not sure what technique Dropbox is using, but I guess it’s a proprietary, more advanced protocol. Rclone is a really interesting tool which supports various cloud services. One limitation is that rclone does not support bidirectional file synchronization yet, so it is not really comparable to the other clients.

Preparations

In this part I’ll sketch the setup I used for the benchmark. It runs in a container on Arch Linux using systemd-nspawn, as a user called test. In the next steps, we’ll install the AUR helper pacaur and the syncing desktop clients.
pacman -S arch-install-scripts
btrfs subvol create /var/lib/container/archlinux-base
mkdir /etc/systemd/nspawn
pacstrap /var/lib/container/archlinux-base base base-devel
systemctl enable --now systemd-networkd systemd-resolved
systemd-nspawn -nD /var/lib/machines/archlinux-nextcloudcli --template=/var/lib/container/archlinux-base
systemctl start systemd-nspawn@archlinux-nextcloudcli
machinectl shell root@archlinux-nextcloudcli /bin/bash -c "systemctl enable --now systemd-networkd systemd-resolved"
machinectl shell root@archlinux-nextcloudcli
useradd -m test
passwd test
vim /etc/sudoers # add user test
su test
cd
curl "https://aur.archlinux.org/cgit/aur.git/snapshot/cower.tar.gz" | tar xz -C .
curl "https://aur.archlinux.org/cgit/aur.git/snapshot/pacaur.tar.gz" | tar xz -C .
cd cower
makepkg -si --skipinteg
cd ../pacaur
makepkg -si
pacaur -S nextcloud-client rclone dropbox-cli

In our test scenario we’ll sync the OpenSSH source code to the server and also test the bidirectional synchronisation performance by putting the mosh source code into the Nextcloud instance.
su test
cd
wget https://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-7.7p1.tar.gz
wget https://github.com/mobile-shell/mosh/archive/mosh-1.3.2.tar.gz
mkdir sync sync_remote
tar -xvf openssh-7.7p1.tar.gz -C sync
tar -xvf mosh-1.3.2.tar.gz -C sync_remote
du -hs sync/openssh-7.7p1 sync_remote/mosh-mosh-1.3.2
# 8.3M sync/openssh-7.7p1
# 1.3M sync_remote/mosh-mosh-1.3.2
find sync/openssh-7.7p1 sync_remote/mosh-mosh-1.3.2 | wc -l
# 972
nextcloudcmd -h -n -u test -p test123 sync_remote https://nextcloud.project-insanity.org/remote.php/webdav/

After these steps we have the local folder sync and, on the remote target, the folder mosh-mosh-1.3.2. Furthermore, we have Nextcloud Desktop 2.3.3 (the latest stable version), rclone and Dropbox installed.

Starting the tests

The following command starts the synchronisation with Nextcloud Client 2.3.3 (stable):
time nextcloudcmd -h -n -u test -p test123 /home/test/sync https://nextcloud.project-insanity.org/remote.php/webdav/

The prepended time command returns the running time of the command.
In the next step we’ll install Nextcloud Desktop 2.5.0 Beta 2 (the latest development version at the time), remove all sync databases and rerun the test:
pacaur -S nextcloud-desktop-git
rm -r sync/._sync* sync/mosh-mosh-1.3.2
time nextcloudcmd -h -n -u test -p test123 /home/test/sync https://nextcloud.project-insanity.org/remote.php/webdav/

Starting the test with rclone is different: you first have to set up your Nextcloud instance as a remote with the configuration wizard.
rclone config
rm -r sync/._sync* sync/mosh-mosh-1.3.2
time rclone copy /home/test/sync remote:
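For reference, the wizard stores the remote definition in ~/.config/rclone/rclone.conf; for a Nextcloud WebDAV backend it looks roughly like this (values are illustrative, the password is stored in obscured form by rclone):
[remote]
type = webdav
url = https://nextcloud.project-insanity.org/remote.php/webdav/
vendor = nextcloud
user = test
pass = <obscured password>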

In the case of Dropbox, I had to copy the files into the synced directory and simultaneously copy the remote files to a local folder to trigger the sync. The filestatus command shows when it is finished.
dropbox-cli filestatus
date && cp -r sync/openssh-7.7p1 Dropbox/ && cp -r Dropbox/mosh-mosh-1.3.2 /tmp/
date

Conclusion

This very small and limited test shows that commercial alternatives to Nextcloud, such as Dropbox, offer faster and better desktop tools for file synchronization. It is not clear whether the Nextcloud developers will address this issue any time soon, and to my knowledge there are no alternative clients with better performance. Let me know if you know other file synchronization tools for Linux which I could cover in future tests.

Backing up encrypted and compressed VM snapshot to Azure Cloud

For some time now I have been thinking about a good backup solution for our root server. We are using our hard drives in RAID0 mode, which means the two drives are not mirrored and we can use the complete 5 TB of space. In this scenario, complete data loss is quite likely sooner or later, as soon as one of the two hard drives fails.
One way to mitigate this is a remote backup of the individual VM images. Using LVM it’s possible to take a snapshot of a running virtual machine image, so we can safely compress and transfer the image at a specific state.
Since my home server doesn’t have enough space to store the backup, I was looking for cheap “cloud storage”. Besides Amazon AWS there’s also Microsoft Azure. The price per gigabyte is quite good for a low-latency and low-redundancy option. To register at Azure you’ll need a valid credit card. After that, you can test the service in trial mode for free.

Create backup

Transferring large files to Azure is a bit tricky. I had difficulties using the official client software called azcopy. I found another version of this tool, a not yet released preview: azcopy-v10. Using this version, I was able to copy larger files of 500 GB+ successfully. I created an AUR package, so it is easy to install on Arch Linux.
Together with LVM and GnuPG, I combined several commands so that I could compress, encrypt and transfer the VM snapshot in a single step :D Assuming the active image you want to back up is called “mail” and resides in a volume group “vg0”, you can create a snapshot with this command:
lvcreate -s -n mail_snap -L 20G /dev/vg0/mail

Install azcopy-v10 and start the transfer:
pacaur -S azcopy-v10
pv -cN source /dev/vg0/mail_snap | gpg --batch --passphrase "my_secret_password" --symmetric --compress-algo zlib | azcopy cp "https://myaccount.blob.core.windows.net/mycontainer/mail_$(date +"%Y-%m-%d").img.gpg?sas"

This is what the command does:

  • With the command pv, we are piping the contents of the snapshot to gpg and we’ll have an additional progress bar in our terminal.
  • GPG encrypts the snapshot with a passphrase which you have to define. Please note that this usage is considered unsafe because you should never type or provide your passwords in plain text; please consult the gpg manual on how to set up asymmetric encryption for better security (see the sketch after this list). Furthermore, gpg uses zlib to compress the archive.
  • The last part of the chain is azcopy, which reads our encrypted and compressed data stream from stdin. Here you have to provide the URL of your storage account on Azure, the destination filename and the shared access signature (SAS) token. This information can be found in the Azure portal where you create your blob storage account.
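A variant of the pipeline using asymmetric encryption could look like this (a sketch only; backup@example.org is a hypothetical recipient whose public key you would have to import beforehand):
pv -cN source /dev/vg0/mail_snap | gpg --encrypt --recipient backup@example.org --compress-algo zlib | azcopy cp "https://myaccount.blob.core.windows.net/mycontainer/mail_$(date +"%Y-%m-%d").img.gpg?sas"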

SAS Token inside the Azure portal


After the transfer is complete, you can remove the snapshot from LVM:
lvremove /dev/vg0/mail_snap

Restore backup

To restore a backup, just use azcopy as well:
azcopy cp "https://myaccount.blob.core.windows.net/mycontainer/mail.img.gpg?sas" /mnt/playground.img.gpg
gpg -o /mnt/playground.img -d /mnt/playground.img.gpg

Gpg will ask for the passphrase you specified before.
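To write the decrypted image back to a logical volume, something like the following could be used (assuming the target volume /dev/vg0/mail already exists and is at least as large as the image):
gpg -d /mnt/playground.img.gpg | dd of=/dev/vg0/mail bs=4M status=progress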

Auto-update Android apps with F-Droid & Yalp Store

I consider auto-updates of userland software to be an important and also convenient security feature, especially on mobile platforms. As far as I know, this is already the default behaviour on Android systems with the Google Play Store preinstalled.

Some time ago I switched from the Play Store to the open-source F-Droid market, which offers many good free and open-source apps as an alternative. Since I couldn’t yet find a good replacement for Scout, Soundhound etc., I also use the open-source app Yalp Store to fetch these apps and updates from Google without requiring Gapps or a Google account.

Usually, third-party apps or installation files (APKs) can be installed without “rooting” the phone (acquiring super-user permissions), but you have to explicitly grant permission for every single installation or update. If you want to automate these steps, you have to install Yalp Store and F-Droid as system apps.

F-Droid Privileged Extension

Instead of installing the usual F-Droid APK, you can also flash F-Droid as a so-called “privileged extension”. It comes as a zip file which you can obtain here. Put this zip file on your mobile phone storage and reboot into your phone’s recovery mode. In my setup I was using the recovery app TWRP, which has to be installed manually on a rooted phone. Unfortunately, rooting a phone and installing a recovery app is a difficult step which I cannot cover here. If you already have TWRP or something similar installed, I recommend doing a full system backup before flashing anything. In recovery, select and install the F-Droid privileged extension zip file.
After rebooting back into Android, you have to change the following settings inside F-Droid to enable auto-updates:

  • Enable expert mode
  • Enable privileged extension
  • Enable auto-update, e.g. in an interval of every day
  • Automatically install apps in background

Yalp store auto-update

Yalp Store uses a different technique to obtain system permissions. It relies on a backend which, once accepted by the user, grants super-user (“su”) rights to Yalp Store. Instead of relying on closed-source third-party apps, I would recommend the official su add-on provided by LineageOS since version 15.1. This “addonsu” zip file also has to be flashed from your recovery mode. Once installed, you have to enable root permissions for apps in the Android developer menu (see here how you can enable and access it).

In the Yalp Store settings, you have to enable auto-updates:

  • Installation method: Use root permissions
  • Enable: Install apps as soon as download is finished
  • Search for updates: E.g. daily
  • Enable: Auto download available updates
  • Enable: Automatically install new updates (root)

I also activated the automatic whitelist feature so that auto-updates are only installed for apps managed by Yalp Store.

After that everything should work flawlessly and you should be notified when an app has been updated in the background.

Changelog

  • 26.07.18: Changed Yalp Store root method to the official su add-on of LineageOS 15.1
  • 20.05.18: Changed Yalp Store SuperSU dependency to open source alternative Superuser app.