Odroid HC1 based swarm cluster in a 19″ rack

For several months now, I've been running my applications on a homemade cluster of 4 Odroid HC1 boards running Docker containers orchestrated by Swarm. Of course, all the HC1 are powered by my homemade power supply.

I chose the HC1 over the MC1 because of its SSD support: running the system from an SSD is a lot faster than from a microSD card.

Below is the story of this build…

4 Odroid HC1 in a 3D printed 19″ rack

I made a custom fan-cooled 19″ rack mount for the Odroids.

The 3D print design is available on Thingiverse.

Initial mount:

I soldered Dupont cables to power each fan directly from the 5V input of its HC1:

Results with all fans mounted:

Final result in the rack:

Base install

Archlinux

Nothing special here, just follow the official Arch Linux ARM documentation for each HC1:
https://archlinuxarm.org/platforms/armv7/samsung/odroid-xu4

Saltstack

As I use Saltstack to “templatize” all my servers, I installed a Saltstack master and minions (the NAS acts as the master for all other servers). I have already documented this here.

From the salt master, a simple check shows that all nodes are under control:
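
For example, assuming the minion ids match the node hostnames:

    salt '*' test.ping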

SSD as root FS

Partition the SSD:
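
Something like this, assuming the SSD shows up as /dev/sda and a single partition spanning the whole disk:

    parted -s /dev/sda mklabel msdos mkpart primary ext4 1MiB 100%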

Format the future root partition:
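
For example, formatting the first partition as ext4:

    mkfs.ext4 /dev/sda1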

Mount the SSD root partition:
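
For example, using /mnt as a temporary mount point:

    mount /dev/sda1 /mnt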

Clone the SD card to the SSD root partition:
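
Something along these lines works (the rsync options and exclusions are one possible choice):

    rsync -aAX --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp --exclude=/mnt / /mnt/
    mkdir -p /mnt/{proc,sys,dev,run,tmp,mnt}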

Change boot parameters so root is /dev/sda1:
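
The exact line depends on the image version, but the idea is to point the kernel root parameter at the SSD:

    # in the "setenv bootargs" line of /boot/boot.txt, change the root=
    # parameter so that it reads: ... root=/dev/sda1 rw rootwait ...
    vi /boot/boot.txt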

Recompile boot config:
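
Assuming the mkscr helper (a small wrapper around mkimage) is present in /boot, as on the stock image:

    cd /boot
    ./mkscr   # rebuilds boot.scr from boot.txt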

Reboot:

Remove everything from the SD card and put the /boot files at its root:
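
For example, after booting from the SSD (the SD card device name may differ):

    mount /dev/mmcblk0p1 /mnt
    cp -a /mnt/boot /tmp/bootfiles
    rm -rf /mnt/*
    cp -a /tmp/bootfiles/. /mnt/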

Adapt boot.txt, because the boot files are now at the root of the boot partition and no longer in a /boot directory:
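
Assuming the SD card partition is still mounted on /mnt, stripping the /boot/ prefix from the load paths and recompiling the boot script does the trick:

    sed -i 's| /boot/| /|g' /mnt/boot.txt
    (cd /mnt && ./mkscr)
    umount /mnt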

Check that /dev/sda1 is mounted as root:
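
findmnt should now report the SSD partition as the source of the root filesystem:

    findmnt /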

SSD Benchmark

Benchmarking is complex and I’m not going to claim I did it perfectly, but it at least gives an idea of how fast an SSD can be on the HC1.

The SSD connected to each of my HC1 is a SanDisk X400 128 GB.

I launched each of the following tests 3 times.

  • hdparm -tT /dev/sda => 362.6 MB/s
  • dd write 4k => 122 MB/s
  • dd write 1M => 119 MB/s
  • dd read 4k => 307 MB/s
  • dd read 1M => 357 MB/s
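
For reference, figures like these come from simple sequential hdparm and dd transfers along the following lines (the test file path and sizes are assumptions):

    hdparm -tT /dev/sda
    # sequential writes
    dd if=/dev/zero of=/root/testfile bs=4k count=250000 conv=fdatasync
    dd if=/dev/zero of=/root/testfile bs=1M count=1000 conv=fdatasync
    # sequential reads (flush the page cache first)
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/root/testfile of=/dev/null bs=4k
    dd if=/root/testfile of=/dev/null bs=1M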

I tried the same tests with IRQ affinity pinned to the big cores, but it did not show any significant impact on performance.

Finalize installation

I’m not going to copy-paste all my Saltstack states and templates here, as they obviously depend on personal needs and tastes.

Basically, my “HC1 node” template does the following on each node:

  • Change the mirrorlist
  • Install custom sysadmin scripts
  • Remove the default alarm user
  • Add some sysadmin tools (lsof, wget, etc.)
  • Change the mmc and SSD I/O scheduler to deadline
  • Add my user
  • Install cron
  • Configure logrotate
  • Set the journald config (RuntimeMaxUse=50M and Storage=volatile to reduce writes to flash storage)
  • Add mail capability (ssmtp)

Then I change the password for my user using Saltstack:
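
One way to do this is to generate the hash locally and push it with the shadow module (the user name is a placeholder):

    HASH=$(openssl passwd -6)
    salt '*' shadow.set_password myuser "$HASH"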

Finally, to ensure that disk corruption would not stop a node from booting, I forced fsck at boot time on all nodes (sketched below) by:

  • adding “fsck.mode=force” to the kernel line in /boot/boot.txt
  • recompiling it with mkscr
  • rebooting

Docker Swarm deploy

The swarm module of Saltstack did not seem to be recognized even though I used version 2018.3.1, so I ended up executing the commands directly, which is not really a problem as I’m not going to add a node every day…

Build the master:
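
For example (the advertise address is a placeholder for the first node’s IP):

    docker swarm init --advertise-addr 192.168.1.101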

Add the workers:
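
The init command prints a join command to run on each worker; it can be retrieved again at any time:

    # on the manager
    docker swarm join-token worker
    # on each worker, run the printed command (token and IP are placeholders)
    docker swarm join --token SWMTKN-1-xxxx 192.168.1.101:2377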

Add the 2nd and 3rd masters for failover capability:
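
One way is to promote two of the workers (node names are placeholders):

    docker node promote node2 node3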

Checking the status of all nodes with “docker node ls” now displays one “Leader” and two “Reachable” nodes.

Then, I deployed a custom Docker daemon configuration (daemon.json) to switch the storage driver to overlay2 (the default one is too slow on the XU4) and to allow the use of my custom Docker registry.
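
A minimal /etc/docker/daemon.json along these lines does both (the registry address is a placeholder), followed by a restart of the Docker daemon:

    {
      "storage-driver": "overlay2",
      "insecure-registries": ["192.168.1.100:5000"]
    }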

Docker images for the swarm cluster

The concept

As of now, using a container orchestrator implies either using stateless containers or a global storage solution. I first tried GlusterFS on all nodes. It worked perfectly but was way too slow (between 25 and 36 MB/s even with optimized settings and IRQ affinity pinned to the big cores).

I ended up with a simple yet very efficient solution for my needs:

  • An automated daily backup of all volumes on all nodes (to a network drive)
  • An automated daily mysql database backup on all nodes (run only when mysql is detected)
  • Containers that are able to restore their volumes from the backup during first startup
  • An automated daily clean-up of containers and volumes on all nodes

Thus, each time a node is shut down or a stack is restarted, each container is able to start on any node, retrieving its data automatically (if not stateless).

Daily backup script (extract):
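
In essence, the backup loop looks like this (the backup path and archive naming are simplified placeholders):

    #!/bin/bash
    # archive every named docker volume to a mounted network drive
    BACKUP_DIR=/mnt/backup/$(hostname)
    mkdir -p "$BACKUP_DIR"
    for vol in $(docker volume ls -q); do
        docker run --rm -v "$vol":/data:ro -v "$BACKUP_DIR":/backup alpine \
            tar czf "/backup/${vol}_$(date +%Y%m%d).tar.gz" -C /data .
    done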

Daily Clean-up script:
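
A stripped-down version of the idea:

    #!/bin/bash
    # remove stopped containers, dangling images and unused volumes
    docker container prune -f
    docker image prune -f
    docker volume prune -f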

Custom Docker images

All my Dockerfiles are documented and available on GitHub:

https://github.com/jit06/docker-images

Simple distributed image build

To make a simple distributed build system, I wrote some scripts to distribute my Docker image builds across the 4 Odroid HC1.

All images are then pushed to a local registry, tagged with the current date.

Local image builder that builds, tags and pushes to the registry (script name: docker_build_image):
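
A simplified sketch of what such a script can look like (the registry address and directory layout are assumptions):

    #!/bin/bash
    # build the image whose Dockerfile lives in ./<image name>, tag it with
    # today's date and push it to the local registry
    IMAGE=$1
    REGISTRY=192.168.1.100:5000
    TAG=$(date +%Y%m%d)
    docker build -t "$REGISTRY/$IMAGE:$TAG" "./$IMAGE" \
      && docker push "$REGISTRY/$IMAGE:$TAG"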

Build several images given as arguments (script name: docker_build_batch):
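
Essentially a loop over the previous script:

    #!/bin/bash
    # build and push every image name passed as argument
    for image in "$@"; do
        docker_build_image "$image"
    done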

Distribute the builds from the salt master with Saltstack, using the previous script.

The special image “archlinux” is built first if found, because all other images depend on it.
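
A sketch of the distribution, using salt’s cmd.run (minion ids and image names are placeholders):

    # build the shared base image first, then fan the remaining builds out
    salt 'node1' cmd.run 'docker_build_batch archlinux'
    salt 'node1' cmd.run 'docker_build_batch app1 app2' &
    salt 'node2' cmd.run 'docker_build_batch app3 app4' &
    salt 'node3' cmd.run 'docker_build_batch app5 app6' &
    salt 'node4' cmd.run 'docker_build_batch app7 app8' &
    wait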
