Starting services only when the network is ready on Debian/systemd

TL;DR: ^

  • Make sure that whatever configures your network supports network-online.target
  • Override the service unit to have Wants=network.target network-online.target and the same for After=

Overview ^

Sometimes you only want services to start up once there is a network configured. Most network services can handle the situation where there is initially no network, waiting until the network appears, because this is a very common situation.

Other services though may not in themselves be expecting to use the network, and so have never thought about it. Also a great thing about open source software is that it tends to be very composable, so it’s not possible to predict the ways that people will use combinations of software.

The problem ^

systemd will tend to start things as soon as it can. If your service is not configured to wait for the network that means it will most likely be started up before the network exists. If your service then tries to do something that requires a network it will receive an error, which it may not be prepared to handle.

A concrete example: ulogd2 ^

A real life example for me is ulogd2. ulogd2 allows your firewall rules to log things in a variety of ways, in incredible detail.

Most of the ways people configure it involve just logging to the local filesystem, so it doesn’t actually require the network to be configured first.

The default systemd configuration in Debian buster for the ulogd2 service looks like this:

$ sudo systemctl cat ulogd2.service
[Unit]
Description=Netfilter Userspace Logging Daemon
Documentation=man:ulogd(8)
 
[Service]
Type=forking
PIDFile=/run/ulog/ulogd.pid
ExecStart=/usr/sbin/ulogd --daemon --uid ulog --pidfile /run/ulog/ulogd.pid
 
[Install]
Alias=ulogd.service
WantedBy=multi-user.target

As you can see there’s nothing in there that says to wait for a network.

I use a database plugin for ulogd2 that makes it log to a (remote) database. As a consequence, as soon as it starts up it tries to establish a database connection, immediately fails as there is no route to any remote host, retries a few times and then bails out.

Most of the time it exhausts its retries before the network is up, so the result is that the service is in a failed state. Simply manually starting the service (or having config management do it) resolves that, but that’s a mess.

Ideally I don’t want systemd to start ulogd2 until there is a network.

“Wants=network-online.target” mate. Job done. No! ^

If like me you know just enough about systemd to be dangerous, you figure that what you want to do is add something like this to the [Unit] section of the service unit file:

[Unit]
…
Wants=network.target network-online.target
After=network.target network-online.target

This is only part of the correct solution. If you do only this then you’ll probably find that nothing actually changes.

About network-online.target ^

The thing about the network-online target is that it doesn’t exist unless you’re using a “modern” method of bringing up your networking, like NetworkManager or systemd-networkd.

If you’re not doing that then systemd works out that the network-online target can never be reached and ignores it as a Want.

I’m using ifupdown on servers as it still does everything I need it to. To make ifupdown support the network-online target on Debian, you should enable the ifupdown-wait-online service:

$ sudo systemctl enable ifupdown-wait-online.service

This will inject the network-online “target reached” state when every interface that is marked as “auto” in /etc/network/interfaces is up.
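For reference, an interface “marked as auto” is just one with an auto stanza in /etc/network/interfaces, something like this (addresses made up for illustration):

auto eth0
iface eth0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1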

Editing a service file ^

The temptation now may be to edit the ulogd2 service file that’s under /lib/systemd/system/ to contain the Want/After bits.

That will work but it isn’t the correct way because if there is a package update then your changes will be overwritten.

A better way is to place a new service file into /etc/systemd/system/. That will entirely override the distributed copy. The obvious downside there is that if there’s an improvement to the packaged service file then you’ll never use it, as you’ve entirely overridden it with your own file.

Overrides to the rescue ^

The best way is to use an override file, and the easiest way to do that is with systemctl edit:

$ sudo systemctl edit ulogd2
[your favourite editor starts]
[Unit]
Wants=network.target network-online.target
After=network.target network-online.target

Check your changes took effect:

$ sudo systemctl cat ulogd2.service
# /lib/systemd/system/ulogd2.service
[Unit]
Description=Netfilter Userspace Logging Daemon
Documentation=man:ulogd(8)
 
[Service]
Type=forking
PIDFile=/run/ulog/ulogd.pid
ExecStart=/usr/sbin/ulogd --daemon --uid ulog --pidfile /run/ulog/ulogd.pid
 
[Install]
Alias=ulogd.service
WantedBy=multi-user.target
 
# /etc/systemd/system/ulogd2.service.d/override.conf
[Unit]
Wants=network.target network-online.target
After=network.target network-online.target

Note that this shows you where the files actually are. That makes it easy to distribute this through config management.
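For example, a minimal Ansible sketch for distributing that override might look like this (the task names and the daemon-reload handler are illustrative assumptions, not from any particular role):

- name: Create override directory for ulogd2
  file:
    path: /etc/systemd/system/ulogd2.service.d
    state: directory

- name: Install network-online override for ulogd2
  copy:
    dest: /etc/systemd/system/ulogd2.service.d/override.conf
    content: |
      [Unit]
      Wants=network.target network-online.target
      After=network.target network-online.target
  notify: systemd daemon-reload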

Debian-installer, mdadm configuration and the Bad Blocks Controversy

Updates! ^

Since this was posted on 2020-09-13 there was some interest in the comments and on Hacker News and I learned some things which required updates. I’ve tried to indicate them with struck out text.

Of particular note is the re-add method of removing BBLs.

MD and mdadm ^

MD is the Linux kernel driver that is used for running software RAID arrays. mdadm is the software that you run to manage MD devices. They are both part of the same project.

First, about the Bad Blocks List ^

Since about 2010, MD has had a bad blocks log (BBL) feature. When it fails to read from an underlying device it will (sometimes?) mark that block as bad and read the correct data from a different device, and then forever more redirect reads away from those bad blocks. This feature defaults to being on.

One problem with this feature is that read errors can occur for many reasons besides permanent failure of part of a storage device. For example, it could be a failure of the backplane or controller that causes many read errors on multiple devices, or the devices could be reached over a network of some sort and temporary network problems could propagate errors.

Even if the particular part of the device is unreadable, the operating system is supposed to try to write the correct data over the top. This write will either clear the problem or else be redirected to a spare sector on the drive by the drive’s firmware. The operating system is not supposed to be taking on this role, the drives are, and when the drives fail to do so then the redundancy of the array is supposed to save the day.

Even worse, there are apparently bugs somewhere in the BBL code that cause a device’s BBL to be copied onto a new device when the array is rebuilt or a device replaced. Clearly it does not make sense for a new device to get a copy of another device’s BBL because they are inherently a per-device thing. Originally I wrote that there had been no successful intentional reproduction of this, only people unwittingly hitting it at the worst possible moments, but it has since been reproduced that adding or replacing a device results in a BBL being copied. I am not aware of a formal bug report for this yet.

mdadm doesn’t even try particularly hard to warn you if a new bad block is found. Unlike when a device fails, it doesn’t send you an email. The MD driver writes in the syslog about the bad block(s). There’s also no change to /proc/mdstat. You have to examine some files in sysfs.
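For example — the exact sysfs paths here are from memory, so treat this as illustrative — each component of an array exposes its bad blocks list under /sys/block/mdX/md/, and something like this will print any entries across all components of md0 (no output means no entries):

$ grep . /sys/block/md0/md/dev-*/bad_blocks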

As a result, the current situation is that no one seems to have made any progress on fixing any of this in 10 years.

Doing something about it ^

I’ll say right now that this story doesn’t (yet?) have a satisfying ending.

I’ve been aware of the “Bad Blocks Controversy” for about 5 years but I haven’t ever personally experienced any problems and it was always at the bottom of my list to look at. Roy’s recent thread spurred me into deciding that in future no MD array I created would have a BBL.

I also took the opportunity to deploy Sarah Newman’s Ansible role which checks all array components have an empty BBL. None of BitFolk‘s array components currently have any entries in their BBLs – phew!

Removing an existing BBL ^

I originally wrote here that the only way to remove a BBL from an array component was to stop the array and then assemble it with an --update argument. It turns out there are two ways to remove the BBL from the devices of existing arrays.

Fail and re-add each device with update

It doesn’t seem to be documented anywhere, but you can fail a device out of an array and re-add it with an update to remove the BBL on that device, like this:

# mdadm --fail /dev/md0 /dev/sdb1 \
        --remove /dev/sdb1 \
        --re-add /dev/sdb1 \
        --update=no-bbl
mdadm: set /dev/sdb1 faulty in /dev/md0                                              
mdadm: hot removed /dev/sdb1 from /dev/md0                   
mdadm: re-added /dev/sdb1

This will only work if your array has a bitmap, otherwise it will refuse to re-add. Most arrays do get a bitmap, but small arrays won’t by default. Fortunately you can easily add a bitmap like this:

# mdadm --grow --bitmap=internal /dev/md0

The downside of this approach is that your array will have reduced redundancy while it rebuilds. It should rebuild pretty quickly though as the bitmap will cause only changed parts to be rewritten.

(This won’t work if a BBL currently has any entries)

Stop the array and assemble again with update

The other way to remove BBL from devices is to stop the array and assemble it manually like this:

# mdadm --assemble /dev/mdX --update=no-bbl

The big problem with this is that stopping the array obviously causes downtime for whatever is using it. If your root filesystem is on an MD array (and why wouldn’t it be, if you use MD?) then that means the entire server, and you’re having to do this from sort of rescue environment.

I have suggested that a config option be added to remove a BBL on assembly, so that this will happen the next time the machine is rebooted. This does not appear to have provoked any interest.

This method is quicker since it operates on all devices and doesn’t require a rebuild, but personally I usually find downtime more painful so I’d be inclined to schedule an “at-risk” maintenance window and do it the re-add way.

Avoiding the BBL at creation time ^

So if the BBL cannot be easily removed, at least it can be prevented from ever existing, right? When Neil Brown, the previous MD maintainer, was asked in 2016 if the feature could be defaulted to off, Neil said that putting this in the config file was as good as that:

CREATE bbl=no

The thing is, it’s not as good as disabling it by default when you consider what many users’ experience is of running the mdadm command: they don’t run mdadm, something else runs it for them. I’d go as far as to say that the majority of uses of mdadm are done by helper scripts and installers, not by human beings.

If it’s a program that is running mdadm for you then you are going to have to find out how to set that mdadm.conf before it reads it.

Take for example my own process of installing Debian. I do it by booting the Debian Installer by PXE. I have some pre-seeding done to answer a lot of the installer questions, but actually I do still do the disk partitioning stage in the installer’s text interface.

So there I was thinking this is actually going to be quite simple, because the Debian Installer is really lovely about letting you execute a shell and poke around. Surely all I am going to need to do is open a shell once and edit /etc/mdadm/mdadm.conf and then go back into the mdcfg menu and carry on, right? Oh dear me no.

You can read the details of my wild ride that involved me uploading a binary of strace into the d-i to run mdadm under to work out what was going on, but just the relevant discoveries are in this article for those who’d rather not.

mdadm in d-i uses a config file at /tmp/mdadm.conf

After quite a bit of confusion over why even arrays I created manually with the mdadm command in the d-i shell still had a BBL, I discovered that the mdadm binary in d-i is compiled to have its config at /tmp/mdadm.conf. I don’t know why, but probably there is a good reason.

(At this point a number of people responded, “that’s because everything else will be set read-only.” That’s not the case with debian-installer which runs entirely off of a tmpfs. It’s all writeable.)

So just make the edit to /tmp/mdadm.conf then?

Oh ho ho no. Every time you go into the MD configuration section (mdcfg) it clobbers its own /tmp/mdadm.conf, and you can’t get to the “execute a shell” option without returning to the MD configuration section.

If you’re on something with multiple virtual consoles (like if you’re sitting in front of a conventional PC) then you could switch to one of those after you’ve entered the MD configuration part and modify /tmp/mdadm.conf then. I don’t have that option because I’m on a serial console.

I thought I didn’t have that option because I’m on a serial console, but it was pointed out to me that when the Debian installer detects it’s running in a serial console it runs itself under GNU Screen. So, by using the usual screen commands of ctrl+a n or ctrl+a p, one can switch backwards and forwards through the different virtual consoles. Neat!

There is also an earlier option to load an installer component that enables one to continue the installation process over SSH. If you select that then you can SSH in to the running installer system so if you do that after you’ve entered the MD configuration bit in your main console then I guess you can then edit the config file and continue.

By one of those methods of getting a shell, after you’ve already entered the array configuration part but before you’ve actually created any arrays, I think you could edit /tmp/mdadm.conf to have “CREATE bbl=no” and the installer’s mdadm binary would respect that when you switch back.

Alternatively you could just use the shell to create your arrays instead of using the Debian installer to do it. If it’s a simple case where you’ve just got an sda and an sdb disk identically partitioned and you want to make a bunch of arrays on them, it can be a fairly legible shell session like:

~ # mkdir -vp /etc/mdadm && echo "CREATE bbl=no" > /etc/mdadm/mdadm.conf
~ # for part in 1 2 3 5; do \
      mdadm --create \
            -v \
            --config=/etc/mdadm/mdadm.conf \
            /dev/md${part} \
            --level=1 \
            --raid-devices=2 \
            /dev/sd[ab]${part}; \
    done

Do not try this until you understand exactly what it is doing.

It iterates the list 1, 2, 3, 5 (I use the 4th partition for something else) and makes arrays called mdX out of sdaX and sdbX. The mdadm binary is forced to use our config file that disables creation of a BBL.

You can verify that a BBL does not exist on any of the array components like this:

~ # mdadm --examine-badblocks /dev/sda1
No bad-blocks list configured on /dev/sda1

You should get identical output for every component. If a component did have a BBL it would output something like this:

~ # mdadm --examine-badblocks /dev/sda1
Bad-blocks list is empty in /dev/sda1

You can then exit the d-i shell and go back to the disk partitioning section. You won’t need the MD configuration part now but even if you do go into it, it should detect all your manually-created arrays.

How to make progress? ^

All of this isn’t great but at least it’s fairly easy to pause the Debian installer and take some manual action. I suspect users of other Linux distributions may not be so lucky, and so I too think it would be a good idea if this buggy feature was disabled by default, or at least if there were a way to tell mdadm to remove the BBL on assembly.

In fact I would very much like to be able to tell it to remove the BBL on assembly so that I can disable the BBL feature on all my existing servers.

mdadm actually gets called by udev from inside the initramfs in incremental assembly mode, so I think the incremental assembly code needs to look in the config file for this “remove all the BBLs” directive and do it then during assembly as if update=no-bbl had been specified on a command line.

It should be possible to write a script that:

  1. Looks in /sys/block/md* to find device components of all arrays.
  2. Checks each one to see if it has a BBL.
  3. If any are found, add a bitmap if necessary.
  4. Do the fail/remove/re-add trick on each one in turn, waiting for the array to go back into sync each time.

i.e. it should be possible to automate this and run it at the end of an install so the entire install process can remain automated, or run it on a host any time after it’s been provisioned.
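A very rough, untested sketch of what such a script might look like (the sysfs layout and mdadm invocations are assumptions based on the commands above; it also assumes every BBL is already empty and that a bitmap exists, so do not run it without understanding and adapting it):

#!/bin/sh
# Untested sketch: for every array component that has a BBL, do the
# fail/remove/re-add trick with --update=no-bbl, waiting for the array
# to finish resyncing before moving on to the next component.
for md in /sys/block/md*; do
    array=/dev/${md##*/}
    for dev in "$md"/md/dev-*; do
        component=/dev/$(basename "$dev" | sed 's/^dev-//')

        # Skip components that have no BBL at all.
        if mdadm --examine-badblocks "$component" 2>/dev/null \
                | grep -q '^No bad-blocks list'; then
            continue
        fi

        # Add a bitmap first with "mdadm --grow --bitmap=internal $array"
        # if the array doesn't already have one.
        mdadm --fail "$array" "$component" \
              --remove "$component" \
              --re-add "$component" \
              --update=no-bbl

        # Wait for the resync/recovery to complete before the next one.
        while grep -Eq 'resync|recovery' /proc/mdstat; do
            sleep 10
        done
    done
done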

Recovering From an Exif Disaster

The Discovery ^

Sometime in late December (2019) I noticed that when I clicked on a tag in Shotwell, the photo management software that I use, it was showing either zero or hardly any matching photos when I knew for sure that there should be many more.

(When I say “tag” in this article it’s mostly going to refer to the type of tags you generally put on an image, i.e. the tags that identify who or what is in the image, what event it is associated with, the place it was taken etc. Images can have many different kinds of tags containing all manner of metadata, but for avoidance of doubt please assume that I don’t mean any of those.)

I have Shotwell set to store the tags in the image files themselves, in the metadata. There is a standard for this called Exif. What seems to have happened is that Shotwell had removed a huge number of tags from the files themselves. At the time of discovery I had around 15,500 photos in my collection and it looked like the only way to tell what was in them would be by looking at them. Disaster.

Here follows some notes about what I found out when trying to recover from this situation, in case it is ever useful for anyone.

Shotwell still had a visible tag hierarchy, so I could for example click on the “Pets/Remy” tag, but this brought up only one photo that I took on 14 December 2019. I’ve been taking photos of Remy for years so I knew there should be many more. Here’s Remy.

Remy at The Avenue Ealing Christmas Fair, December 2019

Luckily, I have backups.

Comparing Good and Bad Copies of a Photo ^

I knew this must have happened fairly recently because I’d have noticed quite quickly that photos were “missing”. I had a look for a recent photo that I knew I’d tagged with a particular thing, and then looked in the backups to see when it was last modified.

As an example I found a photo that was taken on 30 October 2019 that should have been tagged “Pets/Violet” but no longer was. It had been modified (but not by me) on 7 December 2019.

A broken photo of Violet

(Sorry about the text-as-images; I’m reconstructing this series of events from a Twitter thread, where things necessarily had to be posted as screenshots.)

What the above shows is that the version of the photo that existed on 30 October 2019 had the tags “Pets“, “Edna“, and “Violet” but then the version that was written on 7 December 2019 lost the “Violet” tag.

Here I used the exiftool utility to display EXIF tags from the photo files. You can do that like this:

$ exiftool -s $filename

Using egrep I limited this to the tag keys “Subject”, “Keywords”, and “TagsListLastKeywordXMP”, but that last one was a slight mistake: “TagsListLastKeywordXMP” was actually a typo, is totally irrelevant and should be ignored.

“Subject” and “Keywords” were always identical for any photo I examined and contained the flattened list of tags. For example, in Shotwell that photo originally had the tags:

  • Pets/Edna
  • Pets/Violet

It seems that Shotwell flattens that to:

  • Pets
  • Edna
  • Violet

and then stores it in “Subject” and “Keywords“.

The tags with hierarchy are actually in the key “TagsList” like:

  • Pets
  • Pets/Edna
  • Pets/Violet

Fixing One Photo ^

I tested stuffing the tag “Violet” back in to this file under the keys “Subject” and “Keywords“:

$ exiftool -keywords+="…" -subject+="…" $filename

Stuffing the Violet tag back in

This shows that the “Violet” tag is now back in the current version of the file. After restarting Shotwell and doing a free text search for “Violet”, this photo now shows up whereas before it did not. It still did not show up when I clicked on “Pets/Violet” in the tag hierarchy however. It was then that I realised I also needed to put “Pets/Violet” into the “TagsList” key.

I ended up using a script to do this in bulk fashion, but individually I think you should be able to do this like:

$ exiftool -keywords+=Violet -subject+=Violet -TagsList+=Pets/Violet $filename

After restarting Shotwell I was able to click on the “Pets/Violet” tag and see this photo.
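To double-check, the same keys can be printed again for that file (nothing special here, just exiftool restricted to those tags):

$ exiftool -Subject -Keywords -TagsList $filename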

Fixing All the Photos? ^

My process to recover from this, then, was to compile a list of each file that had been modified at the suspected time of disaster, and for each:

  1. Read the list of tags from “Keywords”
  2. Read the list of tags from “Subject”
  3. De-duplicate them and store them as $keywords
  4. Read the list of tags from “TagsList” and store them as $tagslist
  5. Stuff $keywords back into both “Subject” and “Keywords” of the current version of the file
  6. Stuff $tagslist back into “TagsList” of the current version of the file

Gulp.

Which files were tampered with? ^

It was relatively easy to work out which files had been screwed with, because thankfully I didn’t make any other photo modifications on 7 December 2019. So any photo that got modified that day was probably a candidate.

I haven’t mentioned what actually caused this problem yet. I don’t know exactly. At 16:53 on 7 December 2019 I was importing some photos into Shotwell, and I do seem to recall it crashed at some point, either while I was doing that or shortly after.

The photos from that import and all others afterwards had retained their tags correctly, but many that existed prior to that time seemed to be missing some or all tags. I have no idea why such a crash would cause Shotwell to do that but that must have been what did it.

Running this against my backups identified 3,721 files that had been modified on 7 December 2019:

$ cd weekly.2/specialbrew.21tc.bitfolk.com/srv/tank/Photos/Andy
$ find . -type f \
  -newermt "2019-12-07 00:00:00" \! \
  -newermt "2019-12-07 23:59:59" > ~/busted.txt

The next thing I did was to check that each of these file paths still exist in the current photo store and in the known-good backups (weekly.3).

Extract tags from known-good copies ^

Next up, I wrote a script which:

  1. Goes to the known-good copies of the files
  2. Extracts the Subject and Keywords and deduplicates them
  3. Extracts the TagsList
  4. Writes it all into a hash
  5. Dumps that out as a YAML file

All scripts mentioned here use the Perl module Image::ExifTool, which is part of the exiftool package.

backup_host$ ./gather_tags.pl < ~/busted.txt > ~/tags.yaml

tags.yaml looks a bit like this:

---
2011/01/16/16012011163.jpg:
  keywords:
  - Hatter
  - Pets
  tagslist:
  - Pets
  - Pets/Hatter
[…]
2019/11/29/20191129_095218~2.jpg:
  keywords:
  - Bedfont Lakes
  - Feltham
  - London
  - Mandy
  - Pets
  - Places
  tagslist:
  - Pets
  - Pets/Mandy
  - Places
  - Places/London
  - Places/London/Feltham
  - Places/London/Feltham/Bedfont Lakes
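The actual gather_tags.pl isn’t reproduced here, but a minimal sketch of the idea using Image::ExifTool might look something like this (the YAML::XS module choice, the list handling and the lack of error checking are my assumptions, not the original script):

#!/usr/bin/perl
use strict;
use warnings;

use Image::ExifTool;
use YAML::XS qw(Dump);

my $et = Image::ExifTool->new;
my %tags;

while (my $file = <STDIN>) {
    chomp $file;
    next unless length $file;

    my $info = $et->ImageInfo($file, 'Subject', 'Keywords', 'TagsList');

    # List-type tags may come back as an array ref or as a single
    # comma-separated string depending on ExifTool options, so cope with both.
    my @keywords = map { ref $_ eq 'ARRAY' ? @$_ : split /, /, $_ }
                   grep { defined } @{$info}{qw(Subject Keywords)};
    my @tagslist = map { ref $_ eq 'ARRAY' ? @$_ : split /, /, $_ }
                   grep { defined } $info->{TagsList};

    my %seen;
    $tags{$file} = {
        keywords => [ grep { !$seen{$_}++ } @keywords ],
        tagslist => [ @tagslist ],
    };
}

print Dump(\%tags);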

Stuff tags back into current versions of photos ^

After transferring tags.yaml back to my home fileserver it was time to use it to stuff the tags back into the files that had lost them.

One thing to note while doing this is that if you just add a tag, it adds it even if the same tag already exists, leading to duplicates. I thought it best to first delete the tag and then add it again so that there would only be one instance of each one.

I called that one fix_tags.pl.

$ ./fix_tags.pl tags.yaml
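The real fix_tags.pl isn’t shown here either, but the core of it is roughly the following sketch, using the delete-then-add idiom described above (illustrative only; the exact Image::ExifTool calls are my assumptions, not the original script):

#!/usr/bin/perl
use strict;
use warnings;

use Image::ExifTool;
use YAML::XS qw(LoadFile);

my $tags = LoadFile($ARGV[0]);

for my $file (sort keys %$tags) {
    my $et = Image::ExifTool->new;

    # Delete any existing values first so that re-adding them doesn't
    # create duplicates, then queue the saved tags to be written back.
    $et->SetNewValue($_) for qw(Subject Keywords TagsList);
    $et->SetNewValue('Subject',  $tags->{$file}{keywords});
    $et->SetNewValue('Keywords', $tags->{$file}{keywords});
    $et->SetNewValue('TagsList', $tags->{$file}{tagslist});

    $et->WriteInfo($file)
        or warn "Failed to write $file: " . ($et->GetValue('Error') // '') . "\n";
}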

Profit! Or, only slight loss, I guess ^

16m53s of runtime later, it had completed its work… 🙌 2020 will definitely be the year of Linux on the desktop¹.

¹ As long as you know how to manipulate EXIF tags from a programming language and have a functioning backup system and even then don’t mind losing some stuff

Losing some stuff…? ^

Unfortunately there were some things I couldn’t restore. It was at this point that I discovered that Shotwell does not ever put tags into video files (even though they do support EXIF tags…)

That means that the only record of the tags on a video file is in Shotwell’s own database, which I did not back up as I didn’t think I needed to.

Getting Tags Out of Shotwell ^

I am now backing that up, but should this sort of thing happen in the future I’d need to know how to manipulate the tags for videos in Shotwell’s database.

Shotwell’s database is an SQLite file that’s normally at $HOME/.local/share/shotwell/data/photo.db. I’m fairly familiar with SQLite so I had a poke around, but couldn’t immediately see how these tags were stored. I had to ask on the Shotwell mailing list.

Here’s how Shotwell does it. There’s a table called TagTable which stores the name of each tag and a comma-separated list of every photo/video which matches it:

sqlite> .schema TagTable 
CREATE TABLE TagTable (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL, photo_id_list TEXT, time_created INTEGER);

The photo_id_list column holds the comma-separated list. Each item in the list is of the form:

  1. “thumb” or “video-” depending on whether the item is a photo or a video
  2. 16 hex digits, zero padded, which is the ID value from the PhotoTable or VideoTable for that item
  3. a comma

Full example of extracting tags for the video file 2019/12/31/20191231_121604.mp4:

$ sqlite3 /home/andy/.local/share/shotwell/data/photo.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> SELECT id
        FROM VideoTable
        WHERE filename LIKE '%20191231%';
553
sqlite> SELECT printf("%016x", 553);
0000000000000229
sqlite> SELECT name
        FROM TagTable
        WHERE photo_id_list LIKE '%video-0000000000000229,%';
/Places
/Places/London
/Places/London/Feltham
/Pets
/Places/London/Feltham/Bedfont Lakes
/Pets/Marge
/Pets/Mandy

If that is not completely clear:

  • The ID for that video file is 553
  • 553 in hexadecimal is 229
  • Pad that to 16 digits, add “video-” at the front and “,” at the end (even the last item in the list has a comma at the end)
  • Search for that string in photo_id_list
  • If a row matches then the name column is a tag that is attached to that file

I don’t exactly know how I would have identified which videos got messed with, but at least I would have had both versions of the database to compare, and I now know how I would do the comparison.
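Putting those steps together, something like this pulls the tags for a given video straight out of the database (the path and search pattern are just examples):

$ db="$HOME/.local/share/shotwell/data/photo.db"
$ id=$(sqlite3 "$db" "SELECT id FROM VideoTable WHERE filename LIKE '%20191231_121604%';")
$ sqlite3 "$db" "SELECT name FROM TagTable WHERE photo_id_list LIKE '%video-$(printf '%016x' "$id"),%';"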

Should Tags Even Be In Photos? ^

During my Twitter thread it was suggested to me that tags should not be stored in photos, but only in the photo cataloging software, where they can be backed up along with everything else.

I disagree with this for several reasons:

  • Exif exists for the purpose of storing tags like this.

  • When I move my photos from one piece of software to another I want it to be able to read the tags. I don’t want to have to input them all over again. That would be unimaginably tedious.

    When I moved from F-Spot to Shotwell the fact that the tags were in the files saved me countless hours of work. It just worked on import.

    If there wasn’t a dedicated importer feature then it would be so much work that really the only way to do it would be to extract the tags from the database and insert them again programmatically, which is basically admitting that to change software you need to be an expert. That really isn’t how this should work.

  • If the only copy of my tags is in the internal database of a unique piece of cataloging software, then I have to become an expert on the internal data store of that piece of software. I don’t want to have to do that.

    I’ve been forced to do that here for Shotwell because of a deficiency of Shotwell in not storing video tags in the files. But if we’re only talking about photos then I could have avoided it, and could also avoid having to be an expert on every future piece of cataloging software.

  • Even if I’m not moving to a different cataloging solution, lots of software understands Exif and it’s useful to be able to query those things from other software.

    I regard it very much like artist, album, author, genre etc tags in the metadata of digital music and ebooks, all of which are in the files; you would not expect to have to reconstruct these out of the database of some other bit of software every time you wanted to use them elsewhere.

It was a mistake not to backup the Shotwell database though; I thought I did not need it as I thought all tags were being stored in files, and tags were the only things I cared about. As it happened, tags were not being stored in video files and tags for video files only exist in Shotwell’s database.

Other Thoughts ^

Having backups was obviously a lifesaver here. It took me ~3 weeks to notice.

Being able to manipulate them like a regular filesystem made things a lot more convenient, so that’s a property I will want to keep in whatever future backup arrangements I have.

I might very well switch to different photo management software now, assuming I could find any that I prefer, but all software has bugs. Whatever I switch to I would have to ensure that I knew how to extract the tags from that as well, if it doesn’t store them in the files.

I don’t want to store my photos and videos “in the cloud” but it is a shortcoming of Shotwell that I can basically only use it from my desktop at home. Its database does not support multiple or remote access. I wonder if there is some web-based thing that can just read (and cache) the tags out of the files, build dynamic galleries and allow arbitrary searches on them…

Shotwell’s database schema and its use of 16 hexadecimal digits (nibbles?) means I can only store a maximum of 18,446,744,073,709,551,615 (1.844674407×10¹⁹ -1) photos or videos of dogs. Arbitrary limits suck so much.

Greyhounds Marge, Janti and Will at Sainsbury’s Staines with Wimbledon Greyhound Welfare, December 2019

Getting LWP to use a newer OpenSSL

Something broke ^

Today I had a look at a customer’s problem. They had a Perl application that connects to a third party API, and as of sometime today it had started failing to connect, although the remote site API still seemed to be responding in general.

The particular Perl module for this service (doesn’t really matter what it was) wasn’t being very verbose about what was going on. It simply said:

Failed to POST to https://api.example.com/api/v1/message.json

I started by writing a small test program using LWP::UserAgent to do a POST to the same URI, and this time I saw:

500 Can’t connect to api.example.com:443 (SSL connect attempt failed with unknown errorerror:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure)

So, it’s failing to do a TLS handshake. But this was working yesterday. Has anything changed? Yes, the remote service was under a denial of service attack today and they’ve just moved it behind a CDN. TLS connections are now being terminated by the CDN, not the service’s own backend.

And oh dear, the customer’s host is Debian squeeze (!) which comes with OpenSSL 0.9.8. This is badly out of date. Neither the OS nor the OpenSSL version is supported for security any more. It needs to be upgraded.

Unfortunately I am told that upgrading the OS is not an option at this time. So can we update Perl?

Well yes, we could build our own Perl reasonably easily. The underlying issue is OpenSSL, though. So it would be an upgrade of:

  • OpenSSL
  • Perl
  • Net::SSLeay
  • IO::Socket::SSL
  • LWP, as the app’s HTTP client is using that

It’s not actually that bad though. In fact you do not need to build a whole new Perl, you only need to build OpenSSL, Net::SSLeay and IO::Socket::SSL and then tell Perl (and the system’s LWP) to use the new versions of those.

Of course, everything else on the system still uses a dangerously old OpenSSL, so this is not really a long term way to avoid upgrading the operating system.

Building OpenSSL ^

After downloading and unpacking the latest stable release of OpenSSL, the sequence of commands for building, testing and installing it look like this:

$ ./config --prefix=/opt/openssl \
           --openssldir=/opt/openssl \
           -Wl,-rpath,'$(LIBRPATH)'
$ make
$ make test
$ sudo make install

The rpath thing is so that the binaries will find the libraries in the alternate path. If you were instead going to add the library path to the system’s ld.so.conf then you wouldn’t have to have that bit, but I wanted this to be self-contained.

When I did this the first time, all the tests failed and at the install step it said:

ar: /opt/openssl/lib/libcrypto.so: File format not recognized

This turned out to be because the system’s Text::Template Perl module was too old. Version 1.46 or above is required, and squeeze has 1.45.

Installing a newer Text::Template ^

So, before I could even build OpenSSL I needed to install a newer Text::Template. Cpanminus to the rescue.

$ sudo mkdir /opt/perl
$ cd /opt/perl
$ sudo cpanm --local-lib=./cpanm Text::Template

That resulted in me having a newer Text::Template in /opt/perl/cpanm/lib/perl5/. So to make sure every future invocation of Perl used that:

$ export PERL5LIB=/opt/perl/cpanm/lib/perl5/
$ perl -e 'use Text::Template; print $Text::Template::VERSION,"\n";'
1.58

Repeating the OpenSSL build steps from above then resulted in an OpenSSL install in /opt/openssl that passed all its own tests.

Installing newer Net::SSLeay and IO::Socket::SSL ^

Cpanminus once again comes to the rescue, with a twist:

$ cd /opt/perl
$ OPENSSL_PREFIX=/opt/openssl cpanm --local-lib=./cpanm Net::SSLeay IO::Socket::SSL

The OPENSSL_PREFIX is part of Net::SSLeay’s build instructions, and then IO::Socket::SSL uses that as well.

Using the result ^

Ultimately the customer’s Perl application needed to be told to use these new modules. This could be done with either the PERL5LIB environment variable or else by putting:

use lib '/opt/perl/cpanm/lib/perl5';

at the top of the main script.
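A quick way to confirm that the new Net::SSLeay (and therefore the new OpenSSL) is the one being picked up is something like this, assuming the PERL5LIB path from earlier; it should print the new OpenSSL version string rather than 0.9.8:

$ PERL5LIB=/opt/perl/cpanm/lib/perl5 \
    perl -MNet::SSLeay -e 'print Net::SSLeay::SSLeay_version(0), "\n";'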

The application was then once more able to talk TLS to the CDN and it all worked again.

Other recommendations ^

The customer could maybe consider putting the application into a container on a new install of the operating system.

That way, the kernel and whole of the OS would be modern and supported, but just this application would be running with a terribly outdated userland. Over time, more of the bits inside the container could be moved out to the modern host (or another container), avoiding having to do everything at once.

“Designer” dog breeds are not mongrels. They’re much worse.

It doesn’t sit well with me when owners of long-recognised breed dogs mock the new “designer” breeds as being mongrels.

“That’s not a cockapoo, it’s just a mongrel!”

Surely all currently recognised breeds were once mongrels until a human being created them and convinced enough other humans to agree that this is a good set of characteristics?

What are designer breeds made from? Dogs with certain characteristics. Who decides they are an official breed or not? Some kennel club.

All of the existing, recognised older breeds were bred for a purpose or function that primarily served human beings and only secondarily served the welfare of the animal itself. That is, these supposedly superior recognised breeds would never have existed if left to natural selection alone because they all have negatives, it’s just a question of degree. And those breeds are kept in existence again by human desires, not through any higher power. A “purebred” dog exists because humans wanted it to, just as a designer breed dog exists because humans wanted it to. Who cares what it’s called?

If someone shows me their new designer breed puppy and wants to call it a cockapoo because it was bred from a cocker spaniel and a poodle, I don’t care. Am I supposed to respect a kennel club and say “that’s not a real thing”? No, it is a real thing, it’s right in front of me.

Don’t get me wrong, I’ll be sad, because it’s yet another puppy farmed for human pleasure with no regard to its health or future, when there’s tens of thousands of healthy dogs that no one wants already in rescue kennels. Doubly so if the pairing has created something that health-wise is going to have a really hard life. But it’s not a mongrel. In some cases it could dream of being a mongrel because it would have a better life.

The name of the breed isn’t the issue. There are certainly some kennel club recognised breeds that from a welfare point of view just shouldn’t exist as their whole life is a torment, the way certain human-desired traits have been so excessively selected for. A lot of times when the “mongrel” rebuke is brought out, I think it is just something intended to annoy the owners and to convey the idea that a dog should have been rescued, not a puppy farmed. In those cases maybe a new word is needed but I don’t know what that word would be.

I just don’t get the designer breed snobbery though. Whether it’s purely about the breed, or it’s the owner of an established breed they’ve had from birth disparaging the owner of a farmed puppy, it feels really hypocritical. It’s not established vs designer, it’s rescued vs farmed.

Fail2Ban, iptables and config management

Fail2Ban ^

Fail2Ban is a piece of software which can watch log files and take an arbitrary action when a certain number of matches are found.

It is most commonly used to read logs from an SSH daemon in order to insert a firewall rule against hosts that repeatedly fail to log in. Hence Fail → Ban.

Wherever possible, it is best to require public key and/or multi-factor authentication for SSH login. Then, it does not matter how many times an attacker tries to guess passwords as they should never succeed. It’s just log noise.

Sadly I have some hosts where some users require password authentication to be available from the public Internet. Also, even on the hosts that can have password authentication disabled, it is irritating to see the same IPs trying over and over.

Putting SSH on a different port is not sufficient, by the way. It may cut down the log noise a little, but the advent of services that scan the entire Internet and then sell the results has meant that if you run an SSH daemon on any port, it will be found and be the subject of dictionary attacks.

So, Fail2Ban.

iptables ^

The usual firewall on Linux is iptables. By default, when Fail2Ban wants to block an IP address it will insert a rule and then when the block expires it will remove it again.

iptables Interaction With Configuration Management ^

I’ve had all my hosts in configuration management for about 10 years now, and that includes the firewall setup. First it was Puppet but these days it is Ansible.

That worked great when the firewall rules were only managed in the config management, but Fail2Ban introduces firewall changes itself.

Now, it’s been many years since I moved on from Puppet so perhaps a way around this has been found there now. At the time though, I was using the Puppetlabs firewall module and it really did not like seeing changes from outside itself. It would keep reverting them.

It was possible to tell it not to meddle with rules that it didn’t add, but it never did work completely correctly. I would still see changes at every run.

Blackholes To The Rescue ^

I never did manage to come up with a way to control the firewall rules in Puppet but still allow Fail2Ban to add and remove its rules and chains, without there being modifications at every Puppet run.

Instead I sidestepped the problem by using the “route” action of Fail2Ban instead of the “iptables” action. The “route” action simply inserts a blackhole route, as if you did this at the command line:

# ip route add blackhole 192.168.1.1

That blocks all traffic to/from that IP address. Some people may have wanted to only block SSH traffic from those hosts but in my view those hosts are bad actors and I am happy to drop all traffic from/to them.

Problem solved? Well, not entirely.

Multiple Jailhouse Blues ^

Fail2Ban isn’t just restricted to processing logs for one service. Taken together, the set of criteria for banning for a given time over a given set of log files is called a jail, and there can be multiple jails.

When using iptables as the jail action this isn’t much of an issue because the rules are added to separate iptables chains named after the jail itself, e.g. f2b-sshd. You can therefore have the same IP address appearing in multiple different chains and whichever is hit first will ban it.

A common way to configure Fail2Ban is to have one jail banning hosts that have a short burst of failures for a relatively short period of time, and then another jail that bans persistent attackers for a much longer period of time. For example, there could be an sshd jail that looks for 3 failures in 3 minutes and bans for 20 minutes, and then an sshd-hourly jail that looks for 5 failures in an hour and bans for a day.

This doesn’t work with the “route” action because there is only one routing table and you can’t have duplicate routes in it.

Initially you may think you can cause the actual execution of the actions to still succeed with something like this:

actionban   = ip route add blackhole <ip> || true
actionunban = ip route del blackhole <ip> || true

i.e. force them to always succeed even if the IP is already banned or already expired.

The problem now is that the short-term jails can remove bans that the long-term jails have added. It’s a race condition as to which order the adds and removes are done in.

Ansible iptables_raw Deal ^

As I say, I switched to Ansible quite a while ago, and for firewalling here I chose the iptables_raw module.

This has the same issues with changed rules as all my earlier Puppet efforts did.

The docs say that you can set keep_unmanaged and then rules from outside of this module won’t be meddled with. This is true, but still Ansible reports changes on every host every time. It isn’t actually doing a change, it is just noting a change.

I think this is because every time iptables_raw changes the rules, it uses iptables-save to save them out to a file. Then Fail2Ban adds and removes some rules, and next time iptables_raw compares the live rule set with the save file that it saved out last time. So there’s always changes (assuming any Fail2Ban activity).

Someone did ask about the possibility of ignoring some chains, which would be ideal for ignoring all the f2b-* chains, but the response seems to indicate that this will not be happening.

So I am still looking for a way to manage Linux host firewalls in Ansible that can ignore some chains and not want to be in sole control of all rules.

Paul mentioned that from Ansible he uses ferm, which writes rules to files before actioning them, so doesn’t suffer from this problem.

That is a possibility, but if I am going to rewrite all of that I think I should probably do it with something that is going to support nftables, which ferm apparently isn’t.

The Metric System ^

All is not lost, though it is severely bodged.

Routes can have metrics. The metric goes from 0 to 9999, and the lower the number the more important the route is.

There can be multiple routes for the same destination but with different metrics; for example if you have a metric 10 route and a metric 20 route for the same destination, the metric 10 route is chosen.

That means that you can use a different metric for each jail, and then each jail can ban and unban the same IPs without interfering with other jails.

Here’s an action file for the action “route-metric”:

[Definition]
actionban   = ip route add blackhole <ip> metric <metric>
actionunban = ip route del blackhole <ip> metric <metric>

On Debian you might put that in a file called /etc/fail2ban/action.d/route-metric.conf and then in a jail definition use it like this:

[sshd-hourly]
logpath  = /var/log/auth.log
filter   = sshd
enabled  = true
action   = route-metric[metric=9998]
# 5 tries
maxretry = 5
# in one hour
findtime = 3600
# bans for 24 hours
bantime  = 86400

Just make sure to use a different metric number (9998 here) for each jail and that solves that problem.

Clearly that doesn’t solve it in a very nice way though. If you use Ansible and manage your firewall rules in it, what do you use?

Possibly this could instead be worked around by having multiple routing tables.
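I haven’t tried that, but the idea would be roughly this untested sketch: give each jail its own routing table for its blackhole routes, and add a one-off ip rule per table so those tables are actually consulted (table numbers are arbitrary):

# One-off setup, e.g. at boot, one rule per jail's table:
# ip rule add pref 100 lookup 100

# Then a jail's actionban/actionunban would do the equivalent of:
# ip route add blackhole 192.0.2.1 table 100
# ip route del blackhole 192.0.2.1 table 100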

Experiments with RDRAND and EntropyKey

Entropy, when the shannons are gone and you can’t go on ^

The new release of Debian 10 (buster) brings with it some significant things related to entropy:

  1. systemd doesn’t trust entropy saved at last boot
  2. Many system daemons now use getrandom() which requires the CRNG be primed with good entropy
  3. The kernel by default trusts the CPU’s RDRAND instruction if it’s available

A lot of machines — especially virtual machines — don’t have access to a lot of entropy when they start up, and now that systemd isn’t accrediting stored entropy from the previous boot some essential services like ssh may take minutes to start up.

Back in 2011 or so, Intel added a CPU instruction called RDRAND which provides entropy, but there was some concern that it was an unauditable feature that could easily have been compromised, so it never did get used as the sole source of entropy on capable CPUs.

Later on, an option to trust the CPU for providing boot-time entropy was added, and this option was enabled by default in Debian kernels from 10.0 onwards.

I am okay with using RDRAND for boot-time entropy, but some people got very upset about it.

Out of interest I had a look at what effect the various kernel options related to RDRAND would have, and also what about when I use BitFolk’s entropy service.

(As of July 2019 this wiki article is in dire need of rewrite since I believe it states some untrue things about urandom, but the details of what the entropy service is and how to use it are correct)

Experiments ^

These experiments were carried out on a virtual machine which is a default install of Debian 10 (buster) on BitFolk. At package selection only “Standard system utilities” and “SSH server” were selected.

Default boot ^

SSH is available just over 1 second after boot.

[    1.072760] random: get_random_bytes called from start_kernel+0x93/0x52c with crng_init=0
[    1.138541] random: crng done (trusting CPU's manufacturer)

Don’t trust RDRAND for early entropy ^

If I tell the kernel not to trust RDRAND for early entropy by using random.trust_cpu=off on the kernel command line then SSH is available after about 4.5 seconds.

[    1.115416] random: get_random_bytes called from start_kernel+0x93/0x52c with crng_init=0
[    1.231606] random: fast init done
[    4.260130] random: systemd-random-: uninitialized urandom read (512 bytes read)
[    4.484274] random: crng init done

Don’t use RDRAND at all ^

If I completely disable the kernel’s use of RDRAND by using nordrand on the kernel command line then SSH is available after just under 49 seconds.

[    1.110475] random: get_random_bytes called from start_kernel+0x93/0x52c with crng_init=0
[    1.225991] random: fast init done
[    4.298185] random: systemd-random-: uninitialized urandom read (512 bytes read)
[    4.674676] random: dbus-daemon: uninitialized urandom read (12 bytes read)
[    4.682873] random: dbus-daemon: uninitialized urandom read (12 bytes read)
[   48.876084] random: crng init done

Use entropy service but not RDRAND ^

If I disable RDRAND but use BitFolk’s entropy service then SSH is available in just over 10 seconds. I suppose this is slower than with random.trust_cpu=off because in that case RDRAND is still allowed after initial seeding, and we must wait for a userland daemon to start.

Using the entropy service requires the network to be up so I’m not sure how easy it would be to decrease this delay, but 10 seconds is still a lot better than 49 seconds.

[    1.075910] random: get_random_bytes called from start_kernel+0x93/0x52c with crng_init=0
[    1.186650] random: fast init done
[    4.207010] random: systemd-random-: uninitialized urandom read (512 bytes read)
[    4.606789] random: dbus-daemon: uninitialized urandom read (12 bytes read)
[    4.613975] random: dbus-daemon: uninitialized urandom read (12 bytes read)
[   10.257513] random: crng init done

Use entropy service but don’t trust CPU for early seeding ^

This was no different to just random.trust_cpu=off (about 4.5s). I suspect because early seeding completed and then RDRAND supplied more entropy before the network came up and the entropy service daemon could start.

Thoughts ^

I’m glad that my CPUs have RDRAND and I’m prepared to use it for boot-time seeding of the CSPRNG, but not as the machines’ sole entropy source.

With RDRAND available, using the BitFolk entropy service probably doesn’t make that much sense as RDRAND will always be able to supply.

More paranoid customers may want to use random.trust_cpu=off but even then probably don’t need the entropy service since once the CSPRNG is seeded, RDRAND can be mixed in and away they go.
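For anyone who does want random.trust_cpu=off, the usual way to make a kernel command line option persistent on Debian is to add it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerate the config (assuming the machine boots via GRUB):

$ sudoedit /etc/default/grub
  (add random.trust_cpu=off to GRUB_CMDLINE_LINUX_DEFAULT)
$ sudo update-grub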

The truly paranoid may want to disable RDRAND in which case using the entropy service would be recommended since otherwise long delays at boot will happen and severe delays during times of high entropy demand could be seen.

For those who aren’t BitFolk customers and don’t have access to hardware entropy sources and don’t have a CPU with RDRAND support there are some tough choices. Every other option listed on Debian’s relevant wiki article has at least one expert who says it’s a bad choice.

Linux RAID-10 fixed on imbalanced devices?

Recap ^

In a previous article I demonstrated that Linux RAID-10 lacked an optimisation for non-rotational devices that was present in RAID-1.

In the case of imbalanced devices such as my system with one SATA SSD and one PCI NVMe, this could cause RAID-10 to perform 3 times worse than RAID-1 at random reads.

A possible fix ^

Kernel developer Guoqing Jiang contacted me to provide a patch to add the same optimisation that is present in RAID-1 to RAID-10.

Updated performance figures ^

I’ve applied Guoqing’s patch and re-run the tests for the RAID-10 targets. Figures for other targets are from the previous post for comparison.

Sequential IO ^

Throughput (MiB/s)

Test    Fast RAID-1   Fast RAID-10   Fast RAID-10 (patched)   Slow RAID-1   Slow RAID-10   Slow RAID-10 (patched)
Read    1,237         1,682          2,141                    198           188            211
Write   321           321            321                      18            19             19

The patched RAID-10 is the clear winner for sequential IO. It even performs about 27% faster than the unpatched variant.

Random IO ^

IOPS

Test           Fast RAID-1   Fast RAID-10   Fast RAID-10 (patched)   Slow RAID-1   Slow RAID-10   Slow RAID-10 (patched)
Random Read    602,000       208,000        602,000                  501           501            487
Random Write   82,200        82,200         82,200                   25            21             71

The patched RAID-10 is now indistinguishable from the performance of RAID-1, almost 3 times faster than without the patch!

I am unable to explain why RAID-10 random write performance on the slow devices (HDDs) is so much better than before.

The patch ^

Guoqing Jiang’s patch is as follows in case anyone wants to test it. Guoqing has only compile-tested it as they don’t have the required hardware. I have tested it and it seems okay, but don’t use it on any data you care about yet.

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 25e97de36717..2ebe49b18aeb 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -745,15 +745,19 @@ static struct md_rdev *read_balance(struct r10conf *conf,
        int sectors = r10_bio->sectors;
        int best_good_sectors;
        sector_t new_distance, best_dist;
-       struct md_rdev *best_rdev, *rdev = NULL;
+       struct md_rdev *best_dist_rdev, *best_pending_rdev, *rdev = NULL;
        int do_balance;
-       int best_slot;
+       int best_dist_slot, best_pending_slot;
+       int has_nonrot_disk = 0;
+       unsigned int min_pending;
        struct geom *geo = &conf->geo;
 
        raid10_find_phys(conf, r10_bio);
        rcu_read_lock();
-       best_slot = -1;
-       best_rdev = NULL;
+       best_dist_slot = -1;
+       min_pending = UINT_MAX;
+       best_dist_rdev = NULL;
+       best_pending_rdev = NULL;
        best_dist = MaxSector;
        best_good_sectors = 0;
        do_balance = 1;
@@ -775,6 +779,8 @@ static struct md_rdev *read_balance(struct r10conf *conf,
                sector_t first_bad;
                int bad_sectors;
                sector_t dev_sector;
+               unsigned int pending;
+               bool nonrot;
 
                if (r10_bio->devs[slot].bio == IO_BLOCKED)
                        continue;
@@ -811,8 +817,8 @@ static struct md_rdev *read_balance(struct r10conf *conf,
                                        first_bad - dev_sector;
                                if (good_sectors > best_good_sectors) {
                                        best_good_sectors = good_sectors;
-                                       best_slot = slot;
-                                       best_rdev = rdev;
+                                       best_dist_slot = slot;
+                                       best_dist_rdev = rdev;
                                }
                                if (!do_balance)
                                        /* Must read from here */
@@ -825,14 +831,23 @@ static struct md_rdev *read_balance(struct r10conf *conf,
                if (!do_balance)
                        break;
 
-               if (best_slot >= 0)
+               nonrot = blk_queue_nonrot(bdev_get_queue(rdev->bdev));
+               has_nonrot_disk |= nonrot;
+               pending = atomic_read(&rdev->nr_pending);
+               if (min_pending > pending && nonrot) {
+                       min_pending = pending;
+                       best_pending_slot = slot;
+                       best_pending_rdev = rdev;
+               }
+
+               if (best_dist_slot >= 0)
                        /* At least 2 disks to choose from so failfast is OK */
                        set_bit(R10BIO_FailFast, &r10_bio->state);
                /* This optimisation is debatable, and completely destroys
                 * sequential read speed for 'far copies' arrays.  So only
                 * keep it for 'near' arrays, and review those later.
                 */
-               if (geo->near_copies > 1 && !atomic_read(&rdev->nr_pending))
+               if (geo->near_copies > 1 && !pending)
                        new_distance = 0;
 
                /* for far > 1 always use the lowest address */
@@ -841,15 +856,21 @@ static struct md_rdev *read_balance(struct r10conf *conf,
                else
                        new_distance = abs(r10_bio->devs[slot].addr -
                                           conf->mirrors[disk].head_position);
+
                if (new_distance < best_dist) {
                        best_dist = new_distance;
-                       best_slot = slot;
-                       best_rdev = rdev;
+                       best_dist_slot = slot;
+                       best_dist_rdev = rdev;
                }
        }
        if (slot >= conf->copies) {
-               slot = best_slot;
-               rdev = best_rdev;
+               if (has_nonrot_disk) {
+                       slot = best_pending_slot;
+                       rdev = best_pending_rdev;
+               } else {
+                       slot = best_dist_slot;
+                       rdev = best_dist_rdev;
+               }
        }
 
        if (slot >= 0) {

Exploring different Linux RAID-10 layouts with unbalanced devices

Background ^

In a previous article I explored the performance of different Linux RAID configurations in a situation where there are two very mismatched devices.

The two devices are a Samsung SM883 SATA SSD and a Samsung PM983 NVMe. Both of these devices are very fast, but the NVMe can be 6 times faster than the SSD for random (4KiB) reads.

The previous article established that due to performance optimisations in Linux RAID-1 targeted at non-rotational devices like SSDs, RAID-1 outperforms RAID-10 by about 3x for random reads in this unbalanced setup.

RAID-10 Layouts ^

A respondent on the linux-raid list suggested I test out different RAID-10 layouts. The default RAID-10 layout on Linux corresponds to the standard and is called near. There are also two alternative layouts, far and offset. Wikipedia has a good article on the difference between these three layouts.

Charts ^


Sequential IO ^

Reads ^


far and offset layouts perform the same: about twice the speed of a single SSD, but only ~77% of RAID-1 and, interestingly, only ~57% of the RAID-10 near layout.

Writes ^


All layouts perform the same for sequential writes (the same as RAID-1).

Random IO ^

Reads ^


far and offset performed slightly worse than near (~94%) and still only about a third of RAID-1.

Writes ^


All layouts of RAID-10 perform the same as RAID-1 for random writes.

Data Tables ^

This is just the raw data for the charts above. Skip to the conclusions if you’re not interested in seeing the numbers for the things you already saw as pictures.

Sequential IO ^

Throughput (MiB/s)

Test  | SSD | NVMe  | HDD | Fast RAID-1 | Fast RAID-10 (near) | Fast RAID-10 (far) | Fast RAID-10 (offset) | Slow RAID-1 | Slow RAID-10 (near)
Read  | 489 | 2,227 |  26 |       1,237 |               1,682 |                954 |                   954 |         198 |                 188
Write | 447 | 1,754 |  20 |         321 |                 321 |                321 |                   322 |          18 |                  19

Random IO ^

IOPS

Test         | SSD    | NVMe    | HDD | Fast RAID-1 | Fast RAID-10 (near) | Fast RAID-10 (far) | Fast RAID-10 (offset) | Slow RAID-1 | Slow RAID-10 (near)
Random Read  | 98,200 | 605,000 | 256 |     602,000 |             208,000 |            196,000 |               196,000 |         501 |                 501
Random Write | 86,100 | 435,000 |  74 |      82,200 |              82,200 |             82,300 |                82,300 |          25 |                  21

Conclusions ^

I was not able to see any meaningful difference between the two non-default Linux RAID-10 layouts (far and offset) with my devices, and I think it's likely this holds for non-rotational devices in general.

far and offset layouts performed significantly worse than the default near layout for sequential read IO and no better than the default near layout in any other scenario.

Since layouts other than the default near layout restrict the reshaping options for RAID-10, I don't recommend using them for arrays composed entirely of non-rotational devices.

Additionally, if — as in my case — the devices have a big variance in performance compared to each other then it remains best to use RAID-1.

Appendix ^

Setup ^

I’ll only cover what has changed from the previous article.

Partitioning ^

I added two extra 10GiB partitions on each device; one for testing the far layout and the other for testing the offset layout.

$ sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.3

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sdc: 7501476528 sectors, 3.5 TiB
Model: SAMSUNG MZ7KH3T8
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 7D7DFDA2-502C-47FE-A437-5442CCCE7E6B
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 7501476494
Partitions will be aligned on 2048-sector boundaries
Total free space is 7438561901 sectors (3.5 TiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        20973567   10.0 GiB    8300  Linux filesystem
   2        20973568        41945087   10.0 GiB    8300  Linux filesystem
   3        41945088        62916607   10.0 GiB    8300  Linux filesystem

Command (? for help): n
Partition number (4-128, default 4):
First sector (34-7501476494, default = 62916608) or {+-}size{KMGTP}:
Last sector (62916608-7501476494, default = 7501476494) or {+-}size{KMGTP}: +10g
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): n
Partition number (5-128, default 5):
First sector (34-7501476494, default = 83888128) or {+-}size{KMGTP}:
Last sector (83888128-7501476494, default = 7501476494) or {+-}size{KMGTP}: +10g
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdc.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
$ sudo gdisk /dev/nvme0n1                                                            
GPT fdisk (gdisk) version 1.0.3                                                   
 
Partition table scan:                                                                
  MBR: protective                                                              
  BSD: not present                                                             
  APM: not present                                                                   
  GPT: present                                                                
 
Found valid GPT with protective MBR; using GPT.                                     
 
Command (? for help): p                                                          
Disk /dev/nvme0n1: 7501476528 sectors, 3.5 TiB                        
Model: SAMSUNG MZQLB3T8HALS-00007                                                
Sector size (logical/physical): 512/512 bytes                                        
Disk identifier (GUID): C6F311B7-BE47-47C1-A1CB-F0A6D8C13136                        
Partition table holds up to 128 entries                                           
Main partition table begins at sector 2 and ends at sector 33                     
First usable sector is 34, last usable sector is 7501476494                          
Partitions will be aligned on 2048-sector boundaries                          
Total free space is 7438561901 sectors (3.5 TiB)                                   
 
Number  Start (sector)    End (sector)  Size       Code  Name                        
   1            2048        20973567   10.0 GiB    8300  Linux filesystem
   2        20973568        41945087   10.0 GiB    8300  Linux filesystem
   3        41945088        62916607   10.0 GiB    8300  Linux filesystem
 
Command (? for help): n
Partition number (4-128, default 4):
First sector (34-7501476494, default = 62916608) or {+-}size{KMGTP}:
Last sector (62916608-7501476494, default = 7501476494) or {+-}size{KMGTP}: +10g
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
 
Command (? for help): n
Partition number (5-128, default 5):
First sector (34-7501476494, default = 83888128) or {+-}size{KMGTP}:
Last sector (83888128-7501476494, default = 7501476494) or {+-}size{KMGTP}: +10g
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
 
Command (? for help): w
 
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
 
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/nvme0n1.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
$ sudo partprobe /dev/sdc
$ sudo partprobe /dev/nvme0n1
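
If you prefer not to drive gdisk interactively, the same two extra partitions could be created non-interactively with sgdisk. This is just a sketch of equivalent commands, not what was actually run for this article:

$ sudo sgdisk --new=4:0:+10G --typecode=4:8300 /dev/sdc
$ sudo sgdisk --new=5:0:+10G --typecode=5:8300 /dev/sdc
$ sudo sgdisk --new=4:0:+10G --typecode=4:8300 /dev/nvme0n1
$ sudo sgdisk --new=5:0:+10G --typecode=5:8300 /dev/nvme0n1
$ sudo partprobe /dev/sdc /dev/nvme0n1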

Array creation ^

$ sudo mdadm --create \
  --verbose \
  --assume-clean \
  /dev/md8 \
  --level=10 \
  --raid-devices=2 \
  --layout=f2 \
  /dev/sdc4 /dev/nvme0n1p4
mdadm: chunk size defaults to 512K
mdadm: size set to 10476544K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md8 started.
$ sudo mdadm --create \
  --verbose \
  --assume-clean \
  /dev/md9 \
  --level=10 \
  --raid-devices=2 \
  --layout=o2 \
  /dev/sdc5 /dev/nvme0n1p5
mdadm: chunk size defaults to 512K
mdadm: size set to 10476544K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md9 started.
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
 
md9 : active raid10 nvme0n1p5[1] sdc5[0]
      10476544 blocks super 1.2 512K chunks 2 offset-copies [2/2] [UU]
 
md8 : active raid10 nvme0n1p4[1] sdc4[0]
      10476544 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
 
md7 : active raid10 sde3[1] sdd3[0]
      10476544 blocks super 1.2 2 near-copies [2/2] [UU]
 
md6 : active raid1 sde2[1] sdd2[0]
      10476544 blocks super 1.2 [2/2] [UU]
 
md5 : active raid10 nvme0n1p3[1] sdc3[0]
      10476544 blocks super 1.2 2 near-copies [2/2] [UU]
 
md4 : active raid1 nvme0n1p2[1] sdc2[0]
      10476544 blocks super 1.2 [2/2] [UU]
 
md2 : active (auto-read-only) raid10 sda3[0] sdb3[1]
      974848 blocks super 1.2 2 near-copies [2/2] [UU]
 
md0 : active raid1 sdb1[1] sda1[0]
      497664 blocks super 1.2 [2/2] [UU]
 
md1 : active raid10 sda2[0] sdb2[1]
      1950720 blocks super 1.2 2 near-copies [2/2] [UU]
 
md3 : active raid10 sda5[0] sdb5[1]
      12025856 blocks super 1.2 2 near-copies [2/2] [UU]
 
unused devices: <none>
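
The layout of an existing array can also be checked with mdadm --detail; for the far array created above I would expect something like this (the exact wording can vary between mdadm versions):

$ sudo mdadm --detail /dev/md8 | grep -i layout
            Layout : far=2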

Raw fio Output ^

Only output from the tests on the arrays with non-default layouts is shown here; the rest is in the previous article.

This is a lot of output and it’s the last thing in this article, so if you’re not interested in it you should stop reading now.
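
The fio jobs are as described in the previous article. For orientation, the parameters visible in the output below (4 processes, 4KiB blocks, queue depth 32, libaio, an 8GiB file per job) correspond to an invocation roughly like the following; the filename and direct=1 here are illustrative assumptions, not taken from the actual job files:

$ fio --name=fast-raid10-f2_seqread \
      --filename=/srv/fast-raid10-f2/fio.tmp \
      --rw=read --bs=4k --size=8g \
      --numjobs=4 --iodepth=32 --ioengine=libaio \
      --direct=1 --group_reporting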

fast-raid10-f2_seqread: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096
B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
fast-raid10-f2_seqread: Laying out IO file (1 file / 8192MiB)
 
fast-raid10-f2_seqread: (groupid=0, jobs=4): err= 0: pid=5287: Sun Jun  2 00:18:35 20
19
  read: IOPS=244k, BW=954MiB/s (1001MB/s)(32.0GiB/34340msec)
   bw (  KiB/s): min=968176, max=984312, per=100.00%, avg=977239.00, stdev=740.55, sa
mples=272
   iops        : min=242044, max=246078, avg=244309.69, stdev=185.14, samples=272
  cpu          : usr=6.98%, sys=33.05%, ctx=738159, majf=0, minf=167
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=8388608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
   READ: bw=954MiB/s (1001MB/s), 954MiB/s-954MiB/s (1001MB/s-1001MB/s), io=32.0GiB (3
4.4GB), run=34340-34340msec
 
Disk stats (read/write):
    md8: ios=8379702/75, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=276780
2/15, aggrmerge=1426480/64, aggrticks=618421/9, aggrin_queue=604770, aggrutil=99.93%
  nvme0n1: ios=4194304/15, merge=0/64, ticks=154683/0, in_queue=160368, util=99.93%
  sdc: ios=1341300/16, merge=2852961/64, ticks=1082160/19, in_queue=1049172, util=99.
81%
fast-raid10-o2_seqread: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096
B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
fast-raid10-o2_seqread: Laying out IO file (1 file / 8192MiB)
 
fast-raid10-o2_seqread: (groupid=0, jobs=4): err= 0: pid=5312: Sun Jun  2 00:19:31 20
19
  read: IOPS=244k, BW=954MiB/s (1000MB/s)(32.0GiB/34358msec)
   bw (  KiB/s): min=969458, max=981640, per=100.00%, avg=976601.62, stdev=607.72, sa
mples=272
   iops        : min=242364, max=245410, avg=244150.46, stdev=151.95, samples=272
  cpu          : usr=5.91%, sys=33.95%, ctx=732590, majf=0, minf=162
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=8388608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
   READ: bw=954MiB/s (1000MB/s), 954MiB/s-954MiB/s (1000MB/s-1000MB/s), io=32.0GiB (3
4.4GB), run=34358-34358msec
 
Disk stats (read/write):
    md9: ios=8385126/75, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=276691
0/15, aggrmerge=1427340/64, aggrticks=618657/10, aggrin_queue=606194, aggrutil=99.99%
  nvme0n1: ios=4194304/15, merge=0/64, ticks=157297/1, in_queue=163632, util=99.94%
  sdc: ios=1339516/16, merge=2854681/64, ticks=1080017/19, in_queue=1048756, util=99.
99%
fast-raid10-f2_seqwrite: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 40
96B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
 
fast-raid10-f2_seqwrite: (groupid=0, jobs=4): err= 0: pid=5337: Sun Jun  2 00:21:13 2
019
  write: IOPS=82.2k, BW=321MiB/s (337MB/s)(32.0GiB/101992msec); 0 zone resets
   bw (  KiB/s): min=315288, max=336184, per=99.99%, avg=328946.06, stdev=670.42, sam
ples=812
   iops        : min=78822, max=84046, avg=82236.45, stdev=167.60, samples=812
  cpu          : usr=2.15%, sys=34.88%, ctx=973042, majf=0, minf=38
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8388608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
  WRITE: bw=321MiB/s (337MB/s), 321MiB/s-321MiB/s (337MB/s-337MB/s), io=32.0GiB (34.4
GB), run=101992-101992msec
 
Disk stats (read/write):
    md8: ios=0/8380840, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/83882
06, aggrmerge=0/461, aggrticks=0/704880, aggrin_queue=724510, aggrutil=100.00%
  nvme0n1: ios=0/8388649, merge=0/20, ticks=0/123227, in_queue=202792, util=100.00%
  sdc: ios=0/8387763, merge=0/902, ticks=0/1286533, in_queue=1246228, util=98.78%
fast-raid10-o2_seqwrite: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 40
96B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
 
fast-raid10-o2_seqwrite: (groupid=0, jobs=4): err= 0: pid=5366: Sun Jun  2 00:22:56 2
019
  write: IOPS=82.4k, BW=322MiB/s (337MB/s)(32.0GiB/101820msec); 0 zone resets
   bw (  KiB/s): min=316248, max=420304, per=100.00%, avg=331319.30, stdev=3808.39, s
amples=807
   iops        : min=79062, max=105076, avg=82829.76, stdev=952.10, samples=807
  cpu          : usr=2.19%, sys=34.22%, ctx=975496, majf=0, minf=37
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8388608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
  WRITE: bw=322MiB/s (337MB/s), 322MiB/s-322MiB/s (337MB/s-337MB/s), io=32.0GiB (34.4
GB), run=101820-101820msec
 
Disk stats (read/write):
    md9: ios=0/8374085, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/83882
42, aggrmerge=0/442, aggrticks=0/704724, aggrin_queue=728030, aggrutil=100.00%
  nvme0n1: ios=0/8388657, merge=0/21, ticks=0/124463, in_queue=211316, util=100.00%
  sdc: ios=0/8387828, merge=0/864, ticks=0/1284985, in_queue=1244744, util=98.83%
fast-raid10-f2_randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
 
fast-raid10-f2_randread: (groupid=0, jobs=4): err= 0: pid=5412: Sun Jun  2 00:23:39 2
019
  read: IOPS=196k, BW=767MiB/s (804MB/s)(32.0GiB/42725msec)
   bw (  KiB/s): min=753863, max=816072, per=99.95%, avg=784998.94, stdev=3053.58, sa
mples=340
   iops        : min=188465, max=204018, avg=196249.72, stdev=763.40, samples=340
  cpu          : usr=4.97%, sys=25.34%, ctx=884047, majf=0, minf=161
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=8388608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
   READ: bw=767MiB/s (804MB/s), 767MiB/s-767MiB/s (804MB/s-804MB/s), io=32.0GiB (34.4
GB), run=42725-42725msec
 
Disk stats (read/write):
    md8: ios=8371889/4, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=4191336
/15, aggrmerge=2963/1, aggrticks=1470317/6, aggrin_queue=854184, aggrutil=100.00%
  nvme0n1: ios=4194304/15, merge=0/1, ticks=317755/0, in_queue=338708, util=100.00%
  sdc: ios=4188368/16, merge=5926/2, ticks=2622880/12, in_queue=1369660, util=99.90%
fast-raid10-o2_randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
 
fast-raid10-o2_randread: (groupid=0, jobs=4): err= 0: pid=5437: Sun Jun  2 00:24:22 2
019
  read: IOPS=196k, BW=767MiB/s (804MB/s)(32.0GiB/42725msec)
   bw (  KiB/s): min=741672, max=832016, per=99.96%, avg=785051.96, stdev=4207.46, sa
mples=340
   iops        : min=185418, max=208004, avg=196262.98, stdev=1051.86, samples=340
  cpu          : usr=4.51%, sys=25.36%, ctx=886783, majf=0, minf=164
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=8388608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
   READ: bw=767MiB/s (804MB/s), 767MiB/s-767MiB/s (804MB/s-804MB/s), io=32.0GiB (34.4
GB), run=42725-42725msec
 
Disk stats (read/write):
    md9: ios=8371755/4, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=4191564
/7, aggrmerge=2733/1, aggrticks=1469572/3, aggrin_queue=853154, aggrutil=100.00%
  nvme0n1: ios=4194304/7, merge=0/1, ticks=317525/0, in_queue=336088, util=100.00%
  sdc: ios=4188825/8, merge=5466/1, ticks=2621620/6, in_queue=1370220, util=99.87%
fast-raid10-f2_randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (
T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
 
fast-raid10-f2_randwrite: (groupid=0, jobs=4): err= 0: pid=5462: Sun Jun  2 00:26:04
2019
  write: IOPS=82.3k, BW=321MiB/s (337MB/s)(32.0GiB/101961msec); 0 zone resets
   bw (  KiB/s): min=318832, max=396249, per=100.00%, avg=329384.74, stdev=1762.35, s
amples=810
   iops        : min=79708, max=99061, avg=82346.02, stdev=440.57, samples=810
  cpu          : usr=2.42%, sys=34.38%, ctx=975633, majf=0, minf=39
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8388608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
  WRITE: bw=321MiB/s (337MB/s), 321MiB/s-321MiB/s (337MB/s-337MB/s), io=32.0GiB (34.4
GB), run=101961-101961msec
 
Disk stats (read/write):
    md8: ios=0/8383420, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/83886
62, aggrmerge=0/17, aggrticks=0/704633, aggrin_queue=735234, aggrutil=100.00%
  nvme0n1: ios=0/8388655, merge=0/14, ticks=0/123197, in_queue=208804, util=100.00%
  sdc: ios=0/8388669, merge=0/20, ticks=0/1286069, in_queue=1261664, util=98.75%
fast-raid10-o2_randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (
T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.13-42-g8066f
Starting 4 processes
 
fast-raid10-o2_randwrite: (groupid=0, jobs=4): err= 0: pid=5491: Sun Jun  2 00:27:47
2019
  write: IOPS=82.3k, BW=322MiB/s (337MB/s)(32.0GiB/101880msec); 0 zone resets
   bw (  KiB/s): min=315369, max=418520, per=100.00%, avg=330793.95, stdev=3466.64, s
amples=808
   iops        : min=78842, max=104630, avg=82698.46, stdev=866.66, samples=808
  cpu          : usr=2.21%, sys=34.38%, ctx=972875, majf=0, minf=39
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8388608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
 
Run status group 0 (all jobs):
  WRITE: bw=322MiB/s (337MB/s), 322MiB/s-322MiB/s (337MB/s-337MB/s), io=32.0GiB (34.4
GB), run=101880-101880msec
 
Disk stats (read/write):
    md9: ios=0/8368626, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/83886
67, aggrmerge=0/20, aggrticks=0/705086, aggrin_queue=732522, aggrutil=100.00%
  nvme0n1: ios=0/8388658, merge=0/19, ticks=0/123370, in_queue=209792, util=100.00%
  sdc: ios=0/8388677, merge=0/21, ticks=0/1286802, in_queue=1255252, util=98.95%

Why Linux RAID-10 sometimes performs worse than RAID-1

Background ^

In the previous article, “Linux RAID-10 may not always be the best performer, but I don’t know why“, we discovered that a RAID-10 of two mismatched drives can perform 3 times worse at random reads than a RAID-1 of the same devices.

In my case the two devices were a Samsung PM983 NVMe and a Samsung SM883 SATA SSD. The NVMe is about 6 times faster than the SSD at random reads.

An explanation ^

I posted about my findings on the linux-raid mailing list and got a very helpful reply from Guoqing Jiang.

It seems that RAID-1 has an enhancement for non-rotational devices such that it will direct IO to the least-loaded device.

For rotational devices, how far the requested sectors are from the device's current head position matters more than which device is least loaded, because a long seek to the correct location carries a big penalty.

This is not so for non-rotational devices like SSDs and NVMe drives, so if a non-rotational and lightly loaded device is available it is selected for IO. That is why with RAID-1 I saw most of the IOs going to the fast NVMe device, with a corresponding boost in performance compared to RAID-10.
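
Whether the kernel considers a given device rotational, which is what this optimisation keys off, can be checked in sysfs. For the two devices used here I would expect both to report 0 (the output below is illustrative rather than captured):

$ cat /sys/block/nvme0n1/queue/rotational
0
$ cat /sys/block/sdc/queue/rotational
0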


The same enhancement has not been made for RAID-10. I do not know if this is just because no one got around to it yet, or if it’s because it may be technically harder (or impossible).