The Problem ^
Sometimes Linux logs interesting things with sector offsets. For example:
Jul 23 23:11:19 tanqueray kernel: [197925.429561] sg phys_addr:0x00000015bac60000 offset:0 length:4096 dma_address:0x00000012cf47a000 dma_length:4096
Jul 23 23:11:19 tanqueray kernel: [197925.430323] sg phys_addr:0x00000015bac5d000 offset:0 length:4608 dma_address:0x00000012cf47b000 dma_length:4608
Jul 23 23:11:19 tanqueray kernel: [197925.431052] sg phys_addr:0x00000015bac5e200 offset:512 length:3584 dma_address:0x00000012cf47c200 dma_length:3584
Jul 23 23:11:19 tanqueray kernel: [197925.431824] sg phys_addr:0x00000015bac2e000 offset:0 length:4096 dma_address:0x00000012cf47d000 dma_length:4096
. . .
Jul 23 23:11:19 tanqueray kernel: [197925.434447] Invalid SGL for payload:131072 nents:32
. . .
Jul 23 23:11:19 tanqueray kernel: [197925.454419] blk_update_request: I/O error, dev nvme0n1, sector 509505343 op 0x1:(WRITE) flags 0x800 phys_seg 32 prio class 0
Jul 23 23:11:19 tanqueray kernel: [197925.464644] md/raid1:md5: Disk failure on nvme0n1p5, disabling device.
Jul 23 23:11:19 tanqueray kernel: [197925.464644] md/raid1:md5: Operation continuing on 1 devices.
I’d like to know which logical volume sector 509505343 of /dev/nvme0n1p5 corresponds to.
At the md level ^
Thankfully this is a RAID-1 so every device in it has the exact same layout.
$ grep -A 2 ^md5 /proc/mdstat
md5 : active raid1 nvme0n1p5 sda5
      3738534208 blocks super 1.2 [2/2] [UU]
      bitmap: 2/28 pages [8KB], 65536KB chunk
The superblock format 1.2 places the RAID metadata near the start of each member device, with the data beginning at a fixed data offset (mdadm --examine reports it as "Data Offset"). Because this is RAID-1 that offset is identical on both members, and it is small relative to the extent ranges we will be looking at, so I am going to ignore it.
For all intents and purposes sector 509505343 of /dev/nvme0n1p5 is the same as sector 509505343 of /dev/md5.
If I’d been using a different RAID level like 5 or 6 then this would have been far more complicated as the data would have been striped across multiple devices at different offsets, together with parity. Some layouts of Linux RAID-10 would also have different offsets.
At the lvm level ^
LVM has physical volumes (PVs) that are split into extents, then one or more ranges of one or more extents make up a logical volume (LV). The physical volumes are just the underlying device, so in my case that’s /dev/md5.
Offset into the PV ^
LVM has some metadata at the start of the PV, so we first work out how far into the PV the extents can start:
$ sudo pvs --noheadings -o pe_start --units s /dev/md5
    2048S
So, sector 509505343 is actually 509503295 sectors into this PV, because the first 2048 sectors are reserved for metadata.
How big is an extent? ^
Next we need to know how big an LVM extent is.
$ sudo pvdisplay --units s /dev/md5 | grep 'PE Size'
  PE Size               8192 Sectors
There are 8192 sectors in each of the extents in this PV, so this sector is inside extent number 509503295 / 8192 = 62195.22644043.
It’s fractional because naturally the sector is not on an exact PE boundary. If I need to I could work out from the remainder how many sectors into PE 62195 this is, but I’m only interested in the LV name and each LV has an integer number of PEs, so that’s fine: PE 62195.
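The arithmetic above can be sketched in a few lines of Python, using the values from this machine (512-byte sectors assumed throughout):

```python
# Translate a device sector into an LVM physical extent number.
sector = 509505343   # from the kernel log
pe_start = 2048      # first usable sector of the PV (pvs -o pe_start)
pe_size = 8192       # sectors per physical extent (pvdisplay)

offset = sector - pe_start          # sectors into the PV's extent area
pe, remainder = divmod(offset, pe_size)
print(pe, remainder)  # 62195 1855
```

The remainder, 1855 sectors, is how far into PE 62195 the sector sits; it will be useful later if you want to go all the way down to a file.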
Look at the PV’s mappings ^
Now you can dump out a list of mappings for the PV. This will show you what each range of extents corresponds to. Note that there might be multiple ranges for an LV if it’s been grown later on.
$ sudo pvdisplay --maps /dev/md5 | grep -A1 'Physical extent'
. . .
  Physical extent 58934 to 71733:
    Logical volume	/dev/myvg/domu_backup4_xvdd
--
  Physical extent 71734 to 912726:
    FREE
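With a long map this lookup is easy to script; a minimal sketch in Python, using just the two ranges shown above (a real PV would have more segments):

```python
# PE ranges from `pvdisplay --maps` as (first_pe, last_pe, name) tuples.
segments = [
    (58934, 71733, "/dev/myvg/domu_backup4_xvdd"),
    (71734, 912726, "FREE"),
]

def lv_for_pe(pe, segments):
    """Return the LV (or FREE) whose extent range contains pe."""
    for start, end, name in segments:
        if start <= pe <= end:
            return name
    return None

print(lv_for_pe(62195, segments))  # /dev/myvg/domu_backup4_xvdd
```

PE 62195 falls in the 58934–71733 range, so the failing write was to /dev/myvg/domu_backup4_xvdd.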
What’s going on here then? ^
I’m not sure, but there appears to be a kernel bug and it’s probably got something to do with the fact that this LV is a disk with an unaligned partition table:
$ sudo fdisk -u -l /dev/myvg/domu_backup4_xvdd
Disk /dev/myvg/domu_backup4_xvdd: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x07c7ce4c
Device Boot Start End Sectors Size Id Type
/dev/myvg/domu_backup4_xvdd1 63 104857599 104857537 50G 83 Linux
Partition 1 does not start on physical sector boundary.
The Linux NVMe driver can only do IO in multiples of 4096 bytes. As seen in the initial logs, two of the scatter-gather entries were for 4608 and 3584 bytes respectively; neither is divisible by 4096, which appears to be what produced the "Invalid SGL" error:
. . .
Jul 23 23:11:19 tanqueray kernel: [197925.430323] sg phys_addr:0x00000015bac5d000 offset:0 length:4608 dma_address:0x00000012cf47b000 dma_length:4608
Jul 23 23:11:19 tanqueray kernel: [197925.431052] sg phys_addr:0x00000015bac5e200 offset:512 length:3584 dma_address:0x00000012cf47c200 dma_length:3584
. . .
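Assuming the 4096-byte constraint described above, a quick Python check picks out the offending entries from the four scatter-gather lengths in the log:

```python
# Lengths of the scatter-gather entries from the kernel log.
lengths = [4096, 4608, 3584, 4096]

# Entries whose length is not a whole number of 4096-byte units.
bad = [n for n in lengths if n % 4096 != 0]
print(bad)  # [4608, 3584]
```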
Going further: finding the file ^
I’m not going to go that far, because it’s fairly likely that the misaligned partition is the root cause and that many kinds of IO to this LV would trigger the same error.
If you did want to though, you’d first have to look at the partition table to see where your filesystem starts.
The fractional part of the earlier division gives 0.22644043 * 8192 = 1855 sectors into the LV. Partition 1 starts at sector 63, so that is 1855 - 63 = 1792 sectors into the filesystem.
You can then (for ext4) use debugfs to poke about and see which file that corresponds to.
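The remaining arithmetic, as a sketch in Python (assuming 512-byte sectors and a 4096-byte ext4 block size, which you would confirm with tune2fs -l):

```python
# Convert the sector-within-LV into a filesystem block number.
sector_in_lv = 1855        # remainder from the PE calculation
partition_start = 63       # from the fdisk output above
fs_block_size = 4096       # ext4 block size, an assumption here

sector_in_fs = sector_in_lv - partition_start
fs_block = sector_in_fs * 512 // fs_block_size
print(sector_in_fs, fs_block)  # 1792 224
```

The resulting block number could then be fed to debugfs's icheck command to find which inode (if any) uses that block, and ncheck to turn the inode back into a path.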