New fileserver for home

specialbrew's disks

Recently my fileserver, becks, was not only getting filled to capacity but was also undergoing some severe performance problems. It’s by no means a poorly-specced machine (not for home use anyway) but my use of rsnapshot has grown so much in the last 6 months that it was no longer up to the job.

Read on for the saga of its replacement.

I only have about 46GiB of files in the rsnapshot directory tree compared to a total disk space of something like 300GiB. Not a disaster, as I could always find big media files to archive off to DVD. However, on closer examination there are something like 22 million files inside that rsnapshot tree: many millions of hardlinks and other small files.
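
If you want to check how bad your own snapshot tree is, counting entries with find and checking inode usage will tell you quickly. A sketch — the path is an assumption, substitute your own snapshot_root:

```shell
# Count every entry (files, directories, hardlinks) under the snapshot tree.
# /srv/rsnapshot is an illustrative path; substitute your own snapshot_root.
sudo find /srv/rsnapshot -xdev | wc -l

# Inode usage on that filesystem is a quicker proxy for "millions of tiny files":
df -i /srv/rsnapshot
```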

Every time rsnapshot was running (every 4 hours, plus a bit more) it was using up all the disk bandwidth to remove, move and rsync these files. As this is my home fileserver as well, it was interfering with music and video playback and generally being rather annoying. It was also getting to the point where there was very little time left between rsnapshot runs if I wanted to do anything with the rsnapshot files themselves.
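
For reference, a 4-hourly schedule like that is normally driven from cron. A sketch of the crontab — the interval names here are assumptions and must match whatever is defined in your rsnapshot.conf:

```shell
# /etc/cron.d/rsnapshot — illustrative schedule; the interval names
# (hourly/daily/weekly) must match those defined in rsnapshot.conf.
0 */4 * * *   root  /usr/bin/rsnapshot hourly
30 3  * * *   root  /usr/bin/rsnapshot daily
0  3  * * 1   root  /usr/bin/rsnapshot weekly
```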

So, finding myself with a bit of spare cash I decided to build a decent fileserver to replace it, and thus we have specialbrew (thanks to popey for picking the name)!

Icydock

The Icydocks are a new experiment for me, a product I stumbled across on Scan that seemed like a nice idea: three slots for 3.5″ SATA disks in an enclosure that fits into two 5.25″ bays with its own cooling. A lot more convenient than having to wedge the hard drives inside any old place with crap air flow, and always having to take the machine apart should one of them die (6 disks means 6 times as many hardware failures).

Coolermaster Stacker 310

I decided to go all-out on the case as well and buy a really nice one with plenty of space for further expansion. Hopefully when I outgrow this fileserver I will just be able to replace some of the parts instead of the whole thing. As long as ATX is still in popular use I suppose.

Given that there are four case fans (2x120mm, 2x80mm) plus a CPU fan I was a bit concerned about the noise but amazingly it’s running quieter than becks does. Perhaps due to the better case.

At my first install attempt I had a bit of a nightmare with the disk subsystem. I had booted a Knoppix 4.0 DVD and while playing around with the disks kept getting messages like these in the logs:

ata2: command 0x25 timeout, stat 0x50 host_stat 0x2
ata1: command 0x25 timeout, stat 0x50 host_stat 0x1
ata1: command 0x25 timeout, stat 0x50 host_stat 0x1
ata1: command 0x25 timeout, stat 0x50 host_stat 0x2

as well as painfully slow performance coinciding with each message. We’re talking 100KiB/sec here.

Much shuffling of disks occurred over several days but eventually I worked out two things:

  • One of my disks was dead regardless of this problem, returning nasty errors all over the place and failing its SMART self-tests.
  • The icydocks really do seem to need all three power connectors plugged in.
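
Checking a suspect disk's SMART state is straightforward with smartmontools. A sketch — the device name is an assumption, point it at the disk under suspicion:

```shell
# Run a short SMART self-test on a suspect disk and read back the log.
# /dev/sda is illustrative.
sudo smartctl -t short /dev/sda
sleep 120                      # short tests usually complete within 2 minutes
sudo smartctl -l selftest /dev/sda

# Quick overall health verdict:
sudo smartctl -H /dev/sda
```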

Back of an Icydock

That last bit may sound obvious, but looking at the back of the icydocks you’ll see they have two 4-pin molex connectors and one SATA power connector, which sort of hinted to me that maybe it wanted either the two molex or the one SATA. The documentation with these things is pretty laughable, being restricted to a diagram of what all the bits are, in some fusion of Japanese, English and German. Anyway, finally, after plugging in all 3 power connectors on each icydock I was able to hammer the disks without seeing any of these errors.

Oh, it should also be noted that labelling the disks with their serial number is really handy when you’re dealing with one of these cheapo arrays that don’t have drive identification lights and the like. Supposedly the icydocks support SATA II activity signals, but for some reason all 3 activity lights on each icydock are lit at once whenever there is any access. It may be because only 2 out of the 3 disks in each icydock are connected to a SATA II controller (the SiI3112 is only SATA I); I don’t know.

Anyway, so activity lights are no way to identify disks, and Linux only knows the disks by their device name. Unfortunately, depending on where the cables are connected, where the SATA controllers are connected, BIOS settings and which disks are inserted, the device names all move about. If you reboot after removing /dev/sdc then you’d like the box to come up with /dev/sd{a,b,d,e,f}, but of course it won’t: you’ll actually get /dev/sd{a,b,c,d,e}, with sd{d,e,f} renaming themselves.

Because of all that I prefer to label disks with their serial number if there is any possibility of their device name changing. I can then work out which device corresponds to which disk like so:

$ for d in a b c d e f; do serial=$(sudo hdparm -I /dev/sd${d} | grep -i serial); echo "sd${d}: $serial"; done
sda: Serial Number: 3QF00BCC
sdb: Serial Number: 3QF00BSV
sdc: Serial Number: 3QF00AWJ
sdd: Serial Number: 3QF01NWZ
sde: Serial Number: 3QF00BTH
sdf: Serial Number: 3QF00BXK
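
As an aside, udev can sidestep the naming problem entirely: it maintains stable symlinks keyed on the drive serial number, so the path survives any reshuffling. A sketch — the exact symlink names vary with the udev version:

```shell
# Stable, serial-number-based device names that survive reboots and
# cable reshuffles; filter out the per-partition links for readability.
ls -l /dev/disk/by-id/ | grep -v -- -part
```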

Installing Debian amd64 onto this machine was not trivial, because the Sarge debian-installer’s kernel was not new enough to support the motherboard’s 4 ICH7 SATA ports nor its e1000 gigabit ethernet.

Ubuntu Dapper to the rescue! Yes, another reason to praise Ubuntu, even if it is just for enabling me to install Debian Sarge! The Dapper desktop amd64 DVD provided a live DVD environment from where I could use my disks and ethernet, set up software RAID and LVM to my heart’s content, before doing a debootstrap to install amd64 Sarge.
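
The debootstrap step itself is a one-liner once the target filesystems are mounted. A sketch — the /target mount point and mirror URL are assumptions:

```shell
# Install a base amd64 Sarge system into the mounted target root.
sudo debootstrap --arch amd64 sarge /target http://ftp.debian.org/debian

# Then chroot in to install a kernel, grub, and write /etc/fstab:
sudo chroot /target /bin/bash
```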

Linux MD’s ability to create working degraded RAID arrays by specifying some devices as missing was really helpful for letting me get started even while waiting for my faulty HD to be replaced, and in future should let me expand the array without having to restore from backups (a RAID-10 can keep functioning with up to half of its disks missing, as long as no mirror pair loses both).
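
Creating such a degraded array just means writing the word missing in place of the absent device. A sketch — the device and partition names are illustrative:

```shell
# Build a 6-way RAID-10 with one slot deliberately left empty; the
# array runs degraded until the replacement disk arrives.
sudo mdadm --create /dev/md2 --level=10 --raid-devices=6 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 missing

# Once the replacement is in, hot-add it and let md resync:
sudo mdadm /dev/md2 --add /dev/sdf3
```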

I’ve done my usual routine of using a small RAID-1 for /boot, another for / and then the rest in a single RAID array under LVM control. I really don’t like initrd/initramfs which is why I prefer to have /boot and / outside of LVM in a RAID-1 which grub has no problem dealing with. I know I am old-fashioned in this regard; initramfs fanboys do not need to remind me that it will be the only way in future.
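
The LVM side of that layout is only a few commands on top of the big array. A sketch — /dev/md2 and the sizes are assumptions, though the VG and LV names match the raid10-data, raid10-var and raid10-tmp volumes this box actually uses:

```shell
# Put the big RAID-10 under LVM control and carve out xfs volumes.
sudo pvcreate /dev/md2
sudo vgcreate raid10 /dev/md2
sudo lvcreate -L 880G -n data raid10
sudo lvcreate -L 4G   -n var  raid10
sudo lvcreate -L 512M -n tmp  raid10
sudo mkfs.xfs /dev/raid10/data
```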

With 6x320GB disks (this size chosen because at the time it offered the lowest £/GB, somewhere in the region of £0.21/GB) it was tempting to go for a RAID-5 or RAID-6 to end up with over 1TiB of storage if only for willy-waving, but I had to bring myself back to the reason for doing this in the first place: filesystem performance.

Inside with all the parts installed

I had run some stats for a few days on becks and it was doing 56% of its IO as writes, so I wasn’t confident that, even with the advantage of 6 disks vs 4, specialbrew would be up to it in a RAID-5. RAID-5 has a “write penalty”: any write smaller than a full stripe means the existing data and parity must be read back before the new parity can be calculated and written, turning a single write into a read-read-write-write sequence for up to a 50% performance penalty. For this reason I chose RAID-10, and xfs as the filesystem as well, leaving me with about 880GiB of usable storage for my data.
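
Gathering that read/write split doesn't need anything fancy; the kernel's cumulative counters in /proc/diskstats will do. A sketch — the device name is an assumption:

```shell
# Read vs write split from the kernel's cumulative I/O counters.
# In /proc/diskstats, field 3 is the device name, field 4 is reads
# completed and field 8 is writes completed since boot.
awk '$3 == "sda" { printf "reads=%d writes=%d write%%=%.0f\n", $4, $8, 100*$8/($4+$8) }' /proc/diskstats
```

Sampling this twice a few minutes apart and differencing the counters gives the ratio for a specific workload rather than since boot.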

I think it has been successful as specialbrew has been doing the rsnapshots and local fileserving for the last 4 or 5 days with none of the previous problems. I am most impressed with the icydocks and the coolermaster case – between them I have room for up to 10 more 3.5″ SATA II disks, although my PSU would need to be upgraded!

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              8.5G  670M  7.8G   8% /
tmpfs                 498M  4.0K  498M   1% /dev/shm
/dev/md0              243M  7.2M  224M   4% /boot
/dev/mapper/raid10-data 879G  307G  573G  35% /data
/dev/mapper/raid10-tmp 496M  176K  496M   1% /tmp
/dev/mapper/raid10-var 4.0G  164M  3.9G   5% /var

8 thoughts on “New fileserver for home”

  1. There are some jumpers to fart about with on the back of the icy docks to switch the indicator LEDs to SATAII mode. On mine at least they were set to SATAI mode which lights up all the LEDs whenever there is any disk activity. Changing the jumpers (marked ACC SET in the documentation) should make each SATAII disk LED work independently.

    Oddly I had the same problem with one of my four disks: dead. I don’t seem to have the same success as you though. I am getting lock-ups when I hammer the array; still trying to work out what is causing that though… :/

  2. Hi Tony,
    Have you worked out why you are getting lock-ups?
    I get an immediate lock-up when I try to copy/back-up to my second drive.
    I have emailed IcyDock UK distributor and Scan two days ago but have not heard back from either of them.
    I really think they have a serious problem.
    Also, when I put in a third drive the machine will not boot; it gets to the XP start-up screen and the drive stops with the screen still running.
    Also, I occasionally find the XP boot screen running but going nowhere, even with only two or one drive installed.
    Any ideas would really help.
    If I hear back from http://www.nanopoint.co.uk/ or SCAN I will let you know.
    Regards,
    Alan

  3. Hi Alan,

    The problem was down to the dodgy disk I mentioned. Once that was replaced I was able to build a RAID5 array on top of the drives. The main tip I have is to ensure all the power connectors are plugged in and that you have a beefy enough CPU to cope with the extra load. A clamp meter from a local electronics store should help.

    I documented what I did here: http://tonywhitmore.co.uk/cgi-local/wiki.pl?RaidMigration
    Although I am using Linux rather than Windows, it may be of some use to you.

  4. Hi guys,

    I just bought an Icy Dock MB453 and seem to encounter the same problems. When I insert 2 disks (mirror), everything works perfectly, but as soon as I install the third disk and create a RAID5 array, the disks start locking when I try to copy or install anything. Even at boot, the disks lock already. When I connect the disks directly to my raid controller, I don’t have any problems. I have a 700W power supply and connected all 3 power connectors on the ID to different power cables. Changing disk positions also doesn’t help.
    Do you already have any more ideas?

    Thanks,

    Philippe (pvrijswijck@hotmail.com)

  5. I would suggest trying a more powerful PSU if you have one. (It was “PSU” I meant to type in my comment above, not “CPU”.) The dock itself will use some small amount of power for the fan and onboard electronics.

    Otherwise, I’d suggest trying different data cables.

  6. More powerful than 700W? I think that would be overkill, since my PSU isn’t experiencing any load at all. I will try other cables, but I’m afraid that won’t help.

  7. Same problem for me: random disk failures, already replaced the PSU with a 750W, replaced ALL disks and the motherboard. Result is TOTAL FAILURE of that damn icydockcrapmaythedeviltakeit! Stay away from icydock, I wasted lots of money and time on that!
