Linux Software RAID and drive timeouts

All the RAIDs are breaking ^

I feel like I’ve been seeing a lot more threads on the linux-raid mailing list recently where people’s arrays have broken, they need help putting them back together (because they aren’t familiar with what to do in that situation), and it turns out that there’s nothing much wrong with the devices in question other than device timeouts.

When I say “a lot”, I mean, “more than I used to.”

I think the reason for the increase in failures may be that HDD vendors have been busy segregating their products into “desktop” and “RAID” editions in a somewhat arbitrary fashion, by removing features from the “desktop” editions in the drive firmware. One of the features that today’s consumer desktop drives tend to entirely lack is configurable error timeouts, also known as SCTERC, also known as TLER.

TL;DR ^

If you use redundant storage but may be using non-RAID drives, you absolutely must check them for configurable timeout support. If they don’t have it then you must increase your storage driver’s timeout to compensate, otherwise you risk data loss.

How do storage timeouts work, and when are they a factor? ^

When a drive is asked to read from or write to a particular sector and fails, it keeps retrying, and does nothing else while it does so. An HDD that either does not have configurable timeouts or has them disabled will keep this up for quite a long time (minutes) and won’t respond to any other command in the meantime.

At some point Linux’s own timeouts will be exceeded and the Linux kernel will decide that there is something really wrong with the drive in question. It will try to reset it, and that will probably fail, because the drive will not be responding to the reset command. Linux will probably then reset the entire SATA or SCSI link and fail the IO request.
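
The kernel’s per-device command timeout lives in sysfs and typically defaults to 30 seconds, far less than the several minutes a desktop drive may spend retrying:

# cat /sys/block/sda/device/timeout
30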

In a single drive situation (no RAID redundancy) it is probably a good thing that the drive tries really hard to read or write the data. If it really persists it just may succeed, so there’s no data loss, and you are left in no doubt that your drive is unwell and should be replaced soon.

In a multiple drive software RAID situation it’s a really bad thing. Linux MD will kick the drive out of the array because, as far as it is concerned, the drive stopped responding to anything for several minutes. But why do you need to care? RAID is resilient, right? So a drive gets kicked out and added back again; it should be no problem.

Well, a lot of the time that’s true, but if you happen to hit another unreadable sector on some other drive while the array is degraded then you’ve got two drives kicked out, and so on. A bus or controller reset can also kick multiple drives out at once. It’s really easy to end up with an array that thinks it’s too damaged to function because of a relatively small number of unreadable sectors. Even RAID6 can’t help you here.

If you know what you’re doing you can still coerce such an array to assemble itself again and begin rebuilding, but if its component drives have long timeouts set then you may never be able to get it to rebuild fully!

What should happen in a RAID setup is that the drives give up quickly. In the case of a failed read, RAID just reads the data from elsewhere in the array and writes it back, which causes the drive to reallocate the bad sector. The monthly scrub that most distributions schedule for Linux MD arrays catches these bad sectors before you have a bad time, and you can monitor your reallocated sector count to know when a drive is going bad.
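
On Debian-type systems that scrub is usually driven by the mdadm package’s checkarray cron job, but you can also start one by hand and keep an eye on the SMART attributes yourself; md0 and sda below are just example device names:

# echo check > /sys/block/md0/md/sync_action
# smartctl -A /dev/sda | grep -i reallocated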

How to check/set drive timeouts ^

You can query the current timeout setting with smartctl like so:

# for drive in /sys/block/sd*; do drive="/dev/$(basename $drive)"; echo "$drive:"; smartctl -l scterc $drive; done

You hopefully end up with something like this:

/dev/sda:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
 
SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)
 
/dev/sdb:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
 
SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

That’s a good result because it shows that configurable error timeouts (scterc) are supported, and the timeout is set to 70 on both drives for reads and writes. The value is in units of 100 milliseconds, so 70 means 7 seconds.

Consumer desktop drives from a few years ago might come back with something like this though:

SCT Error Recovery Control:
           Read:     Disabled
          Write:     Disabled

That would mean that the drive supports scterc, but does not enable it on power up. You will need to enable it yourself with smartctl again. Here’s how:

# smartctl -q errorsonly -l scterc,70,70 /dev/sda

That will be silent unless there is some error.

More modern consumer desktop drives probably won’t support scterc at all. They’ll look like this:

Warning: device does not support SCT Error Recovery Control command

Here you have no alternative but to tell Linux itself that this drive may take several minutes to recover from an error, and that it should not aggressively reset the drive or its controller until at least that much time has passed. 180 seconds has been found to be longer than any observed desktop drive will retry for.

# echo 180 > /sys/block/sda/device/timeout

I’ve got a mix of drives that support scterc, some that have it disabled, and some that don’t support it. What now? ^

It’s not difficult to come up with a script that leaves each drive with the best error timeout setting it supports on every boot. Here’s a trivial example:

#!/bin/sh

# Try to set a 7 second error timeout (scterc) on every SCSI/SATA disk.
# If smartctl reports a command failure (exit status 4), assume the drive
# doesn't support scterc and raise the kernel's command timeout to 180
# seconds instead.
for disk in $(find /sys/block -maxdepth 1 -name 'sd*' | xargs -n 1 basename)
do
    smartctl -q errorsonly -l scterc,70,70 /dev/$disk

    if test $? -eq 4
    then
        echo "/dev/$disk doesn't support scterc, setting timeout to 180s" '/o\'
        echo 180 > /sys/block/$disk/device/timeout
    else
        echo "/dev/$disk supports scterc " '\o/'
    fi
done

If you call that from your system’s startup scripts (e.g. /etc/rc.local on Debian/Ubuntu) then it will try to set scterc to 7 seconds on every /dev/sd* block device. If it works, great. If it gets an error then it sets the device driver timeout to 180 seconds instead.
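
For example, if the script were saved as /usr/local/sbin/set-drive-timeouts.sh (a made-up name and path) and marked executable, the hook in /etc/rc.local would just be:

# Near the end of /etc/rc.local, before its final "exit 0"
/usr/local/sbin/set-drive-timeouts.sh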

There are a couple of shortcomings with this approach, but I offer it here because it’s simple to understand.

It may do odd things if you have a /dev/sd* device that isn’t a real SATA/SCSI disk, for example if it’s iSCSI, or maybe some types of USB enclosure. If the drive is something that can be unplugged and plugged in again (like a USB or eSATA dock) then the drive may reset its scterc setting while unpowered and not get it back when re-plugged: the above script only runs at system boot time.

A more complete but more complex approach may be to get udev to do the work whenever it sees a new drive. That covers both boot time and any time one is plugged in. The smartctl project has had one of these scripts contributed. It looks very clever—for example it works out which devices are part of MD RAIDs—but I don’t use it yet myself as a simpler thing like the script above works for me.
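
If you would rather sketch your own udev hook than use the contributed script, a rule along these lines re-runs the boot-time script whenever a whole disk appears; the rule file name and helper path here are made up for the example:

# /etc/udev/rules.d/60-scterc.rules (names are illustrative)
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/usr/local/sbin/set-drive-timeouts.sh"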

What about hardware RAIDs? ^

A hardware RAID controller is going to set low timeouts on the drives itself, so as long as they support the feature you don’t have to worry about that.

If the support isn’t there in the drive then you may or may not be in trouble: chances are that the RAID controller is going to be smarter about how it handles slow requests and just ignore the drive for a while. If you are unlucky, though, you will end up in a position where some of your drives need the setting changed but you can’t directly address them with smartctl. Some brands, e.g. 3ware/LSI, do allow smartctl interaction through a control device.
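
For example, smartctl can often reach drives behind 3ware and MegaRAID controllers via their control devices; the device names and port numbers below are only examples and will vary per system:

# smartctl -l scterc,70,70 -d 3ware,0 /dev/twa0
# smartctl -l scterc,70,70 -d megaraid,0 /dev/sda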

When using hardware RAID it would be a good idea to only buy drives that support scterc.

What about ZFS? ^

I don’t know anything about ZFS, and a quick look turned up conflicting advice.

Drives with scterc support don’t cost that much more, so I’d probably want to buy them and check it’s enabled if it were me.

What about btrfs? ^

As far as I can see btrfs does not kick drives out itself; it leaves that to Linux, so you’re probably not at risk of losing data.

If your drives do support scterc, though, then you’re still best off making sure it’s enabled, as otherwise things will crawl to a halt at the first sign of trouble.

What about NAS devices? ^

The thing about these is that they’re quite often just low-end hardware running Linux and doing Linux software RAID under the covers, with the disadvantage that you may not be able to log in to them and change their timeout settings. This post claims that a few NAS vendors say they use their own timeouts and ignore scterc.

So which drives support SCTERC/TLER and how much more do they cost? ^

I’m not going to list any here because the list will become out of date too quickly. It’s just something to bear in mind, check for, and take action over.

Fart fart fart ^

Comments along the lines of “Always use hardware RAID” or “always use $filesystem” will be replaced with “fart fart fart,” so if that’s what you feel the need to say you should probably just do so on Twitter instead, where I will only have the choice to read them in my head as “fart fart fart.”

Yearly (Linux) photo management quandary

Here we are again, another year, another dissatisfying look at what options I have for local photo management.

Here’s what I do now:

  • Photos from our cameras and my phone are imported using F-Spot on my desktop computer in the office, to a directory tree that resides over NFS on a fileserver, where they will be backed up.
  • Tagging etc. happens on the desktop computer.
  • For quick viewing of a few images, if I know the date they were taken on, I can find them in the directory structure because it goes like Photos/2014/01/01/blah.jpg. The NFS mount is available on every computer in the house that can do NFS (e.g. laptops).
  • For more involved viewing that will require searching by tag or other metadata, i.e. that has to be done in F-Spot, I have to do it on the desktop computer in the office, because that is the only place that has the F-Spot database. So I either do it there, or I have to run F-Spot over X11 forwarding on another machine (slow and clunky!).

The question is how to improve that experience?

I can’t run F-Spot on multiple computers because it stores its SQLite database locally and even if the database file were synced between hosts or kept on the fileserver it would still need the exact same version of F-Spot on every machine, which is not feasible — my laptop and desktop already run different releases of Ubuntu and I want to continue being able to do that.

It would be nice to be able to import photos from any machine but I can cope with it having to be done from the desktop alone. What isn’t acceptable is only being able to view them from the desktop as well. And when I say view I mean be able to search by tags and metadata, not just navigate a directory tree.

It sounds like a web application is needed, to enforce the single point of truth for tags and metadata. Are there actually any good ones that you can install yourself though? I’ve used Gallery before and was never really satisfied with ease of use or presentation.

Your-Photos-As-A-Service providers like Flickr and even to some extent Google+ and Facebook have quite nice interfaces, but I worry about spending many hours adding tags and metadata, not bothering to back it all up, and then one day the service shuts down or changes in ways I don’t like.

I’m normally quite good about backing things up but the key to backups is to make them easy and automatic. From what I can see these service providers either don’t provide a backup facility or else it’s quite inconvenient, e.g. click a bunch of times, get a zip file of everything. Ain’t nobody got time for that, as a great philosopher once wrote.

So.. yeah.. What do you do about it?

Wanted: cheap but cheerful small Linux device

I changed ISP recently for my broadband at home and switched from ADSL2+ to FTTC, so that’s required a new broadband router.

Initially I got things working with the Technicolor TG582N as supplied by the ISP, but it appears quite horrible in most of its functionality. I find most cheap domestic broadband routers are, to be honest. Little plastic blobs with the absolute minimum spec of hardware, configured via web interfaces that can politely be described as clunky, and packing many unwanted features.

With FTTC here in the UK you have a separate NTE box supplied by British Telecom and then you supply (or your ISP supplies) a router that connects to that by Ethernet and talks PPP-over-Ethernet to your ISP. So, anything that can do PPPoE works as the router, no special hardware required. Any Linux box will do.

I had this Soekris net4801 box that I purchased in 2005; it’s been running constantly ever since and still works fine. It’s a nice little thing; 266MHz fanless CPU, 128MiB RAM, three 10/100 Ethernet ports and CompactFlash for storage. Draws under 10W when idle and not a lot more at full tilt.

Really quite expensive though. Once delivery charges, the purchase of a compatible PSU and CF card, and currency conversion are taken into account you’re probably talking £200 now, and I seem to recall it was similar back in 2005 too.

I upgraded that from Debian etch to lenny to squeeze to wheezy — which went remarkably without incident by the way, a testament to Debian’s excellent upgrade procedure — and set it to work as the router. Since it’s just a relatively conventional Debian install it’s really easy to configure PPPoE, IPv4, NAT, IPv6, firewalling and anything else.

There’s a couple of things I’m not too happy about though.

What if it dies? ^

If a Soekris has lasted several years then it’s going to be pretty reliable. There are no moving parts; the most likely faults are going to be the CF card or the power supply. Even so, this one’s been in service for about 8 years and that’s a really good innings. It could go at any time, and then what will I replace it with?

Of course I still have the Technicolor and that will work well enough to get connectivity until I put something better in its place again, but what would be that better thing?

Back in 2005 I had a bit more disposable income than I do now and £200 was okay to spend on something I was interested in playing with. I’m done playing with it now though and spending £200 to end up with a Linux box that runs at 266MHz and has 128M RAM is going to hurt. Also the net4801 is end of life so will get harder and harder to purchase new, and any replacement will cost a little more.

Is the Soekris really beefy enough? ^

Right now I only have 40M down, 10M up FTTC and the Soekris doesn’t appear to be limiting that any more than the Technicolor limited it.

Conceivably though I may one day upgrade it to 80/20 or more and that is starting to push the limits of a 100M Ethernet port, let alone a 266MHz CPU.

As you would expect from a 266MHz CPU with 128M RAM it’s dog slow at doing anything much in user land. This is a pretty minor gripe as the use case here is that of an appliance, like the broadband router it replaced. You shouldn’t really need to touch it much. Something slightly less puny would be a nice bonus though.

Options ^

HP Microserver ^

HP have been doing cash back deals on their Microserver range for a few years now. I already have one here at home being a file server and a few other bits and pieces. If they were still doing the cash back then I’d strongly consider buying another one to use for this.

It would draw a fair bit more power than the Soekris does, but they are still quite efficient machines and I would probably find it more things to do since it would be a lot more capable.

Without the cash back though I don’t think it can be justified. Retail price of a Microserver at the moment is around £265+VAT.

Update: It appears the cash back offer has returned, at least for September 2013!

http://www.serversplus.com/servers/tower_servers/hp_tower_servers/704941-421

Some Linksys WRT device with OpenWrt ^

It’s a contender, but it will leave me with some cheap nasty hardware running a non-standard Linux distribution on an ARM CPU. I’m sure OpenWrt is great but I don’t know it, I’d have to learn it just for this, and it’s not likely to be useful knowledge for anything else.

If possible I want to remain running Debian.

More enterprisey router hardware from Cisco or Mikrotik ^

This would certainly work; a Cisco off eBay may be cheap enough, otherwise a new Mikrotik RouterBoard would be within budget. Say an RB450G.

The main issue again would be it’s not Linux. That’s not necessarily a bad thing, it’s just that it wouldn’t feel familiar to me. I know how to configure everything in Linux.

Something from Fabiatech ^

I stumbled across a blog post by Richard Kettlewell entitled Linux In A Small box. In it he considers much the same issue as I have been, and ends up going for a Fabiatech FX5624.

Looks good. £289+VAT though.

omg!! Raspberry Pi everywareeeeeeeeeeeeeeeeeee!!!!! ^

Yeah, Raspberry Pis are nice pieces of kit for what they are designed for. Which is not passing large amounts of network traffic. They only have one 100M Ethernet, and it’s driven by USB 2.0 so it’s going to suck. It will suck even more when you attach a USB hub and more USB Ethernets.

Something from Jetway ^

Alex suggested looking at these devices. They look quite fun.

A bare bones system that on paper should do the job (1.6GHz Intel Cedar Trail CPU, two Realtek gigabit Ethernet, one SO-DIMM slot for up to 4GB RAM) seems to be £149+VAT.

There seems to be a good selection of other main boards and daughter boards if that config wasn’t suitable.

Anyone got any personal experience of this hardware?

This Is Not An Exit ^

I still don’t know what I will do. I might put off the decision until the Soekris releases its magic blue smoke. I would be interested to hear any suggestions that I haven’t thought of.

Here are the requirements:

  • Capable of running a mainstream Linux distribution in a supportable fashion without much hacking around.
  • Has at least two gigabit Ethernet ports.
  • Is beefier than a 266MHz Geode CPU with 128M RAM
  • Easy to run its storage from an inexpensive yet reasonably reliable medium like CompactFlash or SD/microSD. Write endurance doesn’t really matter. I will mount it read-only if necessary.

Some nice-to-haves:

  • At least one serial port so I can manage it from another computer when its network is down, without having to attach a VGA monitor and keyboard. The Soekris manages this perfectly, because it’s what it’s designed for. It doesn’t even have a VGA port.
  • Total configuration of the BIOS from the serial port, so a VGA monitor and keyboard are never necessary. Again, that’s how Soekris products work.
  • Ethernet chipsets that are actually any good, i.e. not Realtek or Broadcom.
  • Capable of being PXE booted so that I don’t have to put the storage into another machine to write the operating system onto it.

Having Music Is Ace

Tonight I’m on my own as Jenny decided to go to bed early; she has to get up very very early tomorrow for work. I got up a bit late today and don’t feel tired at all so I’m just contemplating an evening of work.

When I work I like to have a soundtrack, so I’m picking out a playlist for the next 12 hours (yes I will probably stay up all night).

What struck me is how much great music I have and what a terrible loss it would be if my collection were to be taken from me.

I’m not saying I have great taste in music. I don’t go to gigs — in fact I’ve never actually been to a gig at any venue larger than a pub — and I tend to find my new music through radio and TV; Later…, coverage of Reading, Glastonbury, that sort of thing. My taste in music has been described as “mediocre” by others, so I’m not saying I’m any kind of opinion leader here.

I was having a conversation on IRC recently about the streaming music service Spotify and how I don’t really understand the use case for it — I do get the mobile streaming part, it’s the idea of using it at home as your main method of playing music that I fail to comprehend.

During that conversation someone said to me:

“I use Spotify because I don’t have a music collection […] I don’t derive pleasure from having a music collection.”

This idea completely boggles my mind! Looking through my collection I find all kinds of things with personal attachment.

It’s not that I feel like I have every bit of music ever. I know people who just download every bit of music they can and have hundreds of thousands of tracks. I’m not like that; I have just over 3,200 tracks most of which were ripped from CDs or purchased as online singles. If I don’t find myself listening to something for years then I usually delete it. So, my collection is stuff I do still listen to.

Every time I build a playlist, seeing the list of albums brings back so many memories. Music that came out at certain times in my life, or was listened to a lot at certain times in my life. It brings back memories of my teenage years, university, past relationships (girlfriends who stole my CDs!), people who have since died. I’m not into a lot of obscure music, but there are things there that you won’t even find on Amazon as a CD, let alone on Spotify for streaming.

Maybe I am just getting old and not embracing the cloud. But how does one build a big playlist with something like Spotify? What about when they remove things from the service? I should just try the free version and see what it’s like.

Perhaps there are people of an older generation who don’t like the idea of only keeping music on the computer, and regard me with pity for not being immediately able to lay my hands on the CD or vinyl for most of my collection? That really doesn’t bother me; to me it’s the music that matters and it’s there for playing.

What bothers me is the idea of marking some track in the cloud as “liked” by me, and then later finding it’s disappeared for some reason so I can no longer listen. Memories gone.

If I did use something like Spotify I’d probably have to do some report of things I listened to a lot and make sure I buy them. I will get around to trying out Spotify at some point but I can’t imagine it will replace the desire to buy and own music, rather I would hope it would help me find more music that I like.

Because having music is ace.

An odd perspective on friendship

Benjamin wrote:

“I have learnt to minimize the amount of friends I have who are vegetarian, religious or have extreme views about something. If I didn’t, I’d probably be so depressed from being lectured and told off all the time.”

Benjamin,

If you have “friends” who are lecturing or preaching to you, I don’t think they’re really your friends. Even if they’re right and you’re very wrong. If someone continually brings a subject up even when they know you aren’t interested in talking about it, then I think they’re doing so more for their own purposes than yours.

I think it’s the case that almost everyone holds “extreme” views about some topic or other, but most don’t feel the need to bring them up. You singled out vegetarianism and religion, but anything can be a hot topic for someone.

Friends might need to do some sort of reality check or intervention on each other from time to time, and of course debate is good too. But there really is no need for frequent lecturing when amongst friends, I believe. It would have been much better if you had instead said, “I have learnt to disassociate myself from people who lecture me,” rather than explicitly mentioning vegetarians, people who have faith, free software zealots, and so on. By calling out these groups you unfortunately make yourself look like a troll who is lecturing.

You probably have more friends than you think who are religious or vegetarian, and you didn’t even know.

Mushrooms stuffed with sun-dried tomatoes, Dauphinoise potatoes and aubergine rolls with pesto, tiramisu

On Sunday I’d offered to cook a three course meal that’s a bit different from what we usually eat. I virtually never cook and when I do it’s always just something quick. I don’t enjoy cooking, but I thought I’d give it a go anyway. Although these recipes were very simple, by the time the day came I was feeling quite nervous.

Ingredients ^

I used existing recipes as a basis, reducing the quantities to serve two.

I couldn’t find aubergines anywhere (tried 3 big supermarkets) so had to settle for baby aubergines which of course weren’t big enough to wrap anything in.

Method ^

Mushrooms stuffed with sun-dried tomatoes ^

  • Preheat oven to 200C.
  • Soak about 8g sun-dried tomatoes in a small bowl of hot water, covered for 5 minutes.
  • Reserve a tablespoon of the liquid, drain the rest off and chop the tomatoes finely.
  • Chop off the mushroom stems and chop them finely.
  • Finely chop 1/8th cup of shallots.
  • Finely chop a clove of garlic.
  • Lightly beat a large egg yolk.
  • Mince 1/8th cup of parsley leaves.
  • Crumble 1/4 teaspoon of basil.
  • Heat 2 tablespoons of olive oil in a frying pan over moderate heat until hot but not smoking. Add the mushroom stems and shallots, stirring until the shallots are softened.
  • In a bowl stir together the mushroom stems, shallots, 1/6th cup bread crumbs, sun-dried tomatoes, the reserved liquid, the egg yolk, parsley, basil and garlic. Add salt to taste. Mound into the mushroom caps.
  • Brush the mushroom caps with sun-dried tomato oil.
  • Arrange the caps in a lightly greased shallow baking dish.
  • Sprinkle the caps with 2 tablespoons of grated parmesan.
  • Bake in the middle of the oven for 15 minutes.

Aubergine rolls with pesto ^

  • Preheat oven to 180C.
  • Cut 1 small carrot into matchsticks.
  • Deseed and slice half a red pepper.
  • Trim 8 asparagus spears.
  • Chop half a clove of garlic.
  • Finely grate 40g parmesan.
  • Top and tail 2 aubergines. Slice the aubergines lengthways into about 8 strips, 4-5mm thick. Add salt and pepper, brush with extra-virgin olive oil and set aside for 5 minutes. Since I only had baby aubergines I could only slice them into small thin chips at this point.
  • Brush a pan with olive oil, place over high heat, cook aubergines for 2 minutes each side. Set aside.
  • Blanch vegetables in boiling water:
    • Carrots: 3 minutes
    • Pepper: 2 minutes
    • Asparagus: 1 minute

    Drain, pat dry and set aside.

  • Coarsely blend the garlic, 50g of drained sun-dried tomatoes and 1 tablespoon of pine nuts.
  • Add 10g fresh basil, the parmesan, and 65ml extra-virgin olive oil and blend again. Season and stir in 1 tablespoon of double cream.
  • Lay the vegetables at the end of each aubergine slice, roll up to secure and place in a large baking dish. Drizzle with extra-virgin olive oil and bake for 6-7 minutes or until hot.
  • Serve with pesto, garnish with lemon and salad leaves.

Dauphinoise potatoes ^

  • Preheat oven to 190C.
  • Thinly slice half an onion.
  • Slice 500g of King Edward potatoes thinly with a mandolin.
  • Grease up a shallow baking dish with butter (I used Pure soy spread).
  • Mix ~140ml double cream and ~40ml milk.
  • Layer the potatoes and onions evenly in the dish. Pour the cream and milk over, dot over with butter and cover the dish with foil.
  • Bake for one hour.
  • Discard the foil and bake for a further 15-20 minutes or until potatoes are golden.

Tiramisu ^

  • Sift ~22g icing sugar into ~112g mascarpone. Add 1/2 teaspoon of vanilla extract and 2 tablespoons of Tia Maria.
  • In a separate bowl, whisk ~85ml of double cream until soft peaks form.
  • Mix the cream and mascarpone together and refrigerate.
  • Break some sponge fingers to size and briefly dip them in cold, strong coffee.
  • Place the sponge fingers in the bottom of serving glasses and spoon mascarpone cream over.
  • Grate chocolate on top before serving.

Conclusion ^

We were pretty pleased with how it turned out, very yummy! Things I’d change:

Presentation could have been better. The vegetables on a bed of aubergines should have been aubergine wraps, but I couldn’t find big enough aubergines. Easy enough to fix. The Dauphinoise potatoes would have looked a lot better if they’d been nice equal sizes and shapes but I only had a grater to slice them with and it was the first time I’d tried. I might have done better with a proper mandolin, and/or more practice.

The Dauphinoise potatoes were rather too creamy for my liking, but Jenny really liked them. Maybe not so much a change needed as simply smaller portions for me; not a huge fan of potatoes.

There was probably a bit too much parmesan in everything, especially the stuffing for the mushrooms. That could have done with being toned down a bit.

No complaints about the Tiramisu!

Those Google Chrome Ads

Yesterday I happened to be waiting on a tube platform with a non-technical person, and we were opposite one of those new Google Chrome ads, as pictured here.

I’ve seen a few people comment that they didn’t think that a non-technical person would understand what they were all about, so I took the opportunity to ask my friend about it.

“See that ad over there? What do you think it’s for?”

“It looks like it’s for a search thing. A new kind of web search thing. That’s a big list of related things to what they searched for”

“What do you think Google Chrome is then?”

“Well it says on it, a new browser. That’s what you use to search isn’t it?”

“What other browsers are there then?”

“Well there’s the Yahoo! one, and then there’s the Google one.”

So to the extent that she even noticed the ad, she assumed it was just something to do with Google’s search engine, because of the search implications and prior meaning of the brand “Google”.

I don’t think this proves anything, but it was interesting hearing another point of view.

Are the recent government bailouts to the banking industry proof that minimal government is no longer realistic?

First: I don’t know much about economics. This is not an opinion piece; I am genuinely looking for other people’s opinions, so please comment if you have an opinion on it.

I’m just watching The Love of Money and it was saying how Paulson and Bernanke said to the US government that if the banks didn’t receive a promise of a bailout then the entire banking system would collapse within 48hrs.

If we assume that this was true and the bailout was required, then does it suggest that the concept of a minimalist, libertarian-style government is no longer feasible? It’s my understanding that under such a tiny government there would be hardly any taxes, so how would it ever be able to nationalise so many financial institutions? Even putting aside that nationalising things would be anathema to such a government in the first place, it wouldn’t even be possible.

Or would proponents of such a style of government argue that this crisis would never have happened in the first place without government meddling?

I guess I am asking if lightweight government is incompatible with this global economy that we seem to have?

Slow Down London: Making Time

Tonight we went to a lecture at the National Portrait Gallery called Making Time, delivered by David Rooney, Curator of Timekeeping at the Royal Observatory, Greenwich.

The talk is part of a programme of events called Slow Down London, a project which hopes to inspire Londoners to take the time to appreciate the things around them and do things well instead of just rapidly.

We didn’t really know what to expect, but what we got was an extremely interesting and amusing account of the story of the UK’s relationship with time, and the personalities and quirky events surrounding its measurement and management.

I love these little nuggets of curious fact, and David provided them with great style, his passion for the subject obviously showing through. The next best thing after being provided with a list of weird trivia is an educated and articulate person having a good rant on the subject.

Some things I did not know:

  • Before the introduction of “railway time” in the 1830s, Britain had no standard civil time, with local times varying as much as half an hour across the country.
  • “Railway time” was only adopted legally as standard civil time across the country in the 1880s when the first licensing laws were introduced, banning the sale of alcohol after midnight, or 11pm in the provinces. Selling alcohol after the official time would lead to loss of licence.
  • Before radio and telegram, captains of vessels in the Thames would send a deck hand up to the observatory to ask them what the time was, and have it synchronised to the ship’s chronometer. The burden of receiving all these visitors led to the observatory installing a “time ball” on its tower which could be seen from the docks, allowing the ships to tell the time without visiting in person.
  • For over a hundred years starting in the 1830s, John Henry Belville and his descendants would set the time daily on a pocket watch and then travel around London visiting their subscribers, who paid a fee to have their own timepieces set to the observatory’s clock. This was important for many businesses such as those who made the naval chronometers, since the “time ball” tower could not be seen from most of London.
  • The first time broadcasts on the BBC wireless radio service in the 1920s were done by having someone stand by an open window and listen for the bells of St. Paul’s Cathedral, then they would attempt to sound their own chimes in time with that. According to David’s measurements this meant that at best the time signal was about 2.5 seconds late due to the speed of sound.
  • Britain only adopted British Summer Time because the First World War was on and the Germans did it first in order to use energy for making ammunition instead of lighting ammunition factories. Not wanting to lose any advantage, Britain did so too.

I was happy to learn these and many other pieces of trivia this evening.