Planet Andy

August 26, 2013


Susan Watkins: The European Impasse

All quiet on the euro front? Seen from Berlin, it looks as though the continent is now under control at last, after the macro-financial warfare of the last three years. A new authority, the Troika, is policing the countries that got themselves into trouble; governments are constitutionally bound to the principles of good housekeeping. Further measures will be needed for the banks – but all in good time. The euro has survived; order has been restored. The new status quo is already a significant achievement. Seen from the besieged parliaments of Athens and Madrid, the single currency has turned into a monetary choke-lead.

August 26, 2013 01:15

Terry Castle: On Getting Married

Rhodes, July. Charmed marine breezes, Blakey and I in the Old City, ensconced in medieval hostel-cum-boutique hotel formerly occupied by those nutty-crusader Knights of St John. (A few grim-faced Saracens, too, no doubt – especially after Suleiman the Magnificent’s successful siege of Rhodes in 1522.) Cobbled streets around the fortress awash in the fanatic blood of centuries, but we’re in a holiday mood, sipping ouzo, feeling spoiled, a bit bloated even, also somehow holy. With iPads out and glowing numinously we’re discussing the latest mishmash of blog news from our homeland.

August 26, 2013 01:15

Ian Penman: Mod v. Trad

In a lovely 1963 piece on Miles Davis, Kenneth Tynan quoted Cocteau to illuminate the art of his ‘discreet, elliptical’ subject: Davis was one of those 20th-century artists who had found ‘a simple way of saying very complicated things’. Jump to 1966 and the meatier, beatier sound of a UK Top 20 hit, the Who’s ‘Substitute’, a vexed, stuttering anti-manifesto, with its self-accusatory boast: ‘The simple things you see are all complicated!’ You couldn’t find two more different musical cries: Davis’s liquid tone is hurt, steely, where Townshend’s is impatient, hectoring.

August 26, 2013 01:15

Nikil Saval: The Real Mo Yan

When the English translation of Mo Yan’s novel Big Breasts and Wide Hips (1996) was published in 2004, it was seen by some critics as his bid for global literary prestige. It hit all the right notes: it was a historical saga of modern China featuring a proliferation of stories, it was unceasingly violent and nasty, and it came near to puncturing Party myths. In the preface, Howard Goldblatt, Mo Yan’s longtime translator and advocate, reported that it had provoked anger on the mainland among ideologues for humanising the Japanese soldiers who invaded Manchuria.

August 26, 2013 01:15


Letters

The letters page from London Review of Books Vol. 35 No. 16 (29 August 2013)

August 26, 2013 01:15

Table of contents

Table of contents from London Review of Books Vol. 35 No. 16 (29 August 2013)

August 26, 2013 01:15

Hacker News

LWN Headlines

Kernel prepatch 3.11-rc7

The 3.11-rc7 prepatch has been tagged and released. As of this writing, there is no announcement on linux-kernel, but Linus did post a brief item on Google+. "I'm doing a (free) operating system (just a hobby, even if it's big and professional) for 486+ AT clones and just about anything else out there under the sun. This has been brewing since april 1991, and is still not ready. I'd like any feedback on things people like/dislike in Linux 3.11-rc7." He is, of course, celebrating the 22nd anniversary of his original "hello everybody" mail.

by corbet at August 26, 2013 01:07

Server Fault: Unanswered

Dragged folders and files from Eclipse to a folder. Pressed Ctrl Z, can't find folders now –

I've tried undoing and redoing, but I can't get the files back that I was just working with. Please tell me there is some kind of utility that will get my files back. Please and thank you

by EGHDK at August 26, 2013 01:04

Unable to get DD-WRT router set up as a Client Bridge to access the internet –

I have a couple of network security cameras that use PoE and I would like to use them outdoors. The problem is that I do not have network ports anywhere outside, so the solution I came up with was to …

by Dave at August 26, 2013 00:59

IIS 6 domain name configuration –

I have a website with a domain such as and also on IIS 6. The problem is that goes to the SBS 2003 server welcome screen. I want to be the same as …

by kurupt89 at August 26, 2013 00:50

How do you un-edit a nickname on Skype? –

If you edit a contact's name on Skype, how can you un-edit it? I gave one of my contacts a nickname, but they change their Skype name frequently and I'd like to be able to see that without having to …

by Heather at August 26, 2013 00:47

Free tickets for BBC shows

03 Sep - Just the Job LIVE!

Join BBC Coventry & Warwickshire for Just the Job LIVE!

August 26, 2013 00:15

04 Sep - Just the Job LIVE!

Join BBC Coventry & Warwickshire for Just the Job LIVE!

August 26, 2013 00:15

05 Sep - 5 live Energy Day: Your Call

Join Nicky Campbell for 5 live's Energy Day.

August 26, 2013 00:15

13 Sep - Brian Taylor's Big Debate

Join Brian Taylor for Brian Taylor's Big Debate.

August 26, 2013 00:15

17 Sep - Later... with Jools Holland

Join the BBC for Later... with Jools Holland.

August 26, 2013 00:15

20 Sep - BBC Radio 3 Invitation Concert

Join the Ulster Orchestra for a concert of film music at the Ulster Hall.

August 26, 2013 00:15

24 Sep - Later... with Jools Holland

Join the BBC for Later... with Jools Holland.

August 26, 2013 00:15

27 Sep - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

28 Sep - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

01 Oct - Later... with Jools Holland

Join the BBC for Later... with Jools Holland.

August 26, 2013 00:15

05 Oct - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

08 Oct - Later... with Jools Holland

Join the BBC for Later... with Jools Holland.

August 26, 2013 00:15

12 Oct - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

19 Oct - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

26 Oct - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

02 Nov - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

09 Nov - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

16 Nov - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

23 Nov - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

30 Nov - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

07 Dec - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

14 Dec - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

21 Dec - Strictly Come Dancing

Join Sir Bruce Forsyth and Tess Daly for Strictly Come Dancing.

August 26, 2013 00:15

Server Fault: Unanswered

Windows Server 2012 after restart public network become private –

I have 3 servers with Windows Server 2012. After restarting one server, the public network becomes private and the private network becomes public (2 NICs). I don't know how this can happen or how to fix it. I …

by Radenko Zec at August 26, 2013 00:10

August 25, 2013

Server Fault: Unanswered

Wget site mirroring –

I used wget to download a mirror of a site with wget -mkN. The problem is that a link somewhere had a double slash. This caused a never-ending loop going deeper and ...

by user247819 at August 25, 2013 23:58
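The double-slash loop above is a known mirroring trap: wget treats `/a//b/` and `/a/b/` as distinct directories, so one malformed link can recurse indefinitely. A minimal sketch of the idea (the URL is made up, not from the post):

```shell
# Hypothetical URL illustrating the loop: wget sees this and its
# single-slash form as different paths and keeps descending.
url="http://example.com/gallery//photos///index.html"

# Collapse runs of slashes in the path while leaving the "://" after the
# scheme alone; this is the canonical form the crawler should have used.
clean=$(printf '%s\n' "$url" | sed -E 's#([^:])/{2,}#\1/#g')
echo "$clean"   # → http://example.com/gallery/photos/index.html
```

With a sufficiently recent wget (1.14 or later), a filter along the same lines, e.g. `wget -mkN --reject-regex '[^:]//' http://example.com/`, should skip the doubled-slash links at download time; treat the flag's availability as an assumption to check against your wget version.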


Server Fault: Unanswered

Drobo vs. Synology vs. Qnap vs. DIY –

I have an XBMC-based HTPC and currently just have two external drives storing the media. I'm looking to upgrade, but rather than just buying a couple of bigger external drives I've been looking into NASes …

by jonathanwash at August 25, 2013 23:15

My Keyboard won't repeat –

I am having an odd problem. My keys won't repeat when held down. It's fine to type and I can double-tap just fine; however, I can't hold and repeat. This problem has prevented movement in games that …

by Peter at August 25, 2013 23:07

Computer Monitor Goes black When Gaming –

I was playing Cube World the other day and my screen blacked out. I thought it was a one-time thing, but it happened about two more times afterward, so I stopped playing Cube World and my screen did not black …

by Eddie at August 25, 2013 23:03

Output ffmpeg to certain format –

I have a media file with the following format according to ffprobe: Metadata: major_brand : M4A minor_version : 0 compatible_brands: M4A mp42isom creation_time : 2013-03-21 …

by Full Decent at August 25, 2013 22:56

Is Bluestacks App Player safe? –

I downloaded this software and scanned it online with Virustotal. One antivirus detected a Worm/Win32.WhiteIce.gen from the installer. Is this a false positive or something?

by Joshua Jebz at August 25, 2013 22:55

Planet Debian

Joey Hess: southern fried science fiction with plum sauce

Had an afternoon of steak and science fiction. Elysium is only so-so, but look what we found in a bookstore that was half religious materials and half SF, local books, and carefully hidden romance novels:

Star Wars, Star Trek and the 21st Century Christians

Best part was at the end, when I finally found one of the local Asian markets Tomoko tells us about when she casually pulls out the good stuff at family gatherings. I will be back for whole ducks, fresh fish, squid, 50 lb bags of rice, tamarind paste, fresh ginger that has not sat on the shelf for 2 months because only I buy it, etc. Only an hour from home in the woods! Between the garlic sprouts, bean sprouts, enoki mushrooms, etc. that I got for $10 today and this week's CSA surprise of 18-inch snake beans and smoked pork knuckles, I have epic stir-fry potential.

Spock and R2D2

August 25, 2013 22:25

Server Fault: Unanswered

Crashplan backup to a friend open source alternative –

The Crashplan software has a feature that allows you to backup your computer to a friend's system and vice-versa. This is great, especially if the friend is in another city and of course the same ...

by Ian at August 25, 2013 21:36

Dell Inspiron 545 won't power up after being switched off for 2 weeks whilst on holiday; has solid green LED but nothing else –

No problems before this. Standard build. Tried power off and resetting for 60 seconds to no avail. Is there anything else simple that I can try before getting somebody who knows more than me (that …

by Chumpty at August 25, 2013 21:27

MS Outlook account grows really fast –

My exchange account was ~5GB, but after I used Google Apps Migration for Microsoft Outlook to migrate emails to Google Apps, my account reached 6GB (quota limit) in a day. After I increased quota to …

by Vald at August 25, 2013 21:21

Programmatically changing filenames without breaking shortcuts –

I have a folder with several gigabytes of mashups. They are all MP3s, named like this: "title [remixer]". The artist tag is "artist1 vs. artist2 vs. ..." and the title tag is exactly like the ...

by NounVerber at August 25, 2013 21:19

Server 2012 Essentials Remote Desktop –

I am having a bit of trouble getting remote desktop to work with some of the other users on my system. Whenever I try to sign in with an account other than my main one, I get the following error: "To …

by Lazze at August 25, 2013 21:05

Change outgoing repo on a hgsubversion checkout –

For various convoluted reasons, I had to svnsync a subversion repo to be local, before I could clone it with hgsubversion, rather than cloning it straight from the 'source'. Now that has worked ...

by LordAro at August 25, 2013 20:40

Boing Boing

Volkswagen Microbus to end production

Goodbye, old friend.

Jason Torchinsky at Jalopnik offers a wonderful goodbye to the faithful Volkswagen Microbus. This year marks the end of production, which continued in Brazil. The cab-over design that makes the Bus (and Vanagon) such a pleasure to drive resists meeting current safety standards.

So much of my driving experience has been rear engine, VW boxer designs. The Bus, the Beetle, the Vanagon, Porsche's 356 Speedster and 911. They are truly beautifully designed and a pleasure to drive, each within its limitations (or in the case of a modern 911, lack thereof.)

I've harbored the fantasy of buying a new Brazilian bus for years. I'm sorry to see it go.

Old-School VW Microbus Will Finally End Production This Year via Jalopnik


by Jason Weisberger at August 25, 2013 20:33

Server Fault: Unanswered

How can I make cleartype change subpixel layout when I rotate my tablet? –

I have a tablet running a full version of Windows 8. I have ClearType enabled, but when I flip the device, ClearType does not seem to take into account the change in subpixel layout. Is ...

by Eric at August 25, 2013 20:29

Planet Sysadmin

Everything Sysadmin: LOPSA NJ Chapter meeting: IBM Blue Gene /P, Thu, Sept 5, 2013

It isn't on the website yet, but the September meeting will have a special guest:

Title: Anatomy of a Supercomputer: The architecture of the IBM Blue Gene /P.

IBM refers to their Blue Gene family of supercomputers as 'solutions'. This talk will discuss the problems facing HPC that the Blue Gene architecture was designed to solve, focusing on the Blue Gene/P model. To help those unfamiliar with high-performance computing, the talk will begin with a brief explanation of high-performance computing that anyone should be able to understand.


Prentice Bisbal first became interested in scientific computing while earning a BS in Chemical Engineering at Rutgers University. After about two years as a practicing engineer, Prentice made the leap to scientific computing and has never looked back. He has been a Unix/Linux system administrator specializing in scientific/high-performance computing ever since. In June 2012, he came full circle when he returned to Rutgers as the Manager of IT for the Rutgers Discovery Informatics Institute (RDI2), where he is responsible for supporting Excalibur, a 2048-node (8192-core) IBM Blue Gene/P supercomputer.

August 25, 2013 20:27

Server Fault: Unanswered

Dual boot Mac with Hypervisor –

I'm running OS X 10.8.4 but also host a guest OS of Windows 7 using VMware Fusion. Occasionally I have problems with the system locking up; this might be VMware Fusion, because I don't remember Virtual …

by owen gerig at August 25, 2013 20:14

Windows 7: stop showing the tooltip when hovering opened applications on the taskbar and instantly show thumbnail previews –

I'm having a hard time figuring out something on Windows 7. I would like to disable the useless tooltip from showing on my taskbar application icons when I hover my mouse over them, and instead I would …

by Quardah at August 25, 2013 20:08

MySQL problem on port 3306 –

All websites on my VPS are getting very slow. After detecting the problem, I found that there is a process taking 99.9% of the CPU: /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql ...

by Michael roshdy at August 25, 2013 20:07

Reverse tunnelling from office machine running Windows 7 for VNC, SOCKS and SSH –

My work machine runs Windows 7 and the workplace proxy does not route to my home machine. I would like to access VNC (already set up using TightVNC), SSH (already set up using OpenSSH on Cygwin) and SOCKS …

by deathMetal at August 25, 2013 20:02
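What's being described is a reverse SSH tunnel: the office machine connects out to the home host and carries its listening ports back with it. A minimal sketch as an `ssh_config` fragment, assuming Cygwin's OpenSSH on the work side; the host name and port numbers are placeholders, not details from the post:

```
# ~/.ssh/config on the Windows 7 work machine (Cygwin OpenSSH)
Host home
    HostName home.example.net         # hypothetical home host
    User me
    # Once "ssh home" is up, the home machine can reach back into the office box:
    RemoteForward 5901 localhost:5900   # VNC (TightVNC listening on 5900)
    RemoteForward 2222 localhost:22     # SSH (Cygwin sshd on 22)
    # Note: RemoteForward is fixed-port only; a SOCKS proxy back to the
    # office would need a separate tool on this vintage of OpenSSH.
```

From the home machine, `vncviewer localhost:5901` or `ssh -p 2222 localhost` would then reach the office desktop, assuming the workplace proxy allows the outbound connection in the first place, which is the real unknown in the question.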

CPU Upgrade for Toshiba Satellite P205-S7476 –

I have a Toshiba Satellite P205-S7476 and would like to upgrade the processor. Where can I find out which replacement CPUs are suitable?

by kimberrrr at August 25, 2013 19:58

How to use new google voice searcher to write text in notepad? –

I've found that Google voice search is a perfect tool for writing instead of typing, but it works only in Chrome and for a limited time. Now my question is how to configure Windows to use those tools for ...

by tonni at August 25, 2013 19:53

Ctrl-D does not do what it's supposed to in Cygwin on Win7, according to C tutorials –

I'm following examples in the book Head First C, and I'm supposed to enter strings at the command line for the program to encrypt, then ctrl-D when I'm done. But ctrl-D doesn't have any effect.

by linda at August 25, 2013 19:35

HP Photosmart C6180 won't boot –

My HP C6180 Photosmart All-in-One printer just keeps powering on and off. Is there any chance of getting it running again without board-level repairs? I have tried multiple combinations of removing the power cords …

by Lis at August 25, 2013 19:25

There, I Fixed It

Server Fault: Unanswered

How to associate floating IPs to running instances? –

I have set up an OpenStack (Grizzly) single-node cloud in a virtual machine (VirtualBox). I can already create/delete/run volumes, networks, instances, and so on with no apparent problem (the logs ...

by perror at August 25, 2013 18:44

System Menus and Norton Anti-Virus –

I bought a new PC the other month and it came with 60 days free Norton Internet Security software. I'm really impressed with it so far and I've usually avoided Norton at all costs in the past due to …

by jAsOn at August 25, 2013 18:38

Restore From Database not showing database I want to pick –

In Management Studio I want to restore to a point in time from an existing database. However, the databases that I can choose from in the drop down list don't include the one I want to choose. And …

by BVernon at August 25, 2013 18:07

Kernel module blacklist not working –

I'm trying to figure out how to blacklist modules, and I'm trying it on USB storage. Unfortunately it seems to have no effect, and the module gets loaded even when it's not used (apparently). My ...

by bogdan.mustiata at August 25, 2013 17:49
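A frequent explanation for exactly this symptom, offered as a hedged sketch rather than a diagnosis: a `blacklist` line in `/etc/modprobe.d/` only disables loading the module via its aliases, so anything that requests `usb-storage` by explicit name (or pulls it in as a dependency) still loads it. The usual hard block adds an `install` override:

```
# /etc/modprobe.d/blacklist-usb-storage.conf
blacklist usb-storage
# "blacklist" alone only stops alias-based autoloading. This line makes any
# explicit "modprobe usb-storage" run /bin/false instead, so the load fails:
install usb-storage /bin/false
```

If the module is being loaded from the initramfs, the image usually needs regenerating afterwards (e.g. `update-initramfs -u` on Debian-derived systems) for the block to apply at boot.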

Planet Ubuntu

Jonathan Carter: Still Alive

I’ve been really quiet on the blogging front the last few months. There’s been a lot happening in my life. In short, I moved back to South Africa and started working for another company. I have around 10 blog drafts that I will probably never finish, so I’m just declaring post bankruptcy and starting from scratch (who wants to read my old notes from Ubuntu VUDS from March anyway?)

For what it’s worth, I’m still alive and doing very well, possibly better than ever. Over the next few months I want to focus my free software time on the Edubuntu project, my Debian ITPs (which I’ve neglected so badly they’ve been turned into RFPs) and Flashback. Once I’ve caught up there’s plenty of other projects where I’d like to focus some energy as well (LTSP, Debian-Edu, just to name a few).

Thanks for reading! More updates coming soon.

August 25, 2013 17:43

Planet Debian

Gregor Herrmann: RC bugs 2013/31-34

after a break, here's again a report about my work on RC bugs. in this case, all bugs either affect packages in the pkg-perl group and/or packages involved in the Perl 5.18 transition.

  • #689899 – mgetty: "Ships a folder in /var/run or /var/lock (Policy Manual section 9.3.2)"
    raise severity, propose patch
  • #693892 – src:libprelude: "prelude-manager: FTBFS with glibc 2.16"
    add some info to the bug report, later closed by maintainer
  • #704784 – src:libogre-perl: "libogre-perl: Please upgrade OGRE dependency to 1.8 or greater"
    upload new upstream release (pkg-perl)
  • #709680 – src:libcgi-application-plugin-stream-perl: "libcgi-application-plugin-stream-perl: FTBFS with perl 5.18: test failures"
    add patch from Niko Tyni (pkg-perl)
  • #710979 – src:libperlio-via-symlink-perl: "libperlio-via-symlink-perl: FTBFS with perl 5.18: inc/Module/"
    adopt for pkg-perl and work around ancient Module::Install
  • #711434 – src:libconfig-std-perl: "libconfig-std-perl: FTBFS with perl 5.18: test failures"
    add patch from CPAN RT (pkg-perl)
  • #713236 – src:libdbd-csv-perl: "libdbd-csv-perl: FTBFS: tests failed"
    upload new upstream release (pkg-perl)
  • #713409 – src:license-reconcile: "license-reconcile: FTBFS: dh_auto_test: perl Build test returned exit code 255"
    add patch from Oleg Gashev (pkg-perl)
  • #713580 – src:handlersocket: "handlersocket: FTBFS: config.status: error: cannot find input file: `handlersocket/'"
    add patch from upstream git, upload to DELAYED/2, then rescheduled to 0-day with maintainer's permission
  • #718082 – src:libcatalyst-modules-perl: "libcatalyst-modules-perl: FTBFS: Tests failed"
    add missing build dependency (pkg-perl)
  • #718120 – src:libbio-primerdesigner-perl: "libbio-primerdesigner-perl: FTBFS: Too early to specify a build action 'vendor'. Do 'Build vendor' instead."
    prepare a patch, upload after review (pkg-perl)
  • #718161 – src:libgeo-ip-perl: "libgeo-ip-perl: FTBFS: Failed 1/5 test programs. 1/32 subtests failed."
    upload new upstream release (pkg-perl)
  • #718280 – libev-perl: "libev-perl: forcing of EV_EPOLL=1 leads to FTBFS on non-linux architectures"
    prepare possible patches, upload one after review (pkg-perl)
  • #718743 – src:libyaml-syck-perl: "libyaml-syck-perl: FTBFS on arm* & more"
    add patch from CPAN RT (pkg-perl)
  • #719380 – src:libwx-perl: "libwx-perl: FTBFS: dh_auto_test: make -j1 test returned exit code 2"
    update versioned (build) dependency (pkg-perl)
  • #719397 – src:libsql-abstract-more-perl: "libsql-abstract-more-perl: FTBFS: dh_auto_test: perl Build test returned exit code 255"
    add missing build dependency (pkg-perl)
  • #719414 – pkg-components: "Update to support updated Debian::Control API"
    upload package fixed by Olof Johansson (pkg-perl)
  • #719500 – src:lire: "lire: FTBFS with perl 5.18: POD errors"
    add patch to fix POD, upload to DELAYED/2
  • #719501 – src:mgetty: "mgetty: FTBFS with perl 5.18: POD errors"
    apply patch from brian m. carlson, upload to DELAYED/2, then rescheduled to 0-day with maintainer's permission, then ftpmaster-auto-rejected for other reasons (cf. #689899)
  • #719503 – src:mp3burn: "mp3burn: FTBFS with perl 5.18: POD errors"
    apply patch from brian m. carlson, upload to DELAYED/2
  • #719504 – src:netsend: "netsend: FTBFS with perl 5.18: POD errors"
    apply patch from brian m. carlson, upload to DELAYED/2
  • #719505 – src:spampd: "spampd: FTBFS with perl 5.18: POD errors"
    add patch to fix POD, upload to DELAYED/2
  • #719596 – libmouse-perl: "libmouse-perl: FTBFS with Perl 5.18: t/030_roles/001_meta_role.t failure"
    upload new upstream release (pkg-perl)
  • #719963 – src:grepmail: "grepmail: FTBFS with perl 5.18: 'Subroutine Scalar::Util::openhandle redefined"
    move away bundled module for tests, QA upload
  • #719972 – src:libapache-authznetldap-perl: "libapache-authznetldap-perl: FTBFS with perl 5.18: syntax error at Makefile.PL"
    apply patch from brian m. carlson (pkg-perl)
  • #720267 – src:libkiokudb-perl: "libkiokudb-perl: FTBFS with perl 5.18: test failures"
    upload new upstream release (pkg-perl)
  • #720269 – src:libmoosex-attributehelpers-perl: "libmoosex-attributehelpers-perl: FTBFS with perl 5.18: test failures"
    add patch from CPAN RT (pkg-perl)
  • #720429 – src:mail-spf-perl: "mail-spf-perl: FTBFS with perl 5.18: POD failure"
    add patch to fix POD (pkg-perl)
  • #720430 – src:msva-perl: "msva-perl: FTBFS with perl 5.18: POD failure"
    send patch to bug report
  • #720431 – src:oar: "oar: FTBFS with perl 5.18: POD failure"
    send patch to bug report
  • #720496 – src:primaxscan: "primaxscan: FTBFS with perl 5.18: POD failure"
    send patch to bug report
  • #720497 – src:profphd: "profphd: FTBFS with perl 5.18: POD failure"
    send patch to bug report
  • #720665 – src:libwww-shorten-perl: "libwww-shorten-perl: FTBFS: POD coverage test failure"
    upload new upstream release (pkg-perl)
  • #720670 – src:aptitude-robot: "aptitude-robot: FTBFS with perl 5.18: test failures"
    try to investigate
  • #720776 – src:bioperl: "bioperl: FTBFS with perl 5.18: test failures"
    forward upstream
  • #720787 – src:libconfig-model-lcdproc-perl: "libconfig-model-lcdproc-perl: FTBFS: Can't locate Config/Model/ in @INC"
    add missing build dependency (pkg-perl)
  • #720788 – src:libconfig-model-openssh-perl: "libconfig-model-openssh-perl: FTBFS: Can't locate Config/Model/ in @INC"
    add missing build dependency in git (pkg-perl)

August 25, 2013 17:42

Planet UKnot

Net Into Dire Muck (an anagram of Nominet Direct UK)

I’ve written before (here, here and here) on Nominet’s proposals for registrations in the second level domain. That means you can register rather than Superficially that sounds a great idea, until you realise that if you have already registered you’ll either have to register (if you even get the chance) and pay Nominet and a registrar a fee for doing so, or be content with someone else registering it. This is a horse Nominet continues to flog, no doubt due to its obstinate refusal to die quite yet.

I encourage you to respond to Nominet’s second consultation on the matter (they’ve made that pretty easy to do).

My view is that this is not a good idea. You can find a PDF copy of my response here. If you prefer reading from web pages, I’ve put a version below.

A. Executive Summary of Response

This document is a response to the consultation document entitled “Consultation on a new .uk domain name service” published by Nominet in July 2013. It should be read in conjunction with my specific responses to the questions asked within the document. Numbering within section B of this document corresponds to the section numbering within Nominet’s own document.

The proposals to open up .uk for second level registration remain one of the least well thought-out proposals I have yet to read from Nominet. Whilst these proposals are less dreadful than their predecessors, they remain deeply flawed and should be abandoned. The proposals pay insufficient attention to the rights and legitimate expectations of existing registrants. They continue to conflate opening up domains at the second level with trust and security. They represent feedback to a one-sided consultation as if it were representative. And, most importantly, they fail to demonstrate that the proposals are in the interest of all stakeholders.

I hereby give permission to Nominet to republish this response in full, and encourage them to do so.

B. Answers to specific questions

The following section gives answers to specific questions in Nominet’s second consultation paper. Areas of text in italics are Nominet’s.

Q1. The proposal for second level domain registration

This proposal seeks to strike a better balance between the differing needs of our stakeholders and respond to the concerns and feedback raised to the initial consultation. We have ‘decoupled’ the security features from the proposal to address concerns regarding the potential creation of a ‘two tier’ domain space and compulsion to register in the second level. We have set out a more efficient registration process to enhance trust in the data and put forward an equitable, cost effective release mechanism. 

Q1.a Do you agree with the proposal to enable second level domain registration in the way we have outlined?

No, I do not agree with the proposal to enable second level domain registration as outlined.

Q1.b Please tell us your reasons why.

The reasons I do not agree with the proposal to enable second level domain registration as outlined are as follows:

In general, no persuasive case has been made to open up second level domain registrations at all, and the less than persuasive case that has been put fails to adequately weigh the perceived advantages of opening up second level domain registrations against the damage caused to existing registrants. In simple terms, the collateral damage outweighs the benefits.

Whilst it is agreed that domain names are not themselves property, in many ways they behave a little like property. Domain name registrants, whether they are commercial businesses, charities or speculators, invest in brands and other material connected with their domain names. The business models of some of these participants (be they arms companies, dubious charities or ‘domainers’) may or may not be popular with some, but a consultation purporting to deal with registration policy should not be the forum for addressing that. Like it or not, these are all people who have invested in their domain name and their brand around that domain name on the basis of the registration rules currently in place. Using the property analogy, they have built their house upon land they believed they owned. Nominet here is government, planning authority and land registry rolled into one, and proposes telling the domain name owners that whilst they thought they had bought the land that they have, now others may be permitted to build on top of them – but no matter, Nominet will still ‘continue to support’ their now subterranean houses. And for the princely sum of about twice what they are paying Nominet already, they may buy the space above their existing home. Of course this is only an option, and living in the dark, below whatever neighbour might come along is an alternative. In any other setting, this would be called extortion.

Of course there are undoubtedly good reasons to open up second level domains; were we able to revisit the original decision made when commercial registrations were first allowed in .uk, second level domains would probably not exist. However the option to revisit that decision is not open to us. Therefore, to change those registration rules Nominet needs a very good reason indeed; a reason so strong, and so powerful that it trumps the rights and legitimate expectations of all those existing registrants. No such reason has yet been presented.

Nominet claims in the introduction to its second consultation paper “It was clear from this feedback [on its first consultation] that there was support for registrations at the second level”; it does not say whether this support outweighed the opposition, and the full consultation responses have never been published. In the background paper Nominet says “The feedback we received was mixed”. In the press release after the first consultation, Nominet said “It was clear from the feedback that there was not a consensus of support for the proposals as presented”. Nominet’s initial consultation document told only one side of the story; it presented the advantages of opening registrations at the second level without putting forward any of the disadvantages. It is therefore completely unsurprising that it found favour with some respondents, particularly those unfamiliar with domain names, who would not be able to intuit the disadvantages themselves, rather like a politician asking voters whether they would like lower taxes without pointing out the consequences. The second consultation is little better – nowhere does it set out the disadvantages of the proposal as a whole to existing registrants. Given this, it is remarkable how much opposition the proposal has garnered. I have yet to find anyone not in the pay of Nominet that supports this proposal, and it has managed to unite parts of the industry not normally known for their agreement in a single voice against Nominet.

For over 20 years registrations have been made in subdomains of .uk, and since 1996 that process has been managed by Nominet. Nominet claims to be a ‘force for good’ that seeks to enhance trust in the internet. Turning its back on its existing registrants that have single-handedly funded its very existence seems to me the ultimate abrogation of that trust.

The remainder of my comments on this consultation should therefore be read in the context that the best course of action for Nominet would be to admit that in this instance it has made an error, and abandon this proposal in its entirety.

Q2.             Registration process for registering second level domains

We believe that validated address information and a UK address for service would promote a higher degree of consumer confidence as well as ensure that we are in a better position to enforce the terms of our Registrant Contract. We propose that registrant contact details of registrations in the second level would be validated and verified and we would also make this an option available in the third levels that we manage. 

2.a Please tell us whether you agree or disagree with the proposed registration requirements we have outlined, and your reasons why. In particular, we welcome views on whether the requirements represent a fair, simple, practical, approach that would help achieve our objective of enhancing trust in the registration process and the data on record.

Validation of address information and indeed any proportionate steps that increase the accuracy of the .uk registration database are desirable. However, this is ineffective for the desired purpose (increasing consumer confidence), and in any case there is no reason to link it only to registrations at the second level.

Nominet’s logic here is flawed. Would validation of address information and a compulsory UK address for service promote a higher degree of consumer confidence? I believe the answer to this is no, for the following reasons:

Firstly, the fact that a domain name has a UK service address (which presumably could be a PO Box or similar) does not, unfortunately, guarantee that the content of the attached web site is in any way to be trusted. All it guarantees is that the web site has a UK service address. Web sites can contain malicious code, whether placed there by the owner or by infection. Web sites with UK service addresses can sell fraudulent goods. Web sites with UK service addresses can turn out not to be registered to the person the viewer thought they were (see Nominet’s DRS case list for hundreds upon hundreds of examples). Nominet has presented no evidence that domain name registrations with UK service addresses are any less likely to carry material that should be ‘distrusted’.

Secondly, the registration address for a domain name is not easily available to the non-technical user of a web browser. Nominet appears to be around 15 years out of date in this area. Consumers increasingly do not recognise domain names at all, but rather use search engines. The domain name is becoming less and less relevant (despite Nominet’s research) as consumers are educated to ‘look for the green bar’ or ‘padlock’. This is a far better way, with a far easier user interface, to determine whether a web site is registered to whom the user thought it was. It is by no means perfect, but it is far more useful than Nominet’s proposal (not least because it has international support). Nominet’s proposal serves only to confuse users.

Thirdly, the notion that UK consumers would be sophisticated enough to know that domain names ending .uk had been address-validated (but not subject to further validation) unless those names ended with one of the existing second-level suffixes is laughable. The user would have to know that a name under one of the existing SLDs is not address-validated, but that the same name registered directly at the second level would be, which would require the average internet user to memorise the previous table of Nominet SLDs. If Nominet hopes to gain support for address validation, it should apply it across the board.

Fourthly, this once again means that existing registrants would be disadvantaged. By presenting (probably falsely) registrations in the second level as more trustworthy, this implies registrations at the third level (i.e. all existing registrations) are somehow less trustworthy, or in some way ‘dodgy’.

Nominet presents two other rationales for this move. Nominet claims it can enforce its contract more easily if the address is validated. This is somewhat hard to understand. Firstly, is Nominet not concerned about enforcing its contracts for other domain names? Secondly, Nominet should insist on a valid address for service (Nominet already pretty much does this under clauses 4.1 and 36 of its terms and conditions); if the service address is invalid, Nominet can simply terminate its contract. Thirdly, a UK service address seems a rather onerous requirement, in particular for personal registrations, such as those of UK citizens who have moved abroad.

Nominet also suggests such a process would ‘enhance trust in the data on record’. This is a fair point, but should apply equally to all domain names. It is also unclear why having a foreign company’s correct head office address (outside the UK) would not be acceptable, whereas a post office box to forward the mail would be acceptable.

Q3.             Release process for the launch of second level domain registration

The release process prioritises existing .uk registrations in the current space by offering a six month window where registrants could exercise a right of first refusal. We believe this approach would be the most equitable way to release registrations at the second level. Where a domain string is not registered at the third level it would be available for registration on a first-come, first-served basis at the start of the six month period or at the end of this process, if the right of first refusal has not been taken up.

Q3.a            Please tell us your views on the methodology we have proposed for the potential release of second level domains. We would be particularly interested in your suggestions as to whether this could be done in a fairer, more practical or more cost-effective way.

The release mechanism proposed is less invidious than the previous scheme in that it gives priority to existing registrants. This change is to be welcomed, I suppose, though it is not a substitute for the correct course of action (scrapping the idea of opening up the second level at all).

The remaining challenge is how to deal equitably with the situation where two different registrants have registrations in different SLDs. The peaceful coexistence of such registrants was facilitated by the SLD system, and opening up .uk negates that facilitation. The current proposals give priority to the first registrant. This has the virtue of simplicity.

I have heard arguments that this penalises owners of names in the most heavily used SLD, who are likely to have spent more building a brand. In particular, it is argued, this penalises owners of two-letter domains in that SLD, as these were released after two-letter domains had been released in the other SLDs (handled under Q3.b below). To the first, the counter-argument is that preferring one SLD simply penalises owners of names in the others; no doubt the minute there is speculation that Nominet might prefer one SLD, there will be an active market in registering, in that SLD, names that are registered only elsewhere.

Q3.b             Are there any categories of domain names already currently registered which should be released differently, e.g. domains registered on the same day, pre-Nominet domains (where the first registration date may not be identified with certainty) and domains released in the 2011 short domains project?

I see no merit in treating pre-Nominet domain names differently provided the domain name holder has accepted Nominet’s terms and conditions.

I see no merit in treating domain names registered on the same day differently, provided Nominet can still ascertain the order of registration.

If Nominet cannot ascertain the order of registration, I would inform each party of this and invite evidence. If after admitting evidence Nominet still could not determine which registration was first, I would either allow an auction or choose at random.

With respect to the short domains project, I would argue Nominet has dug its own grave. Like all of its registrants, Nominet did not predict that it would open up .uk. For consistency, it could re-auction two-letter domains in .uk. However, a simpler, fairer and more equitable result would be not to open up .uk at all.

Q3.c            We recognise that some businesses and consumers will want to consider carefully whether to take on any potential additional costs in relation to registering a second level domain. Therefore we are seeking views on: 

  • Whether the registrant of a third level domain who registers the equivalent second level should receive a discount on the second level registration fee;
  • Developing a discount structure for registrants of multiple second-level .uk domains;
  • Offering registrants with a right of first refusal the option to reserve (for a reduced fee) the equivalent second level name for a period of time, during which the name would be registered but not delegated. 

Please tell us your views on these options, or whether there are any other steps we could take to minimise the financial impact on existing registrants who would wish to exercise their right of first refusal and register at the second level.

These proposals risk introducing excess complexity. The most equitable path would be not to open up registrations at the second level at all.

If, despite all objections, the second level is opened up, it is vital that the interests of existing registrants are protected. A simple and fair way of achieving this would be to allow any existing registrant (and only existing registrants) the registration of their .uk second level domain for free for four years (or failing that at a very substantial discount to the existing prices in third level domains). As this would be a single registration to an existing registrant for a single period the marginal costs would be low. This would be sufficient time for the registrant to change stationery, letterhead etc. in the normal course of events. This should be permitted through registrars other than the registrant’s existing registrar to encourage competition. Save for the altered price, this registration would be pari passu with any other.

Q4.            Reserved and protected names

We propose to restrict the registration of <> and <> in the second level to reflect the very limited restrictions currently in force in the second level registries administered by Nominet. In addition, we would propose to reserve for those bodies granted an exemption through the Government’s Digital Transformation programme, the matching domain string of their domain in the second level.

4.a            Please give us your views on whether our proposed approach strikes an appropriate balance between protecting internet users in the UK and the expectations of our stakeholders regarding domain name registration. Can you foresee any unintended complications arising from the policy we have proposed?

This is one of the stranger proposals from Nominet.

In essence, a government programme (internal to the government) has removed the right of certain organisations to register within a space that is not administered by Nominet. I fail to see why organisations that turned out to be on the wrong side of a government IT decision should have any special status whatsoever, especially when compared to registrants who have been Nominet’s customers for many years. I notice Nominet’s consultation does not even offer a rationale for this.

One example is a name matching the site of The Independent (a UK newspaper); the corresponding government domain does not even seem to be active. This is perhaps the most obvious example, but there are no doubt others. There is simply no reason why those ejected from the government’s space should have preferential treatment over domain name holders in .uk. At the very most, they should be given secondary status after existing domain name holders, but I fail to see why they cannot take their chances in the domain name market like any other organisation.

I am afraid this proposal smells like Nominet pandering to the government for support for its otherwise unpopular proposal.

Q5.            General views

Q5.a            Are there any other points you would like to raise in relation to the proposal to allow second level domain registration?

  1. Nominet should abandon its current proposals in their entirety. Nominet has failed to explain why the proposals in toto are in the interests of its stakeholders, in particular the registrant community (who after all will have this change inflicted on them). Unless there is a high degree of consensus amongst all stakeholder groups in favour of the proposal, it should be abandoned. I believe no such consensus exists.
  2. Nominet should disaggregate the issue of registrations within .uk from the issue of how to build trust in .uk in general. I said before that Nominet should run a separate consultation on opening up .uk as a simple open domain with the same rules as the existing SLDs, and Nominet has failed to do this, having retained different rules for validation, address verification and price. Both consultations conflate the issue of opening up the second level with issues around consumer trust (although admittedly the second consultation does this less than the first). Whilst consumer trust and so forth are important, they are orthogonal to this issue.
  3. Nominet should remember that a core constituency of its stakeholders is those who have registered domain names. If new registrations are introduced (permitting registration in .uk for instance), Nominet should be sensitive to the fact that these registrants will feel compelled to reregister if only to protect their intellectual property. Putting such pressure and expense on businesses to reregister is one thing (and a matter on which ICANN received much criticism in the new gTLD debate); pressurising them to reregister and rebrand by marketing their existing registration as somehow inferior is beyond the pale. Whilst the second proposal is less invidious than the first, it is still a slap in the face for existing .uk registrants.
  4. Nominet should recognise that there is no silver bullet (save perhaps one used for shooting oneself in the foot) for the consumer trust problem, and hence it will have to be approached incrementally.
  5. Nominet should be more imaginative and reacquaint itself with developments in technology and the domain market place. Nominet’s attempt to associate a particular aspect of consumer trust with a domain name is akin to attempting to reinvent the wheel, but this time with three sides. Rather, Nominet should be looking at how to work with existing technologies. For instance, if Nominet was really interested in providing enhanced security, it could issue wildcard domain validated SSL certificates for every registration to all registrants; given Nominet already has the technology to comprehensively validate who has a domain name, such certificates could be issued cheaply or for free (and automatically). This might make Nominet instantly the largest certificate issuer in the world. If Nominet wanted to further validate users, it could issue EV certificates. And it could work with emerging technologies such as DANE to free users from the grip of the current overpriced SSL market.
  6. There is no explanation as to why these domains should cost £4.50 per year wholesale rather than £5 for two years as is the case at the moment. If the domain name validation process is abandoned (as it should be) these domains should cost no more to maintain than any other. Perhaps the additional cost is to endow a huge fund for potential legal action? The increased charges add to the perception that the reason for Nominet pursuing opening domains at the second level is simply financial self-interest, rather than acting in the interests of its stakeholders.
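
To make point 5’s DANE suggestion concrete: DANE (RFC 6698) lets a domain holder publish a certificate binding in DNSSEC-signed DNS as a TLSA record, so users need not rely solely on the commercial CA market. A purely illustrative record follows – the host name and digest are placeholders, not from the consultation:

```
; TLSA record for HTTPS (port 443) on an illustrative .uk host.
; 3 = DANE-EE (match the end-entity certificate), 1 = match the
; public key (SPKI), 1 = SHA-256 digest of that key.
_443._tcp.www.example.uk. IN TLSA 3 1 1 (
        d2abde240d7cd3ee6b4b28c54df034b9
        7983a1d16e8a410e4561cb106618e971 )
```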

Q5.b            Are there any points you would like to raise in relation to this consultation? 

To reiterate the point I have made before, this consultation and its ill-fated predecessor fail to put their points across in an even-handed manner. That is, they expound the advantages of Nominet’s proposal without considering its disadvantages. That is Nominet’s prerogative, but if that is the course Nominet takes, then it should not attempt to present the results of such a ‘consultation’ as representative, as its consultees will have heard only one side of the story.

by Alex Bligh at August 25, 2013 17:21

There, I Fixed It

Server Fault Meta

Boing Boing

This Day in Blogging History: Switching to a straight razor; Weird recycled electronics; Burning Man never gets old

One year ago today
Switching to a straight razor: I think this method delivers an insanely close shave. The best part for me is that there is almost no irritation at all. No razor burn of any kind and my face doesn't feel on fire for the next 2 hours like it did when I used a disposable razor.

Five years ago today
Strange stuff from a computer recycler: Seen above is a wire recorder (circa 1945-1955) that stores audio by magnetizing a reel of fine wire.

Ten years ago today
Wired News: Burning Man never gets old: "A piece I wrote on this year's edition of Burning Man, which begins today in the Nevada desert. About 30,000 are expected to attend."


by Cory Doctorow at August 25, 2013 16:06

Can you identify these Daniel Clowes characters?

Can you identify all of the silhouettes in these new drawings that Daniel Clowes drew for the Modern Cartoonist exhibition murals and "Chicago Views" prints? If you can, you will have a chance to win fabulous prizes! Visit Daniel's website for details.


by Mark Frauenfelder at August 25, 2013 15:13

Guido Fawkes' blog

Coulson’s Mobile Phone Conversation Intercepted


A co-conspirator emailed on Friday:

Yesterday, I found myself walking up the Gray’s Inn Road alongside Andy Coulson. He was talking on his mobile phone to someone about the fact that his trial date had been moved. It was raining and he was mumbling a lot. But I did catch this brilliant quote:

“Whatever you do, don’t share that with anyone. Be very careful.”

I couldn’t resist papping him as he ambled along the road.

A funny thing to hear from the man who stands accused of conspiracy to intercept mobile phone voicemails, among other things. Be more careful Andy…

Tagged: Coulson, M'learned Friends, Media Guido

by Guido Fawkes at August 25, 2013 15:11


Planet Ubuntu

Ubuntu LoCo Council: New Local communities health check process

This will be the new process, aiming to replace the current re-approval process. It aims to be less formal and more interactive, and above all to keep people motivated to be involved in the Ubuntu community.

Every team shall be known as a LoCo team; teams that were previously known as an “Approved LoCo team” shall be known as a “Verified LoCo team”. New teams shall simply be LoCo teams; teams do not have to be verified. The term “Verified” means that a Launchpad team has been created, the team name conforms to the correct naming standard, and the team contact has signed the Code of Conduct.

Every two years a team will present itself for a HealthCheck. This is still beneficial to everyone involved: it gives the team a chance to show how they are doing, and the council can catch up with the team.

What is needed for a HealthCheck?

Create a wiki page covering the activities of the period. Name the page with the name of your team plus the year, for example LoCoTeamVerificationApplication20XX, and include the details below:

  • Name of team
  • How many people are in the team
  • Link to your wiki page / Launchpad group page
  • Social networks (if you have any).
  • Link to your LoCo team portal page, events page and past events page – this is a good reason to encourage teams to use the team portal, as all of the information is there, which saves duplication.
  • Photo Galleries of past events.
  • Tell us about your team, what you do, if you have Ubuntu members in your team, your current projects.
  • Guideline of what you plan on doing in the future.
  • Any meeting logs, if available.

Teams will remain verified; this is just a check-in to see how things are going. If you can’t make a meeting, it can be done over email or bugs.

In short, the overall process should remain pretty much the same as now.

If you have any doubts or questions regarding the new process, please don’t hesitate to discuss or ask us :-)


August 25, 2013 14:26

Planet Debian

Joey Hess: idea: git push requests

This is an idea that Keith Packard told me. It's a brilliant way to reduce GitHub's growing lock-in, but I don't know how to implement it. And I almost forgot about it, until I had another annoying "how do I send you a patch with this amazing git technology?" experience and woke up with my memory refreshed.

The idea is to allow anyone to git push to any anonymous git:// repository. But the objects pushed are not stored in a public part of the repository (which could be abused). Instead the receiving repository emails them off to the repository owner, in a git-am-able format.

So this is like a github pull request except it can be made on any git repository, and you don't have to go look up the obfuscated contact email address and jump through git-format-patch hoops to make it. You just commit changes to your local repository, and git push to wherever you cloned from in the first place. If the push succeeds, you know your patch is on its way for review.

Keith may have also wanted to store the objects in the repository in some way that a simple git command run there could apply them without the git-am bother on the receiving end. I forget. I think git-am would be good enough -- and including the actual diffs in the email would actually make this far superior to github pull request emails, which are maximally annoying by not doing so.

Hmm, I said I didn't know how to implement this, but I do know one way. Make the git-daemon run an arbitrary script when receiving a push request. A daemon.pushscript config setting could enable this.

The script could be something like this:

#!/bin/sh
set -e
tmprepo="$(mktemp -d)"
# this shared clone is *really* fast even for huge repositories, and uses
# only a few hundred KB of disk space!
git clone --shared --bare "$GIT_DIR" "$tmprepo"
# receive the pushed objects into the throwaway clone
git-receive-pack "$tmprepo"
# XXX add email sending code here.
rm -rf "$tmprepo"
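
The XXX step could, for instance, mail the pushed commits as a git-am-able series. A sketch of that step, slotting into the script above where the XXX comment sits: the owner address, the branch name `master`, and the use of `sendmail` are all assumptions, not part of the original idea.

```shell
# Sketch of the email-sending step. Commits reachable from the pushed
# branch in $tmprepo but not from the public repository $GIT_DIR are
# exactly the proposed changes; format-patch turns them into
# git-am-able mail.
owner="owner@example.com"   # assumption: would be looked up from repo config
old=$(git --git-dir="$GIT_DIR" rev-parse master)
git --git-dir="$tmprepo" format-patch --stdout "$old..master" \
    | sendmail "$owner"
```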

Of course, this functionality could be built into the git-daemon too. I suspect a script hook and an example script in contrib/ might be an easier patch to get accepted into git though.

That may be as far as I take this idea, at least for now.

August 25, 2013 13:07

Kernel Planet

Pavel Machek: Wherigo on Nokia n900 -- solved

There is a tool in the tui project -- it provides a gpsd interface for applications such as rana. I added a "-n" option, which outputs NMEA instead of gpsd data... so all you have to do is run gpsd, run java -jar DesktopWIG.jar, and tell DesktopWIG to connect to localhost, port 2948. Now... looking forward to some nice wherigo cache.

August 25, 2013 13:05


Planet Debian

Yves-Alexis Perez: Expiration extension on PGP subkeys

So, last year I switched to an OpenPGP smartcard setup for my whole personal/Debian PGP usage. When doing so, I also switched to subkeys, since that's pretty natural when using a smartcard. I initially set an expiration of one year for the subkeys, and everything seems to have been running just fine so far.

The expiration date was set to October 27th, and I thought it'd be a good idea to renew them well in advance, considering my signing key is in there, which is (for example) used to sign packages. If the Debian archive considers my signature subkey expired, that means I can't upload packages anymore, which is a bit of a problem (although I think I could still upload packages signed by the main key). dak (Debian Archive Kit, the software managing the Debian archive) uses keys from the debian-keyring package, which is usually updated every month or so, so pushing the expiration date forward two months before the due date seemed like a good idea.

I've just done that, and it was pretty easy, actually. For those who followed my setup last year, here's how I did it.

First, I needed my main smartcard (the one storing the main key), since it's the only one able to do operations on the subkeys. So I plugged it in, and then:

corsac@scapa: gpg --edit-key 71ef0ba8
gpg (GnuPG) 1.4.14; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/71EF0BA8  created: 2009-05-06  expires: never       usage: SC  
                     trust: ultimate      validity: ultimate
sub  4096g/36E31BD8  created: 2009-05-06  expires: never       usage: E   
sub  2048R/CC0E273D  created: 2012-10-17  expires: 2013-10-27  usage: A   
sub  2048R/A675C0A5  created: 2012-10-27  expires: 2013-10-27  usage: S   
sub  2048R/D98D0D9F  created: 2012-10-27  expires: 2013-10-27  usage: E   
[ultimate] (1). Yves-Alexis Perez <>
[ultimate] (2)  Yves-Alexis Perez (Debian) <>

gpg> key 2

pub  4096R/71EF0BA8  created: 2009-05-06  expires: never       usage: SC  
                     trust: ultimate      validity: ultimate
sub  4096g/36E31BD8  created: 2009-05-06  expires: never       usage: E   
sub* 2048R/CC0E273D  created: 2012-10-17  expires: 2013-10-27  usage: A   
sub  2048R/A675C0A5  created: 2012-10-27  expires: 2013-10-27  usage: S   
sub  2048R/D98D0D9F  created: 2012-10-27  expires: 2013-10-27  usage: E   
[ultimate] (1). Yves-Alexis Perez <>
[ultimate] (2)  Yves-Alexis Perez (Debian) <>

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 429d
Key expires at mar. 28 oct. 2014 12:43:35 CET
Is this correct? (y/N) y

At that point, a pinentry dialog should ask you for the PIN, and the smartcard will sign the subkey. Repeat for all the subkeys (in my case, 3 and 4). If you ask for PIN confirmation at every signature, the pinentry dialog will reappear each time.

When you're done, check that everything is ok, and save:

gpg> save
corsac@scapa: gpg --list-keys 71ef0ba8
gpg: checking the trustdb
gpg: public key of ultimately trusted key AF2195C9 not found
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   4  signed:   5  trust: 0-, 0q, 0n, 0m, 0f, 4u
gpg: depth: 1  valid:   5  signed:  53  trust: 5-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2013-12-28
pub   4096R/71EF0BA8 2009-05-06
uid                  Yves-Alexis Perez <>
uid                  Yves-Alexis Perez (Debian) <>
sub   4096g/36E31BD8 2009-05-06 [expires: 2014-10-28]
sub   2048R/CC0E273D 2012-10-17 [expires: 2014-10-28]
sub   2048R/A675C0A5 2012-10-27 [expires: 2014-10-28]
sub   2048R/D98D0D9F 2012-10-27 [expires: 2014-10-28]

Now that we have the new subkeys definition locally, we need to push it to the keyservers so other people get it too. In my case, I also need to push it to Debian keyring keyserver so it gets picked at the next update:

corsac@scapa: gpg --send-keys 71ef0ba8
gpg: sending key 71EF0BA8 to hkp server
corsac@scapa: gpg --keyserver --send-keys 71ef0ba8
gpg: sending key 71EF0BA8 to hkp server

Main smartcard now back in safe place. As far as I can tell, there's no operation needed on the daily smartcard (which only holds the subkeys), but you will need to refresh your public key on any machine you use it on before it gets the updated expiration date.

by (Yves-Alexis) at August 25, 2013 12:18

Vincent Bernat: Boilerplate for autotools-based C project

When starting a new HTML project, a common base is to use HTML5 Boilerplate which helps by setting up the essential bits. Such a template is quite useful for both beginners and experienced developers as it is kept up-to-date with best practices and it avoids forgetting some of them.

Recently, I have started several little projects written in C for a customer. Each project was bootstrapped from the previous one. I thought it would be useful to create a template that I could reuse easily. Hence, bootstrap.c [1], a template for simple projects written in C with the autotools, was born.


A new project can be created from this template in three steps:

  1. Run Cookiecutter, a command-line tool to create projects from project templates, and answer the questions.
  2. Setup Git.
  3. Complete the “todo list”.


Cookiecutter is a new tool to create projects from project templates. It uses Jinja2 as a template engine for file names and contents. It is language agnostic: you can use it for Python, HTML, Javascript or… C!

Cookiecutter is quite simple. You can read an introduction from Daniel Greenfeld. The Debian package is currently waiting in the NEW queue and should be available in a few weeks in Debian Sid. You can also install it with pip.

Bootstrapping a new project is super easy:

$ cookiecutter
Cloning into 'bootstrap.c'...
remote: Counting objects: 90, done.
remote: Compressing objects: 100% (68/68), done.
remote: Total 90 (delta 48), reused 64 (delta 22)
Unpacking objects: 100% (90/90), done.
Checking connectivity... done

full_name (default is "Vincent Bernat")? Alfred Thirsty
email (default is "")?
repo_name (default is "bootstrap")? secretproject
project_name (default is "bootstrap")? secretproject
project_description (default is "boilerplate for small C programs with autotools")? Super secret project for humans

Cookiecutter asks a few questions to instantiate the template correctly. The result has been stored in the secretproject directory:

├── get-version
├── m4
│   ├── ax_cflags_gcc_option.m4
│   └── ax_ld_check_flag.m4
└── src
    ├── log.c
    ├── log.h
    ├── secretproject.8
    ├── secretproject.c
    └── secretproject.h

2 directories, 13 files

Remaining steps

There are still some steps to be executed manually. You first need to initialize Git, as some features of this template rely on it:

$ git init
Initialized empty Git repository in /home/bernat/tmp/secretproject/.git/
$ git add .
$ git commit -m "Initial import"

Then, you need to extract the todo list built from the comments contained in source files:

$ git ls-tree -r --name-only HEAD | \
>   xargs grep -nH "T[O]DO:" | \
>   sed 's/\([^:]*:[^:]*\):\(.*\)T[O]DO:\(.*\)/\3 (\1)/' | \
>   sort -ns | \
>   awk '(last != $1) {print ""} {last=$1 ; print}'

2003 Add the dependencies of your project here. (
2003 The use of "Jansson" here is an example, you don't have (
2003 to keep it. (

2004 Each time you have used `PKG_CHECK_MODULES` macro (src/
2004 in ``, you get two variables that (src/
2004 you can substitute like above. (src/

3000 It's time for you program to do something. Add anything (src/secretproject.c:76)
3000 you want here. */ (src/secretproject.c:77)

Only a few minutes are needed to complete those steps.

What do you get?

Here are the main features:

  • Minimal and
  • Changelog based on Git logs and automatic versioning from Git tags [2].
  • Manual page skeleton.
  • Logging infrastructure with variadic functions like log_warn(), log_info().

logging output of lldpd

About the use of the autotools

The autotools are a suite of tools to provide a build system for a project, including:

  • autoconf to generate a configure script, and
  • automake to generate makefiles using a similar but higher-level language.

Understanding the autotools can be quite a difficult task. There is a lot of bad documentation on the web, and the manual does not help, describing corner cases that would only be useful if you wanted your project to compile on HP-UX. So, why do I use it?

  1. I have invested a lot of time in the understanding of this build system. Once you grasp how it should be used, it works reasonably well and can cover most of your needs. Maybe CMake would be a better choice but I have yet to learn it. Moreover, the autotools are so widespread that you have to know how they work.
  2. There are a lot of macros available for autoconf. Many of them are included in the GNU Autoconf Archive and ready to use. The quality of such macros is usually quite good. If you need to correctly detect the appropriate way to compile a program with GNU Readline or something compatible, there is a macro for that.

If you want to learn more about the autotools, do not read the manual. Instead, have a look at Autotools Mythbuster. Start with a minimal and do not add useless macros: a macro should be used only if it solves a real problem.
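To give an idea of scale (this is only a sketch of such a minimal starting point, not the template's exact contents; the project name is taken from the TODO output above, and the version and bug address are placeholders), a minimal configure.ac needs little more than:

```
AC_INIT([secretproject], [0.1], [bug-report@example.com])
AC_CONFIG_SRCDIR([src/secretproject.c])
AM_INIT_AUTOMAKE([foreign -Wall])
AC_PROG_CC
AC_CONFIG_FILES([Makefile src/Makefile])
AC_OUTPUT
```

Every further macro should earn its place by solving a problem you actually have.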

Happy hacking!

  1. Retrospectively, I think boilerplate.c would have been a better name. 

  2. For more information on those features, have a look at their presentation in a previous post about lldpd

August 25, 2013 11:43

There, I Fixed It

Planet Debian

Francesca Ciceri: Some things I learnt at DebConf13

  • People from Argentina don't like to shake hands when you first meet them: it's too formal (sorry, Marga!)

  • Video team volunteering is extremely fun. Even directing is fun, except when the speaker decides to wander out of the talkroom just to make a point during a demo.

  • I won't name names, but I'm not the only one in Debian who remembers how to dance the Time Warp and isn't ashamed to do it in public.

  • You can bribe a bloodthirsty deity with cheese, especially if the deity is a French one and you are in Switzerland.

  • A zip line is also called: tirolina (in Spanish), tyrolienne (in French) and - my all time favourite - Tarzanbahn in German, or at least in the German spoken by A. when he was a kid.

  • Given the number of talks about it this year, we - as a community - care a lot about newcomers, mentoring, creating a welcoming environment and community outreach in general. That's really great!

  • Tagging your own badge with your main interests provides conversation starters and makes it easier to meet new people. Many thanks to Bremner for this brilliant idea.

My self-tagged badge

  • The Cheese & Wine party is the perfect time to discuss pedagogical methods (and discover interesting projects like the Sugar desktop environment).

  • During a Mao game, an extremely simple rule by a newbie can turn out to be too difficult to guess even for seasoned players. Oh, the irony and the inherent democracy of that! All hail the helicopter!

  • Once in a while, cooking dinner for upstream and/or sitting with them around a table to plan your next moves is a good idea, as well as a common practice in community-supported agriculture.

  • There are people out there brave enough to stand up and declaim their own poems, or poems they love. As a shy person, I really am in awe of them. They make a sentence like "There will be poetry" sound less threatening.

  • If you sleep in room 43, next door is the answer. Or the previous one, depending on the direction you're walking.

August 25, 2013 09:57

Planet Ubuntu

Valorie Zimmerman: The Decipherment of Linear B

This wonderful little book has been recommended quite a few times by my friend Sho_. Today it arrived from the library, and I'm so happy to have read it. Yes, it's slim and readable: with afterword and appendices it runs to less than 150 pages. It's a loving tribute by his co-author, John Chadwick, to Michael Ventris, who 'broke the code' of Linear B and then died before the publication presenting it to the world.

Why a book on such an obscure subject? Yes, it's about Mycenaean Greek! But more important, it's about how a young person with an interest in languages and training in architecture (Michael Ventris) could use this background to crack one of the biggest mysteries revealed by modern archaeology. And he contradicted the leading experts and prevailing opinion that Linear B could not be Greek. When he proved that the Mycenaeans spoke Greek, he pushed the beginnings of European history back many centuries.

Ventris won over the experts by doing careful analysis, and by always sharing his work as he proceeded. In fact, after reading all the published research, he began his work by surveying the twelve leading experts with a series of questions. This questionnaire was penetrating enough that he got ten answers. After compiling these answers and adding his own analysis, he sent the survey results back to all the experts. By getting a good grasp of current scholarship, he built a wonderful foundation for his study of the inscriptions.

Although it has not been proven that Ventris did any code-breaking during the war, his method of analysis certainly owes much to that Bletchley Park work. Also discussed here is Alice Kober, whose early work set Ventris on the right path. Her card catalogue was invaluable to Ventris, after her life was cut short by cancer. Ventris always consulted other experts, and was generous in sharing credit. He never succeeded in proving what he set out to do, which was prove that Linear B was Etruscan. Instead, he put in the hard work of analysis of all the evidence, and followed the trail to the end, proving what he started out believing impossible: Linear B is ancient Greek.

Read this book, and be inspired! Grab it used or get it from the library as I did. Although available for Kindle, the symbols render badly in that edition.

by (Valorie Zimmerman) at August 25, 2013 09:00

Planet Puppet

The Technical Blog of James: Finding YAML errors in puppet

I love tabs, they’re so much easier to work with, but YAML doesn’t like them. I’m constantly adding them in accidentally, and puppet’s error message is a bit cryptic:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: malformed format string - %S at /etc/puppet/manifests/foo.pp:18 on node

This happens during a puppet run, which in my case loads up YAML files. The tricky part was that the error wasn't related to the foo.pp file at all; it just happened to be the first time hiera was run. So where's the real error?

$ cd /etc/puppet/hieradata/; for i in `find . -name '*.yaml'`; do echo $i; ruby -e "require 'yaml'; YAML.parse(File.read('$i'))"; done

Run this one-liner on your puppetmaster, and it should quickly point out which files have errors, and exactly which lines (and columns) they're on.

Happy hacking,



by jamesjustjames at August 25, 2013 08:31

ASCII Art Farts


      (((/\))))),      I'M A FEMALE BODY BUILDER           
      ((( ^.^ )))                                          
       `"\ = /"`                                           
      /`/'._.'\`\      I.E., I HAVE HUGE SELF-ESTEEM ISSUES
     / (   Y   ) \           ............... AND SMALL TITS
    / /`\-' '-/`\ \                                        
    \ \  )   (  / /                                        
     `\\/\   /\//`                                         
      (/  \_/  \)                                          
       |   |   |                                           
       \   |   /                                           
        \_ | _/                                            
        / / \ \                                            
        | | | |                                            
jgs     |/   \|                                            
       / \   / \                                           
       `-'   '-`                                           

by (ASCII Art Farts: de) at August 25, 2013 07:00

Planet Debian

Matthias Klumpp: Some Tanglu updates…

Long time since I published the last update about Tanglu. So here is a short summary of what we did in the meantime (even if you don't hear that much from us, there is lots of stuff going on!)

Better QA in the archive

We now use a modified version of Debian’s Britney tool to migrate packages from the newly-created “staging” suite to our current development branch “aequorea”. This ensures that all packages are completely built on all architectures and don’t break other packages.

New uploads and syncs/merges now happen through the staging area and can be tested there, as well as blocked on demand, so our current aequorea development branch stays installable and usable for development. People who want the *very* latest stuff can add the staging sources to their sources.list (but we don't encourage you to do that).

Improved syncs from Debian

The Synchrotron toolset we use to sync packages with Debian recently gained the ability to sync packages using regular expressions. This makes it possible to sync many packages quickly which match a pattern.

Tons of infrastructure bugfixes

The infrastructure has been tweaked a lot to remove quirks, and it now works quite smoothly. Also, all Debian tools now work flawlessly in the Tanglu environment.

A few issues are remaining, but nothing really important is affected anymore (and some problems are merely cosmetic).

Long term we plan to replace the Jenkins build infrastructure with the software which is running Paul Tagliamonte's (only the buildd service; the archive will still be managed by dak). This requires lots of work, but will result in software usable not only by Tanglu, but also by Debian itself and anyone who wants a build service capable of building a Debian distro.

KDE 4.8 and GNOME 3.8

Tanglu now offers KDE 4.8 by default, a sync of GNOME 3.8 is currently in progress. The packages will be updated depending on our resources and otherwise just be synced from Debian unstable/experimental (and made working for Tanglu).

systemd 204 & libudev1

Tanglu now offers systemd 204 as the default init system, and we transitioned the whole distribution to the latest version of udev. This even highlighted a few issues which could be fixed before the latest systemd reached Debian experimental. The udev transition went nicely, and hopefully Debian will soon fix bug#717418 too, so both distributions run with the same udev version (which obviously makes things easier for Tanglu ^^)


We now have a Plymouth boot screen, and wallpapers and other artwork are in progress :-)

Alpha-release & Live-CD…?

This is what we are working on – we have some issues in creating a working live-cd, since live-build has some problems with our stack. We are currently resolving them, but because of the lack of manpower this is progressing slowly (all contributors also work on other FLOSS projects and of course also have their jobs :P )

As soon as we have working live-media, we can do a first alpha release and offer installation media.


Tanglu is a large task, and we need all the help we can get – right now especially technical help from people who can build packages (Debian Developers/Maintainers, Ubuntu developers, etc.). We especially need someone to take care of the live-cd.

But the website also needs some help, and we need more artwork or improved artwork ;-) In general, if you have an idea how to make Tanglu better (of course, something which matches its goals :P ) and want to realize it, just get in touch with us! You can reach us on the mailing lists (tanglu-project for stuff related to the project, tanglu-devel for development of the distro) or in #tanglu / #tanglu-devel (on Freenode).

August 25, 2013 05:56

Planet Sysadmin

Chris Siebenmann: Adding basic quoting to your use of GNU Readline

Suppose that you have a program that makes basic use of GNU Readline (essentially just calling readline()) and you want to add the feature of quoting filename expansions when needed. Sadly, the GNU Readline documentation is a little bit scanty on what you need to do, so here is what has worked for me.

(The rest of this assumes that you've read the Readline programming documentation.)

As documented in the manual (eventually), you first need a function that will actually do the quoting, which you activate by pointing rl_filename_quoting_function at it. Although the documentation neglects to mention it, this function must return a malloc()'d string; Readline will free() it for you. As far as I can tell from running my code under valgrind, you don't need to free() the TEXT argument you are handed.

You must also set rl_filename_quote_characters and rl_completer_quote_characters to appropriate values. To be fully correct you probably also want to define a dequoter function, but I've gotten away without it so far. In simple cases Readline will simply ignore your quote character at the front when doing further filename completion; I think you only need a dequoter function to handle the case where you've had to escape something in the filename.

With a sane library this would be good enough. But contrary to what the documentation alleges, this doesn't seem to be sufficient for Readline. Instead you need to hook into Readline completion in order to tell Readline that yes really, it should quote things. You do this by the following:

char **my_rl_yesquote(const char *init, int start, int end) {
    rl_filename_quoting_desired = 1;
    return NULL;
}

/* initialize by setting:
   rl_attempted_completion_function = my_rl_yesquote;
*/

Your 'attempted completion function' exists purely for this, although you can of course do more if you want. Note that the need for this function and its actions is in direct contradiction to the Readline documentation as far as I can tell. On the other hand, following the documentation doesn't work (yes, I tried it). Possibly there is some magic involved in just how you invoke Readline and some unintentional side effects going on.

(On the other hand I got this from a Stackoverflow answer, so other people are having the same problem.)

Note that a really good job of quoting and dequoting filenames needs a certain number of other functions, per the Readline documentation. I can't be bothered to worry about them (or write them) so far.

I was going to put my actual code in here as an example but it turns out it is too embarrassingly ugly and hacky for me to do it in its current state and I'm not willing to include cleaner code that I haven't actually run and tested. Check back later for acceptable code that I know doesn't explode.

(Normally I clean up my hacky 'it finally works' first pass code, but I was rather irritated by the time I got something that worked so I just stopped and put the whole thing out of my mind.)

August 25, 2013 05:08

Boing Boing

Second panda cub born dead

A full day after the birth of her cub, panda Mei Xiang gave birth to a second, stillborn, cub. The first cub is still doing great. But the second one had developmental abnormalities and wasn't ever really going to live.

by Maggie Koerth-Baker at August 25, 2013 04:46


Sunday Secrets

PostSecret is an ongoing community art project where people mail
in their secrets anonymously on one side of a homemade postcard.
PostSecret, 13345 Copper Ridge Road, Germantown, MD 20874

PostSecret Community

See More Secrets. Follow PostSecret on Twitter.


The Times of India (August 25, 2013)

Frank Warren thought it was just one of those crazy ideas when he asked people to reveal their secrets, anonymously, on his website. The number of responses he got left him and the whole world astonished...

The ones who follow the policy of taking a secret to the grave may take pride in their strength of character; but often it comes at the cost of their own happiness. Author Frank Warren knows a thing or two about the burden of keeping secrets. The American, who started a website called PostSecret in 2005 — where people can anonymously post their deepest, most embarrassing secrets — receives about 1,000 secrets every week! The website had over 100 million visitors in just the first three years. From government conspiracies (he was questioned by the FBI once) to extra-marital affairs, what makes people from all over the world want to spill the beans anonymously?

If mind doesn't express, body explodes

Psychiatrist Dr Harish Shetty says, "If our mind doesn't express, our body explodes. Secrets need an outlet and sharing secrets is incredibly freeing." It's not an easy task to keep something buried deep within. It's not about how grave the matter is, it's about the inability to share the matter that makes life tough.

Film and theatre actor Namit Das says, "Secrets can turn into a problem when they are about someone close because it may be difficult to face that person. You do not know what to do when you can neither hide the secret nor reveal it."

Relationship counsellor Dr Rajan Bhonsle explains why one inherently feels guilty about keeping a secret, "Our subconscious mind joins the battle against secrecy, and we often find ourselves telling the truth in dreams and occasional drunken disclosures. The more secretive we are, the more separate we feel from our own lives." It was a similar feeling that made model and actor Sudhanshu Pandey blurt out a secret that he had been keeping for a while, "I was keeping this secret for someone close to me, but I realised it wasn't fair to his family members. So I went ahead and shared it with them. It was quite freeing."

Keeping our own secret is the toughest

A fib here, a white lie there may be okay, but a secret becomes burdensome when it has the potential to destroy someone close to us in the long run. Warren suggests, "If we could find the right person to talk to, we might realise that talking about an embarrassing story might lead to a more authentic relationship with others, and even with ourselves."

Actor Vishakha Singh agrees, "It's easier to keep someone else's secret rather than one's own. If it's your own secret, the desire to unload is much stronger. Since our body has a strange mechanism that reacts to stress, keeping a secret can harm one physically and mentally. I experienced that kind of stress when I saw my first strand of white hair!"

In his 2012 TED Talk, Warren began his discourse by saying, "There are two kinds of secrets, the ones we hide from others, and those we keep from ourselves."

According to him, sharing a secret has the potential to transform lives. He describes how. "Once I received a letter from a lady, who wrote: 'Dear Frank, do you know that I left my boyfriend of a year and a half because of someone's postcard on your website that read, 'His temper is so scary, I've lost all my opinions'?"

After reading a complete stranger's letter that mirrored her own experience, this woman found the courage to come out of an abusive relationship. That's the biggest power of sharing a secret: that we get to know that we aren't alone. That someone, somewhere is feeling exactly the same about life, and grappling with the same problems that we are, and not giving up. Just knowing that can somewhat take off the burden of a secret.

Documentary filmmaker Madhureeta Anand says, "There was an instance when I was extremely hurt by what someone had done to me. I didn't tell anyone about it. Two years later, when I couldn't bear it anymore, I told a friend and we talked about it. The weight just disappeared!"

Wrestler Sushil Kumar says he has no such worries because he simply wouldn't ever keep a secret: "I would end up telling 10 lies in the attempt to hide a secret, and get caught!"


Is the secret troubling you?

If the secret is disturbing you emotionally, share it. Revealing it will reduce the associated guilt.

Is he/she a good confidante?

The person you are sharing your secret with should be discreet and non-judgmental.

Is your loved one likely to discover the secret?

If there are high chances of your loved one discovering the secret, consider coming clean right away. It will cause pain, but learning it from someone else will hurt your loved ones even more.


Write it down in a diary. To keep the information private, you can opt for a lock-and-key diary. This way, the secret will stay with you yet you'll feel relieved.

You can keep a 'secret jar'. If you are secretive by nature, you may have more than one thing weighing you down. You can write about them on a chit and put it in the jar. This way, you get rid of the thought without making it public.

Visit a counsellor. A professional won't judge you. He will be able to tell you the consequences of sharing your secret, to help you cope better.

"I remember a scene from Wong Kar-wai's film In the Mood for Love, where the lead actor recounts an old saying that if you make a hole in a tree and whisper your secret into it, it remains there forever. I believe in it too"

—Namit Das, actor

"I don't keep secrets from the people who matter to me. To the rest of the world, it is not a secret, it is keeping my privacy"

—Aditi Rao Hydari, actor

"I am a true scorpion. I find it very difficult to trust people. I am quite stubborn by nature. If I plan not to tell my secret, I just won't"

—Anu Malik, music director

"I don't have the ability to keep secrets. I like to share everything with my wife and family. I would end up saying 10 lies in the attempt to hide a secret. Also, keeping a secret would take a toll on my mental health. It'll always be at the back of my mind, nagging me"

—Sushil Kumar, wrestler

"Keeping my family in the dark about something, while I know of it, would be very burdensome. It wouldn't allow me to be at peace with myself "

—Ayushmann Khurrana, actor

by (postsecret) at August 25, 2013 02:41

Daring Fireball

Krugman on Microsoft and Apple

There’s much to quibble about in this Krugman post, but I’ll keep it short:

The Microsoft story is familiar. Back in the 80s, Microsoft and Apple both had operating systems to sell; Apple’s was clearly better. But Apple misunderstood the nature of the market: it said, “We have a better system, so we’re going to make it available only on our own beautiful machines, and charge premium prices.” Meanwhile Microsoft licensed its system to lots of people making cheap machines — and established a commanding position through network externalities. People used Windows because other people used Windows — there was more software available, corporate tech departments were prepared to provide support, etc.

Two things.

First, when we talk about the ’80s and ’90s and Apple, we’re talking about the Mac. And though the Mac suffered mightily in the late ’90s, dropping so low that it almost brought the entire company down, today, the Mac makes Apple the world’s most profitable PC maker. Even if you don’t count the iPad as a “PC”, no one makes more money selling personal computers than Apple. In the long run, Apple’s strategy paid off.

Second, Krugman is right about the fundamental difference between Windows’s success and iOS’s. The beauty of the Windows hegemony is that it wasn’t the best, and didn’t have to be the best. Once their OS monopoly was established, they just had to show up. Apple’s success today is predicated on iOS being the best. They have to stay at the top of the game, both design- and quality-wise, to maintain their success. That’s riskier.

by John Gruber at August 25, 2013 02:28

Boing Boing

Flash Gordon (1980)

"Klytus, I'm bored. What play thing can you offer me today?"

Last year Pesco shared the classic football fight from Sam J. Jones' epic title role in the 1980 version of Flash Gordon. I've just re-watched the film and never cease to be thrilled.

Flash is one of the most iconic scifi characters of all time. An American football player thrown into a galaxy spanning adventure to save the Earth. This Mike Hodges directed version of the story has an incredible cast. Max von Sydow as Ming the Merciless, Timothy Dalton as Prince Barin and Brian Blessed playing Vultan the Hawkman are just a few of the fantastic performances.

Queen's soundtrack always gets my heart pumping and leaves me certain that Flash will save every one of us.

Flash Gordon (1980)


by Jason Weisberger at August 25, 2013 00:34

August 24, 2013

Kernel Planet

Pavel Machek: Wherigo on Nokia n900

I kind of assumed that getting a wherigo cartridge to run on the N900 would be hard. I was wrong. Desktopwig from the openwig project actually works well. I assumed that installing Java using apt in a Debian chroot on the N900 would be very easy. I was wrong again. Apt-get writes too much data to the flash, resulting in a watchdog reset and a corrupted partition. (It is quite incredible that the N900 in its default config is so broken that writing 80MB of data causes this...) Using settings from an obscure maemo thread solved that.

So now I'm running the cartridge on the N900 and have one more problem: how to connect the GPS. Desktopwig talks NMEA, but the N900 exposes GPS over D-Bus. I do have a fake gpsd for the N900... but that is still not NMEA. There must be some tool that talks to gpsd and outputs NMEA data, but how to find it?

August 24, 2013 23:53

Planet UKnot

B&Q and the Disabled Parking Debacle

It’s a bank holiday weekend. What are you doing – perhaps some DIY or pottering around the garden? Well, if you’re disabled, chances are you won’t be able to shop at a B&Q because their dedicated blue badge parking bays are used to store stock from Spring to Autumn. And the company aren’t willing to change.

I’m physically disabled. I need to park as close to my destination as possible, and I need a wide parking bay with hatching either side, so I can open my doors fully in order to get in and out – if I use a regular bay and someone parks alongside me, I can’t get back into my car. Luckily, the number and sizes of these bays are specified by the Department for Transport (DfT), so I can rely on them being present when I need to shop. The specifications are detailed on their website (although this PDF is dated 1995, the DfT confirmed to me in June 2013 that this is still their current guidance). So it’s straightforward – a shopping area should provide dedicated, wide parking spaces for every disabled member of staff, plus 6% of all bays if there are under 200 spaces in total, or 4 bays plus an additional 4% if there are more than 200 spaces.

When a store doesn’t provide those bays, or if they are provided but not enforced, it’s easy to campaign, quoting the DfT’s guidance. But what happens when disabled parking bays are present, but it’s the store themself who abuses them?

B&Q is a repeat offender. Take my local store, Leyton Mills in East London. I won’t bore you with the numbers, but you can see the location of disabled bays – marked with blue splodges – on this trading estate. Whether you only count the parking bays in the B&Q area, or the entire trading estate, I’ve totted them up and the numbers are such that obstructing just a few bays will mean that the DfT minimums are not met. Every dedicated wide bay is important for me and the many disabled people who want to shop there.

Image from Google Maps – click for detail

Back in 2009 I raised the issue of the disabled bays having been turned into a garden centre:

The management didn’t seem to care – I spoke to them three times and then followed up with a letter, but nothing changed – so I contacted the local police team who patrol the trading estate. A lovely PC gave me the management company’s address but also confided that he’d already had words with B&Q on this topic and it was a source of frustration. The PC spoke to store management again and this time they removed plants from three bays, and promised to free up another two over the weekend (theoretically leaving just one disabled bay full of stock, and eleven available for parking).

Well, B&Q did indeed remove plants from some of the disabled parking bays… however, they replaced them with a gazebo containing a woman selling trampolines!

If it wasn’t so frustrating and ridiculous, I’d laugh. All I want is somewhere to park.

The problem is ongoing, every year the same. In May 2013 the disabled bays looked like this:

…yes, it’s another garden centre! I should point out that there are plenty of normal parking spaces that could be used instead of the dedicated wide ones, if they are unable to fit all their stock in the store. But not only do the disabled bays get used for plants, but they are spread out to provide space for people to browse around them! So it’s not just an emergency holding area, but a deliberate abuse of the space.

Don’t just take my word for it. Here are some examples from other stores…

Claire Brewer spotted this in Lea Bridge Road:

And my own image of the same store (I was unable to park there at all, and staff looked at me blankly when I called them over and asked them to clear a disabled bay for me) – click on the image to see more detail:

Su Smith visited Aylesbury, where they were displaying compost in the car park:

And Dr John Bullas saw this stack of trolleys at a Southampton store:

This problem has also been in the news, for example last October Harrow Times reported on B&Q’s Stanmore store: Anger at DIY store blocking disabled bays with stock. And I’ve heard reports of similar issues at B&Q stores across the UK. Clearly the company don’t care about the needs of their disabled customers. I won’t give you the full spiel about the business case for providing access, but suffice to say that 1 in 7 adults in the UK are disabled, with an annual spending power of around £80 billion. It’s clear that B&Q are not just frustrating and insulting disabled people, but are also turning away a lot of potential income.

So what do B&Q have to say?

Their website has an “Ethics FAQ”, which states:

“Q. What services do you provide for disabled customers?
A. All car parks have designated disabled parking bays, near the main store entrance.”

It would be nice if, having created these parking bays, B&Q would keep them free for the customers who need them.

This summer I tackled B&Q via Twitter, hoping that the publicity would make them think twice. They responded to my photos of Leyton Mills, and asked their manager to move stock out of the bays. This was done in many cases, but several of the hatched areas (needed for disabled people to open their doors fully or to pull their wheelchairs alongside their car) remained blocked with stock. A quick poll of my disabled friends on Twitter indicated that in order to be useful, a bay needs clear hatching on both sides. And indeed this clear hatching is specified in the DfT guidance. If stores are still blocking the hatching, they are still obstructing the bays.

Also, B&Q may have cleared some of the spaces in Leyton Mills. Stores in other locations remain just as bad. They might have responded to my tweets in fear of bad publicity but they haven’t made any kind of change to their general attitude or policy. In fact when I visited Leyton Mills this week I found another bay blocked by a sign inviting me to come and shop over the Bank Holiday weekend. This infuriated me enough to finally blog about it.

So it seems that B&Q just don’t care. Summer may be nearly over, and perhaps once colder weather comes and the bedding plants are sold out for another year, the demand for disabled bays (from both customers and the stores) will diminish.

But we know it will happen again.

Name and shame your local offenders, and perhaps head office will do something about it. If not, perhaps it’s time the issue was raised with the Equality and Human Rights Commission.

As their advert says – "B&Q: What could you do?"

I welcome readers’ comments on this blog. Please let me know whether this is something which has affected you.

by Flash Wilson Bristow at August 24, 2013 22:14

Daring Fireball

Om Malik on Yahoo

Om Malik:

And forget the products — so far Yahoo has been unable to attract top quality talent to the company. Not one 20-something I have talked to in the past six months has wistfully talked about working for Yahoo. And even those who have joined Yahoo from Google are joining the company thanks to mega-million dollar contracts, not because they want to work there. When Yahoo becomes the desired job-spot for a fresh, new tech tinkerer — that will be the time I will lighten up on Yahoo.

by John Gruber at August 24, 2013 22:09


Duck Diss

The duck boat tour just rolled under my window, megaphone-shouting that "SOMA IS THE WORST NEIGHBORHOOD IN SAN FRANCISCO!"

This is the second time that's happened, so I guess it's an official part of the script, not just editorialization.

But... yesssssss. Yes it is. Please don't move here.

by jwz at August 24, 2013 21:51

Planet Debian

Marko Lalic: PTS rewrite: Django Memory Usage

Rewriting the PTS in a fairly large Web framework such as Django is sure to increase memory consumption for even the simplest tasks, compared to a couple of (loosely related) perl scripts, if only because the entire Django machinery needs to be loaded.

However, just how large the memory consumption difference between the two can be seen in the example of the command to dispatch received package emails to users. In the new version, the command is implemented as a Django management command, whereas before it was a perl script relying on the script for some additional functions.

The reason this command's memory usage is important is an incident with the current test deployment of the new PTS. All package mails that the old PTS receives are also forwarded to the new instance in order to expose the new implementation to real-world mail traffic. At a certain point, over a hundred mails were received at once, causing over a hundred dispatch processes to be launched. This led to the system running out of memory; the kernel invoked the OOM killer, which brought down the PostgreSQL server and made the site unavailable.

For now, the problem has been prevented from recurring by setting up exim to queue messages when a certain system load average is exceeded. Still, it is interesting to check how large the memory consumption difference between the old and new dispatch really is.

Measurement setup

In order to measure the memory use, two messages of different sizes were used: 1 KB and 1 MB.

For the new PTS, two cases were considered: sqlite3 and PostgreSQL as the database backend. In both cases, the database was simply initialized by syncdb, meaning it contained only the default keywords found in the initial fixture and no registered users or subscriptions. The DEBUG setting was set to False.

For testing the old PTS, the database was also initialized empty.

This way, the tests should show the actual difference between the two implementations.

Since the main concern here is the maximum memory usage of the dispatch script, a simple bash script was written which takes a PID of a running process and outputs the maximum memory usage as reported by ps once the process is terminated. The script used is shown below.

#!/usr/bin/env bash

# Takes the PID of a running process and prints the maximum memory
# usage (VSZ) reported by ps before the process terminated.
pid=$1

while ps ${pid} >/dev/null; do
    ps -o vsz= ${pid}
    sleep 0.1
done | sort -n | tail -n1

Old PTS Measurements

The maximum memory usage of the old PTS was:
  1. 37.8 MB for the 1 KB message
  2. 38.2 MB for the 1 MB message

New PTS Measurements


The maximum memory usage of the new PTS using sqlite3 was:
  1. 91.77 MB for the 1 KB message
  2. 92.59 MB for the 1 MB message


The maximum memory usage of the new PTS using PostgreSQL was:
  1. 148 MB for the 1 KB message
  2. 160 MB for the 1 MB message

Comparison and Discussion

When comparing the memory used by the old implementation and the new PTS running sqlite3, the difference does not seem too large and is to be expected since, as mentioned, the whole Django framework is loaded when executing a management command.

However, the huge difference between the two configurations of the new PTS, the one using sqlite3 and the one using PostgreSQL, is very surprising indeed.

Another interesting measurement is running a bare Django management command, one containing only a sleep statement (to allow enough time to measure its memory usage), against both PostgreSQL and sqlite3. In both cases, the memory usage was only about 1 MB less than the respective maximum memory usage when processing the 1 KB message.

All this considered, the logical conclusion seems to be that the psycopg2 package and/or Django's postgresql_psycopg2 database engine use a lot more memory than the corresponding sqlite3 alternatives.

Could anyone shed some more light on what causes this stark difference between using sqlite3 and Postgres in Django? Is there anything that could be done to mitigate it?

by (Marko Lalic) at August 24, 2013 20:47

Marko Lalic: PTS Rewrite Project Status

The rewrite has already achieved some of the major goals set at the beginning, and has made some changes and improvements along the way. The major points of the new system implemented so far are presented here.

    Tasks framework

    The current PTS implementation regenerates the HTML pages for all packages a few times per day. A goal of the rewrite was to make smaller incremental updates possible so that the information displayed for each package can be much more dynamic.

    In order to allow incremental updates, it is necessary to consider the situation where updating some information should trigger an update of other information.

    This is where the tasks framework comes in. It allows developers to define a task with a list of "events" that it produces. An event can be triggered by any change to shared information or anything else the task would like to signal. Each event is identified by a simple unique string name. Events can also carry arbitrary (JSON-serializable) objects. Other than the events it produces, a task defines a list of dependencies: the events it depends on.

    When a single task is to be executed, the tasks framework uses this information to build a dependency graph (a DAG) and executes the tasks in topological order, making sure each task runs only after all tasks which could raise an event it depends on have finished. The framework automatically makes the raised events available to the task so that it can access the extra data. Tasks for which no event was raised are skipped.

    Such an execution of a task along with all its dependencies is called a Job. A Job's state is persisted to the database after each task completes, which allows a Job to be resumed if an unexpected error terminates it.

    So far, the tasks framework has already proven useful. For example, when a new package version is detected in a repository, tasks extract some package information into a denormalized model suited for displaying the data directly on the package page. Other examples are tasks which update standards-version warnings (after a new package version is detected) and the news generation task based on various detected changes to the packages (new source version, version removed from a repository, version migrated to another repository).
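
    As a rough sketch of the scheduling logic described above (the names Task and run_job are illustrative, not the actual PTS API), building the event-based DAG and running tasks in topological order might look like this:

```python
from collections import defaultdict, deque

class Task:
    """Illustrative task base class: declares produced and depended-on events."""
    produces = []      # event names this task can raise
    depends_on = []    # event names this task reacts to

    def execute(self, raised):
        """Run the task; `raised` maps dependency events to their payloads.
        Returns a list of (event_name, payload) tuples raised by this run."""
        raise NotImplementedError

def run_job(tasks):
    """Execute tasks in topological order: each task runs only after every
    task that could raise one of its dependency events has finished.
    A task that has dependencies, none of which were raised, is skipped."""
    producers = defaultdict(list)          # event name -> producing tasks
    for t in tasks:
        for ev in t.produces:
            producers[ev].append(t)
    edges = defaultdict(set)               # producer -> dependent tasks
    indegree = {t: 0 for t in tasks}
    for t in tasks:
        for ev in t.depends_on:
            for p in producers.get(ev, []):
                if t not in edges[p]:
                    edges[p].add(t)
                    indegree[t] += 1
    queue = deque(t for t in tasks if indegree[t] == 0)
    raised, executed = {}, []
    while queue:
        t = queue.popleft()
        relevant = {ev: raised[ev] for ev in t.depends_on if ev in raised}
        if not t.depends_on or relevant:   # otherwise skip: nothing triggered it
            for ev, payload in t.execute(relevant):
                raised[ev] = payload
            executed.append(t)
        for dependent in edges[t]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                queue.append(dependent)
    return executed
```

    A repository-scanning task with no dependencies would always run first; a news-generation task depending on its events runs after it, while tasks whose events were never raised are skipped entirely.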

    Vendor-specific customization

    Another goal of the rewrite was to make the PTS less Debian-specific, so that derivatives can easily deploy their own versions of the PTS. The idea is to have a number of core PTS services which alternative deployments can customize and augment.

    The simplest way vendors can customize the PTS is by changing some of the local settings values. An example would be changing PTS_VENDOR_NAME, so that this name is displayed throughout the system.

    This is fairly uninteresting, since the information is always static. More interesting is when the vendor should provide different information based on some context, e.g. a package name or a received message. This requirement led to the concept of the vendor app with a rules module.

    Various core parts of the PTS specify hook functions which different vendors can implement to provide customized behavior of the core component. Some examples are:
    • providing URLs to a developer information site (e.g. in Debian's case) based on a developer's email
    • providing a URL to a bug tracker for a particular package (based on the package type and bug category type)
    • providing additional rules to tag received package mails with keywords
    • providing additional headers to be injected in the forwarded package mails
    • providing a list of bugs which should be displayed in the bugs panel
    A settings value is set giving the full dotted path to the Python module containing the implementation of these hook functions.
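
    As an illustrative sketch of how such hook dispatch could work (the module name vendor_rules and the hook names here are hypothetical, not the actual PTS settings or API), the core could resolve and call vendor hooks like this:

```python
import importlib

# Hypothetical settings value: the dotted path to the vendor rules module.
PTS_VENDOR_RULES_MODULE = "vendor_rules"

def call_vendor_hook(name, *args, **kwargs):
    """Look up a hook function by name in the configured vendor rules
    module and call it; return None when the vendor does not provide
    the module or does not implement the hook."""
    try:
        rules = importlib.import_module(PTS_VENDOR_RULES_MODULE)
    except ImportError:
        return None
    hook = getattr(rules, name, None)
    if hook is None or not callable(hook):
        return None
    return hook(*args, **kwargs)
```

    The core then treats a None return as "no vendor customization" and falls back to its default behaviour, so hooks stay optional.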

    Vendor-specific features

    A step beyond customizing core features is for vendors to include their own specific functionality. The already described tasks framework allows for implementing custom data update/generation processes (optionally depending on core events). Furthermore, since a vendor-specific Django app is expected, it can contain any vendor-specific models which are managed just as the core models are (syncdb, migrations, etc. all work as expected). The last piece of the puzzle is displaying this information on the package page.

    Currently, all package information is provided in various boxes (general, bugs, links, etc.) called panels. The rewrite has made it possible to implement additional panels in a really elegant way.

    A new panel is implemented as a class with properties giving:
    • the panel's title
    • position in the page (left, center, right)
    • panel importance (higher importance panels are placed above lower importance ones in the same column)
    To generate the HTML for a package, two alternatives are allowed. The first is to provide a property which gives (explicitly marked safe) HTML output, which is then included verbatim in the correct position. The second alternative, which should cover most needs and is the preferred one, is to provide a property giving a template name and a property giving the additional context variables necessary to render that template. Using the power of Python properties, the context can be generated dynamically for each package by the panel. The PTS then makes sure to pass the correct panel's context when rendering its template and to include the rendered result in the final package page.
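
    A minimal sketch of what such a panel class and column layout could look like (BasePanel and panels_by_column are illustrative names, not the real panels API):

```python
class BasePanel:
    """Illustrative panel base class. Subclasses give a title, a position
    and an importance; HTML comes either from a safe `html` property or
    from `template_name` plus per-package `context` variables."""
    title = ""
    position = "center"   # "left", "center" or "right"
    importance = 0        # higher-importance panels sit higher in a column
    template_name = None

    def __init__(self, package):
        self.package = package

    @property
    def context(self):
        """Extra template variables, computed dynamically per package."""
        return {}

def panels_by_column(panel_classes, package):
    """Instantiate every registered panel for one package and group the
    instances by column, most important first, mirroring the layout
    rules described above."""
    columns = {"left": [], "center": [], "right": []}
    for cls in panel_classes:
        columns[cls.position].append(cls(package))
    for column in columns.values():
        column.sort(key=lambda p: p.importance, reverse=True)
    return columns
```

    A page renderer would then walk each column in order, rendering each panel's template with its context (or including its safe HTML verbatim).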

    Various convenience classes are implemented in the panel API, but those are better left to be read about in the panels API documentation.

    Changes from current PTS

    Some existing features of PTS were not simply rewritten to fit into a Django project, but also improved upon.

    One of the first such changes was reducing the number of confirmation mails sent by the email control bot to only one per received control mail (as per a wishlist PTS bug).

    Work started by Markus Wanner (see this bug) was expanded on and integrated with the new News model, so that any inline GPG signatures found in the content of a news item are automatically extracted and the signer's name is displayed on the package page.

    The existing TODO and Problems panels have been merged into a single Action Needed panel. Besides merging their presentation on the package page, all package issues are now represented by an ActionItem Django model which provides additional metadata (item type, date created/updated, severity level) that could be used in the future to provide additional functionality (such as listing all packages with a certain issue type).

    The descriptions of the action items were also shortened from their current rendition, with the detailed description moved to a dedicated item page. If JavaScript is enabled in the user's browser, the detailed description can also be displayed in a popup.


    The graphical interface has seen some changes, though I hesitate to call them improvements. In my opinion, some of it is definitely better: more polished and more "modern" (for what it's worth).

    Still, this could benefit from more people getting involved in testing and providing feedback, so that the design is polished to suit the largest number of users. For example, the padding between the panels has been increased, which to me definitely looks better; however, it is possible that this introduces too much whitespace and reduces the information density below what some may expect.


    Overall, I am quite pleased with where the project currently stands. Most of the information provided by the current PTS has been reimplemented, with only a few smaller pieces remaining. Since we are entering the final month of GSoC, development will now switch focus from porting old features to the new PTS to adding new ones. Namely, user registration and account management are in the pipeline for the coming week, with features such as managing package subscriptions via the online interface (for multiple emails associated with the account) and integration for Debian Developers.

    This post aimed to present the most notable changes and improvements made so far to the PTS. In reality, there have been quite a lot of them and not everything is covered here. For more information, check out the PTS documentation, which is updated weekly (corresponding to the weekly deployments), or the source code repository.

    The latest iteration of the PTS can always be found at the test deployment. The mail control interface bot responds to mails sent to its control address.

    Stay tuned for updates, and be sure to check back at least weekly to see new deployments in action!

    by (Marko Lalic) at August 24, 2013 20:45

    Planet Sysadmin

    Racker Hacker: Get a rock-solid Linux touchpad configuration for the Lenovo X1 Carbon

    The X1 Carbon's touchpad has been my nemesis in Linux for quite some time because of its high sensitivity. I'd often find the cursor jumping over a few pixels each time I tried to tap to click. This was aggravating at first, but then I found myself closing windows when I wanted them minimized, or confirming something in a dialog that I didn't want to confirm.

    Last December, I wrote a post about some fixes. However, as I force myself to migrate to Linux again (no turning back this time), my fixes didn't work well enough. I stumbled upon a post about the X1's touchpad and how an Ubuntu user found a configuration file that seemed to work well.

    Just as a timesaver, I’ve reposted his configuration here:

    # softlink this file into:
    # /usr/share/X11/xorg.conf.d
    # and prevent the settings app from overwriting our settings:
    # gsettings set org.gnome.settings-daemon.plugins.mouse active false
    Section "InputClass"
        Identifier "nathan touchpad catchall"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        Driver "synaptics"
        # three fingers for the middle button
        Option "TapButton3" "2"
        # drag lock
        Option "LockedDrags" "1"
        # accurate tap-to-click!
        Option "FingerLow" "50"
        Option "FingerHigh" "55"
        # prevents too many intentional clicks
        Option "PalmDetect" "0"
        # "natural" vertical and horizontal scrolling
        Option "VertTwoFingerScroll" "1"
        Option "VertScrollDelta" "-75"
        Option "HorizTwoFingerScroll" "1"
        Option "HorizScrollDelta" "-75"
        Option "MinSpeed" "1"
        Option "MaxSpeed" "1"
        Option "AccelerationProfile" "2"
        Option "ConstantDeceleration" "4"
    EndSection

    Many many thanks to Nathan Hamblen for assembling this configuration and offering it out to the masses on his blog.

    Get a rock-solid Linux touchpad configuration for the Lenovo X1 Carbon is a post from: Major Hayden's blog.

    Thanks for following the blog via the RSS feed. Please don't copy my posts or quote portions of them without attribution.

    August 24, 2013 20:28

    Daring Fireball

    From the DF Archive: Memoranda

    A 2008 piece comparing and contrasting two company-wide memos, one from Steve Jobs, one from Steve Ballmer:

    Apple employees may not always — or even often — agree with Jobs, but they do believe him. Apple tends to do and achieve exactly what Jobs says they will. (His declaration in January 2007 that Apple would be selling 10 million iPhones per year by 2008, for example.)

    Ballmer’s promises, in contrast, defy belief, at least regarding where Microsoft stands against Apple in terms of “end-to-end experience” and against Google in terms of search and online advertising. He’s either ignorant or lying — neither of which is inspiring to the rank-and-file engineers.

    by John Gruber at August 24, 2013 19:43

    Planet UKnot

    Experiments with Hide Glue

    I've been sticking random bits of scrap wood together tonight with hot hide glue (AKA animal glue or pearl glue) because I intend to use it a lot in my concertina restoration and wanted to get a feel for what it's like to work with. This is the really good stuff, made from genuine boiled and distilled bits of dead animal, just like carpenters and instrument-makers used for thousands of years before the invention of modern petrochemical-based synthetic glues.

    It's as strong as or stronger than modern PVA wood glue and has several important advantages. The two big ones are: 1. It's possible to dismantle a joint without damaging the parts by steaming it until it softens. 2. You can often assemble joints without any clamps because it goes quite tacky in less than a minute as it cools and gels, then gradually pulls the joint tighter over the next few hours as it dries out, shrinks and goes rock-hard. It's also incredibly cheap - I bought enough of it on eBay to make up 3 litres of glue (that's a lot of glue) for less than £10. I'd expect to pay at least three times that for a good quality PVA glue (more if you buy it in small quantities).

    Disadvantages are that it's more faff to work with (you need to mix it with the right amount of water in advance and then heat it carefully to liquify the gel), once mixed it has a limited shelf-life (though it can be frozen), it isn't waterproof (no good for garden furniture then), you don't have much time to assemble a large/complicated joint before it goes tacky (heating the wood a bit first helps), and it smells quite "interesting" when in its hot liquid form.

    I also wanted to have a try at hammer veneering because I'm thinking of using this technique to make the new ends for the concertina. This involves using hot hide glue and a heavy smooth-faced metal squeegee to stick a very thin sliver of wood (often something exotic and visually attractive) onto the visible surface of another (usually cheaper and stronger) piece of wood. I don't have a purpose-made veneer hammer, but it turns out my largest blacksmith's cross-pein hammer makes an excellent substitute. I don't have any veneer yet either, so I glued a piece of scrap card of about the right thickness onto some scrap plywood instead. It was very easy to do (following instructions I've read on the web) and, as far as I can tell, it worked perfectly the first time.

    The next thing to try is inlays, which is where you cut out an area of the veneer and replace it with something visually different of exactly the same shape and thickness (e.g. a veneer cut from a different coloured wood, a piece of mother-of-pearl, or a sheet of brass/silver/gold).

    by Alex Holden at August 24, 2013 19:25

    Daring Fireball


    My thanks to Shutterstock for sponsoring this week’s DF RSS feed. Shutterstock has over 27 million stock photos, illustrations, vectors, and videos. It’s incredible, and growing by 10,000 images every day. Shutterstock’s entire library of royalty-free images is available by subscription and a la carte.

    Visitors can browse the entire library for free, and Shutterstock has a great app for iPhone and the iPad.

    by John Gruber at August 24, 2013 19:22

    Boing Boing

    Why it matters that you can't own an electronic copy of the Oxford English Dictionary

    In my latest Guardian column, I talk about the digital versions of the Oxford English Dictionary and the Historical Thesaurus of the Oxford English Dictionary, the two most important lexicographic references to the English language. As a writer, my print copies of the OED and HTOED are to me what an anvil is to a blacksmith; but I was disturbed to learn that the digital editions of these books are only available as monthly rentals, services that come with expansive data-collecting policies and which cannot be owned. It's especially ironic that these books are published by Oxford University, home of the Bodleian, a deposit archive and library founded in the 14th century, a symbol of the importance of enduring ownership of books.

    My discussions with OUP's execs convinced me that this wasn't the result of venality or greed, but rather the unfortunate consequence of a bunch of individually reasonable decisions that added up to something rather worrying. I hope that OUP and Oxford will continue to evolve its products in a way that honours the centuries-old traditions that Oxford embodies.

    OUP – which has been selling dictionaries and thesauri since the 19th century – will not sell you a digital OED or HTOED. Not for any price.

    Instead, these books are rented by the month, accessed via the internet by logged-in users. If you stop paying, your access to these books is terminated.

    I mentioned this to some librarians at the American Library Association conference in Chicago this spring and they all said, effectively: "Welcome to the club. This is what we have to put up with all the time."

    Oxford English Dictionary – the future

    by Cory Doctorow at August 24, 2013 19:04

    Planet UKnot

    Changes to QEMU’s timer system

    This article explains a little about how the timer system works in QEMU, and in particular the changes associated with the new timer system I’ve contributed to QEMU.

    What do timers do?

    Timers (more precisely QEMUTimers) provide a means of calling a given routine (a callback) after a time interval has elapsed, passing an opaque pointer to the routine.

    The measurement of time can be against one of three clocks:

    • The realtime clock, which runs even when the VM is stopped, with a resolution of 1000 Hz;
    • The virtual clock, which only runs when the VM is running, at a high resolution; and
    • The host clock, which (like the realtime clock) runs even when the VM is stopped, but is sensitive to time changes to the system clock (e.g. NTP).

    Timers are single shot, i.e. when they have elapsed, they call their callback and will do nothing further unless they are re-armed. The callback can rearm the timer to provide a repeating timer.

    What’s changed in the implementation?

    Prior to the timer API change, timers only existed in the main QEMU thread, and were only run from QEMU’s main loop. Expiry of timers was through a system-dependent system of secondary timers called alarm timers. These would call QemuNotify which caused a write to a notifier FD, and caused any poll() in progress to terminate. Throughout QEMU, poll() (or equivalent) would use infinite timeouts (rather than the timeout associated with any timer), and rely on the alarm timers (under POSIX running through signals) to terminate these system calls.

    This approach caused a number of problems:

    • Timers were only handled within the main loop. QEMU’s AioContext has an internal loop that runs during block operations and may run for a long time. Timers were not processed during this loop, which made it hard to write block device timers guaranteed to run while the block layer was busy;
    • Timers were fundamentally single threaded, and the existing system was incompatible with plans for additional threading and AioContexts;
    • The system dependent code that implemented alarm timers was not pretty; and
    • The API to call them was messy.

    To fix this I contributed a 31-commit patch series that:

    • Refactored nearly all of the timer code;
    • Switched to rely on timeouts from ppoll() rather than signals and alarm timers (thus allowing alarm timers to be deleted);
    • Separated the clock sources (QEMUClock) from the lists of active timers (QEMUTimerList);
    • Introduced timer list groups (QEMUTimerListGroup) to allow independent threads or other users of timers to have their own set of timers (each attached to any clock); and
    • Tidied up the API.

    You can find the patch set here.

    How does the new implementation work?

    The diagram below shows the relationship between the new objects.

    QEMU Timer Diagram

    There is exactly one QEMUClock for each clock type, representing the clock source with that particular type. Currently there are three clock sources, as outlined above. In the previous implementation, each clock would have a list of timers running against it. However, now the list of running timers is kept in a QEMUTimerList. The clock needs to keep track of all the QEMUTimerLists attached to it, and for that purpose maintains a list of them (purple line on the diagram).

    Each user of timers maintains a QEMUTimerListGroup. This is a struct currently consisting solely of an array of QEMUTimerList pointers, one for each clock type. Hence each QEMUTimerListGroup is really a collection of three (currently) QEMUTimerLists (the red line in the diagram). There are two current QEMUTimerListGroup users. A global static, main_loop_tlg, represents the timer lists run from the main loop (i.e. all current timer users). I also added a QEMUTimerListGroup to each AioContext (there currently being just one), so the block layer can use timers that will run without relying on the aio system returning to the main loop; this also permits future threading. Any other subsystem could have its own QEMUTimerListGroup; it merely needs to call timerlistgroup_run_timers at an appropriate time to run any pending timers.

    Each QEMUTimerList maintains a pointer to the clock to which it is connected (orange line). It is (as set out above) on a list of QEMUTimerLists for that QEMUClock (the list element). It contains a pointer to the first active timer. The active timers are not maintained through the normal QEMU linked list object, but are instead a simple singly linked list of timers, arranged in order of expiry, with the timer expiring soonest at the start of the list, linked to by the QEMUTimerList object (blue line).

    Each QEMUTimer object contains a link back to its timer list (green line) for manipulation of the lists.

    Under the hood, the call to g_poll within the AioContext’s poll loop has been replaced with ppoll (the nanosecond equivalent of poll). Unfortunately, glib only provides millisecond resolution, meaning that the main loop’s timing will only be at millisecond accuracy whilst this continues to use glib’s poll routines. Plans are afoot to remedy this.

    What’s changed in the API?

    The API had become somewhat messy.

    Firstly, I’ve replaced the previous timer API (with function names like qemu_mod_timer) with a new API of the form timer_<action> where <action> is the action required, for instance timer_new, timer_mod, and so forth. The timer_new function and friends take an enum value being the clock type, rather than a pointer to a clock object. For instance this:

    timer = qemu_new_timer_ns(vm_clock, cb, opaque);

    becomes this:

    timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, cb, opaque);

    Timers not using the main loop QEMUTimerListGroup can be created using timer_new_tl. There’s also timer_init available for use without malloc which takes a timer list. AioContext provides helper functions aio_timer_new and aio_timer_init.

    Secondly, I’ve similarly rationalised the clock API. For instance this:

    now = qemu_get_clock_ns(vm_clock);

    becomes this:

    now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);

    Thirdly, I’ve added lots of documentation (see include/qemu/timer.h).

    If you have out-of-tree code that needs converting to the new timer API, simply run

    scripts/switch-timer-api [filename] ...

    to convert them.

    by Alex Bligh at August 24, 2013 17:15

    Boing Boing

    LEGO version of teaser trailer for 'The Hobbit: The Desolation of Smaug'

    BrotherhoodWorkshop created a very ambitious LEGO version of Peter Jackson's teaser trailer for 'The Hobbit: The Desolation of Smaug': "2 months in the making and countless man hours." Not to mention hobbit-hours. Spotted at Laughing Squid, where you can view the original trailer. The motion picture is scheduled for release on December 13, 2013. I wish there were going to be a LEGO version of it, too.

    Subscribe to BrotherhoodWorkshop's YouTube channel for more awesome stop-motion and fantasy videos. According to their "about" page, their "hope is to one day make feature films," so maybe it's possible.

    Below, some stills, courtesy of the BrotherhoodWorkshop Facebook page. These guys are insane, and insanely talented.


    by Xeni Jardin at August 24, 2013 16:39

    My Daguerrotype Boyfriend: a tumblog of greatness

    From "My Daguerrotype Boyfriend," Captain Woodford M. Taylor, Company B, 26th Kentucky Volunteer Infantry, U.S.A, 1860 (via Kentucky Digital Library)

    A tumblog of photographs of hot guys from days of yore: My Daguerrotype Boyfriend, by Michelle Legro. I'd have hit that, 140 years ago. [HT: Alexis Madrigal]


    by Xeni Jardin at August 24, 2013 16:33

    Planet Debian

    Enrico Zini: On codes of conduct

    On codes of conduct

    A criticism of the status quo, with a simple proposal: video and lyrics.

    With compassion unyielding to grudge, mother, I learnt how to love.

    August 24, 2013 16:15

    Boing Boing

    Single-serve coffee trend creating more waste

    Noticed how big coffee brands from Starbucks to Peets are promoting single-serve coffee nowadays? Individual portions of pre-ground bean in "plastic capsules or packets that you put in a special coffeemaker to brew one cup at a time... the polar opposite of the pour-over artisanal coffee." East Bay Express has a feature about all the trash this craze generates. I'd also like to point out, as a coffee snob, that it generates shitty tasting coffee, too.

    by Xeni Jardin at August 24, 2013 16:05

    This Day in Blogging History: Wombat & roo BFFs; Klingon knife scares the Daily Mail; Where's the Pentagon's money go?

    One year ago today
    Wombat and kangaroo love: An orphaned wombat and kangaroo are unlikely BFFs at the Wildabout Wildlife Rescue Centre in Kilmore, Victoria, Australia.

    Five years ago today
    Klingon knife scares the crap out of dumb British scandal-sheet: The Daily Mail has a hilariously breathless account of a giant stainless steel Klingon fighting-knife received by police during a knife-amnesty; to hear them tell of it hooded thugs are roaming the streets with Klingon duelling swords looking for little old ladies to terrorise.

    Ten years ago today
    How does the Pentagon spend its yearly $400 million?: The Department of Defense Office of the Inspector General has reported that DOD has not and will not account for $1.1 trillion of "undocumentable adjustments."


    by Cory Doctorow at August 24, 2013 16:02

    "Excuse me, could you please stop making that infernal racket?"

    REUTERS/David Gray

    Steve Westnedge plays his saxophone for a Leopard Seal known as "Casey" as part of a study on the animal's reactions to different sounds at Sydney's Taronga Zoo August 19, 2013. Westnedge, who is also the zoo's elephant keeper, plays his saxophone next to the underwater viewing window to assist the study by researchers from the Australian Marine Mammal Research Centre. The seal occasionally responds with his own sounds, depending on the time of year, which are normally used when wanting to attract mates or establish territories.


    by Rob Beschizza at August 24, 2013 15:40

    LOVEINT: NSA spooks illegally stalking their romantic interests

    LOVEINT is the NSA practice of stalking people you are romantically interested in, using the enormous, illegal spy apparatus that captures huge amounts of Americans' (and foreigners') Internet traffic. It is so widespread that it has its own slangy spook-name. The NSA says it fires the people it catches doing it (though apparently it doesn't prosecute them for their crimes), but given that the NSA missed Snowden's ambitious leak-gathering activity, it seems likely that they've also missed some creepy stalkers in their midst.

    Unless, of course, you believe that being a creepy stalker is incompatible with wanting to be a lawless spy.

    Sen. Dianne Feinstein (D., Calif.), who chairs the Senate intelligence committee, said the NSA told her committee about a set of “isolated cases” that have occurred about once a year for the last 10 years, where NSA personnel have violated NSA procedures.

    She said “in most instances” the violations didn’t involve an American’s personal information. She added that she’s seen no evidence that any of the violations involved the use of NSA’s domestic surveillance infrastructure, which is governed by a law known as the Foreign Intelligence Surveillance Act.

    “Clearly, any case of noncompliance is unacceptable, but these small numbers of cases do not change my view that NSA takes significant care to prevent any abuses and that there is a substantial oversight system in place,” she said. “When errors are identified, they are reported and corrected.”

    NSA Officers Sometimes Spy on Love Interests [Siobhan Gorman/WSJ]

    (via /.)

    (Image: One Hour Photo)


    by Cory Doctorow at August 24, 2013 15:36

    Is this bridge pretty?

    Photo: Annette Sandburg

    Last weekend, Pittsburgh's Andy Warhol Bridge was yarn-bombed by volunteers. Pittsburgh Magazine offers 30 photos of the feat, which involved "more than 3,000 feet of colorful, hand-knit blankets in honor of the late pop artist’s 85th birthday."


    by Rob Beschizza at August 24, 2013 15:34

    Is this bridge ugly?

    A new bridge in Dresden, Germany, was deemed so hideous that UNESCO delisted the entire city from its World Heritage index. The removal, protesting the construction's marring of historic city views, makes Dresden the first city to exit the United Nations' tally of the world's beautiful and important places.


    by Rob Beschizza at August 24, 2013 15:27

    Strange Beaver

    Destino 2003 – Salvador Dali & Walt Disney’s Collaboration Video

    The film tells the story of Chronos, the personification of time, and his inability to fulfil his love for a mortal woman. Its scenes blend a series of Dalí's surreal paintings with dance and metamorphosis. Production began in 1945, 58 years before the film's completion in 2003, as a collaboration between Walt Disney and the Spanish surrealist painter Salvador Dalí. Dalí worked on it with Disney artist John Hench for eight months between 1945 and 1946; for some time, the project remained a secret. Dalí's task was to prepare a six-minute sequence combining animation with live dancers and special effects, in the same format as "Fantasia." In it, the characters struggle against time, embodied by a giant sundial that emerges from the great stone face of Jupiter and determines the fate of all human love stories. In the process, Dalí and Hench were creating a new animation technique, the cinematic equivalent of Dalí's "paranoiac-critical" method: inspired by Freud's work on the subconscious, it relied on hidden and double images.
    Dalí said: “Entertainment highlights the art; its possibilities are endless.” He described the plot of the film as “a magical display of the problem of life in the labyrinth of time.”
    Walt Disney said it was “a simple story about a young girl in search of true love.”

    by Admin at August 24, 2013 15:19

    Boing Boing

    Street Fighter's Chun Li ruining everyone's day

    The truth behind many of YouTube's most famous embarrassing pratfalls. [via Kotaku]


    by Rob Beschizza at August 24, 2013 15:08

    There, I Fixed It

    Data Center Knowledge

    After Ballmer: Pundits Ponder Next Steps for Microsoft

    Yesterday's announcement that Ballmer will retire in the next 12 months prompted reactions from far and wide. Here's a look at some of the notable interviews, analysis and commentary from around the web.

    by Rich Miller at August 24, 2013 14:30


    Planet Ubuntu

    Kubuntu: LTS Update 12.04.3 Released

    Our current LTS release has had an update, 12.04.3. It adds all the current bugfixes and security updates to keep your systems fresh. Download now.

    August 24, 2013 12:41

    Data Center Knowledge

    Top 5 Data Center Stories, Week of Aug. 24

    The Week in Review: QTS confirms IPO and major expansion plans, Twitter weathers quirky anime Tweetstorm, Fog Computing extends the cloud to the edge, CyrusOne keeps growing in Houston, Colovore enters Silicon Valley market.

    by Rich Miller at August 24, 2013 12:00

    Planet PostgreSQL

    Michael Paquier: Postgres module highlight: customize passwordcheck to secure your database

    passwordcheck is a contrib module shipped with PostgreSQL core that uses a server-side hook to check the password whenever a role is created or modified with CREATE/ALTER ROLE/USER. The hook, check_password_hook, lives in src/backend/commands/user.c if you want to have a look. This module basically checks the password format and returns [...]
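
    For context, enabling the module is just a matter of preloading it (a sketch assuming a standard contrib installation; the exact checks and error messages vary by version):

```ini
# postgresql.conf — load passwordcheck so its hook runs on
# CREATE/ALTER ROLE ... PASSWORD (requires a server restart)
shared_preload_libraries = 'passwordcheck'
```

    After a restart, the stock checks reject a plaintext password that is shorter than eight characters, contains the role name, or lacks a mix of letters and non-letters — the default behaviour that the post goes on to customize.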

    August 24, 2013 11:31

    Boing Boing

    My picks on Bullseye: Blocksworld and Adventure Time Encyclopaedia

    On the latest episode of Bullseye with Jesse Thorn I recommended the iPad app Blocksworld and The Adventure Time Encyclopaedia.

    Mark Frauenfelder is the founder of Boing Boing, which bills itself as a "directory of wonderful things." He joins us to share some of his recent finds.

    This time, it's The Adventure Time Encyclopedia and the iPad game Blocksworld.

    The Cartoon Network's show Adventure Time is ostensibly for children, but eagerly devoured by people of all ages. It follows the psychedelic adventures of a boy named Finn and his dog Jake. The new Adventure Time Encyclopaedia, "translated" by comedy writer Martin Olson, features new original artwork and everything you ever wanted to know about the post-apocalyptic land of Ooo. Mark also suggests downloading the Blocksworld app for iPad, a virtual Lego-like world with huge creative possibilities.


    by Mark Frauenfelder at August 24, 2013 11:27

    Planet PostgreSQL

    Valentine Gogichashvili: Real-time console based monitoring of PostgreSQL databases (pg_view)

    In many cases, it is important to be able to keep your hand on the pulse of your database in real time: for example, when you are running a big migration task that can introduce unexpected locks, or when you are trying to understand how a long-running query is affecting your I/O subsystem.

    For a long time I used a very simple bash alias, injected from my .bashrc, that combined calls to system utilities like watch, iostat, uptime and df, some additional statistics from /proc/meminfo, and a psql invocation that extracted information about currently running queries and whether those queries were waiting on a lock. But this approach had several disadvantages. In many cases I was interested in the disk read/write figures for query processes or for PostgreSQL system processes such as the WAL and archive writers. I also wanted a really easy way to notice queries that are waiting for locks, ideally highlighted in colour.
    Several weeks ago we finally open-sourced a new tool that makes our lives much easier and combines all the features I had been dreaming of for a long time. Here it is: pg_view.
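
    That old alias can be sketched roughly like this (my reconstruction, not the author's actual script; the query assumes pre-9.2 pg_stat_activity column names — procpid, waiting, current_query — where newer servers use pid, state and query instead):

```shell
#!/bin/sh
# One-shot snapshot of system load plus running PostgreSQL queries,
# the kind of thing the .bashrc alias above stitched together.

PG_QUERY="SELECT procpid, waiting, current_query
          FROM pg_stat_activity
          WHERE current_query <> '<IDLE>'
          ORDER BY waiting DESC;"

snapshot() {
    uptime                                        # load averages
    df -h /                                       # root filesystem usage
    grep -E '^(MemFree|Cached):' /proc/meminfo 2>/dev/null || true
    # Query PostgreSQL only when a client is actually on the PATH.
    if command -v psql >/dev/null 2>&1; then
        psql -X -c "$PG_QUERY"
    fi
}

snapshot
# For a live view, wrap it: watch -n 2 sh pg_snapshot.sh
```

    pg_view replaces exactly this kind of hand-rolled glue with one curses view, including per-process disk I/O and colour-highlighted lock waits.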

    I already have some more feature requests, and I hope that Alexey will find time to add them to the tool in the near future. So if somebody wants to contribute or has more ideas, please comment and open feature requests on the GitHub page :)

    August 24, 2013 11:11


    Planet PostgreSQL

    Raghavendra Rao: How to change all objects ownership in a particular schema in PostgreSQL ?

    A few suggestions here (thanks!) inspired me to compose a bash script for changing the ownership of all objects (TABLES / SEQUENCES / VIEWS / FUNCTIONS / AGGREGATES / TYPES) in a particular schema in one go. There is no special code in the script; I basically picked up the suggested technique and simplified the implementation. Actually, the REASSIGN OWNED BY command does most of this work smoothly; however, it changes the ownership of database-wide objects regardless of schema. There are two situations where you cannot use REASSIGN OWNED BY:

    1. If a user by mistake created all his objects as the superuser (postgres) and later intends to change them to another user, REASSIGN OWNED BY will not work; it simply errors out:
    postgres=# reassign owned by postgres to user1;
    ERROR: cannot reassign ownership of objects owned by role postgres because they are required by the database system
    2. If the user wishes to change the ownership of objects in just one schema.

    In either case — changing objects from the "postgres" user to another user, or changing only one schema's objects — we need to loop through each object, collecting object details from the pg_catalog tables and information_schema and issuing ALTER TABLE / FUNCTION / AGGREGATE / TYPE etc.

    I liked the technique of tweaking the pg_dump output with OS commands (sed/egrep), because pg_dump by nature writes an ALTER .. OWNER TO statement for every object (TABLES / SEQUENCES / VIEWS / FUNCTIONS / AGGREGATES / TYPES) in its output. Grepping those statements from pg_dump's stdout, substituting the new USER/SCHEMA name with sed, and then passing the statements back to the psql client fixes things even when an object is owned by the postgres user. I used the same approach in the script, letting the user pass a NEW USER NAME and SCHEMA NAME to substitute into the ALTER ... OWNER TO statements.

    Script usage and output:
    sh  -n new_rolename -S schema_name

    -bash-4.1$ sh -n user1 -S public

    Tables/Sequences/Views : 16
    Functions : 43
    Aggregates : 1
    Type : 2

    You can download the script from here; there's also a README to help you with the usage.
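
    The pipeline at the heart of this approach can be sketched in a few lines of shell (the table names and roles below are illustrative, and the sample input simulates pg_dump -s output rather than querying a live database):

```shell
#!/bin/sh
# Core transformation: keep only pg_dump's "ALTER ... OWNER TO ..."
# statements and rewrite the owner. In real use the input would come
# from: pg_dump -s -n "$SCHEMA" "$DB"

NEW_OWNER="user1"

# Simulated pg_dump -s output, for demonstration only.
sample_dump() {
cat <<'EOF'
CREATE TABLE public.accounts (id integer);
ALTER TABLE public.accounts OWNER TO postgres;
CREATE SEQUENCE public.accounts_id_seq;
ALTER SEQUENCE public.accounts_id_seq OWNER TO postgres;
EOF
}

# Extract the ownership statements and swap in the new role; the
# result would normally be piped straight into psql.
sample_dump |
  grep -E '^ALTER .* OWNER TO ' |
  sed "s/OWNER TO .*;/OWNER TO ${NEW_OWNER};/"
```

    This sidesteps the REASSIGN OWNED BY restriction, because the generated ALTER statements work even for objects owned by the postgres role.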


    August 24, 2013 10:04

    Boing Boing

    How to make stilts, from the Dad's Book of Awesome Projects

    Mike Adamick is the author of Dad's Book of Awesome Projects: From Stilts and Super-Hero Capes to Tinker Boxes and Seesaws. The projects include making a balance board, eggshell cupcakes, comic book shoes, a vintage modern silhouette, crayon shapes, a duct tape crayon wallet, friendship bracelets, homemade Play-Doh, a garden trolley, homemade ice cream, goo slime, superhero capes, popsicle stick bridges, a wooden sword, and more.

    Here's an excerpt that shows how to make stilts (PDF)


    by Mark Frauenfelder at August 24, 2013 09:03

    Planet Ubuntu

    Mohamad Faizul Zulkifli: Thanks Ubuntu Project!

    I would like to thank the Ubuntu project and the Ubuntu community, especially the people in ubuntu-hams, for their tremendous support and for being friendly to me.

    Ubuntu changed how the world thinks about free operating systems. Ubuntu gave knowledge to those who are hungry and eager to learn more about operating systems.

    Ubuntu changed the way ham radio operators use their computers. Ubuntu changed the way kids get to know their computers.

    Thanks Ubuntu project!

    by (9M2PJU) at August 24, 2013 08:34

    Server Fault Meta

    Extra word in help document

    I had cause to post text from the help documentation in a comment today, and there's a section that reads

    too broad - if your question could be answered by an entire book, or has many of valid answers, it's probably too broad for our format

    You'll notice an extra "of" between "many" and "valid". If someone with privileges could remove that, it'd probably be good.

    by MadHatter at August 24, 2013 08:03

    ASCII Art Farts

    #5175: CUTE ELEPHANT

           .--.     ___/     \                       
          /    `.-""   `-,    ;                      
         ;     /     O O  \  /	                      
         `.    \          /-'     THIS IS AN ELEPHANT
        _  J-.__;      _.'                           
       (" /      `.   -=:         AND IT IS CUTE     
        `:         `, -=|                            
         |  F\    i, ; -|         THAT IS ALL        
         |  | |   ||  \_J                            
    fsc  mmm! `mmM Mmm'                              

    by (ASCII Art Farts: de) at August 24, 2013 07:00

    Boing Boing

    New biography of MAD editor Al Feldstein

    Al Feldstein began working at EC comics, publishers of Weird Science, Weird Fantasy, Tales from the Crypt, The Vault of Horror, and The Haunt of Fear in 1948. Soon he became editor of most of EC's titles. He typically wrote and illustrated a story in each title and drew many of the covers, a mind-bogglingly prolific output. Eventually he stopped doing the art for stories and stuck with editing, writing, and cover illustrations. According to Wikipedia, from "late 1950 through 1953, he edited and wrote stories for seven EC titles." I've always loved his signature, which features elongated horizontals on the F and the T, and an extended vertical on the N.

    After MAD creator Harvey Kurtzman got in a fight with publisher William Gaines over ownership of the comic and left EC in 1956, Gaines put Feldstein in charge of the humor magazine, where he remained as editor until 1985.

    This month, IDW released Feldstein: The Mad Life and Fantastic Art of Al Feldstein!, a 320-page biography written by Grant Geissman (who is a far-out jazz guitarist in addition to being a biographer of comic book luminaries). My copy is in the mail. In the meantime, enjoy these sample pages below, swiped from Bhob Stewart's Potrzebie blog.

    Al Feldstein is retired from magazine work, but is an active painter of Western and wildlife scenes.

    Feldstein: The Mad Life and Fantastic Art of Al Feldstein!

    by Mark Frauenfelder at August 24, 2013 06:26

    Planet HantsLUG

    Guido Fawkes' blog

    Planet #BitFolk

    Phil Spencer (CrazySpence): Heat Sinks on the Pi?

    Do You need heat sinks for the Pi?

    Short answer would be: no, you don’t.

    Long answer would be: Depends what you are doing with it.

    My first Pi project was a media player, and the Pi handles this incredibly well without any overclocking; the temperature remains at an acceptable level. For the purpose of a media player, I would have to say no, you do not need heat sinks.

    I had an idea lurking around in my head to try the Pi for arcade emulation, and emulation needs more juice. In this use case I intended to get a second SD card, try some emulators and play some games to test the waters. For this test I also planned to overclock, so if you intend to use your Pi in this manner then I would say yes.

    I went around looking at which heat sinks people recommended and came across a kit with 3 small heat sinks and thermal paper to attach them to the Pi. It took about a week for the package to arrive, and it was pretty plain: a small baggie containing 3 small heat sinks and a sheet of thermal paper.


    I took the case off my Pi, cut appropriately sized squares of thermal paper for the heat sinks, and applied them to the Pi. Once I was done I was impressed by the improved appearance. I know it sounds silly, but the Pi did look cooler with the heat sinks on. You can even see them through my Pibow case.



    So after the heat sinks were attached I tested the Pi. The heat sinks provided about a 20-degree difference. I have experienced no issues going up to 950 MHz with my Pi. Some may say you can do that without the sinks, but when it comes to overclocking I'd rather err on the side of not constantly replacing cooked Pis.
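
    For reference, the overclock lives in the Pi's boot configuration; this is a sketch based only on the 950 MHz figure above (the author's exact settings are not given, and voltage/turbo options are left at their defaults):

```ini
# /boot/config.txt — mild overclock; heat sinks and good airflow advised
arm_freq=950
```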



    August 24, 2013 05:27

    Planet Sysadmin

    Chris Siebenmann: My personal view of Fedora versus Ubuntu on the desktop

    It all started with a series of tweets (1, 2) where I wound up saying:

    I have a lot of history with [Fedora on my desktop] and it still seems to be the best of a bad lot of choices. Ubuntu makes me even more unhappy.

    This made me really think about what I felt about Ubuntu and Fedora. The following are my own views about my personal usage and I don't expect anyone else to agree with them. They are also more than a little bit inflammatory, but I don't feel like lying about my views.

    I run Fedora on my desktop and am unlikely to ever run Ubuntu for several major reasons:

    • Having worked with both, I feel that RPMs are a better packaging format in practice than Debian .debs for both source and binary packages. (Yes, I care about this a fair bit.)

    • Ubuntu is not meaningfully a community distribution. Regardless of the official stance, it is really Canonical's distribution (Canonical's attempts to fool people about this leave a bad taste in my mouth, but that's a side issue).
    • I don't believe in the direction that Canonical is taking Ubuntu's user interface and design. I don't believe that it's possible to have a single interface that is good for tablets, phones, and regular desktops with keyboards and mice and (multiple) monitors, and Canonical is clearly focusing on tablets and phones, not desktop computers.

    I've said plenty of bad things about Gnome 3 in Fedora and the direction the Gnome standard desktop is going, but at least the current Gnome philosophy is not the official viewpoint of the distribution and there are plenty of real alternatives (I'm running one on my laptop). Unity is the official 'this is the way it is going to be' interface of Canonical. Everything else is at best a second class citizen.

    Oh, Canonical may not admit that or say it outright but come on, everyone knows what the score is. Canonical doesn't give a rat's rear end for anything except Canonical's priorities. This handily brings me to a final issue:

    • I no longer trust Canonical itself to have any real care for my interests or the interests of open source and Linux in general. What made this crystal clear to me was Canonical deciding to ship user desktop searches off to Amazon for affiliate revenue and never mind any of the many, many problems with this.

    (I'm aware that I'm late to the party on this one.)

    If Fedora screws something up, I have confidence that it is going to be inadvertent and that there are real people there who care. Canonical? No. If I put Ubuntu on my desktop I'd be just as much at the mercy of an uncaring corporation as if I used OS X or Windows. And that corporation has demonstrated that its priorities and interests are very divergent from mine.

    So the short version: Canonical is going to do whatever it feels like, it's going to periodically do bad things to me, and it's not even going to produce a desktop that I like. And there is no chance that Canonical is going to listen to me, either individually or en masse. Canonical has a goal and I am just a bystander (since I use a desktop machine, an unimportant one).

    (We continue to use Ubuntu for servers on an LTS release to LTS release basis. It remains the best Linux server distribution I know of for our purposes, which require a blend of long support, reasonably frequent releases with current packages on release, and a wide package selection.)

    Sidebar: smaller reasons

    • Fedora is better than Debian at moving forward. Sometimes this is not a great thing and I've heard rumbles that Fedora is slipping, but on the whole I like the results.

    • relatedly, I think that Fedora is generally making good technical choices when it moves forward. Debian has visibly fumbled several important issues that Fedora has gotten right (cf).

    • Ubuntu is worse than Debian at making good technical choices, as hard as that is to believe. Especially, Canonical seems to have a terrible case of Not Invented Here syndrome, which is deadly in the open source Linux world. Exhibit one of this for me is their grim insistence on sticking with upstart for their init system.

    • I don't think that Canonical is really committed to open source in their heart. Instead open source is a strategic choice for them, one I expect them to abandon when and where it is convenient. I can't imagine Fedora or Debian doing this; both are really committed to the spirit of open source, not just its legalities.

    Debian people will be unhappy with me for saying this, but in general I've wound up feeling that Fedora gets stuff done and Debian doesn't. And yes, I've heard rumbles that Fedora has its share of real internal problems and things are more precarious and problematic than they look from the outside.

    (I compare Fedora to Debian since Ubuntu inherits a significant number of things from Debian. Or at least I perceive it as doing so.)

    (See also.)

    August 24, 2013 05:12

    Boing Boing

    People give Breaking Bad's Anna Gunn tons of crap for being the actor who plays Skyler White

    "My character, to judge from the popularity of Web sites and Facebook pages devoted to hating her, has become a flash point for many people’s feelings about strong, nonsubmissive, ill-treated women. As the hatred of Skyler blurred into loathing for me as a person, I saw glimpses of an anger that, at first, simply bewildered me." From "I have a character issue," by Anna Gunn in the New York Times.

    by Xeni Jardin at August 24, 2013 05:03

    NSA paid tech companies millions to cover cost of PRISM compliance

    Sourcing its story on files provided by whistleblower and former NSA contractor Edward Snowden, the Guardian reported today that the NSA paid millions to cover costs associated with PRISM compliance to tech companies including Google and Yahoo. The top-secret files referenced in the Guardian's report today amount to the first publicly shared evidence of a financial relationship between the US agency and internet service providers.

    The costs in question were incurred after a 2011 FISA court ruling.

    Charlie Savage at the New York Times digs into the story here.


    by Xeni Jardin at August 24, 2013 04:59

    Server Fault Meta

    Link to "learn more" about a tag is unclickable in Google Chrome

    Link to "learn more" about a tag is unclickable in Google Chrome.

    Steps to reproduce bug:
    1. Open page to "Ask Question"
    2. Start typing in the "Tags" field at the bottom
    3. Mouse over any one of the tags
    4. Try to click the link that says "learn more".

    It's impossible to click this link in Google Chrome (Version 30.0.1599.14 beta-m) but it works just fine in Mozilla Firefox.

    I noticed it here on SF first, but this seems to affect all sites on the Stack Exchange network. Is there a more appropriate meta site for this issue?

    by Nic at August 24, 2013 04:12

    Strange Beaver

    Cool Cartoon Network Bullet Train

    Cartoon Network covered this entire bullet train to try and advertise their network. The result is beyond awesome


    by Admin at August 24, 2013 03:19


    Oh good, RSS spam.

    Either Feedly or Newsify (I don't know which) has begun inserting "sponsored posts" "ads" spam into my RSS feed.

    Is there still no better iOS alternative to this bullshit?

    Previously, previously.

    by jwz at August 24, 2013 02:59

    Daring Fireball


    The biggest problem with the NSA scandal is the lack of accountability.

    by John Gruber at August 24, 2013 01:31

    Boing Boing

    Important baby panda news

    Mei Xiang, the female panda who lives at the Smithsonian National Zoo, gave birth today. Above is a screen shot from the Zoo's Panda Cam, showing the baby shortly after birth.

    Why should you care about this not-quite-yet-but-soon-to-be adorable baby animal more than you care about any other adorable baby animal? Because the scientific oddities of panda reproduction make its story very interesting.

    First, it's incredibly difficult for pandas to get knocked up. They're only fertile once a year and have trouble successfully mating in captivity. Mei Xiang was artificially inseminated with the sperm of two different male pandas back in March. All of Mei Xiang's cubs have been conceived this way, but the artificial inseminations don't always work. She gave birth once in 2005 to Tai Shan who now lives in China. It was another 7 years before a second pregnancy took, but the unnamed cub only lived for six days.

    Second, you can't tell whether or not pandas are pregnant until there either is or isn't a baby panda.

    They go through the same symptoms and physical changes either way and nobody even knows exactly how long the panda gestation period is.

    Plus, they're notoriously difficult to successfully ultrasound. In fact, Mei Xiang's last ultrasound on August 5 showed no sign of a fetus.

    Basically, panda reproduction is weird.

    So, break out the bubbly for this new, little cub with a bit more enthusiasm than might be applied to, say, a litter of rabbits.


    by Maggie Koerth-Baker at August 24, 2013 00:17

    August 23, 2013

    Planet Ubuntu

    The Fridge: Ubuntu 12.04.3 LTS released

    The Ubuntu team is pleased to announce the release of Ubuntu 12.04.3 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.

    As with 12.04.2, 12.04.3 contains an updated kernel and X stack for new installations on x86 architectures.

    As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 12.04 LTS.

    Kubuntu 12.04.3 LTS, Edubuntu 12.04.3 LTS, Xubuntu 12.04.3 LTS, Mythbuntu 12.04.3 LTS, and Ubuntu Studio 12.04.3 LTS are also now available. For some of these, more details can be found in their announcements:

    To get Ubuntu 12.04.3

    In order to download Ubuntu 12.04.3, visit:

    Users of Ubuntu 10.04 will be offered an automatic upgrade to 12.04.3 via Update Manager. For further information about upgrading, see:

    As always, upgrades to the latest version of Ubuntu are entirely free of charge.

    We recommend that all users read the 12.04.3 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

    If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

    Help Shape Ubuntu

    If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

    About Ubuntu

    Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

    Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

    More Information

    You can learn more about Ubuntu and about this release on our website listed below:

    To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

    Originally posted to the ubuntu-announce mailing list on Fri Aug 23 23:31:31 UTC 2013 by Stéphane Graber

    August 23, 2013 23:53

    Boing Boing

    Objects and sounds in a third grade classroom, remixed by students

    Edgar Camago, aka DJ Overeasy, did an amazing Breaking Bad remix video we featured recently on Boing Boing. Edgar also teaches third grade at a school in San Francisco.

    "My class recently created a song using nothing but objects and sounds found in the classroom or in school," Edgar says. And here's the resulting video.

    "Any money made off of YouTube will be donated to my school," Edgar says.

    He's on Facebook, and YouTube.

    More about the video, below.

    An Overeasy remix using audio and video samples of things that happen in my third grade classroom. The students wrote and performed all their own parts and even helped with filming, mic'ing, and some composition!

    As part of my sell to the parents for letting me do this I decided it'd be a neat idea if I could give whatever sum -- even if nominal -- of money made from advertisements on YouTube to the school. The money would be used for this particular group of students' outdoor ed program. Our school is small and a lot of the funding for our field trips and activities come out of our parents' pockets or from fundraising.

    In short: share, share, share! It gives me a chance to share my music and it helps out the kids a bit.

    A few last things:

    - Keep it civil in the comments. These are eight and nine year old kids. I really don't want to remove the comments section.

    - Authorization forms were obtained for all of the students pictured in this video.

    - Big shout out to our first grade teacher, Aly Pence, who helped the girls learn that chair dance (in one sitting!)

    - Everything you hear is depicted in some shape or form in the video. I did edit the heck out of some sounds, but for the most part what you see is what you hear.

    - It's tough having a day job and making these videos, but viewership makes it all worthwhile. Please like, comment, and subscribe.

    - Feel free to leave ideas for remixes in the future. I'd love to take on some of your challenges!

    Big thanks to all my students and their parents. Y'all gonna be rockstars this year in fourth. Go on and smash it!


    by Xeni Jardin at August 23, 2013 23:39

    Planet Debian

    Thorsten Glaser: FrOSCon 2013, or, why is there no MirBSD exhibit?

    FrOSCon is approaching, and all MirBSD developers will attend… but why’s there no MirBSD exhibit? The answer to that is a bit complex. First let’s state that of course we will participate in the event as well as the Open Source world. We’ll also be geocaching around the campus with other interested (mostly OSS) people (including those we won for this sport) and helping out other OSS projects we’ve become attached to.

    MirOS BSD, the operating system, is a niche system. The conference on the other hand got “younger” and more mainstream. This means that almost all conference visitors do not belong to the target group of MirOS BSD which somewhat is an “ancient solution”: the most classical BSD around (NetBSD® loses because they have rc.d and PAM and lack sendmail(8), sorry guys, your attempt at being not reformable doesn’t count) and running on restricted hardware (such as my 486SLC with 12 MiB RAM) and exots (SPARCstation). It’s viable even as developer workstation (if your hardware is supported… otherwise just virtualise it) but its strength lies with SPARC support and “embedded x86”. And being run as virtual machine: we’re reportedly more stable and more performant than OpenBSD. MirBSD is not cut off from modern development and occasionally takes a questionable but justified choice (such as using 16-bit Unicode internally) or a weird-looking but beneficial one (such as OPTU encoding saving us locale(1) hassles) or even acts as technological pioneer (64-bit time_t on ILP32 platforms) or, at least, is faster than OpenBSD (newer GNU toolchain, things like that), but usually more conservatively, and yes, this is by design, not by lack of manpower, most of the time.

    The MirPorts Framework, while technically superiour in enough places, is something that just cannot happen without manpower. I (tg@) am still using it exclusively, continuing to update ports I use and occasionally creating new ones (mupdf is in the works!), but it’s not something I’d recommend someone (other than an Mac OSX user) to use on a nōn-MirBSD system (Interix is not exactly thriving either, and the Interix support was only begun; other OSes are not widely tested).

    The MirBSD Korn Shell is probably the one thing I will be remembered for. But I have absolutely no idea how one would present it at a booth at such an exhibition. A talk is much more likely. So no on that front, too.

    jupp, the editor which sucks less, is probably something that does deserve mainstream interest (especially considering Natureshadow is using it while teaching computing to kids) but probably more in a workshop setting. And booth space is precious enough in the FH so I think that’d be unfair.

    All the other subprojects and side projects Benny and I have, such as mirₘᵢₙcⒺ, josef stalin, FreeWRT, Lunix Ewe, Shellsnippets, the fonts, etc., are interesting but share little, if any, common ground. Again, this does not match the vast majority of visitors. We probably should push a number of these more, but a booth isn't “it” here, either.

    MirOS Linux (“MirLinux”) and MirOS Windows are, despite otherwise-saying rumours called W*k*p*d*a, only premature ideas that will not really be worked on (though MirLinux concepts are found in mirₘᵢₙcⒺ and stalin).

    As you can see, despite all developers having full-time dayjobs, The MirOS Project is far from obsolete. We hope that our website visitors understand our reasons not to have an exhibition booth of our own (even if the SPARCstation makes for a way cool one, it's too heavy to lift all the time), and would like to point out that there are several other booths (commercial ones, as well as OSS ones such as AllBSD, Debian and (talking to) others) and other itineraries we participate in. This year both Benny and I have been roped into helping out with the conference itself, too (not exactly involuntarily, though).

    The best way to talk to us is IRC during regular European “geek” hours (i.e. until way too late into the night – which Americans should benefit from), semi-synchronously, or mailing lists. We sort of expect you to not be afraid to RTFM and look up acronyms you don’t understand; The MirOS Project is not unfriendly but definitely not suited for your proverbial Aunt Tilly, newbies, “desktop” users, and people who aren’t at least somewhat capable of using written English (this is by design).

    by (MirOS Developer tg) at August 23, 2013 23:37

    Planet Ubuntu

    Edubuntu: Edubuntu 12.04.3 Release Announcement

    Edubuntu Long-Term Support

    Edubuntu 12.04.3 LTS is the third Long Term Support (LTS) point release in Edubuntu 12.04's five-year support cycle.

    Edubuntu's Third LTS Point Release

    The Edubuntu team is proud to announce the release of Edubuntu 12.04.3. This is the third of four LTS point releases for this LTS lifecycle. The point release includes all the bug fixes and improvements that have been applied to Edubuntu 12.04 LTS since it was released, as well as updated hardware support and installer fixes. If you have an Edubuntu 12.04 LTS system and have applied all the available updates, then your system is already on 12.04.3 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended, since it is installable on more systems than before and requires drastically fewer updates than installing from the original 12.04 LTS media.

    This release ships with a backported kernel and X stack, which lets users make use of more recently released hardware. Current users of Edubuntu 12.04 won't be automatically updated to this backported stack; you can, however, install the packages manually.

    • Information on where to download the Edubuntu 12.04.3 LTS media is available from the Downloads page.
    • We do not ship free Edubuntu discs at this time; however, there are 3rd-party distributors who ship discs at reasonable prices, listed on the Edubuntu Marketplace.

    Although Edubuntu 10.04 systems will offer an upgrade to 12.04.3, it's not an officially supported upgrade path. Testing, however, indicated that this usually works if you're ready to make some minor adjustments afterwards.

    To ensure that the Edubuntu 12.04 LTS series will continue to work on the latest hardware as well as keeping quality high right out of the box, we will release another point release before the next long term support release is made available in 2014. More information is available on the release schedule page on the Ubuntu wiki.

    The release notes are available from the Ubuntu Wiki.

    Thanks for your support and interest in Edubuntu!

    August 23, 2013 23:23

    Planet PostgreSQL

    Josh Berkus: PostgreSQL plus Vertica on Tuesday: SFPUG Live Video

    This upcoming Tuesday, the 27th, SFPUG will have live streaming video of Chris Bohn from Etsy talking about how he uses PostgreSQL and Vertica together to do data analysis of Etsy's hundreds of gigabytes of customer traffic, barring technical difficulties with the video or internet, of course.

    The video will be on the usual SFPUG Video Channel.  It is likely to start around 7:15PM PDT.  Questions from the internet will be taken on the attached chat channel.

    For those in San Francisco, this event will be held at Etsy's new downtown SF offices, and Etsy is sponsoring a Tacolicious taco bar.  Of course, the event is already full up, but you can always join the waiting list.

    In other, related events, sfPython will be talking about PostgreSQL performance, and DjangoSF will be talking about multicolumn joins, both on Wednesday the 28th.  I'll be at DjangoSF, doing my "5 ways to Crash Postgres" lightning talk.

    August 23, 2013 23:11


    "They are turning OUR atmosphere into THEIR atmosphere."

    On the Phenomenon of Bullshit Jobs

    In the year 1930, John Maynard Keynes predicted that, by century's end, technology would have advanced sufficiently that countries like Great Britain or the United States would have achieved a 15-hour work week. There's every reason to believe he was right. In technological terms, we are quite capable of this. And yet it didn't happen. Instead, technology has been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.

    Why did Keynes' promised utopia -- still being eagerly awaited in the '60s -- never materialise? The standard line today is that he didn't figure in the massive increase in consumerism. Given the choice between less hours and more toys and pleasures, we've collectively chosen the latter. This presents a nice morality tale, but even a moment's reflection shows it can't really be true. Yes, we have witnessed the creation of an endless variety of new jobs and industries since the '20s, but very few have anything to do with the production and distribution of sushi, iPhones, or fancy sneakers. [...]

    It's as if someone were out there making up pointless jobs just for the sake of keeping us all working. And here, precisely, lies the mystery. In capitalism, this is precisely what is not supposed to happen. Sure, in the old inefficient socialist states like the Soviet Union, where employment was considered both a right and a sacred duty, the system made up as many jobs as they had to (this is why in Soviet department stores it took three clerks to sell a piece of meat). But, of course, this is the sort of very problem market competition is supposed to fix. According to economic theory, at least, the last thing a profit-seeking firm is going to do is shell out money to workers they don't really need to employ. Still, somehow, it happens. [...]

    There's a lot of questions one could ask here, starting with, what does it say about our society that it seems to generate an extremely limited demand for talented poet-musicians, but an apparently infinite demand for specialists in corporate law? (Answer: if 1% of the population controls most of the disposable wealth, what we call "the market" reflects what they think is useful or important, not anybody else.) But even more, it shows that most people in these jobs are ultimately aware of it. In fact, I'm not sure I've ever met a corporate lawyer who didn't think their job was bullshit. The same goes for almost all the new industries outlined above. There is a whole class of salaried professionals that, should you meet them at parties and admit that you do something that might be considered interesting, will want to avoid even discussing their line of work entirely. Give them a few drinks, and they will launch into tirades about how pointless and stupid their job really is.

    This is a profound psychological violence here. How can one even begin to speak of dignity in labour when one secretly feels one's job should not exist? How can it not create a sense of deep rage and resentment. Yet it is the peculiar genius of our society that its rulers have figured out a way to ensure that rage is directed precisely against those who actually do get to do meaningful work. For instance: in our society, there seems a general rule that, the more obviously one's work benefits other people, the less one is likely to be paid for it. Again, an objective measure is hard to find, but one easy way to get a sense is to ask: what would happen were this entire class of people to simply disappear? Say what you like about nurses, garbage collectors, or mechanics, it's obvious that were they to vanish in a puff of smoke, the results would be immediate and catastrophic. A world without teachers or dock-workers would soon be in trouble, and even one without science fiction writers or ska musicians would clearly be a lesser place. It's not entirely clear how humanity would suffer were all private equity CEOs, lobbyists, PR researchers, actuaries, telemarketers, bailiffs or legal consultants to similarly vanish.


    by jwz at August 23, 2013 23:01

    Boing Boing

    CIA admits Area 51 exists, but won't someone think of the space aliens?

    A CIA report released last week "after eight years of prodding by a George Washington University archivist researching the history of the U-2" acknowledged the existence of a secret military testing base called Area 51. But the report "made no mention of colonies of alien life, suggesting that the secret base was dedicated to the relatively more mundane task of testing spy planes." And that has a lot of alien enthusiasts disappointed, writes Adam Nagourney in the New York Times.

    by Xeni Jardin at August 23, 2013 22:50

    Daring Fireball

    Why Steve Ballmer Should Have Been Shitcanned No Later Than 2009

    Another one on Ballmer from the archive:

    The damning thing isn’t that Apple got there first; it’s that even after Apple revealed it, that Ballmer didn’t get it, that he didn’t see instantly that Apple had unveiled something amazing and transformative. All Ballmer could see was the near future, the next few months where the iPhone was indeed too expensive and where typing on a touchscreen was a novelty.

    by John Gruber at August 23, 2013 21:55

    Microsoft Forces Steve Ballmer to Resign

    Officially he’s “retiring within the next 12 months”, but that’s just framing to allow Ballmer and Microsoft itself to save some face. This is the axe, and it was long overdue. Ballmer has been a successful steward growing profits from the franchises he inherited from the Gates era — Windows and Office. But he’s been an abject failure at developing anything new. Under his watch Windows has been supplanted by Apple’s iOS and Google’s Android. Mobile is the industry’s growth area, and Microsoft is barely a player.

    Here’s the hitch though: Ballmer has chased all potential successors out of the company — Ray Ozzie, Robbie Bach, J Allard, and most recently, Steven Sinofsky.

    by John Gruber at August 23, 2013 21:46

    Planet Ubuntu

    Benjamin Kerensa: Who’s the Top FOSS Blogger?

    I have made it to the final round of FOSS Force's Top FOSS (Free and Open Source Software) Blogger Contest and am happy to be among some amazing bloggers in Open Source, two of whom I have been fortunate enough to meet.

    I was definitely an underdog in the last round, but who knows, maybe I can prevail as the Top FOSS Blogger with all of your support, or perhaps Larry the CrunchBang Guy will win? Anyway, head right over and get your vote in today!

    August 23, 2013 21:46

    Huawei Tecal ES3000 Application Accelerator Review

    The Huawei Tecal ES3000 is a family of full-height, half-length enterprise application accelerators that leverage MLC NAND in capacities up to 2.4TB over a PCIe 2.0 x8 interface. On the surface the Huawei cards sound similar to many other products on the market, but a deeper look reveals a unique triple-controller design that joins two PCBs together to form an impressive offering. At the top end of the performance scale this means 3.2GB/s maximum read bandwidth and 2.8GB/s write. On the latency side, all three capacities can deliver 49µs read latency and 8µs write latency. The cards also have a number of additional features, including enhanced error checking, power-fail protection and mechanisms to drive enhanced endurance over the course of their life.

    read more

    by StorageReview Enterprise Lab at August 23, 2013 21:17

    Data Center Knowledge

    Steve Ballmer to Retire As Microsoft CEO

    Microsoft Corp. today announced that Chief Executive Officer Steve Ballmer has decided to retire as CEO within the next 12 months, after a special committee of the company's board names a new CEO.

    by Rich Miller at August 23, 2013 21:15

    Daring Fireball

    Google Designing Its Own Self-Driving Car

    Amir Efrati:

    Google Inc., which has been working on software to help major automakers build self-driving cars, also is quietly going around them by designing and developing a full-fledged self-driving car, according to people familiar with the matter.

    They’ve got to be eyeing Tesla, right?

    by John Gruber at August 23, 2013 21:00

    Planet Ubuntu

    Jono Bacon: Start With Art In Indianapolis

    Yesterday I took a few days off to fly out to Indianapolis to provide the keynote for the annual Start With Art event run by the Arts Council Of Indianapolis. Many thanks to Dave Lawrence for the kind invitation.

    The event I spoke at was a luncheon attended by 1,000 people, which included awards given out to local artists and community leaders, a speech from the First Lady of Indianapolis, and some musical performances. My keynote wrapped up the two-hour event.

    My presentation focused on the intersection between art and community, and the many similar driving forces behind both. Art is meant to be created, distributed, shared, and enjoyed, and communities are a wonderful way to infuse artists with opportunity. Likewise creativity is the backbone of great community.

    My presentation touched on a number of experiences and takeaways from my time as both an artist (citing examples of Severed Fifth and LugRadio) and a community manager (covering Ubuntu), and a set of general lessons and conclusions that I have learned over the years. Although I had never been to a Start With Art event before, and was a little nervous as I didn't really know the audience, the presentation was well received.

    I love speaking, and I love meeting new people at the events I speak at, but I have to admit, this event felt different to most.

    I must confess that I didn't have a particularly large scope of knowledge about what the Arts Council Of Indianapolis actually does, but the opening remarks included a range of announcements of new areas of focus and work, as well as updates about existing programs. The council highlights artists, provides funding campaigns, has launched a local crowd-funding portal to connect donors to artists, has built a central arts website for ticket and performance information, and more. The strong overriding message I got from all of this was that they are doing everything they can to make Indy a national example of a thriving arts ecosystem.

    The level of passion that I experienced today from the organizers, attendees, and sponsors was inspiring. They are clearly charting a course for Indianapolis to be to the arts what Nashville is to music and Silicon Valley is to technology. The core takeaway from my presentation was that great communities succeed when united around a mission, and the organizers from the Arts Council Of Indianapolis and their community leaders are a thundering example of this sense of purpose. They are not just talking about how to improve the arts in Indianapolis, they are making it happen, and the event today was a strong testament to their efforts.

    A truly inspiring trip, and many thanks to everyone at the Arts Council Of Indianapolis for taking such good care of me while I was in town. Indianapolis is an awesome city, and it is wonderful to know that the arts community is in such good hands.

    August 23, 2013 20:55

    Planet Debian

    Gregor Herrmann: DebConf 2013

    [still no pictures here, feel free to go to].

    After my report about DebCamp, & after being home for 5 days now, here are some thoughts about DebConf13:

    First of all, I really enjoyed being there, & this was one of the best DebConfs for me so far. An important reason for this was the place: Le Camp was in my opinion the best DebConf venue ever (I've been to), with everything & everyone in one place, & still enough space (especially outdoors space!) for work & relaxing. Kudos to the local team for finding this spot, & for sticking to it despite some criticism.

    In general, I'd like to say thanks to the local team for the perfect organisation of all aspects – you did a great job!

    What else did I enjoy? As usual, meeting old friends & making new ones was an important part. It was great to see people I've worked together with on-line in person for the first time, & also to get to know new people. – Debian not only produces a great operating system, it also is an awesome community!

    Speaking of which: I attended quite a few sessions from the Community and Team tracks, & it's good to see that lots of energy & thought are going into improving how we deal with new or existing contributors.

    Like each year, we had the "use Perl;" BoF, the annual meeting of the Debian Perl Group, where we discussed various topics around our tasks, tools, & workflow. As each year, in a very friendly & collaborative atmosphere.

    Besides that, I attended talks about systemd & upstart; the first one disappointed me, since it felt more like a sales pitch than a technical talk & the presenter came across a lot like "I know better than you what's good for you"; the latter was better but tried to avoid upstart's main issue, the infamous Canonical Contributor License Agreement; of course the audience raised this topic in the Q&A part afterwards. – Rock vs. hard place …

    Other interesting topics were what Debian can learn from Ubuntu's QA processes for its own testing migration (mail from the Release Team pending, AFAIK); lots of git stuff, & especially the "birth" of dgit, a tool to treat the Debian archive as a git remote. If this gets traction, it could be a revolution in packaging!

    The lintian BoF encouraged me further to try to create some pkg-perl specific checks, & then there was jenkins, piuparts, autopkgtest, etc.

    On Friday we celebrated Debian's 20th birthday with a huge Kremšnita/Crèmeschnitte/mille-feuille. & with the Poetry Night at the campfire.

    Usually I'd close with "See you next year in Portland!"; but since it seems like the local team has decided to scratch DebCamp (which they only said after being asked in the Q&A after their presentation), I'm not sure how attractive this really is for me … We'll see.

    Thanks again to everyone who made this DebConf possible!

    August 23, 2013 20:43

    Planet UKnot

    Gut bacteria promoting colorectal cancer (40 minute biostatistics)

    A couple of studies have been published recently, and quite a bit written about them, which link the abundance of types of bacteria found in the mouth with incidence of colorectal cancer.

    These findings result from the observation that the oral bacterium Fusobacterium nucleatum is often found in high abundance in colorectal carcinoma tissue samples.

    Gram-negative stained culture of F. nucleatum. Image Courtesy of J. Michael Miller, Ph.D.,(D)ABMM of National Center for Zoonotic, Vector-borne, and Enteric Diseases. Picture submitted by him to American Society for Microbiology

    Image & caption nicked from:

    I've not read up on the science and microbiology in any depth, but I felt it would be interesting to plot and summarize some of the publicly available health data on two variables relating to oral health and colorectal cancer incidence.

    Hence I pulled down some data from the web and made some charts.

    US state-by-state data sets are often complete, up to date, and freely available for many observable values.

    After a bit of Google searching, I came up with these two data sets from the CDC:

    The US Centers for Disease Control and Prevention have data for colorectal cancer for 2009 here:

    [Age-Adjusted Invasive Cancer Incidence Rates and 95% Confidence Intervals by State (Table 5.4.1M) *† Rates are per 100,000 persons and are age-adjusted to the 2000 U.S. standard population (19 age groups - Census P25-1130).]

    and the data available for download

    The site also carries summary data from the BRFSS survey (the "Behavioral Risk Factor Surveillance System"), which in 2008 asked adults aged 18+ whether they "had visited a dentist or dental clinic in the past year":

    The two data sets can be merged, and then a scatter plot drawn, correlations computed, and a best-fit line plotted (assuming linear association and normally distributed data, etc.).

    here is the summary data:


    characterize the association... and consider any other ideas..

    August 2013


    genome sequence for Fusobacterium nucleatum subspecies polymorphum

    by Tom Hodder ( at August 23, 2013 20:30

    Skeptic Events

    SitP Birmingham - : Open Mic Night

    When: Wed Sep 11, 2013 6:30pm to 8:30pm  UTC

    Where: The Victoria 48 John Bright Street Birmingham B1 1BN
    Event Status: confirmed
    Event Description: Skeptics in the Pub Birmingham. For more information, see SitP Ref [SitP1741Event]

    by (The Skeptic Mag (RSS)) at August 23, 2013 20:18

    LWN Headlines

    Clasen: GNOME 3.10 sightings

    After releasing GNOME 3.9.90, which is the first beta of the 3.9 development branch, Matthias Clasen reflects on what is coming in GNOME 3.10. New features include a combined system status menu, some changes to control-center, the new Maps application, and more use of "header bars". "Our previous approach of hiding titlebars on maximized windows had the problem that there was no obvious way to close maximized windows, and the titlebars were still using up vertical space on non-maximized windows. Header bars address both of these issues, and pave the way to the Wayland future by being rendered on the client side."

    by jake at August 23, 2013 20:16

    Planet Debian

    Justus Winter: No noweb anymore...

    ... which is probably a good thing. But here is the boot log you all have been waiting for:

    start ext2fs: Hurd server bootstrap: ext2fs[device:hd0s1] exec init proc auth
    INIT: version 2.88 booting
    Using makefile-style concurrent boot in runlevel S.
    Activating swap...done.
    Checking root file system...fsck from util-linux 2.20.1
    hd2 : tray open or drive not ready
    hd2 : tray open or drive not ready
    hd2 : tray open or drive not ready
    hd2 : tray open or drive not ready
    end_request: I/O error, dev 02:00, sector 0
    /dev/hd0s1: clean, 44693/181056 files, 291766/723200 blocks
    Activating lvm and md swap...(default pager): Already paging to partition hd0s5!
    Checking file systems...fsck from util-linux 2.20.1
    hd2 : tray open or drive not ready
    hd2 : tray open or drive not ready
    end_request: I/O error, dev 02:00, sector 0
    Cleaning up temporary files... /tmp.
    Mounting local filesystems...done.
    Activating swapfile swap...(default pager): Already paging to partition hd0s5!
    df: Warning: cannot read table of mounted file systems: No such file or directory
    Cleaning up temporary files....
    Configuring network interfaces...Internet Systems Consortium DHCP Client 4.2.2
    Copyright 2004-2011 Internet Systems Consortium.
    All rights reserved.
    For info, please visit
    Listening on Socket//dev/eth0
    Sending on   Socket//dev/eth0
    *** stack smashing detected ***: dhclient terminated
    Failed to bring up /dev/eth0.
    Cleaning up temporary files....
    Setting up X socket directories... /tmp/.X11-unix /tmp/.ICE-unix.
    INIT: Entering runlevel: 2
    Using makefile-style concurrent boot in runlevel 2.
    Starting enhanced syslogd: rsyslogd.
    Starting deferred execution scheduler: atd.
    Starting periodic command scheduler: cron.
    Starting system message bus: dbusFailed to set socket option"/var/run/dbus/system_bus_socket": Protocol not available.
    Starting OpenBSD Secure Shell server: sshd.
    unexpected ACK from keyboard
    GNU 0.3 (debian) (console)
    login: root
    root@debian:~# ifup /dev/eth0
    Internet Systems Consortium DHCP Client 4.2.2
    Copyright 2004-2011 Internet Systems Consortium.
    All rights reserved.
    For info, please visit
    Listening on Socket//dev/eth0
    Sending on   Socket//dev/eth0
    *** stack smashing detected ***: dhclient terminated
    Failed to bring up /dev/eth0.
    root@debian:~# dhclient -v -pf /run/ -lf /var/lib/dhcp/dhclient.-dev-eth0.leases /dev/eth0
    Internet Systems Consortium DHCP Client 4.2.2
    Copyright 2004-2011 Internet Systems Consortium.
    All rights reserved.
    For info, please visit
    Listening on Socket//dev/eth0
    Sending on   Socket//dev/eth0
    *** stack smashing detected ***: dhclient terminated
    root@debian:~# dhclient -pf /run/ -lf /var/lib/dhcp/dhclient.-dev-eth0.leases /dev/eth0
    root@debian:~# ifup /dev/eth0
    Internet Systems Consortium DHCP Client 4.2.2
    Copyright 2004-2011 Internet Systems Consortium.
    All rights reserved.
    For info, please visit
    Listening on Socket//dev/eth0
    Sending on   Socket//dev/eth0
    DHCPREQUEST on /dev/eth0 to port 67
    DHCPACK from
    bound to -- renewal in 34108 seconds.
    ps: comm: Unknown format spec
    root@debian:~# halt
    Broadcast message from root@debian (console) (Fri Aug 23 19:42:19 2013):
    The system is going down for system halt NOW!
    INIT: Switching to runlevel: 0root@debian:~#
    INIT: Sending processes the TERM signal
    INIT: Sending processes the KILL signal
    Using makefile-style concurrent boot in runlevel 0.
    Stopping deferred execution scheduler: atd.
    task c10f53f8 deallocating an invalid port 2098928, most probably a bug.
    Asking all remaining processes to terminate...done.
    All processes ended within 1 seconds...done.
    Stopping enhanced syslogd: rsyslogd.
    Deconfiguring network interfaces...Internet Systems Consortium DHCP Client 4.2.2
    Copyright 2004-2011 Internet Systems Consortium.
    All rights reserved.
    For info, please visit
    Listening on Socket//dev/eth0
    Sending on   Socket//dev/eth0
    DHCPRELEASE on /dev/eth0 to port 67
    /dev/eth0 (2):
      inet address
      mtu           1500
    Deactivating swap...swapoff: /dev/hd0s5: 177152k swap space
    Unmounting weak filesystems...umount: /etc/mtab: Warning: duplicate entry for device /dev/hd0s1 (/servers/socket/26)
    umount: /etc/mtab: Warning: duplicate entry for device /dev/hd0s1 (/dev/cons)
    umount: could not find entry for: /dev/cons
    umount: could not find entry for: /servers/socket/26
    mount: cannot remount /: Device or resource busy
    Will now halt.
    store a new irq 11init: notifying pfinet of shutdown...init: notifying tmpfs swap of shutdown...init: notifying tmpfs swap of shutdown...init: notifying tmpfs swap of shutdown...init: notifying ext2fs device:hd0s1 of shutdown...init: halting Mach (flags 0x8)...
    In tight loop: hit ctl-alt-del to reboot

    With some tiny patches for ifupdown I've been able to resolve network-related issues. All of them? Of course not; the funny thing about developing for the Hurd is that once you fix one thing, some other thing or code path gets executed that has never been run on the Hurd before, and therefore something else breaks. In this case I fixed ifupdown to generate valid names for the pid file and the leases file, and all of a sudden dhclient starts dying.
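    The idea behind that filename fix can be sketched like this (my own illustration in Python, not the actual ifupdown patch, which is C): an interface path such as /dev/eth0 contains slashes, which have to be mapped away before the path can be embedded in a pid or leases file name.

```python
# Illustration of the idea, not the actual ifupdown patch: derive a
# valid leases file name from an interface path by replacing '/' with '-'.
def lease_file_name(iface: str) -> str:
    """Map an interface path like '/dev/eth0' to a safe file name."""
    return "dhclient.%s.leases" % iface.replace("/", "-")

print(lease_file_name("/dev/eth0"))  # dhclient.-dev-eth0.leases
```

    The result matches the `/var/lib/dhcp/dhclient.-dev-eth0.leases` name visible in the dhclient invocation in the boot log above.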

    The funny thing about that is: if one drops the -v flag from the dhclient invocation, as I did above, the crash isn't triggered, and once the lease file has been successfully written it is safe to add the -v flag again. I'm not yet sure what goes on there; then again, looking at the source of isc-dhcp-client, it is not so surprising that it crashes :/

    When I first looked at ifupdown it was written in noweb, a literate programming tool. It is an interesting idea, even more so since (classic) C can be very verbose and cryptic. But it decouples the control flow from the structure of the program, which makes patching it quite a challenge, since it is not obvious where the changes have to go. This is how ifupdown looked some weeks ago:

    % wc --lines ifupdown.nw
    6123 ifupdown.nw
    % pdftk ifupdown.pdf dump_data | grep NumberOfPages
    NumberOfPages: 113

    The ifupdown.nw file is the noweb source, from which seven .c files, four .h files, two .pl scripts and one Makefile are generated. It also contains a ridiculous amount of documentation, to the point that at several places the authors did not know what to write and just dropped some nonsensical lines into the file. The source also compiles to a 113-page PDF file that contains all of the documentation and all of the code, not at all in the order one would expect a program to be written in, but in the order the authors chose to structure the documentation. Fortunately for me, the maintainer decided to drop the noweb source and to add the generated files to the source control system. This made my job much easier :)

    So here are the patches I published this week:

    I must admit that I do not know exactly what I will do next week. Obviously fixing the dhclient crash would be nice; I'll look into that. But I'll surely find some useful thing to do.

    August 23, 2013 20:15

    Data Center Knowledge

    Welcome to Fog Computing: Extending the Cloud to the Edge

    There's a new buzzword in distributed computing: fog computing. The idea of fog computing is to distribute data, moving it closer to the end user to eliminate latency and numerous hops, and to support mobile computing and data streaming.

    by Bill Kleyman at August 23, 2013 20:03

    Strange Beaver

    Bryan Cranston Cares About Your Butt

    Long before Walter White was making meth and knocking on doors in Breaking Bad, Bryan Cranston was pushing another kind of drug

    by Admin at August 23, 2013 20:01

    LWN Headlines

    Balazs: KDE human interface guidelines: First steps

    On his blog, Björn Balazs writes about the recent effort to "reboot" the KDE human interface guidelines (HIG). There are three major sections in the HIG (structure, behavior, and presentation) and the team has a first draft of the behavior section. "We explicitly ask about your opinion. Please read the guidelines and make sure that the text is informative and comply with developers' requirements. The content should be both generic and comprehensive, and help to make KDE awesome. But we are also interested in support. If you are able to create nice sample UIs with Qt please contact the usability team via the kde-guidelines mailinglist."

    by jake at August 23, 2013 19:51

    Strange Beaver

    Hot Girl CPR Prank

    Who wouldn’t want to volunteer to help a couple Baywatch lifeguards? These guys will soon find out that things aren’t always how they appear.

    by Admin at August 23, 2013 19:47

    Continuity Software AvailabilityGuard/SAN Announced

    Continuity Software is announcing that it's expanding its risk management offerings by adding AvailabilityGuard/SAN, software which detects downtime and data loss risks across a SAN to ensure that data is protected and business continuity goals are met. As the modern SAN generally comprises several vendors' solutions and can be managed by multiple IT teams or individuals, configuration errors are bound to happen. Additionally, lots of organizations lack a universal management tool, meaning that changes in one sector of the SAN might not be completed in another region due to oversight or another IT admin being unaware. AvailabilityGuard/SAN utilizes a community-supported knowledgebase that compiles thousands of potential risks, and then it searches for and finds them. From there, it validates vendor best practices to ensure that the configuration is optimized.

    read more

    by Josh Shaman at August 23, 2013 19:47

    Planet Ubuntu

    Jorge Castro: Getting started with the Juju Local/LXC provider

    One of the coolest thing Juju does is deploy instances right on your laptop via LXC containers. This is nice because we reuse the same exact cloud image you’d use in Amazon, HP Cloud, Azure, or any other cloud, but configured on your local laptop so you can mirror what your production environment would look like. This also can save you quite a bit of money.
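    For context, a minimal local-provider entry in ~/.juju/environments.yaml looked roughly like this at the time (a sketch only – exact keys vary by Juju version):

```yaml
environments:
  local:
    type: local   # units run in LXC containers on this machine, no cloud credentials needed
```

    With an entry like that in place, bootstrapping with the local environment selected stands up the environment in containers instead of cloud instances.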

    We had a tutorial charm school on how to use the local provider, hope you enjoy it!

    Direct link

    August 23, 2013 19:16



    LWN Headlines

    More on Statistics (openSUSE blog)

    The openSUSE blog has an article with some in-depth statistics on the reach of the distribution. It includes various numbers and graphs on downloads, installations, the use of the Open Build Service, as well as a comparison with Fedora. "As you can see, Fedora has more downloads than openSUSE. Looking at the users, the situation is reverse: openSUSE has quite a bit more users than Fedora according to this measurement. How is this possible? The explanation is most likely that most openSUSE users upgrade with a 'zypper dup' command to the new releases, while Fedora users tend to do a fresh installation. Note that, like everybody else, we're very much aware of the deceptive nature of statistics: there is always room for mistakes in the analysis of data. To at least provide a way to detect errors and follow the commendable example set by Fedora, here are our data analysis scripts in github."

    by jake at August 23, 2013 18:43

    Planet Sysadmin

    SysAdmin1138: Discrimination in your Health Care plan

    With Manning announcing that she'll spend her 35 years of incarceration as a self-assigned woman, the US is getting a brief look at a particularly nasty state of affairs that has been there for years. The US Army does not provide any treatment services other than mental health for people with Gender Identity Disorder. Which means Manning will spend her 35 years in a men's prison with no access to hormones, surgery, or even simple hair removal.

    If you dig out the big coverage document that came with your (US-based) health-care plan (assuming you even have one) there is a section you probably never bothered to look at titled EXCEPTIONS. This is the list of things that the plan will NOT cover. This is the list that tells you that, no, they won't cover things like:

    • Going to Aruba for your (otherwise covered) kidney transplant.
    • Costs related to medical studies of pre-market drugs and treatments.
    • Costs relating to anything the FDA labels as 'Experimental'.
    • Purely cosmetic procedures.

    There is something else that almost always shows up on this list that really, really gets in the way of treating people like me and Chelsea Manning.

    • Costs related to treatment of Gender Identity Disorder.

    Yep, even though the DSM recognizes GID as an actual treatable disorder, and there is even a widely accepted treatment protocol for it, it's explicitly not covered in most plans. It has been this way for decades. By the protocol, treatment of GID requires interaction with three different medical professionals:

    1. Mental health professionals who guide the person through the whole process.
    2. Endocrinologists for the administration of hormones.
    3. Surgeons for any surgeries that may be needed.

    My current plan covers only the first step. They'll happily talk me out of it, but won't cover any actual medical interventions. This is the same coverage that Manning will get.

    My plan at WWU didn't cover any of it. This is progress of a sort, but only a grudging one. Hormones and Endocrinologist visits are thousands of dollars a year. Surgeries such as double mastectomies will be completely out of pocket and can easily end up close to $10K. Hair removal takes years and multiple treatments (hair grows in cycles, you see).

    Employers have to specifically negotiate coverage, which some do. San Francisco made news several years ago when they started covering the full costs. Several large tech companies advertise that they do so as well. It can be done, the effort just has to be taken.

    Why is this protocol treated so very differently than anything else?

    Dicks, but I'll get to that.

    The only other thing that ever came close to the exclusion of GID coverage is:

    • Ovariohysterectomies in women under 30

    And even that has fallen off in recent years.

    Way back in the 1960s, when the male-to-female surgery first became generally available, people started doing it. It was very scandalous, since men were cutting off their dicks. Unfortunately, some of those transitioners experienced buyer's remorse and learned that the surgery is a one-way street, and that the results aren't as good as the imagination suggests. And some of those remorse sufferers suicided.

    Cue the epic pearl-clutching.

    Something had to be Done, and Something certainly Was Done. Regulation started to fall down on this elective surgery in a haphazard way. It was in light of this that the Harry Benjamin Standards of Care were created in the 1970's, as a way to provide a widely accepted protocol for treatment. It worked.

    However, those suicides haunted the insurance actuaries. Wrongful death suits are really, really expensive. Treating GID can lead to death, therefore, we won't cover it. QED.

    That was 40 years ago, though.

    One of the big reasons those early transitioners suicided was regret over not being able to have kids. The BSC is big on making clear that sterility is one of the side effects of transition, and is a major component of the mental health requirement being satisfied before going on hormones.

    However, we've gotten a lot better at reproductive technology in the last 40 years. Sperm donation is a lot easier than it used to be, and samples stay viable longer. Egg donation is a thing now. I've known transitioners who've done gamete donation before taking the sterilization steps because of plans for maybe-kids later on.


    Numbers are illustrative, not scientific. Do not cite.

    40 years ago society was a lot more divided along gender lines, and the concept of genderqueer wasn't really a thing. You were either male or you were not (things were also a weensy bit more sexist too); there was no in-between. It was a much more gender-essentialist time. Men transitioning to women were told to always wear skirts, grow their hair out, and learn how to be demure (failure to comply could mean not getting access to hormones). Never mind that gender performance varies considerably even among those who never question their gender; that's a pointless detail: these people need to over-perform in order to pass at all.

    Another reason those transitioners suicided was because they were crammed into a role they didn't want to fit into. Perhaps they didn't want to change their job from the one they spent 20 years in to one more in line with Women's Work like teaching, but that's what the therapists demanded... and ended up hating it. And wanting the old life back, just different. But that's impossible so...

    Speaking from direct personal experience, having "between" be an option really takes the stress off of many people who are in the middle of the gender spectrum. Not having to be shoved to a -8 or +8 on the spectrum in order to have the gatekeeper open the door for you takes a lot of anxiety out of the process.

    The assumptions of 40 years ago no longer hold true, and it's time for that needless exclusion to be dropped.

    We're getting more people suiciding from untreated GID than we ever did from treated. The continued presence of this health-care exclusion is inexcusable discrimination.

    August 23, 2013 18:27

    Planet Ubuntu

    Adam Stokes: sosreport: on the road to 3.1

    We've got an aggressive feature list for the next milestone release and welcome any involvement from the community. A few of the big ticket items are the following:

    Top priority items

    • Python 3.3 and Python 2.7 support - yes we'd like to keep supporting older python versions if possible :)
    • Sos object model archive (SOMA) - This feature will allow other applications to interface with the data collected by sosreport.
    • DBUS integration - We'd like to have this feature so that controlling the behavior of sosreport is easily integrated into other systems.

    Additional high priority issues are also available.

    Other important items to note:

    • Plugins - As technologies evolve and new software rises, we are always welcoming new plugins to capture the data necessary to aid in debugging those technologies. If you've got something in mind, it's just a simple pull request away from being on the right track for inclusion.
    • Tests - Our goal is to be at 100% (I think we're at about 69%) coverage with a wide range of unittests. So if you fancy quality assurance then this is an excellent opportunity for you :D

    More info

    For more information and other issues scheduled to be fixed for the next release visit sosreport issues

    August 23, 2013 18:17

    Planet PostgreSQL

    Jim Smith: COMMIT / ROLLBACK in Oracle and PostgreSQL



    David Edwards and Lucas Wagner




    The use of transactions in relational databases allows a database architect to logically group SQL statements into chunks that execute using an “all or nothing” strategy. In case of disaster, such as power loss or a network outage, grouping a selection of SQL statements into a transaction means that either all of the statements execute – or none of them do.


    Without transactions, a simple network outage could mean that only half of the statements are executed, possibly corrupting the data inside the database. Some pieces of customer data could be inserted into the database whereas other pieces could be lost forever.


    In contrast, while using transactions, if all statements are able to be successfully executed, a COMMIT is performed and those changes become permanent. If a failure should occur during the execution of these statements, all changes since the last commit can be undone through the use of a ROLLBACK.
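    The same all-or-nothing behavior can be sketched in a few lines of Python using SQLite as a stand-in for any SQL database (the table, column names, and failure here are invented for the illustration):

```python
import sqlite3

# Hypothetical transfer between two accounts: both UPDATEs must apply, or neither.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    # Simulate a mid-transaction failure (power loss, network outage, ...):
    raise RuntimeError("disaster strikes before bob is credited")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
    conn.commit()      # on success, both changes become permanent together
except Exception:
    conn.rollback()    # on failure, all changes since the last COMMIT are undone

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0} -- the half-finished transfer left no trace
```

    The rollback leaves the data exactly as it was at the last COMMIT, which is precisely the guarantee described above.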


    When translating Oracle PL/SQL to PostgreSQL PL/pgSQL, there are some subtle, yet important, scoping differences in how each has implemented COMMIT and ROLLBACK that all database architects should be aware of.


    Example Scenario


    Consider the following possible scenario where an application program interacts with a database function. The architect would like to do the following things in the form of a transaction:


                • INSERT a new row into a table

                • If disaster strikes and the insertion does not complete, ROLLBACK

                • Otherwise, COMMIT


    Oracle’s Way: Back to the Beginning

    Oracle’s COMMIT and ROLLBACK statements scope back to the beginning of the transaction, no matter where they are located. When an exception occurs, everything between the BEGIN up to and including the failing INSERT will be undone by an implicit ROLLBACK. If a ROLLBACK occurs on return from a function, no row will be made permanent. The Oracle ROLLBACK undoes everything since the transaction began – even work done inside functions.


    In the example code below, calling a ROLLBACK from a function within a function will roll back the rows containing both ‘1’ and ‘2’. It will return back to the very first BEGIN.


    CREATE OR REPLACE PROCEDURE testProcedure AS
    BEGIN
        INSERT INTO testTable(testColumn) VALUES ('1');
        nestedTestProcedure;
    END testProcedure;

    CREATE OR REPLACE PROCEDURE nestedTestProcedure AS
    BEGIN
        INSERT INTO testTable(testColumn) VALUES ('2');
    EXCEPTION
        WHEN others THEN
            ROLLBACK;
    END nestedTestProcedure;


    The PostgreSQL Difference


    While Oracle permits transactional statements inside a PL/SQL procedure or function, PostgreSQL does not. If we try to compile the PostgreSQL equivalent of the Oracle code above, it will not compile. It will throw an error:


                ERROR:  cannot begin/end transactions in PL/pgSQL

                HINT:  Use a BEGIN block with an EXCEPTION clause instead.


    In short, a ROLLBACK cannot span functions; it only works inside the current function. When an autonomous transaction is used (as in the workaround below), everything prior to its invocation is isolated and hence not impacted (i.e., it is neither committed nor rolled back).


    In contrast with the Oracle sample code, instead of rolling back the rows containing ‘1’ and ‘2’, PostgreSQL would only roll back ‘2’.


    CREATE OR REPLACE FUNCTION testFunction() RETURNS void
      AS $$
      BEGIN
          INSERT INTO testTable(testColumn) VALUES ('1');
          PERFORM nestedTestFunction();
      END;
      $$ LANGUAGE plpgsql;

    CREATE OR REPLACE FUNCTION nestedTestFunction() RETURNS void
      AS $$
      BEGIN
          INSERT INTO testTable(testColumn) VALUES ('2');
      EXCEPTION
          WHEN others THEN
              NULL;   -- the inner block's work ('2') is rolled back; '1' survives
      END;
      $$ LANGUAGE plpgsql;


    However, when translating Oracle code to PostgreSQL where this behavior is expected, we can implement a workaround that can behave like an autonomous transaction. An architect would choose this solution as a workaround (versus re-writing the function to become more "PostgreSQL-like") while working with an existing or mature application in order to minimize impact to the application that would interact with the database.


    The workaround involves rewriting the parent (testFunction) to become a wrapper function. We then create an autonomous, rollback child function (nestedTestFunction) which will be opened by using dblink(). In essence, we are opening another connection to the same database and running nestedTestFunction:


    CREATE OR REPLACE FUNCTION testFunction() RETURNS integer
      AS $$
      DECLARE
          randomNum    text := (random() * 9 + 1);
          cnxName      text;
          cnxString    text := 'hostaddr= port=5440 dbname=x user=y password=z';
          success      integer;
      BEGIN
          SELECT concat('cnx', randomNum) INTO cnxName;
          PERFORM dblink_connect(cnxName, cnxString);
          PERFORM dblink_exec(cnxName, 'BEGIN');
          SELECT * INTO success FROM dblink(cnxName, 'SELECT nestedTestFunction()')
             AS (status integer);
          IF (success >= 1) THEN
              PERFORM dblink_exec(cnxName, 'COMMIT');
              PERFORM dblink_disconnect(cnxName);
              RETURN 1;     -- committed
          ELSE
              PERFORM dblink_exec(cnxName, 'ROLLBACK');
              PERFORM dblink_disconnect(cnxName);
              RETURN 0;     -- rolled back
          END IF;
      END;
      $$ LANGUAGE plpgsql;

    CREATE OR REPLACE FUNCTION nestedTestFunction() RETURNS integer
       AS $$
       BEGIN
           INSERT INTO testTable(testColumn) VALUES ('1');
           INSERT INTO testTable(testColumn) VALUES ('2');
           RETURN 1;    -- good status
       EXCEPTION
           WHEN others THEN
               RETURN 0;    -- error status
       END;
       $$ LANGUAGE plpgsql;


    Inside the code, the INSERT contained inside another function will be either committed or rolled back based on the result of the function, but neither operation will impact testFunction().




    When translating PL/SQL that makes use of COMMIT and ROLLBACK to PL/pgSQL, there are subtle issues of scope beyond what is visible on the surface. It is imperative that the database architect consider whether any adaptations between the two are required to ensure data integrity.

    August 23, 2013 18:03

    Planet #BitFolk

    Phil Spencer (CrazySpence): Still Collecting Chessie

    I have moved and haven't built a new layout yet, but I am still collecting Chessie gear when I can. My newest engine is a Bachmann GP7, Chessie 5606. This engine, like my last two, is also DCC, and the detail is fairly exceptional given that it is a low-end brand.


    Over the years I have noticed Bachmann has been improving their brand. When I was a kid it was just well known that Bachmann was cheap crap that barely resembled anything that may have existed to match the road names and numbers they were using. This newest addition is fairly close to its prototype, and if I felt led to add some ladder rails to the nose and some hoses to the front, it could even rival the Athearn Genesis model of the same engine.

    I have also collected some Chessie rolling stock in the last year, and I am up to 6 Chessie-branded cars. My sons have also taken an interest in trains, as many boys do, thanks to things like Thomas the Tank Engine. They have a wooden track set with the magnet couplers. I was able to find a Chessie wooden train kit and they loved it. The boys have a tendency now to call any train they see a “Chessie System”. Thinking ahead to when they are older, I have collected many pieces of the Bachmann EZ track system so that they can partake in the hobby with dad. I have already let them play a couple of times, but right now, since they are so young, it always ends in some wheels going missing or a coupler broken. Still, it is fun to share the hobby with them.



    In other Chessie news, while taking the boys to see Thomas in St. Thomas, Ontario, I saw some real, still-in-existence Chessie System box cars! They are rusted and beat to hell, but it was nice to see the Cat in person after all these years.


    August 23, 2013 17:43

    Server Fault Meta

    Line noise check, RE: our topic matter

    Yes, this is an actual thing that I'm having to post.

    Reading a number of the recent opinion wars on meta, I really feel like it should be necessary to complete the following mental checklist before participating in a debate over how the topic matter can be better applied, unfair treatment of new users, or ego on the part of high rep users/moderators.

    1. Have I recently refreshed myself on the first paragraph of about?
    2. Do I agree with this?
    3. Have I clicked on the professionalism link on the about page, and read that too?
    4. Do I have feedback on how this definition can be improved, or how it can be better applied? (note that this allows for disagreement with the proposals of others)

    If the answer is no to any of these questions, then there isn't much that can be constructively added to discussions about how this site can operate more effectively within the scope of the topic matter. Bluntly, that person isn't even participating in the same discussion. (and it's getting tiresome)

    I contend that those points of view are fundamentally off-topic for most meta discussions about how this site can be improved.

    • If people want to make a question about why the stated goals of this site are misguided, that's perfectly acceptable. Some of us might disagree, but at least everything is kept topical within the question.
    • If people agree with the subject matter of the site but disagree with the proposed changes and/or attitudes of others, that's what everyone expects and is business as usual.
    • In all other contexts, this line of conversation is line noise. It's debating something that isn't up for debate.

    Going forward, I'd like to ask that the moderators keep an eye out for this creeping into the threads about how to improve the site again, and do what comes naturally. Folks have had their opportunity to get it out of their system. It's just another form of the ongoing problem: people ignoring what the site is about and expecting everyone else to play ball with them (albeit dressed in a wall of text).

    by Andrew B at August 23, 2013 17:37

    Art of Manliness

    Announcing the Newest Member of the McKay Family and a Two Week Hiatus


    On Tuesday, August 20, 2013  at 3:54 PM Kate and I welcomed a daughter into our family. Her name is Olive Scout McKay. Olive is for Kate’s Nana. Scout is for the character in one of our favorite books, To Kill a Mockingbird. We’ll be calling her Scout.

    Scout weighed in at 7 lbs 13 oz and measured 20 inches in length. She’s a real cutie and I’m already over the moon about her. Mom and baby are doing great. Gus is taking to being a big brother well. He’s giving Scout lots of hugs and kisses.

    We’re taking the next two weeks off to get to know this little girl (and to adjust to her erratic newborn sleeping schedule), so we won’t be publishing anything new during that time.  To get your AoM fix while we’re away, check out our archives. We have nearly 2,000 articles on the site and you can spend hours browsing and reading the manly info we have there. If you’re not sure what to read first, use the “Visit a Random Post” button on the top right side of the homepage. Or check out our Library of Random Man Knowledge — full of 5,000 manly factoids and quotes. That should keep you busy while we’re gone! We look forward to seeing you back in September.

    Kate, Gus, and I feel blessed and excited to have Scout in our family. Please give a warm AoM welcome to Scout McKay!


    by Brett & Kate McKay at August 23, 2013 17:28

    Server Fault Meta

    What is the best site to ask IT support questions?

    Sorry if this is not appropriate to ask here, but I would like to know what the best site is to ask questions about booting hardware and IT support in general.

    by jvverde at August 23, 2013 17:18



    Planet #BitFolk

    Phil Spencer (CrazySpence): Pitopia!

    So I had been hearing about the Pi for some time before I finally decided to get one myself. At first I really couldn't come up with anything to do with it, and now I have two Raspberry Pis and a plan to get at least two more in the future.

    For those of you who are not aware, the Raspberry Pi is a small, credit-card-sized computer based on an ARM CPU, with half a gig of RAM (model B), 2 USB ports, an SD card slot, and Ethernet (also model B). Now you're probably going “What's ARM?” Sigh. Any tiny gadget you have, be it a cell phone, tablet, or Ouya, is based on it. It is a low-cost, lower-power (as in electricity, not computing) architecture which is taking us all by storm!!! (and you didn't even know it, did you?)


    Anyways, back to topic. So I was looking at these and what people were doing with them, and it just wasn't striking a chord with me. Then I came across Raspbmc, a Pi Linux OS geared towards being the best XBMC a Pi could be. The software you will almost always find running a Pi is Linux, more specifically a Debian Linux offshoot, “Raspbian”, and that is the case with Raspbmc too. Raspbmc is a tailored Raspbian that is easy to set up and run as a media server immediately after install.

    The reason this got my attention is that at my trailer the stereo system went last season, and replacing it would cost a couple hundred bucks. With the Pi I could do it for less than $100, so I was off to work on my project! The first step, of course, was to order a Pi.

    The place I decided to go with was Newark/Element 14. Mainly because they come up first when you search for where to get one, and secondly because they have distribution centers EVERYWHERE, so I figured shipping wouldn't be too terrible. When I went to order, it said the backlog was 10k units and it would be a month, so based on that I went to DealExtreme for all my cables.

    Lo and behold, a week later my Pi arrives, well before the month timeline, so I am stuck with a cute little PCB and nothing to do with the damn thing.

    Eventually the cables arrived and I could play around with the Pi. Installing Raspbmc was a breeze; there are automatic tools on all the main operating systems that will do all the imaging for you, or if you are a man you can go to the CLI and use dd to directly image your card (except Windows, where you can't be a man without third-party modification).

    Raspbmc has a minimal installer that downloads everything and asks you 100 pesky installer-like questions, OR you can download the standalone version, which skips all that and immediately gives you a bootable OS (good if you are using Ethernet as your main connection). The first time I installed Raspbmc I did it the way they recommend, but I have used the standalone image since then, as it is faster and less hassle. All you need to do with the standalone is select “update now” from the Raspbmc program within XBMC's “Programs”, and then your standalone version is as up to date as the regular one, without the 100 questions.


    Some things to note about the Pi as a media server: it works best (s/best/at all/) with open standards like Xvid and MP4. If you have DVD rips or DVDs to play, you need to pay a small fee for the codec from the Raspberry Pi Foundation.


    I initially had a Patriot brand SD card backing my Pi, but after several corruptions, and it finally refusing to be a bootable disk, I have left that brand behind. I now use Lexar, and I noticed the OS performance improved, but everyone's experience may be different with different brands.

    External Storage

    I was originally just using a large SD card with what I wanted to watch that week on it, but I have moved to external storage. My 3TB MyBook works excellently with the Pi and has never had any trouble being detected or playing files off of it.

    Get a case, of course; otherwise you just have a bare PCB touching everything and you will eventually kill it. There are very simple and cheap cases to get, but I went for a more expensive PiBow, because its colour arrangement reminded me of the Apple II and the C64. It is a very smart-looking case, in my opinion.

    Some things to consider

    The Pi itself may only be $35, but keep in mind you need to buy cables, a case, storage, etc. It DOES add up, but it is a fun platform to pursue. There seem to be endless documents on elaborate electronics projects to run with it, and also on simpler things like the media server I am using mine for.


    August 23, 2013 16:53

    Planet PostgreSQL

    Hubert 'depesz' Lubaczewski: OmniPITR v1.2.0 released

    It's been a while since the last release, but the new one has finally arrived, and it has some pretty cool goodies. For starters, you can now skip creation of xlog backups – which is nice if you have a ready walarchive with all xlogs – there is no point in wasting time on creation […]

    August 23, 2013 16:51

    Kernel Planet

    Dave Jones: Weekly Fedora kernel bug statistics – August 23rd 2013

      Release                     18    19    rawhide   (total)
      Open                        314   422   89        (825)
      Opened since 2013-08-16     17    46    6         (69)
      Closed since 2013-08-16     7     28    5         (40)
      Changed since 2013-08-16    24    69    9         (102)


    August 23, 2013 16:43

    Planet Ubuntu

    Ubuntu Women: Ubuntu Women Scavenger Hunt

    The Ubuntu Women Project is happy to announce that we have put together an online scavenger hunt for women in our community to highlight facts about women in technology and help encourage the learning of interesting trivia about Ubuntu!

    Participants will answer 15 questions, each of which can be answered by searching online; see the end of this post for the link. All questions must be answered; winners will be selected randomly from the pool of completed surveys with the most correct answers.


    Three winners will be chosen, and they will have their choice of ONE of the following:

    Logitech Webcam C615

    The new virtual Ubuntu Developer Summit is online and people participate via webcam on Google+, join in with this great new Logitech webcam!


    Ubuntu Earrings and Necklace set from Boutique Academia

    Made by Boutique Academia, show your love for Ubuntu with this beautiful necklace and earring set! The winner has a choice of gold or rhodium.




    Submission rules are as follows:

    • The person submitting the answers must identify as female
    • Entrants certify that they looked for the answers themselves (it’s fine to ask for help, but the majority of the searching must be your own!)
    • Only one submission per person (we will match up the name, city and country you provide with the location we send prizes to and only send to those which match)
    • Only fully completed entries will be checked

    Entries will be accepted until 23:59 UTC on September 13th.

    Begin the Scavenger Hunt!

    To get started with the scavenger hunt go to this form: Ubuntu Women 2013 Scavenger Hunt

    If you have any questions, please contact Cheri Francis at

    Updates to this competition will be tracked on the wiki page:
    Happy hunting!

    August 23, 2013 16:18

    Planet Debian

    Bartosz Feński: 20th anniversary

    Just to let you know: we've been celebrating the 20th anniversary in Poland too.
    We spent 4 days near the Polish seaside. And we had a great time.
    Here are pictures of the cakes ;)
    One was made by my girlfriend. Guess which one ;)

    August 23, 2013 16:15

    Enrico Zini: Random notes from that other lightning talk session

    Random notes from that other lightning talk session

    YKINMKBYKIOK: "Your Kink Is Not My Kink But Your Kink Is Ok"

    As far as I'm concerned, this puts the vim vs emacs quarrel to rest, for good.

    August 23, 2013 16:11

    Planet Ubuntu

    James Hunt: Upstart 1.10 released

    Lots of goodness in this release (explanatory posts to follow):

    • upstart-local-bridge: New bridge for starting jobs on local socket connections.
    • upstart-dconf-bridge: New bridge for Session Inits to react to dconf/gsettings changes.
    • upstart-dbus-bridge: New '--bus-name' option to allow bus name variable to be included in dbus-event(7).
    • New "reload signal" stanza to allow jobs to specify a custom signal that will be sent to the main process (rather than the default SIGHUP).
    • Inclusion of Session Init sample jobs.
    • Re-exec fixes for handling chroot sessions.
    • Shutdown fix for Session Inits.
    • New python3 module and accompanying integration test suite for testing Upstart running as PID 1 and as a Session Init (privileged and non-privileged).

    The Upstart cookbook has been updated for this release.

    by (James Hunt) at August 23, 2013 15:52

    Kernel Planet

    Dave Jones: outstanding 3.11-rc6 bugs.

    Spent a lot of time this week going over older bugs I’d hit to figure out what had fallen through the cracks. Mostly for my own tracking, but there’s a few on here that really ought to be fixed before 3.11 final. There might still be a few in the list below that have now been fixed, but there was no obvious commit, and reproducing was difficult.


    memory management:



    Assorted perf / tracing bugs.


    August 23, 2013 15:51

    High Scalability

    Stuff The Internet Says On Scalability For August 23, 2013

    Hey, it's HighScalability time:

    • 5x: AWS vs combined size of other cloud vendors; Every Second on The Internet: Why we need so many servers.
    • Quotable Quotes:
      • @chaliy: Today I learned that I do not understand how #azure scaling works, instance scale does not affect requests/sec I can load.
      • @Lariar: Note how crazy this is. An international launch would have been a huge deal. Now it's just another thing you do.
      • smacktoward: The problem with relying on donations is that people don't make donations.
      • @toddhoffious: Programming is a tool built by logical positivists to solve the problems of idealists and pragmatists. We have a fundamental mismatch here.
      • @etherealmind: Me: "Weird, my phone data isn't working" Them: "They turned the 3G off at the tower because it  interferes with the particle accelerator"
      • John Carmack: In computer science, just about the only thing that’s really science is when you’re talking about algorithms. And optimization is an engineering. But those don’t actually occupy that much of the total time spent programming.
      • @gappy3000: Ideas are assets. Code is a liability. So maximize ideas/code.
    • How can spiders and flies walk up walls? See for yourself with a fun DIY on How to: test Galileo's scaling laws. An idea that is simple yet profound in its implications: "the width of an object is doubled, the surface area is squared and the volume is cubed." It means size matters: elephants can't dance or jump, while insects can walk on water. Why? Because the ratio of area to volume governs everything we do. You get to drop stuff from great heights and watch things explode (or not). What could be better?

    Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

    by Todd Hoff at August 23, 2013 15:45

    Planet Sysadmin

    Everything Sysadmin: Sysadmins/Devops needed for study

    I met Jeevitha Mahendiran at Usenix LISA last year. She is studying sysadmins and what we do. She writes:

    I'm Jeevitha Mahendiran, Graduate Student/Research Assistant, Faculty of Computer Science, Dalhousie University, Halifax, Canada. I am currently doing research on "Understanding the Use of Models and Visualization Tools in System Administration Work". The information that you share regarding your work will be very helpful for my research.

    We are seeking participants to take part in a study about the tools used by system administrators. Participants will be asked to complete an anonymous and confidential survey that should take about 20-30 minutes to finish. The study is an online survey.

    If you are interested in more information about the study, please contact Jeevitha Mahendiran by email at or proceed to the survey website at

    August 23, 2013 15:35

    LWN Headlines

    Calibre 1.0 released

    Version 1.0 of the Calibre electronic book management system has been released. "Lots of new features have been added to calibre in the last year — a grid view of book covers, a new, faster database backend, the ability to convert Microsoft Word files, tools to make changes to ebooks without needing to do a full conversion, full support for font embedding and subsetting, and many more."

    by corbet at August 23, 2013 15:25

    Friday's security updates

    CentOS has updated Xen4CentOS kernel (multiple vulnerabilities).

    Fedora has updated kernel (F19; F18: two vulnerabilities) and python-django (F19: cross-site scripting).

    Gentoo has updated acroread (many vulnerabilities) and dbus (denial of service).

    Mageia has updated libtiff (two code execution flaws), perl-Proc-ProcessTable (symlink flaw from 2011), python-django (cross-site scripting), python3 (two vulnerabilities), rubygem-passenger (M3: insecure tmpfile usage), spice (denial of service), and znc (M3: denial of service).

    Mandriva has updated perl-Proc-ProcessTable (symlink flaw from 2011), python-django (ES5: cross-site scripting), and spice (BS1: denial of service).

    openSUSE has updated cacti (12.2, 12.3: two vulnerabilities) and strongswan (11.4: denial of service).

    Oracle has updated kernel (OL5: multiple vulnerabilities).

    SUSE has updated firefox (multiple vulnerabilities).

    by jake at August 23, 2013 15:20

    Guido Fawkes' blog

    WATCH: “Do We Have Anyone Pro-Miliband?” “No”

    Another phone in, another damning verdict on Ed Miliband. The Wright Stuff couldn’t find a single caller to stick up for Ed, despite it being aired in the middle of the day, prime time for Labour voters:

    This is becoming a theme.

    See also: Guy News: Public Have Their Say on Ed Miliband

    Tagged: GuyNews.TV

    by WikiGuido at August 23, 2013 15:20

    There, I Fixed It

    Programmers Being Dicks

    Samsung’s new ad for an SSD. Which women apparently won’t understand, but they’ll love it nonetheless.

    (Edit: the comment we linked to, a reader points out, uses an ableist slur. Oh well.)

    August 23, 2013 14:53

    Guido Fawkes' blog

    Galloway Blames Israel For Syria Chemical Weapons Attack

    George Galloway’s performance on Iranian state television today deserves a wider audience:

    Almost unbelievable that this man is a British MP.

    Via Galloway Watch.

    Tagged: GuyNews.TV

    by WikiGuido at August 23, 2013 14:48


    Data Center Knowledge

    Schneider Electric’s DCIM Tool Leverages Intel for Remote KVM

    Energy management specialist Schneider Electric has updated its Data Center Infrastructure Management (DCIM) software to provide server access without the need for additional hardware. The company worked with Intel’s recent Virtual Gateway technology for the new product module in its StruxureWare...

    by Jason Verge at August 23, 2013 14:27

    Friday Funny: How Remote Do You Go?

    It's Friday! Before you take off for the weekend, give us your ideas for a caption for our new Data Center Knowledge cartoon.

    by Colleen Miller at August 23, 2013 14:00

    IBM Adds Third Data Center in Emerging Market of Peru

    IBM is opening a new $8 million data center in Lima, Peru, citing an increasing demand for information technology services in the emerging market, particularly around cloud and big data.

    by Jason Verge at August 23, 2013 14:00

    Splunk Boosts Visibility For VMware 3.0 Environments

    Splunk updates its application for VMware 3.0, NaviSite launches a Director IaaS platform supporting VMware vCloud Director, FalconStor enhances its storage solutions for VMware environments, and HotLink launches a VMware disaster recovery offering to Amazon Web Services.

    by John Rath at August 23, 2013 13:45

    Avaya Launches Software-Defined Data Center Framework

    Leveraging OpenStack and its own Fabric Connect technology, Avaya launched its software-defined data center framework and roadmap. Avaya's Software-Defined Data Center (SDDC) framework uses the OpenStack cloud computing platform.

    by John Rath at August 23, 2013 13:24

    Guido Fawkes' blog

    Dark Handy-Cock Sex Claims Made Public

    Private Eye have self-confessed teen fondler Mike Hancock’s defence for his ongoing sexual harassment investigation. Apparently he considers the alleged victim allowing him into her home to amount to consent:

    “In order for Hancock to have access to [her] home she would have had to have let him in. In other words, she clearly consented to any actions about which she now makes complaint.”

    Good luck with that one. The full allegation is pretty dark:

    “It is not a trivial complaint that Hancock attempted to force his tongue into her mouth, that he tried to part her legs with his foot or that he exposed his penis and invited her to masturbate him. Nor is it a trivial complaint that Mr Hancock used his position and status as both an MP and councillor to target, groom and exploit for his own purposes a vulnerable woman.”


    See, not that hard to give credit.

    Tagged: Hancock

    by WikiGuido at August 23, 2013 13:00


    Guido Fawkes' blog

    Quote of the Day

    Victim’s solicitors allege that Mike Hancock MP…

    “…exposed his penis and invited her to masturbate him.”

    Tagged: Hancock

    by Guido Fawkes at August 23, 2013 12:45

    Planet Ubuntu

    Sebastian Kügler: KDE Frameworks 5: Plugin Factory Guts

    In this article, I explain changes to the plugin loading mechanism in KDE Frameworks 5. The article is intended for a technical audience with some affinity to KDE development.

    Over the past weeks, I’ve spent some time reworking the plugin system in KDE Frameworks 5. The original issue I started with is that we are shifting from a plugin system that is mostly KDE specific to making more use of Qt’s native plugin system. Qt’s plugin system has changed in a few ways that caused many of our more complex plugins to not work anymore. On the other side, moving closer to Qt’s plugins makes our code easier to use from a wider range of applications and reduces dependencies for those that just want to do plugin loading (or extending their app with plugins). A mostly complete, and I must say spiffy, solution is now in place, so here’s a good opportunity to tell a little about the technical background of this, what the implications for application developers are, and how you can use a few new features in your plugins.

    Bye bye, K_EXPORT_PLUGIN

    In the KDE Platform 4, the K_EXPORT_PLUGIN macro did two things. It provided an entry-point function (qt_plugin_instance()) which loads the plugin. With Qt5, the need for the entry point is gone: plugins are now QObject based, so the methods defined in Q_INTERFACE can be relied on as entry points. K_EXPORT_PLUGIN also provided PLUGIN_VERIFICATION_DATA, which can be used to coarsely identify whether a plugin was built against the right version. In most cases this wasn’t very useful, as it would only catch a relatively small class of errors. The plugin verification data is missing in the new implementation so far, but we plan to bring it back in another form: being able to specify the version in the plugin, and checking against that. This part is not yet there, but it’s also not a problem for now, as it’s not required and won’t produce fatal errors.


    The heavy lifting is done by the K_PLUGIN_FACTORY macro, which was often used together with K_EXPORT_PLUGIN: you create a factory class using this macro and then, in the old world, you’d use K_EXPORT_PLUGIN to create the necessary entry points. Since we’re already defining the plugin factory instance using Q_DECLARE_INTERFACE, Qt is happy with that, and the stuff in K_EXPORT_PLUGIN becomes useless. Basically, we’ve moved the interesting bits from K_EXPORT_PLUGIN to K_PLUGIN_FACTORY. For porting, that means that in the vast majority of cases you can just remove K_EXPORT_PLUGIN from your code and be done. (If you don’t remove it, it’ll warn during build but will still work, so it’s source-compatible. In some cases .moc can’t pick up the macro; in that case, either move it into the .h file or include the corresponding .moc file in your .cpp code.)

    K_PLUGIN_FACTORY, or rather its base class, KPluginFactory is pretty neat. It’s mostly assembled by macros and templates, which makes it a bit hard to read and understand, but once you realize what kind of effort is saved for you by that, you’ll happily go for it (you don’t have to care about its internals as it is well encapsulated, of course). The really interesting piece is this:

    T *create(const QString &keyword, QObject *parent = 0, const QVariantList &args = QVariantList());

    This is a method available in the factory (generated by K_PLUGIN_FACTORY) that is the base of your plugin, basically what you get from QPluginLoader::instance() from your plugin once you’ve loaded the .so file. You basically call (roughly)

    MyFancyObject* obj = pluginLoader->instance()->create<MyFancyObject*>(this);

    to load your code into the app hosting the plugin. (Of course, MyFancyObject can be either the class actually defined in the plugin or, more commonly, its baseclass; you don’t want to include your plugin’s header in the app, as that defeats the point of the plugin in the first place.) You only do the above if you go through QPluginLoader directly; KService and Plasma::PluginLoader can do most of this work for you (also, here the API didn’t change, so no worries).

    K_PLUGIN_FACTORY_WITH_JSON or where is the metadata?

    Qt5’s new plugin system allows you to bake metadata into the plugin binary itself. It is specified as an extra argument to the Q_PLUGIN_METADATA macro and basically points to a json file containing whatever info you want in the plugin. The metadata is compiled into an ELF section of the plugin, can be found very fast, and the plugin itself doesn’t need to be dlopened in order to read it. With Qt’s previous plugin system, the plugin shared object files had to be loaded, which significantly impacted performance.

    This mechanism is very useful for something we’ve been doing in KDE for a long time, namely the data included in the .desktop files. Those are installed separately, into a services install dir, indexed by ksycoca for faster access and searching. These .desktop files (which really are the plugin’s metadata) contain all the usual stuff: name, icon, author, etc., but also the plugin name, dependencies and, most importantly, the ServiceType (e.g. Plasma/DataEngine). KService uses them to find a plugin (often by service type) and load it by the plugin name.

    Having the metadata baked into the plugin allows us to not use KServiceTypeTrader (which handles the searching through the sycoca cache) but to ask QPluginLoader directly. Right now, we’re still using sycoca for the lookup, but this mechanism allows us to move away from it in the future.

    Something we already use the metadata for, at least in Plasma::DataEngine, is the creation of a KPluginInfo object. This object basically exposes the metadata and can be instantiated from a .desktop file. With the above changes, I also added a constructor to KPluginInfo that instantiates a KPluginInfo object from the json metadata baked into the plugin. This is one nail in the coffin of KServiceTypeTrader (and by extension KSyCoCa), but obviously not its death blow.

    K_PLUGIN_FACTORY_WITH_JSON simply takes an extra argument, the metadata file, and bakes it into the plugin (by inserting it, internally, into the Q_PLUGIN_METADATA macro which is included in the KPluginFactory implementation).


    In order to ease the transition from .desktop files to baked-in metadata, we introduced a cmake macro to help you with that. It’s pretty simple, you just write (in your CMakeLists.txt):


    and during build time, a file called mypluginmetadata.json will be generated. You can include this file using the K_PLUGIN_FACTORY_WITH_JSON macro in your code, and the metadata will be baked in. When the plugin is loaded, your ctor will receive a QVariantList argument, which you can just pass to KPluginInfo to get a valid plugininfo object back. If you’re interested in what the .json file looks like, either peek into your build directory, or use the command

    $ desktoptojson -i mydesktopfile.desktop

    to generate a json file. (You usually want to run this at build time, and not put it in your repo, since otherwise, changes to the .desktop file, for example translations, will not be picked up.)
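    For reference, the CMakeLists.txt call mentioned earlier is presumably along these lines (the kservice_desktop_to_json() macro name comes from the summary below; the target name, file name and exact argument order are my assumptions):

    ```cmake
    # Hypothetical sketch: generate the .json metadata from the .desktop file at build time
    kservice_desktop_to_json(myplugin mydesktopfile.desktop)
    ```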

    So, tl;dr

    • Changes are largely source-compatible (K_EXPORT_PLUGIN can just go away; you might have to include the .moc file explicitly)
    • You can optionally use JSON metadata in your plugin to create KPluginInfo objects

    If you want to create a plugin, do the following:

    • In your CMakeLists.txt file, convert your old .desktop file at build-time using kservice_desktop_to_json() and use the resulting file (replace .desktop with .json) in the following step
    • In your plugin .cpp file, add a K_PLUGIN_FACTORY macro; this does Q_DECLARE_INTERFACE and Q_PLUGIN_METADATA for you. Optionally pass a .json file
    • Use QPluginLoader, KServiceTypeTrader or Plasma::PluginLoader to load your plugin.

    These changes are documented in the API documentation of KPluginFactory and in our KDE5Porting document.

    Personal thanks go out to kdelibs hacker extraordinaire David Faure, who has been patiently guiding me through making these changes to our plugin system.

    August 23, 2013 12:17


    Planet Ubuntu

    Mattia Migliorini: Use Roboto Condensed with @font-face CSS rule

    @font-face is a very useful CSS rule that allows you to include fonts directly in your project, without relying on them being installed on the user’s machine. This greatly enlarges the number of fonts you can use in your projects, and it is very handy when all font styles are included in one single file. But what if every style lives in a stand-alone file? At first glance this could be a problem. Today we are going to find out how to use the @font-face rule to define a single font-family while adding multiple files, one for every font style.

    Roboto Condensed

    We are going to use Roboto Condensed as an example. You can download the ttf files from Google Fonts. Be sure to check all styles!

    Define the CSS @font-face property

    Now that you’ve downloaded the font files, we’re ready to go ahead and write the CSS code necessary for our purpose. Create this file in the same directory where you put the font files and call it whatever you want.
    First of all, let me make a note here: we could define different font-family names for every style, but then we’d have to apply those to every element like <em>, <i>, <strong>, <b>, and so on. Not very handy.

    Let’s go deeper with our analysis. What we have here is three font weights (light, regular, bold), each one with two font styles (normal and italic). So we need six @font-face rules. These rules must differ in some properties. We already said that we want all those fonts to have the same font-family property, which we’re going to call “Roboto Condensed” (yeah, an original name). We’ll handle the weights with the font-weight property, which will take the three different values stated before, expressed in numbers: 300 (light), 400 (regular), 700 (bold). The styles are defined by the font-style property, which will take two values: normal and italic.

    Now it’s time to write the CSS code for our Roboto Condensed Regular:

    @font-face {
      font-family: "Roboto Condensed";
      font-style: normal;
      font-weight: 400;
      src: local('Roboto Condensed Regular'), local('RobotoCondensed-Regular'), url("RobotoCondensed-Regular.ttf") format('truetype');
    }

    Let’s explain the last property: src. This property’s task is to load the font file, which is done by its url value (that defines the relative path to the file). I added the format value to inform the browser about the format of the font file to load.

    What about those local values? They’re not required, but very useful: if the user’s machine already has a font installed under one of those names, the browser will load it directly from the client computer instead of downloading it from the server. This obviously saves bandwidth and shortens the time needed to load the web page.

    Now that we know how it works, it’s very easy to go through all the other font styles and weights simply by changing the values of the corresponding properties. Obviously we have to change the value of the src property too.

    The result is as follows. Feel free to download the gist and customize it.
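    For illustration, one of the five remaining rules, the Bold Italic variant, would plausibly look like this (the file names assume Google Fonts' download naming):

    ```css
    @font-face {
      font-family: "Roboto Condensed";
      font-style: italic;
      font-weight: 700;
      src: local('Roboto Condensed Bold Italic'), local('RobotoCondensed-BoldItalic'), url("RobotoCondensed-BoldItalic.ttf") format('truetype');
    }
    ```

    With all six rules defined, a plain font-family: "Roboto Condensed" declaration plus the usual font-weight and font-style properties selects the right file automatically.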

    The post Use Roboto Condensed with @font-face CSS rule appeared first on deshack@web:~$.

    August 23, 2013 11:41

    Guido Fawkes' blog

    Friday Caption Contest (New Batman Announced Edition)

    UPDATE: Ed Balls himself gets involved: “I can’t believe they gave Ben Affleck the part”.

    Tagged: Caption Contest

    by WikiGuido at August 23, 2013 11:24


    Guido Fawkes' blog

    Yet Another Hacked Off Conflict of Interest

    What is it with Hacked Off’s inability to express a view on a single subject without a great whopping conflict of interest? Now it seems they have extended their remit from control of the press to what our children should be learning at school. This time they’ve been irked by the decline in students studying Media Studies, presumably a secret conspiracy organised by the state, newspapers and Rupert Murdoch to make children learn about science.

    “Last week’s announcement of the ‘A’ level results provided an opportunity for sections of the press to indulge a pet obsession of bashing media studies as an academic discipline. The negativity is a particularly British obsession which you won’t find in the US where there is a much more healthy interchange between academics researching media issues and the media itself. The attitude found depressingly often in some British newsrooms, that studying the workings of the press in a systematic and critical way should be a bar to working in the business, would be greeted with incomprehension by most in the US.”

    Of course Hacked Off fail to mention the massive conflict of interest in them bemoaning the decline of Media Studies: studying the media is exactly what their half-baked, third-rate academics like Brian Cathcart, Professor of Journalism at Kingston University, make a living out of teaching up and down the country. Of course they want children to study it, otherwise they’d be out of a job. Schoolboy stuff…

    Tagged: Media Guido

    by WikiGuido at August 23, 2013 10:45

    “Gibraltar is Spanish” Say Telegraph Readers


    El Telegrapho readers want their Rock back. An armada of Spanish voters have brought a welcome traffic surge to this online poll after a Twitter campaign in Spain. Tu mama calata Gallagher…

    Tagged: Media Guido, Telegraph

    by Guido Fawkes at August 23, 2013 10:28

    Planet PostgreSQL

    Dimitri Fontaine: Trigger Parameters

    Sometimes you want to compute values automatically at INSERT time, like for example a duration column out of a start and an end column, both timestamptz. It's easy enough to do with a BEFORE TRIGGER on your table. What's more complex is to come up with a parametrized spelling of the trigger, where you can attach the same stored procedure to any table even when the column names are different from one another.

    I found a kind of trigger that I can use!

    The exact problem to solve here is how to code a dynamic trigger where the trigger's function code doesn't have to hard code the field names it will process. Basically, PLpgSQL is a static language and wants to know all about the function data types in use before it compiles it, so there's no easy way to do that.

    That said, we now have hstore and it's empowering us a lot here.

    The example

    Let's start simple, with a table having a d_start and a d_end column where to store, as you might have already guessed, a starting and an ending timestamp (with timezone). The goal will be to have a parametrized trigger able to maintain a duration column for us automatically, something we should be able to reuse on other tables.

    create table foo (
      id serial primary key,
      d_start timestamptz default now(),
      d_end timestamptz,
      duration interval
    );

    insert into foo(d_start, d_end)
         select now() - 10 * random() * interval '1 min',
                now() + 10 * random() * interval '1 min'
           from generate_series(1, 10);

    So now I have a table with 10 lines containing random timestamps, but none of them of course has the duration field set. Let's see about that now.

    Playing with hstore

    The hstore extension is full of goodies, we will only have to discover a handful of them now.

    First thing to do is make hstore available in our test database:

    # create extension hstore;

    And now play with hstore in our table.

    # select hstore(foo) from foo limit 1;
     "d_end"=>"2013-08-23 11:34:53.129109+01",
     "d_start"=>"2013-08-23 11:16:04.869424+01",
    (1 row)

    I edited the result for it to be easier to read, splitting it on more than one line, so if you try that at home you will have a different result.

    What's happening in that first example is that we are transforming a row type into a value of type hstore. A row type is the result of select foo from foo;. Each PostgreSQL relation defines a type of the same name, and you can use it as a composite type if you want to.

    Now, hstore also provides the #= operator which will replace a given field in a row, look at that:

    # select (foo #= hstore('duration', '10 mins')).* from foo limit 1;
     id |            d_start            |             d_end             | duration 
      1 | 2013-08-23 11:16:04.869424+01 | 2013-08-23 11:34:53.129109+01 | 00:10:00
    (1 row)

    We just replaced the duration field with the value 10 mins, and to have a better grasp at what just happened, we then use the (...).* notation to expand the row type into its full definition.

    We should be ready for the next step now...

    The generic trigger, using hstore

    Now let's code the trigger:

    create or replace function tg_duration()
     -- (
     --  start_name    text,
     --  end_name      text,
     --  duration      interval
     -- )
     returns trigger
     language plpgsql
    as $$
    declare
       hash hstore := hstore(NEW);
       duration interval;
    begin
       duration :=  (hash -> TG_ARGV[1])::timestamptz
                  - (hash -> TG_ARGV[0])::timestamptz;
       NEW := NEW #= hstore(TG_ARGV[2], duration::text);
       RETURN NEW;
    end;
    $$;
    And here's how to attach the trigger to our table. Don't forget the FOR EACH ROW part, or you will have a hard time understanding why you can't access the details of the OLD and NEW records in your trigger: triggers default to being FOR EACH STATEMENT.

    The other important point is how we pass down the column names as argument to the stored procedure above.

    create trigger compute_duration
         before insert on foo
              for each row
     execute procedure tg_duration('d_start', 'd_end', 'duration');
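    As a side note of my own (not from the original post), the same stored procedure could plausibly be attached as an UPDATE trigger too, so the duration stays correct when the timestamps change:

    ```sql
    -- Hypothetical companion trigger: recompute duration on UPDATE as well
    create trigger compute_duration_on_update
         before update of d_start, d_end on foo
              for each row
     execute procedure tg_duration('d_start', 'd_end', 'duration');
    ```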

    Equipped with the trigger properly attached to our table, we can truncate it and insert some rows again:

    # truncate foo;
    # insert into foo(d_start, d_end)
           select now() - 10 * random() * interval '1 min',
                  now() + 10 * random() * interval '1 min'
             from generate_series(1, 10);
    # select d_start, d_end, duration from foo;
                d_start            |             d_end             |    duration     
     2013-08-23 11:56:20.185563+02 | 2013-08-23 12:00:08.188698+02 | 00:03:48.003135
     2013-08-23 11:51:10.933982+02 | 2013-08-23 12:02:08.661389+02 | 00:10:57.727407
     2013-08-23 11:59:44.214844+02 | 2013-08-23 12:00:57.852027+02 | 00:01:13.637183
     2013-08-23 11:50:18.931533+02 | 2013-08-23 12:00:52.752111+02 | 00:10:33.820578
     2013-08-23 11:53:18.811819+02 | 2013-08-23 12:06:30.419106+02 | 00:13:11.607287
     2013-08-23 11:56:33.933842+02 | 2013-08-23 12:01:15.158055+02 | 00:04:41.224213
     2013-08-23 11:57:26.881887+02 | 2013-08-23 12:05:53.724116+02 | 00:08:26.842229
     2013-08-23 11:54:10.897691+02 | 2013-08-23 12:06:27.528534+02 | 00:12:16.630843
     2013-08-23 11:52:17.22929+02  | 2013-08-23 12:02:08.647837+02 | 00:09:51.418547
     2013-08-23 11:58:18.20224+02  | 2013-08-23 12:07:11.170435+02 | 00:08:52.968195
    (10 rows)


    Thanks to the hstore extension we've been able to come up with a dynamic solution where you give the names of the columns you want to work with at CREATE TRIGGER time, rather than hard-coding them in a series of stored procedures that all end up alike and become a pain to maintain.

    August 23, 2013 10:08

    Strange Beaver

    Planet Puppet

    Guido Fawkes' blog

    WATCH: Bercow’s Impersonations of Tory MPs

    After Gove sat in the Speaker’s chair and impersonated him during the Tory parliamentary party photo shoot before recess, Bercow gets his revenge. Well sort of.


    Via Guardian.

    Tagged: GuyNews.TV, Michael Gove, Speaker

    by WikiGuido at August 23, 2013 09:27

    ASCII Art Farts


     |        o__O_\o                                                
     |       __o  \ o                                                
     |      __o| __) o           _______                             
     |      | || \_/  o o       |  ___  |     WELP IT'S LATE AUGUST  
          |_|_||/\_|      o     | |   |                              
         || | |/_/\_\      o    | |__         TIME TO START PUTTING  
         ||_|_/I/  \_\     o    |             UP THE CHRISTMAS LIGHTS
         |___/_/    \_\   o     |()                                  
            /_/      \_\ o      |                                    
           /_/        \_\ o o   |                                    
          / /    ejm   \ \    o |_ o  o  o  o                        
                                o o            o                     

    by (ASCII Art Farts: de) at August 23, 2013 07:00

    Planet HantsLUG

    Planet PostgreSQL

    Leo Hsu and Regina Obe: CREATE SCHEMA IF NOT EXISTS in 9.3 and tiger geocoder

    One of the new features in PostgreSQL 9.3 is CREATE SCHEMA IF NOT EXISTS someschema;. We were so excited about this new feature that we started using it in the tiger geocoder loader routine. For some reason we thought it had been available since 9.1, which gained CREATE TABLE IF NOT EXISTS sometable; (which we noted in Sweat the small stuff, it really matters).
    Continue reading "CREATE SCHEMA IF NOT EXISTS in 9.3 and tiger geocoder"

    August 23, 2013 05:21

    Jeremy Zawodny

    C-130 Taking a run at the Rim Fire

    Shot from the Pine Mountain Lake Marina earlier today (August 22nd).

    C-130 Making a Drop Run

    The next photo is a picture of a plume that blew up east of Pine Mountain Lake Airport.

    Plume East of PML Airport

    by Jeremy Zawodny at August 23, 2013 04:38

    Planet Sysadmin

    Chris Siebenmann: Looking at how many viruses we've seen in email recently

    Once upon a time people were very worried about viruses being spread through email and devoted a lot of time and effort to eradicating them (sometimes going so far as to refuse all zipfiles and the like). The last time I looked at this we had very few viruses being recognized, but that was a couple of years ago and today I was curious to see if things had changed.

    (Technically what I am actually looking at is the amount of detected malware. Viruses are only one of the types of malware that can be spread through email.)

    Because our email system does two stages of filtering I have to give two sets of numbers. All of these are over the last 30 days, because I decided that was a good time range for 'current activity'. First, in our SMTP-time milter based filtering, which only covers some email, we checked 44,000 messages and found 316 'viruses'. This is actually highly misleading, because our commercial black box spam+AV filter classifies some phish messages as viruses instead of plain spam. It turns out that most of the detected viruses were in fact phishing messages: 232 out of 316, leaving 84 real viruses.

    The main anti-spam processing (which every accepted email goes through) handled 503,000 messages and found 2,445 viruses. Again this includes some phishing messages but this time a lot fewer, only 913. That leaves 1,532 real viruses or a detected virus rate of 0.3% of our incoming email.
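The arithmetic checks out; a quick recomputation of the figures from the two paragraphs above:

```python
# Recompute the milter-stage and main-stage numbers from the post.
milter_detected, milter_phish = 316, 232
main_total, main_detected, main_phish = 503_000, 2_445, 913

milter_real = milter_detected - milter_phish  # real viruses at the milter stage
main_real = main_detected - main_phish        # real viruses in the main run
rate_pct = 100 * main_real / main_total       # detected-virus rate of incoming email

print(milter_real, main_real, round(rate_pct, 1))  # → 84 1532 0.3
```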

    Actual malware is potentially very damaging, so I'm glad we have the anti-virus filtering even if we don't see many of them. I might feel differently if we paid any significant amount of money for it (although there are free options if we ever need them).

    (I was going to say something about classifying phish spam as malware but my thoughts on this are long enough that I want to put them in a separate entry.)

    August 23, 2013 04:16

    Diesel Sweeties

    Five Reasons You Deserve A Cup Of Coffee Right Now

    fourth doctor bacon scarf

    This comic is 100% factual. You deserve coffee.

    August 23, 2013 04:16


    Planet PostgreSQL

    Josh Williams: Log Jam: Be careful with local syslog

    All they really wanted to do was log any query that takes over 10 seconds. Most of their queries are very simple and fast, but the application generates a few complicated queries for some actions. Recording anything that took longer than 10 seconds let them concentrate on optimizing those. Thus, the following line was set in postgresql.conf:

    log_min_duration_statement = 10

    Log Everything

    A little while back, Greg wrote about configuring Postgres to log everything, and the really good reasons to do so. That isn't what they intended to do here, but it is effectively what happened. The integer in log_min_duration_statement represents milliseconds, not seconds. With a 10ms threshold it wasn't logging everything the database server was doing, but enough that this performance graph happened:

    Reconstructed I/O Utilization

    That is, admittedly, a fabricated performance plot. But it's pretty close to what we were seeing at the time. The blue is the fast SAS array where all the Postgres data resides, showing lower than normal utilization before recovering after the configuration change. The maroon behind it is the SATA disk where the OS (and /var/log) resides, not normally all that active, showing 100% utilization and dropping off sharply as soon as we fixed log_min_duration_statement.
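For reference, the one-line fix: since the parameter is in milliseconds, the ten-second threshold they were actually after reads:

```
# postgresql.conf -- log_min_duration_statement is in milliseconds
log_min_duration_statement = 10000   # log statements taking over 10 seconds
```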

    It took a few minutes to track down, as we were originally alerted to application performance problems, but once we spotted the disk I/O metrics it didn't take long to find the errant postgresql.conf setting. That the OS disk jumped to 100% with so much log activity isn't surprising, but the database resides entirely on separate disks. So why did it affect the database so much?

    syslog() and the surprise on the socket

    If you're used to using syslog to send your log messages off to a separate server, you may be rather surprised by the above. At least I was; by default it'll use UDP to transmit the messages, so an overloaded log server will simply result in the messages being dropped. Not ideal from a logging perspective, but it keeps things running if there's a problem on that front. Locally, messages are submitted to a dgram UNIX socket at /dev/log for the syslog process to pick up and save to disk or relay off to an external system.

    The AF_UNIX SOCK_DGRAM socket, it turns out, doesn't behave just like its AF_INET UDP counterpart. Ordering of the datagrams is preserved and, more importantly here, a full buffer will block rather than drop the messages. As a result in the case above, between syslog's file buffer and the log socket buffer, once the syslog() calls started blocking, each Postgres backend stopped handling traffic until its log messages made it out toward that slow SATA disk.
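The blocking behaviour is easy to demonstrate; this is a minimal Python sketch (not from the original post), filling a SOCK_DGRAM UNIX socket pair with no one reading the other end:

```python
import socket

# A local syslog socket is AF_UNIX/SOCK_DGRAM. Unlike UDP, a full buffer
# makes the sender block (or, in non-blocking mode, fail with EAGAIN)
# rather than silently dropping datagrams. Simulate it with a socketpair:
rx, tx = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
tx.setblocking(False)  # non-blocking, so the demo errors instead of hanging

sent = 0
try:
    while True:
        tx.send(b"x" * 512)  # stand-in for log messages nobody is reading
        sent += 1
except BlockingIOError:
    # A blocking writer -- like a Postgres backend calling syslog() --
    # would stall at this point until the reader caught up.
    pass

rx.close()
tx.close()
print(f"buffer absorbed {sent} datagrams before the sender would block")
```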

    As of now, this system has the Postgres logs on the faster array, to mitigate the issue if there are any logging problems in the future. But if you're looking at leaning on syslog to help manage high volumes of log entries, just be aware that it doesn't solve everything.

    August 23, 2013 02:40

    Planet Debian

    Jose Luis Rivas: Too many authentication failures on SSH

    It seems that having several SSH keys on the same system starts causing issues with some SSH servers: the client offers each key in turn, every rejected key counts as a failed authentication attempt, and a server with a low MaxAuthTries setting disconnects before the right key is tried. The issue is that while connecting it returns:

    Too many authentication failures for xxxxxx.

    The fix for this is very simple and quick. Open up your ~/.ssh/config and put:

    IdentitiesOnly yes

    You can add that per-host or globally (unindented, it can even be the first line in the config file!).
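Scoped to a single host it looks like this (the host and key file names here are only placeholders):

```
# ~/.ssh/config
Host example.com
    IdentityFile ~/.ssh/id_example
    IdentitiesOnly yes
```

With IdentitiesOnly set, ssh offers only the identities configured for that host rather than every key the agent holds, so the server never sees a string of failed attempts.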

    August 23, 2013 01:38

    Planet Ubuntu

    Sam Hewitt: Arbitrary Moka Update

    As you very well know I develop/design the Moka icon set & it’s arbitrary announcement time!


    Major Revisions

    Just a quick list of the notable changes & additions to Moka:

    1. Nearly all known folder icons have been completed.
    2. Icons for applications now include most of the Unity web apps and many popular Google Chrome web apps.
    3. Inclusion of logos for most major distributions.
    4. Addition of many more toolbar/action icons.
    5. Revisions made to both the dark and light panel icons.
    6. A few icons for Steam Linux games are included, with many more to come.

    Download & Install

    As always: it’s available for Ubuntu and its kin from the PPA, which supports 12.04 and beyond.

    sudo add-apt-repository ppa:snwh/moka-icon-theme-daily
    sudo apt-get update && sudo apt-get install moka-icon-theme moka-icon-theme-dark

    Or you can download either as a zip file:


    And install by extracting and running the INSTALL script with or without root, depending on where you would like it installed: /usr/share/icons or ~/.icons.

    Also, if you’re reading this as a fan of Moka and like my work, please consider donating to support both myself and Moka’s development. :)

    August 23, 2013 01:19

    Daring Fireball

    Why We Should Care About Ichiro’s 4,000th Pro Hit

    Can’t wait to see him win an improbable World Series title in two months.

    by John Gruber at August 23, 2013 00:42

    August 22, 2013


    Art of Manliness

    Krav Maga Technique of the Month: Overhand Direct One-Handed Strike Defense

    Editor’s Note: We had such a great response to our Primer on Krav Maga article back in July, we thought many of you would be interested in learning more about this devastatingly effective martial art. To that end, each month we’ll publish a different krav maga technique explained by krav maga expert and author, David Kahn. Many of the techniques that David will share with us are featured in his latest book, Krav Maga Weapon Defenses.

    The Israeli krav maga self-defense system has achieved global recognition for its efficiency, simplicity, and, when required, brutal effectiveness. Krav maga’s world-renowned defense moves against weapons were developed for a modern army. Over the next few months, we’ll take a look at ways to defend against various attacks using impact weapons.

    Impact weapon attacks can come in many forms — baton, hammer, crowbar, or any number of weapon-like objects. Impact weapons (along with edged weapons) are often referred to in krav maga parlance as “cold weapons.” Attacks can come from a myriad of directions, heights, and angles in single-swing attacks. The three fundamental principles of defense are either (1) to close the distance between you and the assailant while deflecting-redirecting the attack, (2) to disengage until you recognize the correct timing to then close the distance, or (3) to retreat straight away.

    Close the distance. The end of the weapon generates the most force, as the assailant’s wrist is used as a fulcrum. Therefore, the most dangerous range of the attack is to be struck with the very end of the weapon. In other words, the object’s momentum decreases the closer you come to the assailant’s swinging wrist. That’s why it’s vital to close the distance between you and your attacker as quickly as possible. Optimally, the distance between the defender and the assailant can be closed before a weapon is deployed while debilitating the adversary with strong combatives, blocking access to the weapon, and achieving dominant control. If the weapon is successfully deployed and put into action, closing the distance will allow the defender to either deflect-redirect or block the weapon, the majority of the time in combination with body defenses, while delivering withering counterattacks. As with all krav maga defenses, the hand always leads the body to deflect-redirect in conjunction with simultaneous multiple counterattacks.

    Time correctly. Another essential to a successful defense is precise timing; closing the distance and using the correct tactic at the correct time. Fight timing is best thought of as the fusion of instinct with simultaneous decision making to either pre-empt the attack, move off the line of attack/fire, deflect-redirect, control the weapon and strike, or retreat from harm’s way. In other words, fight timing is harnessing instinctive body movements while seizing or creating opportunities to defend both efficiently and intelligently. Defined yet another way, fight timing is your ability to capitalize on a window of opportunity offered by your opponent or to create your own opportunity to end the confrontation using whatever tactics come instinctively to you. In short, you’ll attack the attacker. Importantly, the tactics and techniques are designed to provide the defender with a pre-emption capability prior to a weapon being deployed. The goal is not to allow an assailant to get the drop on you. Your recognition of his intent and body language literally and figuratively will allow you to cut the legs out from under him.

    Retreat straight away. Once you see that the threat has been neutralized, retreat as quickly as possible to avoid further attacks.

    Below, we take a look at how to defend against a common impact weapon attack: the overhand one-handed strike with a blunt object like a bat or crowbar.

    Overhand Direct One-Handed Strike Defense

    One of the most typical attacks with a blunt object is an overhead swing. In this technique we assume the assailant is using his right hand and the defender is squared up or face-to-face. You will execute the defense with your sameside (left) arm and counterpunch with your right arm while controlling the weapon with your left.

    Your goal is to close the distance to intercept and deflect-redirect the impact weapon harmlessly over your shoulder while delivering a simultaneous punch to the throat, jaw, or nose, trapping the weapon arm to remove it from the assailant’s grip while delivering more retzev (continuous motion) combatives. One way to practice the deflecting-stabbing movement of the defense is to simulate diving into a pool with your arms in a “V” motion to pierce the water while keeping your legs straight. Keep the fingers together and simply touch both of your hands together at the fingertips, resembling the inverted “V.” Do not touch your palms together, only your fingertips. Now, drop one arm into a straight punch position. Continue building this defense by aligning your deflecting-redirecting hand with a forward body lean, burying your chin into your shoulder.


    The forward combat lean achieves two purposes: it both defeats the attack and protects your head. Essentially, you are diving/bursting into your assailant with the sameside arm and leg to close the distance while deflecting-redirecting the strike and simultaneously counterstriking. Another way to think about aligning your deflecting-redirecting arm is to stand in a neutral stance and jettison your arm directly out to meet an imaginary incoming attack. Proper arm alignment requires a slight curve in your hand that will intercept the attack. Keep the fingers together and the thumb attached to the hand; do not allow the thumb to jut out because of the danger in breaking it. The deflecting-stabbing defense, when timed correctly and with proper interception alignment, will redirect the object harmlessly along your arm and over your head, glancing off your back.


    Time the defense and counter-attack punch together. The next (literal) step forward is with your left leg, closing the distance to the attacker. Remember to lead with your hands!


    Redirect the overhand blow with one hand, while simultaneously counterpunching.


    As you move into the assailant with your redirection and counterpunch, and without breaking contact with the attacker's arm, loop your deflecting-stabbing arm over the assailant's impact weapon arm to secure it.


    Continue your counterattack with a foreleg kick or multiple knee strikes to the groin depending on distance.

    The most popular method to remove the impact weapon is to use a 180-degree step (tsai-bake) with your right foot to break or rip the impact weapon away from his hand without taking your eyes off the assailant.


    Once you feel comfortable with the initial defense, add a simultaneous punch with your other arm, thrusting both arms out together. I recommend a palm down punch or keeping the palm of the hand parallel to the ground, targeting the nose, chin, or throat.

    Next time we’ll take a look at defending against a two-handed overhead attack with a chair or stool. Until then, train hard and always remember retzev.


    by A Manly Guest Contributor at August 22, 2013 22:00

    Planet PostgreSQL

    Fabien Coelho: Turing Machine in SQL (3)

    In previous posts (1 2), I have presented how to implement a Turing Machine (TM) with the tape stored as an ARRAY or in a separate TABLE accessed through SQL functions. In this post the solution is more cleanly relational, with the tape contents stored in a column of the recursive query, very like Andrew Gierth’s CTS implementation.

    Turing Machine with a window function

    In this post the TM is built from the following SQL features: WITH RECURSIVE to iterate till the machine stops, INNER JOIN to get transition and state information, a WINDOW function and a CASE expression to extract the next symbol from the recursive table, two sub-SELECTs to initialize the recursion, another CASE expression to copy, update and extend the tape, and a CROSS JOIN to append blanks at the end of the tape.

    An ARRAY and an ORDER BY are also used to record the tape state, but they are not strictly necessary; they are just there for displaying the TM execution summary at the end.

    Turing Machine execution

    Let us now execute a run with a recursive query:

    WITH RECURSIVE running(iter, sid, len, pos, psym, tid, tsym) AS (
      -- set first iteration at state 0, position 1
      SELECT
          -- first, common part is repeated over and over
          0, 0,
          -- tape length needed to know where to insert blanks
          (SELECT COUNT(*)::INTEGER FROM Tape),
          -- position and next symbol to consider
          1, (SELECT symbol FROM Tape WHERE tid=1),
          -- then the tape contents
          tid, symbol
        FROM Tape
      UNION ALL
      -- compute next iteration
      SELECT
           pr.iter + 1,
           tr.new_state, -- next state from the transition
           pr.len, -- the initial length could also be recomputed with a sub-query
           pr.pos + tr.move,
           -- recover next iteration symbol
           -- this "hack" because 'running' cannot be used twice in the query
           MAX(CASE WHEN pr.pos+tr.move=pr.tid THEN pr.tsym ELSE NULL END) OVER (),
           CASE WHEN hack.keep THEN pr.tid   -- tape index
                ELSE pr.len + pr.iter + 1    -- append a new index
           END,
           CASE WHEN hack.keep AND pr.tid=pr.pos THEN tr.new_symbol -- update symbol
                WHEN hack.keep THEN pr.tsym      -- or keep previous symbol
                ELSE 0                           -- or append a blank symbol
           END
        FROM running AS pr
        JOIN -- corresponding transition
             Transition AS tr ON (pr.sid=tr.sid AND pr.psym=tr.symbol)
        JOIN -- state information, necessary to know whether to stop
             State AS st ON (tr.sid=st.sid)
        CROSS JOIN -- hack to append a 0 at the end of the tape
             (VALUES (TRUE), (FALSE)) AS hack(keep)
        WHERE -- stop on a final state
              NOT st.isFinal
    )
    -- just stores the computed iterations
    INSERT INTO Run(rid, sid, pos, tape)
      SELECT
        -- iteration, current state, tape head position
        iter, sid, pos,
        -- build an array from tape symbols for easier display
        ARRAY_AGG(tsym ORDER BY tid ASC)
      FROM running
      GROUP BY iter, sid, pos
      ORDER BY iter;

    Some comments about this query:

    The motivation for the WINDOW function is that PostgreSQL forbids using the recursive table twice in the query, so this function hides the additional reference needed to extract the next symbol. I do not really understand the motivation for this restriction, which seems a little bit artificial. Possibly it allows some optimisation when iterating on the query, but it also impairs what can be done with the WITH RECURSIVE construct.

    There is also a CROSS JOIN hack for appending a blank symbol to the tape at each iteration, so that a tape symbol is always found when moving the TM head.

    This query basically uses the same tricks as the CTS one, except for the OUTER JOIN and other NULL handling, which are avoided here. ISTM that those are needed for CTS because of its specifics, namely that a rule must only be applied when the tape contains a 1, and ignored otherwise.

    You can try this self-contained SQL script which implements a Turing Machine for accepting the AnBnCn language using the above method.

    In the next post, I’ll show how to get rid of both WITH RECURSIVE and WINDOW functions…


    August 22, 2013 22:00

    Planet Ubuntu

    Jono Bacon: Ubuntu In a Nutshell: The Ubuntu SDK and Developer Story

    This article is part of a series of blog posts covering the many different areas of work going on in Ubuntu right now. See the introduction post here that links to all the articles.

    In my last article I talked about the new app upload process, but today I am going to talk about how developers write apps in the first place.

    For a long time our app developer story in Ubuntu has been quite fragmented. This has been due to a number of reasons:

    • We have not had a single consistent platform that we ask developers to write to. We have traditionally supported GTK, Qt, and anything else that lives in the archive. This not only presents an inconsistent developer experience, but an inconsistent user experience too.
    • We lacked app design guidelines around how developers should build apps that look consistent on the platform.
    • We didn’t have a single consistent developer portal and support network to provide the support and guidance app developers need to build awesome apps and get them into the platform.
    • We also didn’t have a good answer for writing an app that can work across multiple form factors.
    • Finally, we didn’t have a single consistent SDK that developers could use to write apps: they had to pick from a plethora of tools, with varying degrees of quality.

    We tried to rectify some of these issues by recommending people write apps with Python and GTK, and we wrote a tool called Quickly to optimize this process. Quickly would generate a project and help with tasks such as editing, creating your UI, and generating a package, but Quickly was a somewhat primitive and incomplete solution to the problem.

    The work on Quickly also showcased some limitations in our tooling. At the time we recommended people write apps using GEdit, Glade, and GTK. Unfortunately, this collection of tools just didn’t compare favorably to the developer experience on Apple and Google’s platforms, despite the best efforts of the respective upstreams. We needed to provide an end-to-end SDK for developers that would take a developer from a new project through to submitting the app into the Ubuntu Software Center.

    Choosing a Technology

    We set out to resolve these issues and build a consistent Ubuntu SDK.

    The first decision we made was around which frameworks we wanted to support when developers write their apps. These frameworks needed to be highly efficient and able to converge across multiple devices. We finalized this list as:

    • Qt/QML – native applications that can be run on any of the devices and adapt to the screen size.
    • HTML5 – web applications that can also adapt to the device with deep integration into the system services (e.g. messaging menu, launcher etc).
    • Online Services – integration of web apps into the system services (e.g. messaging menu and unity integration).
    • OpenGL – full OpenGL support for games.

    Some time ago we decided to focus on Qt as a platform for not only building our SDK but building our convergence story too. Qt has many benefits:

    • It provides a fast C++ library and toolkit as well as a neat higher-level declarative technology in the form of QML. This means that we have the power of C++ for system software (e.g. writing Unity) but app devs can write apps using a high-performance higher level technology that is easier to learn and faster to write apps with.
    • Qt provides an awesome set of tools – an integrated IDE, debugger, designer and more.
    • The Qt Creator IDE is very pluggable which means we could use it for our main IDE and use it for writing apps in HTML5 and OpenGL.
    • Qt and QML documentation is fantastic.
    • Qt has a strong eco-system surrounding it and lots of companies in that eco-system. This makes contracting out work and hiring much easier.
    • Qt is a healthy upstream and very keen to work with those who consume it.

    We also started looking into the best way in which we could support HTML5 developers. While the IDE decision had been made (Qt Creator), we also decided to invest in building Apache Cordova support into our SDK to make writing HTML5 as flexible as possible. This way you can either write a stock HTML5 app or use the Cordova functionality, all accessible within the same IDE.

    The Ubuntu SDK

    We formed the SDK team and started work. This work was broken into two areas.

    Firstly, we started work on the app developer platform. This is largely identifying the needs of app developers for writing apps for Ubuntu devices, and ensuring we have support for those needs (which largely requires integrating that support and creating APIs). This has included:

    • Building the Ubuntu Component set – a set of widgets that are usable in QML and HTML5 that developers can use to construct their apps.
    • Application lifecycle (suspending apps to preserve battery life).
    • Location Services.
    • Multimedia and Music.
    • Alarms.
    • Calendar Integration (using Evolution Data Server).
    • Sensor services (e.g. accelerometer).

    This work is currently on-going and in various stages of completeness, but all of these platform APIs will be ready by the end of August and many apps are already consuming them. Remember, these services will be made available across all form factors.

    The second piece was the SDK itself. This is tuning the Qt Creator IDE for our needs and ensuring it can be used to create QML, HTML5, and OpenGL apps. This work has touched on a number of different areas and has resulted in the following features:

    • We have project templates for QML, HTML5 (Cordova), HTML5 (Stock), and Scopes – here you can easily generate a project to get started with.
    • Source control integration for Bazaar and Git – this makes collaboration around an app easier.
    • Device integration – with just a click of a button you can run your app on an Ubuntu device to test that it works correctly.
    • Click package generation – generate a click package that you can use to upload to the Ubuntu Software Center.
    • Ubuntu Component Showcase – browse all the different Ubuntu components and see the code for how to use them.
    • Integrated documentation, IRC, design guidelines, and Ask Ubuntu support.

    We rolled all of these features into the first Beta of the SDK, which was released about a month ago; you can get started with it on the Ubuntu developer portal.

    Speaking of which, we have invested significantly in making the site a central resource for all of your development needs.

    Currently the site provides tutorials for building apps, API documentation, and a cookbook that brings together the top rated questions from Ask Ubuntu. The site provides a good spring-board for getting started.

    We are, however, in the process of making a number of improvements to the site. These will include:

    • Revised site navigation and structure to make it easier to use.
    • Better and more clearly integrated API documentation.
    • Wider API coverage.
    • Cookbooks for all of the different app templates.
    • Full integration of Juju Charm documentation and API.

    We are expecting to have many of these improvements in place in the coming weeks.

    Are We There Yet?

    As we stand today we have a powerful Ubuntu SDK with support for writing convergent apps in Qt/QML, HTML5, and OpenGL, and for writing Scopes that fit into the dash. You can go to the developer portal to find out more, install the SDK, and find tutorials for getting started.

    We are only just getting started though. The 1.0 of the SDK will be released in October; expect more refinements, better integration, and more features as we understand the needs of our developers better and expand the platform.

    August 22, 2013 21:54

    Dustin Kirkland: Gentlemen, Start Your Engines!

    Mark kicked this Ubuntu Edge campaign off a month ago with an analogy that's near and dear to my heart, as an avid auto race fan.  He talked about how the Ubuntu Edge could be a platform like Formula 1 race cars, where device manufacturers experiment, innovate, and push the limits of the technology itself.

    Late yesterday, the Ubuntu Edge crowd funding campaign closed its 30-day run, without hitting its $32M goal.  That's a bummer, because I still want a PC that fits in my pocket, and happens to make phone calls.  There are at least 27,488 of us who pledged our support, and are likely bummed too.

    In retrospect, I think there's a better analogy for the Edge, than Formula 1...  Time will show that the Edge worked more like a Concept Car.

    "A concept vehicle or show vehicle is a car made to showcase new styling and/or new technology. They are often shown at motor shows to gauge customer reaction to new and radical designs which may or may not be mass-produced. General Motors designer Harley Earl is generally credited with inventing the concept car, and did much to popularize it through its traveling Motorama shows of the 1950s. Concept cars never go into production directly. In modern times all would have to undergo many changes before the design is finalized for the sake of practicality, safety, meeting the burden of regulatory compliance, and cost. A "production-intent" vehicle, as opposed to a concept vehicle, serves this purpose.[1]"
    I love reading about the incredible concept cars unveiled at the Detroit Auto Show every year, particularly as a Corvette and Cadillac enthusiast myself.

    I think the Cadillac Cien (2002) is my favorite concept car of all time.  It's a beautifully striking vehicle, with edgy design, and crazy stupid power (750hp!).

    While never mass-produced, the Cien captured the imagination and renewed the innovation around the Cadillac brand itself. That concept vehicle, in a few short years, evolved into the production car I drive today, the Cadillac CTS-V -- a very different Cadillac than the land yachts your grandparents might loll around in :-)

    This car has invigorated a generation of new Cadillac owners for General Motors, competing with long established players from BMW (M5), Mercedes (E63), and Audi (S6), and recapturing a valuable market of younger drivers who have been buying German performance sedans.

    Without a doubt, I'm disappointed that I won't be holding this beautiful piece of hardware, at what was, all told, a very reasonable price (I pledged for two at the $600 level).

    But that's only half of the story.  Ubuntu Touch, the software that would have powered the Edge, lives!!!

    I'm actually running it right now on an LG E960 Google Nexus 4.  The hardware specs are pretty boring, and the device itself is not nearly as sexy as the Edge, but it's a decent run-of-the-mill, no-frills mobile phone that exists in the market today.

    The unlocked, international version showed up on my doorstep in 18 hours and $394 from Amazon.  Amazingly, it took me less than 30 minutes to unbox the phone, download and install the phablet-tools on my Ubuntu 13.04 desktop, unlock the device, and flash Ubuntu Touch onto it.  There's so much potential here, I'm still really excited about it.

    We are told, with confidence, that there will be Ubuntu smartphones in the market next year.  It just won't be the Edge.  As much as I lust to drive one of those elite Cadillac Cien concept cars, I love what it evolved into, and it's pure joy to absolutely drive the hell out of a CTS-V ;-)  And along those lines, this time next year, many of us will have Ubuntu smartphones, even if they won't be the Edge concept.

    Gentlemen, start your engines!


    by (Dustin Kirkland) at August 22, 2013 21:29

    Server Fault Meta

    About Page example question is "How to prevent unicorns from eating daisies"

    It's probably not good to encourage users to ask questions that are wildly off-topic for the site.

    Not sure when this got changed/borked, but it was working correctly a week or so ago when I looked last.

    by Chris S at August 22, 2013 21:29

    Daring Fireball

    Glassboard Seeks New Home

    Brent Simmons:

    I don’t have any business relationship with Glassboard (or with NewsGator or Sepia Labs), and so the only benefit I get from helping find Glassboard a new home is the selfish one: I use Glassboard every day and want to keep using it. (Q Branch uses it; my podcast uses it; my family uses it; the Seattle Xcoders group uses it.)

    The problem of persistent, private, and trustworthy group sharing is still an open problem. Glassboard represents a couple years of work by a six-person team, and it’s a great start. I believe that it can be very successful, given the right home, given resources and commitment.

    Great opportunity here for someone; Glassboard is a great product.

    by John Gruber at August 22, 2013 21:17



    jwz

    At Safe Streets Rally, SFPD Blocks Bike Lane to Make Point of Victim-Blaming

    Keep it klassy, Sgt. Ernst.

    San Francisco Police Sergeant Richard Ernst apparently decided that the best way to make Folsom Street safer was to purposefully park his car in the bike lane this morning and force bicycle commuters into motor traffic.

    Staff from the SF Bicycle Coalition were out at Folsom and Sixth Streets, handing out flyers calling for safety improvements on SoMa's freeway-like streets in the wake of the death of Amelie Le Moullac, who was run over at the intersection last week by a truck driver who appeared to have made an illegal right-turn across the bike lane on to Sixth.

    When Ernst arrived on the scene, he didn't express sympathy for Le Moullac and other victims, or show support for safety improvements. Instead, he illegally parked his cruiser in the bike lane next to an empty parking space for up to 10 minutes, stating that he wanted to send a message to people on bicycles that the onus was on them to pass to the left of right-turning cars. He reportedly made no mention of widespread violations by drivers who turn across bike lanes instead of merging fully into them.

    He said it was his "right" to be there.

    According to SFBC Executive Director Leah Shahum, Ernst blamed all three victims who were killed by truck drivers in SoMa and the Mission this year, and refused to leave until she "understood that it was the bicyclist's fault."

    "This was shocking to hear, as I was told just a day ago by [SFPD Traffic] Commander [Mikail] Ali that the case was still under investigation and no cause had yet been determined," Shahum said in a written account of the incident. While Ernst's car was in the bike lane, "a steady stream of people biking on Folsom St. were blocked and forced to make sudden and sometimes-dangerous veers into the travel lane, which was busy with fast-moving car traffic during the peak of morning rush hour." [...]

    "There was literally an open, available parking spot next to the bike lane, which he could have pulled into," added Shahum. "Sgt. Ernst again said he did not need to move his car. He said it was his 'right' to be there."

    You can see San Francisco's Finest hard at work at 1:04 in this video.

    Previously, previously, previously.

    by jwz at August 22, 2013 20:10

    Folsom Street

    SFBC: Open Letter to Mayor Lee: Take Action on SoMa Streets

    Dear Mayor Lee:

    Last week, a 24-year-old woman named Amelie Le Moullac was killed while bicycling on Folsom Street near 6th Street when she was hit by a truck driver. Amelie was the third resident to be killed on a bike in San Francisco this year, all in or near SoMa. Each victim was killed by the driver of a large truck, none of whom have been cited or charged yet.

    SoMa regularly ranks as one of San Francisco's most dangerous neighborhoods for people bicycling and walking. [...]

    We ask you to commit to implementing the City's long-overdue, long-delayed redesign of Folsom Street. Folsom is one of the city's few designated bike routes to downtown -- yet it is still an intimidating street, with no separation between bike riders and fast-moving auto traffic. Other cities have taken action to tame their deadliest streets by adding bikeways that are physically separated from motor vehicle traffic. In fact, separated bikeways on 9th Avenue in New York City have reduced injuries to all street users by 58% and could do the same here.

    The City studied and recommended a redesign of Folsom Street, which includes separated bikeways, years ago through multiple planning processes and is now in the process of environmental analysis through the Central Corridor EIR. The City is scheduled to repave Folsom Street from the Embarcadero to 10th Street in November 2014. Please expedite approval and funding for this long-overdue Folsom St. plan so that the new, safe design, vetted through extensive community outreach, can be implemented with this scheduled repaving next year.

    I spend a lot of time nearly dying on Folsom Street. It's about time they actually implemented these already-approved changes. Send an email!

    by jwz at August 22, 2013 19:58

    Planet Ubuntu

    Ubuntu Podcast from the UK LoCo: S06E26 – Raging Ubuntu

    We’re back with the twenty-sixth episode of Season Six of the Ubuntu Podcast from the UK LoCo Team! Alan Pope, Mark Johnson, Tony Whitmore, and (sort of) Laura Cowen are back in Studio A with carrot cake, tea, and an interview.

    You can also watch the video on Youtube!

    In this week’s show:-

    Please send your comments and suggestions to:
    Join us on IRC in #ubuntu-uk-podcast on Freenode
    Leave a voicemail via phone: +44 (0) 203 298 1600, sip: and skype: ubuntuukpodcast
    Follow our twitter feed
    Find our Facebook Fan Page
    Follow us on Google Plus

    August 22, 2013 19:49


    Progress on that Mouse-based Super-Soldier Formula

    New drug mimics the beneficial effects of exercise

    A drug known as SR9009, which is currently under development at The Scripps Research Institute (TSRI), increases the level of metabolic activity in skeletal muscles of mice. Treated mice become lean, develop larger muscles and can run much longer distances simply by taking SR9009, which mimics the effects of aerobic exercise. [...]

    When Burris' group administered SR9009 to these mice to activate the Rev-Erbα protein, the results were remarkable. The metabolic rate in the skeletal muscles of the mice increased significantly. The treated mice were not allowed to exercise, but despite this they developed the ability to run about 50 percent further before being stopped by exhaustion.

    "The animals actually get muscles like an athlete who has been training," said Burris. "The pattern of gene expression after treatment with SR9009 is that of an oxidative-type muscle -- again, just like an athlete."


    by jwz at August 22, 2013 19:46

    Corsair Force Series LS SSD Announced

    Corsair is announcing the Force Series LS SSD line, available in 60GB, 120GB, and 240GB capacities and engineered as a cost-effective upgrade to replace HDDs in desktops and notebooks. The Force Series LS is a 7mm-height SSD, and can thus easily be installed in any standard 2.5" drive bay, or in a 3.5" drive bay with an optional adapter. The drive also supports 6Gb/s SATA for maximum throughput. Internally, the Force Series LS features a Phison 6Gb/s SSD controller and Toshiba NAND, providing solid SSD performance up to 10x faster than standard HDDs.

    read more

    by Josh Shaman at August 22, 2013 19:36


    Emperor Norton Bridge

    Have you signed the petition yet? It's up to 1,500!

    John Lumea, the fellow behind the petition, brings some interesting news:

    It turns out that the resolution, within the California State Legislature, to name the Western span of the Bay Bridge for Willie Brown is in direct violation of the naming policy adopted in April by the State Senate Transportation and Housing Committee -- which has to approve the resolution in order for it to move forward.

    I won't tax you with the details of that here, other than to note that two of the three violated conditions of the naming policy are that (1) "the person being honored must be deceased" (which Willie Brown decidedly is not) and that (2) "the author or co-author of the measure must represent the district in which the facility is located" (which, in this case, none of them does).

    12-1 in favor? I am horrified.

    In Sacramento on Monday, the state Assembly Committee on Transportation met to consider ACR65, a bill that would designate the western span of the San Francisco-Oakland Bay Bridge the Willie L. Brown Jr. Bridge. The vote was 12-1 with three abstentions. The "no" came from committee Chair Bonnie Lowenthal, D-Long Beach; the abstainers were Tom Ammiano, D-S.F.; Joan Buchanan, D-Alamo; and Jim Frazier, D-Oakley. The bill moves to the floor of the Assembly.

    Meanwhile, in a surprise move, Willie Brown does something cool!

    Meanwhile, Brown himself told Lee Houskeeper that he isn't interested in the half-a-bridge honor. He's in favor, he said, of naming the whole thing after Emperor Norton, who 141 years ago had proposed such a span. (The anniversary of that proposal will be celebrated at the Gold Dust Lounge - a client of Houskeeper's, and the reason he and Brown were discussing the subject - on Sept. 17.)


    by jwz at August 22, 2013 19:35


    There, I Fixed It

    Problem Solved!

    Submitter MarchNero says: Whenever I'd go biking in jeans, my pant leg would always get caught in between the gear and the chain. To fix it, I simply taped a cardboard circle to the pedal. Works like a charm.

    Submitted by: MarchNero

    August 22, 2013 19:00

    Strange Beaver

    Teenage Mutant Ninja Turtles 1990 Trailer – Homemade

    Cowabunga, dudes! Dustin and Homemade Movies travel back to 1990 to deliver the original trailer for Teenage Mutant Ninja Turtles. Made entirely from household products, these homemade heroes in a half-shell pack a punch that would make Jim Henson proud.

    by Admin at August 22, 2013 18:29

    Programmers Being Dicks

    Hopefully the Last Thing We’ll Hear from Dave Winer on the Topic of Women in Programming


    1. If you find yourself fighting to shut someone up, you’re wrong.
    2. If you think “Who does he think he is” the answer is “an imperfect human being.”
    3. I will never apologize for asking questions or saying what I think.
    4. The term mansplaining is sexist.
    5. Fact: Women do their share of mansplaining.

    Not even going to bother engaging with this level of stubborn ignorance—just skip down to the comments. (Whoops! Winer has deleted all the good comments. Luckily, Faruk Ateş preserved both his and @zenlan’s—which is especially brilliant—and has put them up here.)

    August 22, 2013 18:26

    Daring Fireball

    Reuters Piece on Tim Cook and Employee Retention

    Poornima Gupta and Peter Henderson, reporting for Reuters, on retention problems:

    Some Silicon Valley recruiters and former Apple employees at rival companies say they are seeing more Apple resumes than ever before, especially from hardware engineers, though the depth and breadth of any brain-drain remains difficult to quantify, especially given the recent expansion in staff numbers.

    “I am being inundated by LinkedIn messages and emails both by people who I never imagined would leave Apple and by people who have been at Apple for a year, and who joined expecting something different than what they encountered,” said one recruiter with ties to Apple.

    Still, the Cook regime is also seen as kinder and gentler, and that’s been a welcome change for many.

    “It is not as crazy as it used to be. It is not as draconian,” said Beth Fox, a recruiting consultant and former Apple employee, adding that the people she knows are staying put. “They like Tim. They tend to err on the optimistic side.”

    So engineers are leaving in droves because Apple is a nicer place to work now?

    No doubt about it, retention is a key concern for Apple, but they do not have a retention problem. I’d wager Apple has a higher retention rate than any of its Valley competitors. There may well be more Apple resumes in circulation than ever before, but there are more Apple employees than ever before — Apple has never been bigger than it is now, and Apple employees have never been in higher demand.

    Still, employees report some grumbling, and Apple seems to have taken note, conducting a survey of morale in the critical hardware engineering unit earlier this year.

    “As our business continues to grow and face new challenges, it becomes increasingly important to get feedback about your perceptions and experiences working in hardware engineering,” Dan Riccio, Apple’s senior vice president of Hardware Engineering, wrote to his team in February in an email seen by Reuters.

    Apple does these surveys among employees every two or three years, and has done so throughout the modern era. I don’t think the survey cited above was in response to a rise in discontent.

    by John Gruber at August 22, 2013 18:24

    Programmers Being Dicks

    Perhaps Silicon Valley Should Just Leave Homeless People Alone



    References a number of things we’ve not linked to before:

    August 22, 2013 17:57



    Strange Beaver

    Creepy Abandoned Doll Factory In Spain

    This Spanish factory manufactured porcelain-faced bisque dolls, and boxes of the miniature Frankenstein’s monsters were left behind as the factory fell into decay. It would be a fascinating place to visit — in the light of day, when you’re less likely to imagine the glass-eyed monstrosities breathing inside their cartons.

    [Photo series: abandoned doll factory in Spain]

    by Admin at August 22, 2013 16:57

    Skeptic Events

    SitP Coventry - Anthroposophy and Spiritual Science: What Every Parent Needs to know about Steiner Schools

    When: Wed Sep 18, 2013 6:30pm to 8:30pm  UTC

    Event Status: confirmed
    Event Description: Skeptics in the Pub Coventry. For more information, see SitP Ref [SitP1740Event]

    by The Skeptic Mag (RSS) at August 22, 2013 16:24

    Letters of Note

    Take your pick

    In 1984, iconic advertising executive and real-life Mad Man David Ogilvy received a letter from his 18-year-old great nephew, Harry. Having just finished school, Harry was now faced with the common dilemma of whether to go to university or jump straight into full-time work, and so asked his highly respected relative for some wisdom on the matter. Ogilvy responded with the following multiple choice letter of advice.

    (Source: The Unpublished David Ogilvy: A Selection of His Writings from the Files of His Partners; Image: David Ogilvy, courtesy of Ads of the World.)

    June 6, 1984

    Dear Harry,

    You ask me whether you should spend the next three years at university, or get a job. I will give you three different answers. Take your pick.

    Answer A. You are ambitious. Your sights are set on going to the top, in business or government. Today's big corporations cannot be managed by uneducated amateurs. In these high-tech times, they need top bananas who have doctorates in chemistry, physics, engineering, geology, etc.

    Even the middle managers are at a disadvantage unless they boast a university degree and an MBA. In the United States, 18 percent of the population has a degree, in Britain, only 7 percent. Eight percent of Americans have graduate degrees, compared with 1 percent of Brits. That more than anything else is why American management outperforms British management.

    Same thing in government. When I was your age, we had the best civil service in the world. Today, the French civil servants are better than ours because they are educated for the job in the postgraduate Ecole Nationale d'Administration, while ours go straight from Balliol to Whitehall. The French pros outperform the British amateurs.

    Anyway, you are too young to decide what you want to do for the rest of your life. If you spend the next few years at university, you will get to know the world - and yourself - before the time comes to choose your career.

    Answer B. Stop frittering away your time in academia. Stop subjecting yourself to the tedium of textbooks and classrooms. Stop cramming for exams before you acquire an incurable hatred for reading.

    Escape from the sterile influences of dons, who are nothing more than pickled undergraduates.

    The lack of a college degree will only be a slight handicap in your career. In Britain, you can still get to the top without a degree. What industry and government need at the top is not technocrats but leaders. The character traits which make people scholars in their youth are not the traits which make them leaders in later life.

    You put up with education for 12 boring years. Enough is enough.

    Answer C. Don't judge the value of higher education in terms of careermanship. Judge it for what it is - a priceless opportunity to furnish your mind and enrich the quality of your life. My father was a failure in business, but he read Horace in the loo until he died, poor but happy.

    If you enjoy being a scholar, and like the company of scholars, go to a university. Who knows, you may end your days as a Regius Professor. And bear in mind that British universities are still the best in the world - at the undergraduate level. Lucky you. Winning a Nobel Prize is more satisfying than being elected Chairman of some large corporation or becoming a Permanent Undersecretary in Whitehall.

    You have a first-class mind. Stretch it. If you have the opportunity to go to a university, don't pass it up. You would never forgive yourself.

    Tons of love,

    by Shaun Usher at August 22, 2013 16:24

    Planet Ubuntu

    Pasi Lallinaho: Xubuntu team: No Mir for 13.10

    The Xubuntu team has decided today that Xubuntu 13.10 will not have Mir installed by default. The decision was based on the testing and evaluation Xubuntu team did with Mir before.

    On behalf of the whole Xubuntu team, I want to thank the Mir developers for being closely in touch with the team as well as helping with any problems we had. I also want to thank everybody who tested Mir with Xubuntu – all feedback was important. Thank you!

    The full logs and the minutes for the community meeting along with the decisive votes can be found at the Xubuntu wiki: Xubuntu community meeting, Aug 22.

    August 22, 2013 16:06


    LWN Headlines

    Garrett: Default offerings, target audiences, and the future of Fedora

    Matthew Garrett argues for a clearer focus for the Fedora project. "Bluntly, if you have a well-defined goal, people are more likely to either work towards that goal or go and do something else. If you don't, people will just do whatever they want. The risk of defining that goal is that you'll lose some of your existing contributors, but the benefit is that the existing contributors will be more likely to work together rather than heading off in several different directions."

    by corbet at August 22, 2013 15:55

    High Scalability

    The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Second edition

    Google has released an epic second edition of their groundbreaking book, The Datacenter as a Computer. It's called an introduction, but at 156 pages I would love to see what the Advanced version would look like!

    John Fries in a G+ comment has what I think is a perfect summary of the ultimate sense of the book:

    It's funny, when I was at Google I was initially quite intimidated by interacting with an enormous datacenter, and then I started imagining the entire datacenter was shrunk down into a small box sitting on my desk, and realized it was just another machine and the physical size didn't matter anymore

    It's such a far ranging book that it's impossible to characterize simply. It covers an amazing diversity of topics, from an introduction to warehouse-scale computing; workloads and software infrastructure; hardware; datacenter architecture; energy and power efficiency; cost structures; how to deal with failures and repairs; and it closes with a discussion of key challenges, which include rapidly changing workloads, building responsive large scale systems, energy proportionality of non-CPU components, overcoming the end of Dennard scaling, and Amdahl's cruel law.
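    The closing list of challenges names "Amdahl's cruel law." As a refresher, Amdahl's law bounds the overall speedup of a workload when only part of it can be parallelized. A minimal sketch (the helper name is mine, not from the book):

    ```python
    def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
        """Upper bound on speedup when only part of a workload parallelizes.

        parallel_fraction: share of the runtime that can use all n_workers.
        The serial remainder (1 - parallel_fraction) caps overall speedup,
        no matter how many workers are added.
        """
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_workers)

    # Even with 1000 workers, a 5% serial portion keeps speedup below 20x --
    # the "cruel" part at warehouse scale.
    print(round(amdahl_speedup(0.95, 1000), 2))
    ```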

    In reading it I get the sense the Faerie Queen has transported us to the land of Faerie, a special other place of timeless truths, where dragons roam, and mortal danger lurks. And if you do escape, nothing is quite the same ever again. 


    by Todd Hoff at August 22, 2013 15:45

    Strange Beaver

    “Stronger Beer” Tim Hicks

    A special treat to our fans in the great white north

    by Admin at August 22, 2013 15:45

    Kernel Planet

    Matthew Garrett: Re: Default offerings, target audiences, and the future of Fedora

    Eric (a fellow Fedora board member) has a post describing his vision for what Fedora as an end goal should look like. It's essentially an assertion that since we have no idea who our users are or what they want, we should offer them everything on an equal footing.

    Shockingly enough, I disagree.

    At the most basic level, the output of different Special Interest Groups is not all equal. We've had issues over the past few releases where various spins have shipped in a broken state, because the SIG responsible for producing them doesn't have the resources to actually test them. We're potentially going to end up shipping F20 with old Bluetooth code because the smaller desktops aren't able to port to the new API in time[1]. Promoting these equally implies that they're equal, and doing so when we know it isn't the case is a disservice to our users.

    But it's not just about our users. Before I joined the Fedora project, I'd worked on both Debian and Ubuntu. Debian is broadly similar to the current state of Fedora - no strong idea about what is actually being produced, and a desire among many developers to cater to every user's requirements. Ubuntu's pretty much the direct opposite, with a strongly defined goal and a willingness to sacrifice some use cases in order to achieve that goal.

    This leads to an interestingly different social dynamic. Ubuntu contributors know what they're working on. If a change furthers the well-defined aim of the project, that change happens. Moving from Ubuntu to Fedora was a shock to me - there were several rough edges in Fedora that simply couldn't be smoothed out because fixing them for one use case would compromise another use case, and nobody could decide which was more important[2]. It's basically unthinkable that such a situation could arise in Ubuntu, not just because there was a self appointed dictator but because there was an explicit goal and people could prioritise based on that[3].

    Bluntly, if you have a well-defined goal, people are more likely to either work towards that goal or go and do something else. If you don't, people will just do whatever they want. The risk of defining that goal is that you'll lose some of your existing contributors, but the benefit is that the existing contributors will be more likely to work together rather than heading off in several different directions.

    But perhaps more importantly, having a goal can attract people. Ubuntu's Bug #1 was a solid statement of intent. Being freer than Microsoft wasn't enough. Ubuntu had to be better than Microsoft products on every axis, and joining Ubuntu meant that you were going to be part of that. Now it's been closed and Ubuntu's wandered off into convergence land, and signing up to spend your free time on producing something to help someone sell phones is much less compelling than doing it to produce a product you can give to your friends.

    Fedora should be the obvious replacement, but it's not because it's unclear to a casual observer what Fedora actually is. The website proudly leads with a description of Fedora as a fast, stable and powerful operating system, but it's obvious that many of the community don't think of Fedora that way - instead it's a playground to produce a range of niche derivatives, with little consideration as to whether contributing to Fedora in that way benefits the project as a whole. Codifying that would actively harm our ability to produce a compelling product, and in turn reduce our ability to attract new contributors even further.

    Which is why I think the current proposal to produce three first-class products is exciting. Offering several different desktops on the download page is confusing. Offering distinct desktop, server and cloud products isn't. It makes it clear to our users what we care about, and in turn that makes it easier for users to be excited about contributing to Fedora. Let's not make the mistake of trying to be all things to all people.

    [1] Although clearly in this case the absence of a stable ABI in BlueZ despite it having had a dbus interface for the best part of a decade is a pretty fundamental problem.
    [2] See the multi-year argument over default firewall rules and the resulting lack of working SMB browsing or mDNS resolving
    [3] To be fair, one of the reasons I was happy to jump ship was because of the increasingly autocratic way Ubuntu was being run. By the end of my involvement, significant technical decisions were being made in internal IRC channels - despite being on the project's Technical Board, I had no idea how or why some significant technical changes were being made. I don't think this is a fundamental outcome of having a well-defined goal, though. A goal defined by the community (or their elected representatives) should function just as well.


    August 22, 2013 15:37


    Planet Ubuntu

    Daniel Holbach: Ubuntu Developer Summit coming up next week

    The next Ubuntu Developer Summit is coming up next week (27-29 August 2013) and you can already see a nice set of topics coming together in Launchpad. The schedule will, as always, be available at

    Jono Bacon and I are going to be track leads for the Community track, so I wanted to send out an invitation to get topics in, especially for anything concerning the Community track. If you are a team lead with feedback from your team, or you want to bring up a discussion topic you are interested in helping out with, check out our docs on how to submit a session for UDS. Please note: this is not a game of “this is what I think somebody should discuss and do for me”. If you plan to bring up a session topic, be prepared, have a good idea of what might be on the agenda, and reach out to people who might be interested in the topic, so you have a good set of participants and contributors to the project available.

    If you just want to attend, listen in, and contribute to sessions on the schedule, you can do that as well; check out which has all the information on how to tune in. Register here. Can’t wait to see you all next week!

    August 22, 2013 15:29

    LWN Headlines

    Security advisories for Thursday

    Debian has updated cacti (two vulnerabilities).

    Fedora has updated glibc (F19: multiple vulnerabilities, two from 2012).

    Mandriva has updated cacti (ES5: two vulnerabilities).

    openSUSE has updated poppler (12.2: code execution from 2012) and puppet (12.3: code execution).

    Oracle has updated kernel (OL5: multiple vulnerabilities).

    Red Hat has updated condor (RHEL6; RHEL5: denial of service) and mongodb, pymongo (RHEL6: two vulnerabilities).

    Slackware has updated hplip (code execution from 2010), poppler (code execution from 2012), and xpdf (code execution from 2012).

    by jake at August 22, 2013 15:11

    Planet Sysadmin

    TechRepublic IT Security: A bridge too far: Assessing the current state of application security

    A recent report finds that applying security procedures to application development is severely lacking in many organizations.

    August 22, 2013 15:02

    There, I Fixed It

    Planet Sysadmin

    Everything Sysadmin: Evi Nemeth's life-raft spotted?

    The search for Evi Nemeth and the others on board the Nina has been restarted. The crowd-sourced search of 56,000 satellite pictures appeared to find an orange/yellow object to the west of Norfolk Island. The life-raft was orange:

    Read more: The Nina: Fresh search for missing yacht

    The project is being funded by donations. To donate visit the Danielle Wright Search Fund.

    August 22, 2013 14:34


    Data Center Knowledge

    Hybrid is Here – The Convergence of Data Center, Hosting and Cloud

    Many organizations feel that if they want to go global, they have to use cloud computing. This white paper from Latisys outlines the solutions typically offered by a modern Infrastructure-as-a-Service provider and offers guidance on where your organization can benefit the most.

    by Bill Kleyman at August 22, 2013 14:30

    Planet Debian

    Bartosz Feński: privacy settings for video materials from DebConf

    I’ve just tried to paste a link to video material from DebConf13 on my Facebook wall and I got this warning:
    Could someone tell me what privacy settings are set on materials from DebConf?
    Or maybe the more applicable question is what is the license of these materials?

    August 22, 2013 14:02

    Strange Beaver

    LWN Headlines

    Ubuntu Edge: founder says failure isn't the end of the dream (Guardian)

    The Guardian talks with Mark Shuttleworth about the Ubuntu Edge campaign, which failed to reach its $32 million goal. "The impression we have from conversations with manufacturers is that they are open to an alternative to Android. And end-users don't seem emotionally attached to Android. There's no network effect from using Android like there was with Windows in the 1990s, where if some businesses started using Windows then others had to follow. It's not like that on mobile. They all interoperate. Every Ubuntu device would be additive to the whole ecosystem of devices."

    by corbet at August 22, 2013 13:43

    Perry: Deterministic Builds Part One: Cyberwar and Global Compromise

    Mike Perry writes about the motivations behind his deterministic build work on the Tor Project blog. "Current popular software development practices simply cannot survive targeted attacks of the scale and scope that we are seeing today. In fact, I believe we're just about to witness the first examples of large scale 'watering hole' attacks. This would be malware that attacks the software development and build processes themselves to distribute copies of itself to tens or even hundreds of millions of machines in a single, officially signed, instantaneous update. Deterministic, distributed builds are perhaps the only way we can reliably prevent these types of targeted attacks in the face of the endless stockpiling of weaponized exploits and other 'cyberweapons'."

    by corbet at August 22, 2013 13:39

    Server Fault Meta

    Minimum bounty is 100 with deleted answer

    It seems that a question which I answered myself, and then deleted the answer, still appears as "answered by myself" and thus requires a minimum bounty of 100 reputation instead of 50.

    I know and understand why the bounty minimum was increased to 100 for self-answered questions, but what if the answer is now deleted?

    Is that a bug or a feature?

    by Kwaio at August 22, 2013 13:23


    Planet Sysadmin

    CiscoZine: How to save configurations using SNMP

    Everyone knows there is software to fetch the configuration using SNMP; but how can you copy the configuration if you don’t have any tool? Let me explain what SNMP is before showing you how to use it. Simple Network Management Protocol (SNMP) is an “Internet-standard protocol for managing devices on IP networks”. Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks, and more. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP uses an extensible design, where the available information is defined by management information bases (MIBs). MIBs describe the […]

    August 22, 2013 11:28


    Planet UKnot

    First #bloggade a big success

    We held the first #bloggade at the Timico datacentre in Newark yesterday. A bloggade is, as you may know, the collective noun for a group of bloggers.

    This first event was highly successful, covering a range of blog-related subjects:

    1 The type of infrastructure used to host blogs (led by Timico hosting tech guru Michael Green), followed by a guided tour of the Timico NOC and datacentre.
    2 A lengthy discussion on Search Engine Optimisation for your blog, conducted by @phil_kelsey of Spiral Media and @mattdrussell of WebHostingBuzz.
    3 A general discussion about plugins and which ones worked for people.

    There was a great level of audience participation and a definite interest in holding another event, sometime in the run up to Christmas perhaps.

    For a bit of fun we decided to have a go and see if we could get #bloggade to trend on Twitter. Despite our intensive efforts it didn’t seem to be working. Then one of the bloggers suggested that if we tweeted that members of the currently-in-the-news boy band “One Direction” had turned up for #bloggade, it might go viral. We did this, and at the latest count have had a grand total of two retweets from (pre-pubescent?) OD fans. :)

    Gotta say I’d never heard of ’em before this week!!!

    Big thanks to all who came especially @mattdrussell whose original idea this was together with @phil_kelsey @jangles and @AndrewGrill for their major contributions.

    All in all, considering we organised this from scratch to execution in four weeks, I have to say it was a great success.

    Catch ya later.

    PS this post was typed by thumb on my Galaxy S4 en route to a customer meeting in London. I’d be amazed if the formatting is spot on – I’ll make any necessary adjustments when I get back to laptop land.

    by Trefor Davies at August 22, 2013 10:59

    Planet Ubuntu

    Mattia Migliorini: Melany WordPress Theme 0.5 Stable

    Hi everybody out there!
    Today I am excited to announce the first stable release of Melany.
    As I told you before, this comes after the theme reviewers accepted it for upload and publication on the WordPress Theme Directory, so you can install Melany directly from the WordPress built-in theme search feature or by visiting the Theme Directory.

    This is a huge step and I learned a lot. But what does it mean from your point of view? You have a good new theme available. The quality is guaranteed by the review; the beauty is subjective, so if you like minimalism and simplicity, give it a try.

    Here are the features available:

    • Two-column layout, with a sidebar on the right
    • Logo, site name and description at the top of the sidebar on desktop, under the navigation bar on smaller screens
    • Navigation menu with two-levels deep dropdown
    • Designed with Twitter Bootstrap, giving a pure Bootstrap experience
    • Customize logo and favicon in Appearance > Customize
    • Customize background color and image in Appearance > Customize (at your own risk!)
    • Add custom styles with the built-in editor in Appearance > Editor
    • Support for some Jetpack features
    • Languages: English

    Now I’m working on a new huge release which will bring a revamped design along with a number of useful features, including:

    • Revamped design with Twitter Bootstrap 3.0
    • Responsive search form in the header
    • Author avatar and bio under articles in single post view
    • Better translation support (and new languages too)
    • Improved 404 page

    You can expect it in a couple of months, as there’s a lot to do and this is just a hobby. Be sure to follow the updates here on my blog and feel free to contribute through the GitHub repo!


    GitHub Melany v0.5.15


    August 22, 2013 10:35

    Adolfo Jayme Barrientos: On NOT leaving Ubuntu

    I think we must stop confusing Ubuntu the Product and Ubuntu the Project.

    Some ex-contributors to Ubuntu cite the alleged “change of direction” of Canonical. With due respect, that is rubbish. If you truly thought Canonical was a charity, sorry, but you’re being kind of a fool. Canonical is a company—and a cool one, I have to say—and as such it seeks profitability. Even Mozilla has that goal, because it’s a company as well. And that’s fine.

    It’s perfectly fine to stop contributing to Ubuntu because you’re burned out. That’s okay: that can happen, and life and personal interests change. But it’s a lie that Canonical has changed Ubuntu the Product so that it is now more closed and disregards the community. It’s simple: if it had, I wouldn’t even be able to post to Planet Ubuntu. Or, put it like this: Planet Ubuntu wouldn’t exist at all.

    Honestly, I don’t have issues with the way Canonical is managing Ubuntu, simply because it has not changed significantly since its inception in 2004. Ubuntu the Product has always been a Canonical-backed product with a community behind it*. Canonical spends a lot of money providing community members with many services, and that’s something I truly appreciate**. Besides, I’ve been welcomed here by people I don’t know in real life; it’s a great feeling when someone you don’t know considers you a valuable part of the project.

    And you can’t argue that Canonical is doing things differently from its competitors. For example, I contribute to Fedora as well, which is Red Hat’s “pet”, similar to Ubuntu. And Red Hat also spends money providing Fedora’s members with services. Both companies are welcoming to people. But if you really fear helping a company build a product (which is what I do), then you should not try it, because you’ll be disappointed. It’s a matter of clearly knowing what to expect when joining a certain project. And FWIW I expected way less than what Canonical has given me as a Project member, because I did not join for the certificate, or the mail address, or the web hosting, or the discounts on third-party websites, or [name your favorite membership benefit]… I joined “only” to improve my (second) language and computer skills and have fun, and that’s it. Joining has surpassed my expectations, and that’s why I’m here.

    So I am proudly an Ubuntu Member, and I won’t leave just because someone fears Canonical is going “closed”. Heck, I’m sure they aren’t, because they haven’t “fired” me!

    * “behind” as in backing it, not implying that it is less important. Duh…
    ** That’s maybe because of my country of origin, a third-world country full of corruption and filthy politicians, rich in natural resources but with extremely poor people. It affects your perspective: I am not accustomed to companies that give things away like this one does.

    August 22, 2013 08:32

    Strange Beaver

    Ultimate Shrek Face Prank

    This guy’s sister was out of the house, so he seized the opportunity to play the ultimate prank. He covered ALL of her One Direction pictures with Shrek faces. A total improvement.

    [Five photos: One Direction posters covered with Shrek faces]

    by Admin at August 22, 2013 08:22

    Planet Debian

    Petter Reinholdtsen: Second beta release (beta 1) of Debian Edu/Skolelinux based on Debian Wheezy

    The second Wheezy-based beta release of Debian Edu was wrapped up today, slightly delayed because of some bugs in the initial Windows integration fixes. This is the release announcement:

    New features for Debian Edu 7.1+edu0~b1 released 2013-08-22

    These are the release notes for Debian Edu / Skolelinux 7.1+edu0~b1, based on Debian with codename "Wheezy".

    About Debian Edu and Skolelinux

    Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation, a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after the initial installation of the main server from CD or USB stick, all other machines can be installed via the network. The school server provides an LDAP database and Kerberos authentication service, centralized home directories, a DHCP server, a web proxy and many other services. The desktop contains more than 60 educational software packages, and more are available from the Debian archive; schools can choose between the KDE, Gnome, LXDE and Xfce desktop environments.

    This is the sixth test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

    ALERT: Alpha-based installations should reinstall or downgrade the versions of gosa and libpam-mklocaluser to the ones used in this beta release. Both alpha and beta0 based installations should reinstall or deal with gosa.conf manually; there are two options: (1) Keep gosa.conf and edit this file as outlined on the mailing list. (2) Accept the new version of gosa.conf and replace both contained admin password placeholders with the password hashes found in the old one (make a backup copy!). In both cases, every user needs to change their password to make sure a password is set for CIFS access to their home directory.

    Software updates

    • Added ssh askpass packages to the default installation, to ensure ssh works even without an attached tty.
    • Added the command-not-found package to the default installation, to make it easier to figure out where to find missing command-line tools. Please note that the command 'update-command-not-found' has to be run as root to actually make it useful (internet access required).

    Other changes

    • Adjusted the USB stick ISO image build to include every tool needed for desktop=xfce installations.
    • Adjust thin-client-server task to work when installing from USB stick ISO image.
    • Made new grub artwork (changed png from indexed to RGB format).
    • Minor cleanup in the CUPS setup.
    • Make sure that bootstrapping of the Samba domain really happens during installation of the main server and adjust SID handling to cope with this.
    • Make Samba passwords changeable (again) via GOsa².
    • Fix generation of LM and NT password hashes via GOsa² to avoid empty password hashes.
    • Adapted Samba machine domain joining to latest change in the smbldap-tools Perl package, fixing bugs blocking Windows machines from joining the Samba domain.

    Known issues

    • KDE fails to understand the wpad.dat file provided, causing it to not use the http proxy as it should.
    • Chromium also fails to use the proxy when using the KDE desktop (using the KDE configuration).

    Where to get it

    To download the multiarch netinstall CD release you can use

    The MD5SUM of this image is: 1e357f80b55e703523f2254adde6d78b
    The SHA1SUM of this image is: 7157f9be5fd27c7694d713c6ecfed61c3edda3b2

    To download the multiarch USB stick ISO release you can use

    The MD5SUM of this image is: 7a8408ead59cf7e3cef25afb6e91590b
    The SHA1SUM of this image is: f1817c031f02790d5edb3bfa0dcf8451088ad119
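    Before installing, it is worth checking the download against the published sums. A minimal sketch of the mechanics with a stand-in file (the file name and contents here are assumptions for illustration; substitute the real ISO and the MD5SUM printed above):

```shell
# Create a stand-in file and verify it against a known-good MD5, exactly
# as you would verify the downloaded ISO against the published MD5SUM.
printf 'hello' > /tmp/fake-image.iso
echo "5d41402abc4b2a76b9719d911017c592  /tmp/fake-image.iso" | md5sum -c -
```

    md5sum -c prints "OK" per file and exits non-zero on a mismatch, so it is safe to use in scripts.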

    How to report bugs

    August 22, 2013 07:30

    ASCII Art Farts


       _   ,--()                                                     
      ( )-'-.------|>      HAPPY VALENTINE'S DAY !!!!!!!!!!!!!!!!!!!!
       "     `--[]                                                   

    by (ASCII Art Farts: de) at August 22, 2013 07:00

    Planet Debian

    Jose Luis Rivas: Starting a Raspberry Pi without a display

    Recently (actually three weeks ago) I bought a Raspberry Pi for myself, and it wasn't until today that I powered it on for the first time. Call it RealLife™ for simplicity.

    Anyway, one of my first issues was that I do not own any display with RCA or HDMI inputs, and all I have is a router that also works as my WiFi hotspot. Since this starter kit comes with Raspbian by default, I did some research and found that the default user:password is pi:raspberry, the ssh server is on by default, and the eth0 network interface tries to grab an IP automatically via DHCP.

    So starting my Raspberry Pi was as easy as connecting a cable between the device and the router; then a quick nmap -sP and there it was, under the hostname raspberry, my brand new Raspberry.

    After an ssh pi@192.168.0.X, a nice raspi-config greeted me to start setting up my device.
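    The headless-discovery step can be scripted. A minimal sketch; the subnet and the canned nmap grepable output below are assumptions for illustration (a real run would use `nmap -sP -oG - 192.168.0.0/24`):

```shell
# Parse nmap's grepable (-oG) ping-scan output and pull out the IP of
# the host whose reverse-DNS name looks like a Raspberry Pi.
scan_output='Host: 192.168.0.1 (router) Status: Up
Host: 192.168.0.42 (raspberrypi) Status: Up'
pi_ip=$(printf '%s\n' "$scan_output" | awk '/raspberr/ {print $2}')
echo "$pi_ip"
ssh_cmd="ssh pi@$pi_ip"   # then log in with the default pi:raspberry
echo "$ssh_cmd"
```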

    August 22, 2013 06:42

    Kernel Planet

    Matthew Garrett: If you ever use text VTs, don't run XMir right now

    It'd be easy to assume that in a Mir-based world, the Mir server receives input events and hands them over to Mir clients. In fact, as I described here, XMir uses standard Xorg input drivers and so receives all input events directly. This led to issues like the duplicate mouse pointer seen in earlier versions of XMir - as well as the pointer being drawn by XMir, Mir was drawing its own pointer.

    But there are also some subtler issues. Mir recently gained a fairly simple implementation of VT switching: it simply listens for input events where a function key is hit while the ctrl and alt modifiers are set[1]. It then performs the appropriate ioctl on /dev/console and the kernel switches the VT. The problem here is that Mir doesn't tell XMir that this has happened, and so XMir still has all of its input devices open and still pays attention to any input events.

    This is pretty easy to demonstrate. Open a terminal or text editor under XMir and make sure it has focus. Hit ctrl+alt+f1 and log in. Hit ctrl+alt+f7 again. Your username and password will be sitting in the window.

    This is Launchpad bug 1192843, filed on the 20th of June. A month and a half later, Mir was added to the main Ubuntu repositories. Towards the bottom, there's a note saying "XMir always listening to keyboard, passwords may appear in other X sessions". This is pretty misleading, since "other X sessions" implies that it's only going to happen if you run multiple X sessions. Regardless, it's a known bug that can potentially leak user passwords.

    So it's kind of odd that that's the only mention of it, hidden in a disused toilet behind a "Doesn't work on VESA" sign. If you follow the link to installation instructions you get this page which doesn't mention the problem at all. Now, to be fair, it doesn't mention any of the other problems with Mir either, but the other problems merely result in things not working rather than your password ending up in IRC.

    This being developmental software isn't an excuse. There's been plenty of Canonical-led publicity about Mir and people are inevitably going to test it out. The lack of clear and explicit warnings is utterly inexcusable, and these packages shouldn't have landed in the archive until the issue was fixed. This is brutally irresponsible behaviour on the part of Canonical.

    So, if you ever switch to a text VT, either make sure you're not running XMir at the moment or make sure that you never leave any kind of network client focused when you switch away from X. And you might want to check IRC and IM logs to make sure you haven't made a mistake already.

    [1] One lesser-known feature of X is that the VT switching events are actually configured in the keymap. ctrl+alt+f1 defaults to switching to VT1, but you can remap any key combination to any VT switch event. Except, of course, this is broken in XMir because Mir catches the keystroke and handles it anyway.


    August 22, 2013 06:36


    Planet HantsLUG

    Jeremy Zawodny

    Aircraft Fighting the Rim Fire, seen from Pine Mountain Lake

    Long time no blog.  With the Rim Fire raging up here, I’ve been active on Facebook and Twitter, though.

    We shot some pictures of the fire fighting aircraft this evening from the Pine Mountain Lake Marina before dinner.

    [Photos: DC-10 fire bomber (two views); closer view of smoke clouds; C-130 against smoke clouds; C-130 fire bomber; helicopter with water; smoke clouds]

    by Jeremy Zawodny at August 22, 2013 05:23

    Ask Debian

    what image do I download??

    What image do I download, and how do I dd it to a USB stick so I can install? I have a Dell XPS M1210 laptop with an Intel Core 2 CPU.

    I have installed Ubuntu and SUSE a thousand times, but cannot figure out from this stupid site what to download and how to install it. i386???

    by jdieter at August 22, 2013 04:30

    Planet Debian

    Martín Ferrari: Setting up my server: re-installing on an encrypted LVM

    Very long post ahead (sorry for the wall of text), part of a series of posts on some sysadmin topics, see post 1 and post 2. I want to show you how I set up my tiny dedicated server to have encrypted partitions, and to reinstall it from scratch. All of this without ever accessing the actual server console.


    As much as my provider may have gold standards on how to do things (they don't, there are some very bad practises in the default installation, like putting their SSH key into root's authorized_keys file), I wouldn't trust an installation done by a third party. Also, I wanted to have all my data securely encrypted.

    I know this is not perfect, and there are possible attacks. But I think it is a good barrier to have to deter entities without big budgets from getting my data.

    I have done this twice on my servers, and today I was reviewing each step as a friend was doing the same thing (with some slight differences) on his brand new server, so I think this is all mostly correct. Please tell me if you find a bug in this guide.

    This was done on my 12 £/month Kimsufi dedicated server, sold by OVH (see my previous post on why I chose it), and some things are specific to them. But you can do the same thing with any dedicated server that has a rescue netboot image.

    The process is to boot into the rescue image (this is of course a weak link, as the image could have a keylogger, but we have to stop the paranoia at some point), manually partition the disk, set-up encryption, and LVM; and then install a Debian system with debootstrap.

    To be able to unlock the encrypted disks, you will have to ssh into the server after a reboot and enter the passphrase (this is done inside the initrd phase). Once unlocked, the normal boot process continues.

    If anything fails, you end up with an unreachable system: it might or might not have booted, the disk might or might not be unlocked, etc. You can always go back into the rescue netboot image, but that does not allow you to see the boot process. Some providers will give you real remote console access; OVH charges you silly money for that.

    They used to offer a "virtual KVM", which was a bit of a kludge, but it worked: another netboot image that started a QEMU connected to a VNC server, so by connecting to the VNC server, you would be able to interact with the emulated boot process, but with a fake BIOS and a virtual network. For some unspecified reason they've stopped offering this, but there is a workaround available. The bottom line is, if you have some kind of rescue netboot image, you can just download and run QEMU on it and do the same trick.
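    The rescue-side part of that trick is essentially a one-liner. A sketch (the disk device, memory size and VNC display are assumptions, and the leading echo makes it a dry run; remove it to actually launch QEMU from the rescue image):

```shell
# Boot the server's real disk inside QEMU and expose the emulated
# console on VNC display :0 (TCP port 5900), which you can then reach
# with any VNC client (e.g. over an SSH tunnel).
cmd="qemu-system-x86_64 -hda /dev/sda -m 512 -vnc :0"
echo "$cmd"
```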

    The gritty details

    Start by netbooting into your rescue image. For OVH, you'd go to the control panel, in the Services/Netboot section and select "rescue pro". Then reboot your server. OVH will mail you a temporary password when it finishes rebooting.

    Connect to it, without saving the temporary SSH key:

    $ ssh -oUserKnownHostsFile=/dev/null -oStrictHostKeyChecking=no root@${IP}

    For the rest of the text, I am assuming you have one hard drive called /dev/sda. We start by partitioning it:

    # fdisk /dev/sda

    Start a new partition table with o, then create two primary partitions: a small one for /boot at the beginning (100 to 300 MB will do), and a second one with the remaining space. Set both as type 83 (Linux), and don't forget to activate the first one, as these servers refuse to boot from the hard drive without that.

    Create the file system for /boot, and the encrypted device:

    # mkfs.ext4 /dev/sda1
    # cryptsetup -s 512 -c aes-xts-plain64 luksFormat /dev/sda2

    The encryption parameters are the same as the ones used by the Debian Installer by default, so don't change them unless you really know what you are doing. You will need to type a passphrase for the encrypted device; be sure not to forget it! This passphrase can later be changed (or secondary passphrases added) with the cryptsetup tool (the luksChangeKey and luksAddKey subcommands).

    Look up the crypt device's UUID, and save it for later:

    # cryptsetup luksDump /dev/sda2 | grep UUID:
    UUID:           xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

    Open the encrypted device (type the passphrase again), and set up the LVM volume group:

    # cryptsetup luksOpen /dev/sda2 sda2_crypt
    # pvcreate /dev/mapper/sda2_crypt
    # vgcreate vg0 /dev/mapper/sda2_crypt

    Create the logical volumes, this is of course a matter of personal taste and there are many possible variations. This is my current layout, note that I put most of the "big data" in /srv.

    # lvcreate -L 500m -n root vg0
    # lvcreate -L 1.5g -n usr vg0
    # lvcreate -L 3g -n var vg0
    # lvcreate -L 1g -n home vg0
    # lvcreate -L 10g -n srv vg0
    # lvcreate -L 500m -n swap vg0
    # lvcreate -L 100m -n tmp vg0
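    The same layout can also be driven from a small size/name table, which makes it easier to tweak. A sketch (the echo makes it a dry run that only prints the commands; drop the echo to create the volumes for real):

```shell
# Emit one lvcreate invocation per line of the table below.
lvcreate_cmds=$(while read -r size name; do
  echo "lvcreate -L $size -n $name vg0"
done <<'EOF'
500m root
1.5g usr
3g var
1g home
10g srv
500m swap
100m tmp
EOF
)
printf '%s\n' "$lvcreate_cmds"
```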

    Some possible variations:

    • You can decide to use a ramdisk for /tmp, so instead of creating a logical volume, you would add RAMTMP=yes to /etc/default/tmpfs.
    • You can merge / and /usr in one same partition, as neither of them change much.
    • You can avoid having swap if you prefer.
    • You can put /home in /srv, and bind mount it later.

    Now, create the file systems, swap space, and mount them in /target. Note that I like to use human-readable labels.

    # for i in home root srv tmp usr var; do 
      mkfs.ext4 -L $i /dev/mapper/vg0-$i; done
    # mkswap -L swap /dev/mapper/vg0-swap
    # mkdir /target
    # mount /dev/mapper/vg0-root /target
    # mkdir /target/{boot,home,srv,tmp,usr,var}
    # mount /dev/sda1 /target/boot
    # for i in home srv tmp usr var; do
      mount /dev/mapper/vg0-$i /target/$i; done
    # swapon /dev/mapper/vg0-swap

    Don't forget to set the right permissions for /tmp.

    # chmod 1777 /target/tmp

    If you want to do the /home on /srv, you'll need to do this (and then copy to /etc/fstab):

    # mkdir /target/srv/home
    # mount -o bind /target/srv/home /target/home

    The disk is ready now. We will use debootstrap to install the base system. The OVH image carries it, otherwise consult the relevant section in the Install manual for details. It is important that at this point you check that you have a good GPG keyring for debootstrap to verify the installation source, by comparing it to a good one (for example, the one in your machine):

    # gpg /usr/share/keyrings/debian-archive-keyring.gpg
    pub  4096R/B98321F9 2010-08-07 Squeeze Stable Release Key <>
    pub  4096R/473041FA 2010-08-27 Debian Archive Automatic Signing Key (6.0/squeeze) <>
    pub  4096R/65FFB764 2012-05-08 Wheezy Stable Release Key <>
    pub  4096R/46925553 2012-04-27 Debian Archive Automatic Signing Key (7.0/wheezy) <>

    Now, for the actual installation. You can use any Debian mirror, OVH has their own in the local network. In OVH's case it is critical to specify the architecture, as the rescue image is i386. I didn't notice that and had to painfully switch architectures in place (which was absolutely not possible a couple of years ago).

    # debootstrap --arch amd64 wheezy /target

    After a few minutes downloading and installing stuff, you almost have a Debian system ready to go. Since this is not D-I, we still need to tighten a few screws manually. Let's mount some needed file systems, and enter the brand new system with chroot:

    # mount -o bind /dev /target/dev
    # mount -t proc proc /target/proc
    # mount -t sysfs sys /target/sys
    # TERM=xterm-color LANG=C.UTF-8 chroot /target /bin/bash

    The most critical part now is to correctly record the parameters for the encrypted device, and for the partitions and logical volumes. You'll need the UUID you saved before (note that we are inside the chroot now, so the path is /etc/crypttab):

    # echo 'sda2_crypt UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks' \
      > /etc/crypttab

    Create the file systems table in /etc/fstab. Here I use labels to identify the devices:

    # file system   mount point type    options             dump    pass
    LABEL=root      /           ext4    errors=remount-ro   0       1
    LABEL=tmp       /tmp        ext4    rw,nosuid,nodev     0       2
    LABEL=var       /var        ext4    rw                  0       2
    LABEL=usr       /usr        ext4    rw,nodev            0       2
    LABEL=home      /home       ext4    rw,nosuid,nodev     0       2
    # Alternative home in /srv:
    #/srv/home      /home       auto    bind                0       0
    LABEL=srv       /srv        ext4    rw,nosuid,nodev     0       2
    LABEL=boot      /boot       ext4    rw,nosuid,nodev     0       2
    LABEL=swap      none        swap    sw                  0       0

    You can also just use the device mapper names (/dev/mapper/<volume_group>-<logical_volume>), but be sure not to use the /dev/<volume_group>/<logical_volume> naming, as some initrd tools choke on it.

    # file system           mount point type    options             dump    pass
    /dev/mapper/vg0-root    /           ext4    errors=remount-ro   0       1
    /dev/mapper/vg0-tmp     /tmp        ext4    rw,nosuid,nodev     0       2
    /dev/mapper/vg0-var     /var        ext4    rw                  0       2
    /dev/mapper/vg0-usr     /usr        ext4    rw,nodev            0       2
    /dev/mapper/vg0-home    /home       ext4    rw,nosuid,nodev     0       2
    # Alternative home in /srv:
    #/srv/home              /home       auto    bind                0       0
    /dev/mapper/vg0-srv     /srv        ext4    rw,nosuid,nodev     0       2
    /dev/sda1               /boot       ext4    rw,nosuid,nodev     0       2
    /dev/mapper/vg0-swap    none        swap    sw                  0       0

    Some tools depend on /etc/mtab, which now is just a symbolic link:

    # ln -sf /proc/mounts /etc/mtab

    Now configure the network. You can most likely use DHCP, but you might prefer a static configuration; that's a personal choice. For DHCP, it is very straightforward:

    # cat >> /etc/network/interfaces
    auto eth0
    iface eth0 inet dhcp

    For static configuration, first find the current valid addresses and routes as obtained by DHCP:

    # ip address
    # ip route

    And then store them:

    # cat >> /etc/network/interfaces
    auto eth0
    iface eth0 inet static
        address AAA.BBB.CCC.DDD/24
        gateway AAA.BBB.CCC.254
        pre-up /sbin/ip addr flush dev eth0 || true

    Note the pre-up command I added: it removes the configuration done by the kernel during boot (more on that later); otherwise ifupdown will complain about existing addresses.

    If your provider offers IPv6, add it too. For OVH, the IPv6 set-up is a bit weird, and you need to add the routes in post-up. Your default gateway is your /64 prefix with its last byte replaced by ff, followed by :ff:ff:ff:ff. As you can see, that gateway is not in your network segment, so you need to add an explicit route to it. They have some documentation about this, but it is completely unreadable.

    If your IPv6 address is 2001:41D0:dead:beef::1/64, you will add:

    iface eth0 inet6 static
        address 2001:41D0:dead:beef::1/64
        post-up /sbin/ip -6 route add 2001:41D0:dead:beff:ff:ff:ff:ff dev eth0
        post-up /sbin/ip -6 route add default via 2001:41D0:dead:beff:ff:ff:ff:ff
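
The gateway derivation described above can be sketched in shell (illustrative only; the sed expression just rewrites the last byte of the prefix):

```shell
# Hypothetical helper: derive the OVH IPv6 gateway from the /64 prefix.
prefix="2001:41D0:dead:beef"                     # your /64 prefix
gw="$(echo "$prefix" | sed 's/..$/ff/'):ff:ff:ff:ff"
echo "$gw"                                       # 2001:41D0:dead:beff:ff:ff:ff:ff
```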

    You probably don't want the auto-configured IPv6 addresses, so disable them via sysctl:

    # cat >> /etc/sysctl.conf
    # Disable IPv6 autoconf 
    net.ipv6.conf.all.autoconf = 0
    net.ipv6.conf.default.autoconf = 0
    net.ipv6.conf.eth0.autoconf = 0
    net.ipv6.conf.all.accept_ra = 0
    net.ipv6.conf.default.accept_ra = 0
    net.ipv6.conf.eth0.accept_ra = 0

    To have a working DNS resolver, we can use the local server (OVH in this case):

    # cat > /etc/resolv.conf 
    search $DOMAIN

    The most important part of a new install: choose a host name (and make the system use it).

    # echo $HOSTNAME > /etc/hostname
    # hostname $HOSTNAME
    # echo "127.0.1.1 $HOSTNAME.$DOMAIN $HOSTNAME" >> /etc/hosts

    If we want to specify that the BIOS clock uses UTC:

    # echo -e '0.0 0 0.0\n0\nUTC' > /etc/adjtime

    Set up your time zone:

    # dpkg-reconfigure tzdata

    Configure APT with your preferred mirrors (here I use the default Debian ones; OVH also provides a local mirror). I also prevent APT from installing recommends by default.

    # echo deb http://ftp.debian.org/debian wheezy main contrib non-free \
      >> /etc/apt/sources.list
    # echo deb http://ftp.debian.org/debian wheezy-updates main contrib non-free \
      >> /etc/apt/sources.list
    # echo deb http://security.debian.org/ wheezy/updates main contrib non-free \
      >> /etc/apt/sources.list
    # echo 'APT::Install-Recommends "False";' > /etc/apt/apt.conf.d/02recommends
    # apt-get update

    Before installing any package, let's make sure that the initial ram disk (initrd) that is going to be created will let us connect: there will be no chance of typing the root password during boot. Your public key is usually found in $HOME/.ssh/.

    # mkdir -p /etc/initramfs-tools/root/.ssh/
    # echo "$YOUR_PUB_RSA_KEY" > /etc/initramfs-tools/root/.ssh/authorized_keys

    If you change this, or the host key stored at /etc/dropbear/dropbear_*_host_key, the /etc/crypttab, or any other critical piece of information for the booting process, you need to run update-initramfs -u.

    Now we can install the missing pieces:

    # apt-get install makedev cryptsetup lvm2 ssh dropbear busybox \
      initramfs-tools locales linux-image-amd64 grub-pc kbd console-setup

    During the installation you will be asked where to install grub; I recommend installing it directly to /dev/sda. The magic initrd will also be created. We want to double-check that it has all the important pieces for a successful boot:

    # zcat /boot/initrd.img-3.2.0-4-amd64 | cpio -t conf/conf.d/cryptroot \
      etc/lvm/lvm.conf etc/dropbear/\* root/.ssh/authorized_keys sbin/dropbear

    All these files need to be there. Most critically, we need to check that the cryptroot file has the right information to access the root file system:

    # zcat /boot/initrd.img-* | cpio -i --to-stdout conf/conf.d/cryptroot
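
If I remember the hook's output format correctly, the extracted cryptroot should contain a single comma-separated line along these lines (the values here are illustrative, matching the set-up above):

    target=sda2_crypt,source=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,key=none,rootdev,lvm=vg0-root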

    If all that was correct, we now need to tell the kernel to configure the network as early as possible, so we can connect to the initrd and unlock the disks. This is done by passing a command-line option through grub, and it should match what was done in /etc/network/interfaces: either DHCP or static configuration. The ip= option uses the kernel's <client>:<server>:<gateway>:<netmask>:<hostname>:<device>:<autoconf> syntax. For DHCP, this line should be changed in /etc/default/grub:

    GRUB_CMDLINE_LINUX="ip=:::::eth0:dhcp"
    For static configuration (using the same placeholder addresses as above):

    GRUB_CMDLINE_LINUX="ip=AAA.BBB.CCC.DDD::AAA.BBB.CCC.254:255.255.255.0::eth0:none"
    It is also a good idea to disable the quiet boot and the graphical boot splash, in case we need to use QEMU to fix some booting issue:

    GRUB_CMDLINE_LINUX_DEFAULT=""
    GRUB_TERMINAL=console
    And make the changes effective:

    # update-grub2

    Having fsck fix problems automatically can be a life-saver too:

    # echo FSCKFIX=yes >> /etc/default/rcS

    Get some very useful packages:

    # apt-get install vim less ntpdate sudo

    Create a user for yourself, and possibly make it an administrator:

    # adduser tincho
    # adduser tincho sudo
    # adduser tincho adm

    That's mostly it. Exit the chroot, and unmount everything:

    # exit  # the chroot.
    # umount /target/{dev,proc,sys,boot,home,srv,tmp,usr,var}
    # umount /target
    # swapoff -a
    # lvchange -an /dev/mapper/vg0-*
    # cryptsetup luksClose sda2_crypt

    Disable the netboot option from your administration panel, reboot, and hope it all goes well.

    If you followed every step carefully, you should be able to ping your server a few minutes later. Use this snippet to enter the passphrase remotely:

    $ stty -echo; ssh -o UserKnownHostsFile=$HOME/.ssh/known_hosts.initramfs \
      -o BatchMode=yes root@"$HOST" 'cat > /lib/cryptsetup/passfifo'; \
      stty echo

    It is very important that you close the pipe (control-D, twice) without typing enter. For my servers, I have a script that reads the passphrase from a GPG-encrypted file and pipes it directly into the remote server; that way, I only type the GPG passphrase locally:

    $ cat 
    BASE="$(dirname "$0")"
    gpg --decrypt "$BASE"/key-"$HOST".gpg | \
        ssh -o UserKnownHostsFile="$BASE"/known_hosts.initramfs -o BatchMode=yes \
            root@"$HOST" 'cat > /lib/cryptsetup/passfifo'

    It might be a good idea to create a long, impossible-to-guess passphrase to use in the GPG-encrypted file, one that you can also print and store somewhere safe. See the luksAddKey action in the cryptsetup(8) man page.
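
A sketch of that workflow (the recipient, file names, and device are illustrative assumptions; luksAddKey and the gpg flags are real):

```shell
# Generate a long random passphrase and keep only a GPG-encrypted copy.
dd if=/dev/urandom bs=64 count=1 2>/dev/null | base64 -w0 > passphrase.txt
gpg --encrypt --recipient you@example.org \
    --output key-myserver.gpg passphrase.txt
# Register it as an additional LUKS key, then destroy the cleartext.
cryptsetup luksAddKey /dev/sda2 passphrase.txt
shred -u passphrase.txt
```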

    Once again, if everything went right, a few seconds later the OpenSSH server will replace the tiny dropbear and you will be able to access your server normally (and with the real SSH host key).

    Hope you find this article helpful! I would love to hear your feedback.

    August 22, 2013 04:25

    Planet Sysadmin

    Chris Siebenmann: I've changed my thinking about redundant power supplies

    Back almost at the start of Wandering Thoughts, I wrote an entry in which I was pretty negative on redundant power supplies. Since I'm busy specifying redundant power supplies for our new generation of fileserver hardware, I think it's about time I admitted something: now that I'm older and somewhat wiser, I'm changing my mind. Redundant power supplies can be quite worth it. In fact I was at least partially wrong back then.

    (In my defense, at the time I had very little experience with decent server hardware for reasons that do not fit in the margins of this entry but boil down to 'hardware budget? what's that?'. In retrospect this shows quite vividly in parts of that old entry.)

    It's still true that in theory there are plenty of bits of hardware that can break in your server (and the power supplies in our servers have been very reliable). But in practice we've suffered several power supply failures (especially in our backend disk enclosures), and they are probably either the first or second most common cause of hardware failures around here. Apart from the spinning rust of system drives, those other bits of fragile hardware have almost never failed for us.

    (Also, an increasing amount of server hardware effectively has some amount of redundancy for the other breakage-prone parts. For example, the whole system (CPUs included) may be passively cooled through multi-fan airflow; if one fan fails, alarms go off but there's enough remaining airflow and cooling that the system doesn't die.)

    There's also an important second thing that redundant power supplies enable for crucial servers: they let you deal easily with various sorts of UPS issues (as I noted in that entry). As we both want UPSes and have had UPS problems in the past, this is an important issue for us. We have a solution now but it adds an extra point of failure; redundant power supplies would let us get rid of it.

    There is also a pragmatic side of this. In practice hardware with redundant hot swappable power supplies is almost always simply better built in general (power supplies included). Part of our disk enclosure power supply problems likely come from the fact that the power supplies are generic PC power supplies that have had to power 12 disks on a continuous basis for years. Given our much better experience with server power supplies it seems likely that a better grade of power supply would improve things in general.

    (Part of this is probably just that hot-swap server power supplies are less generic and thus more engineered than baseline PC power supplies.)

    I'm now all for redundant power supplies in sufficiently important servers. However I'm still not sure that I'd put redundant power supplies into most of our servers unless I got them essentially for free; many of our servers are not quite that important, and for some we already have server-level redundancy.

    August 22, 2013 04:22