Archive:


Native advertising: you can’t rent your credibility, you can only sell it

You may have heard that lots of publications are having a hard time figuring out how to make money online. It’s true; even though we’ve been at this Web thing now for twenty years, publishers still haven’t figured out a reliable way to turn large audiences into enough money to keep the lights on. So they’re understandably spooked — and ready to jump on any new approach that looks like it might solve the problem for them. And the new approach of the moment, the official New Hotness in the advertising world, is something called “native advertising.”

Why are publishers and advertisers excited about native advertising? Because, to put it crudely, people don’t look at regular ads anymore, but do look at “native” ones. Which makes them exciting for advertisers, who are, after all, trying to get people to look at their message. And that makes them exciting for publishers, since you can sell an ad that people are likely to look at for much more money than one that people are going to skip.

But what is it about “native” ads that makes them perform better than old-fashioned banners and the like? What is the native advertiser’s secret sauce? The answer is simple: native ads don’t look like ads. They look like content.

As with many new things, it can be difficult to pin down an exact definition of what native advertising is. One attempt goes like this:

Native advertising is a form of paid media where the ad experience follows the natural form and function of the user experience in which it is placed.

What this means in plain English is simple: it’s an ad that looks like (“follows the natural form and function of the user experience”) something other than an ad.

So if it doesn’t look like an ad, what does it look like? Well, publications are really made up of only two things: ads and content. So if it doesn’t look like an ad, there’s only one thing left for it to look like: your content. Like something you wrote. Something you endorse.

This has already bitten people. The folks at The Atlantic, for instance, found it out the hard way when they sold a “native advertising” package to the Church of Scientology, which proceeded to use it to run a great big story about how awesome Scientology is — all on a page that looked more or less like any other Atlantic page. The magazine inserted a little box on the page to inform the reader that what they were reading was sponsored content rather than their own editorial content, but understandably, lots of people missed it.

And this gets to the heart of the problem. The reason native advertising works is that readers don’t understand that it is advertising. It looks like editorial content, so they give it the same attention they give editorial content. Which, since advertising is inevitably slanted and one-sided, can only end one of two ways: either readers eventually realize the bait-and-switch you’ve pulled and get angry with you for pulling it, or they never realize it, mistake the slanted piece for your journalism, and conclude that you’ve sold out your editorial objectivity.

Neither of these outcomes is great for your publication’s reputation.

The basic trade-off native advertising wants to make is simple. You, as the publisher of a publication, have a brand identity that has some degree of credibility as a purveyor of unbiased information. Advertisements that look like unbiased information get clicked. So the advertisers want, essentially, to rent your brand in order to wrap their pitch in your credibility.

But the thing about credibility is, it can’t be rented. Once you trade some amount of it for money, you can never really get it back. It’s gone. You have sold it. And eventually, after you’ve sold enough of it, you will inevitably run out.

This is not a theoretical process; we can watch it happen before our eyes in real time. Take, for example, one of the pioneers of native advertising, Forbes magazine. Forbes has turned its web site from an old-fashioned publication where ads and editorial are strictly segregated to a brave new model where they’re effectively indistinguishable. When you read an article on Forbes.com, it might be written by Forbes staff, by a “community contributor,” or by an advertiser. It’s on you to look for the little blurbs that indicate which is which. And if you miss them, and happen to think that an advertiser’s message is actually editorial content… well, you’ve made an advertiser very happy.

This creates serious tension between the interests of the publication and the interests of the reader. The less clearly the provenance of the story is disclosed — the tinier and easier to miss those disclaimers become — the more lucrative the ad slot becomes, because the whole point of the exercise is to confuse editorial and advertising. The less you foster that confusion, the less advertisers will be willing to pay you. But the more you foster it, the faster you deplete the store of value in your brand.

Forbes has made quite a bit of money with this approach. But what they’ve also done, for me at least, is demolish the credibility of Forbes as a brand. When someone sends me a link to a Forbes.com story, I can no longer tell at a glance whether that story is editorial content or an ad. So I retreat into a defensive crouch and assume that anything I read on Forbes.com is a sales pitch until I see some compelling proof that it isn’t.

Compare that reaction to, say, how I react when sent a page from NYTimes.com. I tend to trust those links, because I know that the New York Times logo indicates that the story has gone through an editorial vetting process. The success of that process may vary, but I can at minimum be sure that one thing the story definitely is not is an advertisement. (The only exception would be opinion pieces, but those are segregated from news content in their own section.) The Forbes logo, by contrast, has been degraded in value so successfully that today it basically means nothing. It’s certainly not a mark of quality, or accuracy, or trust. It’s just an umbrella under which vendors hawk their wares.

And since credibility can only be sold, not rented, that loss of institutional credibility would be hard to reverse if Forbes should ever decide they want to do so. They’re successfully training a generation of readers that the Forbes name and logo don’t provide any guarantees to the reader. Even if they were to suddenly stop the native ads and go back to clearly dividing ads and editorial, the impression that Forbes can’t by default be trusted will linger in that generation’s minds for years, maybe decades. They’d have to earn that trust — trust that they painstakingly built up over nearly one hundred years of publication — back again, from scratch.

So, as publications rush to embrace native advertising as the future, I write to add one small voice urging them not to. It is eating your seed corn; it is mortgaging your future; it is suicide. A publication whose audience does not trust it to deal with them clearly and honestly will, eventually, lose that audience. It may take a while, and you may be able to make a great deal of money fooling them in the meantime. But eventually they will wise up, and what will you be left with, then?

Nothing. You will be left with nothing. Because you will have sold everything. And once it’s sold, it’s never coming back.


Requiem for a netbook

It’s hard to believe it’s been five years since I bought my last laptop.

It was back in 2008, and the machine — an Asus EEE 1000 — was part of a then-new category of computing devices. They were called “netbooks.” And they were awesome.

The basic idea behind the netbook was simple. The potential of a given laptop has always been constrained by a set of three factors: performance, weight, and price. Of those three, you get to pick two. So you can get a fast, cheap laptop, but it will be heavy; or an expensive, light laptop, but it will be relatively slow. You have to trade off one to get the other two, so picking a machine becomes a matter of determining which of the three factors you can live without.

Traditionally manufacturers came at this problem with two kinds of machines: light and expensive, and heavy and cheap. But the netbook was designed to create a third category: light and cheap.

It got there by sacrificing computing power. A lot of computing power. But the insight the designers of the netbook had was that for most people, modern computers are massively overpowered anyway. You don’t need a quad-core CPU to check your email and play Angry Birds. So by giving up that power, they were able to deliver machines that were both super-portable, and absurdly cheap. (If I remember correctly, mine cost around $400.)

Anyway, I didn’t expect much from that machine, given the low price. I figured it’d be fun to play with, but that I’d probably have to replace it with a “real” laptop in the near term. I was surprised to find as I used it, though, that this was not true. I never felt I was missing out on anything by using the netbook instead of a traditional laptop — especially after installing Ubuntu on it. It was more than adequate for my on-the-road computing needs. And because it was so small and light (and thoughtfully designed), it turned out to be pretty rugged as well. All the storage was solid-state (in 2008! I know, right?), so I could just throw it in a bag and go without having to worry about delicate hard disks. It even had a near-full-size keyboard, so I could touch-type without missing a step.

It was kind of the perfect mobile computer.

Anyway, lately, after all these years of reliable service, I started thinking that maybe its time had come. Not because the hardware ever failed, but just because operating systems and software have made huge advances in the last five years, and the EEE’s weak Atom processor is having more and more trouble keeping up. So I reluctantly started looking around for a 2013 replacement.

It turns out that there are none. Netbooks exploded in popularity in the late 2000s, but the PC makers never really liked them, for the simple reason that they were cheap. No OEM wants to sell $400 machines when they’re used to selling $2,000 ones. And then Apple showed them the way to keep those margins high. Apple never made a netbook. Instead, they attacked that market from two different angles: tablets for people with super-simple needs who were very price-sensitive, and super-sleek ultralight laptops for those who cared more about low weight and didn’t care about price. The strategy worked out well for them, and all the other OEMs stampeded to follow. So today there are really no machines made in the same “cheap and cheerful” style as the old netbooks; if you want something lightweight, you either buy a tablet for a few hundred bucks, or an “ultrabook” for four times as much.

So goes the world, I suppose. I need to do some work on the road, so a tablet was out; which meant ultrabook or nothing. So now I’m the owner of a shiny new Thinkpad X1 Carbon Touch, which I’m writing this post on.

But I kind of hate that I am. The X1 is a nice machine, don’t get me wrong. But it costs more than three times as much as the EEE did. While it’s very slick-looking, and quite rugged (in the classic ThinkPad style), it’s anything but cheap and cheerful. There’s no room in the market for cheap and cheerful anymore.

It’s too bad. My EEE always struck me as a modern incarnation of the spirit of an older machine, the TRS-80 Model 100. That was an extremely simple machine — just a tiny screen and a keyboard. But you could run a tank over it and it’d come up smiling, and it had a built-in modem so you could connect up to the home office anywhere there was a telephone line, and it ran on available-everywhere AA batteries, so it found a niche among people like foreign correspondents who needed simple and reliable more than they needed sexy.  And those folks found the Model 100 so useful they held on to them for decades — even long after technology had supposedly passed them by.

I like humility and loyalty more than swagger and flash. So inexpensive, tough machines like the EEE and the Model 100 are closer to my heart than brushed-aluminum top-dollar MacBooks. But the market, it turns out, doesn’t care much for humility. So I end up with a ThinkPad Carbon.

Am I complaining? Nah. The ThinkPad is a fine machine, at least so far. But the EEE was something special. And I wish that we lived in a world where “special” wasn’t a synonym for “doomed.”



Rorschach Theatre’s “Neverwhere”

I’ve been a theater nerd ever since I was a kid in high school. (Insert your “drama club dork” insults here.) I’m not really active in that scene anymore, but I still enjoy going to see a show, and like to tell people about it when I come across one that’s particularly good. So allow me to recommend Rorschach Theatre’s production of Neverwhere, which I caught last night.

Neverwhere is an adaptation of a story by Neil Gaiman, which appeared first as a BBC miniseries and then as a novel and a radio drama. (The last of which had an extremely nerd-friendly cast, including Benedict Cumberbatch and Natalie Dormer.) It’s a sprawling fantasia about a London businessman, Richard Mayhew, whose act of kindness toward Door, an injured girl he finds on the street, takes him beyond everyday London to the subterranean realm of “London Below,” where people who have “fallen through the cracks” scratch out a living alongside angels and monsters. He finds that the only way to get back to “London Above” is to help Door find those who wronged her and her family; but the underground world is perilous, and powerful forces seek to stop Door from uncovering the truth. Both Richard and Door must decide how far they are willing to go, and how much they are willing to give up, in order to get what they think they want.

I’m familiar with Gaiman’s work, but I hadn’t encountered Neverwhere in any of its other incarnations, so I was a bit worried going into Rorschach’s production that it would fall into the all-too-common trap of adaptations and essentially be fan service — something that would only make sense if you were already familiar with the original. Happily, that was not the case; Rorschach’s version is terrifically enjoyable standing on its own two feet. The dream-logic of London Below is communicated clearly and efficiently, and you never find yourself scratching your head wondering why characters are doing what they are doing.

The performances were uniformly strong as well. Richard (played by Daniel Corey) and Door (Sarah Taurchini) are presented with understated gravity, while supporting characters like underworld assassins Mr. Croup and Mr. Vandemar (Colin Smith and Ryan Tumulty), and the flamboyant Marquis de Carabas (Grady Weatherford) take more outsized, comic turns. The combination is effective in putting the audience in Richard’s shoes as he tries to understand the strange dynamics between the stranger people he has suddenly found himself among.

As a theater nerd (see above), I was especially struck by the creativity of the set design. Rorschach is mounting the show in a typical black box theater, the kind of space that’s great for shows with simple staging requirements; but Neverwhere, which takes its characters across a fictional city, would not seem at first glance to be particularly simple. But the company works around that with clever use of the available space. The audience is seated in the round, but with aisles cut through the seats leading back to corners of the space, allowing the actors to make dramatic entrances and exits just by dashing up and down the aisles. And the walls of the space are fortified with risers at various heights, allowing the actors to traverse the theater by clambering up ladders and down chutes. The result is that a space you’d think would be more suited to static drawing-room dramas feels broad, open and kinetic; the show runs nearly three hours, but there’s so much going on at such velocity that it never loses its momentum. Smart lighting and sound design add to the effect as well.

The only thing I can think to ding them on is that the acoustics in the space aren’t great, and since characters are all over the place, sometimes it can be hard to hear what they’re saying if they’re on the other side of the theater from you. That’s a minor complaint, though — there was never a time when I flat-out couldn’t hear a line, just a few times when I had to work harder to do so than I’d have liked. It’s probably inevitable given the space, though. As someone who learned the ropes by having a high school drama club advisor shout “PROJECTION! PROJECTION!” at me from the back row, I feel their pain.

So, anyway: it’s a very good show. (And don’t just take my word for it — the Washington Post liked it too.) If you’re in DC, you should go see it. Tickets are available here. It’s in its last week before closing, but there are five more shows between now and then, so you have plenty of chances to catch it before it too vanishes into London Below. And if you’re like me, once you’ve seen it, you’ll be adding Rorschach Theatre to your list of DC theater companies to watch.


USB hard drives are bad and the people who make them should feel bad

Herewith, a brief rant to let you know that I am officially done with external USB hard drives.

Done, done, done. They are dead to me. Done.

I’ve bought many of these things over the years they’ve been in existence, and I have never once found one that worked properly. It seems like it should be a simple thing: you plug it in, it mounts as a hard drive, and you’re done. But in practice, at least with the ones I’ve had experience with, it never works out that way. You plug it in and it doesn’t mount, or it mounts but later unmounts itself, or it mounts but in a way the operating system doesn’t like, and you get spammed with dialogs warning you to fix something that doesn’t actually need to be fixed. Very frustrating.

Not to mention that the damn things commit suicide at a rate that would make you think they were workers at Foxconn. I’ve never had one that lasted longer than two years. Sometimes the hard drive itself fails, other times it’s the interface circuitry that sits between the drive and the USB port, but either way you’re hosed. And since most consumer drives of this type aren’t designed to let you get at the hard drive inside, you have to crack open the plastic case just to find out if the drive itself is recoverable or not. Unacceptable.

If I were a betting man, I’d bet there’s something about the form factor these units are built in that makes them so short-lived; maybe there’s not enough room (or tolerance for noise on the part of the customer) to allow them to put in enough fans to cool the thing properly, for example. I dunno. All I know is that I keep buying these things, swearing at them for a couple of years, and then watching them die and having to start the whole buying-swearing-dying cycle over again.

But no more. I’m done throwing money down the rabbit hole on these pieces of junk. If I need external storage I’ll go buy a NAS setup from Synology or the like instead; it’s more expensive, but at least it lets you put in redundant drives so if one fails you don’t lose all the data, and you can open the front panel and get at the drives like a civilized human being instead of having to attack the casing with a crowbar.

So, to summarize: external USB hard drives are bad, and the people who make them should feel bad.

Thank you for your time. We now return you to your regularly scheduled programming.


In praise of Red Letter Media

Red Letter Media

Mike Stoklasa (left) and Jay Bauman

I just wanted to take a moment in between posts as long as War and Peace to remind you that if you love movies, you really need to be following the work of Red Letter Media.

RLM is a small group of filmmakers in Milwaukee, Wisconsin, led by Mike Stoklasa and Jay Bauman. Their first work to hit it really big came back in 2009, when they released an absolutely amazing 70-minute video dissection of all the flaws in the script of Star Wars: Episode I – The Phantom Menace. The video, which featured the character of Harry S. Plinkett, an obsessive Star Wars fan slash serial killer, managed to amuse while also educating the viewer on what elements are needed to make a strong film screenplay. It was a YouTube sensation.

Unlike most YouTube hits, though, these guys proved themselves to be something other than one-hit wonders. They produced several more “Plinkett reviews” of similarly high quality, breaking down such films as the second and third Star Wars prequels, Avatar, Titanic, and Indiana Jones and the Kingdom of the Crystal Skull. They’re all hilarious, and will make you look at movies you probably think you know quite well in a completely new light.

But as their audience grew, they started branching out from the basic Plinkett format, launching Web video shows in other formats as well. Two of them have found a place among my favorite things to watch, in any medium.

Half in the Bag features Stoklasa and Bauman, as themselves, discussing movies in current release; recent episodes have covered Pacific Rim, Man of Steel, and Star Trek Into Darkness. While Half in the Bag discussions generally aren’t as in-depth as the Plinkett reviews are, the more casual format lets them cover more movies, and Stoklasa and Bauman prove to be engaging conversationalists; they’re smart and funny, so I could watch them talk about movies all day. I haven’t felt that way watching movie reviewers talk since the days when Siskel and Ebert would spar with each other on syndicated TV. (Sometimes they do still go deep on a single movie in a Half in the Bag episode, though; for an example, check out their must-see evisceration of the cynicism behind Adam Sandler’s Jack and Jill.)

Their other great show is newer; Best of the Worst, a panel discussion of terrible movies from the height of the direct-to-video era, just launched this year. In it, Stoklasa and Bauman are joined by a rotating cast of guest reviewers to talk about such classic films as Russian Terminator, R.O.T.O.R., and Thunderpants. If you’re a fan of bad movies, Best of the Worst is a gold mine; I’ve discovered tons of hilarious stink-burgers I’d never heard of before just by watching, including the absolutely jaw-dropping Miami Connection, which I raved about in this space not long ago.

So yeah — if you’re into movies, you’ll want to get familiar with these guys sooner rather than later, just so you can tell people you were a Red Letter Media fan before being a Red Letter Media fan was cool.


STOVL, the F-35, and how we’re even more f’ed than David Axe suggests

On his excellent blog War is Boring, defense correspondent David Axe has posted a very good long-form piece explaining how the F-35 — the very new, very expensive, very behind schedule next-generation jet fighter that is supposed to fill the squadrons of the U.S. Air Force, Navy, and Marine Corps over the coming decades — is actually a pretty terrible fighter, outclassed even by older Russian and Chinese designs. He worries that it’s so bad that it will get American pilots killed and American wars lost. Given how central the F-35 is to the planning of all three U.S. military aviation arms for at least the next thirty years, that’s a very valid concern.

In the piece, Axe tracks the F-35’s problems back to one primary source — the requirement by the Marine Corps that it incorporate STOVL (Short Take-Off, Vertical Landing) technology. For those who have better things to do with their lives than follow the minutiae of military technology, STOVL means a plane that can take off from a short, unprepared runway, and then land by dropping straight down, like a helicopter does.

(So if the plane can land straight down like a helicopter, I hear you thinking, why not have it take off straight up like a helicopter, too? The answer is that many STOVL planes actually can take off vertically. But doing so requires a lot more oomph from the engine, which in turn drastically limits the amount of weapons and fuel the plane can carry. A short, rolling take-off is easier on the engine, which frees up the extra power needed to bring all that stuff along.)

STOVL never caught on with the Air Force and Navy, but it became a central part of the Marines’ air doctrine when that service selected the AV-8B Harrier II “jump jet” in the 1980s.

When the proposal for a “joint strike fighter” (JSF) to fill the future needs of all three services came forward in the 1990s, the Marines said that if they were going to participate they would need a version of the JSF that had the same STOVL operational profile the Harrier did. The plane that came out of the JSF program, the F-35 Lightning II, therefore was designed in three versions: a smaller, lighter F-35A for the Air Force, a STOVL F-35B for the Marines, and an F-35C for the Navy that included tweaks for flying off aircraft carriers such as foldable wing-tips and an arrester hook.

To illustrate the limitations all variants of the F-35 have compared to potential adversaries, Axe recaps a 2008 war game, called “Pacific Vision,” in which analysts from the RAND Corporation simulated a war between the U.S. and China over the disputed territory of Taiwan. The results were not pretty:

America’s newest stealth warplane and the planned mainstay of the future Air Force and the air arms of the Navy and Marine Corps was no match for Chinese warplanes. Despite their vaunted ability to evade detection by radar, the JSFs were blown out of the sky. “The F-35 is double-inferior,” Stillion and Perdue moaned in their written summary of the war game, later leaked to the press.

The analysts railed against the new plane, which to be fair played only a small role in the overall simulation. “Inferior acceleration, inferior climb [rate], inferior sustained turn capability,” they wrote. “Also has lower top speed. Can’t turn, can’t climb, can’t run.” Once missiles and guns had been fired and avoiding detection was no longer an option — in all but the first few seconds of combat, in other words — the F-35 was unable to keep pace with rival planes.

And partly as a result, the U.S. lost the simulated war. Hundreds of computer-code American air crew perished. Taiwan fell to the 1s and 0s representing Chinese troops in Stillion and Perdue’s virtual world. Nearly a century of American air superiority ended among the wreckage of simulated warplanes, scattered across the Pacific.

So why did the F-35 perform so badly in that simulation? Axe traces the plane’s flaws back to a single main cause — the Marines’ insistence on a STOVL variant:

Engineering compromises forced on the F-35 by this unprecedented need for versatility have taken their toll on the new jet’s performance. Largely because of the wide vertical-takeoff fan the Marines demanded, the JSF is wide, heavy and has high drag, and is neither as quick as an F-16 nor as toughly constructed as an A-10. The jack-of-all-trades JSF has become the master of none.

And since the F-35 was purposely set up as a monopoly, replacing almost every other warplane in the Pentagon’s inventory, there are fewer and fewer true alternatives. In winning the 2001 competition to build the multipurpose JSF, Lockheed set a course to eventually becoming America’s sole active builder of new-generation jet fighters, leaving competitors such as Boeing pushing older warplane designs.

Which means that arguably the worst new jet fighter in the world, which one Australian military analyst-turned-politician claimed would be “clubbed like baby seals” in combat, could soon also be America’s only new jet fighter.

I think Axe is correct that the need to support STOVL forced engineering constraints on the designers of the F-35 that ended up limiting the performance of all three variants in bad ways. But I don’t think that “STOVL is bad” is necessarily the right moral to take away from the F-35 story, as he suggests. I think the real moral is much worse. It’s that American military doctrine in general, not just the F-35, is disconnected in fundamental ways from reality; and that that disconnect, if tested in a future war, will result in far more unnecessary casualties than just the pilots in the F-35 cockpits.

Why STOVL exists

But first, a brief digression to explain why STOVL, as a technology, exists at all.

In the olden days of flying, from the Wright Brothers up until mid-World War II, fighter planes were small, rugged craft. They were light enough and slow enough that they could take off and land from pretty much anywhere with reasonably flat terrain. World War I “aerodromes” were frequently just big, open grass fields. Even as late as the Battle of Britain, RAF Spitfires and Hurricanes were taking off to fight the Luftwaffe from grass airstrips.

The rush of aviation technology spurred by World War II — jet engines, air-to-air missiles, onboard radar sets — made fighters much deadlier than those older planes had ever been. But it also made them bigger and heavier; so heavy, in fact, that they could no longer safely take off and land from a farmer’s field. Supporting all that weight, and all the extra thrust that jet engines provided, required providing them with long, concrete airstrips. And as the decades wore on, the planes got even bigger, so the airstrips had to get even longer.

All of which began to make British defense planners a bit uneasy. Britain is one of the few nations in the world to have ever had a direct attack launched on its air-defense infrastructure, so they were sensitive to the idea that a chink in that armor could have catastrophic consequences in a future war. And some thinkers there began to worry that those long, concrete runways were just such a chink.

The thing about long concrete runways, of course, is that you can’t really move them around. Once you lay one down somewhere, it stays there. And in an age of satellite “eyes in the sky,” it’s difficult to prevent the other side from knowing where they are. Long airstrips clustered with supporting structures are easy to spot from above, as a cursory glance at Google Maps’ satellite view will attest.

So, as the Cold War deepened, those British defense planners started to worry about just how secure those beautiful runways really were. We should probably assume the Russians know where they all are, went the thinking, since they’re sort of obvious. And if they know where they all are, should we not assume that they would hit them all hard on the first morning of World War III?

If the Russians really were going to try to surge across the German frontier and seize Western Europe, in other words, NATO air power was supposed to be there to stop them. But the Russians knew that as well as NATO did. And if that air power was all shackled to a few locations known to the enemy, it would be a lot easier for the enemy to destroy all those planes on the ground by bombing the air bases than it would to try and shoot them down once they’d gotten airborne. It would only take a single tactical nuclear weapon to turn the runway and every plane parked around it to radioactive dust. And since the Soviets had plenty of tactical nuclear weapons, it was possible to imagine World War III starting with a devastating Pearl Harbor-style opening attack on NATO airfields across Western Europe. Even if the attack wasn’t 100% successful — even if it only destroyed, say, 50% of NATO’s air strength — it could still do more than enough damage to give the Soviets control of the skies over the battlefield. And since NATO’s defense strategy depended so much on air power, if that happened the war would be lost before it had even really begun.

This line of thinking led the British to start wondering about whether a jet fighter could be produced that could avoid being shackled to those dangerously seductive runways. If such a fighter existed, it would offer a way out of this strategic dilemma; when war appeared imminent, those squadrons using it could disperse from their airfields and operate, Spitfire-style, from green fields and stretches of highway. They could dodge the hammer that would fall on every other NATO aircraft in those opening hours of war.

So they went to work to see if such a fighter could be built, and the result was the Hawker Siddeley Harrier.

Judged as a fighter, the Harrier was unimpressive. Just about any other contemporary fighter could fly farther, or faster, or turn more nimbly, or carry more weapons. But that was because its designers had sacrificed all those other qualities to give it its one trump card — STOVL. It could operate from short, unprepared runways, which set it free from those long concrete airstrips. So when other NATO nations sniffed at the Harrier as overweight and underperforming, the British would just reply that all those other fighters might be better, but if the Soviets ever attacked they would also be quickly transformed into junk. The Harrier would survive, and the British figured that an unimpressive fighter that can fly and fight beats an impressive one that’s a flaming wreck on a nuked runway. (Which is kind of hard to argue with, when you think about it.)

Why STOVL appeals to the Marines

As noted above, the Harrier eventually found its way into the air arm of the U.S. Marine Corps. Unlike the Air Force, the Marines did not expect to be fighting in Western Europe in World War III; that was the Army’s job, not theirs. So the vulnerability of those long runways to Russian nukes didn’t concern them much.

But that didn’t mean that they weren’t concerned about the need for those long runways at all. For their entire modern history, the Marines have been what is referred to as an “expeditionary” force. In civilian terms, what that means is: when an unexpected conflict in some remote corner of the world pops up, the ones who get sent to deal with it are Marines. So if you’re a Marine, one of the few things you can say for certain about the challenges you’ll be called upon to face is that you won’t have a lot of infrastructure set up in the place where you have to face them. You’re going to land on a beach or be dropped in by helicopter somewhere far from any base, and you’ll have to fight there with whatever you can carry with you.

Given all that, the restriction imposed by being tied to fixed runways becomes obvious. It’s like the scene at the end of Back to the Future where Marty tells Doc Brown there isn’t enough road for them to get the DeLorean time machine up to 88 miles per hour, and Doc responds, “Roads? Where we’re going, we don’t need roads.” Where the Marines go, they had better not need runways, because there probably aren’t going to be runways there waiting for them.

In theory, this isn’t a huge limitation, because the Navy is supposed to be there to support them with planes from its aircraft carriers, which are essentially long concrete runways that happen to float. But, as Axe notes, in a real shooting war there is no guarantee that the Navy will be able to be there. The Marines who fought the Japanese on Guadalcanal in 1942 had to watch as the Japanese navy drove the U.S. Navy’s carriers away from the island. For weeks, the only air support they had came from Marine aviators flying off an uncompleted airstrip they had captured from the Japanese.

What do you suppose the odds are that future enemies will be so thoughtful as to leave us an airstrip to use against them? Are they odds you would want to bet your life on?

STOVL offered the Marines a way out of having to rely on the Navy’s flattops. STOVL planes could operate from the same amphibious assault ships that would carry the Marines’ troops and helicopters into battle. And once the landing forces had lodged themselves on the enemy’s turf, the planes could decamp from the ship to the shore and fly from rough strips there.

This is one of the few points on which I think Axe’s otherwise excellent article errs. It chalks up the Corps’ desire for STOVL capability entirely to bureaucratic infighting — to a desire by the Marines to remove themselves from having to depend on the Navy. I don’t doubt that this is a part of their motivation; inter-service rivalries in the U.S. military can be pretty ferocious, especially the farther away you get from an actual fighting front. But it’s not hard for me to understand, in theory anyway, why the Marines could have very legitimate war-fighting reasons both for having their air arm be able to operate independently of fixed airfields and for thinking that maybe it would be good for them to have an ace up their sleeve in case the Navy’s flattops ever get driven away from them again.

The F-35: definitely F’ed

So now, having detoured through a history of how STOVL came to be and why the Marines were attracted to it, can we say that the concern about the F-35 is overstated?

No. Oh, no. If anything, it is understated. The F-35 program is a gigantic, history-making mess. The cost of the program has ballooned by at least 70 percent since it began in 2001. Its development has been plagued with technical problems. When the plane had trouble meeting its performance targets, the Pentagon responded by lowering the targets. And at some point Chinese hackers managed to compromise the project’s computer systems, stealing an unknown number of the plane’s secrets.

There are lots of reasons why the program is in the sorry state it’s in. One is “concurrency.” Without getting too far into the project-management weeds, what this means is that, in order to get F-35s delivered to customers as soon as possible, the plane’s maker, Lockheed-Martin, started the F-35 production line rolling at the same time as flight testing, rather than after it. LockMart’s assumption was that computer models of the plane’s performance would shake out any serious defects long before actual flight tests took place, leaving only smaller issues like software bugs whose fixes could be rolled out to the fleet later. To put it mildly, this turned out not to be the case. Most Defense Department projects use some degree of concurrency, but the F-35 program’s heavy dependence on it was described last year by now-Under Secretary of Defense for Acquisition, Technology and Logistics Frank Kendall as “acquisition malpractice.”

Another is that the project has become too big to fail. The F-35 is expected to play a key role across all three U.S. military aviation arms. It’s also slated to be picked up by many U.S. allies, such as the U.K., Australia, Canada, and Turkey. In other words, so many air forces have bet their futures on the F-35 that a cancellation would be a global disaster. There’s no alternative, no fallback, no Plan B; the F-35 has to come in, because if it doesn’t, all those air forces will have nothing to replace their aging current fighters with. That creates a huge amount of pressure, whenever the program hits a speed bump, to react by simply removing the bump. If it’s too expensive, allocate more money; if it misses performance targets, lower the targets. It doesn’t matter if the plane that comes out the other end of the pipeline is any good; the one thing that matters above all else is that something called “F-35” be delivered.

The biggest problem with the F-35, though, is the sports car/dump truck problem. The meaning of this is simple: if you set out to design a car to be a really good sports car, you’re going to be designing something that would make a really terrible dump truck. Conversely, an excellent dump truck is designed in ways that would make it a terrible sports car. But periodically some genius decides that he could save money by buying a single vehicle that can serve in both roles — a sports-car-slash-dump truck. And what results is always a vehicle that is both a terrible sports car and a terrible dump truck.

This is where the criticism of the F-35’s STOVL features makes sense. The Air Force, Navy, and Marines all need a fighter aircraft, but the specifics of what each service needs are actually pretty divergent. Trying to meet them all with a single airframe results in a design that does lots of things, but all of them poorly. Making an airframe that can accommodate a heavier, more complex STOVL engine means the plane is not as light or as fast as it could be without it. If you’re the Marines, STOVL is important enough that you maybe don’t care about that, but if you’re the Air Force or Navy, you most certainly do. But there’s nothing you can do about it, because you’re all tied to the same airframe.

This problem of the defense acquisition system turning out weapons that put checking bureaucratic boxes above being as effective as possible on the battlefield is an old one. It’s been plaguing the U.S. defense establishment for decades. The most notorious example is probably the F-111 Aardvark, another aircraft that was supposed to be a money-saving joint-service fighter. The compromises required to tick off everything on everybody’s lists resulted in a plane that was too big and heavy to be a good fighter, too slow and under-armed to be a good bomber, and too complicated and unreliable to really be good at anything. In the end only the Air Force bought it, and even they never really figured out how to make it do anything useful.

Another example, the M2 Bradley Fighting Vehicle, was spoofed by HBO in their very funny 1998 satire of Defense Department dysfunction, The Pentagon Wars. (Which was based on a more serious book.) In one darkly hilarious scene, the film’s protagonist, Air Force Colonel James Burton, who has been assigned to the Bradley program, tries to figure out how it ended up being “a troop transport that can’t carry troops, a reconnaissance vehicle that’s too conspicuous to do reconnaissance, and a quasi-tank that has less armor than a snow-blower but has enough ammo to take out half of D.C.” It then walks through all the compromises that turned the Bradley into a canonical sports-car-slash-dump-truck.

In trying to make the F-35 meet the highly divergent needs of three different services, the Defense Department and Lockheed-Martin have turned it into the classic 21st century example of the sports-car-slash-dump-truck. So yeah, it’s pretty f’ed.

But wait, it gets worse

But here’s the thing — the exercise that Axe referred to, the one that demonstrated the super-new, super-whizzy, super-expensive F-35’s inferiority against older Chinese and Russian designs, actually demonstrated a lot more than just that. It demonstrated that the problems our military has go way, way beyond the F-35.

The scenario in the exercise was straightforward: the Chinese lunge across the Taiwan Strait to try to occupy Taiwan, which they have considered part of their territory ever since the ousted government of Chiang Kai-shek fled there in 1949. The United States, which has pledged to defend Taiwan’s independence, moves to stop them. The result is a battle between U.S. and Chinese forces in and around the Chinese coast. In the exercise, the Chinese won.

The official line on Pacific Vision was that it was a valuable learning experience. A RAND Corporation report on the exercise eventually leaked, however, and it painted a grimmer picture. It described Pacific Vision as highlighting deep, fundamental problems with U.S. military strategy in the Pacific — problems that would take a huge effort to fix. The F-35’s poor performance was one of these problems. But there were plenty of others.

Such as:

Basing. One of the fundamental rules of air warfare is that an aircraft based near the battle zone is more effective than one based far away, since it can join the battle faster and stay in it longer (because it burns less fuel getting there and back). The problem in the Taiwan Strait scenario, the report points out, is that the U.S. has exactly one air base within 500 nautical miles of the strait — Kadena Air Base, on the Japanese island of Okinawa. The other air bases in the region (in Korea, northern Japan, and Guam) all range from 800 to 1,500 nautical miles away. The Chinese, the report notes, have 27 bases within 500 nautical miles of the strait. This would give them a huge advantage in being able to “get there first with the most,” as Nathan Bedford Forrest put it.

Base survivability. Remember the fear we talked about that led to the original development of STOVL aircraft? The British fear that planes tied to a fixed runway could be neutralized by taking out that runway? The RAND report cites Kadena — remember, our only airfield in the immediate vicinity of the battle zone — as being particularly open to such an attack. Its fuel storage areas and runways are out in the open, making them vulnerable to attacks from the air and from missiles. The report claims that 34 missiles armed with “submunitions” — little bomblets that scatter all over the target, blowing holes everywhere — could completely cover all the areas at Kadena where planes are parked. It estimates that such an attack could “damage, destroy or strand” 75 percent of the aircraft based there. The Chinese, in contrast, have “hardened” their airfields to make them resistant to just such an attack, moving fuel supplies and even runways underground.

Technology failures. The American strategy for air superiority, not just with the F-35 but also with the F-22 and other modern aircraft, rests on two fundamental technologies. The first is “stealth”: using technology to help our planes evade detection by radar. The second is “BVR,” which stands for “beyond visual range”: using long-range missiles to shoot down enemy planes before the pilot can ever even see them. These technologies are the key to the plan for how a smaller, more high-tech force like ours can defeat a more numerous but lower-tech one like China’s.

The problem the report highlights is that one of these technologies, stealth, has never been tested in a serious battle, and the other, BVR, has historically underperformed. Air theorists had been proclaiming since the 1950s that long-range missiles made dogfighting obsolete; the Air Force and Navy went into Vietnam organized around this principle, and the results were disastrous. American fighter pilots discovered that their missiles, which had been designed to shoot down slow, lumbering bombers, couldn’t keep up with the rapid maneuvers of the smaller, nimbler Soviet-built fighters flown by the North Vietnamese. Missile kills were accordingly few and far between. Eventually the Air Force, which had bet most heavily on missiles, backtracked and added guns back to its fighters as a secondary weapon, and the Navy established the famous “Top Gun” school to teach its pilots how to dogfight at close range — a skill that had been lost in all the emphasis on training with missiles.

Today our modern fighters have a new generation of BVR missiles that are supposed to be much more capable and accurate than those of the Vietnam era turned out to be. But, the report points out, we don’t have a lot of data from real battles to tell us whether that’s actually true or not. And, it also emphasizes, it’s not like the other side has been standing still all this time either; our missiles may be better, but techniques and technologies for avoiding missiles have gotten better too. (Avoiding missiles is a big part of what stealth is about — and the Chinese have already started rolling out their own stealthy fighters.) So it’s entirely possible that we could discover in the same way as the generals and admirals did in Vietnam that our faith in the missile has been misplaced.

Numbers. This is the big one.

The fundamental challenge the United States would face in a battle of any kind with the Chinese is that the Chinese massively outnumber us. This is as true in the air as it is on the ground. If Kadena is operational, the report estimates that Chinese fighters in a Taiwan Strait battle would “only” outnumber ours by three to one. If Kadena is knocked out, the ratio goes to ten to one. That’s ten enemy fighters for every one of ours; those are not great odds.

Of course, being outnumbered is not a new problem for U.S. forces; back in the Cold War era, the Russians were always projected to outnumber us too. And the standard American response in both cases has been to neutralize the enemy’s numerical advantage by using advanced technology to make our forces much more effective than the other side’s.

The problem, the report points out, is that there’s not a lot of historical evidence that such a strategy actually works. The Germans, for example, tried it in World War II. Outnumbered by Allied aircraft in the latter half of the war, they responded by building wunderwaffen — super-high-tech “wonder weapon” planes that were radically faster, better-armed, and more maneuverable than anything the Allies had. The best-known of these is the Me 262, the world’s first operational jet fighter, which could fly rings around anything else in the air. The Germans put their best, most accomplished pilots in the cockpits of these superfighters, the plan being that the combination would overcome their numerical disadvantage by letting them swat enemy fighters down like gnats.

It didn’t work out that way. The appearance of the Me 262 gave the Allies a shock, and at first the German wonder weapon seemed unconquerable. But Allied pilots quickly developed techniques for dealing with it. For example, they learned that while the Me 262 was nearly untouchable in an air battle, when the time came to return home it needed a long, straight approach in order to land safely; so they simply concentrated on avoiding the fight until the Germans ran low on fuel, and then picked them off on their way home.

The bigger problem with the Me 262, however, was more fundamental: all that high technology made it hard to manufacture and delicate in operation. Only around 1,400 of the planes were built before the end of the war, and of those, only around 200 were combat-worthy at any given time. Compare this to the premier American fighter of the era, the North American P-51 Mustang, of which more than 15,000 were produced during the war. And that’s just one model of Allied fighter! Against those numbers, it didn’t matter whether the average Me 262 could shoot down two, or three, or five, or even ten Allied fighters before being shot down itself; there were simply too many Allied planes for even its formidable technological edge to overcome.

So there’s a point beyond which technology, by itself, cannot save you. Surveying historical examples, the RAND report puts that point at 3:1. According to its numbers, the U.S. could hold the disparity to that level, but only if everything else in the battle breaks our way: if Kadena doesn’t get bombed to hell, if our stealth works and theirs doesn’t, if our BVR missiles work and theirs don’t. If, if, if. Take away even one of those assumptions and the odds start to look pretty bad. Take away two or more and they get downright dire.

Some caveats

Before you panic too much about the above, there are some factors that the report doesn’t really go into that could mitigate some of these issues.

Carriers. The report doesn’t talk much about airplanes based on Navy carriers and Marine assault ships backing up the Air Force in the Taiwan Strait scenario. Having a strong force of carriers there could reduce the damage a strike on Kadena would do, since they could sail up close and serve as alternate airfields near the battle zone. (For their own planes, at least; Air Force planes aren’t equipped to fly off carriers.) And they could help narrow the numerical disadvantage somewhat.

Of course, there are questions about how survivable carriers would be in a modern shooting war as well, especially against China, which has recognized the strength of America’s carrier force and started building weapons specifically to destroy it. But that’s outside the scope of this discussion.

Nukes. The United States has long maintained the Taiwan Strait as one of the few territorial boundaries that it is clearly willing to use nuclear weapons to defend. Some argue that this makes discussions of conventional battles around Taiwan moot; if China launched one, it would escalate to a nuclear confrontation, and that should be enough to dissuade them from doing so.

Of course, this only works if the Chinese believe we’re sincere in this commitment, and not bluffing. And of course, they have nukes of their own. So they might be bold enough to call us on our commitment and see if we’re willing to lose, say, Los Angeles to save Taiwan. But nuclear gamesmanship is a subject entire books have been written about, so again it’s outside the scope of this discussion.

Drones. This whole discussion has been about manned aircraft, but manned aircraft (to be blunt) are on the way out. Unmanned drones can in theory perform the same missions a manned fighter can, but without putting a pilot’s life at risk. In some ways they can even do those missions better; the maneuverability of manned fighters, for instance, is deliberately limited to far less than what modern airframes are actually capable of, simply because going beyond those limits would put too much stress on the pilot’s body and brain. Taking the pilot out of the cockpit would free the vehicle to race and turn as radically as its airframe and engines allow.

Of course, we’re still in early days in terms of drone development, so it’ll probably be at least twenty years or so before we see drones that are seriously designed to replace manned fighters. And of course, the Air Force and Navy have huge investments (both fiscal and psychological) in manned aircraft, and the leaders of their aviation arms all came up flying manned aircraft, so it’s possible that even if a drone would objectively be better for a mission they’d resist it for personal/sentimental reasons, the way the battleship admirals resisted the rise of the aircraft carrier or the bomber generals resisted the ballistic missile. This could put us at a distinct disadvantage against an opponent who’s less bound by tradition than we are.

Assumptions. The report looks at all the ways our assumptions about how a war would go could be overturned. But there are lots of ways assumptions on the other side could prove wrong, too. Maybe the average soldier or sailor in the Chinese military turns out to not be as enthusiastic about being in the front line of a world war over Taiwan as the party leaders in Beijing are about starting one. Maybe their weapons or tactics turn out to be unreliable or fundamentally flawed. Maybe there are other factors outside of air power (an inability to build enough landing ships to survive the fight and get enough troops across the strait, for instance) that would make them hesitant to try something, even if they were 100% confident they could defeat us in the air.

Of course, basing your plan for victory on the idea that everything is going to break in your direction and against your opponent is a great way to lose a war. Ideally you want to be in a position where you can win even if the breaks aren’t running your way, because from a statistical perspective it’s quite unlikely that they all will.

Conclusion

The F-35 is an extremely troubled program. But its troubles are just the tip of the iceberg. Our main doctrinal theory — that we can overcome great numbers with greater technology — is leading us to build a smaller and smaller number of more and more advanced fighters. But as those fighters get more advanced, they get more complicated to make, and to maintain in the field; and it’s not entirely clear whether their technology, advanced as it is, would be enough to overcome the numbers an opponent like China could field against us.

More worryingly, the F-35 isn’t a weapon that stands alone; it’s part of an integrated system of weapons and technologies. And there are plenty of other links in that chain that could fail under the pressure of war. It’s not clear that our military plans are resilient enough to absorb and overcome such failures.

In other words, the institutional rot and lethargy that the F-35 program suffers from has spread farther and more deeply than most people probably think. If we want to maintain our national security, we need to get serious about it, and attack it at its roots by reforming the system so that it focuses on what it’s supposed to focus on — fighting and winning wars, with the minimum possible loss of American life — rather than on maximizing defense contractors’ profits and moving career officers smoothly up the promotion chain.


SLIDESHOW: 20 Cats Who Suck At Reducing Tensions In The Israeli-Palestinian Conflict

Screw blogging! It’s about time I made a play for some of those sweet, sweet BuzzFeed/HuffPo pageviews. So here’s a slideshow. Click on any slide to get more information on what it’s talking about.

[slideshow_deploy id=’2881′]

Jason Recommends: Miami Connection

Longtime Readers™ will know that it’s rare for me to take to these pages these days just to recommend something to you. But as a lover of bad movies, I find this one too good not to share. So allow me to introduce you to a little gem called Miami Connection.

Written, produced and co-directed in 1987 by its star, Korean-born Central Florida taekwondo master Y.K. Kim, Miami Connection debuted locally the next year and promptly bombed so hard it nearly drove Kim into bankruptcy. Its failure to find even a hometown audience doomed any chances of it getting a national release, and it quietly sank into obscurity for twenty-five years until it was rediscovered by the people at the legendary Alamo Drafthouse Cinema in Austin, Texas. Immediately recognizing its barely comprehensible genius, they funded an HD restoration and brought the film to a national re-release last year. I picked up on it a few months later through an episode of Red Letter Media’s hilarious web series, Best of the Worst. (If you’re a fan of cinematic cheese, Best of the Worst is must-see viewing.)

So what is so great about Miami Connection? Let’s start with the basic premise. Mark (Y.K. Kim) is the leader of a group of taekwondo-fighting orphans who live together, attend the University of Central Florida together, and apparently do everything together, since we really never see any of them away from the group. This includes a lot of hanging around their group house shirtless, which is another thing they apparently frequently do together.

(Before the UCF affiliation prompts you to ask: yes, here we have a movie called Miami Connection that, except for one scene at the beginning, takes place entirely in Orlando. After that opening scene Miami is never mentioned again. I guess they just decided that Orlando Connection wasn’t as compelling a title.)

While they are students by day, they take on another identity by night: a synth-rock band called “Dragon Sound,” whose songs are apparently the hit of the Central Florida club scene, despite most of them being completely tuneless.

See, Dragon Sound isn’t just a band; they are a band with A Message. And that message is that taekwondo is awesome and ninjas are not. I’m old enough to remember the big pro-ninja pop wave of 1986, so I can only assume their audiences found their contrarian position refreshing.

Consider one of Dragon Sound’s big numbers, creatively titled “Against the Ninja”:

… the chorus of which is, and I quote:

Tae kwon, tae kwon!
Tae kwon, tae kwon!
Tae kwon, tae kwon!
TAEKWONDO!

In case that pro-taekwondo message is too subtle, they also periodically do air kicks on stage.

Anyway, one of the members of Dragon Sound starts a wan romance with a woman whose brother is somehow affiliated with the drug business, and this leads to Dragon Sound having to use their taekwondo skills to fight their way through waves of bikers, ninjas, and biker ninjas between classes. Surprisingly, despite being associated with the blood-drenched Florida drug scene of the 1980s, none of these foes seems to own a gun. That’s probably for the best, plot-wise, as the presence of guns would make it slightly more difficult for Dragon Sound to repeatedly kick them.

(Editor’s note: near the end, we do actually see a few bad guys with guns. To a man, these all begin their fights with Dragon Sound by having their guns kicked out of their hands. Letting someone get close enough to kick you seems like a fundamental misunderstanding of the value of a firearm, but it’s possible that biker ninja doctrinal thought in the 1980s had other ideas.)

The movie ends with a final assault by biker ninjas on Our Heroes. Unlike their previous fights, in this one the ninjas actually manage to slash a couple of Dragon Sounders with their ninja swords. This leads to Our Heroes, who have spent the entire movie preaching about how taekwondo is not really about kicking people but about peace and understanding, dropping the taekwondo altogether and using the ninjas’ own swords to straight-up murder like a thousand ninjas. This ends cocaine forever.

And then, right after we’ve seen Our Heroes howling triumphantly while soaked in the blood of men they have just stabbed to death, we get a final title card:

Only Through The Elimination of Violence Can We Achieve World Peace

Yes.

The whole project is just baffling from beginning to end. Kim, playing the main character, is undaunted in his portrayal by the fact that he can’t really speak English. (He looks like he’s reading his lines phonetically, except in a few places where they were obviously dubbed in later.) The film never bothers to give the other members of Dragon Sound personalities, except for one, whose sub-plot is so ineptly handled it beggars belief. After the initial scene in Miami, we never see drugs or drug dealing again. But we do see lots of poorly choreographed fights where bad guys line up one by one to take their turn getting kicked by Dragon Sound. It gets so obvious that you start to think they should install one of those deli machines that gives out numbers.

In other words, it’s amazing.

So take my recommendation and add Miami Connection to your viewing list. It’s available on Blu-Ray, DVD and HD digital download from Drafthouse Films, and for online streaming via Netflix Instant.


BREAKING: Tech company announces marginally improved version of gizmo you already own

CALIFORNIA — In a move sure to rock the world of consumer technology to its foundations, a major computer company today announced it will soon release a marginally improved version of a gizmo you already own.

“This is it,” the CEO of the company told a crowded audience of technology journalists and fans. “This is the Year Zero. All that came before has been scoured from the Earth with fire.”

The company made its historic announcement at its usual venue for historic announcements, an annual conference for technology developers. Executives took turns on stage describing the details of the slightly improved gizmo.

“Unlike last year’s gizmo,” a senior vice president explained, “this year’s has more megahertzes, as well as more RAMs.”

“It is also slightly smaller,” he added, straining to be heard over the loud gasps of surprise and wonder from the assembled crowd.

Another executive described new online features that will be available in the new gizmo. “Like last year’s gizmo, this new gizmo is connected to the cloud,” he said. “But this new gizmo is significantly more cloudy than any previous version we have made. It is so cloudy, in fact, that internally we refer to it as ‘overcast.’”

The event was briefly interrupted at this point, as several attendees fainted from excitement and had to be attended to by emergency medical technicians.

Industry analysts were optimistic about sales prospects for the new gizmo. “It’s slightly better than last year’s gizmo, which should make it a must-buy,” one analyst told this publication. “And on top of that, this new version will be available to buy at stores. That all adds up to a home run.”

Technology enthusiasts were similarly impressed by the company’s announcement.

“I’ve been very happy to date with last year’s gizmo,” one enthusiast in attendance said after the conference. “But now I realize that it is a total heap of donkey dung, unfit for use even by sex offenders and lawyers. I am counting the minutes until I can replace it with a new, pure, slightly improved gizmo, which will allow me to leave the house without the crushing feeling of shame I now realize must follow me wherever I go with last year’s gizmo.”

Experts believe this new gizmo will redefine the human experience and change the way we live, laugh and love, until next year’s slightly more improved version renders it a hideous vestige of a barbaric age.


Presenting the best of JWM, 2012 edition

Photo: Richard Harbaugh / ©A.M.P.A.S.

One of the benefits of the Attack of the Fanboys I went through this weekend was that it reminded me I had never gotten around to pulling out the best posts from this blog for 2012 and adding them officially to the “Best of Just Well Mixed” archive. I’m sure this was causing a lot of sleeplessness and anxiety for Longtime Readers™, so I’ve (finally) taken care of it.

What qualifies a post for inclusion in the Best of JWM? Mostly it’s a function of how much discussion a post generated; much-talked-about and much-linked-to posts are what a blog exists to generate. Other factors are depth (meaty, researched posts as opposed to quick “hey, check this out [LINK]” types of things) and longevity (whether the post is as interesting a year or more after it was written). All of these factors are then mashed together in the completely unbiased and objective computer that is my brain to determine the list of winners. Finally, I throw that list out and add in whichever posts I think deserve it.

Here’s the list of posts from last year that made it into the hall of fame:

I want a newspaper that can call a lie a lie (January 2)

Brisbane sets up a choice between “reporting” and “opinion,” which is a standard way journalists divide up the world, and then asks us which one we prefer. But I believe this is a false dichotomy, because it leaves out a critical third element: context

SOPA: the tech industry’s self-inflicted wound (January 19)

This makes it sound like the reason for SOPA is that the content industry spends a lot on lobbying and the tech industry does not. But the problem is that if you dig into the actual data that storyline looks less and less plausible — and what looks more plausible is that tech wasn’t outspent, but instead spent its money in dumb ways

Gadget fatigue (February 1)

I’ve tried four times now to buy a new phone, and each time I’ve walked away without closing the sale, feeling vaguely depressed about the whole process to boot.

I think it has to do with values. I know the kind of device I want to buy; the problem is that nobody makes it

The question about bombing Iran that nobody is asking (March 9)

There’s one question about an attack that nobody on either side of the question appears to be asking, and that’s disturbing, because it’s probably the most important question that could be asked. That question is whether or not we even have the capability to take out Iran’s nuclear facilities from the air

How to survive an atomic bomb (March 29)

Which brings me to the most important point about this [small terrorist nuke] type of scenario: it can be survived. It’s not like the Cold War wargasm scenario, where so much explosive tonnage is falling on your head that protecting yourself is impossible. There are things you can do if you find yourself in such a situation that can dramatically improve your chances of making it out alive

How to sell products to nerds (April 25)

Programmers aren’t just pessimists. We are fatalists. We believe that the only reason the world runs at all is because of frequent applications of bubble gum and baling wire in places we can’t see.

We think that way because our work requires us to spend our days climbing around in the innards of things, and innards, generally speaking, are not pretty

Interchangeable news story on President Obama’s announcement of “personal support” for gay marriage (May 9)

“At a certain point, I’ve just concluded that for me personally it is important for me to go ahead and affirm that I think same-sex couples should be able to get married,” the president noted, historically and unprecedentedly. “That being said, there’s no reason for anybody to worry that I’m going to help in any way to make it easier for those same-sex couples to actually do that”

Ethical aggregation: it’s simple (May 10)

Ethical aggregation increases reader demand for the original story; unethical aggregation decreases it

Against live-tweeting (June 7)

I know you think that it’s critical that you get your opinions on the presentation out to your legions of followers right this minute. But trust me, your followers can wait for your thoughts until the session is over; you’re not Edward R. Murrow, and this is not the London Blitz

The image that illustrates the White House’s communication failure on health reform (June 28)

This has been the most glaring omission from the administration’s communications efforts around health reform ever since they first took up the issue. There’s no narrative, no story, and that’s fatal, because stories are what move people

How to crusade like a king in Crusader Kings II (July 24)

I’ve written in this space before about how impressed I’ve been with the latest strategy game from Paradox Interactive, Crusader Kings II, and its first expansion, Sword of Islam. Those posts led to a discussion on Facebook asking me to expand on them a bit, by taking them down to a more concrete level: strategies for how to play them and win. So, here’s a post that will do just that

Bill Nye demonstrates how not to persuade a creationist (August 29)

If you want to change somebody’s mind, you have to first establish to them that you’re someone they want to listen to. The way you do that is by approaching them with respect. And Nye comes across here as deeply disrespectful

Ask Mr. Science: Windows 8 (October 28)

Windows 8 is some software for computers and phones and stuff. It looks just like regular Windows, except for all the places it doesn’t. It works just like regular Windows, except for all the places it doesn’t. And it runs all your old Windows software, except on some computers, where it doesn’t

Everything you need to know to understand why Obama won, in one image (November 7)

[Voters] weren’t happy with the slow pace of recovery — a point this blog predicted would be a drag on Obama’s support two years ago — but they figured a slow recovery was better than a crash back into depression, which is the image that little R conjures up now. This is the boat anchor that Bush shackled onto the leg of the Republican Party, and they haven’t figured out a way to wriggle out of it yet


But Mr. President, there is no such thing as 100% security

The last few weeks have seen a steady drip-drip-drip of scandals of various degrees of seriousness hit the Obama Administration. The most recent, and to my mind the most serious, is yesterday’s revelation by the Washington Post of a massive program run by the National Security Agency (NSA) that taps directly into data gathered on users of technology products and services from nine big tech companies. (Congratulations are due to reporters Barton Gellman and Laura Poitras for digging that story up — it is huge, and hugely important.)

I have a lot to say on this subject, but since that will take some time to write up I wanted to share a quick response I had to President Obama’s attempt today to defend the surveillance programs. In his remarks, he made this statement:

“It’s important to recognize that you can’t have 100% security and also then have 100% privacy and zero inconvenience,” Obama said.

It’s a troubling statement, because it misses an obvious fact: you can’t have 100% security no matter what you do. You can try to get close, but in a huge country with hundreds of millions of people, there’s no way to intercept every possible threat to everyone, every time. Probably the best you can hope for is to intercept 100% of the major ones, and as many of the smaller ones as you can.

This is not a theoretical point. Even with these programs up and running, we don’t have 100% security. The Tsarnaev brothers proved that. All that eavesdropping, all that surveillance, all those violations of our rights and liberties weren’t able to prevent two schmucks with some pressure cookers from killing three people and injuring hundreds more.

How much more surveillance — piled on top of all the surveillance we live with now — would it take to have a guarantee that they would have been caught?

There is just no such thing as a perfect security system. It’s always a series of tradeoffs. And if your position is that any tradeoff that promises greater security at the cost of citizens’ rights should be decided in favor of security, you’re very quickly going to end up with a system that gives you massive violations of those rights without giving you the perfect security you were looking for.

Which appears to be the system we have now.


The tiresome dietary politics of Don’t Starve

Lately I’ve been playing a very good indie game with an intriguingly direct name: Don’t Starve.

The idea is simple: you play a character who wakes up one day in a sprawling, randomly-generated countryside. You have no map, no tools, and no food. Your challenge is to keep that character alive for as long as you can. This probably sounds pretty grim, but the mood is lightened by a playful, cartoony visual style which adds some whimsy.

As you might expect with a game called Don’t Starve, figuring out how to obtain enough food to keep your character alive is a huge part of the gameplay. When you start all you can do is pick berries and seeds up off the ground, but as time goes on you learn how to make tools, which can be used to grow food, hunt, and cook.

If you’ve read my semi-satisfied review of Minecraft, another popular indie game, you’ll know that my biggest gripe with that game was how little direction it gave you. Don’t Starve feels like a reaction to that complaint; it has a lot of the good things Minecraft has (exploration, crafting, etc.), but the stronger emphasis on what I called in that piece “the survival game” makes it feel less muddled. All the other “games” are subsidiary to that one — you explore to find new sources of food; you craft to build things that give you access to a more nourishing diet. This makes Don’t Starve more of a game and less of a toybox, which I like.

But while there is a lot to like about Don’t Starve, there is one thing about it that I don’t like: the game (or, more accurately, its makers) insist on injecting tedious dietary politics into the gameplay.

You probably know at least a few people who are vegetarians or vegans. (Maybe you are one yourself!) And of those, you probably know at least one who is evangelical on the subject; someone who’s not just looking to fix their own diet, but to fix everyone else’s diet, too. So when you’re out with a bunch of people and one of them orders meat, the Veggie Evangelist starts a sermon.

Don’t Starve is kind of like that person. It is an unrepentant Veggie Evangelist.

In a game where the overriding objective is not to starve, you’d think any kind of edible food would be good. But Don’t Starve feels very strongly that some foods (vegetables) are good, and others (meats) are bad. And it sets up the rules of its world to reward you if you think the same — and to punish you if you don’t.

Consider. In Don’t Starve, there are three measurements of your character’s overall health and wellness. First is Hunger: how close your character is to starvation. Second is Health: his or her overall physical well-being. And third is Sanity: their mental health. These three measurements interact in many ways; for instance, eating a poisonous red mushroom might decrease your Hunger, which is good, but also decrease your Health, which is bad. A big part of the game is learning how to keep these three measurements in balance, since having any one of them tip over ends the game.

By Don’t Starve’s rules, vegetables have a lot to commend them. Gathering them generally costs nothing in any of the three measurements, and since they are abundant and can be farmed, they are easy to come by. They’re generally safe to eat: almost all of them reduce your Hunger with no negative side effects, and some increase your Health as well. Picking up a veggie is a decision the game consistently rewards.

Meat, on the other hand, is a different matter entirely. Meat comes with a significant built-in cost: eating it reduces your Sanity (!). Beyond that, killing even the smallest animals can be a challenge; they fight back, damaging your Health, and it’s not hard to get killed by even a spider or a frog if you’re not careful. And if you’re successful, you rapidly discover that some animals, when killed, don’t yield plain old meat; they yield “monster meat,” which knocks both your Health and your Sanity down if you eat it. (Cooking the monster meat can reduce the damage, but not remove it entirely.)

The game’s interface takes this theme even further: if you capture an animal with a trap, you can’t kill it, you have to murder it. That’s the label you have to click: “murder.” Which is hard to read as anything other than a political statement.

It doesn’t end there, though. See, it turns out that there’s actually a fourth measurement the game is tracking behind the scenes, where you can’t see it. That measurement is called “Naughtiness,” and it tracks only one thing: how many “innocent” animals you have “murdered.” Don’t Starve considers any animal that is not actively aggressive towards you — every rabbit, every pig, every bird — to be “innocent,” so killing any of them bumps up your Naughtiness score. And once that score gets high enough, the game summons a creature called “the Krampus.” The Krampus’ mission is to steal your possessions, which it goes about with great gusto. And it’s very strong to boot, so if you attack it to try and keep it from running off with all the stuff you have laboriously gathered and crafted, it can easily kill you.
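The incentive structure the game builds out of these four measurements can be sketched as a toy model. To be clear, every number and name below is invented for illustration; the real game’s values and internals are surely different.

```python
# Toy model of Don't Starve's incentive structure as described above.
# All thresholds and stat values here are invented for illustration.

KRAMPUS_THRESHOLD = 30  # hypothetical Naughtiness level that summons the Krampus


class Survivor:
    def __init__(self):
        self.hunger = 100       # 0 means starvation
        self.health = 100
        self.sanity = 100
        self.naughtiness = 0    # the hidden fourth measurement

    def eat(self, food):
        # Every food trades off the three visible measurements.
        self.hunger = min(100, self.hunger + food["hunger"])
        self.health = min(100, self.health + food.get("health", 0))
        self.sanity = min(100, self.sanity + food.get("sanity", 0))

    def kill(self, animal):
        # Animals fight back (Health cost), and "murdering" an innocent
        # one bumps the hidden Naughtiness score.
        self.health -= animal.get("fights_back", 0)
        if animal.get("innocent", False):
            self.naughtiness += animal.get("naughtiness", 0)
        return animal["yields"]

    @property
    def krampus_incoming(self):
        return self.naughtiness >= KRAMPUS_THRESHOLD


BERRY = {"hunger": 10}  # vegetables: free to gather, pure upside
RABBIT = {"innocent": True, "naughtiness": 1, "fights_back": 0,
          "yields": {"hunger": 25, "sanity": -10}}

s = Survivor()
s.eat(BERRY)           # no cost anywhere
meat = s.kill(RABBIT)  # hidden Naughtiness ticks up
s.eat(meat)            # more filling than a berry, but Sanity takes a hit
```

Play the veggie route and every stat stays flat or improves; go after meat and you pay in Sanity now and risk the Krampus later. That asymmetry is the vegetarian thumb on the scale.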

Putting all of these issues together quickly teaches the player that the “right” way to play the game is to play it as a vegetarian. Meat is difficult and dangerous; veggies are safe and easy. So unless you want to deal with a bunch of potentially character-killing complications, you stick with the veggies.

Now, my point here is not to make a claim one way or the other about the relative ethics of meat-eating versus vegetarianism/veganism. (Though to make my own biases clear, I will tell you that I personally eat meat.) It’s to say that this all feels weirdly dissonant from the game’s overall theme of survival.

When you’re starving, you don’t think much about whether food that appears before you is ethically sourced or not. Starving people will eat just about anything — even their dead comrades. And we don’t judge them negatively for doing so. We put a high enough value on human life that if keeping yourself alive requires you to eat things that otherwise would shock the conscience, we’re not going to second-guess you if you eat them. Survival is the first rule of life.

But here we have a game about survival, a game called Don’t Starve, that cares about more than whether or not you are able to keep yourself from starving. It cares about whether or not you can keep yourself from starving in a way it finds morally acceptable. And if you try to take another path to survival than the one the developers want you to, it will throw huge roadblocks in your way to stop you. And we’re not even talking about extreme things like cannibalism here — we’re talking about killing a rabbit for its meat.

I dunno about you, but if I was trapped and starving in an unfamiliar land, and a rabbit hopped by, I would kill that rabbit and eat it with no moral or ethical compunctions whatsoever, because I want to stay alive. And I bet you would, too, if the alternative was privation and death.

It’s easy to draw lines and cast judgments about which foods are moral and which aren’t from the comfort of a home with a fully stocked refrigerator. Those judgments tend to be a lot less compelling when you’re lost and starving.

Which seems like a strange thing for a game called Don’t Starve to not understand.

UPDATE (June 9): A lively discussion of this post is happening over at Rock, Paper, Shotgun; several RPS readers pointed out that you don’t lose Sanity if you kill an animal, only if you eat its meat. I’ve corrected the paragraph above that contained the error.


The free coffee test, or Lefkowitz’s Law of Corporate Financial Health

This is a story I’ve told many times over the years, but this morning I shared it on Hacker News and it got a big response, with one HN reader even emailing me to say I should write it up as a blog post to make it easier for others to find. So here is that blog post.

The context is a question that comes up pretty frequently, especially in discussions of tech companies: why is it that removing small perks for employees like free soda tends to lead to an exodus of talent? After all, a can of soda costs what, fifty cents? Maybe a dollar? And yet when management decides to stop bearing that small expense, people have a habit of packing up and leaving, which seems like a big move to make over the price of a can of soda.

Based on my own work experiences, I have my own theory about why this is so. To understand the theory you need to understand the experiences, so let me (briefly, I promise) tell you about them.

Backstory

Back in the original dot-com boom, I worked at a company that had, in our office, an absolutely amazing coffee machine. It was like the Monolith from 2001: A Space Odyssey; a giant slab of technology the size of a Coke machine that could dispense a perfectly brewed cup of what seemed like thousands of varieties of coffee at the press of a button. It was a glorious thing, and all of us who worked there used it often.

Then the market started to tumble, and lo and behold, one day there comes into our office a guy with a hand cart. Said guy rolls the cart over to the amazing coffee machine, loads the machine onto it, and rolls it away, never to be seen again. We all sort of shook our heads and said “huh, that’s too bad.”

Then, not long afterwards, the layoffs started. Absolutely brutal, down-to-the-bone layoffs. I survived five rounds of them before I finally decided I had pushed my luck too far and took a job elsewhere.

What I learned

That was not a fun experience. But it did teach me something important, which I offer to you as Lefkowitz’s Law of Corporate Financial Health:

The financial health of a company can be inferred from the quality, variety and cost to the employee of the snacks and beverages it offers its employees.

In other words, if you want to know how well a company is doing business-wise, go look in its employee break rooms.

Why? It’s because beverages and snacks are among the cheapest employee perks a company can offer. When business is good, managers look for ways to keep their people on board and happy, and improving the beverages and snacks is a cheap way to boost employee morale; certainly adding a new type of soda to the company fridge is cheaper than handing out bonuses to everybody. In most companies cheap things are easier to get approved than expensive things, so managers tend to reach for cheap things first when they can. Which means that an increase in the quality and variety of beverages in the break room is a signal that management is confident about the future.

Conversely, when business is bad, management starts looking around for ways to tighten the corporate belt. And because beverages and snacks are cheap, they are very, very easy to cut back on — much easier than cutting salaries, or having to lay people off. Again, managers tend to reach for the easy options first, so when business starts to sour the first response is usually to start cutting the little perks, like free sodas or fancy coffee.

But here’s the thing — the cheapness of snacks and beverages makes them easy to cut back on, but it also means that cutting them tends to do relatively little to arrest the downward slide. Very frequently these small cuts are followed by medium-size cuts, which are then followed by big cuts if the medium-size ones didn’t stop the bleeding. And so forth.

(From a good-management perspective, the best strategy in these situations is actually to jump to the big cuts immediately — figure out what you need to cut, cut it all at once, and make it a one-time thing rather than a constant drip-drip-drip of cuts. That way the people who remain at least know they’re safe. But making that one big cut is hard for people to do; no well-adjusted person likes to look someone in the eye and tell them they’re out of a job. So even though they know intellectually they shouldn’t start small, managers frequently talk themselves into believing that things are better than they really are and do it anyway.)

All of which means that you could get a pretty good sense of how well a business is doing just by putting a camera in its break room and observing how the snacks and beverages ebb and flow.

Which is why small cuts in these perks tend to lead to employees racing for the exits: they are the proverbial canary in the company coal mine. If you’re an employee, and you see management pulling back on cheap perks like snacks and beverages, you’ve just gotten a very clear signal that management believes the future doesn’t look particularly bright. And your best and brightest don’t want to sit around waiting until they are laid off — they want to get out the door and into a new gig before the train wrecks. So they observe omens like this and start dusting off their résumés.

In practice

So that’s some interesting theory, but what does it mean in practice? How do you put it to use?

If you’re an employee of a company that gives out free sodas and snacks, the answer is probably obvious. But it can be useful to others, too. Here’s one way: if you’re in a client-services business, when you visit your clients, make sure to go with them to grab a beverage from their break room before your meeting. Don’t grab it from Starbucks on the way; have the same experience your clients do. If the snack and beverage options are better and more varied than they were on your last visit, you know that your client’s management is feeling bullish. If they’re the same, you know that the status quo is still in place.

And if they have gotten noticeably worse? Consider yourself notified that in the near future you may very well have to push hard to keep your services or products from being axed too.


How nerds dream

So here’s an actual dream I had last night.

I’m in a garage or shed or other type of workshop-ey space. And outside is a mob of horrific alien monsters bent on breaking the doors down and disemboweling me.

I frantically start looking around for something — anything — I can use as a weapon to defend myself. But the only thing I can find is a flashlight.

Damn! Completely useless.

Or is it?

A brainstorm strikes! I pick up the flashlight and turn to my left, where I find: George Lucas. (I guess now that he’s retired he has time to hang out in random people’s dreams.)

I show George the flashlight, and pose a question.

“George,” I ask, “is this a lightsaber?”

He seems mildly annoyed by the question, but he humors me. He briefly looks the flashlight up and down, then responds:

“Yes. Yes it is.”

And suddenly I feel safe. Because now my flashlight isn’t just a flashlight. It’s a lightsaber.

Why is it a lightsaber? Because George Lucas said it was, which means that this fact is now canon. It doesn’t matter anymore that the flashlight looks, sounds, and acts just as a normal flashlight does: George Lucas is the final arbiter of what is and isn’t a lightsaber, and he said my flashlight is a lightsaber, so presto! It’s a lightsaber.

Then the slavering alien hordes break down the doors, and I use the flashlight to slice them into a deli platter.

What does it all mean? I have no idea, except maybe that I need to get out more.


“A Walk In The Dark”

A Walk in the Dark

The above is “A Walk In The Dark,” by U.S. Army Sergeant First Class Darrold Peters, 2006. (Click it for a full-size version.) I love the interplay of light and shadow in this piece.

More info on Sgt. Peters, and more of his work, can be found at the Army Center for Military History web site.


WordPress is secure, until you combine it with people

So yesterday a fellow named Jason Cospers at WPEngine, one of the higher-end hosting services for WordPress sites, put up a post on their corporate blog titled “WordPress Core Is Secure — Stop Telling People Otherwise”.

During the summer of 2009, WordPress took some knocks in the web publishing community for a series of security vectors that were exploited. The internet realized WordPress could become huge, and aimed some criticism and blog posts in the hopes of making sure WordPress would be secure enough for the crowds of end-users it was attracting…

WordPress core developers responded, and in the months that followed, collectively added patches and tightened up security across the board to make WordPress one of the most secure CMS’s on the internet. That was four years ago. An eternity in terms of technological innovation.

This is all correct. WordPress used to be an absolute horror show in terms of security (and code quality in general). But it has gotten steadily better over the last few years, and now the core downloadable WordPress software is pretty solid.

Cospers concludes from this history that WordPress has this security thing taken care of:

Looking at the evidence, it’s time to put the debate to rest. Maintaining security is an on-going process, and constant vigilance is essential. But, the core team has done an amazing job to ensure the security of WordPress, and will continue to do so as the platform continues to grow.

But, we’ve reached a point in the history of the internet where WordPress has earned a reputation for its security. It’s time to act like it.

This is the bit where he and I part company. While I agree with him that WordPress itself is pretty solid security-wise these days, I don’t think that says much about the security of WordPress sites at all. The reason has less to do with the quality of WordPress as a software product and more to do with a mismatch between the way it expects people to work and the way people actually work.

There are two major ways, in my experience, that people don’t behave the way WordPress requires them to in order to keep their sites secure.

The first is: they don’t update the software. If you download WordPress today and set up a site on it, you’ll be pretty secure — today. But the Bad Hackers are always out there thinking up clever new ways to break into software; which is why pretty much any software connected to the public Internet requires periodic updates to close out new attacks as the Bad Hackers come up with them.[ref]Actually, there is one exception to this rule: qmail, a mail server written by Daniel J. Bernstein and designed explicitly for security. The version of qmail in current distribution today originally shipped in 1998; in all the years since, exactly one (1) exploit has been found, and it’s not even clear if that one would work against an actual production deployment of qmail. But very, very few programs are engineered the way qmail was; suffice it to say that WordPress is not one of them.[/ref] WordPress is no exception to this rule.

Software that requires updates can either download and apply those updates automatically, or it can require a user to do that. WordPress takes the latter approach; it doesn’t update until you log in and tell it to. And many, many, many people simply never do that. They set up their WordPress site, get it looking the way they want, and then forget about it. Time passes, security updates come out, and those sites never receive them; they become more vulnerable each and every day.

The second is: they load up on plugins and themes. One of the big things that attracts people to the WordPress ecosystem is the huge number of free and low-cost third-party extensions available for it. The problem is that the code quality of these add-ons is nowhere near the code quality of WordPress itself.

Well, that’s not 100% accurate; there are some very well-written plugins and themes out there. But there are also a lot of them that are not so great. That’s not because the people who write them are evil, it’s just because it’s so easy to write a WordPress plugin or theme that lots of people without much programming experience do it every day. That opens up programming to lots of enthusiastic new people; but it also means those people frequently haven’t learned the hard lessons about security that more experienced programmers have, so they make lots of rookie mistakes.

None of that would matter much if WordPress did something to ensure that a flaw in a plugin or theme can’t compromise your whole site. But it really doesn’t. So all it takes is installing one bad plugin or theme to make all that work irrelevant.

But users don’t understand any of this. They just see free software that looks like it’s going to do something cool, so they plug it into their site without a second thought.

All of which is a (long-winded, I know) way of saying that WordPress suffers from a problem that many software products do: it expects its users to be something they are not.

Cospers demonstrates this further down in his post, saying “WordPress users must be responsible for their own security, maintain strong [p]asswords, and keep plugins and themes up to date, as well as WordPress itself.”

That’s certainly true, but here’s the catch: we know people don’t do this stuff. We know. We have years and years and years (decades!) of experience observing how non-technical people use software, and all of it tells us that normal people don’t do this kind of system-administrator stuff, no matter how important we tell them it is or how many times we repeat that message. They’re busy people, they have lots of stuff to worry about; updating WordPress is pretty far down the list. It becomes one of those “yeah, I’ll get to that one of these days” things that people never actually get around to, like going to the gym or eating more vegetables.

This mismatch sets users up for a kind of whiplash. We get them in the door by shouting “WordPress is easy! Anyone can run it! No tech expertise required!” And then when they don’t act the way professional systems administrators do, we shake our heads and say “what’s wrong with you? Don’t you have any tech expertise?”

I suppose one approach to solving this problem would be to shout at the users more loudly about the importance of updates and being judicious in what add-ons you install. But like I said above, that won’t work. Users don’t update software, and they don’t do code reviews before installing plugins, and there’s no evidence that they’re going to start if we shout at them loudly enough.

So what’s the alternative?

A wise person once told me that when your software doesn’t fit your user, it’s almost always easier to bend the software to fit the user rather than bend the user to fit the software. Shouting at people to Do The Right Thing is trying to bend the user to fit the software; to change themselves to make the software’s life easier. So what would it look like if WordPress took the other course? If it adjusted itself to fit the user as they actually are?

It might look something like this:

1. It would update itself. WordPress’ updater is a really smart, sophisticated piece of software. It makes updating WordPress easier than updating almost any other piece of software. But its fatal flaw is that it will never run until a user clicks a button telling it to run — a button that lots of users will never click. This is why consumer-oriented software in general has moved towards simply updating automatically. Google Chrome, for instance, famously updates itself completely silently; the user is never told the update even happened unless they ask.

The argument for not having WordPress update itself is that updates might break poorly-coded themes and plugins. But the alternative is having an ecosystem that is so tolerant of badly written add-ons that it leaves tons of sites insecure, which is just unacceptable.

If you’re really worried about breaking those add-ons, give the user a way to opt out of the automatic updates — preferably a way that requires a little technical knowledge to activate, like a flag in the configuration file. But the default should be to get the updates and be secure.
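This is essentially the design WordPress ended up shipping in version 3.7 (see the update note at the end of this post): automatic background updates on by default, with the opt-out buried in the configuration file. A minimal sketch of the arrangement in wp-config.php — the two constants here are the real WordPress ones; the comments are my own gloss:

```php
// In wp-config.php (WordPress 3.7 and later).

// The secure default: core applies minor (security and maintenance)
// releases automatically, with no button-clicking required.
define( 'WP_AUTO_UPDATE_CORE', 'minor' );

// The opt-out for sites that truly can't tolerate surprise updates.
// Note that using it requires editing this file, which is exactly the
// sort of small technical hurdle argued for above:
// define( 'AUTOMATIC_UPDATER_DISABLED', true );
```

Setting WP_AUTO_UPDATE_CORE to true extends automatic updates to major releases as well, while false disables core auto-updates entirely; the point is that doing anything other than the secure default takes a deliberate act.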

2. It would make installing plugins and themes harder. Wait, this is WordPress, right? The software that aims to be easy? Am I really arguing that it should make a commonly performed task harder?

Yes. Yes I am.

Currently installing plugins and themes is a single-click process — find something that looks cool, click, you’re running it. Which would be fine if you were protected from bugs in that software; but in this case, you’re not. So we’ve got a process that feels casual but actually is not, which is asking for trouble.

A simple speed bump in the installation process would force users to think a little bit before installing new stuff, which might in turn get them to look a little more closely at what they’re installing before they install it. Just throw up a warning screen that tells people the security risks plugins and themes can introduce, and asks them to confirm that they want to proceed. Users with no technical experience won’t understand the text of the warning, but they have been trained by other software to recognize that warning messages mean they need to tread carefully. Those who do have technical experience will be better able to evaluate the risks the specific plugin poses.

I can hear plugin and theme authors howling now that such a speed bump would cut down the number of people who install their software. And they’re right! It totally would. But cutting down the number of plugin and theme installs is a feature, not a bug. Many of these things are installed incredibly casually, without any thought at all, and then never used or updated. But each one increases the attack surface your site presents to a potential hacker. Each one potentially makes you a bigger, riper target. You should be aware of that before you install them.

(Again, if you want to, let users opt out of this warning via a configuration flag or the like. But the default experience should be that you have to click through it.)
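To sketch what such a speed bump might look like under the hood — this is a hypothetical illustration, not anything WordPress ships: the SKIP_INSTALL_WARNING constant is a name I’ve invented, though upgrader_pre_install is a real WordPress filter that can veto an install by returning a WP_Error:

```php
<?php
// Hypothetical sketch (e.g. as a mu-plugin): refuse plugin/theme installs
// unless the site owner has opted out via a wp-config.php flag.
// SKIP_INSTALL_WARNING is an invented name for illustration;
// 'upgrader_pre_install' is a real WordPress hook that aborts the
// install when the filter returns a WP_Error.
add_filter( 'upgrader_pre_install', function ( $response, $hook_extra ) {
	// Site owner has read the warning and opted out of the speed bump.
	if ( defined( 'SKIP_INSTALL_WARNING' ) && SKIP_INSTALL_WARNING ) {
		return $response;
	}
	// A real implementation would check for a clicked-through
	// confirmation screen; this sketch just surfaces the warning.
	return new WP_Error(
		'install_warning',
		'Plugins and themes run with full access to your site. ' .
		'Define SKIP_INSTALL_WARNING in wp-config.php to confirm installs.'
	);
}, 10, 2 );
```

The design choice worth noting is that the opt-out lives in the configuration file rather than in a checkbox: clicking through a dialog is cheap, but editing wp-config.php demands exactly the moment of deliberation the speed bump is meant to create.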

This post shouldn’t be taken as a slam on WordPress particularly; I use it for this site, and in my business, and I recommend it to people all the time. It really is a good piece of software. I’m writing it because this type of “our software would be secure if only our users weren’t idiots” mentality comes up in all sorts of different software projects, and it needs to be pushed back on.

A system that is only as secure as its user is diligent is insecure. WordPress is (or should be, anyway) better than that.

UPDATE (January 16, 2014): It’s worth noting that as of WordPress 3.7, the software can now update itself, removing my complaint #1 above. Good work, WordPress devs! This is a big step forward.


The act of generosity that changed the world

Robert Cailliau, June 1995

April 30th was the 20-year anniversary of the decision by CERN, the institute where Tim Berners-Lee worked when he invented the World Wide Web, to place the original Web technologies into the public domain. The original legal statement that did so is here.

It is difficult to overstate the significance of that decision. Putting the WWW technologies into the public domain meant that anyone could pick them up and hack on them, without needing to get permission from CERN or pay a licensing fee. This led to lots of people around the world doing just that, which gave the Web a huge surge of early momentum that propelled it past other hypertext systems and started it on the road to becoming the universal gateway to information it has become.

The decision to release WWW into the public domain came from Berners-Lee and his first collaborator on the project at CERN, Robert Cailliau (pictured, in June 1995). Cailliau then spent months (Cailliau’s account says six, Berners-Lee’s says eighteen) lobbying CERN’s bureaucracy to bring them around to the same conclusion.

It was never a given that CERN would go the route it did with WWW. They could have chosen to patent it, in hopes of having more direct control over its future and reaping licensing fees from those who would use it. In fact, that was the route that was taken by the fledgling WWW’s biggest contemporary competitor.

Gopher was an information system developed by a team at the University of Minnesota led by Mark McCahill. Like WWW, it was a hypertext system, and also like WWW it was built on the Internet rather than on a proprietary corporate network. Gopher sites were connected together in “Gopherspace” via hypertext links, so you could surf from one to another in much the same way Web users do.  Also like WWW, it found an early niche in academia, one of the few sectors where connectivity to the Internet was widespread in the early ’90s.

So Gopher was a lot like WWW. But in one crucial way, it was different — the University of Minnesota didn’t set it free. They chose instead to charge a licensing fee to anyone who wanted to use their version of the Gopher software. This decision was damaging to Gopher in two ways: first, it turned off lots of potential new users from just picking up the existing Gopher software and doing cool stuff with it; and second, it dampened the enthusiasm of developers outside the U of M for writing their own Gopher software, out of fear that the university might claim their ownership of the Gopher concept gave them an ownership stake in their product too. Using the WWW meant not having to deal with any of these worries, so people fell away from Gopher and went to the WWW, and the rest is history.

And that history changed a lot of people’s lives, including mine. I first encountered the Web during my freshman year in college, back in 1994. It was one of those moments you remember for the rest of your life; I’d been tinkering with computers since I was a little kid, and I came out of high school convinced that widespread ownership of computers was going to change society in some profound way, but I had no idea what that way would be. Then I saw the Web, and all the exciting things that were bubbling up around it thanks to CERN’s decision to free it, and was struck with the thought that this is it — this is the lever that is going to move the world.

The next thought I had was I need to find a way to be a part of this, and the rest of my life since that day has more or less flowed directly out of that thought.

(It was such a strong feeling that I actually spent the next couple of weeks after that first experience debating whether I should drop out of school and go to Silicon Valley to try and get in on it somehow. In retrospect, I’m glad I didn’t — I didn’t have any skill at that point in the kinds of programming languages needed to get a foot in the door then, like C/C++, so I probably would have failed spectacularly. The adventurous side of me still wonders occasionally if I might have gotten lucky had I tried it, though.)

So while I haven’t been doing this for twenty years quite yet, it’s getting pretty close to that. Had CERN locked up the Web behind patents or licensing fees, it’s unlikely the Web would ever have become as lively and interesting as the one I encountered in the university computer lab that day. (I actually used Gopher a bit around the same time, and it struck me as interesting but less fundamentally important; in retrospect, that was probably because you could just see more people doing more interesting things with the Web than you could with Gopher, thanks to Gopher’s more restrictive licensing.) And if that had been the case — if the Web had struck me as an interesting toy, but nothing more — I honestly can’t tell you what I would be doing today. The direction of my life was altered too fundamentally by Berners-Lee and Cailliau’s decision for me to imagine now what other way it would have gone.

And there are lots of others who could tell the same story, some of whom have made billions. It’s easy to add up all the money people have made off their invention and say “wow, if Berners-Lee and Cailliau had chosen to lock WWW up they could have made all that money themselves.” But of course, the foundation for all those successes was the fact that the Web was an open platform. Building something like the Web we know today, in other words, required as its first, catalytic step a fundamental act of generosity. Without the galvanic effect of that step, none of the online services we know today, large or small, would have crawled out of the primordial ooze.

So this seems like as good a moment as any to say thank you to Berners-Lee, Cailliau and CERN. Their generosity changed the direction of history — but (more importantly to me!) it changed my life, too. It made my life, at least that part of it I have lived to date. And that’s something worth being thankful for.


The long, strange trip of Désirée Clary

Image: from “Portrait of Désirée Clary,” by François Pascal Simon, Baron Gérard, 1810.

I like the Portrait of Désirée Clary just because it’s so striking to look at. But even if you don’t care about its aesthetic appeal, its subject is worth learning about: Désirée Clary is one of those fascinating figures you sometimes find on the fringes of Big History.

In her teenage years, she attracted the attention of a young Corsican artillery officer of little note named Napoléon Bonaparte. The two became engaged, but the engagement was broken when her family moved to Genoa. Depressed, Bonaparte poured his feelings about the relationship into a self-pitying romantic novella, Clisson et Eugénie (Clisson and Eugénie), about a doomed relationship between a dutiful soldier and his faithless wife back home.

Upon her return to France some years later, she met and married another soldier, General Jean Baptiste Jules Bernadotte. Bernadotte was a talented soldier himself, and after Napoléon seized power and made himself Emperor he elevated Bernadotte to the elite rank of Marshal of the Empire. The Marshals led Napoléon’s armies through the whirlwind of the Napoleonic Wars, as the little artillerist led France to conquer most of Europe.

Bernadotte was one of the few Marshals who had built a reputation on his own before the rise of Napoléon; this made his relationship with the Emperor a delicate one, since both were conscious that he could be seen in the right light as a potential rival for the Imperial throne. Because of her unique relationship with both men, Désirée was periodically employed by each in attempts to influence the other, with varying degrees of success. That all came to an end in 1809, one year before the Portrait of Désirée Clary was made, when Napoléon, furious at Bernadotte for his lackluster performance at the Battle of Wagram, stripped him of his Marshal’s baton. This effectively ended his career in the French military.

From a strictly ladder-climbing perspective, it may therefore appear that Désirée had married below her potential; if she had married Bonaparte, she would have become Empress of France, rather than wife of a disgraced Marshal. Time and fate would prove this judgement incorrect, however.

Back in 1806, at the Battle of Lübeck, Bernadotte’s forces had captured a contingent of Swedes. As was his usual practice — while he had plenty of faults, cruelty to prisoners was not one of them —  Bernadotte had taken care to see that these prisoners were treated with kindness. When they returned to Sweden, the prisoners therefore had a story to tell of at least one French Marshal who was not a faceless, inhuman enemy. And tell it they did, all across Sweden.

This story redounded to Bernadotte’s benefit a few years later, when it became clear that the reigning (and quite old) King of Sweden, Charles XIII, was not going to be fathering any heirs to continue his royal line. This led that country’s nobility to search for a person who could be adopted into the royal family in order to take the throne when Charles died. Since Europe was embroiled in war, they thought, it was important that this person have a good military mind; and since Imperial France was at the peak of its strength, good connections there would be important too. Then someone cast their mind back to 1806. What was the name of that French general who had shown such kindness to the Swedes he had captured? What an excellent candidate he would make! So it was that in 1810, the year the Portrait was made, Jean Baptiste Jules Bernadotte was adopted into the Swedish royal line, and Désirée became Crown Princess of Sweden.

This would prove to be a much better long-term position than being Napoléon’s Empress would have been. By 1815 Napoléon’s power had been shattered for good, while Bernadotte would rule Sweden (and eventually Norway, too) for nearly three more decades. And even after Bernadotte’s death, Désirée would have influence in his kingdom as Queen Dowager until her own death in 1860, long after the First French Empire had been consigned to history.

But while her choices brought her incredible success at ascending the power structures of Europe — success that any ordinary social climber would have deeply envied — they were less successful at bringing her happiness. She had never sought a crown of her own, and had always preferred life in Paris to anywhere else in the world; she resisted actually moving to her new kingdom for years, leading to long separations as Bernadotte took the reins in Stockholm. She briefly attempted to live there after her husband’s adoption, but found the climate unappealing and the royal family unwelcoming; she left for Paris in 1811 and would not return for twelve years, during which time both she and Bernadotte fell in love with others. Even after she came back to Stockholm and tried to live up to her royal role, she and Bernadotte drifted further apart, divided by her lack of interest in politics and dislike of the tiresome formality a Queen was expected to live under at all times. She became, as a result, a marginal figure in her own kingdom, viewed by her subjects as a sort of strange hothouse import, a Frenchwoman transplanted unsuccessfully into Swedish soil.

Much of the disconnectedness of her later life can be understood from a single fact: despite being queen of Sweden for almost 50 years, she never learned Swedish.

The woman who had been born a Marseilles silk-maker’s daughter died a queen in Stockholm in 1860. In an age of turmoil that struck down established personages and elevated new ones up to unprecedented heights, she had been elevated higher than nearly anyone else. But one has to wonder, as she gazed upon her unwanted crown, if it ever truly seemed worth what it had cost her.


How winners win: John Boyd and the four qualities of victorious organizations

John Boyd

As long as I’m talking about people whose thinking has influenced my own, I should mention another one: John Boyd. Boyd was an Air Force officer who laid out some new and fundamental ideas about how people and organizations behave when in conflict with each other. Discovering Boyd and his ideas helped me better understand a lot of things I had observed in life but could never really explain.

I’m not going to use this space to write a bio of the man, especially since a very good one (Robert Coram’s Boyd: The Fighter Pilot Who Changed the Art of War) already exists. What I want to do instead is try to encapsulate a small part of his thinking in language suitable for general audiences; since Boyd was a military man, much of the discussion about his ideas has been conducted inside the defense community, which means it’s laden with jargon and acronyms that outsiders can find impenetrable. (Boyd himself didn’t help either; he preferred to present his ideas via in-person briefings rather than writing articles or books, which meant that after he passed away his ideas were locked up in decks of slides and notes rather than a single cohesive work. A book written by one of his associates is the closest thing we have to a “Boydism 101.”) Which is a shame, since there’s a lot of stuff there that’s valuable for anyone to understand.

Let’s begin with a fundamental question: when organizations come into conflict, why do the winners win? Or, more precisely, what about them makes them winners?

Boyd, being a warrior, looked at this question through the lens of military history. Throughout history, there have been a few armies that somehow managed to sweep from victory to victory in a seemingly unstoppable fashion, even when confronted with theoretically much stronger opposition: Hannibal’s Carthaginians, Genghis Khan’s Mongols, Napoleon’s Grande Armée, Guderian’s panzerkorps. Forces like these could still be brought to bay — sometimes by patient opponents willing to absorb enormous numbers of casualties, other times only by exogenous factors like the Russian winter. But before that happened they all racked up such impressive strings of victories that they reshaped the story of their age.

So, Boyd wondered, was that all just dumb luck? Or is there a common thread that runs through all of them — some element of how they were organized that predisposed them to victory?

Boyd’s research led him to believe that such a common element did exist. It could be found in the culture that all these institutions fostered among those who belonged to them. A culture focused on a particular set of essential principles, Boyd believed, would give the organization that followed it an edge when confronting any organization that did not. He called this set of principles the “organizational climate for operational success.”

The most commonly given version of Boyd’s presentations on this subject listed four core qualities that, taken together, would create this organizational climate. Boyd used German words to identify each, so I’ll do the same here, but with English-language explanations of the idea each one is getting at.

The four qualities are:

Fingerspitzengefühl — “intuitiveness.” Victorious organizations find roles for people that match their talents, and then give them time and opportunity to develop their skills to such a point that they can react to new situations automatically rather than having to consult policies or wait for direction from above. The word fingerspitzengefühl translates literally to “fingertip feeling,” and this gives a sense for the meaning of the term; by the time a baseball pitcher, say, reaches the major leagues, he’s thrown so many pitches that he has an intuitive sense for his own strengths and weaknesses and how to apply those against the strengths and weaknesses of each different batter. He doesn’t need to pull out a manual to find out which pitch to use against a particular type of batter. He just knows.

Einheit — “unity.”  Victorious organizations foster a common outlook among all their participants, from the lowest to the highest; a feeling of “we’re all in this together.” This outlook is fostered by participation in common experiences, which help individuals in the organization continue to relate to each other, even as their own individual careers focus on the things necessary to develop their own individual fingerspitzengefühl.

Auftragstaktik — “leadership by contract.” Victorious organizations avoid micromanagement, preferring instead to have leaders define goals and then offer subordinates a defined set of resources and constraints they will have to operate within in order to achieve those goals. The subordinate then has the free choice to either take on the package of goals, resources and constraints, or decline to do so if she feels the goals cannot be met with the resources and constraints offered. The leader can then either work with the subordinate to tune the parameters of the project to a point where she can honestly say she can fulfill its requirements, or try to find someone else who can work with a different set of parameters. Hence the phrasing “leadership by contract” — once the two have scoped out the task and the subordinate has said she can get it done within the parameters described, an unwritten contract is in effect between the two; the subordinate’s responsibility is now to do what she has promised to do, while the leader’s is to ensure that the resources promised are made available and the constraints described don’t change.

Schwerpunkt — “the point of decision.” Victorious organizations identify their opponents’ biggest weakness, and then focus all their efforts on exploiting that weakness. They do not divert effort to satisfy internal politics, or in an attempt to hedge their bet. They put all their people, money and time behind a single arrow, and launch that arrow with devastating effect at their opposition.

Now, none of these qualities is a discovery unique to Boyd. Boyd’s insight lay in seeing that they only lead to success when they are all employed together. You have almost certainly seen this in your own experiences; everyone has seen a group with highly skilled members (lots of fingerspitzengefühl) that nonetheless fails because it wastes those people’s skills on a million unrelated side projects rather than focusing them all on a single clear goal (no schwerpunkt), or a group that is focused on a schwerpunkt but falls apart before arriving there because the people in the group don’t trust one another (no einheit).

Even more importantly, this means that while all four qualities have to be fostered to gain Boyd’s edge, failure at any one of them can lead to losing it. Think of a startup whose founders and staff, for instance, work together in a big open-plan workspace. Then imagine the founders cash out, and new management comes in, and the first thing they do is set up offices for themselves while everyone else keeps on working in the open space. By breaking the established social contract in the group, such a move demolishes einheit — suddenly, rather than the team being Us, it becomes Us and Them. Even if the new managers do better fostering the other three qualities than the founders did, it won’t matter, because their group’s lack of unity will undermine them; the group will destroy itself before its opponents ever need to.

So whether you lead a group or are simply a member of one, Boyd offers a lot to think about regarding your institutional culture. Is it setting you up for success? For victory?


A word of advice for Rand Paul and anti-drone activists

MQ-1 Predator

So with Sen. Rand Paul launching a filibuster over the issue, we’re finally seeing some opposition in Congress to President Obama’s program of using drone strikes to kill suspected terrorists. And I say, good! We know the program has already killed at least one American citizen — radical Islamic cleric Anwar al-Awlaki — and since the list of targets is a state secret, for all we know there may be other Americans who have ended up on the wrong side of a Hellfire missile too. The targets may indeed be traitors, but even traitors are supposed to be entitled to their day in court. And even if they won’t appear in court, the government should still have to lay out to the public the reasons why it believes a person poses an imminent threat before force is applied against them. Otherwise abuses of this awesome power are inevitable.

But while I applaud Sen. Paul and other activists for taking this up, I do have one bit of advice for them:

You keep talking about drones. If you want to win, you should stop. Here is why.

First, you’re not really against the drones per se, you’re against the drones being used to kill citizens without trial. If armed drones were only used on the battlefield, like any other weapon of war, they’d be much less objectionable. If someone’s shooting at Americans, it doesn’t really matter, morally speaking, if the Hellfire missile that stops them is fired from a manned F-15E or a remotely piloted MQ-9. And if the President was having American citizens knocked off by snipers rather than drones, you’d be right to be just as against that as you are against this. So setting up your message as “anti-drone” rather than “anti-assassination” just diverts the conversation from the heart of the issue. Your argument is about the why of this program, not the how.

Second, if you insist on positioning yourself as “anti-drone,” you will need to contend at some point with an unpleasant reality: drones are really, really popular. Somewhere between two-thirds and four-fifths of the American public supports their use. This may confuse you, but the reason why is not hard to understand; they’re not really in love with drones themselves, they’re in love with a mechanism that lets the President fight his little wars overseas without having to send their kids off to get shot at. When people hear “drones”, they hear “no boots on the ground,” and after a decade of boots-on-the-ground war in Iraq and Afghanistan, they are quite happy to let robots do their fighting for a while instead. So, looking at the question simply from a crass political perspective, being the guy who hates drones is a losing proposition. The guy who wants to uphold the Constitution and due process of law, by contrast, is in a much friendlier place.

So, as someone who agrees with you on this and wants to see you win, here’s my two cents: stop talking about the means, and start talking about the ends. Drones aren’t the issue. Assassination is.


Stephen Douglas, the politician who was too smart for his own good

Stephen A. Douglas

With the administration of George W. Bush still fresh in our collective memories, it’s easy to think that the problem with our system is the way it periodically throws stupid people into positions of power. And if that’s your diagnosis, the solution seems simple: just put smart people into those positions instead, and all will be well.

The problem, though, is that intelligence, in and of itself, is no guarantee of wisdom. Smart people tend to fall into different types of snares than stupid people; but fall into snares they do, and with distressing frequency. Self-destruction is not the exclusive province of the dumb.

History offers many examples of smart leaders who proved a bit too smart for their own good, but for my money the most impressive one is that of one of the great statesmen of 19th century America: Stephen A. Douglas of Illinois, whose attempts to head off the looming sectional crisis over slavery ended up instead driving it into full-scale civil war.

We today remember Douglas primarily as an antagonist for Abraham Lincoln — a sort of designated foil, whose role in our memory is to erect obstacles that Lincoln has to overcome to realize his destiny as America’s greatest leader. But contemporaries of Lincoln and Douglas in the years leading up to the Civil War would have scoffed at such a characterization. To them, it was Lincoln who was the footnote to Douglas’ career, rather than the other way around.

Both men entered national politics in the 1840s, but the trajectory of their careers could not have been more different. Before his fantastically unlikely capture of the Republican presidential nomination at that party’s 1860 convention, Lincoln was generally considered a political B-lister, a small-town lawyer whose lackluster public career had been marked only by a single term in the House of Representatives and an unsuccessful bid for a U.S. Senate seat. Douglas, on the other hand, was a titan, one of the best-known politicians in America and the undisputed master of the complicated politics of the U.S. Senate. Elected to his first national office in 1842, by 1846 he had reached the Senate and throughout the 1850s he was one of the national leaders of the Democratic Party and a leading candidate for that party’s Presidential nomination.

While Lincoln’s stalled career gave him few opportunities to affect the crisis that was slowly building in American life, Douglas’ meteoric rise put him right at the center of events. And he tried, several times, to use that position — and the considerable rhetorical and organizational talents that had gotten him there — to defuse that crisis. Each time, Douglas devised a plan of action that was striking in its cleverness. And each time, that very cleverness proved to be the plan’s undoing, planting a bomb that would end up causing the nation more damage than it prevented.

Washington, 1854

The first of Douglas’ great miscalculations was the Kansas-Nebraska Act of 1854.

The great question of American political life in the first half of the nineteenth century was how to keep the nation’s division between slave states and free states from tearing it apart as the nation expanded. The stability of the early Republic rested upon a delicate balance of power;  as the nation grew, and new states began to be carved out of newly won territories, Congress was forced repeatedly to act to establish some kind of equilibrium. The first crisis came after the purchase of the Louisiana Territory from France; the mechanism established to address that crisis, the Missouri Compromise, established the principle that each slave state admitted to the Union should have a corresponding free state admitted with it to preserve the balance. This ensured that both sides had the same number of votes in the Senate, where each state was allocated two Senators; and since failure to pass the Senate would stop any bill from becoming a law, legislation that overtly favored one section of the country could be put down there by the other. Neither section, therefore, had reason to fear that the other could interfere with its internal institutions unilaterally.

To make it clear where each should come from, the Missouri Compromise established a boundary line (36°30′ north) across the middle of the nation, north of which could only be created free states and south of which could only be created slave states. This tenuous agreement held when further territory was wrested from Mexico in the 1840s in the Mexican-American War; the disposition of that territory was settled by another compromise, the Compromise of 1850, which upheld the principle of balancing free and slave states.

The matter came to a head again in 1854, however, when Congress moved to organize two new territories, Kansas and Nebraska. Both lay north of the Missouri Compromise line, meaning that by the terms of that agreement any states formed from them would have to be admitted as free states. At the same time, Congress was debating the question of how best to build a transcontinental railroad, and various cities were put forward as the eastern terminus of such a railroad’s route.

Senator Douglas, being from Illinois and an ardent supporter of railroad expansion, wanted to establish the railroad’s start at Chicago, since doing so would make Illinois the gateway to the West and bring a huge volume of business through that city. Southern leaders, however, had the same dreams for their own cities, such as New Orleans. Douglas, seeking to kill two birds with one stone, devised a new compromise: the South would accept the Chicago route for the railroad, and in return, the old Missouri Compromise line would be repealed. The question of whether to admit Kansas and Nebraska (and, indeed, all other future territories) as free or slave states, no longer a simple question of geography, would be left instead to the voters of each territory to settle in a vote — a solution that became known as “popular sovereignty.”

(Douglas’ initial proposal, notably, had included popular sovereignty but had not explicitly repealed the Missouri Compromise line. Southerners, however, feeling that a free territory would inevitably vote to become a free state, pushed him for full repeal. The arguments of one of them, Senator Archibald Dixon of Kentucky, eventually swayed Douglas to support full repeal, either from the strength of his logic or the looming threat that without it the South would not back his compromise. “By God, sir,” he told Dixon, “you are right. I will incorporate it into my bill, though I know it will raise a hell of a storm.”)

This compromise, while winning Douglas the support he wanted for his Illinois railroad plan, proceeded to backfire on the nation in spectacular fashion. Since the question of whether a territory would be slave or free now hinged on the number of voters within it who supported each, pro- and anti-slavery activists (including one anti-slavery fanatic who would pop up again later on: John Brown) rushed into Kansas and Nebraska, hoping by their presence to establish a majority for their side. Within a year open violence had broken out between the two factions, earning Kansas the sobriquet “Bleeding Kansas.” Rather than settling the question of whether Kansas should be free or slave, the legislation and the violence it created ended up leaving it wide open, as neither side would accept a vote that went the other way as legitimate.

Beyond the two territories that had prompted the debate, the effects were dramatic as well. Northerners, who had assumed that territories north of the old compromise line were safe from the expansion of slavery, suddenly found themselves confronted with a new reality in which any territory (or established state!) could switch from free to slave with a single vote. Many of those Northerners had previously been content to accept slavery in the South, so long as it never touched them directly; now they began to worry that someday they would have to confront the issue in their own communities. This provided the first major push driving many of these previously apathetic citizens into supporting abolition. Anger at Douglas flared up across the North; as he himself put it, “I could travel from Boston to Chicago by the light of my own effigy.”

Freeport, 1858

One of these newly radicalized Northerners was an Illinois lawyer and minor-league politician named Abraham Lincoln. Disappointed by his meager success in his single term in Congress, he had put himself into a sort of voluntary political retirement in the early 1850s. But the passage of the Kansas-Nebraska Act shocked him enough to compel him to re-enter public life.

On October 16, 1854, Douglas addressed an audience at Peoria, Illinois, promoting the concept of popular sovereignty. Lincoln was there, and spoke to the crowd in response:

The doctrine of self government is right—absolutely and eternally right—but it has no just application, as here attempted. Or perhaps I should rather say that whether it has such just application depends upon whether a negro is not or is a man. If he is not a man, why in that case, he who is a man may, as a matter of self-government, do just as he pleases with him. But if the negro is a man, is it not to that extent, a total destruction of self-government, to say that he too shall not govern himself? When the white man governs himself that is self-government; but when he governs himself, and also governs another man, that is more than self-government—that is despotism. If the negro is a man, why then my ancient faith teaches me that “all men are created equal;” and that there can be no moral right in connection with one man’s making a slave of another…

But you say this question should be left to the people of Nebraska, because they are more particularly interested. If this be the rule, you must leave it to each individual to say for himself whether he will have slaves. What better moral right have thirty-one citizens of Nebraska to say, that the thirty-second shall not hold slaves, than the people of the thirty-one States have to say that slavery shall not go into the thirty-second State at all? …

Whether slavery shall go into Nebraska, or other new territories, is not a matter of exclusive concern to the people who may go there. The whole nation is interested that the best use shall be made of these territories. We want them for the homes of free white people. This they cannot be, to any considerable extent, if slavery shall be planted within them. Slave States are places for poor white people to remove FROM; not to remove TO. New free States are the places for poor people to go to and better their condition. For this use, the nation needs these territories…

Little by little, but steadily as man’s march to the grave, we have been giving up the OLD for the NEW faith. Near eighty years ago we began by declaring that all men are created equal; but now from that beginning we have run down to the other declaration, that for SOME men to enslave OTHERS is a “sacred right of self-government.” These principles can not stand together. They are as opposite as God and mammon; and whoever holds to the one, must despise the other.

As you can see from the third paragraph quoted above, Lincoln’s Peoria speech was not a full-throated call for an end to slavery and the integration of black and white. It marks the beginning of his journey towards emancipation, not the end. But it marked him out for the first time as having definitively come down on the anti-slavery side of the question — and as an eloquent home-state opponent for Douglas.

The two men would meet again in 1858. Lincoln, propelled into the limelight by the increasing urgency of the slavery question, had been put forward by the newly formed Republican Party as Douglas’ opponent for re-election to his Senate seat. The two candidates clashed in a series of debates across Illinois, debates that would become known to history as the Lincoln-Douglas debates. And in these debates, slavery and Kansas-Nebraska — a bill with which Douglas’ name had become inextricably linked — were the primary subjects.

Lincoln was, among other things, a canny politician; and as such, he recognized the unpopularity of Kansas-Nebraska in free Illinois, and sought some way to nail Douglas’ ambitions firmly to it. But he knew that Douglas was a canny politician too, and if left to his own devices would frame his support for the bill in some way which did not make him seem objectively pro-slavery. So Lincoln set out to build a trap for him — a rhetorical snare into which Douglas could be enticed to step without realizing until it was too late what he had done.

At the town of Freeport, Illinois, on August 27, 1858, Lincoln sprang his trap. It consisted of a simple question:

Can the people of a United States Territory, in any lawful way, against the wish of any citizen of the United States, exclude slavery from its limits prior to the formation of a State Constitution?

This question is not only simple; it is deceptively simple. For Lincoln designed it to throw Douglas upon the horns of a dilemma, thanks to a Supreme Court decision issued the previous year.

In the landmark case of Dred Scott v. Sandford, the Court had ruled that African-Americans had no right to be free anywhere in the United States — an expansion of the “right” to own slaves that went far beyond Douglas’ principle of popular sovereignty, since it implied that even if the people of a territory wished to vote freedom for their black neighbors, they had no legal right to do so. Unsurprisingly, Dred Scott agitated the North even more than Kansas-Nebraska had. But Douglas, as a legislative leader and a man who had made his career working within the system, had refused to denounce the decision as illegitimate. The purpose of Lincoln’s question was to force Douglas to choose between supporting the legitimacy of the Court’s decision and supporting his own doctrine of popular sovereignty. Did he still believe that the people of a territory had the right to vote freedom for blacks? If he did, didn’t it logically follow that he should be fighting the Court’s decision, which explicitly removed that right? In this question, to support one was to oppose the other; there was no middle ground for the great compromiser to save himself by standing upon.

Or so Lincoln thought. Douglas, thinking fast, took Lincoln’s choices and carved out a third path which allowed him to escape the carefully laid trap:

It matters not what way the Supreme Court may hereafter decide as to the abstract question whether slavery may or may not go into a Territory under the Constitution, the people have the lawful means to introduce it or exclude it as they please, for the reason that slavery cannot exist a day or an hour anywhere, unless it is supported by local police regulations. Those police regulations can only be established by the local legislature; and if the people are opposed to slavery, they will elect representatives to that body who will by unfriendly legislation effectually prevent the introduction of it into their midst. If, on the contrary, they are for it, their legislation will favor its extension. Hence, no matter what the decision of the Supreme Court may be on that abstract question, still the right of the people to make a Slave Territory or a Free Territory is perfect and complete under the Nebraska bill.

Dred Scott, in other words, was completely compatible with the principles of Kansas-Nebraska. The Court had said that territories could not prevent slave owners from bringing in their slaves by referendum; but, practically speaking, slavery could not exist in a territory if it were not propped up by a web of local laws and regulations, so a territory could still make itself “free” by simply refusing to establish those laws and regulations. Slave owners could still bring their slaves in, but who would want to bring slaves into a place where no laws existed to help you recover them if they ran away, or to allow you to punish them if they refused to work? Nobody.

It was an ingenious argument, and it won Douglas victory over Lincoln in that year’s election. But as with Kansas-Nebraska, while it solved an immediate problem for Douglas, it did so in a way that planted the seeds of future trouble for him as well. Douglas’ third way became known as the “Freeport Doctrine,” and while it palliated Illinois, it infuriated the South. Here, Southern leaders thought, was a glimpse into the real way Douglas’ mind worked, setting up formal approval of slavery while at the same time winking to anyone who opposed it that nobody would stop them if they used local law to harry slave owners out of their territory. It made Douglas appear Janus-faced, with one message for the South and a completely different one for the North. And as Douglas rose to become the leader of the national Democratic party, this gave southern Democrats cause to wonder: if our leader is willing to sell us out, will our party do so as well?

Charleston, 1860

This uneasiness with Douglas as a leader boiled over two years later, when the Democrats met in Charleston, South Carolina to nominate a candidate for the 1860 Presidential election.

Douglas, as the only national Democratic leader with a foot in both the Northern and Southern wings of his party, was the front-runner for that position. But it quickly became clear when the convention opened on April 23 that the depth of the South’s anger at the Freeport Doctrine threatened to make it impossible for him to win nomination. Southern “fire-eaters” felt that Douglas’ creation of that doctrine made him an unacceptable candidate, and began to agitate for more openly pro-slavery leaders, such as Robert M.T. Hunter of Virginia and James Guthrie of Kentucky.

Convention rules required a candidate to win two-thirds of the delegates in order to secure the nomination, and as the ballots were cast, it became clear that despite the support of most Northern delegates, Southern dissatisfaction with Douglas was great enough to prevent him from reaching that level of support.

The fire-eaters, believing they had Douglas cornered by his own ambition, presented him with a way to break the impasse. He could win their support, they said, by presenting them with a demonstration of his loyalty to the Southern cause — a push to include a call for a Federal slave code in the party platform. The establishment of a national set of laws governing how slaves were to be treated would render the Freeport Doctrine moot (since there would no longer be a need for previously-free states to establish local laws to allow slavery to operate) and put Douglas firmly in the pro-slavery camp. And with the Southern delegates in his pocket, Douglas would have more than enough to win the nomination.

The Southerners didn’t only approach Douglas with this carrot, though. They brandished a stick as well. If Douglas refused to support adding the call for a national slave code to the platform, they threatened, they might be forced to conclude that there was no candidate at the convention they could support — and if that happened, they might simply walk out. Such a walkout would mean doom for the party’s chances in the general election; much of the Democratic Party’s strength in 1860 was in the South, so if Southern Democrats refused to support the party ticket there was no way it could overcome the Republican opposition, whose base was a unified North.

Considering this offer, Douglas knew that it was not as simple as the fire-eaters made it sound. Abandoning the Freeport Doctrine and supporting a national slave code might win him Southern delegates, but it would at the same time alienate Northern ones — the very people he had devised the Freeport Doctrine to appease. But he could not win the nomination with the Northern delegates alone. Just as he had been in Washington and Freeport, he was trapped — unless he could devise a new way out.

And, great compromiser that he was, he managed to find one. He would not, he told the Southern delegates, support adding a call for a national slave code to the party platform. What he would do, however, was support adding a call for questions of slave owners’ property rights to be decided by the Supreme Court, rather than by local courts and laws.

As with his other compromises, this represented a brilliant threading of the needle. It allowed him to avoid jumping firmly into the pro-slavery side of the debate, since his plan would not definitively establish any laws favoring slave owners. And at the same time, he thought, it should be enough to appease Southerners, since the Supreme Court had demonstrated itself in Dred Scott to be a strongly pro-slavery institution. The pitch to the fire-eaters was simple: this will effectively get you what you want. To avoid inflaming Northern opinion, it won’t do it overtly and explicitly; but in practice, it will still be done. Shouldn’t that be enough?

It was not.

Douglas put forward his compromise, and it garnered enough support to be included in the party platform:

Inasmuch as difference of opinion exists in the Democratic party as to the nature and extent of the powers of a Territorial Legislature, and as to the powers and duties of Congress, under the Constitution of the United States, over the institution of slavery within the Territories,

Resolved, That the Democratic party will abide by the decision of the Supreme Court of the United States upon these questions of Constitutional Law.

There were some Southern Democrats who could swallow this evasion. But the hardest of the hard-core — the fire-eaters — could not. In the first act of what would become a long list of secessions, fifty of them walked out of the convention on April 30, throwing the Democrats into chaos and forcing the convention to close and reconvene later.

“Later” came in June, when the Democrats attempted to reconvene in Baltimore. What began as one convention quickly split into two: one for delegates who had not walked out in Charleston, and another for those who had. With his most vociferous opponents having left for the splinter convention, Douglas had little difficulty winning nomination from those who stayed; but the nomination he finally won was damaged goods, since it was only for the leadership of one part of a fatally split party. The fire-eaters nominated their own candidate, John C. Breckinridge of Kentucky, putting Douglas in the awkward position of having to run a national campaign for President against two candidates (Breckinridge and the Republican candidate, Lincoln) whose support was explicitly sectional.

It was hopeless, but Douglas tried it anyway. Fearing that the split in the Democratic Party presaged a split in the nation itself, he broke one of the great unwritten rules of the presidency: that it was beneath the dignity of Presidential candidates to campaign in person. He launched an energetic campaign that took him across the country, personally addressing crowds wherever he went. But he was a compromiser in a moment when compromise was disdained as appeasement, and had to struggle everywhere against suspicions that no matter what he told one crowd he really believed something different. He had never supported secession, and in his campaign swings through the South he bravely tried to convince Southern audiences that they should not either — even calling for a tight noose for the necks of traitors:

We do not stop to inquire whether you here in Raleigh [, North Carolina] or the Abolitionists in Maine like every provision of that Constitution or not. It is enough for me that our fathers made it. Every man that holds office under the Constitution is sworn to protect it. Our children are brought up and educated under it, and they are early impressed by the injunction that they shall at all times yield a ready obedience to it. I am in favor of executing in good faith every clause and provision of the Constitution, and of protecting every right under it, and then hanging every man who takes up arms against it. Yes, my friends, I would hang every man higher than Haman who would attempt by force to resist the execution of any provision of the Constitution which our fathers made and bequeathed to us.

To the end, to Election Day, he fought ferociously for his dream of a Union preserved through compromise. But he was simply the last to realize that this dream, which had held the nation together for decades, was now well and truly dead. He ended up coming in second in the popular vote, behind Lincoln, but he carried only one state (Missouri) and his support was otherwise scattered across so many places that out of 303 total electoral votes, he won only twelve. (Breckinridge, the fire-eaters’ candidate, won far fewer popular votes than Douglas, but since those votes were concentrated in the South they earned him 72 electoral votes.) His fear of a national schism had been well-grounded; despite all his efforts, within six months the fire-eaters would take their states out of the Union, and Southern shells would be bursting over Fort Sumter. The conflict he had spent his life trying to prevent had finally come — brought on, in no small part, by his efforts to prevent it.

He would live to see the failure of his efforts, but only just. The strain of his 1860 campaign wore upon him, and by early 1861, as the nation began to fall apart, his health collapsed as well. He shared the platform with Abraham Lincoln one last time, at Lincoln’s inauguration on March 4; in a gracious gesture, he held the hat of the man whom history had propelled onto the trajectory he had dreamed of for himself. On June 3, 1861, weeks before the armies of North and South met for the first time at the First Battle of Bull Run, he died in Chicago.

Our nation has occasionally elevated fools to high office, and suffered for it. But Stephen A. Douglas was no fool. If anything, he was the opposite — a man intelligent enough to consistently find ways to square impossible circles. But his life instructs us that intelligence, alone, is not enough to a leader make. It needs to be leavened with wisdom, and wit, and humility — none of which were Douglas’ strong suits. Thrust into an age of crisis, he fought throughout his career to erect castles of the mind strong enough for a nation to take shelter in, only to discover that to build a castle of the mind is not enough. You need to know how to convince the nation to shelter with you within it, as well. And it was this failure — his failure to understand that politics is not so much about demonstrating intellect or scoring debating points as it is about being a good shepherd — that doomed him to become a footnote to the tale of his times.


Google and Paint.NET need to stop misleading users

Paint.NET is an excellent, free, easy-to-use image editing program for Microsoft Windows. I have frequently recommended it to Windows users who needed an inexpensive, lightweight graphics tool.

But this post isn’t about Paint.NET, really. It’s about the Paint.NET web site. Which is a horror show.

The problem is this: the Paint.NET web site runs ads from Google’s ad network. And those ads are designed in such a way as to lead naïve users to believe clicking the ad will download Paint.NET, when in actuality it causes some other, completely unrelated software to be downloaded. And then the user, thinking they are installing Paint.NET, double-clicks the downloaded installer and gets that completely unrelated software onto their machine.

This is unethical any way you slice it. Even if the unrelated software is completely innocuous, it’s still being distributed to users under false pretenses. And worse, it’s possible that the software is not innocuous; that it’s spyware, or malware, or some other nasty thing.

Let me show you what I mean. Imagine that you were telling me you needed an image editor, and I, helpful geek that I am, told you to go to www.getpaint.net to download Paint.NET.

Here is what I saw when I went there today:

Paint.NET homepage

At first glance, where would you think you should click to download the Paint.NET software?

The answer is the link in the middle right, under “Get it now (free download)”. But that link is visually swamped by the two huge ad units below it, each of which features a giant blue button labeled “DOWNLOAD”. We know that people don’t read online, so a text link is always going to be “seen” by the user after they’ve noticed the graphical elements.

Now assume that you have somehow managed to find the correct link to click, and clicked it. What happens after the click? You see this:

Paint.NET download page

Again with the giant blue “Download” button! And this time it’s even higher up on the page than the real download button, which makes it even more likely that non-technical users will be tricked into clicking it.

Sometimes the ads are even worse than the ones shown above. Like this one:

Ad on Paint.NET site

Notice how it puts the word “RECOMMENDED” in big red letters at the top, to imply that their software is recommended by the authors of Paint.NET, which (as far as I can tell) it absolutely is not.

So if you get tricked into downloading something from one of these misleading ads, what kind of software are you getting? I followed one ad and found myself at the Web site for something called “Zipper.” Zipper appears to be software for decompressing archive files, such as ZIP files. But if you scroll down to the bottom of the page and peek at the fine print, you see the real payload:

This installation is distributed with the SweetIM Toolbar. You can decline to install it. Free emoticons & search for your browser, search aid when misspelling or incorrectly formatting browser address request and SweetIM search Home Page…

This installation is distributed with the Claro Toolbar. You can decline to install it. Search the web, free online games, shopping offer and discounts and much more…

This installation is distributed with the Incredibar Toolbar. You can decline to install it. With the Incredibar Toolbar you’ll be able to access your favourite videos in just a click…

This installation is distributed with the Funmoods Toolbar. You can decline to install it. Funmoods is a free toolbar add-on for social networks chat that gives you a huge collection of smileys, winks, text effects and more…

This installation is distributed with the Babylon Toolbar. You can decline to install it. Make the web your home without boundaries and language barriers. Get quick translation and definitions directly from your browser with the Babylon toolbar…

Zippernew use DomaIQ an install manager that will manage the installation of your selected software. In addition to managing the installation of your selected software, DomaIQ will make recommendations for additional free software that you may be interested in. Additional software may include toolbars, browser add-ons, game applications, anti-virus applications, and other types of applications.

In other words, if you choose to install this software, and you click through the default settings in the installer, you’ll end up with not just the Zipper software itself but five useless browser toolbars and an “install manager” that will nag you periodically to download even more crap. None of which has the slightest thing to do with Paint.NET, the software you set out to download in the first place.

Given the debased state of our nation’s laws, I’m sure this is all perfectly legal. But it stinks. It stinks to high heaven. The malware authors get a vector to get their crap onto people’s PCs; Google gets paid by the malware authors for letting them do so; Paint.NET gets paid by Google every time someone gets fooled and clicks the ads. Everybody “wins” — everybody except the end user.

So my questions are:

  1. These ads are being placed on the Paint.NET site via Google’s AdSense network. Google’s content policies for AdSense state that “publishers may not ask others to click their ads or use deceptive implementation methods to obtain clicks.” How does this type of ad, which attempts to trick users who want to download one product into downloading another, not fall under the rubric of “deceptive implementation methods”?
  2. These ads are being displayed on the Paint.NET website. How much influence does the Paint.NET team have over the content of the ads that are displayed there? Could they stop these misleading ads from running on their site if they wished to?
  3. What financial compensation does the Paint.NET team receive from clicks generated by these ads? Is there a financial incentive being created here for them to allow potential users to be fooled or misled by these ads?

There is no good excuse for these ads to be appearing on the Paint.NET site. None. They don’t help users get what they came for; they don’t even help them get something related to what they came for. They “help” them get completely unrelated software that lards their computer down with obtrusive, malicious, unwanted software. And they do that by trading on Paint.NET’s good reputation, which they have absolutely zero claim to.

It’s unacceptable, and someone should put a stop to it — either the Paint.NET team, or Google. Or, if neither of those parties will act, the Federal Trade Commission, whose complaint line can be found here.


Just say what you mean

If you want to build credibility as a communicator, here’s a piece of advice: just say what you mean.

Too often communicators outsmart themselves by trying to anticipate what their readers are thinking, and then bending their message to fit the contours of those imagined thoughts. But doing this makes you appear weak, unconvinced of the strength of your own message. It signals that you have no confidence in the ability of your message to change people’s minds, so you have to change the message instead. Strong messages drag the audience to them, rather than scurrying to meet the audience where they already are.

For an example of what I’m talking about, look at the way Washington Post publisher Katharine Weymouth announced that her paper would no longer have a position for an ombudsman:

The world has changed, and we at The Post must change with it. We have been privileged to have had the service of many talented ombudsmen (and women) who have addressed readers’ concerns, answered their questions and held The Post to the highest standards of journalism. Those duties are as critical today as ever. Yet it is time that the way these duties are performed evolves…

In short, while we are not filling a position that was created decades ago for a different era, we remain faithful to the mission. We know that you, our readers, will hold us to that, as you should.

In her announcement, Weymouth takes pains to tell readers that while the position of ombudsman will no longer exist, the principles that animated that position are still held in high esteem at The Post; and those principles will continue in practice at the paper, just in different forms.

The thing is, she almost certainly does not believe a word of this.

If she were being honest, her statement would have gone more like this:

“Look. Paying for an in-house critic made sense back in the days after newspaper consolidation, when the one newspaper left standing in a city had a de facto monopoly over news in that city. But those days ended decades ago. Today even a one-newspaper town like Washington has a wide range of other news providers — on TV, on radio, and online — all of whom can challenge us if we get our facts wrong. And if readers have concerns about our coverage, they have tons of ways to express that concern independently of us, through blogs and status updates, Twitters and Tumblrs.

None of which would matter much if we were rolling in cash here at The Washington Post, but you might have noticed from the ever-shrinking size of your print edition that we are not. In fact, we’re using all our powers just trying to figure out how to keep this ship afloat. So we can’t afford to hang on to things that don’t make sense anymore. And an ombudsman is one of those things; the position just doesn’t make sense anymore.”

Notice how, rather than trying to convince the reader that, despite appearances, The Post is actually not doing anything that different, this version accepts that something different is happening, and lays out a case for why it needs to happen. It accepts that the reader may disagree with the decision that was made, and attempts to convince them that they shouldn’t, rather than falling all over itself pandering to the reader’s existing opinion.

In other words, it treats the reader as a thinking adult, rather than as a dullard.

This is the power of just saying what you mean. People aren’t dullards; they know when they’re being talked down to, or around, or condescended to. And this kind of “despite what you see all around you, trust me, nothing has changed” rhetoric sets off those alarm bells rather loudly. It makes you wonder what else you aren’t being told, rather than reassuring you that the person doing the telling knows what they are doing.

Be brave. Have faith in your decisions. Say what you mean.


I kind of hate Twitter

Twitter: this bird has flown

So conservative writer Matt K. Lewis took to the pages of The Week this week to explain how he hates Twitter:

Twitter has become like high school, where the mean kids say something hurtful to boost their self-esteem and to see if others will laugh and join in. Aside from trolling for victims after some tragedy, Twitter isn’t used for reporting much anymore. But it is used for snark.

Which earned him a chorus of guffaws (and, yes, snark), like this response from Choire Sicha at The Awl:

Whenever someone writes one of these screeds, they have to ignore that Twitter is entirely self-selecting. You chose who to follow. You chose to behave like a jerk, or a needy child, or a boor. Twitter didn’t make you an ass.

Now, I’ve never met Mr. Lewis, and since he works at the Daily Caller (ugh) I would have to imagine we wouldn’t agree about much if you put us in a room together. But on this point, I think he is right and Mr. Sicha is wrong.

Which is why I sort of hate Twitter, too.

To establish my bona fides, I’ve been using Twitter on and off since 2008, as myself and as comic personas Fake John McCain (during the ’08 election) and Red, White and News. What I experienced there depressed me sufficiently that I eventually walked away from the service completely and stayed away for two years. Last year I got tired of people asking me why I wasn’t on Twitter, so I sighed and got back on hoping that something significant had changed. It hadn’t.

Here’s my complaint: Mr. Sicha’s statement that “Twitter [doesn’t] make you an ass” is just wrong. Twitter does make you an ass. In fact, its design makes it difficult for you to be anything else.

The medium is the message

To understand what I’m talking about, allow me to digress a bit into the thinking of one of the few people whose work changed my life: Marshall McLuhan.

Insofar as he’s generally remembered today, McLuhan is remembered as a gadfly, a provocateur with some half-baked ideas redeemed by a gift for phrasemaking. But he deserves to be engaged with more seriously than that. When I read his 1964 book Understanding Media: The Extensions of Man as a young man, it turned on a light bulb in my head that has never turned off since. It helped me see the world with new eyes.

To understand how McLuhan is relevant to Twitter, you need to delve into his most famous adage: “the medium is the message.” Lots of people know this quote, but not many seriously understand what McLuhan was getting at with it. What he meant was that the medium you use to send a message affects the way that message will be received by the recipient. There’s no such thing as a neutral medium — the way you choose to communicate a message changes the meaning of the message you communicate.

Consider, for example, a simple message from one person to another: “I love you.” Think of all the different channels over which that message could be transmitted from person A to person B, and how different it would feel to person B to receive it in each. Whispered into the ear, “I love you” can feel erotic. Stated over a candlelit dinner, it can feel romantic. Written on a piece of paper, it can feel formal. Read out on television, it can feel distant.

The words never change, but the message person B receives does. The medium shapes the message.

This means that the forms we choose to put our communications in are significant. They matter. Different media pull the message in different directions; each has its own particular english it imparts upon the ball. This is why an engrossing novel, picked up and used as a film script without any modification, makes for a terrible movie — idioms that work in print don’t work on the big screen, and vice versa. It takes the services of a talented screenwriter to translate the printed work into a filmed work of similar quality, in the same way it takes a talented translator to take a classic work of Russian literature and produce an English version of the same quality.

What’s fascinating about the Internet is that it’s one of very few communications channels over which more than one medium travels. Television is, well, television, but the Internet is a cornucopia of different media: text, audio, video; Web pages, e-mails, instant messages, Tweets. There is no medium called “the Internet”; the Internet is just the pipe through which lots of different media — increasingly, all media — reach us.

To understand McLuhan’s relevance to the digital age, you have to look at each online medium individually — words put on a Web page will be received and processed very differently than the same words spoken in a YouTube video. And the way you take that look is by examining the unique features that define the medium, that make it what it is.

So let’s take a look at Twitter as a medium.

Twitter is designed to embarrass you

Here are a few salient things that make Twitter Twitter:

  1. Short messages. Twitter messages are limited to a maximum of 140 characters.
  2. Low publishing barrier. Twitter is deliberately designed to be as easy as humanly possible to send messages with — you don’t need to provide any metadata about the message (title, subject line, recipient list, etc.) like you do with other online publishing media such as Web pages, blogs or e-mail. You just type a message and hit “send.”
  3. Public. Tweets are, by default, readable by anybody. You have to follow someone to get them delivered right to you (see #4 below), but even if you don’t follow a person you can see their Tweets just by viewing their profile.
  4. Push delivery. You don’t have to go to a friend’s Twitter page to see what they’re saying; their messages, along with those of others you follow, come to you. This can be either in a feed, or (on mobile devices) in the form of notifications. Similarly, messages others write about you come to you as well (as long as they refer to you by your @-username).
  5. Near-real-time. Absent technical problems with the Twitter service, messages posted by a user are seen by that user’s followers effectively instantaneously.
  6. Semi-ephemeral. While Tweets are public by default, and every public Tweet is archived, Twitter does not make those archives easy to access or search. To the user, they seem to just scroll away into oblivion as the feed updates.
  7. Scorekeeping. Twitter provides several mechanisms by which users can “keep score” of their status relative to other users, the most obvious being follower count, which is public and prominently displayed when viewing information about a user.

Given Twitter’s success, it’s hard to argue with any of these choices from a business perspective. But from a McLuhanite perspective, in terms of designing a medium for discussion, these choices are disastrous. They all drive the user in the same direction — away from nuance and towards sharp messages that drive up the user’s “score.”

Let’s discuss exactly how.

  • Short messages encourage the user to strip out qualifiers. Qualifiers are words we insert into statements to either dial up or dial down their impact. They serve an important social function; they allow us to say something negative about a person while simultaneously indicating that the person is not all bad, sparing their ego and limiting how harsh the critique seems to others. “Ted is a little bit of a douchebag” stings Ted less than “Ted is a douchebag” does. But when you only have 140 characters to work with, modifiers like “a little bit of” are the first things to go.
  • Short messages encourage the user to omit detail. The clarity of an argument can be improved by citing sources, identifying limitations, and otherwise fleshing it out. But doing so on Twitter is tedious and unwieldy; the 140-character limit means that anything more than a sentence or two has to be split across multiple Tweets, and there’s no guarantee that someone who sees Tweet 1 of 3 will see Tweets 2 and 3. One alternative is to link out to an external resource (like a Web page) for additional information, but since URLs count against your 140-character allocation too, the medium pushes back against even this limited level of additional detail. If you’re one character over 140, are you going to go back and rewrite your message, or are you just going to drop the URL that points to more information?
  • Low publishing barrier encourages users to publish without thinking. Twitter clearly wants you to use it casually, without a lot of “should I really post this online for the world to read?” deliberation, and from a usability perspective that’s laudable. But the flip side is that it’s easy when being casual to say something dumb or poorly thought through that offends people. We’ve all had moments when we blurted out something that made us sound like an idiot, but in the past there were always some hurdles one would have to overcome to do that online; you can embarrass yourself via email, for instance, but to do so you have to compose not just your regrettable statement but a subject line and list of recipients as well. Twitter removes those hurdles, so now people can embarrass themselves online as easily as they do off. But when you embarrass yourself offline, the only people who see it are the ones standing around you; when you embarrass yourself online, the world can see it, making the potential reputational stakes much higher.
  • Public viewability encourages the user towards posturing rather than candor. When people know their messages are open to public viewing, they frequently self-edit, stripping out information that they would be less hesitant to share in a private communication with one or more known others. This is not unique to Twitter; think of how much less likely you are to see a Facebook status update about a friend gaining five pounds, for instance, than an update about the same friend losing five pounds. While individually this is understandable, when the user is immersed in a community where this is the default behavior, it can lead to depression from the feeling it creates that “everyone’s life is perfect except mine.”
  • Push delivery makes it hard to ignore what people are saying about you. If someone’s talking about you on the Web, you have to go into Google and search to find that out. If someone’s talking about you on Twitter, though, it’s very likely right in your face. This can be flattering if people are saying nice things, but if they’re not, it can feel embarrassing and/or painful; and people who are embarrassed or wounded tend to do stupid things they regret later when the pain has worn off, like lashing back at the person who did the wounding.
  • Near-real-time creates negative feedback loops within communities of users. Because information spreads so quickly through the network, assertions frequently circulate faster than they can be fact-checked. Usually incorrect statements (“Just heard Celebrity X died!!! #omg”) eventually get corrected (“Whoops, Celebrity X isn’t dead after all!!! #whew”), but with near-real-time interaction, by the time the correction comes the incorrect statement can have circulated far more widely than the correction ever will.
  • Semi-ephemeral archiving encourages the user to see Tweets as something they are not. Because of the ease of publishing a Tweet, its constrained size, and the way Tweets rapidly scroll off your feed, it’s easy to get into the mindset that Tweets are something less permanent or less public than, say, a blog post. But because the archives are all public and Google-able, a stupid Tweet can live forever, just like a stupid blog post. When combined with the “just say it” low publishing barriers, this can set people up for embarrassment in ways they don’t fully understand until it happens to them.
  • Scorekeeping mechanisms encourage the user to behave in ways that drive up their score. When the mark of a high-status Twitter user versus a low-status one is the number of followers and re-tweets they generate, users will gravitate towards creating messages that will get re-tweeted and attract followers. And as in all communications, the best way to make a big impression on an audience isn’t to make a considered, nuanced argument; it’s to walk up to the person you’re arguing with and kick him in the nuts. So users gravitate towards snark, outrage, and other sharp forms of expression that grab attention, because that’s the behavior the system incentivizes.

Taken together, all of these factors create an environment where even reasonable, thoughtful people behave like douchebags. They don’t do so because they are douchebags, necessarily. They do so because Twitter as a medium is optimized for douchebaggery. Its design creates an array of pitfalls that can lead you to come off like a douchebag, even if you have no intention to.

Which is why I kind of hate it.

Do I expect this rant to change anything? Not really. Those people who like Twitter seem to really like it, as incomprehensible as that is to me. But then there are people who enjoy going to dive bars and getting in fights on Saturday night, too. At least if you walk into a new bar and someone comes up and punches you, though, nobody comes up later to tell you sanctimoniously that you used the bar the wrong way.


Rick Reilly’s Lance Armstrong problem is all of journalism’s problem

Rick Reilly

Sportswriter Rick Reilly has been a staunch defender of Lance Armstrong against charges that the superstar cyclist’s incredible win record was fueled by performance-enhancing drugs. His faith that Armstrong was clean was buttressed by his relationship with the man himself, who repeatedly denied point-blank to Reilly that he’d doped.

So Reilly was understandably upset when Armstrong emailed him to take it all back just before admitting his doping to Oprah Winfrey:

“Never failed a drug test,” I’d always point out. “Most tested athlete in the world. Tested maybe 500 times. Never flunked one.”

Why? Because Armstrong always told me he was clean.

On the record. Off the record. Every kind of record. In Colorado. In Texas. In France. On team buses. In cars. On cell phones…

Every time — every single time — he’d push himself up on his elbows and his face would be red and he’d stare at me like I’d just shot his dog and give me some very well-delivered explanation involving a few dozen F words, a painting of the accuser as a wronged employee seeking revenge, and how lawsuits were forthcoming.

And when my own reporting would produce no proof, I’d be convinced. I’d go out there and continue polishing a legend that turned out to be plated in fool’s gold.

It has to burn to go out on a limb defending someone and then find out that someone has been deceiving you all along. So I don’t begrudge Reilly his right to be angry with Armstrong. If I were him I’d be angry, too.

But when you get suckered, there’s a point at which you have to wonder if at least a little of the blame for being treated as a sucker doesn’t fall to you — to your willingness to be deceived. Not every sucker is complicit in their own deception, but many are. And a little further down in Reilly’s piece about Armstrong deceiving him, we find this:

Look, I’ve been fooled before. I believed Mark McGwire was hitting those home runs all on his own natural gifts. I believed Joe Paterno couldn’t possibly cover up something so grisly as child molestation. I bought Manti Te’o’s girlfriend story. But those people never looked me square in the pupils and spit.

Which is kind of an astonishing list of admissions for someone who gets paid to follow sports, if you think about it.

Was there anyone who watched Mark McGwire and Sammy Sosa’s home run duel and didn’t at least wonder if the two men were juicing? Was there anyone who watched the evidence pile up against Paterno and didn’t at least wonder if the man was willing to sell out kids to hold on to his legacy?

The only people who you’d expect to be so oblivious are the most rabid of fans — the ones to whom their idols are capable of doing no wrong.

Here’s the thing. If you’re just watching the game from the bleachers, and you want to be a blind fan, that’s your right. (It makes you kind of a dope, I would argue, but this is a free country and you’re free to be a dope if you want to.) But a sportswriter is, in theory, anyway, a journalist. Journalists aren’t supposed to be blind fans. They’re supposed to follow the evidence.

I’m not saying that a journalist in any of these cases had to be convinced right away that the people involved were crossing lines. There was plenty of deniability to go around in all of them, especially in their beginnings. But a journalist, who’s paid to have a critical eye, should at least have been open to the possibility that the denials were false.

And here’s Reilly, admitting that he wasn’t. He just took everyone at their word until doing so became completely untenable. Especially Armstrong, to whom he became so close that even near the end, when the cyclist’s defense essentially boiled down to “everyone in the world is out to get me,” he was still willing to extend the benefit of the doubt.

To his credit, he shows a glimmer of self-awareness on this point:

It’s partially my fault. I let myself admire him. Let myself admire what he’d done with his life, admire the way he’d not only beaten his own cancer but was trying to help others beat it. When my sister was diagnosed, she read his book and got inspired. And I felt some pride in that. I let it get personal. And now I know he was living a lie and I was helping him live it.

This is why I feel that the style of reporting usually called “access journalism” is neither: neither access, nor journalism. Reporters can get close to a subject, can get “access,” but they can never get close enough for a malicious subject to let them see the real story; Lance Armstrong would let Rick Reilly into his team bus, but he’d never let Reilly follow him into the room where Michele Ferrari was waiting to dose him. And the large investment of time and credibility required to get close to a subject can lead the journalist to identify with the subject, to “let it get personal”; it’s the reporter’s version of Stockholm syndrome. The result is reporting that is less about the truth and more about the image that the subject wants to portray to the world.

None of which would be particularly interesting if it were something that only happened to sportswriters. But it happens all the time, in all forms of journalism. Political reporters cozy up to candidates; business reporters to CEOs; product reviewers to the people who make the products they review. The result is news outlets that can’t call a lie a lie, and it serves nobody except established interests and corrupt institutions.

Rick Reilly’s problem is journalism’s problem. And journalism as a whole, not just this one man, needs to start grappling with it.