

The only thing I have to say about the iPad 3 launch

Scoble Agonistes

Photo by Jun Seita.

If you must buy one immediately, just don’t let anyone take a picture of you acting like this (see photo). Because that would be embarrassing. Or should be, anyway.

It’s a commercial transaction. You gave the people who made it some money, and in exchange they gave it to you. You know, just like thousands of companies do with millions of people every single Goddamned day.

Nobody looks for applause when they fill their gas tank or buy toilet paper. If they came striding out of the grocery store like Caesar back from the wars, their cottony prize held above their head in triumph, they would not be greeted with camera flashes and a round of applause. Instead, everybody in the parking lot would start laughing at them, and never stop.

All you’ve done is purchase something. You haven’t won anything. You haven’t accomplished anything. The only thing this tells the world about you is that you have a functioning credit card.

If anyone tries to applaud you for buying one, you have my permission to punch that person in the face.


The question about bombing Iran that nobody is asking

Two Israeli F-15I strike fighters

There’s been a lot of loose talk lately about either Israel or the U.S. launching an air attack on Iran in the near future to degrade or cripple that country’s nuclear program. Nitwitted Republican Presidential candidates casting about for a way to look macho, and idiot pundits who never saw a war they didn’t like (as long as it doesn’t cost them anything personally), have latched on to this idea with a vengeance. And while it would be bad enough if such talk were confined to those quarters, it’s been quietly seeping into the mainstream as well.

There’s been some pushback on the idea, thankfully. President Obama has spoken out against it, for example, which is heartening. But most of the counterargument has been based on the premise either that taking out Iran’s nuclear facilities with air strikes would be counterproductive, hardening Iranian resolve, or that it’s simply not necessary because Iran’s nuclear program isn’t as far along as the hawks fear it is.

Those things may or may not be true; I have no idea. But there’s one question about an attack that nobody on either side appears to be asking, and that’s disturbing, because it’s probably the most important question that could be asked: whether or not we even have the capability to take out Iran’s nuclear facilities from the air.

The History

First, some history. The pattern for an attack like this was set in 1981, when the Israelis launched an air strike to take out a nuclear reactor the Iraqis were in the process of setting up, known as “Osirak.” Since Israel and Iraq were not at war and it wasn’t clear that the purpose of the Osirak reactor was actually military, the raid provoked a firestorm of controversy around the world, but it achieved its strategic purpose — the reactor was destroyed, and Iraq’s nuclear ambitions were dealt a setback they never really came back from. Advocates for a strike on Iran point to the Osirak raid as an example of how Iran could be curbed.

But here’s the thing: the situation in Iran in 2012 is very different than it was in Iraq in 1981. The Iranians may be fanatics, but they’re not stupid; they paid close attention to the lessons of Osirak when they set up their own nuclear program. So unlike Osirak, which was built out in the open air, Iran’s nuclear facilities are buried deep underground — so deep underground that it’s not clear that they can be reached by even the most powerful conventional bombs.

The Targets

To see what I mean, look at two of the key Iranian nuclear facilities. (Detailed information on them is understandably hard to come by, but we can get at least a general understanding of their construction from unclassified sources.)

The first is the uranium enrichment facility at Natanz. This facility houses thousands of the gas centrifuges that are required to turn raw uranium into nuclear fuel for either civil or military purposes. Unlike Osirak, the Natanz facility is a “hardened target” — a target built specifically to be difficult to destroy with bombs. It was built somewhere around thirty feet underground, with walls of reinforced concrete designed to buffer shock from explosions.

The second is a newer facility in northern Iran, near the village of Fordo. Even less information is available in the public domain about this facility than about Natanz, since the Fordo facility was only revealed publicly by Iran in 2009. However, even with this limited information it’s clear that Fordo is designed to be an even tougher nut to crack than Natanz. The Fordo facility was built directly into a mountainside, possibly hundreds of feet deep, with even thicker layers of protective concrete. Unlike Natanz, which has enough centrifuge capacity to create fuel for either civil or military purposes, Fordo has much more limited capacity; this limits its utility in creating fuel for civilian energy projects, but not necessarily for producing uranium for weapons.

(Despite Fordo’s hardened construction, its design is believed to still have points of weakness to air attack; Iran has started working to eliminate those, and to prepare Fordo for the installation of new centrifuges. This is what has reignited discussion of a pre-emptive strike — the argument is that the facilities have a “window of vulnerability” that is rapidly closing, so if there’s going to be an air strike, it’s now or never.)

The Weapons

When the Israelis destroyed the Osirak reactor, they used conventional air-dropped 2,000-pound bombs. Weapons such as these would be completely ineffective against hardened targets like Natanz and Fordo, however; their explosive payload is simply insufficient to dig through so much dirt and concrete. What would be needed is a much more powerful bomb designed specifically to channel its blast down deep, rather than exploding in all directions like a normal bomb does. This type of weapon is known as a “bunker buster.”

Israel and the U.S. both possess bunker-buster weapons, though in different amounts and sizes. Both nations have stocks of air-dropped, laser-guided bombs built around penetrating warheads such as the 2,000-pound BLU-109 or the 5,000-pound BLU-113 (the warhead of the GBU-28). The largest Western conventional bunker-buster bomb, the huge 30,000-pound GBU-57 (a.k.a. “Massive Ordnance Penetrator,” or “MOP”), is only in the U.S. arsenal, not Israel’s; this is in part because it’s a very new weapon, with deliveries to the U.S. Air Force starting just last year, and in part because Israel has no aircraft big enough to carry it even if we made it available to them — while the GBU-28 can be carried by fighters like the F-15, the GBU-57 can only be carried by the B-2 stealth bomber.

The Problem

The problem with using these weapons to take out Natanz and Fordo is three-fold.

The first problem is that while the BLU-109 and BLU-113 warheads that Israel has access to are powerful weapons, it’s not certain that they are actually powerful enough to blast through the thick layers of earth and concrete surrounding Natanz and (especially) Fordo. A strike with these weapons might destroy the facilities; but then again, it might not. (The BLU-109, for instance, is rated to penetrate around six feet of concrete — but even at the less heavily hardened Natanz facility, you’re looking at thirty-plus feet of earth, with an unknown thickness of concrete below that.) It might just put some cracks in concrete and throw a bunch of dirt around. And for Israel, that would be a worst-case scenario; if you’re going to hit a nation you’re not already at war with, that hit needs to be so overwhelming and successful that the enemy decides retaliation is pointless. Otherwise you get the Pearl Harbor problem — an aroused and angry opponent whom you haven’t completely knocked out of the ring.

(A 2006 study of the problem by researchers at MIT’s Security Studies Program argued that Israel could have high confidence of taking out Natanz with BLU-113s, but only in a large-scale strike involving 80+ warheads — a much larger and more complex operation than the Osirak raid, which involved only sixteen bombs.)

The second problem is that the only conventional bunker-buster that would offer a high probability of destroying the targets is the gigantic GBU-57, and the Israelis don’t have that weapon — only the U.S. does. So if the Israelis want to strike Iran, their options are either to go it alone and risk the possibility that their own weapons are insufficient for the task, or to go in with the U.S. and risk whatever conditions and terms we would put on the conduct of the operation.

The third problem is that even the Massive Ordnance Penetrator might not be quite massive enough to destroy a facility like Fordo. When deciding whether or not to run the risks of war, political leaders understandably strive for certainty; they want to know that the risk of the operation failing is as close to zero as it can possibly be. And the Iranian facilities are so dug-in that even the MOP doesn’t offer that certainty. The MOP is a gigantic weapon, and it stands a better chance of penetrating the walls of Natanz and Fordo than the smaller bunker-busters do; but it’s never been used in battle, so it’s always possible that there’s some problem with it that nobody foresaw, and there are fewer than 20 of them in the Air Force’s kit bag currently, so if it turns out that it takes multiple MOPs to clean up the targets, the bag can empty out pretty quickly. (Don’t forget that Iran may have other secret sites that we don’t know about; it would accomplish little to destroy Natanz and Fordo if Iran reveals a backup site the next day and we don’t have the ordnance to take out that one too.)

There’s a more ominous way to think about that third problem, too. The MOP is our biggest conventional bunker-buster, and even it might not be big enough — but that doesn’t mean we don’t have weapons that we could be certain are big enough to destroy these facilities. The problem is that those weapons are nuclear weapons. An attack with “nuclear bunker-busters” could deliver many times the explosive power of even the largest conventional bombs. The catch is, well, you’ve just used nuclear weapons in anger for the first time since the end of World War II, and that opens up a whole new can of worms. (Not to mention that Fordo is near the Shia holy city of Qom; you can imagine the outcry if that city were to become collateral damage in an American nuclear attack.)

The Conclusion

I should preface this by saying that all the above information could be completely invalid. It’s possible that either we or Israel have classified weapons in our arsenal that would make an attack much more likely to succeed, and that these are just so deeply secret that a layman like myself doesn’t know about them. It’s also possible that there’s a particularly brilliant tactical approach to this problem that works around the problems and hits the Iranians in some undisclosed blind spot that a dumb civilian like myself would never see. And the men and women of the IAF and U.S. Air Force are among the world’s most elite practitioners of aerial warfare, so if anyone can pull off the unlikely, they can.

However, putting all that aside, the weight of the problems and uncertainties outlined above makes me extremely skeptical that an air strike (or a series of air strikes) is the way to try to resolve this problem. Put bluntly, it seems like the probability of it causing new problems is higher than the probability of it solving the old ones.

My guess is this is why, while politicians are waxing rhapsodic about how we can bomb our way to a non-nuclear Iran, professional soldiers are being a bit more circumspect. Take General Norton Schwartz, the Chief of Staff of the Air Force, who sounds pretty skeptical:

“Everything we have to do has to have an objective,” Schwartz told reporters at a breakfast meeting Wednesday. “What is the objective? Is it to eliminate [Iran’s nuclear program]? Is it to delay? Is it to complicate? What is the national security objective?”

“There’s a tendency for all of us to go tactical too quickly, and worry about weaponeering and things of that nature,” Schwartz continued. “Iran bears watching” is about as far as the top Air Force officer was willing to go.

As is the chairman of the Joint Chiefs:

“It’s not prudent at this point to decide to attack Iran,” said Dempsey, the chairman of the Joint Chiefs of Staff…

“A strike at this time would be destabilizing and wouldn’t achieve their long-term objectives,” Dempsey said about the Israelis, according to Bloomberg. “I wouldn’t suggest, sitting here today, that we’ve persuaded them that our view is the correct view and that they are acting in an ill-advised fashion.”

This isn’t to say that there aren’t things we can do to hinder Iran’s nuclear ambitions. There’s increasing evidence that the Stuxnet computer worm, which infected computers around the world in 2010, was actually a clever cyberattack on the computers that run the centrifuges at Natanz, for example; if that’s the case, it would be a brilliant example of thinking outside the box, avoiding the need to dig through dirt and concrete by creating a weapon that an unwitting Iranian engineer would carry right through the front door on a USB stick. And there are low-tech countermeasures that can be taken as well, as somebody has been quietly demonstrating by arranging a series of “accidents” to happen to Iranian nuclear scientists.

In other words, espionage and spycraft may be the prudent decisionmaker’s weapons of choice here, rather than big bombs and fast jets. Maybe they don’t provide a big enough testosterone rush for the likes of Newt Gingrich, but they make pills that can help with that. (Ask Bob Dole, he can hook you up.) A rational policymaker, however, needs to make decisions based on what military power can realistically achieve, not on juvenile fantasies of unlimited power.

One would think the last ten years would have taught America’s leaders that lesson pretty clearly. Or hope, anyway.


Jason Recommends: Crusader Kings II

I’d be remiss if I didn’t take a moment to tell you about my current addiction: a new strategy game from Paradox Interactive, Crusader Kings II.

Longtime readers will already be aware of my general love for Paradox titles; see here and here for examples. But I’m not an uncritical fanboy — see here for some pointed words about the latest iteration of Paradox’s flagship series, Europa Universalis. So hopefully you won’t take it as a foregone conclusion when I tell you that Crusader Kings II may be Paradox’s finest product to date.

Paradox specializes in so-called “grand strategy” games — games where you command entire nations and empires, rather than individual units on a battlefield. CK2 continues this tradition, but like the original Crusader Kings, it brings an interesting twist: rather than controlling a nation, you control a family. You start the game by choosing a historical family from medieval Europe, and your task is to guide your family to as great a height of power as you can, pulling more and more territories under your feudal authority, without getting wiped out of the pages of history by another clan striving for the same prizes you seek.

CK2 is a game of personalities. You start off controlling whoever happens to be the head of the family you chose to play in the year you chose to start. That person may be a great leader, or a cruel tyrant, or a cringing coward. Every character in CK2 has a range of attributes — things like “brave,” or “shy,” or “clubfooted” — and these attributes define both their chances of success at the goals they set out to achieve, and how other characters respond to them as they go about trying to achieve them.

But the importance of attributes goes beyond the ones attached to the character you play at the start, because as the years in the game tick by, at some point, that character is going to die. It may be a glorious death on the battlefield, or a quiet death at home in bed. But death always comes — and when it does, your realm passes to whoever in your family is next in the line of succession. (Usually that’s your oldest son, but it’s possible for women to inherit titles too in some circumstances.) If that person is strong and capable, the transfer may happen quietly; if they are stupid, or widely disliked, or a heretic, say, other characters — even members of their own family! — may decide to challenge them, and that can lead to full-out Game of Thrones-style civil wars. So making sure that your heirs are as good as they can possibly be becomes a big part of the strategy of the game; whom you marry yourself, whom you marry your children off to, and how you choose to raise them can be every bit as important as which armies you push where on the map.

And since the game is shaped so strongly by dynamics between characters, and those characters come out different each time you play based on who pairs off with whom when, each game is new and different. It’s a fascinating and fresh approach to strategy gaming.

The original Crusader Kings played in much the same way, but was hobbled by an unintuitive interface and some questionable design choices. CK2, happily, fixes all those problems in grand style; the information you need is always close at hand, and all the different components of the game design hold together beautifully. (With one possible exception — the tech research tree seems pretty useless. CK2 isn’t a game like Civilization where you’ll win by discovering a great new technology before anyone else. But given that the setting is the technologically-static Middle Ages, that doesn’t hurt the game particularly.)

Oh, and since I’m on record griping about how terrible the maps in Paradox games have become since they moved to a 3D engine, it’s worth noting that the game map in CK2 is gorgeous. It’s the first Paradox game in years where I found myself scrolling around the map just to admire the detail. That’s a welcome return to form for a company whose original games were known for their attention to design details.

Anyway, don’t take my word for it. Read Rock Paper Shotgun’s review, or Kotaku’s, or Destructoid’s. This game is really something special.

Want to try it for yourself? It’s available for purchase now (from Gamersgate and Steam, among other places), and there’s a free demo you can download that lets you play for twenty in-game years to get a taste for how it works. If you like strategy games at all, or are even just tempted by the idea of playing a game that’s half wargame and half soap opera, take some time this weekend and give it a look.


Grade inflation

Meet Lytro.

According to the company that makes it, Lytro (which retails for $400-500) is a new kind of digital camera based on the emerging science of “light field photography.” What that means in English is that unlike conventional cameras, which record only the total amount of light arriving at each point on the sensor, Lytro records the color, intensity and direction of every light ray. This information is all stored within the image, which is then processed by a computer (either in the camera, or on a viewing device like a PC) to create an image in which you can change the focus after you take the picture.
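
To get an intuition for why capturing ray direction makes after-the-fact focusing possible, here’s a toy sketch of the classic “shift-and-sum” approach to refocusing a 4D light field (my own illustration of the general idea, in Python, not Lytro’s actual pipeline). Each sub-aperture view is shifted in proportion to its offset from the center of the lens, and the results are averaged, which brings a chosen depth plane into crisp alignment while blurring everything else:

    # Toy shift-and-sum refocusing of a 4D light field. Illustrative
    # only -- this is the textbook approximation, not Lytro's code.
    # L[u, v, y, x]: (u, v) indexes position on the lens aperture,
    # (y, x) indexes position on the sensor.
    import numpy as np

    def refocus(L, alpha):
        """Synthesize a photo focused at a depth controlled by alpha.

        alpha = 1.0 reproduces the plane of focus at capture time;
        other (nonzero) values move the focal plane nearer or farther.
        """
        U, V, H, W = L.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Each sub-aperture image gets shifted in proportion
                # to its distance from the center of the aperture.
                dy = int(round((1 - 1 / alpha) * (u - U / 2)))
                dx = int(round((1 - 1 / alpha) * (v - V / 2)))
                out += np.roll(L[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)

A conventional camera effectively bakes that sum into the sensor at the moment of capture, which is why its focus is frozen forever; Lytro keeps the individual rays around, so the sum can be redone later with any alpha you like.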

So that’s what Lytro is. But according to today’s review at The Verge, one thing that Lytro isn’t is ready for prime time:

[O]ne of my biggest wishes for the Lytro is that it were built a bit more like a traditional camera. I never really got used to holding the Lytro, and I’d happily accept a larger body if it meant a slightly more ergonomically friendly design… The Lytro’s also hard to hold steady, since you don’t really get a grip or a way to balance your hands against each other, and pressing the power button always makes you push the camera down a bit…

The Lytro’s display is by far the worst thing about this camera. First of all, it’s tiny: 1.46 inches diagonal. Second, it’s kind of terrible…

I had to charge the Lytro about every 400 shots, which isn’t great…

Here’s the thing about the Lytro: for a particular type of shot, it does a better job of capturing a photo than anything I’ve ever used… Unfortunately, that’s the only situation in which the Lytro really shines (which explains why the company’s example shots are all alike)…

In anything other than perfect lighting, photos were consistently grainy and noisy, not to mention just dark since you can’t really control the shutter speed or aperture — there’s not even a flash on the camera to poorly solve that problem…

Getting photos from your camera to your computer takes a long time. Each photo weighs in at about 16MB, and there’s also a fair amount of processing to be done on the computer before a shot is ready to be used; I imported 72 photos, and it took more than 20 minutes before the photos were available…

[T]he first iteration of the Lytro isn’t quite there yet: it’s hard to use, its display is terrible, and outside of a few particular situations its photos aren’t good enough to even be worth saving. It’s not even close to being able to replace an everyday camera, and at $399-$499, for most people it would have to.

Basically the only good things the reviewer, David Pierce, could find to say about the thing were:

  1. The dynamic focus gimmick is kind of cool;
  2. They promise everything else will get better eventually.

So, not a stellar review. But then I get to the end, where The Verge puts its numerical review scores (which are measured on a scale from 1 to 10, with 1 being “utter garbage” and 10 being “perfect”), and I see this:

Lytro review score

7.5? Out of ten? After that review?

For reference, here’s how The Verge’s own guide to their rating system describes scores in the 7 range: “Very good. A solid product with some flaws.”

I’m not sure how anyone could read the same review I did and come away with the impression that Lytro is “a solid product.” The review I read made it sound like Lytro is a novelty device, and is going to stay that way until its makers figure out how to turn the tech behind it into something more broadly useful.

Hey, maybe they will some day! Who knows? But until they do, The Verge should be embarrassed by this review. Either the review text is correct, or the score is correct, but they can’t both be correct.

I say this not to pick on The Verge (well, not entirely, anyway). I like The Verge a lot, and generally find their articles and reviews well-written and thoughtful. I bring it up to make a broader point — how rarely you ever see a review of a tech product come flat out and say “don’t buy this.”

The Lytro review — the text, anyway — comes close. You can tell that Pierce doesn’t think the device is ready for the mass market, or for anybody really beyond photo nerds who want to play with its one interesting trick. But if you translate that sentiment into a numerical score using The Verge’s scoring guidelines, you get a score somewhere between 3 (“Not a complete disaster, but not something we’d recommend.”) and 5 (“Just okay.”). Which sounds bad, on a scale of 1 to 10. It’s harsh! So they look harder at the device and, mirabile dictu, they find enough redeeming qualities to bump it up to a much more pleasant-sounding score of 7.5.

A 7.5 doesn’t say “don’t buy this.” It says “You may or may not want to buy this, we dunno.” It’s a pulled punch: a score low enough that they can say they didn’t recommend it without reservations, but not so low that the folks at Lytro will be tempted to come burn their offices down.

A 7.5, in other words, is a score that Lytro can live with. Which is great for Lytro. For potential Lytro customers, though, not so much.

You see this a lot in tech reviewing; too much, really. Even the worst stinkburgers get the “cautious optimism” treatment. Tech reviewers live in Lake Wobegon, where all the products are above average.


One more time you would have been ahead of the curve by listening to me: Oscars edition

Photo: Richard Harbaugh / ©A.M.P.A.S.

There’s been a lot of buzz over the last few weeks around French actor Jean Dujardin, driven by his performance in his latest film, The Artist, and culminating this past weekend when that performance earned Dujardin the Best Actor Oscar.

Of course, readers of this blog will find news of Dujardin’s talent to be no surprise, since I told you he was a man to watch a little more than a year ago.

It’s things like this that make Just Well Mixed readers the seemingly omniscient beings that they frequently appear to be.


Get off my lawn!

Old man

Above: the editor

This would probably have made more sense to note at the time, but since it slipped my mind: this blog turned 10 years old on January 17th.

I suppose it goes back even further than that, really — I started publishing personal essays on the Web in various places back in, I think, 1996, though most of those have been lost, alas. But this blog opened its doors at this location on January 17, 2002, and everything I’ve written since then is stored in the archives.

Happy 1-0, blog! Enjoy it, it’s pretty much all downhill from there.


Gadget fatigue

Gadgets

It’s starting to look like I need a new cellphone. My current one is two and a half years old now, and while it’s served me well, the hardware is starting to fail in various minor but annoying ways. And since we don’t fix things anymore, that means it’s time to go cellphone shopping.

Here’s the thing: I’m a nerd, and mobile technology is the hottest sector of the technology marketplace right now. There’s been an explosion of options, and sorting through different technologies to find the Right One is the sort of thing I’m supposed to love doing.

So why do I find the process so dispiriting?

I’ve tried four times now to buy a new phone, and each time I’ve walked away without closing the sale, feeling vaguely depressed about the whole process to boot.

I think it has to do with values. I know the kind of device I want to buy; the problem is that nobody makes it.

I want to buy a phone that’s open — that respects my right as the owner of the device to use it in whatever way I see fit. But every modern smartphone is locked down in various ways, either by the manufacturer, the carrier, the operating system vendor, or some unholy combination of the above. They all want to channel you into App Stores, where things you used to get for free on the Web now cost a dollar a pop — and never show up at all unless the store’s owner approves them first.

I want to buy a phone that’s responsible about power consumption. Modern smartphones suck power at an absolutely unholy rate. I have yet to meet anyone who can boast of getting more than a single work day out of a fully charged smartphone. In this respect phones are actually getting worse, as time goes on and features like huge displays and quad-core (!) processors become standard.

I want to buy a phone that’s ethically built. I want the people who actually build the phones to be able to keep a non-trivial share of the enormous profits they generate, and I don’t want them to have to risk their health or work eighteen-hour shifts — especially when doing so only serves to bump up management’s already huge profit margin.

I want to buy a phone that’s respectful of my privacy. I always giggle a bit when I hear people talking about Cellphone Revolutions, because the only reason those work is that the governments of the world haven’t realized what an enormous opportunity cellphones provide them. (Though a few have figured it out.) Here you have a device placed in every citizen’s pocket that knows (and can therefore record) everywhere you go and everyone you communicate with — and the citizens carry them voluntarily! They even consider them a status symbol! The potential for abuse is huge, and it’s only growing as the devices gain the ability to sense more about the world around them and to connect to more types of networks. I want a phone that puts my interests before those of a government, or a carrier, or an app developer.

So that’s what I want. But the state of things in 2012 is that it’s an impossible list. If you want a phone — especially a smartphone — you just have to accept that it’s going to be a locked-down, power-sucking blood diamond that routinely rats you out to a breathtaking range of third parties.

I suppose that’s the way it is, at least for now. But it’s hard to get excited about, that’s for sure.


SOPA: the tech industry’s self-inflicted wound

Don't shoot yourself in the foot

Today’s Web blackout against the Stop Online Piracy Act (SOPA), which saw Wikipedia, Reddit, and a host of other sites go dark for a day to protest that legislation, looks to have been a huge success; it sounds like it got a lot of people to contact their Members of Congress in opposition to SOPA, and it definitely raised the profile of the issue among the public at large.

But one question keeps coming up that nobody really seems to have a good answer for: how did the tech industry find itself in this situation in the first place?

The most common answer I’ve seen given to this question is a simple one: the content industry paid Congress off. An example of this explanation would be this piece at The Verge today, in which self-identified “former lobbyist” T.C. Sottek argues that on the Hill money talks and good intentions walk:

Lawmakers may have their own parochial interests or lofty causes, but first and foremost they’re always looking for votes. To get votes, they need attention and money — something that corporate lobbyists can dish out in abundance. The end product of this system is lawmaking that’s less about making good public policy and more about appeasing the hands that feed — as a result, powerful corporations with deep pockets gain unparalleled access to members of Congress, and they help set the agenda. That agenda is why bills like SOPA and PIPA gain such traction — they were delivered to Congress in return for money and votes…

Even if SOPA and PIPA die on the vine, Congress will be back with fresh legislation and cute new propaganda-laden titles, courtesy of the MPAA and RIAA’s ruthlessly effective combination of money and patience — a combination the tech community has shown little interest in matching…

As long as the entertainment industry spends more money in Washington than the tech industry, bad laws like SOPA and PIPA will appear with frightening regularity.

This makes it sound like the reason for SOPA is that the content industry spends a lot on lobbying and the tech industry does not. But if you dig into the actual data, that storyline looks less and less plausible — what looks more plausible is that tech wasn’t outspent, but instead spent its money in dumb ways.

I spent a little time with the invaluable OpenSecrets.org database this evening looking into the matter. I actually went in thinking that Sottek was right, and that the data would show a huge disparity in political spending between the two industries. But what I found surprised me: in recent history, tech as a sector has generally spent about as much as Big Content has on affecting policy — and, in some ways, has actually spent much more.

I looked at two different types of political spending. The first was contributions to candidates, which is what most people think of when they think of money in politics. The second was lobbying expenses — money spent “educating” Congress between elections to think the way you want them to. OpenSecrets helpfully lets you break both types of spending down by industry; I chose the “Computers/Internet” industry category (which includes spending by Google, Microsoft, Apple and Facebook) to stand in for the “tech sector,” and “TV / Movies / Music” (which includes Sony, Disney, News Corporation, Time Warner, and Comcast) to stand in for the “content sector.”

Let’s look at total contributions to candidates first, going back to the 2000 election cycle:

Campaign Contributions By Sector, 2000-2010

Election Cycle    Tech Sector Contributions    Content Sector Contributions    Tech / Content
2010              $15,654,428.00               $18,679,096.00                  84%
2008              $30,596,112.00               $31,943,700.00                  96%
2006              $13,564,277.00               $15,934,422.00                  85%
2004              $17,614,992.00               $20,004,345.00                  88%
2002              $7,894,390.00                $9,674,504.00                   81%
2000              $11,804,268.00               $12,172,753.00                  97%

So while we see here that it’s true that content has generally outspent tech, it’s not really true that they’ve done so by huge, majority-making margins; in the average election cycle from 2000 to 2010, tech gave candidates 89% of what content did. An 11-point difference is nothing to sneeze at, but it paints a different picture than that of the poor tech industry getting steamrolled by Big Content.

When you look at lobbying expenditures, that picture becomes even less convincing:

Lobbying Expenditures By Sector, 2000-2010

Year    Tech Sector Lobbying    Content Sector Lobbying    Tech / Content
2010    $122,398,650.00         $111,127,528.00            110%
2009    $119,401,783.00         $107,413,377.00            111%
2008    $123,739,942.00         $102,585,026.00            121%
2007    $120,659,684.00         $82,283,044.00             147%
2006    $116,750,235.00         $76,344,867.00             153%
2005    $95,995,157.00          $58,505,671.00             164%
2004    $88,339,850.00          $48,784,820.00             181%
2003    $78,348,509.00          $45,284,236.00             173%
2002    $69,235,588.00          $41,677,230.00             166%
2001    $67,721,048.00          $40,059,845.00             169%
2000    $56,066,897.00          $34,163,127.00             164%

Stop and look at those figures for a moment. They show tech, as a sector, spending more on lobbying than Big Content does. A lot more! In the average year between 2000 and 2010, tech’s lobbying spending was 151% of content’s.
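
(Those averages aren’t anything fancy, by the way; they’re just the mean of the ratio column in each table. A quick sketch in Python, using the figures above, if you want to check my math:)

    # Average tech-to-content spending ratios, computed from the
    # ratio columns of the two tables above (data: OpenSecrets.org).
    contributions = [84, 96, 85, 88, 81, 97]        # 2000-2010 cycles
    lobbying = [110, 111, 121, 147, 153, 164, 181,
                173, 166, 169, 164]                 # 2000-2010, yearly

    print(round(sum(contributions) / len(contributions), 1))  # 88.5 -> ~89%
    print(round(sum(lobbying) / len(lobbying), 1))            # 150.8 -> ~151%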

And yet, despite all that spending, Big Content nearly managed to push SOPA through Congress without so much as a spirited debate. How did they pull that off?

The answer, I think, lies not in how much was spent, but in who spent it. Look at OpenSecrets’ list of tech sector organizations reporting lobbying expenses for 2010, and compare it to their list of content sector organizations from the same year. Can you spot the difference?

The tech list is almost entirely made up of companies: Microsoft, HP, Google, Oracle. The content list has plenty of companies too, but it also has major spending from industry associations: the NCTA (the cable industry trade group), NAB (the broadcast industry) and the RIAA (recorded music). And if you were to venture down below the top ten you’d see big spending from several other content industry associations, like the Motion Picture Association of America ($1.3 million), the National Academy of Recording Arts and Sciences ($341,000), and ASCAP ($240,000). Tech, by contrast, has few associations on the list, and the ones that are there spend less.

More tellingly, though, none of the major tech associations has as its primary mission the protection of the open Internet. Their members all benefit from an open Internet, of course, but the existence of that open Internet is not a direct concern of, say, the Entertainment Software Association (representing video game publishers, $4.6 million) or ITI (enterprise computing, $2.6 million). Even worse, one of the few major associations on the tech list, the Business Software Alliance ($2.1 million), actually supported SOPA, until outcry from the rest of the tech sector led them to back off.

Here, then, we find the real reason why tech was nearly sandbagged by SOPA. It wasn’t because tech is getting outspent by the deep pockets of Big Content; it’s because tech spends its lobbying money in dumb ways. Each tech company comes to DC and lobbies for itself and its own interests, which may or may not align with those of other tech companies, depending on the issue. But who stands up for the core issues that all tech companies care about? Nobody, because tech hasn’t bothered to establish institutions to do that.

Big Content is smarter. Its members spend money lobbying for their own interests, but they have also taken care to build organizations to look out for the interests they all have in common, and have funded those organizations sufficiently for them to advocate vigorously on their behalf. This enables them to pool their resources on those common issues and put forward a unified, cohesive message — something that tech only managed to do on SOPA when its back was up against the wall.

Now, don’t get me wrong. I’m no fan of money in politics; I think it distorts and corrupts the system in countless ways. But I am a pragmatist, and given that we’re not going to get the money out of the system overnight, it behooves those of us who care about a free, open Internet to ask what we should be doing to keep it from being threatened like this again. And as long as money rules politics, that will mean mobilizing the money on our side more effectively than the people on the other side of the issue do.

Will SOPA prompt Google, Facebook, Microsoft, Apple and the rest of the companies whose fortunes rest on the open Internet to learn this lesson and start building the institutions needed to make it happen, or start funding institutions like the EFF that are trying to?

I guess we’re about to find out.


I want a newspaper that can call a lie a lie

The New York Times’ ombudsman, Arthur S. Brisbane, asks today whether the paper should act as a “truth vigilante”:

I’m looking for reader input on whether and when New York Times news reporters should challenge “facts” that are asserted by newsmakers they write about…

[An] example: on the campaign trail, Mitt Romney often says President Obama has made speeches “apologizing for America,” a phrase to which Paul Krugman objected in a December 23 column arguing that politics has advanced to the “post-truth” stage.

As an Op-Ed columnist, Mr. Krugman clearly has the freedom to call out what he thinks is a lie. My question for readers is: should news reporters do the same?

[Some readers] worry less about reporters imposing their judgment on what is false and what is true. Is that the prevailing view? And if so, how can The Times do this in a way that is objective and fair? Is it possible to be objective and fair when the reporter is choosing to correct one fact over another?

Setting aside the (rather shockingly loaded) term “truth vigilante,” I would think the answer to these questions would be obvious, simply by looking at them from a business perspective.

The New York Times has a product they want me to buy. I only buy things that I perceive as having value commensurate with their price. So it’s in their interest to fill their product with information I can’t get immediately and for free elsewhere.

In the Internet age, what someone is saying about the events of the day is the very definition of information I can get immediately and for free elsewhere. I don’t need a third party to relay that information to me for a price; if I want to find out what Mitt Romney is saying about President Obama, I can get that information immediately and for free from MittRomney.com, or Romney’s Facebook page, or @MittRomney on Twitter. A “newspaper” that simply reprints this information the next day has no value to me.

What I can’t get easily and for free, on the other hand, is information that helps me evaluate the things Mitt Romney is saying about President Obama. Finding that information takes work; someone has to go sifting through mountains of information and find the few nuggets that actually bring perspective to the subject at hand. I don’t want to have to do that work for myself in every news story I read.

Brisbane sets up a choice between “reporting” and “opinion,” which is a standard way journalists divide up the world, and then asks us which one we prefer. But I believe this is a false dichotomy, because it leaves out a critical third element: context. Context is not opinion. Context is factual, reported information that brings additional perspective to a story beyond the basic details of who said what. Context is how you decide whether to believe who when she says what.

In a world where everybody is drowning in unfiltered information, context is pretty damn valuable.

So yes, Mr. Brisbane, if you know that someone you’re reporting on is lying to me, telling me that would be useful. It would be context. And it would make your paper something worth subscribing to.


Presenting the Just Well Mixed Best of 2011

Best of NERD

It’s the end of another year, and that means it’s time for a look back at the year’s best posts here at Just Well Mixed.

2011 actually turned out to be the best year this blog has had for a while, content-wise. (If I do say so myself.) And that resulted in a lot of new readers coming in from new places like Reddit and Hacker News. If you’re one of those folks, thanks for coming by! (And hopefully I can come up with enough interesting things to say in 2012 to convince you to stick around.)

Without further ado, here are my Best of 2011 picks:

  • The Bankruptcy of Optics (January 26): On political leaders who obsess over appearances rather than substance.  “This is the first challenge we as a nation will have to overcome if we hope to hold on to our greatness: to choose leaders who understand that the only true way to change how something looks is to change how it actually is.  And who have the courage to do the heavy lifting required to change things in the real world, rather than just change how those things look.”
  • When the Revolution Comes (February 4): On the uprising in Egypt, where I spent several years growing up. “It’s a bit disorienting to see a revolution you were being prepared to survive when you were 10 years old come roaring to life when you’re 35.  Even if you’re 5,000 miles away when it happens.”
  • Going All In, or Emerson in Tahrir Square (February 8): On the high stakes revolutionaries play for. “If you wish to strike at a dictatorship, you must understand that the only blow you will have the luxury of striking unopposed will be your first.  Should that miss its mark, you will urgently need to have an answer to a simple question: what do we do now?”
  • The Coward’s Last Stand (February 22): On the spread of the Arab Spring into Libya. “Count me surprised that when the revolution finally came it saw Mubarak slinking away quietly to a retirement villa, and Gaddafi turning to bombs and bullets in a last desperate attempt to hold on to power.”
  • Rebecca Black’s “Friday,” Or Dear Internet, You Should Be Ashamed Of Yourself (March 16): On the disturbing viciousness of modern online culture. “Many of the people who are piling on this poor kid aren’t anonymous, random commenters; they’re paid employees of major news publications. Look at those links back at the top of the post. TIME magazine, for Pete’s sake!  Rolling Stone!  Rolling Stone used to be where Hunter S. Thompson would unload on Richard Nixon; now they reserve their scorn for more deserving targets, like thirteen-year-old girls. When did we reach the point where it became acceptable for professional culture commentators to beat up on children?”
  • Amazon’s Cloud Player Is Cool. But Is It Legal? (March 29): On how Amazon can do something that got a smaller company sued into oblivion eleven years before. “Is Amazon just hoping that the world has changed enough in eleven years that an idea that crossed the line in 2000 won’t cross the line in 2011?”
  • Jim Moran’s a Moran When It Comes to Smithsonian Ethnic Museums (April 21): On how my Congressman is an idiot. “At this point every ethnic group in America should know that at some point Jim Moran is going to say something stupid about them. It’s part of the American Experience. So trying to read Jim Moran’s mind isn’t a particularly fruitful line of thought.  What might be fruitful, however, is to look at actual data to see if his fears are grounded in reality, no matter where they come from.”
  • Ubuntu 11.04: Everything Old Is New Again (April 29): On how Ubuntu is reinventing the wheel. “My beef with [Unity’s interface concepts] isn’t that they’re bad ideas.  My beef with them is that they’re bad implementations.”
  • The HP TouchPad, Or HP Shows How To Ruin A Good Thing (July 11): On HP’s epic failure to make anything compelling out of their purchase of the ahead-of-its-time webOS mobile operating system. “webOS is beginning to feel like a classic geek tragedy; a brilliant product, doomed to obscurity by poor management, first at a cash-strapped underdog and then at a global behemoth.”
  • The TouchPad Fiasco, or HP Perfects The Art Of The Own Goal (August 22): On HP’s epic failure to even find another company to sell webOS to. “An ‘own goal’ is exactly what it sounds like.  It’s when a player kicks the ball into his or her own team’s goal, thereby scoring a point for the opposition. In other words, it’s about the dumbest Goddamn thing you can do on a soccer field. Which is a pretty good metaphor for how HP has been managing its mobile portfolio over the last week.”
  • Your Macabre Thought For The Day (August 23): On the Washington earthquake, and disaster and death in the social media age. “The next time a tragedy of that scale happens — and one will happen, if not by act of war then by act of God — we will be able to look into the maelstrom. As horrible as it is to contemplate, we will have front row seats. We will be able to watch individuals struggle to survive, each status update or tweet illuminating them briefly like a flash of a strobe light, capturing them for a fleeting moment before it fades.”
  • The Unbearable Lightness of Minecraft (September 1): On indie gaming’s biggest hit, and how unsatisfying it is to play. “I find myself popping back in every now and then.  But I rarely stay long, because what brings me back is the game I dream it could be, rather than the game it actually is.”
  • Everything You Need To Know To Understand Netflix, In One Picture (October 11): On the reasons behind the online video company’s meltdown. “If the Qwikster decision seemed irrational to you, I would argue that that is because it was irrational. It wasn’t the product of reasoned, long-term strategic thinking; it was the product of panic.”
  • Dear Mozilla: Fix Your Damn Browser Already (October 12): On longstanding bugs in Firefox, and the dire need to get them fixed. “Firefox, on Linux at least, is busted.  It’s busted so bad that it’s painful to use.  And it’s been this way ever since Firefox 3 launched — three years ago.”
  • Don’t Worry About Selling Your Privacy to Facebook. I Already Sold It For You (October 21): On the privacy implications of integrating with Facebook. “Facebook Like buttons are kind of like a bribe.  Facebook offered me something of value — a chance at increased traffic — in exchange for letting them keep tabs on which pages you read on this site, and how frequently, and for how long.  And by including the buttons on my pages, I took the bribe. I sold you out.”
  • Kindle: No Thanks (November 14): On Amazon’s popular e-readers, and why I refuse to buy one. “With Kindle, Amazon has set things up so that in order to get the good things electronic books can offer, you have to accept a whole bunch of bad things too. Things that don’t benefit you at all — and that in some cases actually take away rights that owners of physical books have enjoyed for hundreds of years — but that benefit Amazon a whole bunch.”
  • Congressing Is Hard! Let’s Go Fundraising (November 22): On the increasing, and self-inflicted, irrelevance of Congress to the actual process of governing. “If Congress would rather defer questions of war and peace to the President, and Congress would rather defer questions of spending and taxes to the President, then what the hell is the purpose of Congress, exactly?”
  • Occupy Linux: Ubuntu Unity and making a Linux for more than the 1% (December 8): On why the most popular desktop Linux distribution is making changes that have enraged Linux nerds. “If you’re one of those people who cherish the ‘traditional’ Linux desktop experience, you need to realize that Ubuntu’s goal is not to serve you. You are, quite literally, the one percent. Ubuntu’s goal is to make a desktop that works for the 99%. If they can do that while serving you at the same time, that’s great, but if they can’t you shouldn’t be surprised to find them on the other side of the drum circle.”

So there you have it, the best of 2011. Now on to 2012!


Good news: Mozilla fixed their damn browser

Firefox is fast

Since the official release of Firefox 9 is today, it seems like an appropriate time to revisit my post from a few weeks ago complaining about Firefox’s poor performance on Linux to see if anything’s changed.

Good news. It has!

After my post hit Hacker News, a couple of people at Mozilla reached out to me to see if they could help run down the problem. They said they knew about the problem (which appears to center around low-level weirdness in some popular Linux filesystems), they had been working on it, and I could test out their work by switching from using the then-current release version of Firefox to using Firefox Aurora.

“Aurora” is a new stage in the Firefox release process that was introduced earlier this year. It provides an update channel that streams in new features periodically as they land, rather than making you wait for the next full version of Firefox to get them. It’s distinct from the nightly builds that Mozilla has always made available for Firefox: rather than just being a snapshot of the browser code as it stands at the end of each day’s work, Aurora doesn’t include features until they’re at least somewhat baked. So while it’s more bleeding-edge (and therefore potentially less stable) than official Firefox betas, it’s less so (and therefore more stable) than nightlies.

Anyway, I’ll try anything once, so I made the switch to Aurora (an easy thing to do, thanks to the PPAs that Mozilla kindly provides for Ubuntu users) and started using it as my everyday browser. And I was pleased to note that the slowdowns and lack of responsiveness that had been so frustrating in mainline Firefox were entirely gone. They went from “happens all the time” to “never happens, ever.” Which was awesome. Firefox felt like… well, like Firefox again.

I didn’t want to declare victory too quickly — the old problems hadn’t cropped up immediately either, they had started off slowly and gotten worse over time, so I figured I should keep using Aurora and see if they ever came back. Well, I’ve been using Aurora as my primary browser every day for a couple of months now, and they haven’t; the browser is just as fast and responsive as it was the day I switched over. So it seems safe to say at this point that Mozilla has slain this particular dragon.

Does this mean that you should switch to Aurora? No, probably not. Aurora is just a place for Mozilla to test out new ideas; things in Aurora that work well will eventually find their way into official Firefox releases, and Mozilla’s new rapid release schedule for Firefox means that you don’t have to wait six months or a year for each new version, so you should be seeing these improvements in your copy of Firefox soon. (They may even be in Firefox 9, though I can’t say for certain because I haven’t had a chance to review the complete list of fixes in that release yet.)

While it’s good to see issues like this getting fixed, I would still say that there are places where Mozilla needs to improve its process. The biggest one is how they communicate with users. Even after multiple searches through Bugzilla and various Mozilla-oriented forums, the only reason I found out that my problem was actually being worked on was that my post hit Hacker News and went viral from there. It was great that I did eventually find out, but it would have been better for Mozilla if I had been able to learn on my own that they were aware of the problem and working on it. Then I would probably never have written the post to begin with, and Mozilla wouldn’t have suffered the PR hit of having a user’s complaints about Firefox performance lead the tech news for a day. And it should be easier than it currently is for me to report back to you whether or not these fixes are included in Firefox 9.

Even with that being said, though, I want to give credit where it’s due: Mozilla is working hard to make Firefox not just a faster browser, but the fastest browser. And from where I sit, anyway, the work is definitely paying off.


Occupy Linux: Ubuntu Unity and making a Linux for more than the 1%

Go Big Or Go Home

The most recent release of Ubuntu Linux, Ubuntu 11.10, included a big change — a shift from the standard GNOME desktop environment to a new one, called Unity. (If you’re not familiar with it, you can take it for a test drive here without needing to download or install anything.)

I had my reservations about Unity, but after using it for a while I can report that I’ve been pleasantly surprised; it’s easy to use and really does make some common tasks easier.

If you listen to some corners of the Linux community, though, you’d think that Unity was the worst thing since Nickelback. Here’s a representative sample, helpfully titled “Why Ubuntu 11.10 fills me with rage” so you know immediately that it’s Serious Business:

Look, I’ve been using Unity for the last six months, which is almost as long as I have been using Mac OS X, and I’m still completely disoriented.

I understand fully what Canonical is trying to do with the user interface, which is to make it palatable to Joe Average End User. I dig that, really. But there’s no way to really customize your desktop and make it optimized for the way you work.

With all due respect to Jason Perlow, the guy who wrote that piece for ZDNet: no, you don’t get what Canonical is trying to do.

What Canonical is trying to do is much bigger than what side of the screen the Ubuntu desktop dock sits on. Much bigger. Ubuntu founder Mark Shuttleworth spelled out just how big in Bug #1 in the Ubuntu bug tracker:

Ubuntu Bug #1

Bug #1: Microsoft has a majority market share
Reported by Mark Shuttleworth on 2004-08-20

Microsoft has a majority market share in the new desktop PC marketplace.
This is a bug, which Ubuntu is designed to fix…

Steps to repeat:
1. Visit a local PC store.

What happens:
2. Observe that a majority of PCs for sale have non-free software pre-installed.
3. Observe very few PCs with Ubuntu and free software pre-installed.

What should happen:
1. A majority of the PCs for sale should include only free software like Ubuntu.
2. Ubuntu should be marketed in a way such that its amazing features and benefits would be apparent and known by all.
3. The system shall become more and more user friendly as time passes.

That bug, opened when the Ubuntu project first began, makes it clear what Shuttleworth’s goal for Ubuntu was and is: nothing less than to become the standard OS for personal computers.

The problem that Unity is trying to solve is that we’re now seven years out from the opening of that bug, and Ubuntu doesn’t really look like it’s any closer to being able to close it than it was when the project started.

Don’t get me wrong — Ubuntu has come a long way since then.  In fact, in that time, it’s gone from an idea to the most polished, sophisticated incarnation of the Linux desktop available anywhere. But what Ubuntu has discovered is that being the best Linux desktop isn’t enough by itself to close Bug #1.

It’s notoriously difficult to get firm figures for desktop OS marketshare. But every estimate I’ve ever seen puts Ubuntu’s slice of the pie at around 1% of the global PC marketplace. 1% of “a lot” is still a lot, but it’s not a majority. It’s not even close to a majority. It’s more like a rounding error. And despite all its improvements, Ubuntu has been stubbornly stuck at that level of market share for years now. In some respects, it’s actually gone backwards; you used to be able to buy PCs with Ubuntu pre-installed from Dell’s online store, for instance, but today you cannot. In a world where the vast majority of users get their operating system pre-installed when they buy a new PC rather than installing it themselves, that’s a huge loss. And it’s directly caused by the market share problem.

If you’re Mark Shuttleworth, paying to develop Ubuntu out of your own (deep, but not infinitely so) pockets, and your goal is to make Ubuntu into the world’s default operating system, all this is a real cause for concern. Platforms need to get to double-digit market share for people to take them seriously as real contenders — for developers to start writing apps for them, for OEMs to start bundling them with PCs, and so forth — and Shuttleworth needs people to take Ubuntu seriously to get it to a point where it can reach his goal.

This is the point that most of the criticism I’ve seen of Unity has missed. Lots of people have been griping that they don’t like Unity because of the ways in which it departs from the usual Linux desktop experience. To them, this departure is a bug. But to Ubuntu, it is a feature, because there is no evidence that the usual Linux desktop experience is compelling enough to win significant market share. If Ubuntu has to choose between doing something the usual way or doing something the way they think will win users, they will do it the latter way, because there’s no reason to believe that doing it the former way will ever get them to a point where they can finally close Bug #1.

That’s a perspective that is guaranteed to piss off plenty of current Ubuntu users, who liked the usual way of doing things — or, at least, had grown used to it. But, and this is the big “but,” if you’re one of those people who cherish the “traditional” Linux desktop experience, you need to realize that Ubuntu’s goal is not to serve you. You are, quite literally, the one percent. Ubuntu’s goal is to make a desktop that works for the 99%. If they can do that while serving you at the same time, that’s great, but if they can’t you shouldn’t be surprised to find them on the other side of the drum circle.

None of this is to say that Unity is guaranteed to Occupy Linux and make Ubuntu the Linux for the 99%. It may well fail to reach that lofty goal. And it’s not to say that Linux for the 1% is going away anytime soon, either — there are plenty of distros, from Debian (which forms the core of Ubuntu anyway) to Fedora to Mint et al., that will happily step up to serve you.

But if the goals of the Ubuntu project are to be taken even halfway seriously, Canonical had to do something to elevate the standard Linux desktop experience to a level where people — developers, OEMs, but most of all ordinary users — would be attracted to it. And Unity is how they’re trying to do that. If your complaints about it are rooted in the ways it diverges from the norm, you’re missing the point.


The Humble Introversion Bundle: Just Stop What You’re Doing And Buy It Immediately

One of the more clever marketing moves in indie gaming over the last couple of years has been the rise of the Humble Bundle. Organized by indie developers themselves, Humble Bundles package together several indie games into a single purchase, increasing visibility for all involved. What’s made them such a phenomenon, though, is that each buyer sets his or her own purchase price for the bundle. Think the bundled games together are worth $20? Pay that. Think they’re only worth $5? Pay that instead. They even let you tune how much of your purchase price goes to the developer of each game, so you can reward one particular developer if you think their product is what makes the bundle worth buying. And on top of all that, they let you earmark some or all of your purchase price to go to two charities (Child’s Play and the Electronic Frontier Foundation) too, if you want. It’s a smart approach that has driven more than $7 million in sales so far, which is a lot of money for developers who are working out of their bedrooms on labors of love. (Not to mention raising more than $2 million for the participating charities.)

So I was excited to see that the latest Humble Bundle focuses on the work of one of the most creative indie games studios out there — Introversion Software. The Humble Introversion Bundle pulls together all four of Introversion’s titles into a single package that you can pay anything you want for. Even better, the games all run on Windows, OS X, and Linux; for Linux gamers especially, who don’t generally have many games to choose from, Introversion’s games have been a bright spot.

The Humble Introversion Bundle contains four Introversion games. Uplink is an adventure that puts you into the shoes of a hacker who bites off a bit more than he can chew. Darwinia is a gorgeous real-time strategy game where you help the digital denizens of a computerized world fight off an attack from a deadly virus. Multiwinia is multiplayer Darwinia, moving the action from you against the computer to you against another player online. And DEFCON is a strategy game based on the bizarre logic of nuclear war, where the “winner” is the side with the last man standing.

Besides their addictive gameplay, what sets Introversion’s games apart is their creative art design. Each game is lavished with a look that calls back to classic works of fiction while simultaneously seeming new and fresh. Uplink, for instance, is inspired by William Gibson’s classic cyberpunk novels, and DEFCON’s visual design is a pitch-perfect homage to the classic 1980s nuclear war film WarGames.

As an added bonus, if you pay more than the average buyer ($3.74 as of this writing), the Humble Bundle folks will throw in two extra games for free. Hard to argue with that!

So if you’ve never played an Introversion game before, here is your chance to remedy that sad fact, for whatever price you want. But this offer won’t last forever — the Humble Introversion Bundle deal will only be live for eight more days — so if you want to pick it up, now’s the time. Follow this link to get your bundle on.


Enough With Black Friday Already

It’s the day after Thanksgiving again, and that means it’s time for this year’s spate of “Black Friday” horror stories as the retail sector and the media whip shoppers into a frothing frenzy and then profess to be shocked! shocked! as those shoppers proceed to fold, spindle and mutilate one another in a mad stampede to save fifteen cents.

The worst story so far this year comes out of Los Angeles, where a woman pepper-sprayed a crowd of people at a Walmart to get them out of the way so she could grab a discounted Xbox. The police described her actions as “competitive shopping,” which is a strong candidate for Understatement of the Year. Twenty people, including children, were reported injured.

This would be bad enough if it were an isolated incident. But it isn’t. Every year brings a fresh crop of stories about shopping riots on Black Friday. It’s a measure of how routine they have become that you can hear a story like the one above and be cheered that at least nobody died, unlike in 2008, when maintenance worker Jdimytai Damour was trampled to death by a mob of Black Friday shoppers.

When that happened, I actually thought it would be enough to convince the nation that it was time to rein the Black Friday nonsense in. But that turned out to be naïve of me; the occasional needless death, it seems, is just the cost of doing business. So Black Friday has continued to get even more out of control.

The latest development is the spilling-over of Black Friday sales into the Thanksgiving holiday itself. This year, several retail chains opened their doors Thursday night — at Walmart, for instance, they opened at 10 P.M. That’s depressing on two fronts — it brings crass commercialism roaring into what is supposed to be a moment of humility and grace, and it forces the people who work at those stores to forgo time with their own families to stock shelves and staff checkout lines.

I don’t blame the shoppers for what Black Friday has become; as the New York Times notes, in an age of persistent underemployment and economic dislocation, the lure of deep discounts on popular items can be strong, even if the discounts are mostly illusory. The fault lies with the retailers, who launch massive advertising campaigns focused on “doorbusters” that encourage rowdy behavior, and with local media who breathlessly report on every detail of every sale, increasing the sense of Black Friday as an Event rather than just a dumb sales stunt.

Thanksgiving should be a time of reflection. So this year, before we put it behind us, let’s reflect on what it means that we, as a nation, are less concerned with each other’s safety, health and home lives than we are with doing whatever it takes to get an extra one percent off a flat screen TV. And maybe, while we’re at it, on what we can do next year to get our priorities back in order.


Congressing Is Hard! Let’s Go Fundraising

You’ve probably heard already about how the Congressional Superfriends Supercommittee failed to come up with a plan to reduce the budget deficit. In and of itself, that’s not particularly surprising. But what did surprise me is the spin I’ve heard placed on its failure from some on the right. For example, the Washington Post‘s Jennifer Rubin:

One thing that Republicans and Democrats seem able to agree upon is that President Obama’s shocking absence from any part in the supercommittee talks nearly assured its failure…

The supercommittee is not so much a failure of the legislative branch as it is of the president’s ability to lead the country…

Republicans and Democrats in Congress should be crystal-clear: The president’s been AWOL from the most important domestic challenge we face. Frankly, I suspect that a stronger Democratic president would have been able to broker a deal. Actually, a stronger and more courageous president would have embraced Simpson-Bowles. But not Obama. Maybe we should get a president who doesn’t run overseas or finger-point but who leads.

Not to put too fine a point on it, but this is bullshit on wheels. The failure of the Supercommittee is a failure of Congress by definition. That’s because the very fact that the Supercommittee existed was a failure of Congress by definition.

The Supercommittee was created because our betters decided that the budget deficit has reached crisis levels. I personally don’t agree with this assessment — I would argue that right now the deficit is less worrisome than our persistent 10% unemployment rate — but for the purpose of discussion, let’s grant the premise.

The way you solve a budget crisis is by changing the structure of the budget so that spending goes down and revenue goes up. But deciding how the budget should be structured is a job that the Constitution puts squarely on the shoulders of Congress. One could even argue that it’s the most important job Congress has, since so much of the nature of a government flows from which programs get funded and which don’t.

In other words, if the budget’s broken, the branch of government that is supposed to fix it is Congress. In this case, though, they wouldn’t, or couldn’t, do that job. Why? Because in this case, that job would be hard. Truly balancing the budget would require both major spending cutbacks and significant tax increases, neither of which would be politically popular. It’s fun to be a member of the Ways and Means Committee when you’re handing out tax cuts and pork-barrel spending; less so when you have to be the one to put the country on a diet.

So rather than take on that un-fun responsibility, they kicked it out to a new body — the Supercommittee. But they didn’t really believe that the members of the Supercommittee would do the job either, so they added an additional incentive: automatic spending cuts that would supposedly go into effect if the Supercommittee didn’t come up with a plan.

(Which, of course, didn’t end up motivating the Supercommittee to come up with a plan after all, because the threat posed by the automatic cuts is an entirely artificial, self-imposed threat; and the thing about making an artificial threat is that it can be un-made just as easily, and everyone knows it.)

As I said above, at the big-picture level, I believe that the failure of the Supercommittee is a good thing, because the problem it was created to solve is bogus to begin with. But regardless of whether you think it was necessary or not, the one thing that should be crystal clear is that its existence is a sign of the dysfunction of Congress as an institution. Congress is supposed to manage the nation’s finances; not just when times are good and the budget is flush, but all the time. That’s a big responsibility, and big responsibilities can sometimes be unpleasant. But that’s life, you know?

It’s not just a budget question, either. For Congress, such institutional cowardice has become routine. Congress is supposed to have the sole power to declare war, for instance, but for seventy years they have consistently ducked that responsibility. Presidents have led us to war in Korea, Vietnam, Grenada, Panama, Kuwait, Iraq and Afghanistan without ever bothering to ask Congress for a declaration of war, and in none of those cases has Congress protested particularly vigorously. Why? Because, like balancing the budget, declaring war is hard — it brings up all sorts of thorny, un-fun questions — and so Congress would rather not deal with it. It’s easier to do nothing, or to give the President a blank check to make the decision for them.

So it’s a bit rich to hear people like Ms. Rubin saying now that the Supercommittee was doomed to failure unless the President put his weight behind it. Managing the budget is Congress’ job. If it can’t bestir itself to actually do it — not even when it puts a gun to its own head to try and force itself to do it — that raises an obvious question:

If Congress would rather defer questions of war and peace to the President, and Congress would rather defer questions of spending and taxes to the President, then what the hell is the purpose of Congress, exactly?

That’s a question that says a lot more about Congress than it does about Barack Obama.


Kindle: No Thanks

Since Amazon has moved up the shipping dates of their new generation Kindle e-readers — the tablet-ish Kindle Fire ships today, with the new eInk Kindles following tomorrow — this seems like as good a time as any for me to explain why I refuse to buy one.

It’s not because I have anything against electronic books per se. I do not. I love and cherish physical books, but electronic books can bring important advantages over physical ones, like the ability to easily search the book’s full text, and to carry around a ton of books without wrenching your back out. There were tradeoffs in the shift from physical media like vinyl and CDs for music to electronic distribution too, but I think overall the shift was beneficial for listeners, and I don’t see any reason why electronic books can’t be a plus for readers too.

My problem isn’t with electronic books in general; it’s with Amazon’s Kindle specifically. Because with Kindle, Amazon has set things up so that in order to get the good things electronic books can offer, you have to accept a whole bunch of bad things too. Things that don’t benefit you at all — and that in some cases actually take away rights that owners of physical books have enjoyed for hundreds of years — but that benefit Amazon a whole bunch.

To wit:

  • You can’t buy Kindle books from anyone other than Amazon. This strikes me as a Huge Deal. For physical books, there is a competitive marketplace; you can buy them from Amazon, or Barnes & Noble, or Powell’s, or lots of other sources. This competitive pressure keeps costs down and ensures that no one store — not even a mega-store like Amazon or B&N — can shut a book out of the marketplace on its own whim. For Kindle books, no such competitive marketplace exists; other than public-domain titles, Kindle books are available from one and only one source, Amazon. If Kindle ever became the world’s default platform for reading electronic books, Amazon would have a monopoly on the intellectual property marketplace unlike anything the world has ever seen.
  • You don’t buy Kindle books; you rent them. Amazon tells you that you’re buying them, of course, but Amazon has the technical capability to reach out over the network and delete books from your Kindle at any time, with no warning and no refund. If that sounds far-fetched, consider that they have already done it at least once, remotely deleting copies of George Orwell’s 1984 and Animal Farm from customers’ devices.
  • You can’t lend your Kindle books to friends. Or rather, you can, but only if the publisher has decided to allow you that privilege, and then only if your friend is also a user of a Kindle device or the Kindle app for phones and other devices. If any of the above conditions do not apply — like, say, if your friend had the incredible gall to buy an e-reading device from some other company — forget it. Contrast this to the much more liberal lending restrictions on physical books, which are that (1) you must have a friend and (2) said friend must have at least one functional eye.
  • You can’t sell or give away your Kindle books when you’re done with them. Kindle books are tied to your Amazon account; they cannot be transferred to someone using a non-Kindle e-reader, or even to another Kindle user. This completely demolishes the used book marketplace, which is probably the idea; Amazon wants you to buy your books new from them, rather than used from your friends or Alibris or the bookshop down the street. It also makes it impossible to give your books to your local public library when you’re done with them; Amazon has a program through which public libraries can lend Kindle books, but unsurprisingly it involves the library buying the books new from Amazon rather than having them donated by patrons.

This is a lot to ask people to accept in order to get the benefits of electronic books. For me, it’s too much. So that’s why I have yet to pick up a Kindle. (Amazon’s major competitor in this space, Barnes & Noble’s Nook, is less objectionable; Nook books use the open EPUB format rather than a proprietary one like Kindle’s, so it’s possible for other vendors to sell books to Nook users beyond just B&N.)

I’m sure that some people will object that most of my complaints about Kindle are due to publishers’ paranoia about piracy, the way people said that Apple’s heavy-handed restrictions on music sold in the early days of the iTunes Music Store came from labels rather than from Apple. That may or may not be true; frankly, I don’t particularly care. I’m not looking to judge whether the soul of Jeff Bezos is good and true. I just want to buy electronic books without having to surrender the rights that I’ve always had when buying physical ones.

I’m confident that the day will come when that will be possible; for a long time people said you’d never be able to buy DRM-free music online, and now it’s available everywhere (including, ironically, from Amazon). The music business had to learn the hard way that content that comes freighted with a bunch of customer-unfriendly restrictions is less appealing than content that leaves all that baggage behind. Presumably the book business will get the message eventually too. But we’re not there yet, unfortunately.


Rick Perry: “Oops”

In case you thought I was kidding when I said that the various candidates for the GOP nomination are an historic collection of buffoons, Rick Perry took a moment in tonight’s debate to demonstrate how, if anything, I was understating the case:

http://www.youtube.com/watch?v=zUA2rDVrmNg

I’m sitting here trying to think of a debate moment as pathetic as this one, and I can’t; not from my adult lifetime, anyway. Even the infamous performance by Ross Perot’s 1992 running mate, Admiral James Stockdale, pales in comparison. Stockdale didn’t do well, but he wasn’t a professional politician; a lot of what makes a good debate performance is learning the theatrics of public speaking, and if you don’t have experience speaking in public it’s very easy to come across poorly, especially on television. Perry, who’s been in politics for twenty-seven years now, has no such excuse.

Pro Tip, Governor Perry: if you’re going to advocate eliminating the Department of Education, you might want to do so in a way that doesn’t make you look like the poster child for why we need it.

UPDATE (Nov. 10): James Fallows also tries to come up with a comparable moment and has to reach all the way back to Gerald Ford in 1976.

Charlie Pierce points out that Perry’s meltdown was just one drop of weirdness in a sea of it:

The other striking thing about the debate was the complete, balls-out, stigmatic, religiously euphoric, seeing-the-Virgin-go-past-on-a-go-kart veneration of The Market…

Let The Market work its magic and the budget will be in balance, unemployment will sink, personal income will rise, the housing crisis will abate, health care will be cheaper and more plentiful, and all the people will have houses and all the students will be able to afford college. I am not paraphrasing here. I am merely condensing two hours of magical thinking into a single sentence. The solution to every problem — every damn one of them — was to rely on The Market for a solution. It was like watching one of those Star Trek episodes where entire societies grow up serving a computer that the people took for a god…

This was how they could talk at length about jobs and not mention unemployment insurance or income inequality. This was how they could talk at length about health care in this country, and not say a word about the singular greed and profiteering of the insurance industry. This was how they could talk at length about the housing bubble, and endlessly belabor Fannie Mae and Freddie Mac with assertions that the CBO debunked months ago, and not mention at all the phrases “credit-default swap” or “collateralized debt obligation”…

This was how Mitt Romney — Mitt by god Romney, the job-butcher of Bain Capital — could get all wet-eyed about middle-class folks and nobody threw a banana at him. No wonder Rick Perry briefly left the corporeal plane. It’ll be a wonder if he ever comes back.


A Sad Day For Comedy

Italian Prime Minister and noted teenage girl enthusiast Silvio “Bunga Bunga” Berlusconi is stepping down:

Berlusconi confirmed a statement from President Giorgio Napolitano that he would step down as soon as parliament passed urgent budget reforms demanded by European leaders after Italy was sucked into the epicenter of the euro zone debt crisis.

The votes in both houses of parliament are likely this month and they would spell the end of a 17-year dominance of Italy by the flamboyant billionaire media magnate.

Berlusconi’s corporatist governing style and creepy-old-man personal life made him one of Europe’s least appealing leaders. But you’ve gotta admit he did have a gift for phrasemaking.


The GOP Nomination: There Is Mitt Romney, And Then There Is Everybody Else

Despite all the clatter and bang of the political media, despite all the daily back-and-forthing and to-and-froing, despite all the debating and positioning, there is really only one thing you need to know to understand the contest for the Republican presidential nomination in 2012:

There is Mitt Romney, and then there is everybody else.

It’s easy to lose sight of this if you read the news or watch the cable nets. Every day they have a new twist in the story, a new angle, a new face that’s either waxing or waning. But that’s all colored smoke and streamers; the truth that remains when the smoke has blown away and the circus has bedded down for the night never really changes.

There is Mitt Romney, and then there is everybody else.

This truth poses a huge problem for the Republican Party. Because Republicans — most of them, anyway — don’t really like Mitt Romney all that much.

Oh, there are some. But they aren’t enough. Challengers have come and gone, but the share of the Republican base that supports Romney has never really wavered — it’s consistently around 25%. But that’s not enough, all by itself, to win Romney the nomination.

The other 75% of Republicans, though, aren’t in any better position than Romney’s people are. All they need to beat Romney is a candidate to rally around who isn’t a complete buffoon. But despite months of increasingly desperate searching, they have yet to find one.

Herman Cain is the one in the news today, but he’s just the latest in a long line of anti-Romneys. Michele Bachmann and Rick Perry each took a turn carrying the banner, only to drop it when it became clear to even the most die-hard Romney-hater that they couldn’t campaign their way out of a paper bag. Next they turned to Cain, who is busy demonstrating the same quality.

You would think that the support of three-quarters of a major national political party would be a sought-after prize. But that particular Golden Ticket has been offered to so many clowns now that it has begun to appear a bit shopworn. Those who have it waved in their faces, like New Jersey Governor Chris Christie, have to wonder why, if it’s so valuable, nobody competent has stepped up to snatch it by now. (Christie struck a blow on behalf of overweight Americans everywhere by refusing it, demonstrating conclusively that just because somebody is fat doesn’t mean they’re also stupid.)

The reason is that it’s a poisoned chalice. The person who ends up holding that Golden Ticket wins the “opportunity” to become an enemy to the Republican aristocracy, which lined up behind Romney long ago, and the “opportunity” to run in a general election during a major economic downturn with the word “Republican” next to their name. Neither of these is what you would call appealing, at least to anyone with any desire for a long-term national political career.

Which leaves the Bachmanns and Perrys and Cains and Gingriches, who care less about building a legacy over the long haul and more about keeping their books off the remainders pile and juicing their attractiveness on the lecture circuit. That doesn’t mean that they can’t win the nomination, of course; when it’s last call and the pickings are slim, even the ugly can sometimes find someone to go home with. But that’s a far cry from true love flowering in the bloom of spring.

The signs are that the anti-Romney Republicans are beginning to see this.  They just launched a Web site, NotMittRomney.com, which expounds at length on the premise that Romney must not be allowed to become the Republican nominee.

The deep thinking that went into their position can be seen in the buttons on the site’s navigation:

Romney: He Can’t Win! But We’re Doomed If He Does!

“Waiter, this food is terrible! And the portions are so small!”

While the site has plenty to say about why Romney shouldn’t be the nominee, it is conspicuously silent on the obvious next question, which is who should be, if not Romney. That is most likely because for them to answer that next question would involve directing our gaze upon a gallery of political grotesques even more shocking than those they have trotted out to date, and even they realize that’s not a great way to win votes.

Which brings us right back to where we began.

There is Mitt Romney, and then there is everybody else.


Metered Billing: The Iceberg That’s Bearing Down On Your Business Model

I want to talk about something that’s looming in the not-so-distant future that could kill a whole bunch of the most promising online businesses: metered billing for mobile data.

One of the biggest trends in tech over the last decade has been the emergence of what’s generally dubbed “cloud computing.” In non-nerd terms, what that means is delivering over the Internet resources that used to be available only on your local PC.

Over the last couple of years, we’ve started to see cloud companies emerge that are aimed directly at the consumer.  Netflix, for instance, has moved aggressively (some might say too aggressively) into the cloud with its streaming service.  Their pitch is classic cloud marketing: instead of collecting DVDs of your favorite movies, which you then have to cart around with you to watch, you can just subscribe to Netflix and have them all delivered to you anywhere online. Music services Spotify and Pandora do the same thing with your music collection.  Dropbox does the same thing with your files.  And so on.

The key to the appeal of all of these services is that they combine convenience with savings.  If Netflix has the movies you want to watch, subscribing to their service makes it easier to access those movies anywhere than DVDs do, while simultaneously being cheaper than buying every movie on DVD.

But the convenience and the savings of cloud services stem from a couple of assumptions — that your connection to the Internet is always on, and that it’s billed at a flat rate. If your access to the Internet is limited, cloud services suddenly become a lot less convenient, because you suddenly have less access to your stuff than you would if you had carried copies with you.  And if your access to the Internet is billed by usage, cloud services suddenly become a lot less cheap, because it takes a lot of bandwidth to continuously shuffle files across the network.

As long as the primary platform people used to access online services was the desktop PC, these assumptions held, because the average desktop PC user’s Internet connection is both always on and billed at a flat rate. The always-on part has been the case since the emergence of affordable broadband in the early 2000s, and the flat-rate part has been central to ISP billing plans since the mid-1990s.

The problem is that the desktop PC’s primacy as users’ gateway to the Internet is waning. And on the devices that are taking its place — smartphones and tablets — always-on, flat-rate Internet access cannot be taken for granted. Even worse, the trend line is curving in the opposite direction: fewer and fewer mobile users have access to always-on, flat-rate Internet service with each year that goes by, at least in the United States.

Mobile users have two ways to get online. The first, Wi-Fi, offers reasonably fast wireless connections that are generally billed by time used rather than bandwidth used (or provided free); but outside public spaces like airports and restaurants, the gaps in Wi-Fi coverage are big enough that you can’t be certain you’ll always be able to get online.

The second option is cellular data connections. All major US cellphone carriers offer data plans that let you connect to the Internet. Unlike Wi-Fi, cellular satisfies the always-on criterion — cell coverage is broad enough that in most parts of the country you can rely on having access to a connection if you want one. But unlike Wi-Fi, cellular connections are not generally available at flat rates anymore.

Thanks to an orgy of consolidation, there are four national cellular networks in the US today: AT&T, Verizon, T-Mobile, and Sprint. Of these, only one — Sprint — currently provides a flat-rate, “all you can eat” data plan. AT&T and Verizon used to provide such plans, but have recently dropped them and shifted entirely to “metered billing,” in which the amount you pay for your service is set by how much bandwidth you use.  T-Mobile’s advertising might lead you to believe they offer flat-rate unlimited plans, but in fact they silently throttle your data connection if you use more than a couple of gigabytes in a month.

So if you’re a mobile customer who wants a flat-rate data connection, you have exactly one choice: Sprint. And even Sprint has begun edging away from unlimited data plans, raising the prospect of there being no providers of such plans in the US market in the near future.

What do these shifts mean for people who are building online businesses, especially cloud-oriented ones? The short answer is, they’re dangerous. Incredibly dangerous. Because they strike directly at the key things that make cloud services appealing to users — convenience and savings.

The risk they pose to the savings element of the cloud pitch is obvious: if users are getting charged by the byte, services that make extensive use of the network will mean higher costs.
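To make the savings risk concrete, here’s a quick back-of-the-envelope sketch in Python. The plan numbers are hypothetical (loosely modeled on current US metered offerings, so substitute your own carrier’s rates), but the shape of the result is the point:

    # Cost of cloud streaming on a hypothetical metered data plan.
    PLAN_GB = 2.0            # data included in the base plan
    PLAN_PRICE = 30.00       # dollars/month for the base plan
    OVERAGE_PER_GB = 10.00   # dollars per gigabyte over the cap
    GB_PER_HOUR = 0.7        # rough bandwidth for streamed video

    def monthly_bill(hours_streamed):
        """Data bill for a month with this much streaming in it."""
        used_gb = hours_streamed * GB_PER_HOUR
        overage = max(0.0, used_gb - PLAN_GB)
        return PLAN_PRICE + overage * OVERAGE_PER_GB

    for hours in (2, 10, 30):
        print("%2d hrs/month -> $%.2f" % (hours, monthly_bill(hours)))
    # 2 hrs stays under the cap ($30.00); 30 hrs comes to $220.00,
    # at which point "cheaper than buying the DVDs" stops being true.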

The risk they pose to the convenience element is less obvious, but in my view, it’s the bigger threat of the two. The core of it is this: a cloud service is only as convenient as local storage if retrieving data from the cloud service is indistinguishable to the user from retrieving it from local storage. That’s the case when you have an unlimited, flat-rate data plan, because the marginal cost to the user of pulling a byte from the cloud is effectively the same as pulling it from their local disk. But with metered plans, suddenly the user has to think about each byte they pull from the cloud before they pull it down. Will this be the byte that starts the meter running? That’s a question they don’t have to ask with local storage.  There’s no coin slot on their Flash memory card that they have to pump a nickel into every time they want to play a song.

That difference puts any business model that’s built on replacing a local service with a cloud service at risk, because it makes your business a second-class citizen. Your service now comes weighted down with a bunch of questions, and even if the answers to those questions work for some users, the very fact that they have to deal with them makes your offering less attractive. Many will avoid wrestling with them altogether by the simple expedient of not trying your service at all.

For now, the mobile market is still a secondary market, so this isn’t a make-or-break concern. But that’s becoming less and less true every year. It’s not hard to imagine a near future where the desktop PC and its flat-rate data connection is relegated to a few small niches, with the rest of the market being ruled by mobile devices with metered billing plans.

And if you’re running a cloud service, that’s a future that’s not buying what you’re selling.


Don’t Worry About Selling Your Privacy To Facebook. I Already Sold It For You

One of the most interesting — and by interesting, I mean appalling — items to come out of last month’s Facebook F8 developer conference was Facebook’s move towards what the company calls “frictionless sharing.”

I have a rule of thumb: whenever someone invents a new term to explain something they’re doing, they probably did so because describing it using existing terms reveals how horrible it is.  “Frictionless sharing” does not disprove this theory.

What it boils down to is simple: in the old days, you used to share things you liked with your friends on Facebook; with “frictionless sharing,” you’ll share everything you do online with your friends on Facebook, automatically.  No more clicking a button or pasting a URL to share something with your friends; now everything you look at will be public by default, just because you looked at it.

This is, of course, an absolutely colossal violation of your privacy. Everybody — everybody — has looked at something online that they wouldn’t want to share with the world. Even putting aside the obvious stuff (*cough* porn *cough*), what if you’re looking at job ads?  Would you want your co-workers (or your boss) to know you’re doing that?  Or if you’re searching for an old flame who it turns out is married now?  Do you want that fact announced to the flame (and the flame’s spouse!)? It takes literally seconds to think of scenarios where “frictionless sharing” could burn you, even if what you’re doing is totally innocuous.

To date, Web users have had an expectation that their online behavior was private, unless they explicitly made it otherwise.  It might be saved in aggregate form in some database somewhere, but that database was privately held, and the data wasn’t specifically tied to their online identity.  “Frictionless sharing” turns that expectation on its head — it makes it so that you need to assume that everything you do online is publicly tied to you, unless you take steps to make it otherwise. And that’s unacceptable, at least to me.

“But how can Facebook do that?” you ask.  “How can they track what I’m doing when I’m not on their site?”  The answer is that they have enlisted an army of accomplices — including, it depresses me to say, me.  I have been Facebook’s confederate in this scheme to violate your privacy.  I gave them material assistance in making it happen.

I did that by embedding Facebook’s Like Button on my site.

At first glance, the Like Button seems like it shouldn’t have anything to do with “frictionless sharing.” You still have to click it to “Like” the page you’re reading, right? Well, not quite. To understand why, you need to know a little about how the Web works.

“C” Is For Cookie

The Web, as originally designed, was what nerds call a “stateless” system, which in English means that every request you make for a Web page is completely independent of the request you made before; no information is shared between them. When you click a link on page 1 that takes you to page 2, page 2 knows nothing about what you were doing on page 1.  This makes the system much simpler to implement than more complex, “stateful” systems that do pass that information along, and that simplicity is a key reason why the early Web was successful where other hypertext systems were not.

But as the Web grew, people started to want to do things on it that you just can’t do in a stateless system.  The biggest example is e-commerce: to have a shopping cart on your site, you need to be able to keep track of when the user adds and removes items from the cart, and hold on to that information as they browse around until they go to check out. Without that ability, it’s impossible to run any kind of storefront on the Web.

This led to a great debate in the earliest days of the Web about whether these kinds of applications were appropriate for the Web at all, and if so, how one would go about building them. The solution that eventually emerged came from Lou Montulli, then a programmer at pioneering browser developer Netscape Communications. Montulli’s solution was to allow sites to set so-called “magic cookies” — small text files — in the user’s browser. Sites could store state information in cookies, so that page 1 could leave a note for page 2 telling it that you put a particular item in your shopping cart. Cookies turned the stateless Web of the early 1990s into the stateful Web of today.
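If you’ve never seen the mechanics up close, here’s a minimal sketch of the idea in Python (standard library only; the “shopping cart” is just a counter). Every request arrives knowing nothing about the one before it; the cookie is what carries the state across:

    # Minimal cookie demo: each HTTP request is independent, but a
    # Set-Cookie header lets the server recognize the same browser
    # on its next visit and pick up where it left off.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from http.cookies import SimpleCookie

    class CartHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Read back whatever cookie the browser sent, if any...
            jar = SimpleCookie(self.headers.get("Cookie", ""))
            items = int(jar["cart"].value) if "cart" in jar else 0
            items += 1  # pretend every page view adds an item

            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            # ...and hand the updated state back for the next request.
            self.send_header("Set-Cookie", "cart=%d" % items)
            self.end_headers()
            self.wfile.write(("Items in cart: %d\n" % items).encode())

    HTTPServer(("localhost", 8000), CartHandler).serve_forever()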

In doing so, though, they opened up new privacy issues that had never existed before. To guard against the obvious threat of sites snooping around in each other’s cookies, browser developers set things up so that the only cookies a site could read were ones that had been sent from that site’s domain. So you might think that when you load a page, you’re only sharing information with the operator of that page — but there’s an important caveat: when you load a Web page, it can contain resources like scripts that are pulled in dynamically from other domains. And those scripts can set their own cookies — called “third-party cookies” — which are visible to them anytime you hit a page anywhere that pulls in that particular script.

Up until now, the primary people who took advantage of this loophole were advertising networks; it gave them the data to customize the ads they show you to your interests, because every site that carried their ads also carried their scripts, which let them use a third-party cookie to build a profile of the sites you visited that ran their ads.  From a privacy perspective this is somewhat troubling, but I never found it that troubling, for two simple reasons.  First, no ad network has enough global marketshare to place its ads on every site on the Web (though Google gets closer every day), so there’s little risk of one being able to watch you everywhere. Second, even if they could, their use for your information is internal — they use it to tune what ads you see, not to tell others what ads you have seen.

“Frictionless sharing” attacks both those reasons head-on.

The Social Panopticon

The threat it poses to the second one is obvious — its whole point is to announce to the world what you’ve been reading, or watching, or listening to. The threat to the first one is a little more nuanced. Facebook doesn’t run ads on external sites, I hear you thinking. So how could they use third-party cookies to track me around like an ad network does?

The answer is hidden inside that little “Like” button.

See, the thing is, the way you place the Like Button on your site’s pages isn’t by downloading an image or a script and running it from your own site.  The only way you can do it is by pulling in a script from Facebook, from their site.  Which means that every page that includes a Like button — or any other Facebook plugin, like Facebook Comments — also reports back to Facebook that you have viewed that page. Regardless of whether or not you click it.

And that’s a Big Deal, because unlike ads from one particular network, Facebook Like buttons are damn near omnipresent these days. And that’s why they are so troubling. The only way one company could ever build a truly comprehensive profile of your Web usage would be if they could convince every site on the Web to include a little snippet of their code. For a long time, that seemed like a highly unrealistic prospect. As Like buttons proliferate, it begins to seem less and less unrealistic every day.
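You don’t have to take my word for how far this reaches, either; you can check any page yourself. Here’s a rough sketch in Python that lists the third-party domains a page pulls scripts from. Every domain it prints gets told about your visit, click or no click. (It only catches static script tags, not scripts injected at runtime, so if anything it undercounts.)

    # List the third-party domains a page pulls <script> tags from.
    # Each of those domains learns that you visited the page.
    import sys
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class ScriptFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.domains = set()

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src") or ""
                host = urlparse(src).netloc
                if host:
                    self.domains.add(host)

    page_url = sys.argv[1]
    finder = ScriptFinder()
    finder.feed(urlopen(page_url).read().decode("utf-8", "replace"))

    for host in sorted(finder.domains - {urlparse(page_url).netloc}):
        print("third-party script from:", host)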

(Not to mention that unlike with ad networks, if you’ve got a Facebook account, your tracked activity is now tied to a personally identifying profile. A profile that they require you sign up for with your real name.)

All of which is a long-winded way of explaining why I have removed all Facebook integration code from Just Well Mixed.* This site used to have Like buttons; it doesn’t anymore.

That’s because Facebook Like buttons are kind of like a bribe.  Facebook offered me something of value — a chance at increased traffic — in exchange for letting them keep tabs on which pages you read on this site, and how frequently, and for how long.  And by including the buttons on my pages, I took the bribe. I sold you out. I sold your privacy to Facebook.

That ends today.

* Real nerds will View Source and notice that I still have Open Graph metadata tags in the page headers. That’s because generating those doesn’t require letting Facebook execute code in your browser, so they pose no privacy risk; and they’re useful for describing the site to anyone who cares to write code to parse them, not just to Facebook. So I left them in.


What I Learned From Hitting The Front Page Of Hacker News

My recent post about Firefox problems on Linux ended up breaking a bit bigger than I expected — it got the attention of Hacker News, the popular Web nerd discussion board, got voted up enough by HN readers to hit the #3 spot on the site’s front page, and ended up garnering 203 comments there along with 53 more here.

I’ve never written anything that got that level of popularity on HN before, and the process taught me a few things. Specifically:

  • Within 2 hours of the post hitting the HN front page, I had several people at Mozilla reach out to me to tell me that they were aware of the issues and to suggest possible fixes (which I am testing now, and will share with you if they work out). So if you want to get annoyances in the software you use fixed, getting your complaints featured on HN would seem to be an excellent way to do it.
  • Many, many of the comments left both here and at HN were complaints about Firefox’s memory consumption, which is kind of amazing, because nowhere in my post did I ever mention that. My post was all about slowness in Firefox’s internal SQLite database, not about excessive memory usage or the browser slowing down with too many tabs open. In my experience Firefox used to have memory issues, but those are pretty much gone in current versions. I can only assume that lots of people just saw the words “Firefox” and “slow” in my post and started dragging out their memory-related complaints from 2008 without bothering to read any of the other words.
  • Many, many other comments were along the lines of “doesn’t happen here, you must be stupid or something.” Which, you know, thanks for the productive contribution!
  • One (1) comment contained an intriguingly plausible technical explanation involving low-level behavior of Linux filesystems for the problems I was experiencing. Oh, for a world where comments like this were the rule rather than the exception.
  • No matter how aggressively you cache the content on a WordPress-backed site, it will fall down under load if Apache & PHP only have 1GB of RAM to play with. Give them two and they are much happier. (Note: it may be possible to get more performance out of the same hardware by moving from mod_php to FastCGI, or moving from Apache to nginx, or moving from Earth to Mars, or whatever. I will leave that question as an exercise for the reader.)

So, to summarize: 300+ comments, out of which maybe 10 productively addressed my concerns.

Go Internet!


Possible Malaria Vaccine Found

This ought to be the top story on every news outlet in the world — a malaria vaccine appears to finally be within reach.

The long-awaited results of the largest-ever malaria vaccine study, involving 15,460 babies and small children, show that it could massively reduce the impact of the much-feared killer disease. Malaria takes nearly 800,000 lives every year – most of them children under five. It damages many more.

The vaccine has been in development for two decades – the brainchild of scientists at the UK drug company GlaxoSmithKline, which has promised to sell it at no more than a fraction over cost-price, with the excess being ploughed back into further tropical disease research…

This early data from five- to 17-month-old children is the first of three important results; the second outcome from the vaccination of newborn babies will be published next year. These are crucial, because the malaria vaccine needs to be incorporated into the infant immunisation schedule, alongside the usual diphtheria and measles jabs, but earlier small-scale trials suggest the results in six- to 12-week-old babies will also show around 50% protection.

If you live in the Northern Hemisphere you may not understand why a malaria vaccine would be a Big Deal, but those who live in the Southern Hemisphere understand it all too well; malaria relies on mosquitoes for transmission, and mosquitoes thrive in the global South’s tropical climate. And the result is beyond tragic: according to the Centers for Disease Control, malaria kills somewhere between 700,000 and 1,000,000 people every year, most of them children in sub-Saharan Africa.

To put that into some perspective, imagine if this year we took the entire population of San Francisco and shot them dead.   Then next year we did the same to the entire population of Indianapolis.  And then the year after that we did the same to the entire population of Austin, Texas. And then the year after that…

This probably seems unthinkable. But it’s the same scale as the very real human tragedy that malaria inflicts. Which is why a reliable malaria vaccine would be a huge breakthrough in public health — a breakthrough that would go down in the history books.

The results of this trial do not necessarily mean that such a breakthrough has occurred.  Scientists still need to do long-term tests to determine how long the vaccine’s protection will last, and its current effectiveness rate of 50% is lower than they generally like for vaccines; early tests of the vaccines against influenza and polio, by contrast, saw effectiveness rates on the order of 70-90%.

For all that, though, these results look like an incredibly encouraging step forward; even if the effectiveness rate of this vaccine never goes higher than 50%, I imagine there are plenty of parents who would be more than happy to cut their child’s malaria risk in half. (If your child was at risk, wouldn’t you?) And one has to assume that if this particular vaccine proves to work, enormous amounts of money and research will be brought to bear to get its effectiveness rate up as high as it can possibly go.

It’s sobering sometimes to think about just how far medicine has come in a very short amount of time.  A look at this timeline makes the point. Recorded human history goes back to about 4,000 BC. For nearly all of it, humanity had little or no protection against a whole range of infectious diseases.  It wasn’t until more than five thousand years after history began that methods of protection against these diseases began to emerge; first variolation, and then later the safer method of vaccination pioneered by Edward Jenner in the late 18th century.

Since then, in just over 200 years, a whole range of deadly plagues have been struck down by science. Smallpox. Diphtheria. Polio. Names that once terrified families and communities, now relegated by vaccination to dusty history books.

Is malaria next?

It’s probably too soon to say. But I sure hope so.


Let Them Eat Social Networking

In The New Republic, Evgeny Morozov has a characteristically sharp and provocative review of Jeff Jarvis’ new book cheering the death of privacy, Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live:

With a little Habermas, a little Arendt, and a little media history, Jarvis argues that “if we become too obsessed with privacy, we could lose opportunities to make connections in this age of links.” Privacy, he argues, has social costs: just think of patients guarding their health information instead of sharing it with scientists, who might use it to find new cures. For Jarvis, privacy is the preserve of the selfish; keep too much to yourself, and the “Privacy Police” may pay you a visit.

Why are we so obsessed with privacy? Jarvis blames rapacious privacy advocates—“there is money to be made in privacy”—who are paid to mislead the “netizens,” that amorphous elite of cosmopolitan Internet users whom Jarvis regularly volunteers to represent in Davos. On Jarvis’s scale of evil, privacy advocates fall between Qaddafi’s African mercenaries and greedy investment bankers. All they do is “howl, cry foul, sharpen arrows, get angry, get rankled, are incredulous, are concerned, watch, and fret.” Reading Jarvis, you would think that Privacy International (full-time staff: three) is a terrifying behemoth next to Google (lobbying expenses in 2010: $5.2 million).

I haven’t had a chance to read Jarvis’ book yet, so I can’t comment specifically on it.  But I was struck while reading Morozov’s review by an argument that he didn’t make, but probably should have.

Advocates of “radical transparency” like to make it sound like it’s an egalitarian development — a leveller — because if privacy is dead, we are all equally exposed. In other words, suddenly it’s not just celebrities who are being chased by paparazzi, but everybody; because in the radically transparent world, we have all transformed ourselves into paparazzi, reporting breathlessly on each other to the world. In this reading, a post-privacy world is a world in which everybody is equally exposed.

But this is, of course, nonsense.  When you step out of the world of theory and look at the world as it is, it quickly becomes clear that while the Internet has degraded privacy, it has done so far from equally. Thanks (if that is the appropriate word) to Facebook, I know a lot more about my friends’ private lives today than I did a decade ago; but I know about as much about Barack Obama’s private life, or Lloyd Blankfein’s, as I did about the great politicians and financiers back then. Maybe even less, since people at that level actively engage expensive teams of experts to use social media tools to “manage their personal brand” — make them appear as they want to be seen, rather than as how they really are. Maybe my friends would do the same, if they could afford to. But the point is, they can’t.

And that brings me to the dog that didn’t bark in Morozov’s review.  The truth is that while we live in a time of greater exposure, we are not all exposed equally. We are sorted instead into two classes: those who can afford privacy, and those who cannot.  Those who can engage PR firms and lawyers to ensure that anyone who does spy on them does so at great personal risk.  Those who can’t must learn to live without privacy — indeed, they must accept their own personal information being used to enrich others.  (To sweeten that pill, the others give the proles “free” toys to play with; left unsaid is that each toy has its own set of eyes built in, to make the conversion of your privacy to their profit even more efficient.)

That is what was so subversive about Wikileaks, and why the establishment reacted so forcefully against it — it applied the “radical transparency” ethos to the class of people who were supposed to be exempt from it. You need to learn to get over privacy — and if you can’t, Google will roll out Jeff Jarvis, or someone like him, to explain to you that you’re being a stick in the mud — but cabinet officers and CEOs didn’t think they needed to. And when someone had the nerve to challenge that assumption, all the power of the powers that be came down on them like a ton of bricks.

Morozov reports that Jarvis’ book doesn’t dwell much on Wikileaks. That doesn’t surprise me, because for all the noise the “radical transparency” types like to make about the social value of eroding privacy, the only privacy reductions they’re generally interested in are those that can be monetized. Google and Facebook are interesting to them because they have built business empires by commoditizing your privacy; all Wikileaks did was expose critical information about the foreign policy of the world’s dominant hegemonic power. Who cares about that? That didn’t make any VCs’ pockets jingle.

This is why “radical transparency” arguments get my hackles up — they always seem to be made by people who are arguing for less privacy for others, and frequently they are doing so on behalf of those who profit from that reduction in privacy. If you think living without privacy is a great idea, then do me a favor: you go first.


Dear Mozilla: Fix Your Damn Browser Already

Longtime readers of this blog will be aware that I have been a fan of Mozilla for a long time. There’s nearly ten years of Mozilla advocacy tucked away in the JWM archives. I’ve been on the Mozilla fanboy train since before Firefox even existed — all the way back to the original Mozilla Suite’s Milestone 17 release, the first version after the Netscape exodus I used regularly, which Wikipedia tells me shipped on August 7, 2000.  That’s back when Bill Clinton was president.  So I don’t like that I have to write this post, but I calls ’em like I sees ’em.

And on the subject of Firefox as it exists today, the way I sees ’em is this: Mozilla, you need to fix your damn browser.

Firefox, on Linux at least, is busted.  It’s busted so bad that it’s painful to use.  And it’s been this way ever since Firefox 3 launched — three years ago.

The culprit, I believe, is the mechanism that modern versions of Firefox use to keep track of your bookmarks and browsing history.  Before Firefox 3, bookmarks and history were stored in separate places; your bookmark list was stored as an HTML file — an approach that went all the way back to the original Mosaic browser of the early 1990s — and history was stored in a custom database called “Mork,” whose design was memorably described by Jamie Zawinski in 2004 as “the single most braindamaged file format that I have ever seen in my nineteen year career.”

As Zawinski’s testimony should make clear, working with those old tools was painful for the programmers involved, and as the browser grew in complexity the limitations they imposed became more and more acute. So for Firefox 3, Mozilla scrapped them both, replacing them with a new, unified system known as “Places.”

The key shift that Places embodied was that instead of being scattered across multiple poorly-documented data stores, history data (including bookmarks) would now be stored in a single data store, running on the popular embedded database SQLite — which meant that all that data could now be queried in more or less the same way as any other relational database.  That opened up a whole new range of feature possibilities, such as Firefox’s Awesome Bar, which also shipped with Firefox 3; the Awesome Bar put your browsing history at your fingertips in a way that the old systems could never have supported.
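And Places isn’t an abstraction; it’s literally a file, places.sqlite, sitting in your Firefox profile, and you can query it like any other database. A quick sketch (run it against a copy of the file, since Firefox keeps the live one locked while the browser is running):

    # Query Firefox's Places store like any other SQLite database.
    # Point this at a *copy* of places.sqlite from your profile.
    import sqlite3

    DB = "/path/to/copy/of/places.sqlite"  # adjust for your setup

    conn = sqlite3.connect(DB)
    rows = conn.execute("""
        SELECT url, title, visit_count
        FROM moz_places
        ORDER BY visit_count DESC
        LIMIT 10
    """)
    for url, title, visits in rows:
        print("%5d  %s" % (visits, title or url))
    conn.close()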

Which was great! Until it slowly became clear that Places brought with it a bunch of problems of its own.  From Firefox 3 on, I began to notice that Firefox was hanging, and hanging a lot.  Worse, it was hanging more and more as time went on. And the hangs tended to pop up when doing something history-related, like clicking the Back button, or typing something into the Awesome Bar.

The culprit, as far as I can tell, is Places — or, more specifically, Places’ SQLite backend.  I’m not enough of an expert on either Firefox or SQLite’s internals to know which one is really responsible.  All I know is that, once you made the move to Firefox 3, you started to notice the browser getting slower and slower, and hanging more and more; and the advice you got on how to fix that kept coming back to suggestions like “use SQLite’s VACUUM command to remove empty space from your Places database” and “your Places database is fragmented; delete it and start over with a clean one.”
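And just to underline the absurdity, here’s what that “advice” amounts to in practice: the entire maintenance ritual, sketched in Python (quit Firefox first, since VACUUM rewrites the whole file):

    # The recommended "fix": compact a fragmented places.sqlite.
    # Quit Firefox before running this; VACUUM rewrites the file.
    import sqlite3

    conn = sqlite3.connect("/path/to/your/profile/places.sqlite")
    conn.execute("VACUUM")
    conn.close()

Three lines of database housekeeping. The fact that a Web browser’s users ever need to know such a thing exists is exactly the problem.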

Which, not to put too fine a point on it, but what the hell? I’m running a Web browser here, not an Oracle cluster.  I shouldn’t need to be a freaking DBA to keep my browser running. And that’s not even the worst part; the worst part is that the only advice that really stops the problems — blowing away your Places database altogether and starting fresh — totally kills the value of the Awesome Bar. The Awesome Bar is awesome because it uses Firefox’s memory of your history to supplement your own; it helps you find sites you visited long ago and only vaguely remember now. Every time you blow away the Places database all that memory is wiped clean, which makes the Awesome Bar pretty non-Awesome.

Mozilla clearly knows these problems exist; mentions of them have popped up periodically in various Firefox blogs and forums ever since Places landed. And various people make noises about fixing them. But they never seem to get fixed. I’m on Firefox 7 now, and Firefox 8 is coming next month, and yet I’m still suffering from these painful performance issues that have lingered since Firefox 3.

That’s unacceptable.

There’s lots of great stuff coming down the pike in upcoming versions of Firefox. I find that I don’t really care about any of it anymore. What I care about is a browser that doesn’t require me to muck about with SQLite terminal commands, or manually erase history files every six to eight weeks.

A browser, in other words, that’s usable.  A browser that isn’t constantly hanging.

Can Mozilla deliver that browser? For the first time in almost a decade, I’m starting to doubt it.

UPDATE (December 20): Things have gotten better since I wrote this post, thankfully.