Archive:


You are more than a mouth

[Image: Open mouth]

I don’t like the word “consumer.”

When someone calls me a “consumer,” they’re making a political statement. They’re saying that I’m basically the same thing as a single-celled organism. I’m a creature the point of whose entire existence can be summed up as eating products and shitting money.

Like all living things, I consume. I consume light and food and water, and by doing so, I stay alive. But those acts of consumption are not who I am. They are the mechanisms of life, not the point of it.

Human beings have free will, and because of that, we do more than just consume. We create, we build. We make art, and philosophy, and love. And when we make love, we do more than just perpetuate our species; we make statements to each other, statements of commitment and devotion, that no other animal makes. We build monuments to each other out of bricks we fashion from our own clay.

So when people ask why Americans are willing to fight each other over cheap plastic baubles on Black Friday, I wonder if we don’t plant the seeds for such behavior when we teach people to think of themselves as consumers. If you are taught that you are just a maw into which products can be stuffed, it should come as no surprise when you act like that is true.

But it is not true. You are better than that, more than that. You are a noble creation, a consciousness illuminated by a bright, ineffable spark.

You are more, so much more, than a mouth.


LibertarianRepublican’s “The End of Liberty in America”: a dramatic reading

[Image: Wolverines!!!]

So over at a site called “LibertarianRepublican.net” (which should give you a pretty good idea what they’re on about), Eric Dondero is pretty sure that Obama’s re-election means it’s time for WOLVERIIIIIIINES!

Today starts a new course for my life. I’ve soured on electoral politics given what happened last night. I believe now the best course of action is outright revolt. What do I mean by that?

Well, to each his own. Some may choose to push secession in their state legislatures. Others may choose to leave the U.S. for good (Costa Rica, Switzerland, Italy, Argentina, Hong Kong, Israel). Still others may want to personally separate themselves from the United States here in North America while still living under communist rule’ the Glenn Beck, grab your guns, food storage, build bunkers, survivalist route. I heartily endorse all these efforts…

Are you married to someone who voted for Obama, have a girlfriend who voted ‘O’. Divorce them. Break up with them without haste. Vow not to attend family functions, Thanksgiving dinner or Christmas for example, if there will be any family members in attendance who are Democrats.

Do you work for someone who voted for Obama? Quit your job. Co-workers who voted for Obama. Simply don’t talk to them in the workplace, unless your boss instructs you too for work-related only purposes. Have clients who voted Democrat? Call them up this morning and tell them to take their business elsewhere.

While he encourages you to burn all your personal relationships to the ground and head off for the mountains with your .22 rifle, though, he’s taking a slightly different (but just as radical! honest!) approach to revolution: annoying his friends, family, and grocery store checkout person.

I believe we all need to express disgust with Obama and Democrats in public places. To some extent I already do this. Example:

When I’m at the Wal-mart or grocery story I typically pay with my debit card. On the pad it comes up, “EBT, Debit, Credit, Cash.” I make it a point to say loudly to the check-out clerk, “EBT, what is that for?” She inevitably says, “it’s government assistance.” I respond, “Oh, you mean welfare? Great. I work for a living. I’m paying for my food with my own hard-earned dollars. And other people get their food for free.” And I look around with disgust, making sure others in line have heard me.

I am going to step this up. I am going to do far more of this in my life. It’s going to be my personal crusade. I hope other libertarians and conservatives will eventually join me.

As you might expect, Dondero’s addled cri de coeur was met with mockery and ridicule around the blogosphere. But still, something was missing. Someone needed to give it the Serious, Dramatic Reading it deserved.

Am I that person? Probably not, but what the hell, I did it anyway.

http://www.youtube.com/watch?v=VsuQVhZB0Ds

Enjoy.


Everything you need to know to understand why Obama won, in one image

President Obama won re-election yesterday, and the pundits are all falling over each other to explain the hows and whys. But I would suggest that the answer is pretty simple. So simple, in fact, that it can be summed up in a single image.

That’s the image over there on the right. It comes from an October 31 post on the blog of Tom Holbrook, a political science professor at the University of Wisconsin at Milwaukee. The lines show the trends this year in how people feel about the two major parties in the United States — their “favorability.” Each point is a poll, and its position along the vertical axis is determined by the percentage of people in that poll who said they felt favorably about that party, minus the percentage who said they felt unfavorably about it.

What’s notable here are two things:

  1. The Democratic Party line consistently hovers around zero;
  2. The Republican Party line ranges from -15 to -10.

What this means is that throughout 2012, there wasn’t a whole lot of enthusiasm for the Democratic Party — the percentages of people who said they liked Democrats generally were more or less canceled out by those who said they didn’t like them. There was, however, a fair bit of negative enthusiasm for the Republican Party, with 10-15% more respondents saying they disliked Republicans than liked them.
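If it helps to see the arithmetic spelled out: “net favorability” is nothing fancier than the favorable percentage minus the unfavorable percentage. Here’s a minimal sketch in Python, with poll numbers invented for illustration rather than taken from Holbrook’s data:

```python
# Net favorability as plotted in Holbrook's chart: percent favorable
# minus percent unfavorable. These poll numbers are invented.
polls = [
    {"party": "Democratic", "favorable": 45, "unfavorable": 46},
    {"party": "Republican", "favorable": 40, "unfavorable": 52},
]

for poll in polls:
    net = poll["favorable"] - poll["unfavorable"]
    print(f"{poll['party']}: net favorability {net:+d}")
```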

This single observation explains a lot. In theory, Obama should have had a harder time than he ended up actually having last night: he’s an incumbent President presiding over a stagnant economy, and enthusiasm among his base started out weak, while his opponents had been energized by the fights against his health reform and economic plans. An opposition Congress had denied him major achievements, and he’d accepted watered-down versions of those he did get passed in futile attempts to woo them. In the first debate he seemed tired, and his performances in the second and third debates, while better, weren’t clear knockouts either.

But all of that turned out to matter less than it might have, because his opponent, Mitt Romney, was weighed down by an even greater burden: the burden of having an R next to his name.

Voters, it turns out, aren’t stupid. When things happen, they remember. So when Romney tried to capitalize on the weak economy by blaming it on Obama’s policies, they were smart enough to remember that George W. Bush, not Obama, was President when the economy went into the tank. They weren’t happy with the slow pace of recovery — a point this blog predicted would be a drag on Obama’s support two years ago — but they figured a slow recovery was better than a crash back into depression, which is the image that little R conjures up now. This is the boat anchor that Bush shackled onto the leg of the Republican Party, and they haven’t figured out a way to wriggle out of it yet.

That is a major, major problem for Republicans as a party. They simply cannot compete at the national level if their “brand” has become so tainted that any candidate who carries it takes a 10% hit in public opinion just for doing so. In a country as narrowly divided as this one is, that’s electoral suicide.

There are two ways a political party can overcome a handicap like this. The first is time: wait long enough, and people will eventually forget the disasters that prompted their negative reaction in the first place. But that’s a very slow process; it can take decades. The second is a major, visible change in direction: breaking with at least some of the candidates and policy positions that the party is currently associated with, and providing a clear, striking new definition of what belonging to that party means. But Mitt Romney was never that kind of candidate — and if he had been, it’s not clear that he’d have had any chance at all of surviving the Republican primaries, in which the only thing that counts is how slavishly the candidate can parrot conservative shibboleths.

This is the long-term challenge the Republicans are going to have to face, either now or sometime in the future: figuring out a way for candidates to run as a Republican without having to fit in the mold of George W. Bush. Because it’s pretty clear that American voters don’t find the products that come out of that mold appealing anymore.


Hurricane Sandy safety tip: don’t be a social media dumbass

[Image: Don't be a dumbass]

Hurricane Sandy is bearing down on the East Coast as I write this, and by all indications it is going to be a big one. If, like me, you live there, you should take a moment and review the government’s sensible recommendations on things you can do to protect yourself and your family from the storm. But in the spirit of this blog’s long-standing advice to always be prepared, I want to add a safety recommendation of my own:

When the hurricane strikes, don’t be a social media dumbass.

What’s a social media dumbass, you ask? A social media dumbass is someone who, in the midst of an emergency, thinks “I have to document this to share with the world!”

No. No. No you do not. There are professionals out there documenting the emergency just fine. The world can survive without your live tweets.

A sensible person, when a hurricane strikes, thinks “I have to get myself and my family to shelter immediately.” And that is exactly what you should be thinking. Anything that delays you from taking shelter puts you at risk. This includes standing around taking pictures with Instagram.

Even if you’ve taken shelter and stocked up on essentials, it is still possible to harm yourself through social media dumbassery. One way would be to spend your time in the shelter texting or tweeting or Web surfing or otherwise using your cell phone. This can put you and others at risk in two ways.

First, cellular bandwidth is a limited resource. If you’re using it to upload videos of yourself to YouTube, someone else who actually needs help may not be able to get a signal. (Conversely, if some other dumbass is wasting bandwidth, you may not be able to get a signal if you need help. This is a good reason why you should encourage your friends to keep their dumbassery in check as well.)

Second, there’s no guarantee that when you emerge from shelter you will be greeted with electrical service. It’s highly possible in any affected area that your power will be knocked out by the hurricane, and it may even stay out for several days after the hurricane hits. The batteries in most smartphones (the type of phone favored by the Social Media Dumbass) last at most a few hours without a charge. If you really are without power for an extended period, you will want every minute of that battery life so you can reach others if needed. If it turns out that you need help and your phone’s dead because you burned half of the battery watching Gangnam Style videos in your hurricane shelter, you will feel very, very stupid.

(This latter point also argues for not just not using your phone’s advanced features unless you have to, but for actually disabling them if you can. Many phones will allow you to turn off features like Bluetooth and wireless networking; because these features need to always be running to work, they drain the phone’s battery. Turning them off can help you extend the battery’s life.)

Death by Social Media Dumbassery is not a theoretical risk. There’s already been one known case: a California man who reacted to seeing huge waves rolling in from 2011’s Japanese tsunami by running toward them to take pictures.

Will there be others when Hurricane Sandy hits? That depends on you. Only you can stop social media dumbassery.

UPDATE (Oct. 29, 11 AM): New York Governor Andrew Cuomo agrees: “You do not need to be going to the beach to take pictures; you really don’t.”


Ask Mr. Science: Windows 8

[Image: Windows 8 Start Screen]

Q: Hey, Mr. Science! What’s Windows 8?

A: Well, Bobby, Windows 8 is the newest version of the popular Windows operating system for personal computers. It was just released for people to buy today.

Q: So what? Don’t lots of computer programs come out every day? What’s so special about this one?

A: That’s right, Bobby, they do. But Windows is a very special computer program, because so many people use it every day. It’s installed on the vast majority of the world’s home and office computers. More than one billion people around the world use it!

Q: Gee willikers, Mr. Science! That’s a lot of people!

A: It sure is, Bobby. And that’s why the release of a new version of Windows is important. Hundreds of millions of people are wondering if they’ll end up using it or not.

Q: So if Windows 8 is a new version of Windows, what’s new about it, Mr. Science?

A: Lots of things, Bobby. First, Windows 8 brings an entirely new look to Windows. That new look used to be called “Metro,” but Microsoft forgot to check if anyone else was using that term before they rolled it out, and they almost got sued by someone who actually was using it in their own business. So they had to stop calling it that.

Q: Golly! So if they don’t call it “Metro” anymore, what do they call it?

A: “Modern UI Style,” Bobby. Or “Windows 8 Style.”

Q: Gee, Mr. Science, those names are boring! I like “Metro” a lot better.

A: So does everyone else, Bobby. Which is why everyone in the world who doesn’t work at Microsoft still calls it that.

Q: OK, so it looks different than other versions of Windows. But does it WORK any different?

A: Yes, Bobby, it definitely does. Windows 8 is the first version of Windows to be designed for touch-friendly devices like phones and tablets, so it works a lot differently than previous versions did. For instance, when you start a Windows 8 application, it automatically fills up the whole screen, where previous versions would start the application in a window.

Q: So Windows 8 doesn’t put applications into windows anymore?

A: That’s right, Bobby.

Q: So why do they still call it “Windows”?

A: Nobody knows, Bobby. Nobody knows.

Q: But what if I really liked running my programs inside a window? Or if I have some programs that need to run in a window to work right?

A: Don’t worry, Bobby. Windows 8 provides you with an alternate mode, called “desktop mode,” that works just like your old versions of Windows did. You can switch back and forth between the new Metro interface and the old desktop mode interface at any time.

Q: So I can run new-style Windows 8 programs and old-style Windows programs right next to each other?

A: Not quite, Bobby. Windows 8 mode and desktop mode are separate from each other. So you can run your new Windows 8 programs, and your old Windows programs, but not together.

Q: Gosh, Mr. Science, that doesn’t sound very nice. But at least I still have some way to run my old programs!

A: That’s right, Bobby. Well, unless you’re using Windows RT.

Q: Windows RT? I thought we were talking about Windows 8!

A: We are talking about Windows 8, Bobby.

Q: Then what’s “Windows RT”?

A: Windows RT is a version of Windows 8 for mobile devices.

Q: Wait a minute, Mr. Science. You said before that Windows 8 was designed for phones and tablets and stuff!

A: I did, Bobby.

Q: Then why do we need a separate version of Windows 8 called “Windows RT” for phones and tablets and stuff, if Windows 8 was designed for them?

A: Well, Bobby, it’s like this. Windows 8 is designed for mobile devices. Except for some mobile devices, which use Windows RT instead.

Q: Gee, Mr. Science, that’s really confusing.

A: You’re telling me.

Q: So what’s the difference between regular Windows 8 and Windows RT?

A: Windows RT is just like Windows 8, Bobby, only without the desktop mode and the ability to run your old Windows programs. It only runs the new Metro-style programs.

Q: It doesn’t run my old Windows programs? So why do they still call it “Windows?”

A: Nobody knows, Bobby. Nobody knows.

Q: So if I go to buy a new computer, how will I know if it comes with Windows 8 or Windows RT?

A: It’s very simple, Bobby. You just look closely at the long list of technical specifications that’s printed in eight point type, and look there to see if it mentions “Windows 8” or “Windows RT.”

Q: Gosh, Mr. Science, I never look at those lists! They’re full of big words I don’t understand.

A: Well, Bobby, that certainly makes you a rare bird! Everyone I know reads the specification sheets before buying a computer. In fact, we linger over them thoughtfully, and frequently get into heated arguments over the merits of various types of video chipsets.

Q: You sure hang out with weird people, Mr. Science.

A: Don’t judge me, Bobby.

Q: OK, Mr. Science.

A: And now it’s time for a pop quiz! Tell the audience what you’ve learned today about Windows 8, Bobby.

Q: Windows 8 is some software for computers and phones and stuff. It looks just like regular Windows, except for all the places it doesn’t. It works just like regular Windows, except for all the places it doesn’t. And it runs all your old Windows software, except on some computers, where it doesn’t.

A: Very good, Bobby. Very good.

Q: Thanks, Mr. Science!!!


X-COM vs. XCOM

[Image: XCOM: Enemy Unknown]

Firaxis’ 2012 remake, XCOM: Enemy Unknown.

Last week saw the release of one of the most-anticipated games of 2012 — Firaxis’ XCOM: Enemy Unknown for PC, Xbox 360 and PlayStation 3.

I didn’t say much when it hit the shelves, mostly because I wanted to try it for myself before weighing in. This is because XCOM is a fairly daring sort of project: a revisiting of one of the most well-loved and important titles in the history of computer gaming, 1994’s X-COM. Even today, nearly 20 years after it first came out, people still play the original X-COM and hold it up as a singular achievement; which makes the idea of a remake, in terms of ambition, something like a modern filmmaker mounting a 3-D production of Citizen Kane.

The good news is that Firaxis’ XCOM is a game worthy of the name. I’m not sure it’s worth writing a full review here, since the Internet is full of glowing reviews of this game already. But if you absolutely must have a review from me, here you go:

JASON’S EXTENSIVE, DETAILED REVIEW OF XCOM: ENEMY UNKNOWN

It’s a very good game; you should buy it.

Now that that’s out of the way, I’ll move along to a subject that’s more interesting, at least to me: the design differences between today’s XCOM and the original X-COM.

If you never played the original — wait, what? You never played one of the greatest games ever? Well, you can fix that — it’s available for around $5 on various digital download services, including Steam and Gamersgate.

Now, where was I? Oh, right. If you never played the original, here’s a brief rundown of the story, which the new one (mostly) preserves. Earth is invaded by armies of strange extraterrestrial beings who seem bent on exterminating humanity. The nations of the world put aside their differences and form a global defense force, called X-COM, to battle the aliens. You play as the commander of X-COM, leading your soldiers through battles with the aliens around the world. Earth’s forces start out desperately outmatched by the aliens’ advanced technologies, so you have to use your wits to keep your troops alive long enough to capture examples of those technologies, which (with research) you can eventually turn into new weapons and defenses for your own forces. Once you have mastered the aliens’ tools, you can use them to mount a counteroffensive with the aim of ending the alien threat once and for all.

[Image: X-COM (1994)]

The original X-COM, released in 1994.

The fame of the original game came from the way it blended several different genres of game into a seamless, unified whole.  One minute you’d be thinking at the highest strategic level, planning which of Earth’s continents to build a new complex of defense sites on; a few minutes later, you’d be deciding whether an individual soldier should dash into a darkened building by herself or wait for her teammates to arrive and back her up.

The new XCOM holds on to this feeling, but with some differences. The biggest is that the game has been streamlined to appeal to a broader audience. The original gave you an incredible level of control over every aspect of X-COM’s fight against the aliens; the new version takes some of those tools away, but they’re not really missed, because the most important ones are still there. You still have to decide what weapons to kit your soldiers out with, but you no longer have to buy individual hand grenades and ammo clips. The management that’s been removed is mostly micro-management, in other words, which isn’t a particular loss.

The same streamlining is visible in the tactical battles in which you lead your squads of soldiers against the aliens. In the original X-COM, your team could have more than a dozen soldiers and the battlefields were large and sprawling. This gave the battles an epic feel, but at a cost: large teams meant lots of time spent ordering each soldier around, and large maps meant that battles could sometimes devolve into boring “bug hunts” where you had to scour every hedge and shed to find and kill the last alien on the map. In the new game, you can have at most six troopers on a given squad, and the maps are smaller and reoriented so that it’s always clear what general direction the overall objective of the mission is in. I worried that this would subtract tension from the battles, but in practice it does not; it just means that they play through faster, and that you come to focus more on the few soldiers you have, learning their skills and drawbacks, instead of just pushing groups around the field like cannon fodder.

Being a product of its time, the original X-COM is not exactly user friendly; you’d be hard-pressed to learn how to play without a copy of the manual, and even then the interface is a bit cryptic. The new XCOM simplifies the interface and adds a story-driven tutorial that teaches you the basics of the game while you play it. The simplification of the interface has been controversial; some have found that it’s been so simplified that the game actually plays better with a console controller than with the traditional mouse and keyboard. I’ve been playing with mouse and keyboard on the PC and haven’t found doing so to be particularly awkward, however. (If you’re a console gamer, this means you can pick up XCOM for your Xbox and be comfortable that it’ll be easy to play, which is nice.)

Another controversial decision by Firaxis was tweaking the game’s difficulty levels. The original X-COM was famously unforgiving, almost brutal; missions would frequently collapse into bloodbaths, with the aliens massacring your troops as you clicked frantically to try and get the survivors back to the transport plane. Firaxis’ XCOM offers four difficulty levels: Easy, Normal, Classic, and Impossible. I’ve only played so far on Normal and Classic.

Classic is exactly what it says on the tin: the original, blood-soaked X-COM experience. In Classic mode, your soldiers get killed, a lot. Hardly a mission goes by without at least one or two casualties, and sometimes entire squads go out and never come back. It’s challenging, sometimes brutally so. Normal is significantly easier, especially if you’ve got experience playing the original. (At first I thought Normal was too easy, but then I got overconfident and sent a squad off to raid an alien base with only the most basic weapons, only to see them get absolutely slaughtered.) If there’s a complaint to be made here, it’s that there probably should be a level between Normal and Classic; but that’s small beer.

And really, that’s what the few complaints I have about the new game boil down to: small beer. XCOM gets so many things right that the few things it doesn’t feel insignificant in comparison. Which is sort of a small miracle, when you think about it. There are so many ways Firaxis could have fucked this game up — but they didn’t. They avoided them all. Somehow they managed to create something that, while noticeably different from the original, still provides tense, nerve-wracking battles; still shocks when you send a soldier around a dark corner to find an alien waiting there for them, ray gun at the ready; still breaks your heart when your best soldier, the one you’ve trained up from a raw rookie, stands and fires as extraterrestrial hordes bear down on her and you realize: this is it, this is the mission she’s not coming back from.

Still feels worthy, in other words, to call itself X-COM.


Dear media: here is why nobody trusts you anymore

[Image: Liar Liar]

Americans don’t trust their press corps to tell them the truth. Why should that be?

I would submit that this story, published on CNN’s Political Ticker blog yesterday, illustrates one reason.

Gibbs: Romney has advantage in debates

(CNN) – A senior Obama campaign adviser said Mitt Romney has a leg up on President Barack Obama in the upcoming presidential debates.

“Mitt Romney I think has an advantage, because he’s been through 20 of these debates in the primaries over the last year,” Gibbs said Sunday on Fox News…

On Air Force One on Monday, Obama campaign press secretary Jen Psaki claimed Romney was doing more preparation for the debates “than any candidate in modern history.”

“They’ve made clear that his performing well is a make-or-break piece for their campaign,” Psaki said of the Romney team’s efforts.

The “news” in this piece is that two professional Democratic strategists claim that Republican Mitt Romney will have an advantage in the upcoming debates, because his path through the Republican primaries required him to participate in many debates, whereas President Obama’s path to renomination did not.

The problem with this “news” is that it, well, isn’t. It isn’t news at all. It’s 100% pure political spin, and fairly transparent political spin at that.

You see, it’s a fairly elementary bit of conventional wisdom in politics that when a debate approaches, you spend the time leading up to it talking up the debating prowess of your opponent. Why? Because doing so is a no-lose proposition; if your candidate loses the debate, you can say it was only because the opponent had such legendary debating skill, and if your candidate wins, you can say what an amazing feat it was to slay such a mighty opponent.

So that explains why Gibbs and Psaki are pushing this line — they’re running interference for President Obama. But why is CNN repeating it?

Neither Gibbs nor Psaki offers any evidence to back up their claims, after all. They just offer a theory, that the Republican primary process honed Romney’s debate skills to a fearsome level of keenness. Which might be an interesting theory, if only it didn’t run counter to everything we’ve seen from Romney over the last few weeks.

Can anyone look at the Romney campaign and honestly say that there’s a shred of evidence that Romney has matured into some kind of Great Communicator? This is a man whose biggest speech to date was upstaged by a man doing improv with a chair. It would take some serious evidence to convince an objective reader that Romney truly has a leg up on Obama going into the debates; but CNN offers none, preferring instead to just echo the spin they got from Gibbs and Psaki.

Beyond that, though, there’s another dimension that makes this offense worse. I guarantee you that nobody at CNN — nobody who was involved with the production of this story — actually believes its premise. They don’t believe that Romney’s been sharpened into a fearsome debater. They don’t even believe that Gibbs and Psaki are dealing with them honestly.

They know they’re being spun, and they know that all they’re doing is passing the misleading spin along to their readers. But they just don’t care. It was said by a political figure, so as far as they’re concerned, it’s worthy of passing along.

The cynicism of this line of thinking should be breathtaking. The only reason it isn’t is because we’ve had so many lies passed along to us uncritically by our media that one more lacks the power to shock.

“Spin” is a cute-sounding word, but we should be clear about what it really means. Spin is an attempt to mislead the listener in ways that benefit the spinner. I can understand why a political operative would want to do that. What I can’t understand is why a media outlet — an organization whose entire purpose is to inform its readers — would pass it along without comment.

It shows very clearly who exactly they believe they serve, and it isn’t you and I. It’s the players, the professionals in the game of politics. The assertion Gibbs and Psaki are making is part of the ritualized structure of American politics; it’s what flacks do before every debate. And CNN sees itself as part of that ritual. Not as a guide to it for the rest of us, a vehicle for cutting through the Kabuki to the reality underneath. As a participant. A player. 

In other words, they see themselves as peers of Gibbs and Psaki, chuckling at the suckers and rubes who might actually believe this nonsense — said suckers and rubes being, of course, their readers. And that’s an attitude that absolutely kills trust. Nobody’s going to trust someone who’s more interested in the approval of the person ripping you off than in helping you avoid getting ripped off.

UPDATE (Oct. 1): Charles P. Pierce makes the same point more eloquently, as usual:

[I]gnore any pundit who attempts to explain the strategy in advance, or who attempts to assess a candidate’s performance retroactively, through what the pundit calls, “the expectations game.”

The Expectations Game is a scam, perpetrated by the candidates themselves, which the elite political media know is a scam, but which they use as a metric anyway because all of them — the candidates, their coat-holders, and the media — think you’re stupid.


Fox News will say the extra “L” is for “liberal,” of course

I’ve used this space to complain about the communication skills of the White House’s graphic design team before, so this seems like an appropriate place to talk about the new graphic they released today:

"Repealled"?

Look at the headline under the date.

“Repealled”? With two “L”s?

Seriously?

[Image: Picard facepalm]

I don’t normally ding people for stuff like this, but representing the President means you’re playing in the Big Leagues. And spelling mistakes are the sort of unforced error you’re supposed to only see in the minors.

(Note: I’m sure they will correct the image once enough people notice the error, so the link above may point to a version with “repealed” spelled correctly at some point in the future. The image that’s embedded in this post is a copy made at 4:55 PM Eastern time. It has not been altered in any way.)


Bill Nye demonstrates how not to persuade a creationist

There’s a video that’s been going around for the last couple of days in which Bill Nye (“The Science Guy”) tries to explain to creationists why people who believe in divine creation instead of evolution shouldn’t pass this belief along to their kids. It’s been getting a lot of attention — 1.7 million views as of this writing — and praise for being a powerful example of effective science communication.

Which kind of surprises me. Because after watching the video, I would use it as an example of how scientists and advocates for science should not communicate with the public.

Let me say first that I have a lot of respect for Bill Nye. There are few people in the world today who have done more to make science comprehensible to laypeople than he has. This post isn’t a slam on his work in general. But as a professional communicator myself, and one who used to work as an advocate for science-based policy, when I watched this specific video, I wanted to grab him by the shoulders and shake him, hard. His messaging here hits all the wrong notes.

The big problem

Before I get into specifics, let’s start with the overarching problem with the video, which is this: if you want to change somebody’s mind, you have to first establish to them that you’re someone they want to listen to. The way you do that is by approaching them with respect. And Nye comes across here as deeply disrespectful of creationists. He repeatedly dismisses their beliefs with a literal wave of the hand.

This is probably the biggest challenge in teaching scientists, and particularly evolutionists, how to communicate effectively with the public at large. To a scientist, the only worthy response to creationism is dismissal, because creationism has no scientific basis. It’s a rejection of science, of the idea that we can come to understand the universe through our senses and the evidence around us. So scientists tend to respond to it the way Nye does here, waving it off as superstition.

As a scientifically-oriented person myself, I understand this reaction. But for communicators it’s a mistake, because when you dismiss a person’s beliefs out of hand, what they see in you is arrogance. They see you as someone who’s so sure you’re right you don’t even need to consider their beliefs. And that immediately puts them on the defensive, which slams the door shut on their willingness to consider your arguments.

Like I said, this is a big problem in science communication. It’s such a big problem that someone could write a whole book about it — and someone did: my friend Randy Olson, who’s a Harvard-trained marine biologist as well as a filmmaker.

A couple years back he wrote an excellent book, Don’t Be Such a Scientist: Talking Substance in an Age of Style, to help educate scientists about the basics of effective communication. In it, he wrote:

Just look at your typical villain. What’s the most common trait, for everyone from Hitler to Dr. Evil? It’s the arrogance of believing they are smarter and better than the rest of the world. It’s a repulsive trait — a guaranteed pathway to not being liked…

Being known as a tough critical thinker sounds like a good thing. And when you watch a group of top scientists get together and critically analyze a proposed idea, doing what they are best trained to do, it can be an impressive spectacle — like a group of competing alpha males pounding their chests and proclaiming dominance as they grind up what previously sounded so interesting.

But it’s a different story when you take that behavior out from behind closed doors. What is admired within the cloisters of academia can be horrifying when unleashed on the general public. And that’s because the masses thrive not on negativity and negation but on positivity and affirmation.

Don’t believe it? Just watch The Oprah Winfrey Show. What do you see, day after day? Stories of hope and joy, uplifting, inspirational, fulfilling…

Just look at the most popular movies. They’re mostly inspiring stories of hope. Not a lot of blockbusters that end with the hero plowing his truck into a school bus full of kids.

Now, with that in mind, put yourself in the shoes of a creationist and watch the video again. Is the message Nye is sending you positive or negative? It’s negative — overwhelmingly so. It’s about how the things you believe in aren’t just wrong, but so wrong that they are actively harmful to your children. And about how those children need to be shielded from you — like you’re a terrible disease — before you damage them beyond repair.

Does this sound like a message that people will receive and process with an open mind? Or does it sound like one that will immediately send them into a defensive crouch?

Now, this doesn’t mean that you have to believe in the things that they do. It doesn’t even mean you have to think the things they believe in are rational. What it does mean, though, is that you have to approach them with honey instead of vinegar if you want them to be open to what you have to say. And unfortunately, Nye’s tack here is the exact opposite.

Point by point

Moving beyond general issues of tone and presentation, consider some of the specific statements and arguments he makes in the video:

Denial of evolution is unique to the United States.

To Nye, this is a negative — evidence that we aren’t able to see what everyone else is seeing. But lots of Americans — especially conservative Americans, who would disproportionately tend to be creationists — have no problem with the idea that America is different from other nations. There’s even a name for this line of thinking: American exceptionalism.

If someone thinks America has been set apart by God from the rest of the world, why would it bother them that the French or the Japanese subscribe to something different than we do? That’s just more evidence that we’re better, smarter, more enlightened than they are. If you want to persuade somebody, your argument has to work within their personal frame of reference. If it doesn’t, all you’re going to do is confuse them.

People still move to the United States. And that’s largely because of the intellectual capital we have — the general understanding of science. When you have a portion of the population that doesn’t believe in [science], it holds everybody back.

This is a weak argument, because there are other reasons one could plausibly cite for why immigrants are attracted to the U.S. A conservative might say that one big one is our tradition of religious freedom. If you believe that, Nye’s flat assertion that no, it’s really science isn’t going to change your mind. Assertions without evidence are weak tools of persuasion.

Evolution is the fundamental idea in all of life science, and all of biology. It’s very much analogous to trying to do geology without believing in tectonic plates.

How many creationists out there do you think can relate to this analogy? How many of them have worked on geologic problems at the level where they need to take continental drift into account? Beyond a little high school and maybe college study, how many of them have worked in geology at all?

Analogies need to relate to things within the experience of the listener in order to be effective; if you compare your subject to something the listener has never heard of or doesn’t understand, you just send your message whooshing over their head.

Once in a while I get people who claim they don’t believe in evolution.

This is an example of the dismissiveness I mentioned above: he can’t bring himself to concede that they may actually believe the things they say they do; he’ll only grant that they “claim” to believe them. If your listener thinks you believe they’re not arguing in good faith, they will tune you out.

Your world just becomes fantastically complicated when you don’t believe in evolution. Here are these ancient dinosaur bones, or fossils; here is radioactivity; here are distant stars that are just like our star, but are at a different point in the lifecycle. The idea of deep time, of billions of years, explains so much of the world around us, if you try to ignore that, your worldview just becomes crazy — untenable, self-inconsistent.

Two problems here. First, the word “crazy” goes back to the problem of negativity. Second and more important, though, is that Nye’s argument here sails right past a critical point: not subscribing to evolution doesn’t make the creationist’s worldview complicated, because the creationist’s worldview is the simplest one possible:

“God did it.”

Dinosaur bones? “God put them there.”

Radioactivity? “God made it.”

Stars at different points in their lifecycles? “God made them that way.”

You can say that this is not a particularly rigorous model of how the universe works. But one thing you can’t say is that it’s internally inconsistent, because it’s very internally consistent. Everything is the way it is because God made it that way. If there appears to be a conflict between two things that God made, that’s because we mortals can’t understand His purposes. After all, He works in mysterious ways.

This is another example of what I was saying above about how persuasive arguments operate within the subject’s frame of reference.

And I say to the grown-ups: if you want to deny evolution and live in your world that’s completely inconsistent with what we’ve observed in the universe, that’s fine.

Nye’s “that’s fine” at the end of this statement is sardonic, which makes it off-putting. The listener knows from the phrasing of the rest of the sentence that Bill Nye doesn’t really think it’s fine for her to “live in a world that’s completely inconsistent with what we’ve observed in the universe.” In other words, he’s making fun of her. And making someone an object of mockery will tend to provoke a defensive reaction.

But don’t make your kids do it, because we need them.

It’s not hard to imagine that this sentence could, to a parent, sound kind of chilling. The nation needs your children to be raised correctly. If you don’t raise them correctly, you will be harming the nation. There is a weird undertone of “or else” to this argument: raise your kids right, or else society may have to step in and do what you won’t.

Would any parent be open to a message that was presented to them in those terms? Would you be open to it, if it were a creationist talking about how you need to start sending your kids to an evangelical church on Sundays, because “we need them”?

In another couple of centuries, that worldview, I’m sure, will be — just won’t exist.

This is the “ash heap of history” argument, and it illustrates another point that Olson makes in his book: just because something is true does not mean it’s persuasive.

A common trap that empirically-minded people tend to fall into in debate is throwing anything true that supports their position into their argument. This is dangerous, because people react differently to different things, even if they are all equally true. So some things, while true, are best left out of your argument, for the simple reason that convincing the other party of their truth will cost you more than it’s worth.

I tend to agree with Nye that, barring enormous intellectual and societal changes, creationism will not be a widely subscribed-to belief system two or three centuries from now. But while I would look at that as a positive societal advance, to a creationist, it would be a disaster — a complete rejection of their entire belief system. It would mean creationists’ backs are up against the wall of extinction; and people with their backs to the wall tend to not be open to alternative viewpoints. If you’re trying to convince someone to subscribe to your alternative viewpoint, that’s the exact opposite of the position you want to put them in.

In conclusion

I like Bill Nye. On this matter, I agree with Bill Nye. But Bill Nye’s not going to change any creationist minds with this video.

To my scientifically-minded friends: we need to do better. We have to do better. If you’re an evolutionist and you’re not sure how we could do better, read Randy Olson’s book for some ideas.


Twitter teaches a new generation of developers why proprietary platforms suck

[Image: No matter how comfortable the trunk is, it's still a trunk]

The Interwebs have been up in arms since Michael Sippey of Twitter posted an entry on the company’s blog yesterday outlining some changes that are coming to Twitter’s API. (For my non-technical readers, the API is the mechanism that software programs use to connect to Twitter, allowing them to show you your incoming Tweets and post your own.)
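For the programmers in the audience, “connecting to Twitter through the API” has, until now, looked roughly like the sketch below, which uses the third-party tweepy library; the credential strings are placeholders for the keys you’d get by registering an application with Twitter.

```python
# Rough sketch of what a third-party Twitter client does via the API,
# using the tweepy library. The credential strings are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Show the user their incoming tweets...
for tweet in api.home_timeline(count=10):
    print(f"@{tweet.user.screen_name}: {tweet.text}")

# ...and post one of their own.
api.update_status("Posted from a third-party client.")
```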

There’s a lot of verbiage in that post, but it boils down to this: Twitter needs to be able to show ads to users to make money. If you use software made by someone else to connect to Twitter, that someone else can put in code to strip the ads out so you never see them. Therefore, Twitter is going to shut those applications made by other people down. They’re not going to do it all right away, of course; but the changes make it pretty clear that existing Twitter client applications’ days are numbered, and that anyone thinking about writing a new one will be so shackled down with restrictions and limitations that the prospect won’t be very attractive.

This has a lot of people angry, especially programmers, because one of the things that has been unique about Twitter since it first launched was how open it was to other people building software on top of their services. Now, though, Twitter needs to start making money; they’ve decided that making money means showing ads; and they can’t show ads consistently unless every user who reads a Tweet reads it through a channel they control. So all those developers who built the software that propelled them to success are going to be put out of business.

Despite what you may be thinking, though, this is not a post wailing at Twitter for making this decision. It’s a post wailing at the developers who are upset about it.

Because — let’s be honest — you should have known better.

The thing about Twitter is that, even though they provided an API for you to connect your software to, it was always, at root, a proprietary system. The only entity that has access to the whole thing — the entire stack of software and data that makes Twitter Twitter — is Twitter Inc. You can’t download the software that runs Twitter and run it on your own server; you can’t see the underlying data, other than what they choose to expose to you via the API.

This is what makes Twitter as a service different from an open system like, say, e-mail. E-mail is not the property of any one corporation. It’s a set of standards that lets anyone build software that can connect to other systems that use the same standards. Anyone can set up a server and start sending and receiving e-mail, as long as their server plays by the same rules everybody else’s does; you don’t need to get anyone’s permission to do that. And if you run the e-mail server, you have complete access to all the mail sent and received by that server; you don’t need to ask someone’s permission to get to it.
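To make the contrast concrete, here is a minimal sketch of speaking SMTP, the open e-mail protocol, with nothing but Python’s standard library. The server name is a placeholder for whatever standards-compliant server you run or rent; no platform owner has to approve any of it.

```python
# Minimal sketch: sending mail over SMTP, the open e-mail protocol.
# "mail.example.com" stands in for any server that follows the standard;
# no proprietor's permission or API key is involved.
import smtplib

message = """\
From: alice@example.com
To: bob@example.org
Subject: Open standards say hi

This message travels over a protocol that nobody owns.
"""

with smtplib.SMTP("mail.example.com") as server:
    server.sendmail("alice@example.com", ["bob@example.org"], message)
```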

The fundamental difference between open and proprietary systems is that developers who build on proprietary systems live or die at the whim of the system’s proprietor. In a proprietary system, the owner of the platform stands between the platform and anyone who wants to build on it. That gives them the power to set the rules that everyone has to play by to use the platform. And it also gives them the power to change the rules whenever the old rules stop being convenient for them.

Open systems operate differently. Because there’s no proprietor standing between you and the platform, no one person can just decide to put you out of business. Open platforms evolve, but they evolve slowly, and with lots of opportunities for those who participate in them to affect the direction they evolve in. Once upon a time, for instance, you couldn’t include formatting (bold, italic, underline, etc.) in an e-mail, or attach files to it; but people wanted to be able to do those things, different people tried different approaches to making them possible, a consensus arose that one approach (“MIME”) was the best way, and that approach eventually got codified as a set of standards. Today nearly every e-mail aware application on Earth follows those standards, and people can easily send formatted messages and attachments back and forth to each other without ever being exposed to the underlying plumbing. Millions (billions?) do so every day.
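Those MIME standards are so thoroughly codified that building a formatted message with an attachment takes only a few lines against an ordinary standard library. A sketch using Python’s built-in email package, with made-up addresses and a made-up filename:

```python
# Sketch: constructing a MIME message with an attachment using the
# standard library's email package, which implements the MIME standards.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Report attached"
msg.set_content("The attachment below rides along via plain MIME.")

with open("report.pdf", "rb") as f:  # made-up filename for illustration
    msg.add_attachment(
        f.read(),
        maintype="application",
        subtype="pdf",
        filename="report.pdf",
    )

print(msg)  # the raw MIME text that any compliant mail program can read
```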

In a proprietary system, by contrast, new features only get added when the proprietor decides to add them. And the flip side of that is that the proprietor can choose to take features away at any time, too. What’s worse, if you’re a participant in such a platform, you don’t really have a lot of say in the decisions either way. You take what the platform proprietor gives you. And if what they want to give you today doesn’t support the business you’ve just spent five years of blood, sweat and tears building up, you’re just out of luck. Thanks for playing!

This is one of those lessons that everyone involved in technology learns at one point or another. But you shouldn’t have had to learn it the hard way. There are just so many examples of it in the past. Here are a few:

  • In 2002, the then-young Google launched the Search SOAP API, an interface that let you use a standard protocol to run search queries through Google’s world-beating search engine from within your own software applications. This had all sorts of useful applications, and developers took to it with gusto. As Google’s business plan firmed up and it became clear that showing ads on search result pages was going to be their primary revenue stream, however, this API began to look like a threat, since it let developers show Google search results without Google ads attached. So four years later Google killed it, replacing it with a new “AJAX Search API” which, instead of just giving you back raw search results data that you could format and display however you liked, required you to carve a hole in your Web page where Google could display anything they wanted — including ads. Google gave existing SOAP API applications three years before they turned the lights out for good on them, which is generous as these things go, but the fact remained that developers who’d built applications that relied on that API were out of luck; those applications went from “viable products” to “dead ends” overnight.
  • In 1991, Microsoft released a product called Visual Basic that made it possible for non-programmers to write simple Windows applications. It was a huge hit, and an uncountably vast number of these applications were written; people made careers as Visual Basic developers. Then in 2002 they released a new programming environment called .NET, and to encourage people to move over and start using the new shiny, they killed off Visual Basic. (As a sop to the orphaned VB developers, Microsoft provided a new VB-ish language, Visual Basic.NET, in the .NET environment; but VB.NET wasn’t backwards-compatible with “classic” VB code, and as a language it was sufficiently different to be less a simple move up from classic VB and more a whole new language that just happened to share some of VB’s syntax.) Suddenly all those people out there who’d bet their career on Visual Basic had to drop everything they knew and go learn a new language just to be able to keep working, and all those Visual Basic apps had to be either rewritten in a new language or stay frozen in amber as they were the day before the .NET meteor hit.
  • In 1987, Apple released a product for Macs called HyperCard, which let users design hypermedia presentations called “stacks” where resources were connected by clickable links. In the pre-Web world this was a major advance, and HyperCard became a popular format for developing interactive presentations, especially in the education sector. When Apple developed OS X in the late 1990s, however, they decided not to include support for HyperCard in the then-new operating system. The result was that all those HyperCard stacks that people had lovingly built and tended over more than a decade were suddenly obsolete; you could only run them if you had an old, pre-OS X Mac available, and no easy way to make them playable on newer Macs existed. And all the HyperCard skills that those people had learned over that time were suddenly obsolete too. (Unlike with Google’s SOAP API and Visual Basic, HyperCard developers never even got the courtesy of an official explanation of the decision from Apple; HyperCard was just quietly dropped into the memory hole. The conventional wisdom is that it died because Steve Jobs, who returned to lead Apple in 1997, didn’t like it.)

The thread that connects these stories is simple: all of them are about developers who got burned by relying on promises from owners of proprietary platforms. Unless you have a legally binding contract with the platform proprietor, you should be aware that a proprietary platform vendor’s promises are worth nothing. All it takes is a change of management or a change of strategy by the proprietor and suddenly yesterday’s firm commitments are today’s forgotten ones. Building on a proprietary platform is building on quicksand.

This is why Dave Winer refers to developers who build on proprietary platforms as having been “locked in the trunk.” The platform is a car, and you’re not the driver. You’re not even a passenger. You’re locked in the trunk, and you’re only as free as the owner of the car will let you be.

It’s also why so many of us, having been burned by proprietary platforms in the past, flocked to the Web, which is a truly open platform. And why we resist the efforts of companies like Facebook and Twitter to take that platform and close it. Once you’ve been locked in the trunk, and managed to escape, you don’t ever want to go back in there again.

I wish each generation of developers didn’t have to learn this lesson the hard way. It’s a painful one, especially if you’ve made a big bet on a platform only to find yourself abandoned by its proprietor. But it’s a lesson that the industry keeps teaching us, over and over again. And it’ll only stop dishing out the lessons when we stop enthusiastically volunteering to go into the trunk.


Amazon Cloud Player and scan-and-match

[Image: Amazon Cloud Drive and Cloud Player]

Last year in this space I took note of the launch of a new product from Amazon.com called Cloud Player, and asked “Amazon’s Cloud Player Is Cool. But Is It Legal?”

Amazon is not the first company to have had the idea of letting you establish an online “music locker” to let you access your music from anywhere.  The first was the pioneering music site MP3.com, all the way back in the late ’90s.  I wrote about my admiration for MP3.com’s innovative spirit 11 years ago(!), and their version of this service, called My.MP3.com, was a good example of that; it worked pretty much the way Amazon’s service does, only you didn’t even have to rip your CDs or upload any files anywhere to use it — you just put a CD in your drive to confirm that you owned it, and My.MP3.com loaded pre-ripped MP3s for that CD directly into your locker for you.

For 1999, this was pretty crazy advanced stuff.  So naturally the record labels took notice and sued MP3.com out of business over it.

I initially figured that the key difference would be in the final point — whether Amazon, unlike MP3.com, did anything to address the potential impact of their service on the market.  In other words, My.MP3.com was illegal because tiny MP3.com made the CDs available without paying a licensing fee for that music to the copyright holders, but Amazon’s service wouldn’t be, because they would have made those licensing deals first.

But then I read in the Wall Street Journal today that Amazon didn’t make a licensing deal with the copyright holders for their new service…

So what gives?  Is Amazon just hoping that the world has changed enough in eleven years that an idea that crossed the line in 2000 won’t cross the line in 2011?

Well, yesterday, an Amazon press release gave us an answer:

SEATTLE–(BUSINESS WIRE)–Jul. 31, 2012– (NASDAQ: AMZN) – Amazon.com, Inc. today announced Cloud Player licensing agreements that bring significant updates to Amazon Cloud Player. The agreements are with Sony Music Entertainment, EMI Music, Universal Music Group, Warner Music Group, and more than 150 independent distributors, aggregators and music publishers.

It appears that the licensing deals were required to implement at least one new feature that just rolled out in Cloud Player: “scan and match,” where the application scans your music library and automatically unlocks the songs it finds there in your Cloud Player, without you having to upload them. This is the same way that the precedent-setting service I described in my original post, My.MP3.com, worked, and it’s the feature that got them sued into oblivion when they implemented it without licenses.
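Amazon hasn’t said exactly how its matching works, but the general shape of a scan-and-match feature is easy to sketch: fingerprint each local file, look the fingerprint up in the service’s licensed catalog, and on a hit unlock the catalog copy instead of uploading. Everything below (the catalog, the fingerprint function) is an illustrative stand-in, not Amazon’s code.

```python
# Illustrative sketch of a scan-and-match flow. This is not Amazon's
# implementation; the catalog and fingerprint function are stand-ins.
import hashlib
from pathlib import Path

# Pretend catalog: fingerprint -> track ID the service is licensed to stream.
CATALOG = {
    "not-a-real-fingerprint": "track-0001",
}

def fingerprint(path: Path) -> str:
    # A real service would use an audio fingerprint; a file hash is
    # enough to show the shape of the idea.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_and_match(library_dir: str):
    matched, to_upload = [], []
    for path in Path(library_dir).glob("**/*.mp3"):
        track_id = CATALOG.get(fingerprint(path))
        if track_id:
            matched.append(track_id)   # unlock the catalog copy, no upload
        else:
            to_upload.append(path)     # fall back to uploading the file
    return matched, to_upload
```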

This seems consistent with the distinction Amazon was making at the time of the Cloud Player launch, which was that they didn’t need licenses then because users were just uploading their own files. Now they’re not just letting people upload their own files, though — they’re playing back files that Amazon owns, with your permission to access those files being based on the contents of your music library. So, to avoid being sued into oblivion the way My.MP3.com was, licenses became required.

Personally, I still find this distinction between “files users uploaded” and “files we matched to files a user could upload” to be kind of thin. It originally seemed like Amazon did too, and that by tiptoeing up to the My.MP3.com precedent without getting licenses first, they were positioning themselves to challenge it. But any risk to the labels from such a challenge would now appear to be over.


How to crusade like a king in Crusader Kings II

I’ve written in this space before about how impressed I’ve been with the latest strategy game from Paradox Interactive, Crusader Kings II, and its first expansion, Sword of Islam. Those posts led to a discussion on Facebook in which I was asked to expand on them a bit, by taking them down to a more concrete level: strategies for how to play the game and win. So, here’s a post that will do just that.

I should preface this with a couple of notes. First, the tips below all assume you’re playing a Christian king; Sword of Islam allows you to play as a Muslim, and introduces some new mechanics for Muslim characters, but that’s a subject for another post. Next, I’m hardly one of the world’s foremost experts on CK2. There’s lots of people who have delved far more deeply into the game’s mechanics than I ever have, and you can find them all in one place: the indispensable Paradox forums. The community that posts there is thoughtful, respectful, and really, really smart. If you’re interested in talking CK2, you should definitely create an account and join the discussion there. There’s also a Wiki where players are gathering information to make a comprehensive reference on how CK2 works. Finally, if you want more of a tutorial walkthrough than a collection of general tips, there’s an excellent thread at Something Awful that walks you through the game in great detail.

That being said, on to the tips…

Ireland

Start small. New players have a natural tendency to want to start the game playing one of the larger empires — France, say, or Byzantium. The assumption they’re making is that it’s easier to play a big, powerful kingdom than it is to play a small, weak one. But in CK2, the opposite is true. CK2 is a game about relationships, and when you’re running a huge empire, you have a lot of relationships to manage all at once, which can be overwhelming. Choosing a smaller nation lets you learn the ropes by dealing with a much smaller cast of characters.

There’s lots of opinions about which kingdom is the best for new players to start with, but the consensus in the forums is that (if you’re starting the game at its earliest start date, 1066) any of the provinces of Ireland are ideal “starter kingdoms.”

There are several reasons for this. Most of them only have a couple of provinces you need to manage, which reduces the number of variables you have to juggle to the absolute minimum. In 1066, Ireland is divided pretty equally among several small kingdoms, so you don’t have a big, powerful neighbor threatening you right away; all your neighbors are just as weak as you are. There are powerful nations in the region, like the Normans; but they’re all separated from you by ocean, which makes it harder for them to invade you and thus defers the day when you’ll have to deal with them. And all the Irish provinces have a natural goal — unification of the island into a single Kingdom of Ireland — that makes for an ideal “tutorial quest”; once you’ve learned the game enough to bring the rest of Ireland under your control, you’ll have learned all the essential skills you need to run a kingdom of any size.

Think like a Godfather, not like a President. If you’ve played strategy games before, or really if you grew up anytime after the year 1648, it’s natural to think of your kingdom in terms of a nation — a cohesive political entity that has a unique identity transcending whoever happens to be ruling it at the moment.

To be successful in CK2, you need to rid yourself of that notion, because it didn’t exist in the medieval world. Back then, the concept of the state was inextricably mixed up with the concept of the ruler; the state, and everything in it, was the personal property of the king and his family. Or at least it was as long as the king could keep another king from coming along and taking it from him. It wasn’t a nation as we understand the word today; it was the king’s “turf.”

Take the example of the Irish provinces, mentioned above. Let’s say you choose to play as the duchy of Connacht. That might sound like you’re choosing a nation to play as. But really, you’re choosing a family — specifically the Ua Conchobair, the family from which sprang the historic Kings of Connacht during that period.

The reason this is important to understand is that your job in CK2, if you go down this road, is not to work for the greater glory of Connacht. Let me say that again: as Duke of Connacht, your job is not to protect and expand the power of Connacht. Your job is to protect and expand the power of the Ua Conchobair family. Connacht is merely the raw material you use to make that happen.

This means that the road to success isn’t to think of yourself as “leader of Connacht.” It’s to think of yourself as the Godfather of the Ua Conchobair family, in the same way that Michael Corleone was the Godfather of the Corleone family in the Godfather movies.

This particular example, in fact, is instructive. If you think back to the first Godfather movie, you’ll remember that it’s established that the “turf” of the Corleone family is in New York City, and has been ever since the original Godfather, Vito Corleone, carved it out after arriving from Sicily at the beginning of the 20th century. At the end, though, we see Michael pick up the entire Corleone empire and move it to Nevada, because he judges (correctly!) that the future of organized crime is going to be found there. Lieutenants who want to stay in New York for sentimental reasons are ruthlessly cut loose from the family. Crime figures in Nevada who stand between Corleone and the “turf” he wants there are gunned down.

This is how you have to think to be successful in CK2. Playing as Connacht, for instance, it’s possible to win glory for your family by taking on your immediate neighbors, wresting their kingdoms from them, and taking them as your own. But it’s equally possible to win glory for your family by marrying your children strategically into the dynasties of a distant, powerful kingdom, yielding children of your own blood who grow to rule it, or by taking the cross and carving out a Crusader kingdom in the Middle East, planting crowns of sandy kingdoms upon the heads of your children. It’s quite possible for the family Ua Conchobair to grow to great power, in other words, without Connacht growing any further on the island of Ireland. It’s even possible for the family to prosper without any holdings on the Irish isle at all.

Play dynastically. This leads to a related point, which is that if playing as a family rather than a nation is the way to go, ensuring the continuance of your family line is of the utmost importance.

What this means, at the most basic level, is heirs; specifically, male heirs. You must have male heirs for the family to continue. There are lots of ways to lose a game of CK2, but the easiest is to get so wrapped up in other things that your character never gets around to fathering any male children. Whoops! Game over.

CK2 character screen

Image from Something Awful’s CK2 tutorial thread (click image to read)

And it’s not enough to just produce male heirs. You have to produce decent male heirs — “throne-worthy” male heirs, in the terminology of the time. Here is where understanding of character statistics and traits comes in. Statistics are the numeric scores that are attached to a character — things like “Diplomacy,” “Intrigue,” “Martial,” etc. Traits are the character attributes displayed as icons — things like “Brave,” or “Drunkard,” or “Hunchback.” Together these define a character.

How they affect what the character himself does is pretty obvious; a character with a high Martial score and the “Brave” trait will do well on the battlefield, for example. What’s less obvious is that a character’s statistics and traits also affect how other characters perceive and react to that character. So if your male heir has that “Brave” trait mentioned above, other “Brave” characters will admire his courage, raising their opinion of him; but characters with the “Craven” trait, importantly, will envy his courage, lowering their opinion of him.

This means that whether or not your heir is accepted by the rest of your court will depend in part on his raw stats, and in part on who exactly the other people in the court are. Some traits are seen as more “kingly” than others — an heir who is Charitable will generally have an easier time picking up his father’s crown than one who is Greedy. (In CK2’s taxonomy of traits, these are known as virtues — trait icons with a green background and a number on them. The opposite of virtues are sins — numbered trait icons with a red background.) But a Charitable heir in a court full of Greedy vassals and advisors may find that his virtue alone is not enough to ensure his succession.

The less “throne-worthy” your heir is judged by your other vassals (including his siblings!) to be, the more likely it is that upon your character’s death they will launch bids to wrest the crown from your heir’s hands into their own. These bids can lead to long, destructive civil wars that break up a mighty empire, so it’s critical that you do everything you can to ensure that when your character dies, a strong, capable heir is waiting for you to play as next.

Breeding is important. There’s three ways you can affect the likelihood of producing a throne-worthy heir.

The first: marry the right person. Parents can pass some of their traits on to their children, so if you marry your character to a paranoid, lunatic hunchback, you should not be surprised when your kids turn out to be less than ideal. Similarly, a child who has the “Inbred” trait has huge negative modifiers applied to their stats, so you don’t want to marry within your own family if you can avoid it.

The second: have lots of kids. Having too many kids can pose a problem, since kids (especially male kids) tend to expect Dad to give them a province to run when they reach adulthood at age 16, so having more kids than provinces can cause disputes within the family. But those problems are usually easier to deal with than the ones that come from not having a decent male heir.

The third: educate your kids well. When a child is born in CK2, their initial stats and traits come from the stats and traits of their parents. But as they grow up, their “personality” is shaped by the people you surround them with. This gets more pronounced at age 6, which is the age when a child can start their education. “Education” in the world of CK2 doesn’t mean sending them to school, though — it means assigning them a guardian, another character from your court, to whom they become a sort of apprentice. The traits of a child’s guardian can be picked up by the child, so for your oldest kids, anyway, it’s worth taking a little time to find someone with good, throne-worthy traits to educate them; there’s no guarantee the kid will learn from their example, but it can’t hurt to try. (Note that you can choose to make your own character your child’s guardian if you wish; this gives you the chance to mold the child’s character directly by choosing how to react to events in the child’s life, but if your character has negative traits, it’s possible you can pass them on to the child as well.)

Note that guardians don’t just have the opportunity to pass along their traits to the children they educate; they also have the opportunity to pass along their culture as well. That means that if you’re an Irish king, and you give your oldest son to a character from France to educate, there’s an outside chance that when the kid’s education ends he’ll come away acting a lot more like a Frenchman than an Irishman. This kind of cultural difference is a huge turn-off for vassals — no self-respecting Irishman in 1100 would want to live under a Frenchified king! — which can make passing the crown to him quite difficult, as vassals revolt left and right to install a “true Irish king” instead of your weird son. Similarly, a guardian can pass along their religion as well, and heretics have a hard time rising to the throne of a Christian kingdom; so if there’s a guardian who looks great except for the minor problem that they belong to a heretical church, think hard before handing your kid over to them.

Know the law. The last piece of keeping your dynasty alive is making sure that the laws of your kingdom make it easy to do so. Different kingdoms in CK2 have different laws governing which children can succeed a king. You can change these laws to better suit you, but you can only do so once in each character’s lifetime, and only after that character has held the throne for at least ten years; and changing them can seriously anger the characters who lose out in the change (like children who used to be in the line of succession, but now are not). So it’s worth thinking carefully about which approach you want to take to succession over the long term.

The relevant laws fall into two basic categories: gender laws and succession laws. Gender laws determine how (if at all) women fit into the line of succession; succession laws determine how the line of succession is organized. Not all cultures have access to all the available gender and succession laws; while it is possible to have a gender law that makes women equal to men as candidates for succession, this option is only available to a small number of cultures.

Gender laws break down as follows:

  • Agnatic: Only men can succeed to the throne.
  • Agnatic-Cognatic: Women can succeed to the throne, but only if no eligible male heirs are available.
  • Absolute Cognatic: Women and men have equal eligibility for succession.

Succession laws break down like this:

  • Seniority: The oldest member of your dynasty inherits all your holdings. (Note that this doesn’t mean your oldest child inherits; it means that the oldest dynasty member inherits, who might be your character’s younger brother, or second cousin, or even his father.)
  • Gavelkind: Your holdings are divided equally among all eligible heirs.
  • Primogeniture: Your oldest eligible child inherits all your holdings. If that child has died, their oldest eligible child inherits, and so on down their branch of the family tree. If they died without having any children, the process repeats with your second-oldest eligible child’s branch of the family, then the third, etc.
  • Elective: You nominate a successor, who can be anyone in your court (not just your children), for each of your titles. Others can put themselves forward as candidates for succession to one or all of your titles as well. Your vassals vote on which candidate to give each title to when your character dies.

You will note that, from the player’s perspective, none of these laws are perfect; all of them carry a degree of risk to the continuation of your family line. To a lot of new players, Gavelkind succession seems like the fairest choice — why not give each of the children a slice of the pie? But when you try it you quickly discover the answer to that question: because it takes your vast kingdom and turns it into several smaller kingdoms, each with a child on the throne scheming to take over all the others. Elective seems fair to our modern, democratic sensibilities, until you realize that each of your titles gets voted on separately by the vassals who belong to that title, so if your heir is popular everywhere but one corner of your empire, it’s entirely possible for that corner to give the title to a Favorite Son of their own, peeling it off from your kingdom. And so forth.
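
If it helps to see a couple of these rules spelled out, here’s a toy sketch of how primogeniture and gavelkind divide things up. This isn’t CK2’s actual logic (the real game layers on gender law, title ranks, and a hundred edge cases), and the character and title structures below are made up purely for illustration.

    # Toy illustration of two succession laws; not CK2's real implementation.
    # Characters are dicts like {"name", "birth_year", "alive", "children", "titles"}.

    def primogeniture_heir(ruler):
        """Oldest eligible child inherits; if that child is dead, recurse down
        that child's branch of the family tree before moving to the next child."""
        for child in sorted(ruler.get("children", []), key=lambda c: c["birth_year"]):
            if child["alive"]:
                return child
            heir = primogeniture_heir(child)  # dead child: try their line first
            if heir is not None:
                return heir
        return None  # no eligible heir anywhere down this line

    def gavelkind_split(ruler):
        """Divide the ruler's titles as evenly as possible among living children."""
        heirs = [c for c in ruler.get("children", []) if c["alive"]]
        if not heirs:
            return {}
        shares = {heir["name"]: [] for heir in heirs}
        for i, title in enumerate(ruler.get("titles", [])):
            shares[heirs[i % len(heirs)]["name"]].append(title)
        return shares

Run gavelkind_split on a duke with three living sons and four counties and the problem is obvious right away: nobody ends up strong enough to matter, and everybody ends up with just enough to make trouble.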

Because of this, there’s no one “best” succession law that you should always go with; the choice depends on your play style and the makeup of your court. Personally, I tend to go with Elective — it gives you the flexibility to put a more-capable younger son up for your throne if your oldest is a drooling idiot or a heretic, and it’s usually possible to bribe or strongarm difficult vassals into voting the way you want them to. But your choice may be different.

And that’s where I’ll wrap this set of tips up for now. If you have questions about other elements of CK2, feel free to ask in the comments or on Facebook/Twitter/etc. and I’ll be happy to answer them there, or in a second post if they’re complicated.


Jason recommends: Crusader Kings II: Sword of Islam

Crusader Kings II: Sword of Islam

I’ve written before in this space of how much I enjoy Crusader Kings II, the latest historical grand-strategy title from Paradox Interactive, so I won’t bore you further by reiterating the reasons why. But I do want to take a moment to tell you about the release of the first major expansion pack for that game, Sword of Islam, which takes a great game and makes it greater.

The obvious limitation of the original CK2 was that it was a story entirely told from the perspective of medieval Christianity. Islamic kingdoms existed in the game, but you couldn’t play them; they were there solely to provide an Other the game could threaten you with.

Sword of Islam fixes that, by unlocking all the Muslim dynasties for you to play. Now you can fight to defend Al-Andalus from the grim forces of the Christian reconquista, or struggle against the Byzantine Empire as the Seljuq Turks. A whole new range of interesting strategic problems is opened to you.

But the expansion goes beyond just unlocking previously unplayable empires. It also adds new game mechanics specific to Islamic dynasties that make playing the game as a Muslim king a fresh new experience.

CK2 players, for instance, will know that one of the key challenges they have had to overcome to be successful in the game has been the consistent production of quality heirs. When your character dies, your kingdom passes to whomever is your designated heir, and if that person isn’t up to the responsibility, social unrest and long, bloody civil wars can easily be the result. But Christian kings only have one wife at a time, so if your king and his queen can’t produce healthy, well-adjusted children, you quickly find yourself looking down the barrel of a serious Henry VIII problem.

When playing as a Muslim ruler, though, Sword of Islam flips this dilemma on its head. Unlike Christian kings, Muslim ones can take multiple wives — and in fact, the larger the realm you rule, the more wives the people expect you to have. (Take too few, and your prestige slips as people start to wonder if there’s something wrong with you.)

Multiple wives means lots of children, so you’re never without heirs. But now you have a different problem: too many heirs. It’s not hard for an Emir with three wives to pop out eight or ten or twelve healthy male children — and when they reach adulthood, that’s eight or ten or twelve guys you’re going to need to find employment for, because each one of them who sits around unemployed drives up the new Decadence score. And if your dynasty gets too decadent — if your people start to think that all their work and taxes are going to provide lavish lifestyles for a bunch of lazy layabouts — it’s only a matter of time until some ambitious family rides out of the desert to challenge you for your throne.

The game also has nice touches that connect it to Islamic faith. For example, like all good Muslims, your characters in Sword of Islam are expected at least once in their life to make the hajj — a spiritual pilgrimage to the holy city of Mecca. But unlike today, when the devout can get to Mecca (which is located in modern-day Saudi Arabia) from anywhere in the world in safety and comfort in a matter of hours, making the hajj in the Crusader Kings era is a much longer and more dangerous trip. If you’re an Iberian Muslim, for instance, it requires you to travel from modern-day Spain across the entire length of North Africa, through huge, scorching deserts and hostile Crusader states, moving only as fast as a horse, camel, or your own feet can carry you. This makes the hajj an epic journey, long and fraught with danger as well as opportunities for spiritual enlightenment. And it’s one that every character you play over your dynasty’s history must undertake.

So yeah, if you like Crusader Kings II, you should think of Sword of Islam as a must-buy. Thankfully, it’s inexpensive: just $9.95. (Compatibility note: if you have the Steam version of CK2, you should buy Sword of Islam through Steam as well.) And for a little extra atmosphere, you can shell out $1.99 more and get Songs of the Caliph, another add-on pack that adds 11 minutes of new music to the game’s soundtrack that play only for Muslim rulers.

Like I said, it’s a good deal that makes a great game even greater. You should jump on it.


The image that illustrates the White House’s communication failure on health reform

The big news today is obviously the Supreme Court’s 5-4 decision to uphold the Affordable Care Act, which removes the final major challenge that could prevent the health care reform law from moving to implementation.

That’s a victory — one that will be welcomed by millions of previously uninsured and underinsured Americans. But getting to that victory required traveling a long, hard road. And I believe that the journey was longer and harder than it had to be, because the Obama administration seemed reluctant to really make the case to the American people that this health reform package needed to pass — and when they did, they did so in ways that were ineffective, or even self-defeating.

Longtime Readers will know that this is a critique I have been making of the administration for years now. (See here for the most fully fleshed-out version.) But I saw something today that really crystallized why I felt this way. It brought back in one sudden rush all the frustration I felt watching progressive leaders cede the field to Tea Party wingnuts back in 2009 and 2010. So as a professional communicator myself, I wanted to highlight it for you — as an example of how not to do advocacy communications.

The “something” in question is the image over there on the right. (Click it for a full size version.) It’s an image that the White House proudly blasted out across all its social media channels (such as this tweet, for instance) this afternoon, after the Supreme Court decision came down. It’s also featured on the White House blog. And it sort of illustrates everything that has been wrong about how the White House has talked about this issue from day one.

Let’s start at the top: the overall concept itself. The White House calls this an “infographic.” But it isn’t. An infographic is a visual representation of information: an illustration that takes raw numbers or disconnected facts and makes them comprehensible by using graphics to pull out trends and tendencies. This does nothing of the sort; it’s just words with checkboxes next to them, saved in an image file format. Just saving text as a JPEG does not magically make it an infographic, as Edward Tufte would probably explain to anyone he saw touting this as an “infographic” just before he beat them to death with one of his books.

Next, the presentation. An infographic is a type of visual communication, and one of the basic concepts of visual communication is that the person on the other end should still get a sense of your message even if they don’t read any of the words. There’s lots of ways a good graphic designer can do this; colors, type, and imagery can be applied to summon up feelings in the viewer that match the tone of your message, for example. Look at the famous “Uncle Sam Wants You” recruiting poster from World War I, for instance; even if you don’t read the words, Uncle Sam’s stern expression tells you that Serious Matters are afoot, and his pointing finger tells you that they need your attention. The words are powerful, but you can get the vibe just from the art.

This image does none of those things. Look at it for a second, taking care to look at the overall design rather than reading the words. (To make that easier to do, here’s a version of the image with the text fuzzed out.)

What does it look like when you’re looking at the design and not the words? It looks like a memorandum, or maybe a doctor’s prescription. (My guess is that they were shooting for the latter, to give it a “health care” feel.)  Neither of those are things that anyone would associate with a feeling of victory, or freedom from fear, or social justice, though. They’re boring, bureaucratic, process-oriented documents. And even worse, they’re documents you usually get when there’s bad news on the way, or some annoying problem you have to deal with. You don’t get a prescription when your kid graduates from college; you get one when you get a rash that won’t go away. You don’t see many people posting their prescription slips on Pinterest.

Additionally, because so much of the design is taken up by words, when you scan the overall layout it comes across as a “wall of text” — a long, monotonous document. That’s death online, because online readers don’t read; instead, they quickly scan through whatever they’re looking at, searching for nuggets of information that are particularly interesting. But since the text here lacks any visual cues to support that behavior (no boldfacing of important words, no variation of font size, etc.), it’s going to be quickly discarded as uninteresting by a lot of people. On top of that, the typeface used to present the words is harder to read than it should be, especially when the image is viewed at less than 100% size; I’m guessing they were aiming for a “handwritten” feel, which is cute in theory, but in practice it just impairs the legibility of the message.

Now that we’ve looked at the design, let’s look at the copy — the words. Because this image is so text-heavy, the wording it uses is absolutely critical to any effectiveness it may have. Unfortunately, it falls down hard on this count too.

First, the authors seem unable to decide how they want to organize the information they’re trying to present; sometimes it’s organized by who wins (“millions of young adults,” “small business owners,” “hard-working Americans”), sometimes by who loses (“stops insurance companies,” “ends insurance company power,” “insurers who spend too much on CEO salaries” — hey, I’m seeing a theme here), and sometimes by outputs (“hundreds of community health centers,” “state-based marketplaces”). It gives the copy a weirdly dissonant feeling; it doesn’t hang together cohesively, the way a good message should.

Second, the frequent repetition of “beginning in 2014” after several of the items makes it read like a document with lots of asterisks indicating the presence of fine print, and people know that the fine print is where documents hide the parts that are really going to screw you over. It’s true that many of the act’s provisions don’t kick in until 2014, but in a mass-audience message like this, it’s not necessary to spell that out; just give people a link to a FAQ listing when each provision kicks in, or lead off with a single sentence like “By 2014, the Affordable Care Act will:”. That way you only hit the bad news (that so much of the good stuff the act is supposed to provide is still years away from being available) once, rather than multiple times. The key to implanting a message in someone’s memory is repetition, and by repeating “beginning in 2014” over and over, the message this image plants is that if you need help today, this legislation is not going to be much help.

The third criticism I have of the text is the biggest — and, to my mind, the most illustrative of the administration’s problems in communicating about this act overall.

It can be summed up in a simple question: what’s the narrative thread connecting all of these list items together?

Here’s what I mean. A good message is all pulled together around a single idea; that way, even if the reader forgets the individual details, the underlying message can still stick. But this message reads like a grab-bag of different programs for different people; there’s no through-line connecting the tax credits for small business owners, say, to the ability of young people to stay on their parents’ insurance. (Except for the sense that these are all good things, of course, but that’s an awfully weak reed to prop yourself up against.) The two details stand in isolation, rather than reinforcing a common theme.

In defense of the White House communications staff, there’s probably not a lot they could have done on this last point. The message reads like a grab bag because the Affordable Care Act is a grab bag. The way to fix that would have been to frame it better when it was first being drafted, but that ship has sailed. Still, I can’t help but feel that if one of the things that makes your legislation hard to sell people on is the sense that it’s too complicated and hard to understand, the way to overcome that is not to reel off a list of unconnected facts; it’s to tell people a story that places those facts into a narrative context.

This has been the most glaring omission from the administration’s communications efforts around health reform ever since they first took up the issue. There’s no narrative, no story, and that’s fatal, because stories are what move people. “Death panels” wasn’t true, but it was a hell of a story, and it stuck in people’s minds long after lists of individual reforms had faded away. That’s why people say they like individual planks of the Affordable Care Act, but dislike the act itself; they don’t remember that the two are connected. Ask them about “health reform” or “Obamacare” and the first things that leap to their minds are the narratives opponents were putting forward against it, not the individual reforms that supporters were (and still are!) talking up.

Narratives, in other words, stick. And the fact that we’re now at the tail end of three years of furious debate, and supporters of reform still haven’t come up with a narrative of their own — one that explains why reform is worth supporting, and that something like this infographic could be framed around — is pretty damning.


Jason recommends: World of Tanks

World of Tanks screenshot

I’ve been getting a kick lately out of a game called World of Tanks, so I figured I’d pass along my recommendation. Especially since you can get started playing World of Tanks for the low, low cost of $0.

World of Tanks is an example of one of the biggest trends in the gaming business today: so-called “free-to-play” games. The standard business model for games has always been that the customer pays for the game up front, and then online multiplayer modes are provided either free or for a monthly subscription fee. But in all cases, the customer had to pay something to start playing the game. In the free-to-play model, you can download the game for no charge, and there’s no recurring subscription fee to sign up for.

“So how does the company behind the game make money?” you ask. The answer is that the free game isn’t really the whole game. There’s chunks of game content that the company holds back and sells separately, usually through their Web site or an in-game “item store.” You don’t technically have to buy any of that stuff to progress through the game — they generally make it possible for you to earn it without paying if you play long enough — but if you’re willing to throw down a couple of bucks you can skip ahead and equip your brave paladin with the +8 Vorpal Sword of Disposable Income right now, without having to grind your way through twenty game levels.

Game publishers like free-to-play because it turns out there’s enough impatient people in the world to make the above proposition quite lucrative. For players, though, like most things, the free-to-play model has its pros and cons. The biggest pro is obvious: you can try it out for free to see if you like it. The biggest con is more subtle: designers of free-to-play games have to master a delicate balancing act. To make their game fun, they have to avoid both holding back too much content for paying customers (leading players to see staying competitive in the game as expensive and abandon it) and holding back too little (leading the company to make no money off the game). It’s a hard balance to strike, so there’s lots of really abysmal free-to-play games out there.

I am happy to report that World of Tanks is not one of them. It’s actually a very well-designed, well thought-out game. Which makes it stand out from the free-to-play crowd somewhat.

The basic concept of World of Tanks is simplicity itself. Here it is:

  1. There is a world.
  2. In this world, there are tanks.
  3. The purpose of these tanks is to flip out and kill each other.

That’s it. That’s the complete story of World of Tanks. You have a tank, you get assigned to a team with fourteen other people (each with their own tank), your team is dropped onto a map with another team of fifteen, and the two teams duke it out until only one is left standing.

http://www.youtube.com/watch?v=_0y2wmE4lMo&feature=relmfu

So World of Tanks isn’t going to win any Original Story awards. But that doesn’t matter, because the designers took all the time they could have spent writing a story and put it into fine-tuning an array of more than 150 World War II tanks from the US, Germany, France, and the USSR. And the fun of the game is exploring how each of them matches up against the others.

See, when you start the game, you don’t have access to all those tanks. You have one tank, and it’s the lowest of the low end — a “Tier I” tank. But you’re only matched up against other players with those weak tanks as well, so you can learn the ropes without having to worry about getting stomped on by a giant Tiger tank. As you play rounds, you earn experience points, which can be used to unlock upgrades for your weak tank — a faster engine, say, or a bigger cannon. And eventually you earn enough to unlock a tank from the next class up, Tier II. Then after a few rounds with your Tier II (against other players in that tier — no beating up on the newbies! — with maybe a few IIIs and IVs thrown in, to keep things interesting) you earn enough points to upgrade it, and then you unlock a Tier III… and so on.

This keeps you coming back, because no matter how fearsome your current tank is, you’re always being pitted against other players whose tanks are just as fearsome, if not a little more so. So you take out your shiny new Tier III Russian BT-7 tank, for instance, feeling pretty badass now that you’re not in Tier II anymore, and suddenly you get smoked by someone driving a Tier IV American M3 Lee. Dammit, you think, if I’d just had a little more horsepower I could have scooted away from him! And then you notice that you’ve earned enough experience for a new engine. Bam! Look out, buddy.

Another wrinkle the game throws at you is that once you start leveling up, you can branch out into several different types of tanks. Light tanks are fast and nimble, but have small cannons and weak armor. Heavy tanks are monsters with huge guns and thick armor, but they’re slow and ponderous to maneuver. Medium tanks are balanced workhorses, but can be outrun by lights and outfought by heavies. Tank destroyers have big guns like heavy tanks, but light armor; they’re superb for ambush attacks, but can’t survive a head-to-head fight. And self-propelled guns are mobile artillery; they can shoot across practically the entire map, but if an enemy finds you at close range, you’re done for. The range of tank options available gives you a chance to specialize; you can find tank types that fit how you like to play, and then concentrate your leveling-up on tanks in those categories.

And on top of all that, the game gets a lot of implementation details really right. The controls for driving and shooting are simple and intuitive. The tanks move convincingly; the really heavy ones lurch forward and back like drunken pedestrians. When your loader puts a round in the tank’s gun, you hear a satisfying thunk. Hits from big guns look and sound different than hits from smaller ones. And so forth. Oh, and each battle plays out in 5-10 minutes, so it’s perfect for a quick break.

Anyway, it’s a really good game. And the best part is that until you get to Tier III or so, you truly can play for free and you’ll never feel like you’re missing anything. Once you get to the higher levels, you may want to throw down some change for a custom tank, or bonus experience points from your battles so you can level up faster; but even then it never feels like they’re forcing you to pay — the game never bogs down into a long, slow grind like so many free-to-play games do. I’ve never liked a free-to-play game enough to be convinced it was worth paying for its content, but World of Tanks sold me; I plunked down $15 for a bunch of enhancements. Considering the entertainment I’ve gotten out of the game, $15 feels like a bargain.

So yeah, World of Tanks is awesome, and you should totally be playing it. The download is here. Go get it.

And if you do join up, let me know what your username is so I can remind myself to feel bad when you blow me up.


And then they lynched the man who made the world

Alan Turing

Today is the 100th anniversary of the birth of Alan Turing.

If you haven’t heard of him, here’s a brief bio. Turing was a mathematical genius, one of the pioneers of computer science, and a key figure in the Allied victory in World War II.

Turing’s early work focused on a deceptively simple question: what types of problems can be solved by a computer? In 1937 he published a paper that described the operation of a hypothetical device that could be programmed to solve any problem that was theoretically machine-solvable. This concept, the “Turing machine,” would provide one of the foundations upon which modern computer science would be built.

When his nation, the United Kingdom, found itself in a second world war with Germany, the British government recognized his ability and squirreled him away at a place called Bletchley Park to work on one of the hardest problems of the war: how to crack the codes the Nazis used to encrypt the signals that controlled their armies and fleets. More specifically, they tasked him with breaking one particular code: Enigma.

Before the war, earlier versions of Enigma had been cracked by a team of Polish mathematicians led by Marian Rejewski. Their work had come to an end when Poland fell to the Germans, but they managed to share it with their British allies before the conquest. Building on their work, Turing developed a new way to crack Enigma, one with a crucial twist — it was able to quickly discard thousands of incorrect solutions without needing to test them all in turn, dramatically reducing the difficulty of reaching the one true solution. Turing’s insight led to the development of the British Bombe, the device that would give the Allies access to German military communications throughout the rest of the war. The intelligence that the British derived from this source and a few other broken German codes became known as “Ultra,” and while nobody can quantify exactly how large a contribution Turing’s work on Ultra made to the Allied war effort, Prime Minister Winston Churchill told King George VI that “it was thanks to ULTRA that we won the war.”

After the war, he turned his attention back to computer science. He worked on the Manchester Mark I, one of the first computers able to be “programmed” in the modern sense by storing instructions in memory rather than having to be arduously re-wired for each task. He also devised the “Turing test,” a way to define the point at which a machine can be said to have achieved intelligence. (We have yet to devise a computer that can pass it.)

And then, on June 7, 1954, he killed himself.

The reason for his suicide had nothing to do with computers. It had everything, however, to do with bigotry. For while this man who had helped save the world from the Nazis and laid down principles that would guide the computer revolution for decades to come was brilliant, he was also a homosexual.

In Britain, in 1952, being gay was a crime. And in March of that year, Turing was convicted on charges of “gross indecency with a male.” He was presented with two possible sentences to choose from: prison, or chemical castration — a process of being pumped full of estrogen in order to suppress the libido. He chose the estrogen treatments; they turned the trim, athletic Turing, an avid runner who during the war would go to meetings in London by running the 40 miles there from Bletchley, into a bloated wreck. And one can only imagine the shame he must have felt at being held up by his own government — the government he had worked so hard to save — as a kind of freak, an aberration of nature that needed stamping out.

So he took an apple, laced it with cyanide, and ate it.

Why am I telling you this story? Because it has always struck me as one of the great tragedies of the 20th century. Here was a man who by all accounts was a genius, and who contributed more to his nation and the free world than it could ever repay. And his compensation for this work — the thanks he received — was to be shamed and hounded until the only way out could be found through a bite from a poisoned apple. And all for a reason that would strike anyone today as utterly unremarkable.

Alan Turing deserved better than what he got from us.

Slowly, belatedly, the world has begun to awaken to this. A popular campaign in Britain in 2009 nudged Prime Minister Gordon Brown into issuing a formal apology for the British government’s treatment of Turing. And now, in 2012, people around the world are marking the 100th anniversary of his birth, just as you and I are doing here.

It’s not much. It’s not enough. But the only way good can come out of the injustices of the past is if they are remembered, and learned from. So let us today remember Alan Turing, so that we may remember to show to others the compassion we should have shown to him.


Against live-tweeting

Uncle Sam says: turn off your damn phone

Here’s something I was reminded of this morning when I checked in on Twitter and saw a stream of comments from the just-started Netroots Nation 2012 rolling in:

If you’re in the audience at a session at Netroots, or at any conference, really, do a favor for both yourself and whoever’s presenting to you and put away your damn gadgets, at least until the session is over.

I know you think you’re adding value to the proceedings by “live tweeting” them or whatever. But you’re not. All you’re doing is subtracting value. You’re hurting yourself, since you’re going to get less out of the presentation when you split your attention between it and your blinking gizmo than you would if you engaged fully with the presenter and her message. And you’re hurting the presenter, since the message you’re sending to her is that you’re not paying attention, which may lead her to screw up her presentation in an attempt to win your attention back.

(What? You thought they didn’t notice you’ve never looked up from your screen? Any presenter worth his salt is constantly keeping an eye on the audience’s interest level.)

I know you think that it’s critical that you get your opinions on the presentation out to your legions of followers right this minute. But trust me, your followers can wait for your thoughts until the session is over; you’re not Edward R. Murrow, and this is not the London Blitz. And by waiting a few minutes to hear the whole presentation, you can actually form a thoughtful opinion of its entire content, rather than just a knee-jerk reaction to whatever words the presenter just said.

There’s only one thing worse than breathlessly tweeting to people who aren’t in the session with you: breathlessly tweeting to people who are in the session with you. The polite term for this is “the backchannel,” and it’s always existed at conferences. In the past, though, backchannel discussions took place in the hallway after the session, not while the session was actually going on, and that makes a big difference. Think about it for a second: if you were having the same conversation out loud, right there in the room as someone’s trying to teach you something, you’d be considered an irretrievable boor. You can’t wait to have that conversation until after the session’s over? It’s that important?

This is why I get frustrated, when I make presentations, to hear people telling me about tweets from people in the audience — even glowing, positive ones. The whole reason I’m there is to try to engage with you, provoke you, fire up your mind; and that’s something that requires a degree of mental intimacy between the two of us, intimacy that can only be achieved if we give each other our undivided attention for a few minutes. If I’m half-assing the presentation, phoning it in, sure, feel free to get out your iPad. But if I’m not — if I’m working hard up there to communicate something to you, as I generally am — show me the respect of at least meeting me halfway. Even if you’re not convinced by what I’m saying, I’d rather have spirited, thoughtful criticism right there in the room; spirited, thoughtful criticism teaches me things, makes me a better presenter and sharper thinker. Passive-aggressive snarky tweets don’t do any of that.

So next time you’re sitting down in a conference session, do us both a favor. Put your devices away, at least for a little while. They won’t spoil. And you might just learn something!


Windows Live is dead

Bill Gates introducing Windows Live in 2005

Photo by niallkennedy

Seven years ago in this space, I noted the rollout of a big new branding effort from Microsoft, called “Live,” and asked what the hell it was actually about:

[I]t’s a mixed bag. What does that mean for the future of “Live”? The best place to look for analogies is probably the launch in 2000 of the Last Big Thing from Microsoft: .NET. The “Live” launch actually reminds me a lot of that. When .NET was first revealed, Microsoft went crazy trying to shove everything they did under the .NET umbrella. Windows became Windows.NET. Office became Office.NET. Their various server packages became .NET Servers. I imagined them frantically renaming all the streets in Redmond — “Main Street.NET” — to fit the pattern.

The problem was that the vast majority of these changes were purely cosmetic. Windows, Office, the servers, etc. weren’t being radically rewritten; they were just being rebranded. When you cut through the .NET hype, the actual technical accomplishments you found were more modest: a new runtime environment and API for building Windows applications, a new language (C#) hosted in that environment, and a few other things. The rest was hot air, as Joel Spolsky noted at the time.

.NET was not the Year Zero event that it was made out to be at launch. It was not a revolution for Microsoft; it was an evolution — and by overhyping it, they confused their customers, who couldn’t tell what was real and what was puffery in .NET. Eventually MS dropped the .NET hype, the products that had no real connection to .NET quietly went back to their old names (notice how the upcoming new version of Windows is not “Windows.NET Vista”), and .NET found its place in the market.

I imagine we’ll see something similar happen with “Live”: it’ll be another evolution in the Microsoft platform. The bits that are inspired will put down roots, the bits that aren’t, won’t. And eventually Microsoft will have to sharpen its definition of what “Live” is and pare back the bajillion other projects that are now confusing the brand.

(That post ended up getting quoted by Joel Spolsky, which was nice.)

Here we are, seven years later, and all that time Microsoft has been going through the process I predicted they would — trying to figure out what the “Live” brand should actually mean to their customers. Various products bearing the “Live” moniker have come and gone, but the definition of what “Live” itself is never really came into focus.

So it’s not really a big surprise to hear now that Microsoft has given up on it:

With the new version of Windows, many of the Windows Live products and services that had been packaged separately will be installed as a part of the operating system. “There is no ‘separate brand’ to think about or a separate service to install,” Mr. Jones wrote.

Most important, Windows 8 customers will be free to substitute non-Microsoft products and services in place of the re-branded Windows Live successors. “You’re welcome to mix and match them with the software and services you choose,” he says.

“Windows Live” is disappearing.

The whole “Live” story — from muddled conception, to haphazard deployment, to quiet abandonment — has played out in a pattern depressingly similar to other Microsoft efforts of the last ten years. Microsoft has shipped a lot of products over that time, but nothing really seems to tie them all together; there’s no grand vision at the heart of the company’s work anymore, unlike competitors such as Apple and Google. The products they ship range from the excellent (Windows 7, XBox) to the OK-but-not-quite-great (Windows Phone,  Bing) to the downright embarrassing (Windows Vista, Microsoft Kin). But try and think of a philosophical through-line that ties all those products together; you can’t. That’s a major, major problem.

Microsoft has (finally) started to unify the interfaces of all these different systems with their Metro design language, which helps unify their identities somewhat, but a visual identity standard is not a product vision. Windows 8, with its shift of focus away from the traditional Windows application towards simpler “Metro-style” applications that feel more like phone and tablet apps than desktop apps, may start to bring some of that vision back, who knows. But I’m skeptical that One True Interface can be devised that works as well on a 4″ phone display as it does on a 22″ desktop monitor (or a 50″ HDTV). We’ll see.

The biggest problem Microsoft has, I think, is that there is nothing they’re working on these days that makes a person like me look at them and think “damn, I wish I was working in their ecosystem.” I used to be a Windows developer; that should make me a primary target to become one again. But I feel very little reason to want to do so. If I were going to branch out of the open-source Web ecosystem, it’d probably be to learn Android, or even Objective-C for iPhone development, before returning to Windows. There’s just nothing exciting about Windows these days — not even the promise of access to a vast audience of potential customers, since the momentum on that score has shifted to the iOS world.

Of course, Microsoft is huge and sitting on an enormous pile of cash, so they could just keep on muddling through for quite a long time. I hope they don’t, though. I hope they get their mojo back. Because we need them, if only to prevent the future from being an Apple/Google duopoly.

UPDATE (July 5): Another “we never figured out what it meant, exactly” Microsoft brand bites the dust — Zune.


The question isn’t why lions want to eat your children. It’s why your children aren’t smart enough to run

Thanks to Slate (“news for nervous upper-class white people, since 1996!”), I discovered today that there is a whole little niche of parents posting videos to YouTube showing lions trying to eat their children.

Before you flip out, note that I said trying to eat their children. Trying and failing, because the lions are behind thick glass walls in zoos.

Videos like this:

And this:

And this (which can’t be embedded, ugh).

Doing the kind of investigative research they are famous for, Slate asked an expert on lions why lions appear to want to eat your baby. The answer, shockingly, is that to a lion, your child looks delicious.

Lions, it turns out, like playing with their food. Craig Packer of the Lion Research Center at the University of Minnesota noted that while “some of the lions look quite playful in their attempts … sometimes lions and cheetah will spend several minutes playing with wildebeest calves or gazelle fawns before finally chomping them.” He added that “predators generally treat calves/fawns/babies differently from adults because they are such easy prey; there’s no real chance of escape, so what’s the hurry?”

All of which makes total sense. Lions are apex predators; babies are not.

But the question that got me wondering wasn’t why the lions wanted to eat the babies. It was why the babies don’t freak out when a lion looks like it wants to eat them.

I mean, I know why you and I don’t freak out when we see a lion behind a glass wall in a zoo — we understand that the lion can’t get through the glass wall. But a baby doesn’t have that kind of understanding of the world, does it? So why do all these babies look totally unconcerned that the jaws of a gigantic beast are inches from their tiny heads? Shouldn’t evolution have weeded that kind of blissful ignorance out of our species long ago?

As with most things, comedian Louis CK has a wise answer for this question. Babies burble happily when confronted with slavering predators, he explains, because we’ve managed to screw up evolution:

http://www.youtube.com/watch?v=F-CvigrOuqE#t=2m7s

When we don’t have predators to worry about, we get lazy. And that leads to babies who are not terrified by lions.

So there you go! The solution to modern America’s problems: fewer glass walls, more lions.

P.S. Like that Louis CK bit? He’s selling three full-length concert recordings as DRM-free digital downloads on his web site for just $5 each. Buy them now, you can thank me later.


Ethical aggregation: it’s simple

Terry and the Pirates

The Columbia Journalism Review has an article up by editor-in-chief Cyndi Stivers begging for some way to tell the difference between ethical online content aggregators and unethical ones:

I daresay many are grateful for the Twitter feeds, blogs, and newsletters that pull together links to what we need to know about—and we also appreciate smart commentary about them. But sometimes a writer (or website) goes too far, hiving off huge chunks of someone else’s work and presenting them with minimal added insight, most egregiously without a nod to the original source. During a skirmish last year with Arianna Huffington, The New York Times’s Bill Keller complained, “In Somalia, this would be called piracy. In the mediasphere, it is a respected business model.”

It then describes some of the various efforts underway to define “ethical aggregation,” such as those of the Council on Ethical Blogging and Aggregation, which has brought together a bunch of old- and new-media types to chew on this question, and Curator’s Code, which is trying to get people to add bizarre little “via” and “hat tip” symbols to aggregated content. (Good luck with that.)

I’m sure the question of what the dividing line is between ethical and unethical aggregation seems complicated to Ms. Stivers and the various worthies participating in the above-named projects, but to me — someone who aggregates content pretty frequently, on this blog as well as on Facebook and Twitter — it seems really, really simple.

The difference between ethical and unethical aggregation is this: ethical aggregation makes the reader want to click through to read the original story; unethical aggregation doesn’t.

An example. See that quote I inserted up there from Ms. Stivers’ op-ed? That’s an example of me aggregating CJR’s content. But I did so in a way that teases it; you get a taste, and if you want more, there’s a link you can click that will take you to the full story.

Now, imagine if, instead of limiting my quote to a couple of sentences, I just copied and pasted the entirety of Ms. Stivers’ article here verbatim, maybe adding a couple of words of my own like “interesting” or “this is important.” And then I put a link you could click through to read the whole story. Who would ever bother clicking that link? Why click through to read something you already just read?

That’s the difference between ethical and unethical aggregation. Ethical aggregation increases reader demand for the original story; unethical aggregation decreases it.

Once you have this criterion in mind, distinguishing between the two types of aggregation becomes simple. Just look at a couple of examples. First, let’s look at Dave Winer’s linkblog. This is pure aggregation — a collection of links Dave thinks are interesting, with a sentence from each to give you a sense of what’s on the other end of the link. Is Dave creating any original content for this site? No — his contribution is just his editorial judgment as to which stories are interesting and which are not. But this is clearly an example of ethical aggregation nonetheless, because he’s not using the stories on the other end of those links to build traffic for himself. If you want the story, you have to click through; that’s the definition of ethical.

Now, for the counter-example, consider the experience Ad Age Media Guy Simon Dumenco had of having his content aggregated by the Huffington Post:

HuffPo’s aggregation… consisted of basically a short but thorough paraphrasing/rewriting of the Ad Age post — using the same set-up (i.e., pointing out that Apple had the misfortune of presenting its latest round of big announcements on the same day Weiner resigned from Congress) and the bulk of the data presented in the original Ad Age piece. Huffpo closed out its post with “See more stats from Ad Age here” — a disingenuous link, because Huffpo had already cherrypicked all the essential content. HuffPo clearly wanted readers to stay on its site instead of clicking through to AdAge.com.

So what does Google Analytics for AdAge.com tell us? Techmeme drove 746 page views to our original item. HuffPo — which of course is vastly bigger than Techmeme — drove 57 page views.

Fifty-seven page views — from a link on a site that proudly boasts it attracts “over 28M monthly unique, influential viewers.”

Now, I don’t raise this example to single out HuffPo — they’ve been ramping up the volume and quality of the original content they generate, and toning down some of the aggregation excesses of the past. I raise it just because it’s a classic example of unethical aggregation. Lifting Dumenco’s content, giving it a light rewrite, and running with it is unethical, because it destroys the demand for the original product. Anyone who’s read the HuffPo rewrite has no reason to click through, which deprives Ad Age of the chance to earn revenue on the product that they paid Dumenco to create for them. It takes money out of Ad Age’s pocket. Which is pretty undeniably a dick move.

So there’s my proposed criterion for distinguishing ethical and unethical aggregation. Does it drive readers to want to get the original product? Or does it drive them away from the original product? It really is as simple as that.


Interchangeable news story on President Obama’s announcement of “personal support” for gay marriage

Obama, rainbow, unicorn

Above: the President makes his historic and unprecedented announcement

WASHINGTON — In a move hailed by gay rights and liberal activists as “unprecedented” and “historic,” President Barack Obama announced today that he personally believes that same-sex couples should be able to marry, but not enough to actually do anything about it.

“At a certain point, I’ve just concluded that for me personally it is important for me to go ahead and affirm that I think same-sex couples should be able to get married,” the president noted, historically and unprecedentedly. “That being said, there’s no reason for anybody to worry that I’m going to help in any way to make it easier for those same-sex couples to actually do that.”

“I mean,” added Mr. Obama, “I’m not crazy.”

In addition to his assurances that he would in no way act on his deeply held personal conviction, Mr. Obama also pointed out that he believes that the states should continue to have the power to make the thing he is deeply and personally committed to totally and irrevocably illegal. Also, when asked, Mr. Obama declined to endorse the position that same-sex couples have a Constitutional right to marry.

“It is my deeply held personal conviction that same-sex couples should be able to get married,” Mr. Obama repeated, this time more slowly. “Unless, of course, that would offend their neighbors, or really anybody who lives within the same state as them, even people hundreds of miles away they have never met and will never meet. Then they’re on their own.”

Mr. Obama’s announcement was received with joy by gay rights activists and liberal commentators, all of whom said that this announcement was totally different from previous historic and unprecedented statements of deeply held personal conviction made by Mr. Obama regarding support for reining in the power of Wall Street, securing the rights of women, racial minorities, and workers, and ending the use by the United States of torture and secret, extralegal detention centers to prosecute the War on Terror, all of which were followed by token gestures in their general direction before being quietly abandoned.

“It’s true that Mr. Obama has made something of a habit of responding to genuine problems in society with rhetorical flourishes rather than substantive policy proposals,” explained one nationally prominent gay rights activist. “I’m confident, however, that unlike his previous historic and unprecedented statements, this historic and unprecedented statement will be followed by action, because unlike the others, this unprecedented and historic statement is on a subject that I personally care about.”

A well-known liberal political blogger agreed. “While others may doubt whether President Obama’s statement today is truly historic and unprecedented,” the blogger remarked, “I can assure you that it is not just unprecedented, but also historic. We’re all talking about it, after all. And if it wasn’t historic and unprecedented, would people who obsess over the most trivial minutiae of national politics really spend time talking about it?”

At a campaign stop in South Carolina, Mr. Obama’s opponent in the 2012 presidential election, presumptive Republican nominee Mitt Romney, took issue with Mr. Obama’s remarks, describing them as a “flip-flop.”

“Unlike Barack Obama,” Romney told a crowd of supporters, “my position on same-sex marriage has been totally consistent: I oppose it, unless I’m campaigning in New York City, California, or New England, in which case I take care to tack on some tepid remarks indicating that I might be willing to support a weaker alternative like civil unions without ever coming out and saying so.”

“In politics,” noted Mr. Romney, “consistency is a virtue.”


Extraordinary claims require extraordinary evidence, or, no, Abraham Lincoln did not invent Facebook

Abraham Lincoln did not invent Facebook

So the big meme rocketing around the social media universe yesterday was that Abraham Lincoln had filed a patent application for a very Facebook-sounding newspaper in 1845:

Lincoln was requesting a patent for “The Gazette,” a system to “keep People aware of Others in the Town.” He laid out a plan where every town would have its own Gazette, named after the town itself. He listed the Springfield Gazette as his Visual Appendix, an example of the system he was talking about. Lincoln was proposing that each town build a centrally located collection of documents where “every Man may have his own page, where he might discuss his Family, his Work, and his Various Endeavors.”

This, of course, is raging bullshit. But that didn’t stop it from being the Online Rage of the Day, as people passed the link around to each other, completely uncritically.

I say “uncritically” because even a brief perusal of the blog post making the claim, by someone writing under the name Nate St. Pierre, should have made it obvious that it was false. The post offered one (1) piece of hard evidence to back up its argument: a blurry, low-res scan of a newspaper page purported to be the “Visual Appendix” mentioned above. The image quality is so poor that you can’t read any of the text on the page; all you can really make out is Lincoln’s picture at the top left (looking suspiciously crisper than the text surrounding it) and the banner reading “SPRINGFIELD GAZETTE” across the top.

A cursory examination of this supposed page makes it obvious that it’s been digitally altered — newspapers didn’t have the technology to print photographs until the 1880s (they had to make do with engravings and illustrations until then), so a photo of Lincoln in an 1840s newspaper makes no sense, and there’s a suspicious blank space next to “Springfield” in the dateline: “Springfield               December 24, 1845.” The explanation for the blank space came soon enough, when the original scanned page was turned up showing that it was from the Gazette of Springfield, Massachusetts, rather than Lincoln’s town of Springfield, Illinois. The hoaxer had simply digitally erased “Massachusetts” and pasted a photo of Lincoln onto the page.

But my point in writing this isn’t to complain that everyone isn’t an Internet sleuth. It’s to complain that even if you aren’t, this story shouldn’t have made it past your smell test.

The reason is a simple principle: extraordinary claims require extraordinary evidence.
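If you like to see that principle in numbers, here is a quick Bayesian sketch of it. The probabilities below are toy values of my own, not anything from the original post; the point is only that a claim starting from a tiny prior needs evidence that would be very unlikely to exist if the claim were false.

```python
# Toy Bayesian illustration of "extraordinary claims require extraordinary evidence."
# All the probabilities here are made-up values for illustration only.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Update a prior belief given how likely the evidence is under each hypothesis."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# An extraordinary claim starts with a tiny prior: say 1 in 100,000 that Lincoln
# really filed this patent and every biographer somehow missed it.
prior = 1e-5

# Weak evidence (a blurry JPEG a hoaxer could easily fake) is nearly as likely
# to exist whether the claim is true or false.
weak = posterior(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.5)

# Extraordinary evidence (the patent file itself, confirmed by named archivists)
# would be very unlikely to exist if the claim were false.
strong = posterior(prior, p_evidence_if_true=0.9, p_evidence_if_false=1e-6)

print(f"Belief after weak evidence:   {weak:.6f}")   # about 0.00002 -- still don't buy it
print(f"Belief after strong evidence: {strong:.2f}")  # about 0.90 -- now worth taking seriously
```

The blurry JPEG barely moves the needle; only evidence that would be hard to fake does.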

Here we have a truly extraordinary claim: that a future President of the United States, who had no other involvement with media or journalism in his career, had, before his public life had even begun, envisioned a revolutionary new type of newspaper, and had been committed enough to the idea to seek a patent on it.

That’s a pretty big claim! If it were true, it would upend much of what historians think they know about the young Lincoln. Lincoln’s life is one of the most closely studied and minutely researched in world history; an old saw has it that more books have been written about him than anyone except Napoleon Bonaparte and Jesus. And yet here, supposedly, is a chapter in the great man’s life that somehow all these authors completely missed.

And all the evidence you have to back up this startling assertion is… a blurry scan of an old newspaper page? No patent application documents? (The author claims he saw those documents at the Lincoln Library in Springfield, Illinois, but for some reason neglected to copy them.) No names of Lincoln scholars or other experts who can back up the assertion? (The author mentions one researcher at the Lincoln Library, but only gives a first name — “Matt” — which conveniently makes it difficult to find that person for confirmation.)  Not even a transcription of the unreadable text on the scanned page?

That’s it? Just a single, unreadable JPEG, and a bunch of unconfirmable, uncheckable assertions? That’s all you’ve got?

That should have been taken as the signal that there was nothing to see here — maybe you’ve got something, maybe not, but I can’t believe you until you come back with harder evidence. But it was not. The story spent a day rocketing around Facebook, Twitter and the rest, getting passed from person to person without a hint of skepticism. (How many people even read the post before they pulled the trigger on their Likes and Retweets? Who knows.)

The story eventually picked up enough momentum that popular bloggers started writing about it, and they, too, passed it along uncritically, at least until their commenters started tearing the story apart for them. But by that point the damage was done; the blog posts appeared to reinforce the veracity of the story, which only made it spread farther and faster. Eventually the blog posts were corrected and the Tweets and Likes died down, but by that point it didn’t really matter.

In some respects this is an old story; attractive falsehoods have always spread faster than boring truths. But I feel like in the social media age we have crossed a threshold of some sort — mostly because the dilemma journalists know well, the “story too good to check,” has become a dilemma that non-journalists have to grapple with too. One of the things that disturbed me about my first engagement with Twitter, back in 2008, was how frequently I was bombarded with assertions that turned out later to be untrue. Nobody cared enough to look into them; it was easier and more fun to just pass them along.

This is one of my primary sources of discontent with the direction Internet culture has taken. Ever since the first days of the World Wide Web, those of us who were involved in building things for it took it as a primary mandate to make publishing easier. And step by step, we did — from hand-coded HTML pages, to WYSIWYG editors, to content management systems, to blogs, to Twitter, each step removed some friction from the publishing process. And as a result, with each step more people started publishing their thoughts online. Which always struck me as a Good Thing.

But the social network age has exposed the flip side of that mandate — the easier it is for people to publish, the less time they will spend thinking about what they publish. When publishing is reduced to its barest essence, as on Twitter — when it’s just an empty box and a “Submit” button — people will publish anything and everything.

And that includes stuff that they later wish they hadn’t. One of the most common stories of recent years has been prominent people embarrassing themselves on Twitter, because publishing on Twitter is so easy that it’s easy to just blurt out whatever’s on your mind — even stuff that pops into your head when you’re drunk off your ass in a bar, or in a fit of anger at somebody, or otherwise temporarily out of your mind. All you have to do is pull out your smartphone before you come back to your senses, and suddenly that thought that would previously have been exposed only to a few people around you is held out in the light for the whole world to see. The wonder isn’t that this has led so many people to make fools of themselves; it’s that there are people exposed to it who haven’t made fools of themselves. (Yet.)

But while it might be better for everybody if we made publishing just a little harder — just added enough friction to the process to force you to think before you tweet — that’s never going to happen. That ship has sailed. The only solution, I think, is for the rest of us to learn the lessons that journalists have learned the hard way. And one of those lessons is that extraordinary claims require extraordinary evidence.

So next time, before you Like or Tweet or forward, stop and think. Ask yourself if what you’re reading passes that test. And if it doesn’t, step away from the social media cannon before you make a bad problem worse.


How to sell products to nerds

Duct tape on Apollo 17 lunar rover

I came across a good post today about things non-technical people need to know to work productively with programmers. One point in particular jumped out at me:

Rule #5: Stop selling so hard.

In business, we talk a lot in the language of persuasion. We’re constantly selling. We’re constantly advocating for our ideas and claiming their potential to change the world. There’s a time and place for that kind of chest thumping. But it’s generally not with developers.

Most good developers are pessimists. They expect stuff to break. And they get suspicious when stuff doesn’t. So, naturally, they tend to be allergic to optimistic, hyperbolic sales speak.

When you’re working with developers, speak the truth plainly. Point out the benefits of what you are doing/proposing, but also share the warts and problems you’re seeing. When you don’t know something, say so and ask questions. You’ll get much further with your developer this way.

This is absolutely, one hundred percent true, and it’s something that I’ve seen sales and marketing types fail to understand pretty consistently, so I wanted to call it out.

I’d even go a step further and say that programmers aren’t just pessimists. We are fatalists. We believe that the only reason the world runs at all is because of frequent applications of bubble gum and baling wire in places we can’t see.

We think that way because our work requires us to spend our days climbing around in the innards of things, and innards, generally speaking, are not pretty. They’re actually pretty gross. Even things that look beautiful on the outside are usually made of pretty grody guts — they work not because they are reliable, but because layers and layers of duct tape keep the parts from flying off in a million different directions.

Why is this? Partly it’s because systems are built by humans, and humans are flawed creatures; a perfect system would require a perfect builder, and perfect we ain’t. But it’s also because even well-designed systems are frequently based on assumptions that turn out to be less true in the real world than they seemed on the drafting table.

See that picture of the moon rover up there, for instance? That vehicle was one of the most obsessively engineered machines ever created by man. But when it actually got to the moon, the fenders over the tires broke twice — once during the Apollo 16 mission, and again during Apollo 17 — resulting in the rover kicking up big “rooster tail” plumes of moon dust that got all over the astronauts and their gear. When it happened on Apollo 17, astronaut Eugene Cernan jury-rigged a new fender — with (you guessed it) duct tape.

In other words, geeks expect things to break because we see them break all the time — even things that are supposed to be unbreakable, or that have been engineered by geniuses with unlimited resources.

This is where most sales pitches to programmers go wrong: they try to convince us that the thing they’re selling has no flaws. That’s just sales talk, of course, and to normal people, this is probably reassuring; but to geeks, it sounds more like a confession. We covered up the flaws in this thing so well that nobody can see them! Ha ha!

Talk like that sets off an alarm in our heads. We know what you’re selling has problems, because everything has problems; the only question is why the salesperson is trying to hide them from us. Which makes us suspicious that the reason is because the problems are so bad that we’d run away screaming if we knew about them. Which turns us off.

The solution — the way to sell to nerds — is to embrace your product’s flaws, rather than hiding them. Talk about pros and cons; about how the product “isn’t for everybody.” Talk about tradeoffs and compromises, rather than home runs and “win-wins.” That puts your product into a context that we’re comfortable with, that we understand — we expect using your product to involve tradeoffs, because in our world everything involves tradeoffs. Being up front about them just tells us that you’re more likely than your competitors to be someone who will be helpful when we have to figure out what those tradeoffs are and how we can best work with them.

(If you’re a Salesperson from the Dark Side, of course, you will have realized in reading the above paragraph that saying your product “isn’t for everybody” doesn’t preclude you from telling each prospect individually that it’s right for them. And you’d be correct! You can almost always frame a product’s pros and cons in such a way that your potential customer thinks the pros speak to them and the cons speak to someone else, no matter who that customer is. You may have to torture the facts about your product somewhat to make this strategy work, of course, but if you’re a Darth Vader/Dick Cheney-esque personality type, maybe you’re OK with a little blood on your hands.)

So there’s your sales tip: if you want to sell something to a nerd like me, don’t try to convince me that it’s perfect. Try to convince me that it’s imperfect, just in ways that I can live with.


Cities die, too

Tumbleweed

Photo by Jez Arnold.

So I’m clicking around Reddit this evening looking for something interesting to read, and I find a link to an article about the dynamics of creativity from a publication called Greater Good, out of UC Berkeley. It’s an interview with author Jonah Lehrer, who’s just written a book on the subject called Imagine: How Creativity Works.

I haven’t read Mr. Lehrer’s book, but I sure as hell hope the arguments he makes there are more compelling than the ones he gave this interviewer. Because by the end of the interview I was goggle-eyed in disbelief at some of the things he was saying.

The first half of the interview isn’t so bad — there’s lots of hand-wavy, citation-less assertions, but that’s (unfortunately) par for the course in pop science writing, so I can live with it. But then the interview nears its end and Lehrer, apparently afraid to leave without making a Big Impression, guns his engines and jumps over credibility like Evel Knievel over Snake River Canyon.

Vroom!

Geoffrey West, a researcher I interviewed for the book, asks provocative questions about the differences between cities and companies. They look similar from a certain perspective, but they’re also quite different. Cities never die. Cities are immortal. You can have a devastating earthquake like San Francisco did in 1906—but the city is still here. You can nuke a city—it comes back. You flood a city—it comes back.

OK, point number one: cities are not “immortal.” Anyone with a sense of history knows that they die all the time. Sudden disasters like earthquakes and floods tend not to kill them, because if the underlying economics of the area are solid, people will spend the money to rebuild the buildings and drain the floodwaters. But if those economics are shaky, there’s no guarantee that people will decide rebuilding is worth the money. (Anyone from New Orleans can explain to you how this is decidedly not theoretical.)

And cities can die even without suffering an external shock, just from the slow action of broad forces. Commercial and demographic tides ebb and flow, making yesterday’s capital tomorrow’s ruin; there’s not much creativity going on right now in Chichen Itza, or Leptis Magna, or Cahokia. And you don’t have to look back centuries to see it happen, either. There’s ghost towns all over the United States that used to be thriving settlements until the silver ran out or the canal shut down or the highway passed them by.

It may look like cities don’t die, but I would argue that’s due to a flaw in the way we as humans observe these things: we tend to miss changes that develop over a timespan longer than our own life. Take Rochester, New York, for instance. Rochester existed when I was born, and I don’t doubt it will exist when I die. But the population of Rochester has been declining for fifty years now; the 2010 Census found the city with a lower population than it had in 1910.

What will Rochester look like in 2110, or 2210? There’s no way to know. It may turn things around and start growing again — much of the city’s growth in the first half of the 20th century was driven by the growth of hometown businesses like Kodak and Xerox that became synonymous with the industrial economy, and it’s not inconceivable that some future business cycle could put Rochester on the upswing again. But assume for a moment that the trend continues in a straight line of slow decline. It’s likely in that scenario that there will be an incorporated jurisdiction of some type in upstate New York named “Rochester” for a long time, even as the area hemorrhages residents, but that’s a far cry from the spirit of the Unkillable City that Lehrer conjures up.

Which brings me to Lehrer’s next point:

But companies die all the time. The average lifetime of a Fortune 100 company is 45 years. Twenty-five percent of Fortune 500 companies die every decade. Only two companies in the original Dow Jones still exist.

So why do companies die while cities live forever? What’s the difference? And what does that tell us about creativity?

Lehrer again turns to West for his answer:

Geoffrey West found that as a city gets bigger, everyone in that city becomes more productive. They invent more patents, make more trademarks, make more money, and so on. As companies get bigger, the opposite happens. Everyone in the company becomes less productive. There are fewer patents and fewer profits per employee.

In the end, this makes companies very vulnerable. Wall Street says, “Get bigger! Get bigger!” Then they have this big expensive bureaucracy to maintain and they become more aligned with their old ideas. They’ve got to invest lots of money in expensive new acquisitions and sometimes those acquisitions don’t work out. Eventually their old ideas are no longer relevant, and they go belly up.

Which, frankly, strikes me as seriously off-base. There is a much simpler explanation for why companies die more frequently than cities: economic diversity.

A city, in other words, is a big, hairy bundle of thousands upon thousands of streams of economic activity, each of which operates independently of most of the others. If a major corporation sets up its headquarters in the city, it brings with it a large number of new economic streams, but the city is by definition bigger than the corporation; its lifeblood comes not just from the new HQ, but from all the other economic actors in the city as well — taxicabs and delis, condos and dry cleaners, as well as all the operations of all the other corporations in town.

This is important, because it means that a city is not an economic monoculture, and that means that if the economic environment changes — if a line of business that used to be reliably profitable suddenly becomes less so — the city’s overall economic rationale cannot disappear overnight. Which isn’t to say that it can’t disappear — as we discussed above, it certainly can — but that process takes time to operate, as the ripple effects spread through the local economy. A major change in the economic climate can take decades to fully shake itself out. And that time buffer is what makes cities resilient; it gives their residents time to adapt, to create new economic streams to replace the old ones. Sometimes that works out, sometimes it doesn’t, but cities at least have the chance.

Corporations, on the other hand, generally do not. A corporation is an economic monoculture; the rationale for its existence is premised on a single set of economic facts. If those facts change dramatically — if the price of a key input goes up, or public tastes change, or a process previously thought harmless is discovered to have health-threatening side effects — it’s entirely possible for a business that looked economically sound yesterday to suddenly look unsound today. And once a business stops looking economically sound, it doesn’t take long for people (customers, partners, investors, creditors, etc.) to start pulling their money away from it.

(It’s worth noting that cities aren’t necessarily economically diverse, and corporations aren’t necessarily economic monocultures. “Company towns” whose economy is tied to a single key business or industry, for instance, certainly exist; think of Detroit and the domestic auto industry. But that’s an exception that proves the rule; if you make your city more of an economic monoculture, you open it up to the same risks that other economic monocultures run.)
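If you want to see the diversification part of that argument in miniature, here is a toy simulation of my own (arbitrary numbers, and not anything from West’s research): give a hypothetical “city” a hundred independent revenue streams and a hypothetical “company” exactly one, expose every stream to the same yearly risk of drying up, and see how often each entity loses everything. It captures only the diversification effect, not the adapt-over-time effect, but the gap is already dramatic.

```python
import random

# Toy model: a "company" lives on one revenue stream, a "city" on many independent
# ones. Every stream faces the same chance of drying up in a given year, and dead
# streams are not replaced. All numbers are arbitrary; this is an illustration of
# the diversification argument, not a model of any real city or company.

YEARS = 50
DEATH_CHANCE = 0.05   # chance a given stream dries up in any given year
TRIALS = 2_000

def survives(num_streams):
    """True if at least one revenue stream is still alive after YEARS years."""
    alive = num_streams
    for _ in range(YEARS):
        alive = sum(1 for _ in range(alive) if random.random() > DEATH_CHANCE)
        if alive == 0:
            return False
    return True

def survival_rate(num_streams):
    return sum(survives(num_streams) for _ in range(TRIALS)) / TRIALS

print("Company (1 stream):  ", survival_rate(1))    # roughly 0.08 with these numbers
print("City (100 streams):  ", survival_rate(100))  # very close to 1.0
```

With those made-up parameters, the single-stream “company” survives the fifty years less than one time in ten, while the hundred-stream “city” almost always has something left to rebuild on.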

So what does all this have to do with creativity? I’m honestly not sure. I haven’t read Geoffrey West’s research, so I’m just going on how Lehrer characterizes it, but at this level, at least, it seems contrived.

But that could just be me not being sufficiently creative, I guess.


How to survive an atomic bomb

Nuclear terrorism study illustration

The Associated Press reports something that anyone with a passing familiarity with nuclear weapons already knows:

D.C. Nuclear Blast Wouldn’t Destroy City, Report Says

This is what the U.S. government imagines would happen if terrorists set off a nuclear bomb just blocks away from the White House: The explosion would destroy everything in every direction within one-half mile. An intense flash would blind drivers on the Beltway miles away. A radioactive cloud would drift toward Baltimore.

But the surprising conclusion? Just a bit farther from the epicenter of the blast, such a nuclear explosion would be pretty survivable…

“Few, if any, above ground buildings are expected to remain structurally sound or even standing, and few people would survive,” [the report] predicted. It described the blast area as a “no-go zone” for days afterward due to radiation. But the U.S. Capitol, the Supreme Court, the Washington Monument, the Lincoln and Jefferson memorials, and the Pentagon across the Potomac River were all in areas described as “light damage,” with some broken windows and mostly minor injuries.

This isn’t to say that such a blast would be a walk in the park — it would likely kill tens of thousands of people, and injure up to hundreds of thousands more — but it would not wipe Washington off of the map.

This conclusion may seem counter-intuitive, because when most of us think of nuclear weapons we think of the weapons the US and USSR built to aim at each other during the Cold War. But it’s important to understand that the type of bomb a terrorist group would be able to develop and deploy would be something very different — something very much smaller, and much less destructive.

How much less? The hypothetical terrorist bomb described in the report would explode with the force of 10,000 tons (or 10 kilotons) of TNT. Which is a lot, without question — but for comparison, consider that a single Minuteman III nuclear missile can carry a warhead rated to produce a blast of 350-475 kilotons of TNT. A 10-kiloton weapon isn’t even as powerful as the “Little Boy” bomb dropped on Hiroshima on August 6, 1945, which produced a yield somewhere between 15 and 20 kilotons. And that weapon, 50 to 100% more powerful than the one we’re discussing, didn’t destroy Hiroshima; it did enormous damage, to be sure, but the city survived and recovered.
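To put those figures side by side, the arithmetic is simple (using the yield numbers quoted above):

```python
# Yield comparison, using the figures quoted above (all in kilotons of TNT).
terrorist_bomb = 10
little_boy_low, little_boy_high = 15, 20
minuteman_low, minuteman_high = 350, 475

print(f"Little Boy: {little_boy_low / terrorist_bomb:.1f}x to "
      f"{little_boy_high / terrorist_bomb:.1f}x the hypothetical terrorist bomb")
print(f"One Minuteman III warhead: {minuteman_low / terrorist_bomb:.0f}x to "
      f"{minuteman_high / terrorist_bomb:.0f}x the hypothetical terrorist bomb")
```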

The other thing to remember about a scenario like this is that it’s only one bomb. This is a key difference between this type of attack and what most people think of when they hear “nuclear weapons.” The image most people have of nuclear war is the Cold War scenario of the two great superpowers hurling tens of thousands of nukes at each other all at once — what RAND Corporation nuclear strategist Herman Kahn memorably described as a “wargasm.” It is this type of massive overkill attack that led to the concept that “the survivors would envy the dead,” since all they would be left with would be a world burnt to ashes, with all existing civilizations destroyed and no resources available to rebuild them.

The aftermath of a terrorist bomb would not be anything like that. Much of downtown D.C. would be in ruins, but most emergency services from Virginia and Maryland would still be intact and able to respond, and the rest of the nation (and the world) would be untouched and able to extend support. If you experienced an act of nuclear terrorism and survived, in other words, you would not be condemned to spending the rest of your life as an extra in a Mad Max movie. (Yes, life in Washington would never be the same after such an event, but “Washington’s economy would never fully recover” is still a long way from “the end of civilization as we know it.”)

Which brings me to the most important point about this type of scenario: it can be survived. It’s not like the Cold War wargasm scenario, where so much explosive tonnage is falling on your head that protecting yourself is impossible. There are things you can do if you find yourself in such a situation that can dramatically improve your chances of making it out alive.

Protection from nuclear fallout

The first thing to understand is that if you are still alive five minutes after a small nuclear weapon detonates, you are already very likely to continue surviving. People in the immediate vicinity of the explosion would be killed instantly, but because of the low yield of the bomb that vicinity is fairly localized; the study puts this area at about half a mile to a mile in radius.

If you happen to be in that area, there’s not a lot you can do to protect yourself. But if you’re there and you are still standing five minutes after the blast, or if you’re farther away when the bomb goes off, you’ve made it through the worst.

Assuming that you’re not killed instantly, then, the next thing to understand is that the decisions you make in the ten minutes after the bomb explodes will probably determine whether you live or die.

If you’re outside that immediate area but in the general region of the blast when it happens, your greatest initial risk will come from a direction you’re probably not expecting: your windows. A nuclear explosion begins with a dazzling flash; because light travels faster than sound (and everything else), this flash can precede the arrival of the blast wave by 30 seconds or more. A great risk, therefore, is that people will see the flash in their peripheral vision, go to the window to see what happened, and then get lacerated by shards of flying glass as the blast wave arrives and shatters the window. So the first thing to know is that if you see “a bright flash of light like the sun, where the sun isn’t,” you should resist the temptation to investigate and instead take cover.
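To put a rough number on that gap: far from the explosion the blast wave travels at roughly the speed of sound, so the delay after the flash is approximately the distance divided by the speed of sound. This is a ballpark estimate of my own (the shock actually moves faster than sound close in), not a figure from the report:

```python
# Rough flash-to-blast delay: far from the explosion the blast wave travels at
# roughly the speed of sound (it moves faster close in), so treat these numbers
# as ballpark estimates only.

SPEED_OF_SOUND_MPH = 767  # approximate, at sea level

for miles in (2, 5, 10):
    seconds = miles / SPEED_OF_SOUND_MPH * 3600
    print(f"{miles:2d} miles out: roughly {seconds:.0f} seconds between flash and blast wave")
```

Even a few miles out, that’s enough time to step away from the window and get behind something solid.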

After the blast wave passes, if you’re still alive and ambulatory, your next decision will be whether to stay where you are or leave. Most people at this point will understandably feel the urge to flee the area at once. In most cases, though, this is exactly the wrong thing to do. The reason is simple: fallout.

When an atomic bomb explodes, the blast digs a big batch of dirt out of the ground and sends it flying up in the air. Because it’s been exposed to intense radiation, this dirt is highly radioactive. The force of the explosion sends it flying, but afterwards it begins to fall back to earth, carried along on its way down by the wind. This deadly dirt is nuclear fallout.

Unlike the bomb blast, radiation from fallout is only deadly if you’re exposed to it. So your first priority after surviving the initial blast should be to seek shelter immediately. You don’t have to touch the fallout directly to be exposed to its radiation; radiation can pass through solid materials, but it is attenuated as it does so, so the more stuff (earth, concrete, brick, etc.) you can put between yourself and the fallout, the lower the amount of radiation that will be able to seep through to you.

Which means that what you want to do is find the place with the most stuff you can put between you and the radiation in the ten minutes or so you’ll have before the fallout starts to fall down on you. The ideal shelter from fallout is an underground concrete structure, because then you get protection not just from the walls but also from the earth around them and the building above them; but even the concrete walls of a modern above-ground office building can provide sufficient protection, especially if you take shelter in an interior room rather than one touching an exterior wall. You can see the “protection factor” (PF) of various types of shelter in the illustrations from the report over there on the right; note that the study puts the minimum acceptable PF for an “adequate” shelter at 10.

Odds are that your shelter isn’t someplace you’d be able to live comfortably in for a long period of time. But that’s completely OK! The point of this shelter isn’t to live in for a long period; it’s just to keep you away from the fallout while it’s highly radioactive. The intense radioactivity of fallout burns itself out surprisingly quickly; an hour after it falls it’s only half as radioactive as it originally was, and after a day it can be down to 20% or less. So your goal is to avoid exposure to radiation until the most intense radioactivity has subsided. (This is why fleeing immediately is such a bad idea; it puts you out in the open, completely unshielded from radiation, right at the time when the radioactivity is at its highest, most intense levels.)

How long you should stay in your shelter depends on how well-protected it is, because you’re trading guaranteed but lower exposure in your shelter against less certain but more intense exposure as you move farther from the blast area. (In other words, the farther away you get, the less likely you are to be exposed to radiation at all; but if you are exposed while on foot, you’re completely exposed. Whereas in your shelter you’re definitely going to get some exposure, but how much will depend on the protection the shelter offers.) If you’re in a sturdy underground concrete structure, you can stay there safely for as long as a day; if you’re huddling in an abandoned car, you should get moving immediately after the cloud of fallout passes. But in either case, sheltering in place during the initial period of highest radioactivity will dramatically increase your chances of survival.
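Here is a toy model of that tradeoff. It uses the classic t^-1.2 rule of thumb for fallout decay and completely made-up dose-rate numbers; it is a sketch of the logic, not the report’s model, and it deliberately ignores the fact that fleeing also carries you away from the fallout field. The point is just to show why the protection factor and the first few hours dominate the outcome.

```python
# Toy model of the shelter-vs-flee tradeoff. Fallout decay uses the classic
# t^-1.2 rule of thumb; the dose-rate scale is made up. This is a sketch of the
# logic, not the report's model, and it ignores the fact that fleeing also
# carries you away from the fallout field (so it overstates the fleeing dose).

def dose_rate(hours_after_blast, rate_at_one_hour=100.0):
    """Outdoor dose rate in arbitrary units per hour, decaying as t^-1.2."""
    return rate_at_one_hour * max(hours_after_blast, 0.25) ** -1.2

def total_dose(start_hours, end_hours, protection_factor=1.0, step=0.05):
    """Numerically integrate the dose received between two times, reduced by the shelter's PF."""
    t, dose = start_hours, 0.0
    while t < end_hours:
        dose += dose_rate(t) / protection_factor * step
        t += step
    return dose

# Scenario A: spend the first two hours walking out in the open (PF = 1).
flee_now = total_dose(0.25, 2.25, protection_factor=1)

# Scenario B: sit in a PF-10 shelter for a day, then spend two hours walking
# out through a much-decayed radiation field.
shelter_first = total_dose(0.25, 24, protection_factor=10) + total_dose(24, 26, protection_factor=1)

print(f"Flee immediately:         {flee_now:6.0f} units")
print(f"Shelter a day, then move: {shelter_first:6.0f} units")
```

Even in this crude model, sitting behind a protection factor of 10 for the first day cuts the total dose by roughly a factor of five compared with walking out immediately, which is the whole argument for sheltering in place during those first hours.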

Why am I going into these details? Because it ties in with one of my recurring themes here on this blog: that you should be prepared. In an emergency, the difference between life and death is frequently as simple as whether you keep your wits about you and take a few simple steps to protect yourself.

An act of nuclear terrorism would be a tragedy beyond anything in living memory, but it wouldn’t be the end of the world. And if you’re armed with a little knowledge, you can drive the odds that it would be the end of your world way down, too.

UPDATE (March 31): I meant to link to the original report when writing this so you could refer to it for more information, but somehow forgot to insert the actual link. Sorry about that! There’s two documents you should look at — this one from 2009 discussing the general risks, and this one from 2011 that looks at the Washington, DC scenario in more detail.

There’s also a lively discussion of this post happening on Reddit.