No, Scott Adams isn’t right

I noticed in my feeds today a post by Dave Winer titled “Scott Adams is right”:

I want to heartily endorse this Scott Adams piece, especially the part about politics.

I hadn’t seen the post by Adams (best known as the creator of Dilbert), so I clicked through to see what exactly Dave was endorsing. Here’s the nut:

Allow me to go through some examples of what we might regard as human intelligence and I’ll show you why it is nothing but illusions.

Politics: When it comes to politics, humans are joiners, not thinkers. The reason a computer can’t have a political conversation is because politics is not a subset of intelligence. It is dogma, bias, inertia, fear, and a whole lot of misunderstanding. If you wanted to program a computer to duplicate human intelligence in politics you would have to make the computer an idiot that agreed with whatever group it belonged regardless of the facts or logic of the situation.

If you insisted on making your computer rational, all it would ever say is stuff such as “I don’t have enough information to make a decision. Let’s legalize weed in Colorado and see what happens. If it works there, I favor legalizing it everywhere.” In other words, you can program a computer to recommend gathering relevant information before making political decisions, which is totally reasonable and intelligent, but 99% of humans would vehemently disagree with that approach. Intelligent opinions from machines would fail the Turing test because irrational humans wouldn’t recognize it as intelligent.

This line of thinking is straightforward, persuasive, and wrong.

It’s wrong because it sets up an arbitrary divide between rationality (understood as “decisionmaking based solely on data”) and every other form of human decisionmaking, which it dismisses as “dogma, bias, inertia, fear, and a whole lot of misunderstanding.” In other words, there’s decisionmaking based on data, which is Good, and decisionmaking that is influenced by anything other than data, which is Bad.

But this misses a fundamental point about human decisionmaking, especially in realms like politics, which is that it is never a values-neutral project. I’ve written before on why data can’t simply speak for themselves: the Data never stand 100% on their own, because the same set of data can look positive to me and negative to you if you and I hold opposing sets of values.

To understand what I mean, take Adams’ example of legalizing pot. Assume for a moment that “let’s legalize weed in Colorado and see what happens” is what we do, and eventually we get back data from Colorado showing that weed use is way up and prison overcrowding is way down. Assume also that these data are absolutely reliable — i.e. let’s not get sidetracked on questions about how they were gathered. For the purposes of this exercise, they are 100% accurate, reliable, true.

But are they describing a situation in Colorado that’s gotten better, or worse?

Your answer to that question will depend entirely on the values that you hold. If you believe that pot use is morally neutral and imprisoning people for morally neutral offenses is bad, you will look at these data and say that things have gotten better. Colorado isn’t locking people up for smoking pot anymore, hooray! But, if you believe that pot use is morally negative, that it represents a form of self-harm, you will look at the same data and come away alarmed. Look at how many more people in Colorado are hurting themselves, now that the legal deterrents have been removed! To you, things in Colorado have gone backward, not forward. But the data haven’t changed one bit.
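
To put the thought experiment in more concrete terms, here is a minimal sketch (every number, weight, and name in it is invented purely for illustration) of the same data being scored under two different value systems:

```python
# The same (hypothetical) Colorado data, scored under two value systems.
colorado = {"pot_use_change": +0.30, "prison_pop_change": -0.40}

def evaluate(data, values):
    """Weighted sum of the changes: positive means 'better', negative means 'worse'."""
    return sum(values[metric] * change for metric, change in data.items())

# Someone who sees pot use as morally neutral and imprisonment as bad:
legalizer = {"pot_use_change": 0.0, "prison_pop_change": -1.0}

# Someone who sees pot use as self-harm (and dislikes prisons only a little):
prohibitionist = {"pot_use_change": -1.0, "prison_pop_change": -0.2}

print(evaluate(colorado, legalizer))       # +0.40 -> things got better
print(evaluate(colorado, prohibitionist))  # -0.22 -> things got worse
```

The data dictionary is identical in both calls; the only thing that differs is the weights, and the weights are nothing but values.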

My assumption is that Adams would look at this thought exercise and respond by saying that it doesn’t speak to his point, because the first person in our example is being rational, while the second one is being irrational. But that only makes sense if you take the first person’s values as “objective” and the second person’s as “dogma” or “bias” or “misunderstanding.” Values are never objective; they are reactions to the circumstances through which a particular person has lived, and as such they cannot be decoupled from human experience.

But just because values are not objective doesn’t mean they are not rational. Consider how people with similar life experiences — similar genetic gifts and curses, raised in similar households, exposed to similar external stimuli — will tend to develop similar values. People born into privilege tend to become Republicans; people born into poverty, Democrats. If values were irrational, you’d expect them to be randomly distributed, but they aren’t; they cluster around particular sets of life circumstances, which implies that they are rational reactions to those circumstances.

Which is why I think that Adams is wrong: you can’t divide decisions into “purely data-driven, and therefore Good” and “influenced by other factors, and therefore Bad” categories, because there is no such thing as a purely data-driven decision. To make a decision, the decisionmaker needs to have some pre-existing sense of what is good and what is bad, what is right and what is wrong; absent that, there is no reason other than random selection to choose one alternative over another. Remove values from the equation and all alternatives become equal; you might as well throw darts at a dartboard.

This is actually just as true for decisions made by computers as it is for decisions made by humans. We tend to think of computers as having the ability to be purely objective where humans cannot, but that is a misunderstanding of how they work. A computer, by itself, can’t make decisions at all. It requires a programmer — which is to say, a human — to program it with sets of rules and heuristics it can use to evaluate different options. And the human who writes those rules will do so in a way that reflects her own values, or the values of whoever is paying her to write them. The computer appears objective only because all those humans whose values informed its decision processes can hide themselves behind it.
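
A toy sketch of that last point (the names decide and score here are mine, invented for illustration, not anyone’s real system): the machinery that looks objective is just max(), and everything that makes the choice a decision lives in the scoring rule a human passes in.

```python
import random

def decide(options, score=None):
    """A 'purely objective' chooser. All it knows how to do is maximize
    a score; it has no preferences of its own. Deprived of a
    human-supplied scoring rule, every option ties and the choice
    degenerates into a dart throw."""
    if score is None:
        return random.choice(options)
    return max(options, key=score)

policies = ["legalize", "decriminalize", "prohibit"]

print(decide(policies))                                   # no values: random pick
print(decide(policies, score=lambda p: p == "legalize"))  # values smuggled in via score
```

Whoever writes score, or pays to have it written, is the one actually deciding; the computer just launders the choice.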

(So, you ask, what happens when computers gain the ability to program themselves, without any human intervention required? That’s a very interesting question! But as of today it’s completely within the realm of science fiction, so there are limited benefits to spending time speculating about it.)

Adams’ hypothetical objective computer actually makes this point quite eloquently, if accidentally:

If you insisted on making your computer rational, all it would ever say is stuff such as “I don’t have enough information to make a decision. Let’s legalize weed in Colorado and see what happens. If it works there, I favor legalizing it everywhere.”

The accidental point being: what does “if it works” mean, exactly? If pot use goes up, does that mean it’s working or not working? If prison populations go down, does that mean it’s working or not working? You can’t answer the question without bringing a set of values to the table. It’s impossible.

Data can tell you how fast you are going, and what direction you are going in, but they can’t by themselves tell you whether you are going forward or back. But “are we going forward or back?” is the fundamental question that underlies every political decision. So wishing for objectivity in such things is wishing for the impossible.