In her last post, Heather Whitney discussed some of the implications of the legal dilemma portrayed in the Star Trek episode "The Measure of a Man." In this new post, she expands on some of the discussion generated by her last post.
Choice is complicated. In the moment of choosing, be it an individual deciding whether or not to eat a cookie or a company deciding whether to hand over a user’s data, it often appears to come down to a simple weighing of costs and benefits. But, while that deliberation process is significant, study after study shows that there’s a lot more going on than our in-the-moment weighing suggests. We, for instance, are more trusting of others after a quick nasal spray of the hormone oxytocin. We actually eat less food when the plates we’re eating off of are smaller. A lot, it’s clear, goes on behind the scenes of choice. And, for those interested in helping people choose to do good, it’s critical to understand and then use these varying mechanisms to help people harness their “better” selves.
Harvard Law Professor Yochai Benkler sets out to do just this in his new book, The Penguin and the Leviathan. In the book, Benkler frames the battle as between two competing conceptions of human nature: self-interest (selfishness) on the one side and cooperation on the other. While I very much enjoyed (and recommend) Benkler’s book, I want to challenge that setup. Straightforwardly, the battle is not between “purely selfish” people on one side and cooperative “good” people on the other; the battle is between people who misunderstand what’s in their self-interest and those who don’t. And the challenge is figuring out how to get people to reevaluate what self-interest means, since our current understanding (that doing what’s in your self-interest means doing what’s greedy) actually makes it easier for people to choose that greedy thing.
Why Framing and Meaning Matter
When we make important choices, we ask ourselves a lot of questions. We ask which choice would best help our friends and family. We ask what our “gut” tells us to do. If we’re religious, we think about that. And, invariably, we ask “what’s best for me? What’s in my self-interest?”
But asking the questions is just the first step. What really influences our decision is how we come to understand what, for instance, self-interest means.
Here’s a study Benkler discusses that makes this more concrete:
Psychologist Lee Ross and colleagues divided participants into two groups and had them play the standard Prisoner’s Dilemma game. The only difference between the standard version and Ross’ was that Ross told one group they were playing “The Community Game” and the other group that they were playing “The Wall Street Game.” To repeat, the only difference between the two games was the name; the difference in outcome, however, was astounding. Those who played the Community version cooperated 70 percent of the time while those playing the Wall Street game cooperated only 33 percent of the time.
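For readers unfamiliar with the game’s structure, here is a minimal sketch of the one-shot Prisoner’s Dilemma. The payoff numbers are standard textbook values chosen for illustration, not figures from Ross’ study; his experiment varied only the game’s name, not its payoffs.

```python
# Illustrative one-shot Prisoner's Dilemma (textbook payoffs, assumed here).
PAYOFFS = {
    # (my move, their move) -> (my payoff, their payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def payoff(me: str, other: str) -> int:
    """Return my payoff given both players' moves."""
    return PAYOFFS[(me, other)][0]

# Defecting dominates cooperating in any single round...
assert payoff("defect", "cooperate") > payoff("cooperate", "cooperate")
assert payoff("defect", "defect") > payoff("cooperate", "defect")
# ...yet mutual cooperation leaves both players better off than mutual defection.
assert payoff("cooperate", "cooperate") > payoff("defect", "defect")
```

The tension the assertions capture, that the individually “rational” move makes everyone worse off, is exactly what makes the framing effect in Ross’ study so striking: a label alone shifted how often players escaped that trap.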
The takeaway here is clear: while each participant experienced in-the-moment deliberations about whether or not to cooperate, by framing that choice as one made within a “Community Game” or “Wall Street Game,” the researchers were able to fundamentally alter the way participants went about deliberating. And they did this by exploiting participants’ understanding both of what “Community” and “Wall Street” mean and of what it means to make decisions within those value-laden frameworks.
Our Current Conception of Self-Interest
Ross’ participants clearly thought of “Community” in a way that suggested cooperation and “Wall Street” in a way that did not. What’s crucial to note, however, is that the connection participants made between “community” and cooperative behavior is in no way determined a priori. We can easily imagine another society that, due to historically contingent circumstances, now associates Community with aggression and hostility. We take the concept Community and then build an understanding (a conceptualization) of it. This also happens with Self-Interest and Selfishness. Formally, these are concepts; what’s important (and potentially problematic) is how we fill them in.
By looking at Benkler’s self-interest vs. cooperation battle we can find the origins of our current conceptualization of self-interest.
Benkler’s Leviathan conception of human nature is fueled by (unsurprisingly) Hobbes’ Leviathan and Adam Smith’s Wealth of Nations. On this view, humans are motivated by material gain and power. When deliberating, we mechanically weigh costs and benefits in order to deduce (and then choose) whatever helps us get ahead. We’re robotic, calculating, hyper-rational, and see others as weaklings, mere tools to exploit to their foolish detriment and our greedy gain. We are, quite frankly, unpleasant. It is this view that I think currently shapes our conception of self-interest.
And we can test this. Even though public service announcements try to get teens to see drugs as genuinely bad for them, we never call the girl who “just says no” selfish, nor do we describe her action as motivated by self-interest. Our dominant conception of self-interest sees self-interested actions as motivated by greed, and this girl’s action strikes us as neither selfish nor greedy.
Instead, we associate this girl with Benkler’s competing conception of human nature – one that focuses on our ability to be virtuous, empathetic, cooperative, and generous. On this view, we can transcend our selfishness; we’re more than selfish. Benkler calls this view the “Penguin” in honor of Tux, the symbol of Linux – an operating system built on free and open source (and thus, Benkler implies, selfless) software. Based on the original choice framework I laid out above, when the Penguin is deciding what to do, she just doesn’t tend to ask the “what’s in my self-interest” question.
Why Our Current Self-Interest Conception Is Harmful
Let’s take an example Benkler uses to illustrate the difference between Leviathans and Penguins:
You’re sitting on a bench when a passerby drops $100. You pick it up. Nobody is around to see what you do. Do you keep it?
As Benkler sees it, for the “purely selfish person, the answer is simple.” They keep the money because “[t]here are no possible repercussions.” But, in contrast, “for the person with morals [the Penguin], there are repercussions – feelings of guilt for not having returned the money.”
Here’s what I find harmful: When someone is sitting on that bench and sees that money drop, we know that they, in trying to figure out what to do, might be asking themselves lots of different questions (What’s the right thing to do? How will I feel if I keep this money? If I were the one who dropped it, wouldn’t I want someone to return it?). And we know those questions can be highly influential. As I understand it, Benkler sees the person as asking two main questions that lead to opposite choices: you can do what’s in your self-interest (what’s good for you) and keep the money, or you can do what’s not in your self-interest but is instead moral (good in some unqualified sense), and return the money.
But doesn’t this framework, which relies on an impoverished conceptualization of the self as inherently greedy, push someone to believe that there is at least some value no matter what she decides? She can do some good (good for herself) by keeping the money or perhaps more good by returning it. But that clearly concedes too much. We actually don’t think keeping the money is good for her. And why? Because, when you get down to it, we don’t actually think of ourselves as the horribly myopic, Gollum-like creatures we would have to be in order for pocketing someone else’s $100 to be so unquestionably in our self-interest. But, because we understand “self-interest” to mean greed, we confuse ourselves. In other words, we’re always going to ask, and care about, what’s in our self-interest. Once we’ve decided that greedy things are always in our self-interest (because we’ve accepted the Leviathan understanding as an accurate description of ourselves), we then think the greedy thing is good for us.
In contrast, imagine a world where our conceptualization of self-interest viewed the self as more Penguin-like. But, instead of Benkler’s view, where the Penguin is selfless, we would understand that, being Penguins, helping others is quite often good both for them and for us.
Now imagine the person on the park bench. They still have to choose whether or not to pocket the money. But, instead of seeing keeping the money as good for them, now they see doing so as potentially bad for them (it’s not what someone who’s truly self-interested would do). At that point, they may recognize that the only thing counting in favor of keeping the money is greed, and that greed would be nakedly called what it is, instead of stealthily hiding behind a concept that suggests it is in their self-interest. I believe this framework, like the “Community Game,” would set our decision making against a backdrop more conducive to virtuous choices.
So what do we do?
People are always going to be self-interested; when they are deciding what to do they’re going to ask themselves what is best for them. The question, and challenge, is getting them to think about themselves and their interests in a robust (non-Leviathan) way. So how do we do that?
The first step seems to be changing how we use “self-interest” and “selfishness.” If we talk about heroic characters as doing not just good but also what is good for them, we can slowly transform our understanding of what “self-interest” means.
And then, for the next person on the park bench asking what’s in their self-interest, the inquiry will produce much more robust (and less depressing) thoughts and results.