
Deep Pragmatism with Joshua Greene



Dr. Joshua Greene is a Psychology Professor and a faculty member of the Center for Brain Science at Harvard University. His work focuses on the dual-process theory of emotion and reason as it relates to moral judgment. He is perhaps best known for his past neuropsychological work involving the trolley dilemma. Today, he continues his research into strategies for effective altruism and how to apply principles of what he calls “deep pragmatism” to solving large-scale social challenges. We discuss the principles of deep pragmatism, as outlined in his book Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, in today’s episode.


Transcript available at: https://www.ambercazzell.com/post/msp-ep34-JoshuaGreene


APA citation: Cazzell, A. R. (Host). (2020, May 26). Deep Pragmatism with Joshua Greene [Audio Podcast]. Retrieved from https://www.ambercazzell.com/post/msp-ep34-JoshuaGreene


Note: This transcript was automatically generated. Please excuse typos and errors.


Amber Cazzell (00:01:19):

All right, hi everybody. Today I am very excited to be here with Josh Greene. Josh, thanks so much for joining me today, I really appreciate it. Good to be here. Thanks. Yeah. So the first time I ever learned about your work, I was an undergrad taking a moral psychology and cultural values class, and I think the first paper of yours I ever read was "The Secret Joke of Kant's Soul." And this was back in the day when I was sort of naively thinking, man, I'm very interested in morality and I want to unravel what the secret sauce is here. And I thought that your work with the trolley dilemma and all of that was very fascinating. I also thought it was interesting how you tended to favor logical, rational moral discourse over and above this emotional processing stuff.


Amber Cazzell (00:02:09):

And so we'll get into a little bit of that today. When I was talking to Josh off the podcast, I was saying that he's probably most famous for his work with the trolley dilemma, but I kind of want to shy away from that a little bit because there's so much available on that already, and talk more about his recent work on deep pragmatism in his book Moral Tribes. So anyway, Josh, I always like to start out these podcasts by hearing about researchers' backgrounds, and I'd love to learn a bit about how you became interested in moral psychology and found yourself where you are today.


Joshua Greene (00:02:45):

So I'll try to keep this relatively short instead of giving my whole life story, but here are a few key points along the way. I started doing debate when I was in high school, actually in junior high. And I got interested in, I think, central questions of moral and political philosophy. A lot of them focused on the rights of the individual versus the greater good: is it okay to curtail people's privacy in the interest of national security, and things like that. And so I debated these things with other people, and you know, you argue with people and you argue both sides and you start to develop opinions. At least I did. And I was drawn to the idea that the way to ultimately make these decisions, the right thing to do, is the thing that promotes the greater good.


Joshua Greene (00:03:38):

But I was also struck quite forcefully by some of the objections to that idea related to trolley problems. So, is it okay to push somebody in front of a speeding trolley in order to save five people? If five people need organs, would it be okay to kidnap and carve up one person to distribute the organs to the other five? And that seemed clearly wrong to me, even though it seemed like those actions would promote the greater good. And so I kind of struggled with this and wanted to get to the bottom of it myself. And I started studying moral philosophy more seriously as an undergrad, but at the same time I also started studying social psychology and moral psychology, trying to understand how people make judgments about things like, you know, preserving endangered species.


Joshua Greene (00:04:31):

And I encountered the heuristics and biases research program pioneered by Kahneman and Tversky and thought, you know, this is really relevant here. That is, our decision making in general is really quirky, especially when we rely on our intuitions. And I thought that similar things were going on with our moral thinking, and that maybe if we better understood our moral psychology, we would be in a better position to figure out what's really right or wrong, or at least come up with a better standard for moral decision making. And then when I hit upon the trolley stuff, I thought, okay, this is it. Sometimes we say it's okay to kill one person to save five people, and other times we don't. And I thought that a key bit of this had to do with the way we respond emotionally to certain kinds of actions.


Joshua Greene (00:05:29):

And I was taking a neuroscience class at the same time and connected this to what was going on in patients like the famous case of Phineas Gage, who lost a bit of his prefrontal cortex and really lost his ability to bring emotional reactions to bear on his decision making. And so it was really putting the psychology of judgment and decision making, the neuroscience of emotion, and these philosophical puzzles together. That's what got me started down this path. And so I spent a long time trying to understand the mechanics of those decisions, both at a neuroscientific level and at a psychological level. But for me, this was really all part of the broader question of how we should think about the biggest problems that confront us. And so, you know, more recently I have been focusing not so much on trolley dilemmas specifically, which I think are a nice kind of fruit fly for the lab, but on these broader questions about how we should be thinking, what we can do, and how we can use our understanding of human behavior to promote the greater good. Yeah,


Amber Cazzell (00:06:42):

Yeah. So when I listened to Moral Tribes as an audiobook, one of the things that you talked about was this hope that philosophers will pick up on psychological literature about how the brain works and use that when they're doing their work in ethics. I know this is kind of a broad question, especially right here at the beginning, but I'm curious what you've come to: what specifically do you think philosophers of ethics could benefit from a better understanding of?


Joshua Greene (00:07:19):

So, you know, the standard method in ethics is you try to come up with broad principles that seem to be right, and you test those principles against particular cases. So you might say, well, we should always do what's going to produce the greatest good. And then somebody says, yeah, but would it be okay to kill somebody and take their organs and give them to five other people? That would promote the greater good, but it seems wrong. And then you say, huh, okay, I agree, that seems wrong. Next principle, right? And you know, you want principles that make sense, but you also want ones that fit with our, as John Rawls said, considered judgments about particular cases, or types of cases. And so there's this back and forth between theory and individual cases, in the same way that in science there's this back and forth between general scientific theories and specific data points.


Joshua Greene (00:08:19):

And the general trend among philosophers, at least the kind who have attempted to articulate general principles as opposed to much more vague and abstract kinds of storytelling, has been to get more and more sophisticated about the principles but to take their intuitions as given. And I think that's a mistake. I think we should be sophisticated about our principles if necessary, but we should also be sophisticated about our intuitions. That is, if it seems like it's wrong to kill one person to save five in one case, but not in another case, well, what's really going on there, right? And when we understand what's going on, are we likely to think that we're getting it right in case A but not in case B, or vice versa? And in particular, when I say trolley cases, what I really have in mind are cases where you can cause some emotionally salient harm, or some harm to a particular individual, in the name of the greater good.


Joshua Greene (00:09:29):

So people sometimes call these sacrificial dilemmas now, right? And I think that what's going on there is we have feelings about certain types of actions. And the reason why is because we have learned that certain types of actions are associated with negative outcomes, either for ourselves or for other people. So you learn as a very young child not to commit basic acts of violence, right? So pushing, you know, some kid off the swings on the playground: either you experienced that person's suffering as a negative outcome, or you experienced your own punishment as a negative outcome, or both, right? And so basic things like pushing people and hurting people in direct physical ways, we have strong negative feelings about those because those things are associated with negative consequences quite reliably, right? And then the philosopher comes along and says, okay, I'm going to turn your moral world upside down with this hypothetical case.


Joshua Greene (00:10:28):

I'm going to give you a case where I stipulate that the greater good is promoted by this act of violence against an innocent person. And so what happens is you can think, okay, it's better to save more lives here, but that doesn't make that feeling go away, the feeling that it's wrong to hurt that person. And not only that, it's a good feeling to have. We don't want to live in a world in which people are comfortable, or not uncomfortable, with committing basic acts of violence, right? But it would be a mistake to let that feeling, which comes from a different context, like committing acts of violence for one's own benefit or out of one's own wants and aggression, apply to this weird kind of case where harming somebody promotes the greater good, and then let that be an objection to the idea of promoting the greater good in general.


Joshua Greene (00:11:29):

So the more general point is that if we understand where our feelings come from, we can have a better sense of when they are applying in a way that makes sense. Like, you know, you're harming this person for your own selfish gain, and so your feeling that it's wrong is onto something, as opposed to, you're having this negative feeling because those kinds of actions are usually selfish and destructive, but in this case it's not. Right? So that would be digging a little bit deeper into how I think understanding the psychology can help us think through the moral questions at a more abstract level.


Amber Cazzell (00:12:13):

Yeah. So as you're speaking about this, it seems to me that there's this general sense of a trajectory for improvement in the department of making decisions of moral weight. And I'm wondering if that's a fair statement or not. I guess I'm kind of getting at: do you consider yourself to be a moral realist in some way?


Joshua Greene (00:12:44):

So this is an interesting question, and there's no simple answer. If you ask me, strictly speaking, do I think that there are facts about what's right or wrong in the same way that there are facts about, like, is the sun larger than the earth? I have to say I think the answer is no, and I kind of wish that there were. I'd like to be a moral realist. But, you know, this is mostly the work that I did 20 years ago as a grad student: I just did my best to think through all the possible ways in which there could be truly objective facts about what's right or what's wrong, and I just didn't think any of them worked all the way. I mean, I think the standard would be: any rational being who can understand the question,


Joshua Greene (00:13:34):

if they say that there are no facts about what's right or wrong, they must be making some kind of mistake, as opposed to just having different values or different responses. And I never found a satisfying way to be a full-blown moral realist. With that said, I think that humans in general have a lot of convergence in their values. That is, nobody wants to suffer as an end in itself. People may think it's worth suffering for some greater cause, but just suffering by itself, most people don't think of it as a good thing for themselves or for the people they care about. People want to have positive experiences. And to put it in less abstract terms, people want to be free of violence. They want to have their basic needs taken care of.


Joshua Greene (00:14:23):

Increasingly, people want to be educated, they want to live in democracies, they want to live in a nontoxic environment. All of those things, I think, are pretty shared values. And more generally, if you look at people's values and you keep asking, okay, why do you care about that? Okay, but why do you care about that? If you keep asking the question until you can't iterate anymore, you ultimately come to the quality of people's experience, right? So if you ask someone, why do you go to work, they would say, well, you know, I enjoy my work. So that's quality of experience. But also, you know, I need to make money. And you say, well, why do you care about money? And they say, well, I've got to pay the rent. And you say, well, why do you need to pay the rent?


Joshua Greene (00:15:11):

They say, well, I need a place to live. Well, why not just live wherever? Well, you know, it gets cold at night. Well, what's wrong with being cold? Well, it's painful. Well, what's wrong with pain? Well, it's just bad, right? At some point that process bottoms out. And even though, if you ask people what they value, their answer is not pleasure and pain, like Jeremy Bentham, or suffering and happiness or something like that, because that's not the level at which we ordinarily think about these things, if you keep asking why, but why, but why, until you get to a point where you can't answer anymore, it's almost always going to come down to the quality of someone's experience, their wellbeing or their suffering. And so the thought is that the fact that we care about the quality of our experience, and those of at least some other people, provides a kind of common currency that I think is universal.


Joshua Greene (00:16:09):

It's not universal that this is the only thing that people care about, and it's certainly not universal that people care about other people's wellbeing in an impartial way. But if everybody cares about their own wellbeing and the wellbeing of at least some other people, and we want to have a system that we could all agree on as a reasonable standard for adjudicating moral disagreements, then you say, okay, everybody's experience matters, and everybody's experience matters equally, and so we should do whatever is going to produce the best overall experience for all affected by our choices. And that is essentially the idea of utilitarianism. But not only is utilitarianism an ugly word, I think that when you put it in terms of this formula, it's very easily misunderstood. And that's why I have sought to reframe utilitarianism as deep pragmatism, because I think it gives a more accurate picture of what it really means in practice to try to promote the greater good, taking into account all of our biases and all of the complexities of human behavior and the uncertainty of our actions and all of those things.


Joshua Greene (00:17:31):

I can elaborate on that if you want, but


Amber Cazzell (00:17:33):

Yeah, I would love that. I think a good chunk of the listeners of this podcast are academics, and they may or may not be familiar with your book Moral Tribes. And shout out to the book, it's a great book. For listeners who want to learn about this more in depth, I recommend reading it. But I would love it if you could go into what some of those nuances of deep pragmatism are, what it incorporates, and how it is immune to a lot of the common criticisms you hear about utilitarianism.


Joshua Greene (00:18:12):

Yeah, okay. So let me just reconnect this to your original question about meta-ethics, about whether there are any facts about what's right or wrong in a very strict sense. I wish I could believe in full-blown moral facts, but I can't, at least not yet. But I do think that we have this underlying base of shared values. It doesn't mean that all of our values are shared, otherwise we wouldn't be fighting all the time, but there is a core common currency there. It's kind of like everybody has a second moral language that they speak. People's first moral language is their feelings about particular actions and particular people and particular ideals. But the second language is the language of costs and benefits and the appreciation of impartiality, at least within a group of a certain size.


Joshua Greene (00:19:01):

Right. And so I may not be a full-blown moral realist, but I think that we have enough common ground to have workable, shared moral standards. So that's how to bring that full circle with your original question. Yeah. So, utilitarianism: I mean, you couldn't have come up with a worse name for a moral philosophy if you were trying to sell it. I think there are some shallow objections and then there are deeper objections. Some of the shallow objections are really just based on misunderstanding. So when we think of adding up costs and benefits, we typically think of it in terms of fiscal or economic decision making, where it's typically done from a selfish perspective. So, you know, adding up costs and benefits usually is done in a selfish, bean-counting kind of way.


Joshua Greene (00:19:46):

And that's just a misunderstanding. That is, we're talking about doing what's best overall for all humanity, or all sentient beings, not just for whoever's making the decision. Another one is the idea that the decision procedure should be one of actively adding up costs and benefits, right? So you go into the store and think, well, should I shoplift? What are the costs? What are the benefits? Right? And that's a terrible way to go about things. And this is something that John Stuart Mill, the second of the three great utilitarians of the 19th century, emphasized: you don't want to be out there with your moral spreadsheet all the time. For almost everything you do in everyday life, you want to cultivate good automatic, emotionally embedded habits, so that your basic tendency to be honest and helpful and law-abiding and all of those things just happens automatically.


Joshua Greene (00:20:49):

And if you think about it, this isn't an exception to the utilitarian principle; it follows from it. Because if you ask yourself which kind of world is likely to go better, a world in which people follow basic moral norms automatically, emotionally, without thinking about it, or a world in which everybody has to add up the costs and benefits for themselves with all of their miscalculations and biases? Of course it's better if people just have good moral habits, and the utilitarians recognized this from the beginning. So it's just a mistake to confuse the normative standard, the ultimate thing that we're aiming for, with a daily decision procedure, right? Being a utilitarian in practice means being a normal good person most of the time. It's only at the high level, when you're making certain important decisions,


Joshua Greene (00:21:53):

that the differences between being a utilitarian or a deep pragmatist really come through. Like, should I keep all of my money for myself, or should I devote a significant amount of it to people who I don't know personally but who could benefit enormously from my resources? Right, that's where it really matters. And certain policy decisions, and the kinds of politicians and policies that you would vote for, et cetera, and things like that. Another common misunderstanding of utilitarianism, I think, comes from how it's framed. For one, the word utility is terrible. So it gives this impression that, you know, if you ask somebody, is splurging on some funky piece of clothing that really makes you happy, is that a utilitarian thing to do?


Joshua Greene (00:22:46):

And superficially, it sounds like it's not, because utility is stern and serious and it's all about efficiency, whereas things that make you smile are not, right? You know, utilitarian conjures up images of parking garages and laundry facilities, right? And that's a mistake; it just follows from the misleading nature of the word utility. It's really about the quality of people's experience, which is improved greatly by, you know, the things that bring us joy. And then at the other extreme you might say, okay, so utilitarianism is about maximizing happiness, and that can lead to what I call the "my favorite things" understanding of utilitarianism, from the song from The Sound of Music: you know, raindrops on roses, whiskers on kittens. That it's about maximizing the things that make us smile, but is missing the things that are deep and important and meaningful.


Joshua Greene (00:23:51):

And I think that's a mistake as well. It's not happiness in the sense of immediate happiness triggers, but things that are important, like healthcare. So, you know, preventing people from getting a disease: you don't think of that as being about happiness in the way that you think of whiskers on kittens as being about happiness, but if you don't prevent that disease, it has a major effect on people's happiness, right? So the terms, utility on one end or happiness on the other, can be extremely misleading. And then another way that it's really misleading is this idea that, well, if the goal is to promote the greater good, then that means, you know, we're susceptible to kind of disastrous utopian plans. So a charismatic leader says we all must stand together and sacrifice this or that for the greater good.


Joshua Greene (00:24:48):

You know, à la Chairman Mao. And again, the utilitarian should ask, well, is it actually going to promote the greater good if you line up behind Chairman Mao, or is that just what he is saying? Right? So, you know, these are all in some sense basic misunderstandings of the philosophy, but they are so prevalent, and they add up so much, that it's startling how hard it is to get people to have the right idea. And then there are other, deeper challenges that aren't just about misunderstandings, and those are the places where I think the science really helps us understand the objections in a deeper way, and how they may be valid in a certain sense but are ultimately missing the bigger picture. And this is about things like making real sacrifices genuinely for the greater good, or whether we as individuals have obligations to use our resources to help unfortunate strangers, or whether we should stop eating factory-farmed meat, or meat altogether, and things like that. Those really are challenges to our intuitive common-sense morality. But I think that those challenges should be taken seriously, and can be defended.


Amber Cazzell (00:26:24):

Yeah, okay. There's a lot in there I want to ask about, and I'm trying to figure out what order to start in. So first of all, do you think that some of these objections really do arise out of a misunderstanding of the philosophy? Because I get the impression that one of the big hesitancies with this utilitarian philosophy is not necessarily a misunderstanding about how to characterize the common good, but almost a feeling that, well, this is a tautological cop-out, in a way, of saying, okay, we're just going to subsume deontological thinking and now call it utilitarian by saying, well, of course people are going to have these norms that they operate by. What are your thoughts on that sort of attitude towards utilitarianism?


Joshua Greene (00:27:35):

Yeah, so I don't think it's a cop-out. I think that there are two ways, essentially, to defend utilitarianism against intuitive counterexamples, and you can call them accommodation and reform. So accommodation is saying, no, no, no, utilitarianism doesn't have that implication. It's not saying that you should be out there with your spreadsheet. It's not saying that you should line up behind Chairman Mao. It's not saying you should be selfish or only care about raindrops on roses. I mean, those are all accommodations, saying it doesn't have the implications that you think it has. But there are other cases, which I alluded to before, where there's real reform, right? And what utilitarianism, deep pragmatism, says is: no, we in the affluent world really should be giving a lot more of our resources to people who can benefit from them.


Joshua Greene (00:28:26):

Right? And that is not trivial, right? It also undermines a certain kind of selfishness. So let's say you're a free-market fundamentalist, or kind of libertarian in the Robert Nozick vein; that is, you think that ultimately people should be free to engage in voluntary exchange. And if, by having people participate in the free market, some people get rich and some people get poor, and some people get even richer and some people don't, that's just how it goes, as long as everybody's free and as long as the transactions are all voluntary. Sure, you could take the money away from wealthy people and give it to poor people, but that would be wrong; they didn't earn it, right? Now, there are some libertarians who will say, we need to preserve freedom first and foremost because that promotes the greater good.


Joshua Greene (00:29:27):

And then it's an empirical question whether that's right, or to what extent, how much freedom is optimal in terms of promoting human happiness. But a fundamentalist libertarian like Nozick would say, sorry, freedom is just what matters fundamentally, and you just don't have the right to take from the haves and give it to the have-nots if those resources were not ill-gotten, right? And so I think that there is a real tension between fundamentalist libertarian values and deep pragmatist slash utilitarian values. And


Amber Cazzell (00:30:17):

Yeah.


Joshua Greene (00:30:18):

Well, so I don't think it's a cop-out, in the sense that it has real moral consequences. It's not just absorbing everything that everybody else says into a framework that makes any disagreements trivial. Yeah.


Amber Cazzell (00:30:37):

And I mean, I think that some of that also comes down to, and maybe this is a libertarian question of sorts: as I was listening to Moral Tribes, a question that kept coming to my head is, okay, you often talk about using deep pragmatism as a framework for making decisions that are going to affect whole societies, lots of people's lives. What is the appropriate way to do that? Who gets to determine what maximizes good, what maximizes that common currency?


Joshua Greene (00:31:16):

Well, so these are ultimately empirical questions, right? It's an empirical question whether higher taxes or lower taxes are going to lead to greater happiness and less suffering in our society. It's an empirical question whether we ultimately make the world better by attempting to overthrow autocratic dictators, or by being laissez-faire and letting their nations run their course, and so on and so forth. So how do we answer those questions? The same way we try to answer any other empirical questions, that is, using the best scientific evidence available. And there are some questions that you can answer with controlled laboratory experiments, but when it comes to questions about what works or doesn't work over long time scales in society, you can't really run controlled experiments. So you have to do the best with the imperfect evidence that you have, trying to be as precise and integrative and quantitative as possible. And this is what economists and political scientists and sociologists and psychologists and other kinds of policy experts try to do. But, you know, there's no formula, and there's no designated


Joshua Greene (00:32:40):

group of people who have the last word. I mean, you know, I would say science and democracy is the boring but true answer to that question.


Amber Cazzell (00:33:06):

Mm, yeah. And I mean, another thing that was coming to mind, and it's related to this point I was thinking about as you were speaking: Richard Shweder and I had a discussion on this podcast about his theory of moral pluralism. And I'm wondering what your thoughts are on, gosh, I don't know what the right term would be, maybe experiential pluralism: that there are different sorts of things that matter to people. When you speak about this common currency, I think to some degree everybody recognizes the value of, you know, community and divinity and autonomy; that's recognizable to everyone. But as Richard Shweder says, they're emphasized differently for every person. Is deep pragmatism amenable to that sort of values pluralism?


Joshua Greene (00:33:57):

Absolutely. Because, again, I think this is another common confusion: it's a mistake to say happiness means different things to different people. You know, you're made happy by going to prayer with your co-religionists, whereas I'm made happy by watching the ball game. Those aren't two different meanings of happiness; that's you're made happy by these things and I'm made happy by these other things, right? So, to put it this way: if someone in Orissa, India, where Shweder has done research, is made greatly unhappy by what that individual perceives as a defiling of the sacred part of her home, take that case versus a Westerner who cares a little bit about baseball and whose preferred team loses. Who suffered more as a result of those events?


Joshua Greene (00:35:03):

Right? And that's a meaningful question, and it's pretty plausible to say, okay, the suffering that this person experienced because of what happened within the framework of their values is worse than what happened to this other person within their framework. Right? So we don't have to think that everybody cares about the same things, or that there are no deeply important cultural differences, to think that at the end of the day happiness is happiness and suffering is suffering, even if there are very important cultural differences in terms of what causes different people to be happy or to suffer. So I think it's another one of these common confusions to think that because people have very different values at the most basic day-to-day level, this means that there isn't a common currency behind those values.


Amber Cazzell (00:35:57):

Well, yeah. So I appreciate what you're saying there, and I tend to agree, but I guess what I was trying to get at more is that, you know, even in my own life, as a single unit person (I'm going to forget about everyone else for a second here), it's still difficult for me to maximize even my own happiness when we're talking about things that cross spheres. So, you know, a lot of these trade-offs. For instance, going back to libertarianism, right? Or let's talk about even the virus right now: sacrificing some of your personal freedoms for the sake of health broadly. It's not always straightforward to tell, for myself, which one of these is going to maximize my own happiness.


Amber Cazzell (00:37:01):

So even then, if it's difficult on an individual level to determine that, because there are just fundamentally different values at stake, both of which I find meaningful, how do we then scale up to incorporating even broader things? So I guess I'm being a little bit reductionistic here and saying, okay, I thought if at our core we buy into moral pluralism, I'm not sure that common currency is something that is as obvious, or as simple as an empirical question of what makes us happier. Like, you know, we freeze, we get analysis paralysis and have difficulty determining what's going to make us happiest even on an individual level. Does that make sense, kind of what I'm asking here? Like, if you have two different job offers, and one pays me less but gives me more quality family time, and the other pays more but you have less family time, though maybe you have more money to make that family time higher quality, or something like that. These are not...


Joshua Greene (00:38:09):

So I think that there are two different questions here. One is, I mean, utilitarianism doesn't say that all decisions are easy, that you just plug it into the formula. Not at all. If you're a deep pragmatist or utilitarian,


Joshua Greene (00:38:26):

What that really means is that most of the hard work is navigating uncertainty. Right? And, you know, the problems that we focus on are the ones that are most uncertain. So the claim is not, oh, just use the formula, it's easy. That's not the claim at all. The claim is that there is no fundamental incommensurability across different values, within individuals or across individuals. So take your own example, right? Let's say you have two different job offers, one offering more flexibility and family time and the other offering a higher salary. Is that a difficult decision because you have fundamentally incommensurable values, or is it a difficult decision because it's hard to know exactly what level of personal flexibility is worth what level of income? But suppose I gave you the following choice and said, okay, in job A you have slightly less freedom, you know, one day a year you have to be in the office or something like that.


Joshua Greene (00:39:31):

Whereas in the other one you never have to be in the office, you can work wherever you want, but you make twice as much money. Now it's an easy choice, right? So it's not that there's some fundamental incommensurability between, say, freedom and family time on the one hand and salary on the other. If you twiddle the knobs enough, if you make the salary high enough and the difference in flexibility minimal enough, then you say, it's easy, I'll take the higher salary. And if you go the other way and say, oh, I'll make an extra $3 with this job, but with the other job I have all of this flexibility and family time that I'd enjoy? No, I wouldn't do that. I wouldn't give that up for $3.

Joshua Greene (00:40:16):

Right. So it's not about the incommensurability of the values. It's not that you can't make trade-offs between personal freedom and income. It's that sometimes the variables are set so that it's hard to make the trade-off. Right? In the same way that, with money, it's hard to decide whether you want to take the sure thing or go double or nothing, or triple or nothing, or whatever it is. So I think that both within individuals and across societies, there are enormous challenges in understanding what exactly the consequences will be and which consequences we would prefer if we were to experience them. But that doesn't mean there's some fundamental incommensurability between values within individuals or across individuals. It just means that some decisions are really, really hard.


Amber Cazzell (00:41:17):

Yeah. So, just to be clear, I want to make sure I'm following correctly: does this mean that you personally do not buy that values are incommensurate, or can be incommensurate?


Joshua Greene:

I think that for practical purposes, that's correct. There may be some people who have certain values that they truly would not trade off against anything. So if you are a devout whatever, right, and someone says, I'll give you a billion dollars, and I'll give a billion dollars to all of these starving children, but you need to denounce God: there may be some people who would never do it, even if they really believed that they would get the money and the children would get the money, because they just absolutely would never do it. So is that incommensurability, or does it just mean that they have something they care about more than everything else? There can be cases like that. But the fact remains that people ultimately do make choices, including very difficult choices, where the trade-offs involve things across domains. I think that in principle we're capable of weighing anything against anything else. We don't like it, but we can do it. Yeah.


Amber Cazzell (00:42:55):

Okay, this is really interesting, and, yeah, this is fun. I feel like I've been interviewing you about all your personal moral opinions now. Maybe I'll try taking you off the spot a bit here.


Joshua Greene (00:43:06):

No, it's fine.


Amber Cazzell (00:43:10):

As your work has progressed, my understanding is that you've started to ask, okay, how can we take deep pragmatism and really start to apply it to some of the broader issues we're facing? I would love to hear a bit about what sort of applied research you've been doing on that front.


Joshua Greene (00:43:33):

Yeah. So the two main projects going on now, and neither is published yet: one is about intergroup conflict, and the other is about effective giving, effective altruism. Okay, so I'll start with the intergroup conflict work. This is done with Evan DeFilippis, who's a PhD student in my lab and in the organizational behavior program between psychology and Harvard Business School. So, in my view, what makes the world go well? If you look around the world and ask where people are more likely to be happy versus unhappy, it's mostly places that have good governance: people who live in democracies, people who live in places where basic education and health care are provided. Those are the places where people tend to be happier. And so then the question is, okay, how do you get more good governance?


Joshua Greene (00:44:39):

What are the biggest obstacles to good governance? I think the biggest obstacle to good governance is tribalism. And we've seen that especially in recent years in the United States. There are so many things that we could do to make our country better: provide everybody with healthcare, provide everybody with good education, take the enormous wealth that we have amassed as a nation and distribute it more equitably, not in a way that denies people the enjoyment of the fruits of their labor, but in a way that gives everybody a fair chance. And the big obstacle to this, I think, is this us-versus-them dynamic, particularly between, you know, you can call it liberals and conservatives, but in some sense I think it's a mistake to really call the current Republican Party conservative. There's nothing really conservative about it. The Republican Party has really become the white Christian ethnic nationalist party. And so it's really that group that is extremely skeptical of governance that considers the wellbeing of all people, and not just the people who have traditionally dominated American politics. So how do you solve that?


Joshua Greene (00:46:13):

Based on, literally, a lot of the work that I put together in Moral Tribes, mostly not my own work: the theme is that trust is generated by mutually beneficial cooperation. And this is an idea that every social science has kind of hit on independently. In international relations, people have observed that nations that are trading partners don't go to war with each other. When you look at organizations that have successfully integrated, whether it's, early on, in sports or the military, it's been people working together and benefiting from that mutual cooperation. And so what Evan and I have been trying to do is harness that basic principle of cooperation. That is, our behavior is based on our feelings, and what we feel good about, when it comes to other people, or the people we view as on our team, is having a mutually beneficial cooperative relationship.


Joshua Greene (00:47:22):

And so with this in mind, we said, well, what would it take to get liberals and conservatives in the U.S. to feel like they're on the same team? And so we have been literally putting liberals and conservatives on the same team. We designed a quiz game where we have liberals and conservatives play as partners online, and we set up the game so that they benefit from each other's knowledge and are forced to make compromises, and they come out ahead financially if they do. And, you know, this isn't published, so I can't really talk about the results in detail, but our preliminary results indicate that this works: you can take people who are on opposite teams, who really don't have nice things to say about each other or each other's groups, and you make them partners in a game where they have to work together and trust each other and rely on each other's complementary knowledge, and they come out with better feelings. Not only do they like the people they're playing with, but they come out with better feelings about the other side.


Amber Cazzell (00:48:38):

I'm laughing a little bit, because I think I might've seen the prep for this on Twitter. Did you guys push a big thing about submitting quiz questions?


Joshua Greene (00:48:49):

Yeah, so that was us. We had a contest to get people to generate material for our quiz, and so that's what you saw, and that contest was really helpful. It's hard to get good quiz questions where you really maximize the difference: we want questions where the liberals are really likely to know the answers and the conservatives are not, or the conservatives are really likely to know the answers and the liberals are not. And we need this for both political things and non-political things. So we ran this contest, which was Evan's idea, to generate material, and it did a nice job.


Amber Cazzell (00:49:26):

That's great. That's so cool. I love researcher resourcefulness like that. Yeah.


Joshua Greene (00:49:32):

Yeah. And Evan is amazing that way.


Amber Cazzell (00:49:35):

That's awesome. Okay. So I'd love to hear a bit about the other line of research too that you've recently started.


Joshua Greene (00:49:42):

Yeah. So that one I really can't talk about yet, even the sort of approach that we're taking. But I can talk about one thing we've done with effective giving. We have a set of projects on what we call veil-of-ignorance reasoning. The term "veil of ignorance" comes from John Rawls, and Rawls's idea was that a fair society, or just society, is one that you would want to live in if you didn't know who you were going to be. And so, you know, Rawls asks, what organizing principles for society would people choose from behind the veil of ignorance? That is, if they didn't know whether they were male or female, in an ethnic minority or majority, or whether they were talented or untalented, or whatever.


Joshua Greene (00:50:35):

And Rawls came to a certain conclusion about this, very decidedly a non-utilitarian conclusion. Around the same time, John Harsanyi, the economist, independently had the same idea, also in the fifties. But he thought that it actually provided a kind of foundation for a utilitarian, or as I prefer, deep pragmatist, sort of approach. In any case, what we've been doing is applying this veil-of-ignorance idea to more specific dilemmas. Some of them are kind of trolley-like, related to self-driving cars and bioethical dilemmas and things like that. We have a preprint that we just posted about allocating ventilators during the COVID-19 crisis. But one thing we did with this is ask people about charitable giving, and we had people actually make a real choice. So what we say to people is, okay, there's this charity in India where $200 will fund two cataract surgeries.


Joshua Greene (00:51:35):

So you can restore sight in two people if you give this to the Indian charity. And then there's a charity in the U.S. where the same $200 will contribute to one person's surgery that will restore their sight. So basically the money goes farther in India than it does in the United States, and this is true. So you can just ask people, okay, where do you want the money to go? We say, we're really going to make a donation, and if your answer is picked, you'll decide where we give the money. And it's about 50-50: some people say India, some people say the United States, even though the money goes twice as far in India. But if we first say to people, okay, there are three people, two people in India and one person in the United States, and you're going to be one of these three people, but you don't know which...


Joshua Greene (00:52:20):

And it's an equal chance of being each person. So you have a two-out-of-three chance of being in India and a one-out-of-three chance of being in the U.S. If the money goes to India, then you have a two-out-of-three chance of being saved, that is, of having your eyesight restored. And if the money goes to the U.S., then you have a one-in-three chance of having your eyesight restored, because you have an equal chance of being each of these three people. Where do you want the money to go? And of course most people say, well, I'd rather it go to India, so I have a two-out-of-three chance of having my eyesight restored. And then we say, okay, we're going to actually make a donation.
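As a side note for readers, the odds Greene describes here can be sketched in a few lines of Python. This is purely an illustration of the arithmetic in the example, not code from the study:

```python
# Toy illustration (not from the study): your chance of having your own
# eyesight restored behind the veil of ignorance.
# $200 funds 2 surgeries in India, or 1 surgery in the US.
# You are equally likely to be any of the 3 patients (2 in India, 1 in the US).

def p_saved(donate_to: str) -> float:
    """Probability your sight is restored, given where the $200 goes."""
    patients = {"india": 2, "us": 1}     # how many patients the money treats
    total = sum(patients.values())       # 3 equally likely identities
    return patients[donate_to] / total   # fraction of identities that get treated

print(p_saved("india"))  # 2/3: most people prefer this behind the veil
print(p_saved("us"))     # 1/3
```

The point of the veil framing is exactly this comparison: once the choice is phrased as "a 2/3 versus a 1/3 chance of being saved yourself," the impartial option and the self-interested option coincide.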


Joshua Greene (00:52:58):

Where do you want the money to go? And after they've done that veil-of-ignorance exercise, thinking, well, what would I want if I didn't know who I was going to be, they're more likely to say, okay, you should actually give the money to India. Right? And so this is a case where getting people to step back and think in this kind of Rawlsian, impartial way actually gets people to make decisions that favor the greater good. And we show this is true across a bunch of different cases. And then recently, the thing we did with the COVID-19 dilemma, this is not about the number of lives saved but the number of life-years saved. So we say to people, okay, suppose there are two patients, one is 25 years old and one is 65, and there's one ventilator. If you give the ventilator to the person who is 65, then that person will live another 15 years, but the 25-year-old will die.


Joshua Greene (00:53:56):

Or if you give the ventilator to the younger person, then he'll live another 55 years, but the older person will die. And it turns out that the younger person arrived a few minutes after the older person. So if you do first-come, first-served, it will go to the older person. But you could decide that, no, we want to save the most years of life, so give the ventilator to the person who has more life to be saved. If you just ask people, should you do first-come, first-served and give it to the older person, or should you give it to the younger person in order to save more life-years, people are split in various ways, and it depends on how old you are. But if we first ask people, okay, suppose that you don't know who you're going to be...


Joshua Greene (00:54:38):

Again, doing the veil-of-ignorance thing. And I should say this is work led by Karen Huang; the COVID cases are also with Regan Bernhard, and all of this has been with my longtime collaborator, Max Bazerman. So back to the research: if we ask people, okay, if you had a 50-50 chance of being the younger patient or the older patient, would you want the ventilator to go to the younger patient or the older patient? And people overwhelmingly say, well, if I have a 50-50 chance of winning, I'd rather have my win be getting most of my life than just getting, you know, another 15 years at the end of my life. Right. And then after people work through that hypothetical, we say to them, okay, what do you actually think the policy should be for a hospital?
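The same expected-value arithmetic applies to the ventilator case. Again, this is just a sketch of the numbers in Greene's example, not material from the paper itself:

```python
# Toy illustration: behind the veil you have a 50-50 chance of being
# either patient, and only the patient who gets the one ventilator
# gains any life-years (55 for the 25-year-old, 15 for the 65-year-old).

def expected_years_gained(policy: str) -> float:
    """Your expected life-years gained, given who gets the ventilator."""
    gain = {"younger": 55, "older": 15}  # years each patient would gain
    p_being_each = 0.5                   # equal chance of being either patient
    return p_being_each * gain[policy]   # the untreated patient gains nothing

print(expected_years_gained("younger"))  # 27.5 expected life-years
print(expected_years_gained("older"))    # 7.5
```

So from behind the veil, giving the ventilator to the younger patient is better for you in expectation, which is why the exercise tends to shift people, including older respondents, toward the save-more-life-years policy.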


Joshua Greene (00:55:31):

And they're more likely to say that you should save the younger patient. Yeah. Wow. And the most dramatic effect is among older people. If you ask people who are 60 and older, or 65 and older, should it be first-come, first-served and give it to the older person, most of them say give it to the older person who got there first. But if you have them think through the veil-of-ignorance dilemma, it flips: then a majority of them say you should give it to the younger person. That's interesting. Why do you think that age effect is the case? Well, I think the age effect happens for a familiar reason, which is just self-serving bias, right? As an older person, you say, don't discriminate against me. But then when you say, okay, if you didn't know who you were going to be, would you want to favor saving more years? And they go, yeah. And then it's harder to go with your self-serving bias once you've thought it through in those terms. Yeah.


Amber Cazzell (00:56:34):

So in just our last few minutes here, I would love to hear how you hope your work with deep pragmatism is picked up and applied, whether that's in the real world or by other researchers.


Joshua Greene (00:56:52):

Mm. Well, I hope it will be. I mean, obviously I hope other researchers will pick up on what we're doing, but I'm hoping this will have effects in the real world. So, you know, my hope is that the work I described with the intertribal quiz game can be a proof of principle for a much larger effort of creating opportunities for mutually beneficial cooperation between people on opposite sides of tribal differences, whether that's liberals and conservatives in the United States or Israelis and Palestinians in the Middle East or whatever it is. And so I hope that these principles can be applied more systematically in the real world. And again, having this sort of proof of principle, assuming our results hold up, will help. And then likewise, you know, we have some other ideas about how we can encourage people to use their resources to help the people who need it most, and I would love for those things to be picked up by people who are really moving donation dollars in the real world.


Amber Cazzell (00:57:57):

Do you find that your own decision-making on some of these things has changed over the years?


Joshua Greene (00:58:08):

Yeah. I mean, I guess I have my personal life and my professional life as a scientist. You know, the biggest thing that I do personally is, like a lot of people in the effective altruism movement, my wife and I donate a percentage of our income to charity. And we generally follow the recommendations of GiveWell, which is an organization based in the Bay Area that does just unbelievable work, really trying to figure out, dollar for dollar, what kinds of charitable donations do the most good. And, you know, mostly they've pointed to things like distributing insecticidal malaria nets and things like that. But they're always looking for new opportunities and continuing to evaluate the organizations that they've promoted in the past. So, you know, it's not very original, but I think it may be as important or more important than anything else that I do. And then also, my wife and I both try not to eat factory-farmed meat. We don't at home; when we're out, or when we're traveling, we have sort of different rules, and, you know, with the kids, who are picky and stuff. But we've dramatically cut down on our consumption of factory-farmed meat, to the point where it's much closer to zero than to the typical American diet.


Amber Cazzell (00:59:40):

It's funny, as I was prepping for this interview, a friend of mine and I were talking. I'm vegetarian, and they were asking me why, and I tried to explain that my reasoning had to do with removing consciousness and stuff. And then she asked me, okay, but what about the human labor that goes into a lot of foods that are not fairly farmed and that are big in vegetarian diets, like quinoa? And it made me chuckle because, even though the conversation was not related to prepping for this interview, it instantly made me think about some of the challenges of deep pragmatism, and trying to identify the relevant scope for assessment.


Joshua Greene (01:00:34):

It's everything, right? I mean, take chocolate: research shows there's a lot of, effectively, slave labor that goes into the production of most chocolate. And, you know, you can buy fair-trade chocolate, and there's a debate about whether that goes far enough. But it doesn't have to just be about eating animal flesh; it can be issues related to how these foods are produced. And it's hard to keep track of all of it. But certainly nearly all of us can do better than we do. It's easy to get overwhelmed and say, well, I can't be perfect, so I might as well not try, and I think that's the worst conclusion we can come to. The right way to think about it is: you do your personal best. You say, okay, what's the low-hanging fruit? What are the best opportunities I have to improve the wellbeing of humans and animals? And I'm going to start by committing to doing those things. And then once those become second nature, and just feel like part of the background, you say, okay, what else could I do? And you just work through in order of priority, making each step challenging but doable. I think that is by far the best approach, rather than saying I'm a bad person if I'm not perfect, because that just leads to defeat. Right?


Amber Cazzell (01:02:10):

Yeah. Yeah. Thank you so much, Josh. I really appreciated getting to talk to you. It's neat to finally meet the researcher behind so much work that I've read over the years.


Joshua Greene (01:02:29):

Thank you. It's been a pleasure talking to you.