
The Good Samaritan and Moral Motivation with Daniel Batson

Dr. Daniel Batson has a PhD in Social Psychology and Theology, and is an emeritus professor in the psychology departments at the University of Tennessee and the University of Kansas. His distinguished career began in graduate school, when he and his adviser, John Darley, designed and conducted what is now famously known as the Good Samaritan study. This study subsequently fueled the so-called “situationist challenge.” He is also renowned for defending “empathy-induced altruism” against Robert Cialdini’s egoistic accounts of helping in a professional debate which played out over many years. In this podcast, we discuss these lines of work, how they inform his opinions regarding the situationist challenge, and his conceptualization of moral motivation.

APA citation: Cazzell, A. R. (Host). (2019, October 29). The Good Samaritan and Moral Motivation with Daniel Batson [Audio podcast]. Retrieved from


NOTE: This transcript was automatically generated. Please excuse typos and errors.

Amber Cazzell: 00:01:19 Hi everyone. I am with the legendary Dan Batson today and I am very excited to talk about his work and his opinions on the nature of morality, or elements that a lot of people consider to be moral that he might not consider to be moral. So it's going to be a fun conversation today, Dan. I like to start with kind of background information about how people became interested in the line of work they found themselves in. So could you tell me a bit about how you became interested in studying theology and psychology?

Daniel Batson: 00:01:56 Well, sure. As far as psychology is concerned, I was looking around for a major, and like a lot of adolescents I was curious to understand why I was doing the stupid things that I was doing and why other people were doing the things they were doing. Theology — well, when I started college I planned to become a minister, and so it was fairly natural that I went on to the theological seminary after college. After, I guess, probably about two years of seminary — it's a three-year program; at that point the credentialing degree to be a parish minister was called a BD, a Bachelor of Divinity — I was convinced that I had no business releasing my particular brand of religion on anybody else. And I also found that I was more interested in, and better suited to, research. So I took a turn.

Amber Cazzell: 00:03:22 So why did you decide that you shouldn't be a pastor? You'd mentioned you didn't think you should release your brand of religion. Why is that?

Daniel Batson: 00:03:32 Well, at that point I guess I would say I was agnostic. I wasn't sure that there was a God; I was pretty sure there wasn't an afterlife. And I was thinking that the God I believed in — if I did believe in a God — was not interested in whether I believed in him or her. So I didn't quite qualify as orthodox, although I did go ahead and get ordained. I never had a church. Thank goodness — both for the people in the church and for me.

Amber Cazzell: 00:04:14 Yeah. So how did you become interested in studying altruism on the psychology side? Was that partly born out of your interest in theology?

Daniel Batson: 00:04:27 Well, only indirectly. Yeah. After I decided I wasn't heading to the ministry, I did a doctoral degree at the seminary. Actually it was sort of interdisciplinary studies, and I had decided that I was interested in psychology as a research focus as part of that. So I was working with a guy at Princeton University — Princeton University and Princeton Theological Seminary, which is where I was, are two separate institutions, and neither one of them will allow you to be enrolled somewhere else at the same time, so I ended up doing the degrees sequentially, one after the other. The guy I had been working with was Harold Schroeder, and he worked on cognitive complexity. And so I was interested in the complexity of religious beliefs, and he also did a bit of work on creativity.

Daniel Batson: 00:05:37 And so using the creativity model for religious development is the part of it I was interested in. But then, between the spring when I applied for admission to the doctoral program at the university in social psychology and the fall, Harry Schroeder left, which left me a bit high and dry, because nobody else at the university was particularly interested in having a seminarian. I first tried to argue that maybe they would let me have Harry as my advisor even though he was gone, and they said no, that wasn't going to fly. So I was looking around for somebody else to work with. And I hadn't been familiar with John Darley's work on bystander intervention, but it certainly piqued my interest, because the idea of doing experiments to try to test some of the processes involved in sort of ethical or moral behavior was an exciting prospect. So we ended up collaborating.

Amber Cazzell: 00:07:06 Yeah. So was it during your time as John Darley's student that you did the Good Samaritan study?

Daniel Batson: 00:07:16 Yeah, that was actually the first semester I was in graduate school. At Princeton at the time — and it may still be true, I'm not sure — there was what was called a minor research project that you were expected to do in your first or second year. And I was trying to go through this program fast, because I had been in school for a long time, having already done one doctorate in addition to the Bachelor of Divinity and the undergraduate stuff, and so I wanted to move along. So we were doing a brainstorming session on doing a study. I guess it was John's idea — he was interested in the hurry factor, that is, whether putting people in a hurry would reduce the likelihood of helping. I thought that was a gimme; it seemed fairly obvious to me, so I was less interested in it. But as we talked, thinking about people being in a hurry and going past somebody who was in need got me thinking about the parable of the Good Samaritan. And so just as we bounced back and forth, we sort of both discovered this — I can't claim precedence or anything. The idea of framing it around the parable was exciting to both of us; it scratched my religious itch and my research itch as well. And I think John found it interesting. So away we went.

Amber Cazzell: 00:09:05 Yeah. I mean, it certainly has become one of the foundational studies that you learn about.

Daniel Batson: 00:09:15 Well, I think it's the irony of it that interests people. And I mean, it's interesting — I don't think it's profound, I don't think it raises particularly new ideas. But there is the irony.

Amber Cazzell: 00:09:35 And we've gone on talking about the study, but for people who are listening that aren't familiar with psychology, and specifically with studies related to bystander effects and situationism, could you briefly tell us about the premise and the findings of that Good Samaritan study?

Daniel Batson: 00:10:01 Well, yeah. What we were testing, basically, was what factors — both some dispositional factors and situational factors — are predictive of whether a person will help. And also we were measuring how they helped if they did; that tends to get ignored in almost all the reports of the study, but it was part of the study. The specific situational factors we were looking at were whether the person was in a hurry or not and whether helping norms were salient. The specific norm that we used was the parable of the Good Samaritan, and we set up a situation that was modeled on the parable of the Good Samaritan — so that's where the irony part comes from. On the dispositional side, in terms of personality factors, we were interested in different ways of being religious, different types of religious beliefs.

Daniel Batson: 00:11:09 And that was something I was particularly interested in, having been at the seminary — I was teaching part time at the seminary at the time and so had access to seminarians as participants. So the participants in the study were male seminary students. We had a situation where male seminary students were cutting through an alleyway to get to where they were going to record a talk. The talk was either on the parable of the Good Samaritan — that was the norm-salient condition — or on sort of nontraditional jobs for seminary students, which was a concern at that time. This was in 1970, and we were right after — well, still in — the Vietnam War, but after the civil rights movement and the age of Aquarius and things like that. So there were sort of nontraditional roles that were of concern. So that, plus the hurry: some people were told that they were late for this talk and needed to get there as quickly as possible.

Daniel Batson: 00:12:21 That was the high-hurry condition. Some people were told it was going to be a few minutes, but they might as well go on over. And they were given a little map that directed them through this alleyway. And some people were not given any specific information; they were told he should be about ready, so go on over. At a previous session we had measured these different dimensions of personal religion. As for whether people stopped and helped: they went by this person, our victim, that we had set up in the alleyway. He was slumped in a doorway. This was run in December, so it was cold in Princeton. And actually we were fortunate to have some gray, dingy days — which in December in Princeton are not rare, but we got some — and he was sort of huddled in a doorway with an army pea jacket on.

Daniel Batson: 00:13:26 And jeans, looking a bit scruffy, head down. He was instructed not to look at them or anything, but just to cough a couple of times as they went by, and if they stopped, he was instructed to talk to them — you know, say that he was okay, he had a respiratory condition, he had taken some medication, he would be fine. And we had some people that wouldn't leave him. So he was instructed to sort of get better quickly if they hung around, because we were sending people into the alleyway every 15 minutes.

Daniel Batson: 00:14:05 If we got off schedule, it was a serious problem. So what we found was that, as far as whether people stopped, the hurry manipulation had a very powerful effect. That is, people who were in a hurry were much less likely to stop than the people who had time on their hands. And the different dimensions of personal religion didn't make any difference, particularly, in whether people stopped or not. But where they came into effect is in how people helped, if they did stop, because we had sort of developed a coding system: people stopping and asking if he was okay; people telling the secretary where they were going to do the recording about him — that was an indirect helping response; people who offered help but, when he said he was okay, that was fine with them. But we had some people — we hadn't anticipated this, but we ended up adding an additional category — who were what we called super helpers.

Daniel Batson: 00:15:28 They wouldn't leave him. And that became a problem, since we were trying to be ready for the next person. We had him offered money, prayed over, people trying to take him to the infirmary, things like that. And what we found was that being a super helper — being one who wouldn't quit — was highly correlated with endorsement of orthodox religious beliefs, sort of deeply held beliefs. We interpreted that as more a response to an internal need to be helpful rather than to the needs expressed by the person, because he was saying he was okay, he had taken his medicine, and he was getting increasingly desperate trying to get them out of there. And some follow-up work — other studies that were done later — has tended to support that interpretation.

Amber Cazzell: 00:16:44 Got it. So the Good Samaritan study now is often kind of listed or categorized with the Zimbardo prison experiment and the Milgram study, as sort of this collection of studies that gave the situationist challenge its legs, so to speak. What was your experience of that as it was happening? Because a lot of these studies came out within a decade of each other, I think. What side of the debate were you on before running all of these studies, if that was even on your mind? And then as you were running the Good Samaritan study, and maybe now as well, what was your take on the situationist challenge?

Daniel Batson: 00:17:33 Well, the situationist challenge, at least typically, is a philosophical position, not a psychological one. That is, philosophers have kind of adopted that and made something of it, and for them the argument, at least as I understand it, is: gee, if these situational factors — whether it's being in a hurry or being in a good mood or having smelled fresh cookies or whatever — affect behavior the way they seem to do, then maybe there is no such thing as moral character, or some people go as far as saying there's no such thing as personality. And of course, I think that's absolutely nonsense. And I mean, that's certainly not a bias going in — I was the one who was particularly interested in the dispositional factors. And ironically, they had an effect; they just didn't have an effect on whether people helped, and that's where the reports typically stop.

Daniel Batson: 00:18:50 So that study actually provided evidence that both dispositions and situational factors can affect helping. It's just that the situational factor affected whether people helped, particularly in a situation where time was tight. And we actually did a follow-up study — not at Princeton; this is when I went to the University of Kansas — that indicated that the explanation for the hurry effect was competing responsibility. That is, they were of course urged to hurry, because we had told them they were late for this appointment and they needed to get there as quickly as possible. So I don't think it was just sort of their moral concern going out the window; it's that they were caught in a bind. So both things are true, I think. And I guess one other thing I would add is Kurt Lewin — I don't know whether that's a name that's familiar at all, but he's usually called the father of experimental social psychology. Early on, back in the 1930s, he was talking about behavior — that is, our actions — being a function of the person and the environment, the person in the situation. That is, both things work together, and that's where I would be, certainly.

Amber Cazzell: 00:20:39 Yeah. So why do you think it is that after all these years, the hurry finding is emphasized so strongly, and also that the study has become associated with situationism, even though, you're right, clearly it's talking about dispositions, and clearly there are other explanations for the hurry factor that could be related to a disposition as well? So why do you think it's gotten...

Daniel Batson: 00:21:12 Yeah, well, I mean, see, I wouldn't talk about the situationist challenge or a situational challenge; I would talk about a situational correction. And by correction, what I mean is that situational factors do affect our behavior a whole lot more than most people imagine. That's actually what the Milgram study showed. I mean, the Milgram study wasn't an experiment; there really wasn't a comparison group. But the comparison was to people's expectations. He asked people what they thought would happen, and then he compared his results to what they thought. And it turns out that situational factors are much more powerful than people imagine, and I think that's absolutely true. So a situational correction — we needed it, I think social psychology has effectively provided it, and I think it's important. But if you then move from that to say, and by the way, personality doesn't exist, I think you've gone way too far.

Amber Cazzell: 00:22:28 Yeah. So when I first asked you to come onto the podcast, you noted that you weren't sure of the fit, because I had asked about the context of the empathy-altruism debate between you and Robert Cialdini. You said that you don't think that altruism is moral. And I had said, okay, well, that right there is going to be an interesting conversation for the podcast. So I want to talk about that and unpack that with you. And first I think it might make sense to talk about the history of the egoistic-altruistic debate between yourself and Robert Cialdini. How did that debate first start?

Daniel Batson: 00:23:12 Well, the debate over egoism versus altruism has been around in Western thought for a long, long time — probably several millennia, maybe more. There's some argument about whether Plato and Aristotle had a very clear distinction the way we do now, but certainly since the Renaissance and the Enlightenment that debate's been there. So we can hardly claim to have created the debate. And part of it comes down to definitions. Egoism is simply a motivation to increase your own welfare. And altruism is a motivation, or motivational state, with the goal of increasing another person's welfare. Auguste Comte is the one who coined the term altruism, as a sort of other-concern to lay alongside self-concern. And the question is never whether we are egoistic or altruistic, because I don't think anybody doubts that we humans are capable of egoistic motives.

Daniel Batson: 00:24:47 We often are motivated to increase our own welfare. The question is whether that's all there is — whether everything that looks like concern for others is actually some subtle way of benefiting ourselves. So that's the issue, basically. And that turns not on whether we help other people or something like that; it's concerned with the motives, and it's a lot harder to determine the motives that are operating — the nature of the motives. That is, what's the ultimate goal? Is the goal to benefit myself, or is the goal to benefit the other? To play that out a little further: my goal could be to benefit the other, and it may also benefit myself, but if that's an unintended consequence, that still would be an altruistic motive. On the other hand, if I benefit the other but my goal in doing that is to benefit myself in some way, then that's an egoistic motive, even though in both cases I'm benefiting the other. So the behavior itself doesn't tell us anything. The question is why — what's the motive behind it? What are the goals that are involved? And so that's what the debate was about within experimental social psychology, because we had the methods — they're not easy, but we had the methods — to be able to tease apart these motives, and that's what got it going. And when I started it — I started my side of it, anyway — I assumed all our motivation was egoistic, and I ended up changing my mind.

Amber Cazzell: 00:26:41 Can you tell me more about the trajectory of that — why you changed your mind?

Daniel Batson: 00:26:47 Well, the results of the experiments. Actually, it started after graduate school, when I moved to the University of Kansas. Shortly after I got there, Jay Coke, who was a first-year graduate student and a very talented man — I guess he got assigned to me on a research line. As a new faculty member you're sort of in a fog, and I'm not sure how it happened, but anyway, they were nice enough to give me a research assistant, and it was Jay. And he said, well, maybe empathic emotions increase helping; maybe we should look at that. And I said, okay, sure, we can do that. And so we did a study where we manipulated the degree to which people felt empathy for a person in need. They were presented with a person in need, and we manipulated the degree to which they felt empathy.

Daniel Batson: 00:27:53 And then they were given a chance to help. The prediction was that the people in whom we induced empathy would help more than the people in whom we didn't. And that's what we found. That was not a great shock; it was a little comforting, because it's always nice to have your hypothesis confirmed. The interesting thing for me was talking with people afterwards. You know, all of these studies involve deception — what today is usually called heavy deception, because we have very mild deceptions in some cases, but this — I mean, we've got the choice of either confronting people with somebody who really is suffering, or the alternative, confronting them with a person they believe is suffering. And I much preferred the second alternative, because I think it's ethically more acceptable even though it involves deception.

Daniel Batson: 00:28:59 So anyway, we debriefed after these studies, and talking with the people afterwards — those in the high-empathy condition, they really seemed to care about Katie. This was Katie Banks; that was the name of the person who was in need. They really seemed to care about her, and it didn't seem to be just for some sort of self-benefit. And so that got me interested — I mean, I really hadn't thought about the nature of the motives before that, because I just assumed that it was self-interest that was driving it. But that then got us going on: well, how in the world would you possibly know? And trying to design studies to answer that question.

Amber Cazzell: 00:29:47 Yeah. So how, how did you determine what a motivation is ultimately about?

Daniel Batson: 00:29:53 Okay. Well, maybe I should say ultimate goal here. I'm not talking about some metaphysical ultimate goal, or even evolutionary — evolutionary biologists sometimes talk about ultimate and proximal causes — but I'm talking about what the person is after in the situation. And you can have multiple ultimate goals; it's not just that there's one. So the question is, how do you know whether a goal is ultimate, or whether it's instrumental — that is, just a stepping stone to something else — or an unintended consequence, something that happens but was not part of the motivational structure itself. And to do that, I think first you have to have something that you think — if we're talking about altruism here, or helping — is a likely source of an altruistic motive. That is, it makes sense to look in the best possible place.

Daniel Batson: 00:31:01 And what we focused on was empathic concern, in part because that's what we had looked at before and that's where the question arose for me. But it turns out that if you look back across the last several centuries, that's probably the most frequently proposed source of an altruistic motive. So we need to be sure we're looking in the right place, because we're not assuming that all motives are altruistic. But then second, given that source of an altruistic motive, we need to think about what egoistic motives might be produced by that source. And then we need to vary the situation so that we can disentangle those things. And then we run the experiment and see what we find. So, to make it a little more concrete: probably the most popular — well, over the last hundred years, I think, and probably more than that.

Daniel Batson: 00:32:20 And it was the most popular at the time we started thinking about this stuff. The most popular egoistic motive was what we call aversive-arousal reduction. That is: I see this person who's suffering; I feel empathic concern; that concern is an unpleasant state, because they're in an unpleasant state; and so I'm motivated to reduce my own aversive arousal, my own empathic concern. And that's why I help them — because if I remove their need, I remove the stimulus that's causing me to feel concern. So then the question is — well, maybe I should play through the altruistic side of that too. If the motive were altruistic, then if it leads me to help, it will also reduce my arousal. That's true. But presumably, if the motive is altruistic, that's an unintended consequence; that's not what I'm after. In this situation, I'm after removing their need.

Daniel Batson: 00:33:33 So what we lit upon as a way to try to distinguish those two motives — the empathy-induced altruistic motive, and the egoistic motive of empathy causing aversive arousal and aversive-arousal reduction — was whether it was easy or difficult to escape from continued exposure to the need situation without helping. We made the helping somewhat costly. I'm thinking of a particular study, and in this study our participants were observing somebody else performing a task and receiving random electric shocks to create aversive conditions — it was supposedly a study of task performance under aversive conditions that they were observing. The worker who was doing the task was supposedly another research participant, actually a confederate, and what they were seeing was not over closed-circuit TV; it was a videotape that we had created.

Daniel Batson: 00:34:42 But they thought it was real and happening at the time. The worker reacted very badly to the shocks because of a traumatic experience as a child. And then, unexpectedly, the participants were given a chance to take her place and take the shocks in her stead. That was the helping measure. So they perceived it to be costly — and I mean, we occasionally had people who were in tears trying to make this decision, so it had some impact. In the easy-escape condition, at the point they were given a chance to take her place, if they decided not to, they could just fill out a brief questionnaire and then leave, and they would see no more of her performance. In the difficult-escape condition, they expected to continue watching additional trials.

Daniel Batson: 00:35:49 If they didn't take her place, they would continue to watch, because she was going to go on and do these trials if they didn't take her place. So they were going to have continued exposure. And we assumed that if the motive is to reduce your own aversive arousal — empathic arousal caused by seeing her suffer — then in the easy-escape condition we would expect people not to help; they would just leave. And in the difficult-escape condition they would go ahead and help, because otherwise they're going to have to watch her and continue to feel the aversive arousal. And that effect was very strong and exactly in that direction. The interesting thing is what happened for the people in whom we had induced empathic concern — because we were trying to see, could it be altruistic? The idea would be that if it's altruistic, then even under easy escape they should want to help, because the only way they can remove her need is by helping her; that's the only way out for her. And we found that whereas under easy escape the people who felt low empathy were much less likely to help than the people who felt high empathy, under difficult escape

Daniel Batson: 00:37:22 both people feeling low empathy and people feeling high empathy were likely to help. And that's the pattern we would expect if the motivation evoked by the empathy was altruistic, not aversive-arousal reduction. So it's that kind of logic, but you have to develop a different instantiation of the logic in order to test different egoistic alternatives. And there are a number of egoistic alternatives that we have tested — Bob Cialdini has suggested a couple of them, but there are others that he did not suggest that we've had to test as well. So the debate, and the sort of testing of this, went on from around 1982 to around 2000 — so about 20 years.

Amber Cazzell: 00:38:32 Wow. And I mean, that debate, as you've noted, has been going on for a long time, is likely to continue going on for a long time. So

Daniel Batson: 00:38:41 I don't think so. Not that, not the testing of this.

Amber Cazzell: 00:38:45 Oh, not the psychological testing of it.

Daniel Batson: 00:38:48 Well, yeah. I mean, people who have looked at the research and have actually reviewed it — which takes a long time, because there's a lot of it. I mean, it certainly is true that people who were actively committed to a position before it started may still be committed to that position. But my impression is the evidence is clear, and there aren't any current challenges.

Amber Cazzell: 00:39:26 So you had mentioned that this is different than sort of an evolutionary kind of take on what egoism might mean — so maybe I'm confounding them. Certainly it seems like evolution is one of the most dominant frameworks that psychologists are operating out of now. When thinking about altruism, there's always sort of this connection to evolutionary fitness as an explanation for it.

Daniel Batson: 00:40:01 Well, they're just talking about something different. They do call it altruism, but evolutionary biologists are talking about behavior that reduces an individual's reproductive fitness. So, number one, they're not talking about motives. And actually, somebody like Richard Dawkins is particularly clear on this — he wrote The Selfish Gene — and you know, he says we're not talking about motives or intentions; we're talking about behavior. But the other thing is, they're talking about behavior that reduces the reproductive fitness of an individual relative to — some people will say — conspecifics. I actually think it should be more general than that, because presumably you could be altruistic towards other species as well. And we sometimes are.

Amber Cazzell: 00:41:11 That's interesting. I've always heard of altruism from an evolutionary framework — like, all the explanations that I've seen have been trying to explain how altruistic behaviors increase the fitness of genes, not decrease it.

Daniel Batson: 00:41:30 Oh yeah. Well, the evolutionary explanations are saying evolutionary altruism doesn't exist, at least as gene-type explanations. So they're saying this looks like the individual is reducing its reproductive fitness, but in fact — if we're talking about kin selection — the genes are increasing their likelihood of appearing in the next generation, because the people who are helped are related, and they have a better chance of putting more of those genes into the next generation than does the person who's doing the helping. It's that kind of logic. Or reciprocal altruism, which is basically symbiotic behavior: it's to my benefit to help now, and I'll get help later from somebody, whatever. So yeah, it's talked about as altruism because it has that surface similarity. Now, if you go to what's called group selection, there again it may not be to the benefit of the individual gene, but it's to the benefit of the survival of the group.

Daniel Batson: 00:42:48 The people who argue for that — that remains a controversial position. It was strongly criticized in the 1960s and seventies, but is more common now. But again, they're simply talking about something different. And you can do a little two-by-two table: on one dimension, what I would call evolutionary egoism and evolutionary altruism — that is, does this reduce the reproductive fitness of the individual — and on the other, individual egoism and altruism as I'm talking about it, that is, the motives in the proximal situation, the immediate situation. And there's no correlation between those two; there doesn't need to be, there's no logical necessity. You could have altruistic motives that are based on evolutionary egoism. So to know that there's evolutionary egoism tells us nothing about the nature of the motives that are operating.

Amber Cazzell: 00:44:07 Yeah. So in your more recent work, you've identified two other motivations that we haven't talked about that could lead to prosocial behavior: collectivism and principlism. Could you tell me about those?

Daniel Batson: 00:44:26 Well, these are all goal-directed motives. Egoism is a motivational state with the ultimate goal of increasing my own welfare; altruism is a motivational state with the ultimate goal of increasing another individual's or individuals' welfare. Collectivism is a motivational state with the ultimate goal of increasing some group's welfare, a group or collective. And principlism, that's a manufactured term, and I'm not entirely happy with it, but it's essentially moral motivation. It's motivation with the ultimate goal of upholding some moral principle. Standard. Ideal.

Amber Cazzell: 00:45:10 Yeah. So it seems like you identify only principlism as a moral motivation, and not altruism.

Daniel Batson: 00:45:21 No, that's right. As a motive, yes. Because its ultimate goal is to uphold the moral principle, standard, or ideal. That's right. And the others could produce behavior that is judged moral. Any of the other three can produce behavior that's judged moral, but they also can produce behavior that's judged immoral.

Amber Cazzell: 00:45:53 So that reminds me a lot of Kant's sort of sense of moral duty, or moral obligation, I forget exactly what his term is, but this idea that it's only moral if the intention is that; it's not about pleasure or some of these other motivations. Right?

Daniel Batson: 00:46:17 It is consistent with that part of Kant, but it's not just a Kantian sense of principles. That's one of the reasons I'm not entirely happy with the term principlism, because that's what a lot of philosophers, particularly, assume I'm talking about immediately: Kant. But a, you know, utilitarian principle, that you should do what produces the greatest good for the greatest number, is definitely not Kant. But that would be, if that's a person's ultimate goal, to produce the greatest good for the greatest number, that's principlism by my definition. A principle that says, do no harm, that's a principle. If that's the ultimate goal, that's principlism. But of course you could say, I don't want to do harm, but I don't want to do harm because I'd feel guilty if I did. And that's an instrumental moral motive, if you will, in the service of an egoistic motive: avoiding guilt.

Amber Cazzell: 00:47:39 Yeah. So I might need you to correct my logic in thinking about your work, because I identify most strongly, probably, as a researcher of virtue ethics. And when I was reading your definitions of these four different types of motivations, and this idea that only principlism is a moral motivation and that the other three are not, I sort of had a difficult time wrapping my mind around how meaningful or practical principlism is. So for instance, it seems like we could think about motivations like altruism or collectivism as just habituated principles that were initially almost like a moral heuristic, if you will. So you might have this general principle that it is good to help others in need, and that can habituate over time into altruism, but have had an origin in a moral principle that a person is aiming at. I'm not sure how useful or practical principlism is in sort of this messy world of, you know, daily ethical decisions.

Daniel Batson: 00:49:32 Oh, I actually think it's quite useful. And virtue ethics is an example; that leads to principles. Again, I'm using principles very broadly here, and to some degree relying on the dictionary, because if you look at what moral is according to the dictionary, it's of or concerned with principles of right and wrong conduct. And I think they're using principles very broadly there; it would certainly include virtue ethics within that. Now, the situation you raise of, well, maybe it started as virtue ethics and it then becomes altruism. Well, I think we may have slid on the definition of altruism a bit, but maybe not. If it really does become altruism as I'm defining it, then it's not that the principle is manifesting itself. It has evolved; it's something different now. Because if the goal in this situation is to increase the other's welfare, wherever that came from, that would be an altruistic motive by my definition. If the goal, on the other hand, is to be fair, for example, which is not a minor motive and not something that's unimportant in this messy world, that would be principlism.

Amber Cazzell: 00:51:08 So, in order for it to be principlism, does the principle need to be in conscious awareness as a person is deliberating?

Daniel Batson: 00:51:20 Not necessarily, no. Nor do people need to recognize it as moral at all, if that's what it is, though, by somebody's definition of moral, and definitions of moral change. It's a principle of right or wrong conduct. So, I mean, for a good Randian, an Ayn Rand follower, you know, to pursue self-fulfillment and aggrandizement is moral.

Daniel Batson: 00:52:06 Yeah, that's a principle, and it's not egoism for a Randian. They're doing it, supposedly, for moral reasons.

Amber Cazzell: 00:52:16 I see. So

Daniel Batson: 00:52:21 I wonder, but that's fine.

Amber Cazzell: 00:52:25 Yeah.

Daniel Batson: 00:52:26 I mean, Rand has, you know, her little book, The Virtue of Selfishness. She could claim herself as a virtue ethicist.

Amber Cazzell: 00:52:38 So, I mean, I think it's pretty clear in the conversation that we're having, but just to make the conversation more explicit: it seems like a lot of historical atrocities are done in the name of principlism, and that applying this moral motivation doesn't always result in moral action. And I think that this might be the point of your recent book, the What's Wrong with Morality book?

Daniel Batson: 00:53:14 Well, that's one of the points. I mean, when you say it doesn't necessarily lead to moral action, usually what's going on there is that the person who's making the judgment about whether it leads to moral action has a different set of morals from the person who is doing the acting. So we've just got a conflict over what is moral. And that's not what that book is about. What that book is about is: whatever morals you adopt, whatever you think is moral, whether it's your virtue ethics position, a utilitarian position, a Kantian position, whatever, if you act to uphold those standards or ideals in this given situation, then you have a moral motive there. And I'm not saying that's the only motive you have. You may have other motives too, but you have a moral motive.

Amber Cazzell: 00:54:23 Yeah. So what is the premise, then, of the "what's wrong" part of What's Wrong with Morality?

Daniel Batson: 00:54:33 Well, the "what's wrong" refers to the fact that morality doesn't seem to do its job, as you suggested. That is, it seems to lead to things that many people, at least, would not think are particularly moral. And so the book is trying to explore the possible reasons for that. After setting up the basic framework, the second chapter is about, you know, personal deficiencies. So we're back to the situationist, person-situation issues. That is, there are things within us as individuals and in our development that can lead us to fail to live up to our own moral principles. And this is trying to do the evaluation internally; that is, the same person who's doing the acting is doing the evaluating. And then the third chapter is about situational pressures, some of the studies you talked about, with, you know, situational pressure leading people to do things that in retrospect they would think are wrong.

Amber Cazzell: 00:55:58 So you spoke about

Daniel Batson: 00:56:01 Well, let me go on and mention the bulk of the book, which is about motivational problems. And I focused specifically on what I call moral hypocrisy, as opposed to moral integrity, or principlism. That is, if my ultimate goal is to uphold the principle, I would call that moral integrity: I hold the principle, and that's what I'm trying to do. Moral hypocrisy is a motive with the ultimate goal of appearing moral without actually being moral, if I can. Or maybe a better way to put it is: to appear moral while, if possible, avoiding the costs of actually being moral. And in order for that motive to be effective, I have to appear moral not just to other people, but to myself. And so that's a form of egoism. That is, I'm wanting to appear moral, presumably to get the self-benefits that accrue to the person who is moral. But it's egoism that is behind the mask of morality. So we did research to try to tease apart whether, when people are acting morally, it's because of moral integrity or moral hypocrisy. Same logic as we did with the egoism-altruism work, but in a different domain.

Amber Cazzell: 00:57:49 Yeah. So what is entailed, in your view, in having taken a good moral action? Clearly there needs to be the right moral motivation, so principlism involved.

Daniel Batson: 00:58:10 Not necessarily. Well, I mean, there's the right moral action for the wrong reasons, right?

Amber Cazzell: 00:58:18 Right. So I guess what I'm trying to get at is, in order to have the best moral, I'm not even sure what the right language is, but we want to combine, right? We want to combine good motivation, so principlism, with the right behavior or action that ensues from that, ideally. Right?

Daniel Batson: 00:58:46 Well, it's conceivable that you don't need moral motivation to produce moral actions. I mean, there are people who have argued that way at times. That is, you can just put people in a constrained situation so that they have to act morally. Actually, some of the recent work on moral behavior, in the last 10 years, or maybe 15 by now, particularly in behavioral economics, tends to suggest we just need to sort of handle the situational constraints. And I don't think that'll do it, personally, because I think we're way too clever; we can dodge the situational constraints unless somebody's got a gun to our head all the time. And that's not gonna happen, I don't think. I hope not. So I guess the way I tend to think about what I understand to be the issues you're raising is: how do we get these different motives, each of which has both strengths and weaknesses, to work together in a way that produces what at least we consider a moral outcome? And again, whether it's moral or not is always somebody's judgment. There's no, I mean, something doesn't come with "moral" stamped on it. That's something we impose.

Amber Cazzell: 01:00:42 I think that that's interesting, and I'm not sure that everyone would agree with your latter statement. So it sounds like you take a position of moral relativism.

Daniel Batson: 01:00:58 Well, I'm a researcher and a scientist. I'm not trying to take a normative stance here. And so I would like to have an analysis that allows me to make sense both of moral relativism and of moral absolutism, or realism, or whatever you want to call it. That's something that the person themselves holds. I mean, if somebody wants to say, well, you know, I know what's moral, and I can, you know, present empirical evidence that this is moral, okay, I'll certainly be willing to listen to the evidence. I'm skeptical that that kind of evidence is going to be forthcoming. I mean, the person could certainly say, I know what's moral, and most of us do say that, but that doesn't make it moral, I think. I think morality is a human construction. I mean, I know there are evolutionary positions that sort of say we have a module that is, you know, kind of morally sensitive, and I don't think it's genetically based. I think certainly, genetically, we have the capacity for morality, but then we have the capacity, genetically, for anything we can do and anything we have. So I don't think that tells us a whole lot.

Amber Cazzell: 01:02:41 Yeah, interesting. So, we're about out of time, but as a final sendoff, I'd love to know what you're thinking about and working on these days, specifically.

Daniel Batson: 01:02:56 Well I've retired.

Amber Cazzell: 01:03:00 Yes, congrats on that. Is that nice?

Daniel Batson: 01:03:03 Well, I retired in order to have more time to write, actually. So what I'm working on, and have been working on, is trying to do more book-level projects instead of actively collecting data. And it seemed like the best way to avoid continuing to collect data and write up research in articles and stuff was to retire, because then I don't have a lab and I don't have the data. A dangerous strategy, but so far it seems to be working okay. And I felt like there were some book-level projects I wanted to get done before I was either dead or totally gaga. So I hope I'm not totally gaga yet, but

Amber Cazzell: 01:04:01 No, no, you're not.

Daniel Batson: 01:04:02 So since retirement, I've done, I guess, two books on altruism and one on morality. And now I'm working on one on empathic concern. I mean, I have to beat the drums that are in my band. I'm not gonna spin off and start talking about things I don't know anything about, at least not yet. That's what I'm working on, slowly, slowly. And I think it's going to be an interesting call as to whether I finish it first or it finishes me first. We'll see.

Amber Cazzell: 01:04:49 Oh, well, thank you so much, Dan. This conversation has been a fun one, and it's been enlightening for me. I learned a lot, even though I thought I was fairly well versed in these topics. Now I know that I'm not. So thank you for the conversation, and for illuminating a lot of the background behind these seminal works.

Daniel Batson: 01:05:16 Well, I find it an absolutely fascinating area, but it's also an incredibly complex area, both in just all the things that are going on, and taking account of them as best you can, but also, in addition to just what's there, in what we bring to it, because all of us have our own investments in these things. And that's one of the reasons I think it's so important to do scientific work on this, because if done well, I think the major virtue of science is it can show us we're wrong.

Amber Cazzell: 01:06:01 Wonderful, Thank you.

Daniel Batson: 01:06:01 Okay.

Amber Cazzell: 01:06:03 Thanks for listening. If you have questions, comments, suggestions, or requests, contact me at . The Moral Science Podcast is sponsored by ERA INC, a research and design think tank that's reinventing how people interact with each other. Music throughout the program is Microobee by Keinzweiter and can be found at

