
Morality is Relationship Regulation with Tage Rai



Dr. Tage Rai is the Associate Editor for social sciences at Science magazine and a research associate at MIT's Sloan School of Management. His research focuses on moral conflict, violence, and personhood, and has been published in top academic journals, including Nature, the Proceedings of the National Academy of Sciences, and Psychological Review, as well as in a trade book titled Virtuous Violence, published in 2014. In this podcast, we discuss his work with Alan Fiske to develop the relationship regulation theory of morality: a theory that moral judgments and actions stem from our desire to maintain certain types of relationship categories.


APA citation: Cazzell, A. R. (Host). (2019, November 26). Morality is Relationship Regulation with Tage Rai [Audio podcast]. Retrieved from https://www.ambercazzell.com/post/msp-ep20-TageRai


NOTE: This transcript was generated automatically. Please excuse typos and errors.


Tage Rai: 00:01:13 So I grew up in central Florida, you know, it was kind of Southern. Wasn't the best education maybe. And so in a lot of ways, when I was growing up, and again in college, I felt like, man, I was kind of at a disadvantage. But then, you know, as I started to get more into psychology, there was this really weird thing where you would hit these theories, and you would talk to a psychologist even, and they were really bending over backwards to explain differences between people and differences between values by saying, Oh, well actually, you know, those conservatives really agree with us, they're just mistaken about something. They just have mistaken information in some way, as opposed to a much simpler explanation: that there might actually be competing values. And so, you know, when I was in college, I read this book called Culture of Honor by Richard Nisbett and Dov Cohen.


Tage Rai: 00:02:26 And that book was all about kind of where I came from. These cultural norms around honor, and how Southerners were quicker to respond with violence to insults and things like that. Now, you know, some of that work has held up, some of it hasn't. But it really got me into this space. And then when I went to grad school, what I was actually interested in studying was the cognitive science of religion. But as I started getting more into it, it became clear that so much of religion really is morality. And that was right around the time that moral foundations theory was picking up and becoming a hot area. And again, my advisor was an anthropologist, and so I was dealing a lot with disagreements across cultures.


Tage Rai: 00:03:25 And what I was especially interested in is, you know, how can it be that you have the exact same action, but Western liberals think it's absolutely important to condemn it, and another group of people don't just accept it, they actually think it's praiseworthy, that it's the right thing to do, and that if you didn't do it, that would be wrong, even for some horrific act of violence. And when you started going through the literature, so much of it really pushed this idea that, Oh, well actually there are universal preferences, right? This was really popular in the moral education literature especially. And to the extent that there are disagreements over equality or violence or anything like that, that's really about social biases that are sort of interfering with your moral senses.


Amber Cazzell: 00:04:19 Could you explain a little of what you mean by the bias piece? Maybe with an example?


Tage Rai: 00:04:27 Yeah. So, a classic line of work in the moral development literature would be the work by Elliot Turiel, and that work might say, Hey, look, everybody has kind of universal aversions to harm, so they don't want to harm other people, and they disapprove of intentionally doing it. They're really concerned with rights and justice as well, which concretely ends up transforming into something like equality for all people. Then when you're faced with something like massive gender inequality within a culture, you're faced with this question of, well, how does that happen? People seem to believe it's the right thing to do. And the solution that was generally put up at that time, and that's still popular, is to say, Oh, well, people's sense of morality is that they value nonviolence and equality, but certain kinds of social biases creep into cultural institutions (power dynamics, historical determinants) that then overpower your moral intuitions and produce something like inequality.


Amber Cazzell: 00:05:53 Okay. So in the Turiel and Nucci-Smetana tradition, you're saying that they would consider conventions an amoral aspect of life that biases the moral realm.


Tage Rai: 00:06:08 Yes, absolutely. So, you know, in the same way they would say that, Oh, well, if you're doing something because an authority told you to do it, you're obeying authority, and that doesn't really count as moral; that's a social convention. It's only if you internally believe it's the right thing to do that something counts as a moral belief or moral intuition. And in practice that actually cut a lot of the morality literature off from the rest of social psychology. And so there were people like me and others who felt like, Oh, well, we need to actually bring morality back into social psychology. You know? And that maybe this cleaving between moral psych and social convention and stuff is sort of an artificial line. And that maybe a lot of the moral diversity we see in the world isn't a mistake. It's not just a difference in information or something like that. Maybe people actually have really competing values, and then we need to figure out what those values are.


Amber Cazzell: 00:07:25 Yeah. And so just to reiterate, it sounds like you're saying cross-cultural differences in morality, in sort of the Turiel tradition, have been relegated to a classification that simply calls them amoral, rather than recognizing that these might represent truly moral cross-cultural differences.


Tage Rai: 00:07:53 Yeah, absolutely. So another way to think about this is in the sort of Haidt, moral foundations type approach. What I would say is that the classic view only counts harms and rights, or in this model harms and equality and fairness, as moral. And then the other stuff, moral judgments that have to do with obeying authority or supporting your ingroups or, you know, purity, that stuff's not actually moral, in spite of the fact that all the people in these cultures really seem to talk about it in moral terms.


Amber Cazzell: 00:08:32 Wait, so if we could back up for just a minute, can you paint sort of the social behind-the-scenes structure of what's going on? Because my understanding, and please correct me if I'm wrong about any of these branches, is that your doctoral dissertation chair was Alan Fiske, is that right? Yes. Okay. And then both Jonathan Haidt and Alan Fiske were students of Richard Shweder. Is that correct? Or am I wrong about Alan Fiske?


Tage Rai: 00:09:03 That is correct. And then Jon Haidt was actually a student of Alan Fiske and then did a postdoc with Richard Shweder.


Amber Cazzell: 00:09:15 Oh, okay. Fascinating. It's always fun for me to learn, because it's kind of neat to see how researchers have influenced each other. So even though Jonathan Haidt was a postdoc of Richard Shweder, it sounded to me like you're implying that Jonathan Haidt is less morally pluralistic than you would consider yourself to be. But maybe that's not correct. Maybe that's not fair.


Tage Rai: 00:09:49 No, I think he's probably less pluralistic than I am. But you know, if we take the full space of moral psychology research and cut it in half, I think to a non-specialist the differences between the relationship regulation people and the moral foundations people are kind of small potatoes compared to the difference between us and, you know, the domain theory folks or the harm hypothesis folks or things like that.


Amber Cazzell: 00:10:27 So could you tell me a bit more? You're saying that Jonathan Haidt seems to think of morality as a matter of harm and care. But when I look at moral foundations theory, I think of him as trying to say, no, purity and authority and hierarchy, these things are actually moral foundations.


Tage Rai: 00:10:48 Yeah. So, sorry, that's a miscommunication on my part. I was saying that he does think of all that stuff as moral, and the Turiel folks would've only counted the harm and fairness.


Amber Cazzell: 00:11:01 Okay. I see.


Tage Rai: 00:11:03 Much of Jon's work was really saying, Hey, look, you know, we'd been only counting two of these foundations, and everything else we'd been calling nonmoral bias or amoral or something. That stuff should be included too. His stuff was really picking up right when I was getting into this. And in the world of people who do this sort of work, especially when you're looking across cultures, you look at the disagreement, and one camp would say, Oh, those disagreements are sort of ephemeral. They're kind of accidental. They don't reflect actual moral differences. And that's true whether you're a domain person, a harm hypothesis person, or a universal moral grammar person. But then there's the other camp, people like me and the relationship regulation folks and the moral foundations folks, who say, no, no, there's all this moral diversity that actually is moral. It's just stuff that we can't categorize as harm and fairness.


Amber Cazzell: 00:12:18 Okay. So now, as Jonathan Haidt's work is kind of unfolding and popularizing, tell me about how you moved from looking at what's going on with his work to creating this relationship regulation theory.


Tage Rai: 00:12:37 Yeah. So the key difference is that, in the end, even his theory and that kind of work is really still about content. The claim is that morality is really about actions. Some actions are fair, some actions are unfair, some actions cause harm, some actions don't, some actions defy authorities, some actions go against ingroups, whatever. And in some cases that really fits, with things like going against ingroups or authorities, or supporting them. But in other cases, like some actions being pure or impure, the way he casts those is really non-relational. And what I really wanted to argue is that actually, if we step back and think about why our sense of morality would evolve in the first place, it really shouldn't evolve to be about particular kinds of actions.


Tage Rai: 00:13:43 It should be the case that actually any given action could be moral or immoral, and what really matters is the social context, and especially the social relational context. And so then the question of figuring out the basis of morality isn't about figuring out, well, what are the kinds of content that matter? Instead it's about figuring out, well, what are the kinds of relationships that matter, what are the fundamental relations that people engage in across the world, and what are the moral motives within those relations that are really driving them? And that gets you to a very different place than morality being about things like how you're supporting your group, or how you're supporting authority, or how you're supporting these other kinds of more market-based relations or equality-based relations, where any particular action might be good or bad.


Amber Cazzell: 00:14:42 Yeah. So just taking a more relationship-centric approach to morality. So let's go ahead and talk about those various relationships, because this is something that's new to me, and maybe I'm just naive, but hopefully other people will find discussion of those relationships valuable. So what are those four styles of relationships?


Tage Rai: 00:15:10 Sure. So I should start by saying, you know, you can have a kind of relationship regulation view, that morality is really about the social relational context and not about particular kinds of actions, without adhering to relational models theory, which is the thing that sets out the four kinds of relating. So you could have a different kind of taxonomy, right? I use this one because I thought it was the best, but if you were someone like Margaret Clark, you would say there are only two kinds of relations, communal relations and exchange relations, or if you were somebody else, you might say there are five kinds of relations or whatever. Whichever one of those you choose is sort of secondary to the bigger issue of whether it's about relationships. But in order to get any traction on this idea, you have to use some sort of taxonomy of relations, otherwise you wouldn't be able to move forward. And so I use relational models theory. This is how I ended up working with Alan Fiske, because he's an anthropologist and I'm actually a cognitive psychologist, but we had this intersection. And he had developed, based on a lot of ethnographic work, this theory of how people relate everywhere.


Tage Rai: 00:16:45 And the claim was that people tend to relate in one of four ways. It's not always pure, you can connect with the same person in lots of different ways, but it's going to fall into one of these formats. So the first one is called communal sharing, and this is a kind of relationship that's really about a sense of oneness with the other person, that you're in the same group. The second kind of relational model is something like authority ranking, and this gets at hierarchies, where some people are above other people. The third kind of relational model is equality matching. This is really about relating to people through a sense that we're different but we're equals, and so we take turns, and we make sure everybody gets the same amount, that kind of thing.


Tage Rai: 00:17:43 And then the fourth is called market pricing, and this is how you interact in markets, with strangers often but not always, where you have to take lots of different goods and equate them on a common metric. The clearest example is when you're buying and selling things: very different goods, but you're agreeing to this common metric of money, and that's how you're going to do it. But it also expands to anything like trying to figure out, well, what's the most equitable kind of outcome, or who has the most merit to get some particular resource, those kinds of things. Anytime you're dealing with that sort of justice, where it's about trading off stuff that's not one-for-one equality, then you're in this sort of market-based relationship.


Tage Rai: 00:18:42 Then you know, my relationship regulation where it was saying, okay, well this is a really great taxonomy for the fundamental relations in the world. What are the kind of motivating moralities within these kinds of ways of relating and so, and they're the, you know, the argument was that, well, when you're communal sharing, what's most important is to maintain a kind of sense of unity. Oh, you know, we're, we're, you know, this kind of like loyalty to each other. Overall house were, were you speak the same voice. We have the same message for one person in authority ranking. It was really about hierarchy that you know, subordinates have to defer to superiors. What superiors have to kind of take responsibility for support. And again, this is a classic case. Both of those are where these are sorts of things, like if you support a in group or if you did something because a superior told you to Oh, you know, more traditional theories in moral psychology would not have coded that stuff.


Tage Rai: 00:19:46 They would have coded that as, well, that's not real morality. Real morality means you have to think it's true independent of what other people think, and you have to do it independent of somebody telling you to do it. Whereas I was arguing that actually those are the core parts of a lot of morality: the stuff people do because of the social connections they have within their groups and the social relational obligations they feel. That's really the bulk of a lot of morality. Within equality matching, the moral motives were really about maintaining equality, doing things that make sure everybody's leveled and nobody's above anybody else. And then within market pricing, the principal moral motive was this thing we call proportionality, which meant that outputs need to be proportional to inputs; things need to be equitable, that sort of thing.


Amber Cazzell: 00:20:51 Yeah. So let's dive into each of these, if you don't mind, because a lot of your work, it looks like, is examining how these things can vary cross-culturally, or even, like a lot of the political stuff you were mentioning earlier, within the same overarching culture but different subcultures. So how do these moralities play out in different ways that we used to potentially write off and say, Oh, well, that's just a nonmoral issue, something biasing morality, rather than recognizing and wrestling with the fact that these are in fact different moralities? For communal sharing, you were saying that unity is kind of the driving force. What does that look like, and how does that play out in different cultures?


Tage Rai: 00:21:55 Yeah, so, you know, I think oftentimes when we're thinking about morality, we tend to think about these really big issues, political issues and huge conflicts and stuff like that. I actually think a lot of relationship regulation happens in everyday moral situations. So imagine you have a group of people who go out for dinner, and they're splitting a pizza or something like that. If we're motivated by unity, then we're not going to track who takes how much; everybody can just dig in and take however much they want or need, and everybody can pitch in however much they want in terms of paying. If instead we're doing this through hierarchy, then what's going to happen is the people who are highest in the ranking are going to get first dibs at the pizza, or they're going to get to determine who gets what.


Tage Rai: 00:23:11 And they're also maybe going to take responsibility for paying. If instead we're doing things by equality, then we're going to make sure we each take the exact same amount of pizza, and then we're going to make sure we each put in the exact same amount of money. If we're doing things through proportionality, then I say, well, you know, Tage is bigger than Amber, so he should get a little more pizza, but he should also probably put in an amount of money commensurate with the amount of pizza. Okay, does that make sense? Yes. And then, what I think is really interesting is when you get moral disagreements, so one person is using an authority model and the other person is using kind of an equality model or something. You see this a lot where, you know, an older person, or somebody's parent, is like, no, no, I want to pay the whole check. And then the other people are like, no, no, let's share it equally. Then you've got conflict. I think those sorts of everyday disagreements happen all the time. And so often it's not that anybody had mistaken information or didn't know exactly what was going on. It's that they disagree about what the appropriate social relational model is to be used.


Amber Cazzell: 00:24:33 Yeah. So I still want to dive into a couple of different things. I'm really interested in your thoughts on why people have disagreements about the proper social relational model, why that happens so frequently, and why, cross-culturally, that seems to happen systematically. But I still kind of want to get a feel for some more practical, tangible examples of each of these things. So with unity, you had spoken about how that could cause moral strife. I thought it was particularly interesting, this idea that unifying the group also means a kind of separation from other groups, and what that means as far as the relationships between virtues and vices in this paradigm.


Tage Rai: 00:25:34 Yeah, I mean, yeah, I'd have to think about what, how we want to define virtues and vices here. But unity is this for example. So a lot of the work I've done in the last few years is really tied more to kind of violence and harm that people do to each other. And you know, a lot of that is really, you know, oftentimes a lot of violence is, if it's in an intergroup context is saying, well there are the other, I'm bringing them because they are different and you're our group and they pose some sort of threat to our group and, and oftentimes that threat has a sort of contamination threat to the group's unity. That sort of what matters is maintaining that group essence. And if some other group is perceived as posing a threat to that essence, something that could like cause differentiation within the group, then that means that they, they need to be, that needs to be eliminated and possibly through harm. This can actually happen in the intra group context. If a member of the group has essentially become polluted or contaminated somehow, then they may need to be excised from the group. And so this is what we see in a lot of a lot of cases.


Tage Rai: 00:26:56 Like, I think in the paper we use the honor killing case, where if you look at those ethnographies, all the language is really about this idea that it's not the case that these families necessarily hate their daughters or something like that after they've been raped. Instead, they have lots of conflicted emotions about this, which I think reflect the conflicting motives that are at play and the conflicting kinds of boundaries. And oftentimes it's really difficult, but there's a sense that it's something that has to be done in order to repair the essence within the family and also repair the unity between the family and the rest of the community.


Amber Cazzell: 00:27:43 Yeah. And when people who are dealing with these kinds of things have these conflicting emotions, is the source of that conflict, from your theory, simply the mix of social relations?


Tage Rai: 00:28:03 Yes. I think our theory would predict that basically, when you see the most conflict, it has to do with the extent to which different models have been, the term would be, constituted. And basically that just means how strong those motives are, how much you're really being pulled by these different kinds of social relational obligations. And you get a lot of conflict when you have different models that are equally strong. You see this a lot when it's like, Oh, well, I have an obligation to an authority to report or punish a transgression, but the transgression was done by someone in my family, or somebody who's very deeply tied to my ingroup, somebody I have a strong sense of community with; then you've got this clash.


Amber Cazzell: 00:29:06 Yeah. So where, if it's okay, I do want to kind of dive into this idea of like, well how do we start to make meaning of our various social relations and start to classify them as like, okay, well this is, this is a communal sharing relationship. This is a hierarchy relationship. I mean, that's probably a tough question and it might be, it's probably outside the scope of your normal research, but do you have a hunch as far as like where these classifications are coming from?


Tage Rai: 00:29:42 Yeah. So, I think, I think that, yeah. You know, if we want to say, well why is it that there's, yeah. So, so I should preface by saying again, like any two people can sort of engage in all the models with each other. It's more about the sort of relative and amount that they're doing. And they may be doing a lot of communal sharing or they may be doing a lot of market pricing or something. And then when you look across cultures, again, it's just like relative, not you're gonna find all four miles of places, but some cultures are going to be doing a lot more communal sharing or ranking than other cultures, or maybe more equality matching or market pricing that, you know, my own intuition is that a lot of this really comes down to sort of social ecological factors within these communities that sort of make these different ways of relating, make more or less sense.


Tage Rai: 00:30:49 So, so I tend to think of these relational laws as ways of relating as sort of strategies for navigating social relationships and different strategies that are going to be more effective or make more sense in different contexts. So, you know, you could imagine that, well, let's go back to our pizza example. Maybe in a world before sort of calculators or something like that. It just kind of makes, it's just easier to just kind of say, okay, you have half the pizza and I have had the pizza and I'll put in and half the money and you put in half the money or something. Once we actually have calculators, now we actually have the ability to really quickly figure out, well if you took, you know, three eights of the pizza and I took five, eight. So the pizza then how much money, how much is five days of the money or something like that.


Tage Rai: 00:31:40 So changes in the environment and the technology or whatever are going to make different ways of relating a more viable, yeah. Yeah. Morally you might think that communal sharing, this idea that we're all aligned, we're all kind of pitching in and we're all sort of need-based and here for each other or something like that. Maybe that's like a really good form of social insurance. Right? So like if the, if the world is unpredictable and you kind of need to stay above a certain threshold in order to make it and actually it's kind of like risk pooling and this is, this is really useful but maybe that actually changes as the population size changes so maybe when there's more people and you're having to interact with more people across more contexts and it's actually like really hard to sort of maintain that and so maybe it makes more sense to kind of shift to other kinds of models that are better equipped to handle larger groups or more dispersed groups, that kind of thing. So maybe if it really is a bunch of interactions in lots of different ways with people, you're not going to see it very often. Then maybe communal sharing and unity isn't really the way to go. Maybe something like market pricing makes more sense.


Tage Rai: 00:32:56 And then I think you can kind of aggregate this up to the differences across cultures where you might see more communal sharing authority ranking kind of small scale societies and then more kind of matching your market pricing as you get into bigger, more dispersed places.


Amber Cazzell: 00:33:12 Yeah. So what are some of the biggest criticisms of the theory that you've heard?


Tage Rai: 00:33:20 Mmm, so, I think they're, I'm trying to think how to put this. So, so the key, one of the key things is that there's always an issue about whether, what I would say is the key criticism is that it includes too much stuff as moral.


Tage Rai: 00:33:52 So, you know, if you look at the work by say Kurt gray, you know, he really wanted to establish that a ton of the stuff that I'm calling moral really just isn't actually, morality is really all about harm. And the perception of harm and that harm is, sort of based in this template where intentionally harming another person is, is bad. Whereas my work would say, actually, you know, sometimes harm is perceived as morally wrong. Sometimes harm is perceived as morally right, the exact same kind of action is going to have a different interpretation depending on the social context. And those different interpretations aren't more or less correct than each other.


Amber Cazzell: 00:34:42 Yeah. So what about Jonathan Haidt? I saw in the acknowledgements on your paper that he had done some reviews of it. So what were some of hits? His thoughts since this since your theory in some ways expands on his and in other ways is critical of it?


Tage Rai: 00:35:00 He was, I, you know, I, I, my recollection is he gave a very gracious review and I think in the paper there were many times when I said something like morality is relationship regulation or something. And his only request was that I say something like, the core of morality is relationship regulation. Yeah. so he was, he was pretty much okay with most of it. He just felt that, especially when it comes to purity, that maybe, you know, the theory doesn't capture enough that, that, that there are going to be certain kinds of morality that aren't relational. So if, if I personally think that smoking is wrong or something is because I have this visceral feeling of disgust that that is a, is a one person thing, but that's not about the social group. And that that's going to be true for a lot of the impurity sorts of things.


Amber Cazzell: 00:36:08 Yes. Oh, just, I, I'm curious what your thoughts about that comment of his are. Cause when I was reading through your paper, I was also thinking about I was trying to imagine what Jonathan Haidt might be thinking about it and thought like, how does this work with like his, his edge cases like sex with a dead chicken and stuff like that.


Tage Rai: 00:36:29 No, that's okay. Those are the kinds of things where he felt like our theory doesn't get there because would say that there is not like a relationship with a dead chicken. My attitude tended to be, and it still is that actually a lot of those things, the violation really is relational. So, for example, you know, the classic kind of MFT case would be something like, Oh, you know, is it, is it you know, I think the original title of that first paper was something like, is it morally wrong to eat your dog or something? Code this as a, as a purity violation. And that people think, yes, it's, you know, I'm all your dog and I was paradigmatic. Well, yeah, because it's your dog, but the wrongness of it isn't eating a dog. The wrongness of it is that it's your dog. It's part of the family.


Tage Rai: 00:37:24 The violation is a relational violation. And even if we look cross-culturally at places where they think dog eating is morally wrong, kind of in the abstract sense, those are places where dogs are kept as family pets. And in the places where dogs aren't kept as family pets, nobody thinks it's immoral to eat dogs. So, you know, with all these sort of purity things, I tend to think, oh well, we're acting as if these are social-relational-context free, when it's like nobody has a problem with the act itself. Incest is another canonical case, right? The classic kind of Julie and Mark case, you know, they are brother and sister and they're having sex, but nobody's gonna know about it and they use condoms or whatever. Again, what's wrong about that is not two people having sex. What's wrong about it is that they're brother and sister. The relationship is core to the violation. And, you know, whether incest is good or bad cross-culturally depends on what sorts of relationships have been categorized as too close or not. In some places sex between first cousins would be incest; in other places it's not. Again, the act is the same in all those places. It's the relationship and how it's been construed that's different.


Amber Cazzell: 00:38:50 Yeah, this is really interesting stuff, and it makes intuitive sense to me that morality is about managing our relationships, and there might be lots of different ways that that occurs. What are your thoughts about a lot of the moral automaticity research? Like, the fact that people have a reaction to the idea of having sex with a dead chicken is interesting, because it's not like that's a dilemma that they have regular practice with, but people seem to have these intuitive reactions to it. So where do you kind of fall on this automaticity issue?


Tage Rai: 00:39:37 So, you know, I think some of that is sort of orthogonal to the things we did. The way I interpreted a lot of that is, there's this work on kind of typicality. And you were getting at this, like, well, some of these things are just really weird. My impression is that you can have two different moral transgressions. One is sex with a dead chicken. The other, you know, there's this paper by David Tannenbaum and Dave Pizarro and Eric Uhlmann, I think, or Daniel or somebody like that, that gets at this. And in that paper, what they have is something like, the person either beat a cat or they beat their partner. And they show this thing where, well, it's worse to beat your partner, but actually people have worse judgments about the person that beats their cat.


Tage Rai: 00:40:41 And similarly, in the example, you can imagine that, okay, it's worse to assault another person than it is to have sex with a dead chicken. But maybe I actually draw worse inferences about the person who has sex with a dead chicken than I do about the person who assaults someone else. And the reason is because a lot of morality, a lot of moral judgment, I think, isn't really about evaluating actions. That's how we ask the questions, but they're really more about evaluating the person. And what I would say is they're really about evaluating that person's sort of social-relational potential. So, their potential for being in a relationship with me. I'm evaluating them as a relational partner, and with the person who has sex with a dead chicken, because that's such an atypical event, because it's so rare, I'm actually getting more inferential information out of that than in the case where somebody assaults somebody.


Tage Rai: 00:41:40 If someone beats someone up or gets into a fight, yeah, maybe that's not as bad. But that other person is a real, real psycho. I don't want to deal with that as a relational partner. Whereas, in fact, my argument would be there are plenty of cases where I might perceive assault as morally okay, depending on the context, right? That's going to be more rare for the sex with a dead chicken, though I don't think it's impossible. If I told you that, well, you know, the sex with a dead chicken is like a hazing thing for this college fraternity or something, then people might still not be into it, but maybe their moral judgments would be a little less extreme.


Amber Cazzell: 00:42:26 Yeah, that's interesting. I mean, I think you're probably right about that. That's my own kind of gut intuition, and it makes me think. I've been doing some thinking about that idea that some behaviors give us what we feel is more explanatory power for figuring out something about another person's character. So a buddy of mine who's a doctoral student here at Stanford, he and I were talking about this idea of moral strength, and there was a study done by, I think it was Starmans and Bloom, that was looking at who adults prefer to reward. Do they prefer to reward children who do the right thing when it's really effortful for them, or do they prefer to reward the child who does the right thing without being tempted in the first place?


Amber Cazzell: 00:43:29 And it seems like adults prefer to reward the kids who do the right thing, but effortfully. And the paper went into some of the philosophical background on that, like a moral duty versus maybe more of a virtuous, practical-wisdom kind of perspective on things. But it's interesting. My kind of intuition is that we want to have moral automaticity to some degree. Like, it would be great if we weren't being tempted by negative things. But my buddy was kind of of the opposite mind, like, well no, there should be some effort involved in this. He used the words moral strength, that somebody struggling through and then ultimately doing the right thing shows a type of moral strength, and so that's actually preferable. And that's pretty interesting, because I would've thought that automatic actions would provide sort of consistency, that you can rely on that person. But that doesn't seem to be how most people think about it.


Tage Rai: 00:44:38 Yeah. So I would think that it completely depends on content, because the content is going to affect what inference I make about the person. So if I say, you know, I really love cake, but I know it's bad for me, I have to work really hard, it's so tempting, but I didn't eat cake, versus somebody who's like, yeah, I'm just not that into cake, so it's easy for me not to eat it. You're more impressed with the person who had to really fight that temptation, and you think that's really morally good or something like that. And I think it's because it signals some positive aspect of self-control or something. In contrast, if I said to you, God, I just really want to cheat on my wife all the time, I just want to cheat on her so much, but I'm able to stop myself from having sex with other women, though it takes all my effort, versus the person who is like, no, I just love my wife and I don't have any desire for anyone else. You like the second guy more, right?


Amber Cazzell: 00:45:54 Well, I would intuitively think so, but honestly, I'm not sure. I mean, you would probably like the second guy better, but when you're assessing their...


Tage Rai: 00:46:05 Well, I think we would think that person's more moral, because we would say actually they're more virtuous, because the virtue isn't tied to the effort. The virtue is tied to having some inherent goodness in their preferences. I'll make it more extreme. You know, the person who says, I just really want to have sex with children, I want to all the time, but I fight that urge. We don't look at them as a moral compass. We think this is a disturbed individual, even if they haven't acted on their temptations, even if they've done something that we normally think of as a good thing, which is resisting a temptation that is bad. We still say, no, no, no, that's not somebody I want near my children. That's not somebody that's good. We would much prefer the person who just doesn't actually have the desire to have sex with children. And again, I think what we're really trying to infer is, who do I want to interact with in a particular context? Who do I want to be my social partner? In the cake example, it's like, yeah, I want the person who can resist things like cake, and in the sex-with-children example, I want the person who doesn't have that desire.


Amber Cazzell: 00:47:22 Right, yeah, these are all good points. So have you been doing work on any of that, like the social inference stuff?


Tage Rai: 00:47:34 So we did some stuff at one point. There was some related work. You know, I have this book chapter on some of this stuff, about what is really going on in moral inference and what we are trying to figure out. We ran some experiments. What you would predict is, you know, the really interesting cases are cases where we don't care about things like intentions, or we don't care whether somebody did something on purpose or not. We did some studies that we never really finished up, or never decided to take further, because there was some related work that came out from another lab that was really about trying to predict when intentions are going to matter more or less. And so on the sort of relationship regulation view...


Tage Rai: 00:48:28 Oh, intentions are going to matter when they actually have sort of diagnostic predictive validity for relationships, and under the conditions where they don't, then maybe it's not going to matter whether you did something on purpose. So if it turns out that, yeah, you did it by accident, but it's just going to keep happening in the future because you don't actually have control over it, your intentions may not matter as much. If it turns out that you did it by accident, but for other reasons we don't see you as a viable relationship partner, you're just low on things like warmth and competence or something, then it's not really going to matter there either whether you did it by accident or on purpose, because we don't have any intention of having a relationship with you in the first place. If something you did caused sort of irrevocable damage to the relationship, then in some sense it doesn't really matter whether you did it on purpose or by accident, and this is where you get to something like an honor killing case, where it doesn't matter what the victim's intentions were, whether they had any control or anything like that.


Tage Rai: 00:49:45 If, if the damage done to the relationship is that extensive and it can't be repaired, then the relationship has to end. Even if the person didn't have any intention or control over what they did.


Amber Cazzell: 00:50:01 Yeah. So what is sort of your view on what is to be done as far as multicultural exchanges, when people are interacting with each other with such widely different social construals?


Tage Rai: 00:50:27 Mmm. You know, I don't know whether there are easy answers to that. I think there kind of are not, because that gets into a place of, well, what should be your prescriptive ethics or something that we're willing to impose on other cultures. So when should we go into some place and tell them, actually, the way you're doing things is just too offensive to human rights or something, you have to do it differently? I mean, that's how I interpreted your question. Is that how you think of it?


Amber Cazzell: 00:51:07 Yeah. No, I mean, that's a fine answer. I think it's a realistic answer, like, there are no answers here. But...


Tage Rai: 00:51:16 Yeah, in that context, what I would say is, I think it's helpful to have a sense of, well, what are the sort of intrinsic motivations that are driving people. And that's what relationship regulation theory is really trying to get at. You know, an argument I've made before is that imbalance in any particular direction may be bad. So too much unity can be bad, too much hierarchy can be bad, too much market proportionality stuff can be bad. And so really you're trying to find some sort of balance, and there's probably not going to be any sort of optimal balance, but there might be a pretty wide range of acceptable values. And only when a culture is really on the extreme, too far outside that range, might you want to say, hey, they've shifted too far in one direction.


Tage Rai: 00:52:18 You know, we should be willing to take action here. Critically, there's the sort of meta-ethical implication of a lot of my work. So, you were asking earlier about criticisms, and I forgot one. The biggest criticism I get is from philosophers, who are oftentimes what are called moral realists. They say, well, Tage, if we believe what you're saying, then nothing has any value anymore, or something like that. That there is no objective morality anymore. And I do think it would be good if we accepted that actually, yeah, it is all subjective, and when we go into another place and tell them they have to do things differently, we are just imposing our values on them. And to the extent that there's anything real there, or anything objective, it has to get to these extremes I was talking about. Like, well, maybe there's a pretty broad swath of moral values, and balancing among those values, that is still going to lead to kind of meaningful, sustainable life in different places. Really only the extremes might actually fall outside of that, but otherwise we're just kind of imposing our norms on other people.


Amber Cazzell: 00:53:39 Yeah. That was starting to remind me of the conversation that I had with Rick Shweder, talking about how there are certain universals, but they're not emphasized uniformly. So I know Rick Shweder wouldn't consider himself a moral relativist or subjectivist, but it sounds like you both are taking the approach that maybe there are certain types of things that are fairly uniformly accepted. You know, we have these four different relational styles that it's morally good to try to maintain. But outside of that, there might be some things that are just fundamentally immoral.


Tage Rai: 00:54:21 Well, so, okay. Rick insists that he's not a relativist, but like any other person would say he is, or a subjectivist or whatever. The reason is because he thinks there are some universal rules, like consistency. So, you know, if I say X is X and then I act opposite to that, that would be wrong or something. But for all practical purposes, he would fall in the subjectivist, relativist camp, I think. And that's because, to be a true sort of realist, absolutist, objectivist, what you have to believe is that killing is wrong is true in the same sense that two plus two equals four. That if we got rid of all humans, if we erased all of human history, killing is wrong would still be true. And I don't believe that. I think that's crazy. But that's the position of probably the majority of philosophers, I think.


Amber Cazzell: 00:55:46 The, the position of the majority of philosophers being that killing people is wrong if, if are killing is,


Tage Rai: 00:55:52 I'm using killing people is wrong as an example, but just that moral claims have the same kind of truth value as mathematical truths.


Amber Cazzell: 00:56:02 But you think that that goes so far as to... I mean, I think a lot of moral philosophers would probably appreciate that morality is tied to consciousness to some degree. So if there's no consciousness, how can there be morality in any meaningful sense, right?


Tage Rai: 00:56:19 I think the hard absolutist position is that these things exist independent of human will or consciousness. You know, a more middle-of-the-road position is something like Jon Haidt's position, which would be that, oh well, morality is objective in the same sense that people have evolved to like salt and sweet or something. They've evolved to like things like harm reduction and equality. And so if human evolution were different, then we would have a different set of morality. But conditional on the way humans have evolved, certain things are sort of objectively right or objectively wrong. I think that's at least viable. I just think that there's so much more diversity that it makes it almost practically infeasible, because it's not going to be something like four or five taste buds. It's going to be just a massive amount of diversity.


Amber Cazzell: 00:57:24 Hmm. Yeah. Okay. Well, we're running up against time here, but just for the last couple of minutes, can you tell me about how you would hope to see the study of morality change over the coming years? How would people take your theories and run with them, and what would the new directions be?


Tage Rai: 00:57:47 You know, I think in some ways the people who were kind of on my side of the ledger, of wanting to integrate moral psychology back into more general social, developmental, and cognitive psychology, feel pretty good. We won over the last like 10, 15 years. Like, most of the work that's exciting in morality isn't really happening in moral psychology, I don't think. It's happening in other areas. It's happening among people who study cooperation, or among people who study tight and loose cultures, or among people who are studying intergroup relations. There has been this kind of understanding that, oh well, all these terrible things in the world, ethnic cleansing, genocide, whatever...


Tage Rai: 00:58:52 To the extent that moral psychology had anything to do with those things, it was to say, Oh, well they must be occurring through moral disengagement. Moral psychology has been turned off in those cases. And that's just not true anymore. Now people are really studying that stuff as as, Oh, why, how our moral beliefs and values playing into making one group of people want to kill another group of people. And that I think has been like a giant sea change in a way where morality much more integrated into social psychology. I think, you know, the future really is getting at the question you asked earlier about like, well why is it that different groups in different places in different times have different, different competing values. And there I think, you know, that's where the real kind of future is in some of these cases is about figuring out the ecologies that support different forms of morality.


Tage Rai: 00:59:51 You know, so like when I look at the Jon Haidt stuff about liberals and conservatives, the big take-home from that was that, oh, well, conservatives are just moralizing a lot more stuff. And that was broader than American conservatives and liberals. That's where you get this idea that there are cultures that just moralize more and other places that moralize less, and kind of figuring out, well, why is that? Why is it that some places think there are more restrictions on the way to a good life, versus other places that think there are fewer? Those are going to be the really cool basic questions. And then on the applied side, I think what we're seeing now is that by erasing that border between morality and social bias, we can now study many more problems than we used to.


Tage Rai: 01:00:41 So, you know, the future is going to be figuring out what role morality is playing in sexual harassment, and in ethnic conflict, and in violence more generally, which is what I've been doing the last couple of years. We used to just say that the only rule across all of those things was, oh, morality is turned off. Now we're actually going to be able to explore the different ways in which it's not the sort of breakdown of morality that's causing all these terrible things in the world, but actually the activation of our moral sense.


Amber Cazzell: 01:01:17 Yeah. Well, I mean, it sounded like, when it comes to applying a lot of this stuff, you're at MIT, so you know we're running into a lot of dilemmas with the digital world on how to deal with morality. Like, AI ethics is a huge issue. And so I just can't help but think that we're going to have to really carefully think through, all right, how do we not be moral colonialists or whatever? But at the same time, if we're trying to teach AI, and if we're trying to train AI systems, then to some degree we're reifying certain moralities, and privileging certain moralities and social construals that others might not share. And so even if we don't know what position to take, something's going to happen by default. Do you ever think about that? Does that ever keep you up at night or anything?


Tage Rai: 01:02:20 I mean, I think that's a fascinating question. It's funny, what excites me is actually that default thing you're talking about, which is, I kind of want to see what kinds of moralities the AIs come up with if we leave them to their own devices as best as possible. Now, it's still going to be the case that we'll eventually be able to trace back and figure out some sort of input we started with that set these things in motion. But to the extent that we can kind of leave them alone, it would be really cool if they start to come up with moral systems that we haven't even imagined.


Amber Cazzell: 01:03:00 Like their own kind of moralities emerging from their own sort of social construals. Is that what you mean?


Tage Rai: 01:03:06 I mean, if it turns out the nature of the relations among these networks that are so highly interconnected, with billions of interfaces or something, becomes a really radically different form of social relations than anything that's been possible before, then that should trigger a different form of morality.


Amber Cazzell: 01:03:38 Oh, sorry, what was that? ... I just want to see that, yeah. I feel like if Elon Musk were listening in to this conversation, he would be pounding his head into the wall, like, no, that's exactly what we don't want. I do think it would be fascinating to see, but it does seem like before that could happen, we're going to run into major problems with AI reflecting our own values back at us.


Tage Rai: 01:04:05 Absolutely. But that's why I kind of want to skip past that part. Like, I have no reason to trust machines less than I trust human beings, so...


Amber Cazzell: 01:04:16 Right, right. All right, well, let's leave it off there. Thank you so much, Tage. I really appreciate your conversation today.


Tage Rai: 01:04:24 Sure, thank you Amber.


Outro: 01:04:25 Thanks for listening. If you have questions, comments, suggestions, or requests, contact me at www.moralsciencepodcast.com. The Moral Science Podcast is sponsored by ERA Inc, a research and design think tank that's reinventing how people interact with each other. Music throughout the program is Microobee by Keinzweiter and can be found at freemusicarchive.org.