

On this episode of Unlocking Us

This is the second episode in our series on the possibilities and costs of living beyond human scale. In this episode, Brené and William discuss group behavior on social media and how we show up with each other online versus offline. We’ll also learn about the specific types of content that algorithms amplify to fuel moral outrage and how they tie to our search for belonging.

About the guest

Dr. William Brady

William Brady is an Assistant Professor of Management and Organizations at Northwestern University’s Kellogg School of Management. His research examines the dynamics of emotion at the social network level and their consequences for group behavior. His recent work studies how human psychology and AI-mediated social contexts interact to shape our emotions and intergroup attitudes. Combining tools of behavioral science and computational social science, his research aims to develop person-centered and design-centered interventions to improve our digital social interactions.

 

Professor Brady’s research has been published in leading journals such as PNAS, Nature Human Behaviour, Science Advances, and Perspectives on Psychological Science. His work has also been featured in popular press outlets, including The New York Times, BBC, Wired, and The Wall Street Journal. In recognition of his contributions, he has been selected for the Association for Psychological Science Rising Star Award and the SAGE Emerging Scholar Award. Professor Brady earned his BA in Psychology and Philosophy, with distinction, from UNC-Chapel Hill and his PhD in Social Psychology from New York University, and he completed a National Science Foundation postdoctoral fellowship at Yale University.

Show notes

Social-Media Algorithms Have Hijacked “Social Learning,” based on the research of William Brady, Joshua Conrad Jackson, Björn Lindström, and M. J. Crockett, in KelloggInsight.

How Social Learning Amplifies Moral Outrage Expression in Online Social Networks, by William J. Brady, Killian McLoughlin, Tuan N. Doan, and Molly J. Crockett, in Science Advances.

Overperception of Moral Outrage in Online Social Networks Inflates Beliefs About Intergroup Hostility, by William J. Brady, Killian L. McLoughlin, Mark P. Torres, Kara F. Luo, Maria Gendron, and M. J. Crockett, in Nature Human Behaviour.

Troll and Divide: The Language of Online Polarization, by Almog Simchon, William J. Brady, and Jay J. Van Bavel, in PNAS Nexus.

Sign up for our newsletter here.

Transcript

Brené Brown: Hi everyone, I’m Brené Brown and this is Unlocking Us.

[music]

BB: So we have a very interesting conversation today. Everybody in this series, I could talk to them for hours and hours, but I keep getting like the fisheye from folks here, like, “Wrap it up, we’re hitting an hour.” We’re doing a series and this is a little bit different. We’re doing a series that’s going to cross over between Unlocking Us and Dare to Lead. And we’re going to talk about the challenges, the possibilities, the costs of living beyond human scale. And when I say living beyond human scale, I mean we are really socially, biologically, cognitively, spiritually wired to be in connection, kind of IRL, in real life with each other, with our families, with our friends, with community. And we are living in this environment of social media, artificial intelligence, machine learning. We have access to 24-hour news and 50 different channels, all with different ethos and ethics and financial models.

BB: Everybody wants our time and everyone’s saying what they think we want to hear and it just seems beyond human scale right now. And so I’m talking to folks who can help us make sense of it and help us kind of, I don’t know, get underneath the machine to figure out who we are and what we’re seeing and why it can feel so overwhelming. And if we can pull back, what are some of the great possibilities? Today specifically I’m talking to Dr. William Brady. Dr. William Brady is an Assistant Professor of Management and Organizations at Northwestern University, he’s in the Kellogg School of Management. And his research really sits at the intersection of emotion, group behavior and artificial intelligence and social networks. He really wants to understand, from a behavioral science and computational social science perspective, what’s happening with us, especially online and what’s happening with us when we get into groups. He’s published in many, many academic journals and you possibly have seen his work in the New York Times, the BBC, Wired, the Wall Street Journal.

BB: He has made a lot of contributions and he is very early in his career. He’s made a ton of contributions, he’s been selected for the Association for Psychological Science Rising Star Award. He has a BA in Psychology and Philosophy with distinction from UNC Chapel Hill, he’s got a PhD in Social Psychology from New York University, and he just completed a postdoctoral fellowship from the National Science Foundation where he worked at Yale before he got to Northwestern. And we are really going to talk today about one of his areas of expertise, which is moral outrage and moral outrage on social media platforms, the differences in how we show up online versus offline, how social media algorithms amplify moral outrage and why and how we’re rewarded for engaging in that kind of behavior online. Ideological, political extremes on both sides. And we are also going to talk about bots and how bots are used to troll and divide nations and divide people. It’s a really interesting conversation. I’m glad you’re here. Let’s dig in.

BB: William Brady, welcome to Unlocking Us.

William Brady: Thanks so much for having me.

BB: I’d usually say my pleasure, but I’m like, oh, my pleasure and my pain to have you. Your research is, I guess it’s a lot of things. It’s important, very relevant, and kind of scary.

WB: I feel the same way as a researcher, although I hope by the end of the conversation I can convince everyone it’s not all bad, but there are things we need to think about.

BB: There are things we need to think about and I’m not sure that we’re incentivized to think. So before we get started, tell us your story. Where are you from? Where’d you grow up? How’d you end up here?

WB: Yeah. Well, it’s funny, I grew up in North Carolina in the Bible Belt in the ’90s. And I think that’s actually where my interest in studying moral psychology came from. So in other words, how we come to hold the moral views that we do and how we interact with other people surrounding those views and the group identities that we form. Because if you grow up in that context, that’s one of the most salient things, when you meet someone, they ask you what church do you belong to? And if you’re not part of that group, it also comes with a lot of pros and cons there. So I was always interested in how we moralize things. And then going into college, when Facebook literally first came out my freshman year, it struck me that there’s some interesting connection with this new social media context and how we talk about moral issues and political issues.

WB: And actually my entry point into the research I do now specifically was my history as an animal rights activist. And I thought, well, social media actually is a good way to get information out there about this cause that I care about. And at first I was super excited about it, but I think as we’ve seen, there’s a lot of pros and cons of political activism on social media. In some ways it can really raise awareness of political issues but then on the other hand, lots of toxicity is involved in that. And I think we’re going to talk about some of my research on why that is today.

BB: I’m going to take you back into your story if that’s okay.

WB: Of course.

BB: Usually this is where the researchers are like, “Can we get to the data?” But I’m like, “We can, but as a qualitative person, an ethnographer, let’s go back.” So it makes a ton of sense. Like I didn’t know that you grew up in the Bible Belt. I mean, as someone who studies social learning as it impacts moral outrage, what was your experience? Were you a cerebral kid?

WB: Yeah. I think… It’s interesting because to go… I guess I’ll go a little bit into more of my family details. So my mother is actually Jewish and that was kind of interesting. My dad was Catholic and so part of me was always a little… Didn’t exactly fit in with the traditional Christian upbringing that is typical in the South. And so that definitely got me thinking a little bit about what are the assumptions going into this thing that we all just take for granted, going to church and these Christian beliefs. And I think the thing that got me thinking a lot actually were some negative things that I saw associated with Christianity as it is in the South. And of course, this is not to say… There’s a lot of positives I should say, like the sense of community it gives people is amazing.

WB: But a specific story I remember is when I first started going to my high school, there were all these Christian pastors that would, somehow they were allowed onto our campus, and they’d be basically holding up these signs saying that LGBTQ people, of course they weren’t using those terms, should burn in hell, and that women should not be allowed to get abortions, etcetera, etcetera, all the kind of major evangelical talking points. And it was just interesting because some of it I would even consider hate speech and it was just sitting there on our campus, but it was just not questioned whether or not it should be allowed because it was an example of religious freedom.

WB: And these are the kinds of stories where I started thinking about how can social norms develop where we just don’t question that this is something that is defensible or is allowable. And it’s just interesting to me when you grow up in a culture where these are the norms, you just kind of go along with it, and some people believe it, some people don’t. But at that time anyway, it was something where even leadership at the high school didn’t stop to think maybe this is offensive to some people in our community. And I really do think that’s a function of social norms and social learning. In other words, what we come to find as common and appropriate in our community.

BB: Were you at a Christian school?

WB: This was a public school in North Carolina. I guess I don’t have to say which one, but it was one of the biggest in the state.

BB: Wow. Okay. So you didn’t have the language that you have now, obviously, as a professor and a researcher and very prestigious academic background. Were you emotionally reacting to that as a high school student or were you thinking about it? Was it both a cognitive questioning and an emotional questioning or were you already a thinker?

WB: Well, I was 16, probably had more hormones going on than I do now. [laughter] So mostly it was emotional and it did lead to some confrontations because there were a lot of us 16, 17, 18, not everyone was fine with what was going on. And so, yeah, there were some confrontations because there was a lot of emotions running high. I mean, you have gay people in our school who were feeling attacked and they should feel upset. And then you have people who grew up as fundamentalist Christians who feel outraged that someone else is outraged. So it was an interesting time. I remember in school the kind of response from the school leadership was, “Oh well, everyone needs to get along.” And it’s like, well, okay, as a teenager feeling very emotional, why should I try to get along with what is clearly an example of hate speech?

WB: So yeah, at that time I think I initially had an emotional reaction and then going into college learning about moral psychology, learning about ethics and philosophy, that was my entry point into thinking about this from an intellectual perspective. But I think actually there’s… And it’s funny because now I also study the role of emotions in our moral psychology, in our social media behavior. And I think it’s important to have those emotional reactions as a basis to understand what is going on because it can help you also understand what’s going on for the other person that you might not agree with and why they can say, “Oh, I don’t see what’s wrong with this.” Because for them they’ve learned something different. They’ve learned a different rule of what is appropriate, for example.

BB: It’s really… As someone who studies emotion, it’s really interesting to hear you say emotion is a jumping-off point… but I mean, I guess the affect or emotion underlying moral outrage on both sides is what we have in common. Is that kind of what you’re saying?

WB: That’s right. And the interesting thing that I find in my research is no matter how much people disagree with the moral view of another political group or another person, moral outrage tends to be a universal signal. We can still recognize that that person is upset or they feel offended even if we completely disagree with the cognitive component. Like what are they actually believing and what are their views? So moral outrage serves as this universal signal that communicates to people, I’m offended, you’re offended. And it actually allows us to communicate about our social norms.

BB: Wow. Very interesting. And I really appreciate you, I don’t know how often you talk about your high school experience as a platform for being a scholar now, but I think it’s helpful. It’s important to me and I think it’s also a prophetic tale because right now there’s a fight in Texas to put pastors in public schools. I wish we could look back on history and say, “Wow, we’ve come a long way baby.” But as the feminist teacher used to say, “We haven’t come a long way and don’t call me baby.” It’s like, we have not… We’re still here. All right. So I want to talk about two of your articles. I will say it was hard to narrow them down because even going back to kind of, I think, would you say it was the postdoc at Yale?

WB: That’s right, yes.

BB: Yeah. Postdoc at Yale. Even that research for me has been really interesting. So the two articles I want to talk about today are, one is kind of a summary of more academic studies, social media algorithms have hijacked social learning. And this is really about, I would say the intersection of moral outrage and social learning. And then after we talk about that, I do want to talk about, as we come into the 2024 election, I do want to talk about a second article, authorship on the second article was Almog Simchon and then Jay Van Bavel. Okay. And that article is “Troll and Divide: The Language of Online Polarization.”

BB: So I feel like we have a great segue into “Social Media Algorithms Have Hijacked ‘Social Learning.’” The subtitle here is, “We Make Sense of the World by Observing and Mimicking Others, but Digital Platforms Throw That Process Into Turmoil. Can Anything Be Done?” So we’ve talked a little bit about what motivated your interest in studying moral outrage. I want to start with some definitions because it feels like that’s important. How would you define moral outrage? And then how would you define social learning before we get to the intersection of where all hell breaks loose between these constructs. Let’s define each one. So moral outrage.

WB: So moral outrage I think is best to consider it as three different components. And this is generally a good way I think to consider emotions. So first of all, we can think about the eliciting conditions. In other words, what triggers moral outrage? So the key characteristic of what triggers moral outrage is that we detect that there’s been a transgression against our sense of right and wrong. So it’s fundamentally linked to what we talked about earlier, our sense of morality. And then it comes with a typical experience, which is usually described as a mixture of anger and disgust. So consider it like negative, high arousal. And then it comes with also I think something specific to moral outrage.

WB: These outcomes that are very relevant to the transgressions against our sense of right and wrong, we want to punish people, we want to hold them accountable. So when you put those three things together, an emotion that is related to anger and disgust triggered by a breach of our sense of right and wrong, it usually leads us to punish or want to hold people accountable. So imagine you’re a vegan, you see something about factory farming, it elicits this feeling of maybe anger or disgust in you, and then you typically want to hold someone accountable or punish. Those are the key characteristics of moral outrage.

BB: Okay. So I’m already flagged as an emotions researcher by something you said. I want to check in about something that I’ve seen. I don’t research this area, but I think I’ve seen it. You said disgust and what was the other word you used?

WB: Anger.

BB: Anger and disgust. Is it fair to say that I often see contempt?

WB: Yes. Some of these fine-grained distinctions, I think you can definitely make, I’m trying to paint a picture of outrage as this kind of constellation of things that usually we describe as outrage. And I think contempt is a great example where, what’s the difference between outrage and contempt? It’s kind of difficult to say. Especially, I think the key though is like if we’re in the domain of morality and you’re reacting to some kind of transgression, I think that’s when you get into the moral outrage realm. And contempt could be considered a part of that for sure.

BB: The reason why I’m asking is when you say disgust and contempt, I get anxious because it leads me to this question right away. And I’m trying not to get too inside baseball with emotions, but it leads me right away to worry if dehumanization is a slippery slope, if moral outrage, if what we feel is disgust, which can be inherently dehumanizing and part of it is contempt, which is like, I’m so much better than you that your opinion is, you’re not even worth it to me. Like what was the example you used? Farm…

WB: Some example of some cruelty happening in a factory farm environment. Sure.

BB: Yeah. Factory farm cruelty and then we move from witnessing to experiencing emotionally and then it makes sense to me. But I want to check out that I’m following you correctly that the next phase is punitive.

WB: Right. So I think it doesn’t have to lead to that, but it often does. Or at least it motivates us to think in those ways. And to your point about dehumanization, I think it definitely can be associated with dehumanization. And in fact, both work from my group and a couple others have shown that outrage in the context of online spaces is associated with hate speech. And I think that’s related to your point. It can motivate us to lash out in these ways that are dehumanizing. I do want to make a distinction though, outrage isn’t inevitably that. I think it does potentially have some upsides. I’ve always thought about it having both good and bad outcomes that can come with it and we can talk about that more. But yes, it’s certainly the case that it has been linked to things like hate speech in online spaces and then in the offline world, of course things like dehumanization and violence even.

BB: Yeah, I do want to talk about that because I’m grateful for moral outrage in some cases, and I think it can be an important catalyst for social change, and so I love that you’re holding some inevitable tension as most researchers do around binaries of, this construct is all bad or all good, because I do think, “Thank God for moral outrage.” And, “Oh my God, it’s a living hell.” Like, it can be both, right?

WB: Exactly.

BB: Okay. So let’s talk about your research and how social learning processes amplify online moral outrage.

WB: Definitely. So one of the things I’ve thought a lot about as a researcher is the following, what are the differences in the social world or the social context when we’re having face-to-face conversations versus when we’re in a social media environment? And on one hand, there’s a ton of similarities, right? Like, basic social psychology applies in both cases, but there are some things that are unique to social media platforms, and some of them are unique in the sense that they just literally don’t exist in offline settings. So for example, the influence of algorithms, which we’re going to cover extensively, but then other things are more continuous, in other words, there are, for example, group size, there are groups in offline settings, of course, but on social media, groups are massive, usually are in much larger social networks.

WB: And the other thing that is related to social learning is the idea of social feedback and social reward. Now, let me break that down a little bit. When we’re in offline face-to-face settings, we’re actually highly attuned to how another person is responding to us and feedback they’re giving us. So even in our interview right now, if you make a joke, if you smile or seem positive, it sends a signal to me, “Oh, we’re having a pretty good rapport,” but if you were grilling me, then maybe I would say, “I need to change what I’m talking about,” just an example. But in the online case, it’s really interesting because what I’ve argued is that you get this social feedback that’s very streamlined in the sense that it’s quantifiable, we know exactly how many likes, how many dings we’re getting when we post something, and it’s also delivered in ways that actually mimic what has been described in psychological research as variable reinforcement patterns.

WB: And basically what that means is we get the likes and the shares delivered to us in ways that actually make us more likely to pay attention to them and to kind of be affected by those in ways that affect our behavior. And so what I’ve studied in some of my research is the fact that when we are getting rewarded for expressing moral outrage, people give us likes, they share the content, it actually predicts us being more likely to express outrage in the future. So it turns out we’re very sensitive to that social reward, especially when it comes to our moral outrage expression. So there’s a key social learning process there because of the way social feedback is delivered to us on social media, we learn to express more, but the last thing I’ll say about this is it turns out to be even more complicated because moral outrage, as some of my research has shown, is some of the content that’s most likely to get amplified by the algorithms that deliver us content on social media.

WB: And so now it becomes this feedback loop where in general, I might get rewarded for moral outrage by people, but the algorithms are amplifying that content, so people are more likely to reward outrage in the first place because the algorithms show it to them. And so now I’m actually getting extra reward for expressing moral outrage and what is the inference that I make? Oh, everyone likes this, so I should continue to do it. And it’s not necessarily a conscious process like that, but at the same time we’ve all had that experience where certain posts, we get all this feedback and we’re like, “Oh, that was a good post,” and you might suddenly start to do those kinds of posts more over time.
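To make the learning loop William describes concrete, here is a minimal sketch in Python of a reward-learning account of outrage expression. This is an illustration of the idea, not the model from his papers; every reward value and parameter here is invented.

```python
import math
import random

def simulate_user(n_posts=300, alpha=0.1, amp=0.5, temp=0.2):
    """alpha: learning rate; amp: hypothetical extra reward added when the
    feed algorithm amplifies outraged content; temp: softmax temperature."""
    q = {"outrage": 0.0, "neutral": 0.0}  # learned expected social reward
    p_outrage = 0.5
    for _ in range(n_posts):
        # Softmax choice between posting outraged vs. neutral content
        z = sum(math.exp(v / temp) for v in q.values())
        p_outrage = math.exp(q["outrage"] / temp) / z
        action = "outrage" if random.random() < p_outrage else "neutral"
        # Invented reward structure: outraged posts draw somewhat more
        # engagement on their own, the algorithm amplifies them further,
        # and the noise term stands in for variable reinforcement.
        base = 0.5 + amp if action == "outrage" else 0.5
        reward = base + random.uniform(-0.3, 0.3)
        # Prediction-error (Rescorla-Wagner-style) update
        q[action] += alpha * (reward - q[action])
    return p_outrage

print(simulate_user())  # tends to drift well above the 0.5 starting point
```

The key mechanism is the prediction-error update: when amplified feedback pushes rewards for outraged posts above expectation, the learned value of outrage rises, and with it the probability of expressing it again.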

BB: What’s the relationship between social learning, and these might be constructs that you haven’t studied, but I would love your point of view, between social learning and kind of an ego fragility, an ego protection of everyone trying to navigate online, does that make us even more vulnerable and susceptible to it?

WB: Yeah, that’s interesting. I don’t study this, but my wife is actually a therapist, so I’ve thought about it, but it is interesting, and I think one of the key things that is communicated to us in this context of social learning, where we’re basically responding to reward or punishment that we see, or we’re responding to what we see in the environment, how common, how appropriate is it? The things that we do, we’re very sensitive to whether we’re getting the likes or not, and if you are someone who has low self-esteem, for example, maybe your ego is a little fragile, actually, there’s research on this, you’re even more sensitive to the variation in social reward that comes from the platforms. But the thing that’s interesting is, it’s only partially, how much social reward we get is only partially a function of the things that we did.

WB: There might be random factors that come into play like when did you post the content? Did the algorithms show that content to people? We’re not always aware of that, and it’s interesting because that can govern how we feel, did we get less social reward or something like that. So people who are more insecure can be the most susceptible to variation in social reward, that’s generally a negative thing because it actually is a poor indicator of how much people approve of the things that you’re doing. And of course, if your whole behavior is driven by trying to seek approval, it’s going to send you on all kinds of strange and random paths. And I think we’ve seen examples of that from social media figures who have kind of risen to power and sometimes have a strategy to appeal to their audience.

BB: In your study, did you find differences in outrage expression between ideologically extreme networks and other networks?

WB: We actually do see some subtle differences here in terms of how people are responding to social information. So for people who are already in ideologically extreme networks…

BB: How would you define that? Let me stop you there.

WB: Oh, right, right. Ideologically extreme is referring to their political ideology, and so someone who is more extreme would be someone, say, who is extremely left on the political spectrum, or extremely right on the political spectrum, just referring to US politics. We can actually estimate that in online settings by looking at the accounts people tend to follow, and because we know the political ideological extremity of the political figures that they follow, we actually can make an inference to say, “Hey, if you tend to follow 95% far right people, you’re most likely to be more politically extreme yourself.” So that’s how we determine that. And what we find is that people who are in these extreme networks, it’s actually interesting, they are less sensitive to the variation in likes and the social reward that they’re getting, they just keep expressing outrage no matter what.

WB: And part of that, there’s two reasons why we think that’s going on. One of them we’ve studied very well, which is that if it’s a common thing in your group, so if you’re in a politically extreme network, of course, there’s more outrage that’s spreading around in your network, then you just know that it’s something that is normative, so you keep doing it regardless of the likes that you get. Also, it might be less likely to draw likes because it’s pretty common. It could also be the case, though, that people develop almost this habitual way of expressing outrage because they learn this is how I communicate in this network, and they’re not even really sensitive to punishment or reward. So that’s less studied, but those are the two explanations that we draw.
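Here is a minimal sketch of the follow-based inference William describes a few turns above. The handles and ideology scores are invented for illustration; the actual estimation methods are more sophisticated.

```python
# Ideology scores for political accounts, from -1 (far left) to +1 (far
# right). All handles and scores are hypothetical.
FIGURE_IDEOLOGY = {
    "@far_left_pundit": -0.9, "@dem_moderate": -0.3,
    "@gop_moderate": 0.3, "@far_right_pundit": 0.9,
}

def estimate_extremity(followed_accounts):
    scores = [FIGURE_IDEOLOGY[a] for a in followed_accounts
              if a in FIGURE_IDEOLOGY]
    if not scores:
        return None  # user follows no known political figures
    mean_ideology = sum(scores) / len(scores)
    return abs(mean_ideology)  # extremity = distance from the center

# Someone following 95% far-right figures scores near 0.9 (very extreme);
# someone following a mix of moderates scores near 0.
print(estimate_extremity(["@far_right_pundit"] * 19 + ["@gop_moderate"]))
print(estimate_extremity(["@dem_moderate", "@gop_moderate"]))
```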

BB: Does that mean if we look at… Let’s just use the standard distribution of data bell curve, like for extreme, so let’s say that we have 20, and it’s probably not 20, it’s probably less than 20, but we have 20 far left, 20 far right. Are you saying that the 60% in a normal bell distribution, the moderates, are more sensitive to the social learning feedback than the tails?

WB: Correct, that’s what we find in our studies, specifically when it comes to outrage expression. So I should add the caveat, but it makes sense when you think about it as well because people who are not on the extremes, there’s almost more room to express more moral outrage because you don’t… Not, most people… I mean, actually, some people do come to the platforms already riled up and ready to attack people, and actually those are people who are most likely to express outrage in the first place, but there’s also a learning effect for people who are more moderate, where depending on the networks they’re in and the types of social reward they’re getting, if they do choose to express outrage, they’re actually more sensitive to that feedback.

BB: That makes total sense to me, like in my own experience with my own life. I mean, what do you make of it? If you’ve taken off your researcher hat and just your William hat with your wife who’s a therapist, does it make sense to you, just on a really personal level, how in that normal distribution, 60%, we’re testing ways of being, and we’re testing ways of belonging, and we’re more susceptible and vulnerable to that?

WB: Yeah, I mean, first of all, it makes total sense, and by the way, you hit the nail on the head with that word belonging, because I think a lot of what describes some of these processes is our natural tendency to try to find belonging vis-à-vis groups that we’re in. So if you are someone who is trying to discover your identification with a political group, I actually have some research on this, it turns out if you express outrage, that’s one of the easiest ways to signal to other people that you are a genuine and committed group member. Not to say that people do this strategically, some people might, but there’s all these social layers going on here, and so it’s something that is inevitable, as humans we are inherently social creatures, and in every single interaction we’re in, unconsciously we are scanning the social environment, taking in social information, figuring out how do I relate to this person, how do I relate to this group versus the other group? This is something that is inescapable.

BB: Yeah, and I would say that belonging is in our hard wiring, and in the absence of belonging, there’s a lot of suffering. And I would say that a lot of people have learned to leverage that really well in terms of offering belonging in exchange for certitude and moral outrage, I think that’s part of it. Would you agree or no?

WB: I would agree, and I’m not a religion scholar, but there’s no doubt that going back to my early experiences we discussed in the beginning, there’s no doubt that one of the functions of Christianity in the South is it gives people this community and this belonging, and it can come with a lot of good things, but it can also come with a lot of dark things, because one of the most fundamental findings in social psychology is as soon as we identify with a group, the consequence of that is we feel more belonging with the group, but we also contrast ourselves with out-groups, and it’s really interesting to see how quickly our brain does this. You can assign people to completely arbitrary groups, like if we come into an experiment and I say, “Hey, guess what? You are a circle and I’m a square.” All of a sudden, we view our groups as a competition, even though they’re completely made up. This is a fundamental feature of our psychology.

BB: Yeah, it validates so much of… From the belonging research I’ve done and the desperation to belong, especially as we feel collectively more uncertain, more emotionally dysregulated. I wonder sometimes, can moral outrage be emotion regulating, can it serve an emotional regulation function?

WB: For sure. I’m glad you brought that up because I’ve done several studies where I actually message people on Twitter and I ask them like, “By the way, why are you expressing moral outrage?” And also, I study how outraged people are versus how outraged people perceive the author to be, and we can talk about that later, but one of the things that comes up in this research, people will actually, it’s surprising, people will gladly express to me why they are expressing outrage. There’s a cathartic component for a lot of people, and I think in that sense it’s an emotion regulatory tool, because by expressing this at least in the short term, it actually can make you feel better about getting it out.

WB: And I think getting back to our conversation about what are the positive functions of moral outrage, the truth is, in the US, we live in a highly unequal society, whether it’s race, gender, economics, and a lot of people will have feelings that they need to get out and in a way, either to just challenge the status quo or express feelings about that, and I think in that sense, outrage really serves this emotion regulatory tool. Is it sustainable? Does it make you feel good in the long run? I’m not sure about that, but certainly in the short-term it can serve that role.

BB: Yeah, and I think one of the systems of oppression in the country is a pathologizing of outrage where it is completely warranted, and so that… What we’re saying makes sense. Tell me why we lean on PRIME information in social learning and what is PRIME information? God, this was so crazy to me. This was so good.

WB: Yeah, so let me now get a little bit into the psychology of social learning, which we were alluding to earlier. One of the really interesting things about how we learn from other people is that we actually don’t do it in a way that is always entirely accurate. So what do I mean by that? Actually, we have biases that drive our social learning, so you referenced the term, that we introduced this term, PRIME information, which is an acronym that refers to prestigious, in-group, moralized and emotional information, so four types of information.

BB: Can I stop you and say it one more time?

WB: Oh yeah.

BB: The PRIME is prestigious, in-group, moral, and emotional. Right?

WB: That’s right.

BB: Okay.

WB: That’s right. So the reason why we focus on those four types of information is because it turns out we have specific biases to learn from those types of information that are very well studied in the social science literature. So for example, why would we be biased to learn from someone we view as prestigious? It turns out that this is actually very functional and it leads to efficient social learning over time. The reason is because what does prestige usually signal? Or what does it usually represent? It represents someone who has been successful in some context, and so if you are learning and choosing who you should be learning a skill or some information from, you actually want to learn from the successful person, because then you don’t have to learn all the mistakes that other people are making, right? And actually over time through evolution, there’s evidence that we have developed this bias because of that function.

WB: And you can actually make the same argument for the other dimensions. So for example, why is it useful to learn from in-group members rather than out-group members? In-group members have better knowledge of the immediate environment that you’re in, that’s why you’re in that group, and so it’s the most efficient. And then finally, when we think about moral and emotional information, those are the types of information that tend to be very relevant to either social or physical survival, emotional information you need to be able to pay attention to, like snakes, this is a classic example. Because if you’re not, you’re going to get bitten and you might die. So our brain actually prioritizes moralized and emotional information because it generally helps us navigate the world. So the really interesting thing about this is, okay, we have these social learning biases, what happens when you attach those to an environment where you have algorithms on social media that have a specific goal in mind?

WB: Their goal is to amplify content that draws in our attention and draws in our engagement. Now, why does that happen, or why is that the goal? Because that’s how social media platforms create advertising revenue. And so the interesting thing about that is, well, if I asked your listeners what type of information do you think is most likely to draw in engagement? Well, the answer is, if you’ve been paying attention, the type of content, the PRIME content that we’re biased to attend to and learn from, and so as a side effect of this goal to promote engagement, incidentally the social media platforms are amplifying that PRIME content, the prestigious, in-group, moral, and emotional. And so what we argue is it actually creates a situation where there’s this feedback loop. We naturally interact with that information, we click it, we share it, we post it ourselves. The algorithms amplify it because of how they’re designed, but then what happens is the environment actually gets oversaturated with PRIME information, and we’re just learning to produce more.

WB: And so that’s why in several contexts, especially when it comes to politics, we basically begin to learn that it’s appropriate or that it’s common to express PRIME information more than it really is. And then when you’re dealing with moralistic, emotional, in-group information, that can lead to conflict rather than to collective problem solving and cooperation that actually in offline settings, this bias usually helps us navigate. And the reason is because when you think about it, negative moral information in the offline world is a lot more rare. For example, if we’re trying to detect cheaters in our social groups, they tend to be rare, we punish them, they go away. But in the online world with this artificial inflation, now it’s like… And we’ve probably all had this experience, if we ever try to read politics on social media, it’s like the whole entire world is burning, right? Like there’s so much negative and moral information. And it can be really taxing for someone who is not as plugged into that community.
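As a rough illustration of the side effect William describes, here is a toy engagement ranker. The feature names and weights are invented, and no real platform’s ranking code is implied: the point is that nothing in it mentions outrage, yet PRIME content rises to the top because PRIME features predict engagement.

```python
def predicted_engagement(post):
    """Hypothetical feature weights: the ranker only 'wants' engagement,
    but moral-emotional, in-group, and prestige signals predict clicks
    and shares, so they dominate the score."""
    return (post["base_quality"]
            + 0.8 * post["moral_emotional_words"]  # outrage pulls clicks
            + 0.5 * post["ingroup_signal"]
            + 0.4 * post["author_prestige"])

def rank_feed(posts, k=10):
    # Pure engagement ordering; amplifying PRIME content is a side effect
    return sorted(posts, key=predicted_engagement, reverse=True)[:k]
```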

BB: I want to play back what I think I heard you say, what I think… I’ve read this article several times. PRIME, you know, us kind of privileging prestige, in-group, morally, and emotionally kind of aligned, that hard-wired instinct completely becomes screwed up and broken when it hits an algorithmic social media world. And the algorithm PRIMEs shit that should not be PRIMEd. Is that what you’re saying in a more sophisticated way?

WB: I think that’s definitely one way to think about it. I would make it a little more nuanced by saying the algorithms tend to promote content that we are attracted to naturally, right? And so think about like a car wreck. When we drive by a car wreck, we all turn our heads and we pay attention to it, it’s just something like, no one doesn’t do that. Like everyone does that. But that doesn’t mean that we want to keep seeing car wrecks, right? Like, that’s just not what we would prefer.

BB: No. No.

WB: In fact, I have some work on this led by Steve Rathje, who’s a postdoc, basically what we show in a survey is that everyone realizes that there’s a lot of this negative emotional and moral content online, and that we even click on it a lot of times, but people report that they don’t want it to be seen as much as it is. So we often recognize that there’s this discrepancy between what we naturally get drawn into and what we prefer to see, but the algorithms don’t know that at the moment. And so even if you click on something and you don’t necessarily want to, but you’re like, “Oh, I just wanted to like… I couldn’t help it. I had to check that out,” they’re going to keep promoting that. And I think that process helps explain some of the stuff you’re talking about and all kinds of other content.

WB: And actually, there’s one other thing I would mention about this that’s very important to this conversation, which is that it also explains why a minority of extreme political individuals often dominate the political social media space, especially on Twitter, X, and Meta. Because the algorithms are amplifying their content, it tends to have more PRIME information baked into it, especially the moral and emotional, and so they’re going to amplify it as if these minority extreme users are the majority. And that is what we argue really skews our understanding of social norms, what is common in the environment. We actually think, and I have a study on this, that these really outraged people are more common than they actually are. That starts to mess with our understanding of groups.

BB: Okay. Let’s talk about solutions. Yeah. Because I get really nuts around this, I think. Explain bounded diversification to me.

WB: Yeah. So it’s kind of a mouthful, but going off of what I was just talking about, we know that things like polarizing conversations, toxicity, and moralized, emotional content are produced by a very small minority group of extreme users. In fact, there was a recent study that like 75% of that content is produced by just like 20% of users or even less. And so the whole point behind this idea of bounded diversification, or now what I… I actually now call it representative diversification, to be more intuitive, is that we want to try to change the environment so that people who are on the far tails of the extremity, if you imagine that normal distribution that you referenced earlier, they’re not overrepresented.

WB: So we want to actually have the opinions of less extreme people also in the mix, so that if you’re a user on the platform, you actually have a more accurate social understanding of what different groups are believing. Because you want to know not only what your group actually is thinking, because that’s how you tend to conform, but you also want to know what the outgroup is actually thinking. If you’re a Democrat and your understanding of Republicans is they are all like hate speech, like fear mongering, extreme individuals, there’s data suggesting that that misperception, because that’s not actually true, it makes you more polarized and it makes you dislike the out-group more. Of course, it would.

WB: And so our goal is to think about how can we design algorithms that are counteracting this overrepresentation of extreme individuals, both from the in-group and out-group, which is why we have that idea of diversification. For most people in online contexts, you are getting exposed to out-group members’ thoughts sometimes, but it usually is like an extreme comment and your in-group member is like commenting on that, and that’s how we’re exposed to out-groups. And so our understanding gets skewed. So we want to create an algorithm that can improve this representation in people’s social media feeds, especially in the election season that’s coming up.
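Here is one way the reranking idea could look in code. This is a hypothetical sketch of the concept, not the algorithm Brady’s team is building for Bluesky: it keeps the engagement ordering but caps each ideological group at its actual share of the network, and bounds out the most extreme tail.

```python
def rediversify(ranked_posts, target_share, k=20, max_extremity=0.9):
    """ranked_posts: engagement-ordered posts, each a dict with an
    ideological 'bin' (e.g. 'left-moderate') and 'extremity' in [0, 1].
    target_share: each bin's actual prevalence in the user base.
    All field names and thresholds are invented for illustration."""
    quota = {b: max(1, round(s * k)) for b, s in target_share.items()}
    feed = []
    for post in ranked_posts:              # keep the engagement order...
        if post["extremity"] > max_extremity:
            continue                       # ...but bound out the extreme tail
        if quota.get(post["bin"], 0) > 0:  # ...and cap each group's share
            feed.append(post)
            quota[post["bin"]] -= 1
        if len(feed) == k:
            break
    return feed  # may run short if a bin has too few eligible posts
```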

BB: It makes so much sense to me how it happens, that if you have an opinion and the only out-group opinion you hear is so far extreme that it pushes you to the extreme, because it ratchets up fear. “Oh, I’m not close to what this person believes. I’m a hundred miles from what this person believes. And I don’t even want to get close to this person, so I’m going to take a sharp turn here.” Is there any incentive for representative diversification? Is that the new way of saying like, “Tell the truth?” Is there any hope on your part that Meta will do this?

WB: Yeah. So actually in the upcoming 2024 US presidential election, I’m doing a big study that is actually going to use the Bluesky social media platform. Jack Dorsey created this… The former CEO of Twitter created this open source, like Twitter alternative, but the cool thing about the platform is you can actually implement your own social media algorithms, and so I’m working with an engineer to try to implement this idea that I’ve been telling you about. And one of the key things to answer your question more directly, is that I actually predict that even if you do this representative diversification where you’re showing more constructive arguments from the political out-group, for example, it’s actually still going to maintain people’s engagement because although we’re drawn to toxic content and stuff like that, in the long term, that stuff will exhaust users.

WB: And in fact, if you look at Pew Research, the majority of social media users are actually exhausted by this kind of content, especially in politics, and actually some of my colleagues at the University of Southern California, they have this project called Social Media Index, so like Matt Motyl and Ravi Iyer, they demonstrated that on Twitter and Facebook, people are seeing a lot of this toxic content and don’t like it. There’s content they’re seeing that they actually think is bad for the world and that could lead to hate speech. So my point is, there is an incentive at the user level to reduce this content, and even though it’s true that we all click on like outrage-inducing stuff, in the long term, I think user retention will not be affected by improving some of the representation of that content so it’s more socially representative.

BB: I love this answer, and I have to say it resonates with just conversations that I’m in among people who have large followings like I do, that there’s, for the first time, a very serious conversation about shutting it all down. I went off for a year, and I have to say it was probably the best year for me, mental health resetting-wise. Like it’s a change or die situation I think for a lot of folks who are actually influential. And I do believe that social media can be an incredible tool for activism, for social change. I also think it’s getting increasingly dangerous without change. Is that fair?

WB: I think that’s a great summary. And I also think that these companies are aware of this, and I think they are actively trying to implement things, but there’s a fundamental tension with the advertising revenue goals of the platform. So what we’re trying to do is to provide evidence or tests that you can actually improve discourse without also just completely removing engagement. I mean, people are not just robots, like we do have these hard-wired attractions to content, but we also have goals and ideas of what we want social media to look like even in political spaces. And I think the goal of the algorithm we’re designing is to try to get closer to what that might look like.

BB: I mean, I think if we go back to belonging, at some point, people are going to become exhausted from running toward the bunkers, and the bunkers are so tenuous, because if you disagree with the people behind the bunkers, you’re cast away very quickly. That’s the hard part, right? And I think people want to see their ideas reflected more honestly. Do you agree?

WB: One hundred percent, and that’s actually one of the main goals of this algorithm. If we can represent a wider distribution of people’s beliefs, then what you’re talking about is less likely to happen, and I totally agree. Like I have been frustrated by this. I’m someone who would consider myself generally progressive and on the left, but even someone like me who considers myself that way, I’ve been in situations where I’m like, I don’t know if I should comment on this because I might get so much outrage toward me, when I think I mostly agree with this, but I don’t know if I’ll say it right. Like we’ve all had that experience, and that leads to a silencing of political views, which is just not a situation in a democracy that we want to be in.

WB: I think if we really do believe in democratic conversation, we need the range of views represented, although one of the things that we argue is we can actually cut off some of the far extreme stuff like hate speech and toxicity because that’s, first of all, representing a minority of individuals who are going to be putting out that stuff. We can still get rid of that. Some people have argued for, like, wholesale diversification, but a weird consequence of that in politics is that means I should be, as someone on the left, exposed to, like, far-right ideology. That doesn’t really make sense, right? Or for someone who’s more moderate. So we argue that we should bound the diversification, which is where the term bounded came from.

BB: That’s a helpful add. So I want to move to this because it’s so connected with everything we’ve been talking about. So, “Troll and Divide: The Language of Online Polarization.” I would say the article really investigates how trolls contribute to conflict between groups by using highly polarized language and rhetoric. Is that an accurate assessment?

WB: Yes.

BB: Okay. How did you assess the online polarization?

WB: Yeah. So in this study, what we were able to do is look at the political conversations that tend to demonstrate what colloquially we might call an echo chamber effect. So basically, conversations where if you’re a Democrat or you’re a Liberal, you’re mostly getting retweeted or you’re getting shares, you’re getting comments from people on the left, and really there’s like no interaction with the political right. And then vice versa. We can actually look at those conversations that have that characteristic and we can start to analyze the language that is most likely to predict when things become polarized in that way, in the sense that you’re really seeing this divergence in how people communicate.

WB: And then it allows us to create this dictionary that basically allows us to predict when is a message most likely to create this polarization in the social network? What we ended up discovering is that a lot of that language related to our conversation in this episode, is really centered around highly emotional moralized content. So calling someone evil, for example, as you might guess, that’s going to typically polarize and you’re going to have this type of conversation. So that’s how we measure polarization in that context.
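A minimal sketch of dictionary-based scoring like the kind described here. The word weights below are invented, standing in for the paper’s empirically derived dictionary of language that predicts echo-chamber-style sharing.

```python
# Hypothetical weights; the real dictionary is derived from data.
POLARIZING = {"evil": 1.0, "disgusting": 0.9, "traitor": 0.9,
              "destroy": 0.7, "corrupt": 0.6, "shameful": 0.5}

def polarization_score(message):
    words = message.lower().split()
    hits = [POLARIZING[w] for w in words if w in POLARIZING]
    return sum(hits) / max(len(words), 1)  # normalize by message length

print(polarization_score("these evil corrupt politicians will destroy us"))
# -> roughly 0.33; a neutral policy message would score near 0
```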

BB: Is this true or not true? Are there dangerous actors involved in this outside of, let’s say, the US that are trying to divide Americans and contributing to these really polarized conversations using the language from your dictionary? Is that a truth or not a truth?

WB: Well, we know for example, in the 2016 election that there were definitely foreign accounts that were being employed during the US election, and what our paper shows, if you analyze their behavior, is that they’re posting a lot of content that contains this polarizing language that we’ve shown empirically is associated with polarization in online spaces. So it’s hard to speak to what exactly is their strategy or what is the intention, but we can see what has happened descriptively. Also we see similar things, by the way, with misinformation. We have evidence that for these accounts, from the Internet Research Agency, as the organization was discovered to be called, the type of misinformation that they’re producing and that is getting shared by them is also the type of misinformation most likely to elicit moral outrage. So we’ve studied this very carefully.

BB: Wow. Say that last part again.

WB: Right. So the misinformation associated with some of these troll accounts is the misinformation most likely to elicit moral outrage, and that’s when you compare it to other types of information, so for example, political information produced by The New York Times, it elicits some outrage, but what we’ve found pretty systematically is that the misinformation content is producing even more outrage. And as we have discussed, that can also actually impact how it’s spreading online.

BB: So, I’m just, I’m like pause, because I just got this message on my Instagram, it said, “You have 150,000 some odd potential spam accounts following you.” And it said, “Would you like to delete them?” And I’m like, “Yeah, delete them.” And then it said, “There’s too many to delete.” And I was like, I don’t know how to get out of this, but that’s a Meta issue. And then when I got into a real dust-up a couple of years ago around my podcasting platform, when we called a liaison on one of the social media platforms, they’re like, “You’re in the crosshairs right now with some bots.” And I was like, “Show me an example of a bot.” And I was like, “No, this is like, this is a woman from Milwaukee, look at her picture.” And they’re like, “It doesn’t work that way.” Like, is there a tool for dealing with this? I mean, I’m thinking about the election, William, and it makes me super anxious.

WB: Yeah. For sure. I mean, we definitely should give some credit to the platforms because they’ve implemented a lot of tools that have… There’s evidence that it has reduced a lot of the bots and misinformation…

BB: Oh, there is?

WB: But the problem is, it hasn’t worked perfectly, and there’s no doubt that in any election cycle, there’s this threat, and they’re doing all they can, but it’s hard because especially in the age of generative AI, there’s just so much you can actually produce. Now, misinformation is not necessarily a supply issue, so you… We’ve always been able to produce a lot of misinformation, it’s more about the consumption. But I think with generative AI, it’s more difficult to tell what is blatantly false. It’s very difficult.

WB: And I think my main concern is not so much that like we’re going to see a massive increase in misinformation consumption, but that given we know that AI exists and is being involved in the news production, we’re going to start to get confused and maybe we’re going to start to get tired of having to figure things out.

WB: And it can actually generally reduce the trust in the information ecosystem. So that’s my concern in the upcoming election, and I hope, you know, we can all just try to do our best. Platforms have things like signaling the veracity of news domains, but yeah, just try your best, pay attention to content. If it sounds fishy, maybe look it up. That’s where we’re at right now.

BB: I know. I think what I hear you say that the thing that makes me nervous because I suffer from this a lot, is discernment fatigue…

WB: Yes, yes.

BB: And confirmation bias. I’m like, is that true or not? But I think I’d like it to be true. So I’m going to go with that’s true. One thing that I read, I don’t remember where it was, my last thing for you that I loved, I think this was your idea, or you and the team. I would like a little note on everything I see that says, “This is why you’re seeing this.”

WB: That’s right. I think, actually from polls, we know that most social media users would love more transparency, and what you just described is specifically about, yeah, like algorithm transparency. And I think that would be really helpful to at least making people aware potentially of some of these biases in what they’re seeing resulting from the algorithms like we’ve talked about. I think that’s definitely one small thing that could help, but I do think that we need more education, so people just understand how algorithms are working and how they’re selectively increasing certain content over others. It actually turns out, most people are not aware. They’re aware that algorithms exist, but they’re not aware of the details of how they actually work.

BB: Yeah, I had a friend tell me the other day, we were walking together and she’s like, “Oh, do you have to use the word algorithm? Like I just, I stopped listening when you used that word.” I’m like… I was like, “I get it, me too. But I think we got to figure it out.” Okay. I’m going to go through these. These are rapid fire. Are you ready?

WB: Okay. Let’s do it.

BB: Fill in the blank for me. Vulnerability is?

WB: In the context of online environments?

BB: In the context of William Brady.

WB: Vulnerability is putting your strongest convictions out there and being open to having them challenged, having respect for other people who oppose you.

BB: Damn. Just wow. Like everybody in the room is like this, “Whoa.” Yeah. I don’t…

WB: Yeah, I just made it up.

[chuckle]

BB: Yeah, I don’t… Like, I don’t like your answer, but okay. Okay. You, William are called to be really brave, but you can feel your fear in your throat. It’s real. What’s the very first thing you do?

WB: Fight or flight, I guess. But I think I’m confused over the question.

BB: Yeah. Like when you’re really scared, but you have to do something, you’re going to do it because you want to be brave, but your fear is very real. What’s the first thing you do?

WB: The first thing I do is suck it up, and if I really need to do it, I just have to dive in the deep end.

BB: You go? You just go?

WB: I go. Yep. Take the plunge.

BB: Okay. Last TV show you binged and loved?

WB: I’m watching Shogun right now on FX, and it’s amazing, and I’m totally caught up in it.

BB: Okay. I’ve heard amazing things, but you’re the first person I’ve talked to that’s watching it. Favorite movie of all time?

WB: Having just watched Dune 2, it’s up there, but I think for me as a sci-fi nerd, 2001: A Space Odyssey is my number one.

BB: Oh, you’re a pod door guy. The pod doors. Yeah. Okay. A concert that you’ll never forget?

WB: This is slightly niche, but there is a Swedish hardcore band that got popular in the US in the ’90s called Refused, and they did a reunion tour in like 2010, and it was just the best concert I’ve ever been to.

BB: I can tell you you’re the first to answer this way. [laughter] Favorite meal?

WB: I love Szechuan food.

BB: What specifically?

WB: I love like fried tofu covered in Szechuan sauce with vegetables.

BB: You cannot go wrong there, right? You look like you feel bad about it, but you’re like, that’s not probably good, but it’s so good. What’s on your nightstand?

WB: I have two sci-fi books, the Silo series by Hugh Howey, which I really am into right now. And I also have a noise machine because I love sleeping to white noise.

BB: Me too. A snapshot of an ordinary moment in your life that gives you real joy?

WB: Every day with my dogs. I have two Husky-Malamute-Shepherd mixes. Big fluffy dogs. They love the snow. Definitely that.

BB: And last question, one thing that you’re deeply grateful for right now?

WB: My wife got me into individual therapy and I’ve been going for maintenance. It’s been so great to be able to just check in, have a time every week where I get into my emotions. I think as men, we don’t often have the chance or the desire to do that, but it really forces you to. And I’ve been really loving it, you know.

BB: That’s like my favorite answer that I’ve ever heard. It’s so good. William Brady, thank you so much for being with us on Unlocking Us. This was important, enlightening and just real. And so thank you for taking really heady stuff and making it accessible for us on this podcast because you said it, we don’t understand it, and a lot of people are doing great work, but not translating it for us to consume and think about when we jump on our social media platforms. So grateful to you.

WB: Yeah. Thanks so much for the conversation. It was a lot of fun.

BB: Thank you.

[music]

BB: God, this is… It’s so… I hope you all think it’s as interesting as I do because I just think sometimes I want to know and sometimes I don’t want to know, but I feel like as a leader, as a parent, as a partner, as a person just trying to navigate what is sometimes exciting and new and other times just feels like complete trash and bullshit, I just want to tap out, I’m so grateful for people digging in to understand what’s happening underneath the hood. You can learn more about the episode along with all the show notes on brenebrown.com. We’ll link to a lot of William’s really fascinating work and articles about his work. We’ll have transcripts up in three to five days for everyone.

BB: Also, we are sending shorter weekly newsletters that recap our podcasts and content for the week, and you can sign up for those on the episode page. You’ll also get all of William’s links to where you can find him and what he’s doing. I appreciate you being here. I hope you’re enjoying the series. There’ll be comments open on the podcast page, so if you’ve got questions about, “Hey, this is interesting, I’d love to know more about this,” maybe we can point you in the right direction, or, “Here’s an interesting idea for this series on Living Beyond Human Scale.” We’d love to know what you think. All right. Stay awkward, brave, and kind.

[music]

BB: Unlocking Us is produced by Brené Brown Education and Research Group. The music is by Carrie Rodriguez and Gina Chavez. Get new episodes as soon as they’re published by following Unlocking Us on your favorite podcast app. We are part of the Vox Media Podcast Network. Discover more award-winning shows at podcasts.voxmedia.com.

 

© 2024 Brené Brown Education and Research Group, LLC. All rights reserved.

Brown, B. (Host). (2024, March 28). Dr. William Brady on Social Media, Moral Outrage, and Polarization. [Audio podcast episode]. In Unlocking Us with Brené Brown. Vox Media Podcast Network. https://brenebrown.com/podcast/social-media-moral-outrage-and-polarization/
