

On this episode of Unlocking Us

In this episode, Brené and Craig discuss what is known in the AI community as the “alignment problem”: who needs to be at the table in order to build systems that are aligned with our values as a democratic society? And when we start unleashing these systems in high-stakes environments like education, healthcare, and criminal justice, what guardrails, policies, and ethical principles do we need to make sure that we’re not scaling injustice?

This is the third episode in our series on the possibilities and costs of living beyond human scale, and it is a must-listen!

Please note: In this podcast, Dr. Watkins and I talk about how AI is being used across healthcare. One topic that we discuss is how AI is being used to identify suicidal ideation. If you or a loved one is in immediate danger, please call or text the National Suicide & Crisis Lifeline at 988 (24/7 in the US). If you call 911 or the police in your area, it is important to notify the operator that it is a psychiatric emergency and to ask for police officers trained in crisis intervention or trained to assist people experiencing a psychiatric emergency.

About the guest

Dr. S. Craig Watkins

Craig Watkins is the Ernest A. Sharpe Centennial Professor and the Executive Director of the IC² Institute at the University of Texas at Austin. Craig is an internationally recognized scholar who studies the impacts of computer-mediated technologies in society and has been a Visiting Professor at MIT’s Institute for Data, Systems, and Society. The author of six books, Craig has written and lectured widely about the social and behavioral impacts of digital media with a particular focus on matters related to race and systemic inequalities. Currently, he leads multiple research teams that are focusing on the acceleration of artificial intelligence/machine learning in the health and well-being space. This work focuses on both the computational and the social and ethical aspects of AI and machine learning. Funded by the National Institutes of Health, Craig is a Co-Principal Investigator on a team of researchers from the University of Texas and Cornell’s School of Medicine that is developing algorithmic models to better understand the factors underlying the rising rates of suicide among young people in the U.S. He also leads a team that is using one of the largest publicly available health datasets from the National Institutes of Health to develop models to understand the interactions between demographics, social and environmental factors, and the distribution of chronic diseases and mental health disorders. Further, Craig is working with a team to understand the risks and opportunities health care professionals face as the capacities of AI and machine learning continue to expand, raising questions about who will drive the future of health care work: machines or humans? Craig also works as an advisor to a variety of organizations that are thinking deeply about the future of generative AI and well-being, including Google, the Scratch/Lego Foundation, UNICEF, the National Science Foundation, the National Endowment for the Arts, and the New York Museum of Science.

Show notes

Artificial Intelligence and the Future of Racial Justice, S. Craig Watkins, TEDxMIT, December 2021

The Digital Edge: How Black and Latino Youth Navigate Digital Inequality, by S. Craig Watkins with Alexander Cho, Andres Lombana-Bermudez, Vivian Shaw, Jacqueline Ryan Vickery, Lauren Weinzimmer


Transcript

Brené Brown: Hi, everyone. I’m Brené Brown, and this is Unlocking Us. No, I have not started smoking two packs of cigs a day; I’ve just got a raspy cough. Let me just tell you, this conversation, not only was my mind blown as I was having it, I loved watching people on my team listen to the podcast and look at me and just make the little mind-blown emoji thing. This is such a cool and scary and hopeful conversation. It’s just, I don’t know that I’ve ever learned so much in 60 minutes in my career. So I’m so excited that you’re here. This is the third episode in a series that we’re doing that I call Living Beyond Human Scale. What are the possibilities? What are the costs? What’s the role of community and IRL relationships in a world of AI, social media, and 24-hour news?

BB: We’re being bombarded by information, at the same time, people are fighting for our attention and will use any means necessary to get it. I don’t know about you, but I’m tired. This series is going to be unique in that it’s going to cross over between Unlocking Us and Dare to Lead. I’m so glad you’re here. So I’m talking to Professor S. Craig Watkins, and he just has a gift of walking us into complicated things and making them so straightforward and then asking really unanswerable, important questions. Maybe they’re answerable, well, not by me, but it’s just incredible.

[music]

BB: Let me tell you a little bit about Craig before we get started. So S. Craig Watkins is the Ernest A. Sharpe Centennial Professor and the executive director of the IC² Institute at UT Austin, the University of Texas at Austin, Hook ‘Em. He is an internationally recognized scholar who studies the impact of computer-mediated technologies in society. He has been a visiting professor at MIT’s Institute for Data, Systems, and Society. He’s written six books. He lectures all over the world on the social and behavioral impacts of digital media with a particular focus on matters related to race and systemic inequality. He leads several research teams that are global, and he’s funded by the NIH, the National Institutes of Health. He’s leading a team that’s using one of the largest publicly available health data sets from the National Institutes of Health to develop models to understand the interactions between demographics, social and environmental factors, and the distribution of chronic disease and mental health disorders.

BB: His full bio will be on the podcast page on brenebrown.com. Comments will be open. Maybe if you’ve got some really good questions, I could pry some more answers out of him. He was really generous with his time, but I’m really glad you’re here. On a very serious note, I want to say that Craig and I talk about how AI is being used across healthcare, and one topic that we are discussing is how AI is being used to identify suicidal ideation. So if this is a topic for you that has got a big hook, I would suggest not listening. And always, if you or a loved one is in immediate danger, please call or text the National Suicide & Crisis Lifeline. In the US, it’s just 988. It’s open 24/7, and if you need to call the police in your area, which would be 911 in the US, but if you’re calling the police in any area, it’s always important to notify the operator that it’s a psychiatric emergency and ask for an officer trained in crisis intervention or trained to assist people experiencing a psychiatric emergency.

BB: One of the quotes from this, before we jump in, that really stood out to me, that I just cannot stop thinking about is Craig saying, “When we started unleashing AI systems in high-stakes environments like education, healthcare, criminal justice, without any proper understanding, without any guardrails, without any policies, without any ethical principles to guide how we were going to go about incorporating these decisions into these high-stakes environments, you have to ask, what were we thinking?” Let’s jump in.

[music]

BB: Craig, welcome to the podcast. I’m so happy you’re here.

Craig Watkins: Thank you for having me.

BB: So we’d love to start with you sharing your story with us. And normally, especially researchers are like, “My research story?” And I’m like, “No, like you were born… Little baby story, like you were born where? How did you get into this?” Tell us about yourself.

CW: Yeah, absolutely. So I’m a native Texan, which is rare among the faculty at the University of Texas at Austin, actually. Grew up in Dallas, the youngest of three. Neither my mother nor my father went to school beyond high school, but they had high aspirations for their children, myself included. And from a very early age, my mom essentially socialized me to think big and to put importance on education. And so that was something that was always a significant part of my life. As a very young child, I can remember always just writing. Literally, like I learned in second or third grade how to physically make books with the spine and the…

BB: Wow.

CW: Front cover, back cover, and would literally physically make books and then just populate those books with stories, with characters, with images. And so I’ve always had an interest in writing. I’ve always had an interest in just human behavior and just observing human behavior. And over the course of my academic training, that became a lot more formal. Undergraduate, University of Texas, where I currently live in Austin and teach at UT Austin; graduate school, PhD, University of Michigan. I lived on the East Coast, Philadelphia and New York, for a while, but developed a deep interest in wanting to understand human behavior, and human behavior in particular in relationship to technology. So it’s something that I’ve always just had a natural interest in. And as you can imagine, over the last 20 years or so of my academic career, just the significant and tremendous evolutions in technology and what that means for human behavior, it’s been just a nice alignment for me, from a natural curiosity perspective and a formal academic training perspective. And so it’s worked out pretty well for me in terms of how the world has evolved and how my interest in the world has continued to grow as well.

BB: When I think of AI and I think of research, and I think of human behavior, which is my interest in the intersection, I think of you. And what I love is that your mom said, think big, and you study artificial intelligence and the biggest stuff [laughter] in the whole world. You went there.

CW: Who could have known, right? Obviously no idea back in the ’80s or so, ’90s or so, what the world was evolving towards. But certainly, it’s been a really interesting journey and excited along the way, definitely.

BB: I am so glad you’re with us and we got a chance to talk a little bit before the podcast. And you know that I have a slight obsession with your work, and I’ve tried to divide it up into two super related things. One thing, and I will put links on the show page to all of Craig’s work and where you can find him, his LinkedIn, his books. What I really want y’all to watch that has blown my mind, that I want to talk about and dig into, is your TEDxMIT talk, so the TED talk that you gave at MIT. Before we get started, can you tell us a little bit about the project at UT that you’re on, and also what you’re doing with MIT and the collaborative projects that you’re working on, as context for our conversation?

CW: Yeah, absolutely. So at UT, I wear a few different hats. I’m the director of a think-and-do tank called the IC² Institute. Under my leadership, over the last year and a half or so, we’ve dug pretty deeply into the intersection between innovation and health, and AI in particular. And so this idea of bringing innovative thinking, innovative partnerships to how we understand health and wellbeing through the lens of the possibilities and also the perils of artificial intelligence. UT also has three grand challenges, and these are initiatives that are funded by the Office of the Vice President for Research, endorsed by the president of the university. And one of those three grand challenges is Good Systems. Good Systems is a reference to what we oftentimes refer to as ethical and responsible artificial intelligence. And I’m one of the co-principal investigators for the Good Systems Project, focusing primarily on thinking about the ways in which we can design and deploy AI systems in ways that mitigate, rather than exacerbate, inequities like race and gender inequity, for example.

CW: And my work in AI sits at two interesting ends of the spectrum. We have teams that are doing a lot of the computational work, that is, working with large data sets, for example from the National Institutes of Health, trying to understand how we design models and use data to understand, with a little bit more empirical nuance, what’s happening from a health disparities perspective. And we can get into a little bit more of that in detail, if you’d like. But at the other end of that spectrum is the human dimension. And that is thinking not only about AI from a computational, mathematical perspective, or thinking about it strictly as a technical or computational problem, but understanding that there are significant human and ethical questions at stake. And so we spend a lot of time talking to just people on the front line, particularly people in healthcare, getting their sense of AI, their concerns about AI, their aspirations for AI.

CW: And so I feel like the way in which we approach the work gives us the kind of multidimensional perspective that helps us to understand all of these intersecting questions that are driving conversations today around AI, across many domains and certainly as it relates to AI. And I had the fortunate opportunity a couple of years ago to spend a year at MIT as a visiting professor, working with teams there at the Institute for Data, Systems, and Society, where they have launched similar initiatives, where they’re trying to build teams that are trying to understand the more complex and nuanced ways in which race, racial discrimination, and systemic racism influence society, and trying to build up models and teams and collaborations that allow us to understand these dynamics in much more sophisticated ways. And so we continue to work with some of the teams there, particularly in the healthcare space, in the health and wellbeing space.

BB: So let’s start with, I want to ask some questions about what I think I learned from you and dig in. So one of the things that’s been scary in my experience… I spend probably the vast, vast majority of my time inside organizations. There is a mad scramble right now within organizations to understand how to use AI. I’m using ChatGPT a lot right now. Everyone’s kind of experimenting and thinking about it. And I’ve got one daughter in graduate school, I’ve got a child graduating from… A young person, not a child, but a young person graduating from high school. We talk about the future all the time. One of the things that was very interesting to me about your work is when I go into organizations, I get really nervous, because what I hear people say is, “Oh, we’re going to use AI, that will control for all the bias in everything from hiring to how we offer services, to how we evaluate customers.” And my sense is that it’s not going to work that way. My fear is, just from using my own ChatGPT prompts, my fear is we may not eliminate bias, we may scale it. We may just bring it to full scale. And things I’m thinking about are like algorithms for hiring that people have been using for five years and show that there’s tremendous gender bias or race bias in those hiring algorithms. So tell me about the intersection of AI, scale, and fairness.

CW: Yeah, so we have a day or so, you say, for this conversation?

[laughter]

BB: I was going to say, I’m going to start with a small… Let me tell you something. There are a couple of things in that TEDxMIT talk that freaked me out and blew my mind. And the first one was completely reconceptualizing how I think about fairness. So yeah, it’s just crazy to me.

CW: Yeah. What I was trying to get at in that talk and in my work is that… And only academics could take a notion or a concept like fairness and render it almost impossible to understand. Fairness just seems really easy, right? Simply fairness. But what’s happened, Brené, is that in the machine learning and AI developer community, they have heard the critiques that the systems that they’re building inadvertently, and I don’t think any of this is intentional, but the systems that they’re building, whether it’s a hiring algorithm, an algorithm to diagnose a chronic health condition, or an algorithm designed to determine whether or not someone will be a repeat offender or default on a loan, what we’ve come to understand as we have deployed these systems in higher and higher stakes environments (education, healthcare, criminal justice, et cetera) is that they are inadvertently replicating and automating the kinds of historical and legacy biases that have been a significant part of the story of these institutions.

CW: And so what the developers have come to understand is that this is something, obviously, that we don’t want to replicate. We don’t want to scale, we don’t want to automate these biases. And so they’ve come up with different notions of fairness. So, how can you build an algorithmic model that performs this task, whatever that task is, in a way that’s reasonably fair? And if we take race and ethnicity, for example, this might be one way of thinking about it. So take race and what we’ve come to understand about racial bias and racial discrimination, and try to build a model. And so what some developers argue is that the way in which you mitigate or reduce the likelihood of this model performing its task in a way that’s disproportionate across race is just to try to erase any representation or any indication of race or ethnicity in the model.

CW: So you strip it of any data, any notes, anything that might reflect race. And so there’s this idea of creating a model that’s racially unaware, and the idea is that by being racially unaware, it then reduces, at least from this perspective, the likelihood of reproducing historical forms of racial bias. But on the other end of that, the flip side of that argument asks, is that even possible? And should we instead build models that are, in fact, race aware? In other words, you build into the model some consideration of race, some consideration of how race may shape certain kinds of properties or certain kinds of features with respect to the domain of interest that you’re looking at. And so it’s just one example of how developers are taking different approaches to trying to cultivate models that operate from a perspective that’s more fair.

CW: So here’s what’s really interesting. There was this study that some colleagues of mine and others did at MIT where they were looking at medical images. And they essentially trained a model to see if it could identify the race and ethnicity of the person a medical image belonged to when you strip the image of any explicit markers of race or ethnicity. And the model was still able to predict, with a high degree of accuracy, the race of the patient who was imaged in that particular screening. So then they did some other things. They said, well, how about if we account for body mass index? How about if we account for chronic diseases? Because we know that certain chronic diseases are distributed along racial and ethnic lines. How about if we account for organ size, bone density? All of these things that may be subtle or proxy indicators of race. And so they stripped the images, they stripped the database of all of those kinds of indicators, and the model was still able to predict the race and ethnicity of the persons represented in these images.

BB: Wow.

CW: And then they even went so far as to degrade the quality of the image, and the model could still, with a high degree of accuracy, identify the race and ethnicity of the patient represented in that image. The story here is that these models are picking up on something, some feature, some signal that even humans can’t detect, that human experts in this space can’t detect. And what it suggests is that our models are likely understanding and identifying race in ways that we aren’t even aware of, which suggests that how they’re performing their task, if that’s predicting a repeat offender, if that’s predicting who’s likely to pay back a loan or not, if that’s predicting who’s likely to graduate from college in four years or not, they are picking up on racial features and racial signals that we’re not aware of, suggesting that it may be virtually impossible to build a “race-neutral” or race-unaware algorithm.
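A minimal illustration of the point above, using synthetic data: even when a protected attribute is dropped from the training features, a simple classifier can often recover it from correlated proxies. The feature names, coefficients, and data below are invented for this sketch; they are not taken from the MIT study.

```python
# Sketch (synthetic data): drop the protected attribute "group" from the
# features, then check whether a simple model can recover it anyway from
# proxy features that are correlated with it for historical reasons.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (0/1); it is never given to the model.
group = rng.integers(0, 2, size=n)

# Proxy features correlated with group membership (e.g., residential
# segregation, unequal utilization, income gaps). All numbers are made up.
zip_region   = group * 1.0  + rng.normal(0, 0.6, size=n)
prior_visits = group * 2.0  + rng.normal(0, 1.0, size=n)
income       = group * -1.5 + rng.normal(0, 1.0, size=n)

X = np.column_stack([zip_region, prior_visits, income])  # "unaware" features
X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

clf = LogisticRegression().fit(X_train, g_train)
print(f"Accuracy recovering the removed attribute: {clf.score(X_test, g_test):.2f}")
# Well above the ~0.50 expected if the proxies carried no group signal.
```

Anything downstream that is trained on those same proxies can therefore behave differently by group even though race never appears as a column.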

BB: Well, let me ask you this. My reflexive response to that is, I’m not sure that racial neutrality like gender neutrality is a good idea, because it gives a little bit of the colorblind stuff and gives a little bit of the assumption that everyone starts from the same place in terms of opportunity and exposure. Where am I wrong here?

CW: You’re not, and it’s precisely part of the problem with how these fairness models and fairness frameworks have been developed. So the intentions are good, but I think the complexities of systemic inequalities, of systemic racism, for example, aren’t very well understood. And so as a result, what I was trying to suggest in that talk is, good intentions aside, that these models in the long run are likely going to be largely ineffective, precisely because they operate from the assumption that racial bias and discrimination are interpersonal, that they’re individual, as opposed to structural and systemic. And so in that sense, they’re good initial efforts, but I think if we really want to get at this problem in a way that’s going to be enduring and impactful, it’s going to require a little bit more sophistication and thought. And it speaks to what I see as another evolving development in the whole AI and machine learning space, which is that 10, 15 years ago, certainly 20-plus years ago, these were problems that were largely being addressed by a very limited range of expertise: engineers, computer scientists, maybe data scientists.

CW: But what we’ve come to understand now is that it requires much more multidisciplinary expertise. You need social scientists in the room, you need humanists in the room, you need designers in the room, you need ethicists in the room. And understanding now that the complexity of these systems requires complexity of thought, multidisciplinary kinds of expertise, in order to address them. And hopefully, what that talk will encourage… And I think talks like that and research like that are encouraging AI and machine learning developers to bring more people into the conversation in order to say, “Hey, if this model is attempting to address injustice in the criminal justice system, injustice in how we deliver healthcare and health services, we’ve got to have domain expertise in the room. We can’t just have computational expertise in the room. We need domain expertise helping us think about all of these different variables that are likely going to impact any model that we develop, data sets that we compose, procedures that we articulate as we begin to develop this system.”

CW: And I think that’s something that we’re beginning to see more and more of, and it’s certainly something that we’re doing here at UT Austin with the Good Systems Grand Challenge, which is really saying, “Hey, bring together multidisciplinary expertise to solve these problems. This is no longer a problem that’s adequately solved by just computer scientists or engineers.” Doing that has gotten us to the point now where we see that that’s insufficient, it’s inadequate and increasingly indefensible.

BB: Okay. Let me ask you a question. So I want to just define for everyone listening. When you say “domain expertise,” you mean not just people who can code and understand algorithms, but a behavioral scientist, someone who has domain expertise in structural or institutional inequality, an ethicist. So just for fun, what would your perfect table look like?

CW: Can I give an example of the table we’re trying to put together, literally as we speak?

BB: Yes. Love it.

CW: So I’m part of a team that’s just been funded by the National Institutes of Health to explore the development of AI and machine learning models to help us better understand what’s driving these rapidly increasing rates of suicide among young African-Americans. So we’ve all heard about the youth mental health, behavioral health crisis in this country, and it gets expressed in a number of ways: higher rates of depression, generalized anxiety disorder, loneliness, and, significantly and tragically, suicidal ideation and increasing rates of suicide. We know that suicide is the second leading cause of death for young people between the ages of 16 and 24 or so. And so we received this funding from NIH to try to say, “Could we develop models that may be able to identify what are some of the social risk factors? What are some of the behavioral factors that might contribute to or even be predictive of someone contemplating or attempting suicide?”

CW: And this is a partnership with the med school from Cornell. So we’ve got engineers and researchers, computational experts from Cornell and here at UT Austin. Now, we could have approached that project and said, okay, let’s just put our computational expertise, our computational talent together and develop these models and see what we can generate. Write a paper, hand that over to the NIH and say, here it is. Good job done. We’re finished. But we said, that isn’t really satisfactory. That isn’t sufficient. We’ve got to bring other people to the table, other people into the conversation. And so part of what we also proposed to the National Institutes of Health is that we would compose an advisory board that consisted of behavioral health specialists, that consisted of community stakeholders, people who are on the ground, who are talking to, live with, and understand what young African-Americans are going through and could give our computational team expertise, guidance and frameworks, ideas that we might not ever consider simply because we don’t have the deep domain expertise that they have in that space.

CW: And the idea of that is they may be able to give us feedback on, is our dataset appropriate? Is it proper? Is it going to give us sufficient insight? Is it going to be able to perform predictive tasks that are relevant to the problem as they see it? And so it’s this idea of building AI with rather than for people, with rather than for distressed populations.

BB: Wow.

CW: And when we begin to do that, when you begin to bring that outside, external expertise into the room at the table, you get a very different perspective of what’s happening on the ground. You get a very different perspective of what the possibilities and limitations of your model are. And so that’s how we’ve approached the work and what we think is unique about the work. But I also think this is becoming a new standard. It’s certainly a new standard in terms of how the National Science Foundation, the National Institutes of Health are funding future AI research. They’re beginning to require researchers to say, “Hey, who are you talking to? Are you talking to people beyond your laboratory? Are you talking to people beyond your academic expertise?” Because if we’re going to build AI that matters, AI that impacts the world in a significant way, we’ve got to expand who’s contributing to that conversation and who’s driving how we design and deploy these systems in real world context.

BB: As a social worker, you’re speaking my normal language, which is, make sure the people at the table making decisions for clients include people with lived experience, community on-the-ground experience. It’s got a very old-school organizing feel. We want the quantitative, but we also need the qualitative. It reminds me of Karen Stout, who was my mentor when I was getting my PhD. Her term was femicide, the killing of women by intimate partners. And what’s coming up for me in this conversation right now, Craig, is I was the first person with a qualitative dissertation at my college, and it was like a shit show. They really fought it.

CW: [chuckle] You’re right.

BB: But one time she told me, because she was a domestic violence researcher, she said, “It would be so great if all that mattered were the stories, but we have to have the quantitative information. We have to have scope and scale and the numbers, and it would be so great if all that mattered were the numbers. We can’t move anything for funding or really understanding without the stories.” So is it fair to say that there’s a new awareness that lived experience, on-the-ground experience, behavioral science, humanism, as you mentioned, these are relevant for building AI that serves people?

CW: Absolutely. Obviously you’re right that the critical currency in all of this, when we talk about AI and machine learning, is data. And I think we are approaching techniques and developing practices that are beginning to understand different kinds of data and how we can leverage these systems to probe different kinds of data to yield outputs, outcomes, results, findings that could be extraordinarily revealing. And so for instance, in this suicide project that I mentioned to you, some of the critical pieces of data that we’re using are what’s oftentimes referred to in the field as unstructured data. Unstructured data is another way of thinking about stories. What stories are healthcare professionals on the ground telling about young African-Americans? What stories are law enforcement officials telling about young African-Americans? So we’re looking at a data set, a registry of violent deaths, that is beginning to capture some of that unstructured data, more qualitative data.

CW: And so we now have computational techniques, we now have systems that can actually begin to analyze that language. What words are they using? What environmental triggers are they identifying? What behavioral triggers or what social support triggers are they identifying as they describe a condition, as they describe a person and their journey, and how might it help us to understand how that led to suicide? And so being able to mine unstructured data, and we’re seeing this via social media, we’re seeing this via diaries that people keep. There’s an interesting researcher who studies suicide, and he studied it by studying suicide notes. What notes do people leave behind as they contemplate taking their own lives?

BB: Oh, god, tough.

CW: And this idea of studying those notes, what he refers to as the language of suicide: what are people communicating? How are they communicating that? What sentiments are they expressing? What keywords are they using? What are the linguistic markers that might help us to understand in a very different and unique way? What are those signals that may help us to see and identify, before someone reaches crisis, how to intervene and how to prevent some of these things from happening? But I am increasingly fascinated by how we might be able to mine stories as a form of data to understand human complexity, to understand the human condition in extraordinary ways.

BB: Now, you’re speaking my language. [laughter] I’m not a zero and ones kind of girl, but man, when you talk about unstructured qualitative data, now we’re singing from the same hymnal here. And I know you need both, but I do think there’s so much language and story as the portal that we share, I think, into each other. And so I think it’s exciting.

[music]

BB: One of the things that blew me away again about the TEDxMIT talk… First of all, can I just say thank you for speaking to a room of academics in a non-academic way? I’ve had my family listen to it, and they were like, “Well, is this a professor talking to professors? Because I need a heads-up.” We need a closed caption [laughter] that translates. And I said, “It is a professor talking to professors.” But the reason why I love your work is I always think a mark of a genius is someone who can explain conceptually complex things in a way that’s understandable to everyone. One of the things that you explained in this talk that was both vitally important, I believe, but also really scary, is when you talked about the different types of racism: interpersonal racism, which is pretty easy to recognize between people, the language, it’s fairly obvious, and then institutional racism and structural racism. And you built a map of these things that looked like the London Underground. It was so complicated. Tell me about, this is one of the things that keeps me up at night, the increasing use of AI in policing and trying to build computational and mathematical models that account for not just interpersonal racism, but institutional and structural racism and the complexity of those things.

CW: Yeah, it’s…

BB: Again, another easy question for you.

CW: Yeah, yeah, absolutely. I should have eaten my Wheaties this morning. [laughter] No, it’s a great question. I think part of what I was trying to get at in that talk is, so I’m a social scientist by training, and so I’ve had the, I guess I could say, fortune and opportunity just to have been immersed in the academic literature for 20, 25-plus years or so, throughout my academic training and career, even longer actually. And you understand that systemic discrimination is very powerful insofar as we begin to understand how racial discrimination in one domain, let’s say neighborhood and housing, has implications for how racial disparities get expressed in other domains. Education, for example, access to schools, access to high-quality schools, what that then means in terms of access to college, access to employment, and that these things are woven and interconnected in ways that are so intertwined.

CW: The web, the degree to which these things are intertwined with each other, is so complex and so dynamic. And this is part of what I was… My time at MIT and beyond, just talking to people in these spaces from a computational perspective, how… Is it even possible to develop… What kind of dataset would you need to really begin to understand the interconnectedness of all of this? Just where someone lives, what the implications are for health disparities, what the implications are for access to employment, access to education, access to other kinds of resources. And it’s just extraordinary how these different things are interlocked, how they’re connected, and how one disparity in one domain is reciprocal with and connected to another disparity in a whole different domain. And I don’t know, and so I consider myself, more so than anything, a learner in this space.

CW: And so I feel like I’m still learning, but I haven’t landed on a solution or even the notion that it’s possible from a computational perspective to deal with the utter complexity of all of this. Now, what we started doing at MIT was just isolating very specific domains: healthcare, discrimination in housing, and trying to see if we could come up with predictive models, if we could come up with computational models that allowed us to understand with a little bit more empirical rigor and specificity what was happening in a specific domain. But when you start trying to understand how what’s happening in that domain is connected to what’s happening in another domain, building those kinds of models, building those kinds of datasets, I’m not even quite sure if we understand or have the right questions to ask to be able to really help us begin to start cultivating or developing the right kinds of problem definitions, the right kinds of data preparations, the right kinds of modeling that might allow us to understand with even more complexity what’s happening here.

CW: And so the idea of building models for what’s called predictive policing, and understanding that oftentimes what’s happening in predictive policing is simply scaling the kinds of historical biases that have typically defined the ways in which police go about their jobs, public safety, how they manage the streets, how they manage certain communities vis-à-vis other communities. And there’s this kind of self-fulfilling prophecy that if you understood crime as existing in these zip codes and those zip codes, and how you’ve arrested people and how you’ve deployed resources to police those communities, the idea is that if you are allocating resources, surveilling communities, policing communities in ways that you believe are required, that it’s going to lead to higher arrest rates, it’s going to lead to a whole lot of other outcomes. And so you end up just repeating that cycle as you begin to start relying on these historical trends through your data signals to say, “Hey, this is where we should be allocating resources. This is where we should be policing. This is where we should be addressing concerns about this or concerns about that in terms of criminal activity, criminal behavior.”

CW: And so what you end up creating are oftentimes systems that are not necessarily predictive of outcomes that we think are important, but are in some cases instead, predictions of ways in which we have tended to behave in the past. And so a crime prediction algorithm becomes not an algorithm that predicts who’s most likely to be a repeat offender, but instead it becomes a model that predicts who’s most likely to be arrested. And that’s a very different prediction, a very different outcome that then requires…

BB: Whoa. Yes.

CW: A very different kind of institutional response.

BB: Who’s more likely to commit a crime, versus who’s more likely to be arrested? Two different questions.
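A small synthetic sketch of the feedback loop Craig describes: recorded arrests depend on where patrols are sent, and the next allocation of patrols follows the recorded arrests. The offense rates, patrol counts, and update rule below are all made-up assumptions for illustration, not a model of any real department or dataset.

```python
# Sketch (synthetic numbers): two neighborhoods with the SAME underlying
# offense rate, but an unequal historical patrol allocation. Because recorded
# arrests scale with where police are looking, and patrols follow recorded
# arrests, the initial imbalance tends to persist or grow.
import numpy as np

rng = np.random.default_rng(1)
true_offense_rate = np.array([0.05, 0.05])   # identical in both neighborhoods
patrols = np.array([70.0, 30.0])             # unequal starting allocation

for week in range(20):
    # Arrests are only recorded where patrols are present to observe them.
    arrests = rng.poisson(patrols * true_offense_rate)
    # "Predictive" reallocation: send next week's patrols where arrests were recorded.
    shares = (arrests + 1) / (arrests + 1).sum()
    patrols = 100 * shares

print("Final patrol allocation:", np.round(patrols, 1))
# The system ends up predicting where arrests get recorded, which reflects
# where police looked, not where offenses actually differ.
```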

CW: And that’s increasingly referred to today in the AI community as the alignment problem. And while I don’t agree fully with how the alignment problem is currently being defined, it’s this notion that we are building systems that are leading to unintended consequences, consequences that are not aligned with our values as a democratic society. And so this idea that when you build a crime risk prediction model, you want that model to perform at a very high level, a very reliable level, and you want it not to discriminate or be biased. And what we’ve seen instead is that these kinds of models, in fact, are biased, that they do discriminate. And so therefore, they’re misaligned with our values as a society in terms of what we want them to do, based on what they really do when they get applied in real-world contexts.

BB: I’m kind of stuck here, in a way. Who’s the we? Because I think of that quote that I thought of a lot after George Floyd’s murder, that the systems aren’t broken, they’re working exactly as they were designed. So are the folks that are defining fair and working on alignment, have they gone through some rigorous testing to make sure they’re the ones who should be defining what aligned and fair is? Do you know what I’m saying?

CW: Yeah, no, I get exactly what you’re saying. And I think the resounding response to that question is no, that they oftentimes haven’t gone through the proper training or haven’t developed the depth of expertise to understand how and what we need to be aligning these systems to do. And so you can make a case, and this is part of the problem that I have with the ways in which the alignment problem has been defined. You mentioned, for example, hiring algorithms. And so there was a study, this report a few years ago, about the Amazon hiring model that was biased against women when they were looking to hire engineers. And they ended up basically discontinuing any development and certainly any deployment of that model, because all of the experiments suggested that it was biased against women, that it was picking up on certain linguistic signals and markers in resumes that oftentimes favored men, male candidates, as opposed to female candidates.

CW: And some could say, “Oh, well certainly we can’t have that, and that’s not what we intended the model to do.” Well, the real question is, that model was aligned quite neatly with the ways in which Amazon historically had hired. And so there was nothing exceptional, there was nothing unique about that model. In fact, it was quite aligned. It was quite consistent with how Amazon had hired, which is why it was developing the predictive outcomes that it was developing. Because clearly, Amazon had a history of hiring male engineers vis-à-vis female engineers. And so in that sense, is it an alignment problem or is it another problem that we need to understand in terms of what’s happening here? And I think your articulation is really where we need to be going as we understand and try to develop these concepts and frameworks that might help give us guardrails, give us principles, give us concepts that we can then begin to integrate into our practices to help us better understand and address these issues as they play out and materialize in the ways in which these systems get deployed.

CW: And so oftentimes, even in the risk assessment models, think of the important study that was published by ProPublica a few years ago about the COMPAS model and how it was discriminating against and biased against Black defendants vis-à-vis White defendants. And again, the question is, this model was in some ways performing in ways that were quite aligned with how judges had historically made decisions about who was most likely to be a repeat offender, and therefore the kinds of decisions that they were making based on those human conceptual models. And what we’ve done is just create a system where we are unintentionally designing machine models, machine learning models, that are behaving in similar ways, making similar decisions, but simply in ways that are scaled, simply in ways that are automated, simply in ways that are accelerated beyond anything that we could have ever imagined.

BB: Yeah, there are two things that come to my mind when you say that. One is, the person with the most power defines “aligned.” And if it’s really aligned to just causes, then it reminds me of the Jack Nicholson line, like, “You can’t handle alignment. [laughter] You can’t handle the truth.” If you really want to align to the highest ideals of democracy, that is going to shift power in ways that the people who hold it will not be consistently comfortable with, is the nicest way I can put it.

CW: I think you’re absolutely right. And not only are they not comfortable with it, I don’t even think they can conceptualize what that would look like, even. In other words, it would require recognition of a kind of expertise and tapping into communities and experiences that they typically might have little, if any connection to. And so how their notion of alignment gets defined and how they proceed with trying to address the “alignment problem” in AI, I think will continue to suffer from these kinds of constraints for the very reasons that we’ve identified here.

BB: Wow. So now we’re not going to say… Instead of saying, “The systems are not broken, they’re working just like they were built,” I don’t want to, in 10 years, say “The algorithms are not broken. They’re working exactly as designed.”

CW: Absolutely. Yeah, absolutely. And this is the interesting thing about systemic structural inequality. I don’t think any of this is necessarily intentional. So I’m not aware, and I’ve been knee-deep in the literature for decades, I’m not aware of any experiment, any study, any revelation that suggests to us or tells us, when these systems are being built at the highest level, OpenAI, Google, Facebook, et cetera, I’m not aware of any process where they intentionally design these systems to perform in these ways that lead to these disproportionate impacts. But nevertheless, by virtue of how they define problems, by virtue of how they build and prepare data sets, how they build and develop the architecture for models, these kinds of biases, implicit though they may be, end up obviously significantly impacting the ways in which these systems perform, therefore undermining their ability to address these disparities in any meaningful or significant way. And I think that that’s important to note, that just by virtue of doing what they tend to do, what they’ve been trained to do, how they function and operate as systems, as practices, as customs, they oftentimes end up reproducing these kinds of socioeconomic disparities in ways that are even unbeknownst to them.

BB: Okay, can I not push back, but push in on something?

CW: Absolutely.

BB: I totally agree with your hypothesis of generosity that there’s nothing like the Machiavellian, “Ha, ha, ha, we’re going to build this and we’re going to really exploit these folks.” But I also wonder… Weirdly, last week I was in Chicago interviewing Kara Swisher for her new book on technology, and she had an interesting hypothesis. In the interview, she said, “What is the opposite of diversity, equity, and inclusion? Homogeneity, inequality, and exclusion.”

CW: Exclusion. Yeah.

BB: Right. And so she said, one of the things that she has witnessed as being a tech reporter on the ground for 20 years in early days, is that the folks that are running Meta and Amazon and some of these companies have not had the lived experience of bias or poverty or pain, and therefore, maybe they didn’t build something evil to start with, but certainly the lack of exposure to different lived experiences, maybe it’s not intentional, but I don’t think the impact is any less hurtful. Does that make sense?

CW: I think you’re right. And I think that was a point that I was trying to make, that even though it isn’t necessarily intentional or by design, it doesn’t reduce the nature of the impact and the significance. So take, for example, just about a year, year and a half ago, I was having a conversation with a CEO from one of the big AI hiring companies. And the company had come to realize something about how they had built their system and how employers were finding potential employees through these big platforms that are now doing a lot of the hiring for companies and for different organizations. What they’ve come to realize are the ways in which racial bias is baked into the hiring process. But they had no way internally, in terms of their system, to really understand empirically how this was playing out, because they never thought to ask: when people upload their resume to this company, they weren’t required to declare, for example, their racial or ethnic identity.

CW: So then it becomes hard to know, who’s getting employed, who’s not getting employed? Who’s getting interviewed, who’s not getting interviewed? If we want to do some type of cross-tabulation, how are men and women comparing vis-à-vis these different dimensions? How are Blacks, Whites, Asians, Latinx…

BB: Wow.

CW: Comparing vis-à-vis these different dimensions that we want to understand? And the conversation that we were having was trying to help them essentially develop ways to maybe infer race via resumes that were being submitted. And they weren’t really comfortable with that. And what they recognized is that at some point they were going to have to change their model, and if not require, at least ask or invite people to declare a racial or ethnic identity for purposes of being able to understand and to run tests underneath their engine to see how these things distribute along racial and ethnic lines. Again, who’s getting interviewed? Who’s getting hired? Who’s getting denied? And what he said to me, Brené, which is really powerful, he wasn’t a part of the founding team, but he’s the current CEO.

CW: And what he said to me is that those issues that we were discussing, some of which I’ve just shared with you, never came up when the original founders built the platform. They never even thought to ask the question, “How does racial bias, how does gender bias impact or influence the hiring process?” There are decades of literature, research literature, that have documented the ways in which hiring social networks, the ways in which customs and norms within organizations, affinity bias, likeness bias, similarity bias, there’s tons of literature that has established these as facts of life in the ways in which people get hired and how those processes develop. And so my point is, I don’t think that they…

BB: Wow.

CW: Necessarily built a system that they wanted to discriminate against or be biased against women or against people of color, but because it never even occurred to them to ask the question. And kind of to your point, vis-à-vis lived experience, vis-à-vis their own position in the world, it never occurred to them that these are issues that are likely going to diminish the quality and performance of this company, this model that we’re building. Therefore, how do we get ahead of this before it becomes a problem later on, which is what’s happened now? And so the point is…

BB: Wow.

CW: Not even understanding or having the awareness to ask these questions can pose significant problems, create significant deficits or significant challenges to developing systems that are high-performing, systems that mitigate these kinds of historical and legacy biases. And that’s what I was getting at when I said I don’t necessarily think that it’s always, if ever, intentional, but it doesn’t mean that it’s not just as problematic, just as impactful, just as profound as if it were someone sitting in a secret room, saying, “Ha-ha, how can we discriminate against this or that group?”

BB: It’s so interesting because that example that you shared about the hiring, first of all, this is how newly read in I am on this stuff. I didn’t realize that all those big hiring online companies were AI, machine learning driven, and of course, it makes sense. But this is such a working example, in my mind, of what you talked about in the beginning about fairness. So without the right people at the table, the assumption is, “Hey, no, I don’t want to ask about race or gender, because we’re going to be a super fair company.” And then not asking about race and gender means that you have no evaluation data to see if you’re doing the right thing.

CW: Exactly. Yeah. And I think what the company has realized is that clearly these kinds of historical biases are impacting who gets hired, who gets interviewed, suggesting to me that for some populations, their likelihood of success, that is to say, being recognized, being identified as a viable candidate, is significantly diminished because of the ways in which algorithmic procedures are filtering out certain resumes, privileging others. And they’re now beginning to understand, “Hey, wait a minute… ” And I’m sure they’re getting internal feedback maybe from African-Americans who upload their resumes or women who upload their resumes, and so they’re getting both formal and informal feedback or signals suggesting that something’s happening here. And if we want to understand what’s happening here, we don’t even have a mechanism or a way of studying this in a formal way because we never even thought to collect that kind of data.
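A toy sketch of the kind of audit that becomes possible once applicants can self-report demographics: cross-tabulate outcomes by group and compare selection rates. The applicant table and group labels below are entirely hypothetical, and the four-fifths ratio is used only as a rough screening heuristic, not a legal conclusion.

```python
# Sketch (made-up data): with self-reported group labels collected, the most
# basic audit is a cross-tab of outcomes by group plus a selection-rate ratio.
import pandas as pd

applications = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "interviewed": [ 1,   1,   0,   1,   0,   1,   0,   0,   1,   0 ],
})

rates = applications.groupby("group")["interviewed"].mean()
print(rates)  # interview rate per group
print("Selection-rate ratio:", round(rates.min() / rates.max(), 2))
# A ratio well below 0.8 is a conventional flag that the pipeline deserves a
# closer look; it is a screen for disparity, not proof of discrimination.
```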

[music]

BB: I cannot think, to be honest with you, about a more important conversation we need to be having right now. It’s just the idea, in my mind, of scaling what’s wrong and hurtful is so scary. And I think I see it happening in the organizations I’m in, because one of the things that you talked about in the TEDxMIT talk that was so profound is about these two Black men who were falsely arrested, wrong people. AI was used to identify them. And the answer from law enforcement was, “The computer got it wrong.” And I’m actually seeing that right now in organizations that I’m in, like, “Wow, the computer really screwed this up,” or “Wow, the algorithm really didn’t do what we wanted it to do,” as if it’s an entity that walks into work with a briefcase and a satchel. And so I want to talk about your belief that instead of calling it “artificial intelligence,” we should call it what?

CW: Yeah, I suspect that in five, certainly 10, years, we may no longer use the term “artificial intelligence,” for all of the baggage that it’s likely to continue developing. And instead, and it may not be this precisely, we’ll use something more along the lines of augmented intelligence. In other words, where we need to get to really quickly in society is understanding how we design and deploy these systems in ways that augment, enhance, expand human intelligence and capacity rather than substitute, replace, or render obsolete human intelligence and capacity. We are squarely in the battle right now in terms of what AI will mean for society, and I think more importantly, who will help drive and contribute to that conversation, however it unfolds and materializes. And what we’re arguing is that we need to bring more voices, more diverse perspectives, more diverse expertise into this conversation to help make sure that we move along a path that’s going to lead us towards augmentation rather than automation or artificial intelligence.

CW: What was striking about that example that I gave is we’ve come to understand now that facial recognition systems, as they were originally being developed, were just simply flawed and faulty. The rates of error in identifying people with darker skin color, for instance, or women compared to men, were just consistently higher. And a lot of that had to do with the ways in which these models are trained, the training datasets used to develop these models, in terms of who they perform well for versus who they don’t perform very well for. And one of the African-American men who was falsely accused of a crime via a facial recognition system, when they brought him in and they showed him the photo, he looked at the photo and he looked at them and he said, “This obviously isn’t me, so why am I even here?” And the response was, “Oh, well, I guess the computer got it wrong.” And my response is that the computer didn’t get it wrong, necessarily. The humans who enforced that decision or output that the computer made got it wrong. And so what we’re really trying to push back against is what’s oftentimes referred to in the research literature as automation bias.

CW: We could have a whole conversation just about the many different ways in which bias runs through and is distributed across the entire development cycle of artificial intelligence, even from conceptualization, how a problem is defined or conceptualized and how bias gets baked into that. We hear a lot about the data and how the training datasets lead to bias, but there are other elements and manifestations of bias across the development cycle, including how, where, and under what contexts these systems get deployed. And in this case, automation bias is a reference to humans increasingly surrendering more and more decision-making authority to machines. So even when human expertise, human judgment, human experience, just looking at this photo when you showed up at this man’s house, could have said, “Hey, there’s a mismatch here. This isn’t the person that’s in this photo,” they instead allowed the machine, they allowed the system, to determine their action. And that’s what we’ve really got to guard against: deferring complete authority to these systems in ways that undermine our capacity to introduce human expertise, human compassion, just human common sense. And so those are some of the challenges that we face as we begin to deliver and give more… Build or deliver more and more faith in these systems, even when they may run counter to what human expertise and experience might suggest.

BB: Dang. This is so, boom, mind-blowing for me. I saw this interview with Jeff Bezos a couple of weeks ago, and he said, “What people don’t understand is that if the data say one thing and my gut says another, I’m going with my gut.” We’re giving over too much. Do you think we’re relinquishing not just power, but also responsibility? Like, ultimately, I don’t give a shit what the computer says about this guy. I hold that officer responsible.

CW: Absolutely. And I think you’ve used the word a few times in the conversation today, “scary.” And it’s interesting, Brené, when we talk to different stakeholders across this space. One of the words that often comes up is scary.

BB: Really?

CW: Yeah, and I’ve been thinking more and more about, why is that? So when we’re talking to healthcare professionals and talking to them about their concerns or their aspirations for AI, if it’s clinicians, if it’s social workers, if it’s community health workers, if it’s nurses, or if it’s just others outside of the healthcare domain, for example, students even, this word “scary” comes up over and over and over again. And as we’re seeing this in the data that we’re collecting, I’ve been trying to understand what’s going on there. What are people really saying? The thing that we oftentimes hear when people talk about ChatGPT, when it was unlocked and unleashed into the world, people’s immediate response was scary. People in education, their immediate response was scary. Therefore, we shut it down and denied students access to it for fear that they’ll cheat, for fear that it’ll lead to academic dishonesty. It’ll diminish their ability or even interest in writing.

CW: And so I think the word “scary” is in some ways a reference to uncertainty, a reference to confusion, a reference to not feeling comfortable in terms of understanding just what’s at stake as we see these systems being pushed onto society in a much more accelerated fashion, in higher and higher stakes, without the proper guardrails, without the proper systems, regulations, laws, and policies that enable us to manage these systems in a way that’s appropriate, in a way that’s proper. I really do believe that in the next five to 10 years or so, we’ll look back… Because I like to think we’re kind of in the Stone Age of all of this. With generative AI, we’re literally in the early days. And at some point, we’ll look back and say, “What in the hell were we thinking when we started unleashing these systems in high-stakes environments, education, healthcare, criminal justice, without any proper understanding, without any guardrails, without any policies, without any ethical principles to guide how we go about incorporating these systems into high-stakes environments? What were we thinking? How did we even come to make those kinds of decisions, recognizing all of the ensuing problems associated with moving at this pace, this velocity, this scale, in a way that we simply aren’t currently prepared, or as you like to say, wired to even accommodate?”

BB: I don’t like that. [laughter] I agree. One thing I’ll share with you from my research… Well, for me, scary… You just nailed the scary, for me. I’m fearful because the distribution of power is so off right now that I don’t want this in the wrong hands without complete understanding. Maybe in 10 years, we’ll look back and say… I think we will look back and say exactly what you said. I just worry about the casualties over the next 10 years as we try to figure it out, because tech is moving faster than policy and protections are moving. And Craig, I think one of the things I’ve seen from my work is that the biggest shame trigger, the biggest professional shame trigger that exists for people, is the fear of being irrelevant. And you are one of a handful of people, and I’ve talked to… I don’t know what the… Shit ton is the word I’d use. I’ve talked to a lot of people, and you are one of the few who talk about domain expertise. We need the people with the lived experiences, we need the people on the ground, we need the letters, we need the unstructured data. Unfortunately, many of the people I’ve talked to, who I would not have on the podcast because I would not personally want to scale their ideas, would say, “We don’t have to mess around with all that messy stuff anymore.” That, to me, is the scary part.

CW: Yeah, because in some ways, it’s… And I understand the need to make human complexity less complex and more within our grasp…

BB: Yes.

CW: But it requires getting our hands dirty a little bit with these different kinds of data, building teams composed of multidisciplinary expertise, expanding who’s at the table, who’s in the room when conversations are happening, when decisions are being made about what’s being designed and what’s being deployed and what the downstream impact of that might be. And hopefully, we are building a culture, building a conversation, building and documenting research and literature via experimentation and a variety of other techniques, to make the case for why this is absolutely necessary as we push further and further ahead into this world. Because let’s face it, right?

CW: This is the world that’s going to happen, a world increasingly shaped by AI-based solutions, machine learning-based solutions, algorithmic design solutions. So the question isn’t if this will happen; the question is how will this happen, for whom will this happen, and under what circumstances? And that’s where we’ve really got to do the work in terms of bringing in different kinds of voices, different kinds of expertise, to bring to the table, to bring to the conversation, issues, ideas, concepts, lived experiences that otherwise get utterly erased from the process.

BB: This is why I’m really grateful for your work. Now, I have… [chuckle] We have answered five out of my 21 questions, so you just have to promise to come back on the podcast again sometime. Would you be open to that?

CW: Yeah, absolutely, look forward to it. Enjoyed the conversation. And you’re right, there’s a lot to think about here.

BB: Yeah, because I think you have some really interesting takes on the upside of social media. I want to talk to you about that next, but for time, we’re going to stop here. But this does not give you a pass from the rapid fire. Are you ready, Craig Watkins?

CW: Let’s see.

[laughter]

BB: You’re not going to have time to plug it into anything, okay? Fill in the blank for me: Vulnerability, not systemic vulnerability but relational, personal vulnerability, is what, in your mind?

CW: Not communicating one’s needs.

BB: Okay. You, Craig, are called to be really brave, you’ve got to do something that’s really hard and really brave, and you can feel the fear in your body. What’s the very first thing you do?

CW: Me probably? In that moment? Pray.

BB: Yes.

CW: And ask for… To summon up strength that I probably can’t imagine being able to summon up, but somehow being able to tap into some well of strength and capacity that may be beyond my ability to even conceive.

BB: I love that. Some strength that surpasses all human understanding. That’s the prayer part. Right?

CW: Absolutely. [chuckle]

BB: Love it. Okay, last TV show you binged and loved.

CW: Oh, so this is a challenge for me, because as much… There’s so much good TV out there, and I struggle to find time to watch it, but the last thing I was able to watch recently was Amazon’s Mr. & Mrs. Smith.

BB: Oh, yeah.

CW: Yeah. And I watched it in part because Glover is just… He’s an intriguing creative to me. I first discovered him through his TV show, Atlanta, which I thought was really brilliant in its first couple of seasons in terms of how it probed the lived experience of young African-Americans in a way that I had never really seen in entertainment media before. And so he taps into a kind of lived experience that is… The New York Times, for instance, called Atlanta the Blackest show ever made. But then you look at a show like Mr. & Mrs. Smith, and he shows the ability to expand in ways that are just, I think, extraordinary, and yet still bring that kind of Black ethos, that Black experience, in ways that I think are still powerful, unique, and creative. And so I was just curious to see what he would do in a show that was completely different from what I’d seen him in with Atlanta.

BB: Yeah, he may be a polymath. His musical abilities, his writing abilities, he’s incredible. Favorite film of all time.

CW: Oh, man. That’s a really interesting question. Favorite film of all time. I should have one. I don’t know if I really have one, but if I had to… I’m trying to think off the top of my mind.

BB: It can be just one that you really love, that if you come across it, you’re going to watch it.

CW: I would say… What was the movie… I should know this, the movie with Matt Damon, where he was at MIT?

BB: Good Will Hunting.

CW: Good Will Hunting. Yeah. I’ve written about movies in the past, I’ve written about Black cinema in the past, so I’ve certainly studied movies in a prior iteration or version of myself. But for whatever reason, Good Will Hunting just struck a chord with me, because it’s about the underdog and how there’s so much potential in the underdog that’s rarely, if ever, tapped, and what it would mean if we could unleash that power into the world. And yeah, there’s probably hidden talent in this country and elsewhere that never, ever gets discovered or realized.

BB: Yeah, and it’s just such a story about class and trauma and violence, and when he solves that proof on the board in the hall, it’s just like… Yeah, it’s… Oh, you picked… That’s one of my top five. Okay, a concert that you’ll never forget. Are you a music person?

CW: I am. So I’m a big hip-hop guy, and years ago, I went to a concert by the group OutKast, from Atlanta. They put on a remarkable performance; the way they presented hip-hop as a live, in-the-moment experience was really incredible for me. And yeah, I think back on that from time to time. I had such a great experience there, and it felt like a really incredible opportunity for me.

BB: I love it. Favorite meal.

CW: Favorite meal. So my wife, you may not know, you actually know her, but a year ago, she introduced me to homemade Brazilian fish stew. And when I ate it, I felt like I was in heaven. I’ve tried to replicate it, but I haven’t been able to. The mixture, the tomato base with coconut milk, fresh fish, red peppers, onions, paprika, cumin, it was just a delight. I really enjoyed that meal, and one day I’ll find a way to replicate it in my own attempts.

BB: I think you’re going to end up using AI to try to do that, but I don’t think it’s ever… [laughter] I don’t think it’s going to have what your wife puts in it. Okay, two last questions, what’s a snapshot of an ordinary moment in your life that gives you real joy?

CW: Something that I started doing during the pandemic, and have continued doing and find great joy in, is just going on walks by myself and listening to either great music or really interesting podcasts, and just getting away from the grind of meetings, the grind of work, the grind of writing, etcetera, and enjoying that opportunity to walk, to decompress, to literally enjoy the environment around me. I happen to live in a neighborhood that has great sunsets, and I take pictures of those sunsets, and it’s just a nice reminder of what’s important in the world and in life and how important it is to take care of ourselves.

BB: Beautiful. Last one, tell me one thing that you’re grateful for right now.

CW: I would say my mother. My mother passed away in 2013, on the same day as the Boston Marathon bombing. And my mother is the reason why I’m where I am today and why I do what I do today. I mentioned earlier, I was born and raised in Dallas. I grew up in South Dallas, an all-Black neighborhood, went to all-Black public schools, but always understood, for some reason, that the world had something to offer me, and I had something to offer the world. And a lot of that, I think, is attributable to my mother and the values she instilled in me, the confidence she instilled in me. And so there’s rarely a day that goes by that at some point I don’t say thank you, to myself and to heaven, for her.

BB: We are very grateful that your mom told you to think big, because you are thinking very big, and I think we’ll be better off for it. So, Craig, thank you so much for being on the podcast. It was just such an important conversation, and I truly am really grateful for your remarkable work. And I know we’ll talk again, because I have 16 more questions that we didn’t get to.

[laughter]

CW: All right. I look forward to that follow-up conversation someday, thank you so much.

BB: Thank you so much.

[music]

BB: Okay, I don’t want to close this thing with the word “scary” again, but, holy shit. Let me tell you what helps me sleep better at night: knowing that Dr. Watkins is on the case, along with some other really talented computational people who are deeply, deeply tethered to ethics and social justice. It’s the whole series. What’s the potential? The potential is so great. And what are the costs? High, if we don’t get ahead of it with intentional policies and with the right people, like he mentioned, at the table. You can learn more about this episode, along with all the show notes and information about where you can find his work, on brenebrown.com.

BB: We’ll have comments open on the page. I’d love to know what you think. We’re going to link to all of Craig’s work. I think I mentioned the MIT TED Talk. You’ve got to watch that TED Talk. This would be the most amazing lunch and learn at work: just take 20 minutes to watch the TED Talk and have a 40-minute discussion, that’s one hour, about what you learned and what you thought was important. It’s essential. We always have transcripts up within three to five days of the episode going live. The more I learn about social media, the more shitshow-ey I think it is, so we’re not doing comments on Instagram and Facebook right now. We’re opening comments on the website, and we’re trying to build these really neat communication and community-building tools into it. It’s still in process, so we’d love for you to leave comments and ask questions there.

BB: We’re also doing a lot of community building with newsletters. We’ve got an occasional newsletter, but we’re also doing shorter weekly newsletters that have key takeaways and weekly challenges to think about work and life and love, so you can sign up for newsletters on the episode page as well. I’m really glad you’re here. These can be heady conversations, but I think they’re also heart conversations. We’ve got to figure out if we’re building the world we want to live in, and for me, that’s an awkward, brave, and kind world, so…

[music]

BB: I’ll see you next week.

[music]

BB: Unlocking Us is produced by Brené Brown Education and Research Group. The music is by Carrie Rodriguez and Gina Chavez. Get new episodes as soon as they’re published by following Unlocking Us on your favorite podcast app. We are part of the Vox Media podcast network. Discover more award-winning shows at podcast.voxmedia.com.

© 2024 Brené Brown Education and Research Group, LLC. All rights reserved.

Brown, B. (Host). (2024, April 3). Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans. [Audio podcast episode]. In Unlocking Us with Brené Brown. Vox Media Podcast Network. https://brenebrown.com/podcast/why-ais-potential-to-combat-or-scale-systemic-injustice-still-comes-down-to-humans/
