Episode 12: "Interview with Dr. Desmond Patton, University of Pennsylvania"
Episode description:
In this episode I have a discussion with Dr. Desmond Patton, the Brian and Randi Schwartz University Professor at the University of Pennsylvania. We talk about his research into the relationship between social media and gang violence and the cultural nuances in language amongst predominantly black and Hispanic youth, as well as current topics in AI and technology, such as bias, intellectual property, and emotion-simulating chatbots.
“Expressions of loss predict aggressive comments on Twitter among gang-involved youth in Chicago”
“Contextual Analysis of Social Media: The Promise and Challenge of Eliciting Context in Social Media Posts with Natural Language Processing”
Safiya U. Noble, Ph.D.:
https://safiyaunoble.com
“Yuval Noah Harari argues that AI has hacked the operating system of human civilisation”
The AI Dilemma:
Episode script:
SCOT: Hello everyone, and welcome back to AI QuickBits: Snackable Artificial Intelligence Content for Everyone. My name is Scot Pansing, and today I'm going to be speaking with Dr. Desmond Patton, the Brian and Randi Schwartz University Professor at the University of Pennsylvania. Dr. Patton has held multiple safety advisory and board positions, including on Spotify's Safety Advisory Council and Twitter's Academic Research Advisory Board. His career includes groundbreaking research into the relationship between social media and gang violence, specifically how communities constructed online can influence often harmful behavior offline. Dr. Patton has also researched the cultural nuances in language amongst predominantly black and Hispanic youth and created a platform called the Contextual Analysis of Social Media, or CASM.
Desmond, thank you so much for speaking with me today. I really appreciate your time.
DESMOND: Thank you so much for having me.
SCOT: Before we get into some current topics, I'd love to talk a little bit about your background and some of your work. I noticed one of your papers is highly cited, including by the Supreme Court of the United States: “Expressions of loss predict aggressive comments on Twitter among gang-involved youth in Chicago” – trauma detection to preempt violence on social media. I'd love for you to talk a little bit about grief as a pathway to aggressive communication on social platforms.
DESMOND: I think it's important to start with kind of how I got into this space and how grief became an important feature or characteristic to focus on in the transmission of violence on social media platforms. So I define myself as a social worker first, a gun violence researcher and a public interest technologist. And so what that means for me is that I lead with social work values, meaning that we treat everyone with respect, that we honor the dignity and worth of every human being.
But what it also means is, as a social scientist, I use cutting-edge methodologies to understand phenomena, in particular the phenomenon of gun violence. And what I've been studying over the last decade is social media as a neighborhood, as a new environment in which gun violence may occur. And so I use qualitative methods to interview community members and outreach workers and folks that are impacted by violence or may have been victims of violence or perpetrators of violence. Then I also partner with data scientists to identify psychosocial codes like aggression, grief and loss, substance use, and now joy and well-being on social media platforms. In particular, I have focused most of my time on Twitter.
So this idea of grief came out of years of research looking at aggression on Twitter. And we have been following young people who self identify as gang involved on Twitter for many years, and we started off with this idea that we could predict violence. I was a junior professor, and I thought, oh, there's all these new technologies out there that might be able to help us identify patterns of communication that move conversation from social media to offline violence. In the process of doing that, I realized that was wrong, that was not ethical, and it was not the best use of these great tools.
And so in kind of doubling down on my qualitative sensibilities and my social sensibilities, I began to look at users as individual human beings with complicated lives. And when we started to dig deeper, we began to understand that oftentimes young people were posting about life, in particular trauma and grief, before they would post about more aggressive or threatening behaviors online. So we saw this kind of anecdotally, observationally, and we decided to test it. And when we tested it, we saw that this was not random, that this pattern of grief before aggression was quite common among young people in Chicago during our experimental time.
It became clear that we needed to think very deeply about the role that grief plays in shifting and morphing language online that may become more aggressive. And so that became the beginning of a series of studies to not only look at grief on social media and how we can use machine learning tools to identify these signals of grief, but to also spend time with people.
We just concluded a study in Harlem where we were hanging out with older adults in Harlem, black Harlemites, that were impacted by COVID and were looking for a place to talk about their grief and the grief that was happening in black churches. We also spent time with young people who were living in some of the low income housing in Harlem, who spent a lot of time with us, talking about how their grief was also related to the gun violence they were experiencing in the neighborhood.
That's kind of a long-winded way of saying that I think grief is an important piece of this puzzle of understanding the gun violence epidemic in the US.
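(To make the machine-learning step Dr. Patton describes a little more concrete: below is a minimal, purely illustrative Python sketch of classifying psychosocial codes such as grief and aggression in human-annotated posts. This is not Dr. Patton's actual pipeline or data; the toy posts, the labels, and the choice of model are all assumptions for illustration.)

```python
# Minimal sketch of classifying psychosocial codes (e.g., grief, aggression)
# in human-annotated posts. Hypothetical toy data; not Dr. Patton's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical posts with labels assigned by human annotators.
posts = [
    "miss you every day bro cant believe you gone",  # grief
    "rip lil bro gone too soon",                     # grief
    "keep talking and see what happens",             # aggression
    "you know where to find me, try me",             # aggression
]
labels = ["grief", "grief", "aggression", "aggression"]

# A simple baseline: bag-of-words features plus a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Predict the code for a new, unseen post.
print(model.predict(["cant stop thinking about you rip"]))  # likely "grief"
```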
SCOT: Yeah, I mean, anger as a part of grief... I would think that even before Twitter and whatnot, back when bulletin boards were around and the internet was first getting going, Web 1.0, when you remove that human element and it's just a keyboard and maybe a bulletin board or whatever... We used to call them keyboard jockeys. It seemed like people were a lot more accepting of just sort of spilling out their anger onto the internet than they would be seeing people face to face. So this seems very relevant. And it would make sense that anger as part of grief could manifest itself that way on Twitter and in these ways.
And so did this sort of lead to your language analysis bias research as well? Because I think I read that you were noticing, on social media and also with AI tools, that the language was geared more towards, you know, the typical white male Silicon Valley type. Or was that a completely separate track for you?
DESMOND: I would say they're all intertwined. I would say that we started off by looking at the language of young black youth in Chicago, and we needed to engage in a set of reflexive practices that would unearth the biases that we bring toward the interpretation of Twitter data. And this is really important and impactful because even I, as a black researcher, held lots of white supremacist and white toxic ideas about young black people and how they showed up and conveyed themselves on social media platforms. And so it became an important foundational principle and practice for us to reckon with that bias, identify that bias, and engage in a set of processes that help us to get more context.
And so what came from that early reckoning? A couple of things. Number one, it was important to be engaging the scholarship of black scholars in communication that had been writing about bias in technology for years. And then it became clear for us that, as we think about this application for the analysis of social media data, we needed a process for uncovering bias and centering marginalized voices.
And so we created this process called CASM, the Contextual Analysis of Social Media approach, which forces us to identify our bias as a baseline and to extract context from external sources like Hip Hop Wiki when we're kind of wrestling with language, African American language, or social media things that we're unfamiliar with. And then to constantly be looking for additional context to challenge and problematize how we're interpreting social media posts. And that would then inform how we would label data.
So I would say, as a lab, we focus tremendously, and spend a large amount of time, on the annotation and labeling of data, to reckon with that bias and to get beyond binary classifications so that we are actually seeing human beings in language. So that has shepherded our studies moving forward to really be wrestling with context in this manner.
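(CASM, as Dr. Patton describes it, is a human annotation process rather than a piece of software, but a hypothetical sketch can show the kind of record such a workflow might produce: each post carries multiple codes, the external context sources consulted, and the annotator's notes on where an initial biased reading was challenged. Everything below is assumed for illustration; none of it is an actual CASM schema.)

```python
# Hypothetical record for a CASM-style, context-aware annotation workflow.
# Illustrative only; field names are assumptions, not an actual CASM schema.
from dataclasses import dataclass, field

@dataclass
class AnnotatedPost:
    text: str
    codes: list[str]  # multiple codes, not a single binary label
    context_sources: list[str] = field(default_factory=list)  # e.g., lyric wikis
    annotator_notes: str = ""  # where bias was identified and challenged

post = AnnotatedPost(
    text="(example post text)",
    codes=["grief", "loss"],
    context_sources=["hip-hop lyric wiki entry", "community expert review"],
    annotator_notes="First read as 'aggression'; revised after local context.",
)
print(post.codes)  # ['grief', 'loss']
```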
SCOT: I mean, bias is obviously a huge topic in artificial intelligence. There are so many different facets of bias. I'd like to talk a little bit about some of these image generation tools. I, like many people, have been fascinated with Midjourney for months. And something that I have been uncomfortable with is that if I want any diversity in the images that come back, I have to specifically ask for black people or Asian people or Hispanic people. I mean, if you just say people, it gives you white people. I think it's sort of interesting that the systemic bias is just right there on display. You can prove it, it's there.
And I suppose that when photography was invented, all these photos were taken even before digitization, and then, I would say, the vast majority of photographs that were ever taken have been digitized. And then, of course, there's all the digital photography. It seems like, basically, people pointed the cameras most of the time at white people. So the bias that some people might have disputed in the past, how can you dispute it now? It's clear. You ask for people and white people come back; there's no diversity.
And then another thing I saw recently, which is not so much about Midjourney or generative AI tools, but about image recognition of a sort: I saw someone take a video of the automatic faucet in, like, a public restroom. They were a person of color, and they're putting their hand underneath the faucet, and no water is coming out. But as soon as they take a white paper towel and put it on their hand and put it underneath, the water comes out. So the bias shows up even in a very simple way like that. I'm curious about your thoughts on the bias inherent in these models, especially these generative AI imagery tools.
DESMOND: So there are so many issues, and the examples that you posed really bring up for me that we have not been listening to Safiya Noble and the call-outs that she has been making for years about the baked-in bias in algorithmic systems, and the danger in blaming the technology for the bias as opposed to having a reckoning with issues of inclusion in technology production and development and conceptualization.
And I think there's absolutely a way out of this. I think that there is a way for us to end these challenges, or at least to have a massive reduction, if we get serious about inclusion. And inclusion has to start in the K-12 schools. We have to unearth the talent in young people from diverse backgrounds and experiences early.
And it doesn't have to be young people who are good at math; it can be a host of people who can bring their lived experience to the product table, to the development of technologies, to the conversations around ethical AI. And yet we continue to wrestle in this space.
I think we need more educational programs, spanning from university-offered programs to YouTube-offered programs to programs embedded in tech companies, that really seek to think about how we best leverage this type of inclusion as a metric for success in radically changing how we think about who gets to sit at these AI tables.
And so I think that there's that end of it. And then we have the overdoing, right? So then there are the examples of, yes, we need to diversify our facial recognition systems, and so we give cameras to unhoused people and have them take pictures, and that's our way of being more inclusive. That's also a bad and faulty form of representation.
And so I think part of the problem is that we're not situating ourselves in real life, we're not grappling with these complex problems alongside the folks who are affected day in and day out. And there's a way around that; we can do better. And that's one of the things that I'm trying to do in social work, because in schools of social work we have lots of people that have diverse experiences.
And one of the things I'm trying to do is to create a cadre of social workers that can work in tech, that can be at the table with engineers and product managers and trust and safety advocates, to help anticipate and problematize and create plans and processes to deal with some of these issues. And so yes, it's sad that this is still something that we're reckoning with, but I think a part of the challenge is really trying to articulate how race and racism baked into AI affect the bottom line. And until we have a reckoning with that... I really lean into what Mellody Hobson has been telling us about why talking about race is important for business.
That has not quite melded into the foundational principles of tech companies. But I think there are lots of folks, lots of folks of color, lots of researchers, lots of community members, that are pushing for new ways to do this.
SCOT: I love that you brought it back to K-12 education as well. And the inclusion, I think, is huge; I think you're right. We need people who are able to raise their hands really far upstream, when the products are being developed, to say, have you thought about this? One recent example, and this is a little bit of a tangent, but there was this Levi's thing about a month ago where they came out and said, “Hey, this is great! We're going to start to use more diverse people in our imagery, but we're going to use generative AI to make the images.” So they're not going to hire photographers or models, let alone diverse models. They're just going to generate the images.
And I just couldn't believe it... I said, gosh, I'm sure there are a lot of smart people at Levi's. How is it that someone didn't raise their hand, but it happened that way?
I've actually been asking myself a lot of hard questions about the generative AI image tools like Midjourney. I'd like to read you a very brief couple of sentences from something that came out recently from the Center for Artistic Inquiry and Reporting, or CAIR. They have an open letter called “Restrict AI Illustration from Publishing.”
And in the letter it says, “AI art generators are trained on enormous data sets containing millions upon millions of copyrighted images harvested without their creators' knowledge, let alone compensation or consent. This is effectively the greatest art heist in history, perpetuated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It's daylight robbery.”
And now I'm really asking myself these things. I don't even know if I'm going to continue using it. I'm not sure. But I definitely have big feelings now around using Midjourney... and I'm based in LA... there's a writers' strike going on. There's a real backlash. Not from the entire creative community, obviously, but it's not a small sliver of the creative community that is sounding alarms. I'm curious about your perspective on that.
DESMOND: So this is an area that I focus on a lot, and it makes tremendous sense that a part of the challenge is we have this techno-chauvinist approach, where we continuously rely on AI to be the answer. And these tools are tools. And I think that we are missing the boat: we can use them as assistants, but they cannot, and never should, replace a real human experience.
And in particular, we have amazing black and brown talent that would be more than willing to engage in ethical use of their creative energies to amplify voices in this space. And so to negate those experiences, to negate that creativity, to negate that level of nuance, is part of the baked-in bias.
So we need to see that example that you listed as the problem. AI is not the answer to solve that problem. And so I think that we need to continue to convene spaces where we identify these really thorny examples because oftentimes people are not recognizing and identifying these types of applications as bias.
SCOT: Yeah, I agree. And in a recent interview I did with Theo Priestley on this podcast... I think I saw him say today something like, sometimes these images that come back are like junk food. Like you said, if you take away the majority of the human element, then it's not really going to get you there.
Again, the podcast is called AI QuickBits, so I'm going to move on to the final area I want to ask you about, which is AI and ethics and the future. I get that bias and misinformation and several other topics are very important. I don't deny that, and I think about them a lot. But another one that I've been thinking about is the emotional simulation in a lot of these tools. There was a wonderful article in The Economist, actually, by the Sapiens author Yuval Noah Harari, where he was basically saying that now that computers can engage in conversational language with humans, they've basically hacked the operating system of humans.
And there's another video by Tristan Harris and Aza Raskin, who created The Social Dilemma for Netflix. They called it “The AI Dilemma.” And basically what I'm getting at is that many people feel like they're interacting with something that's alive, because they've anthropomorphized the machine. And really, it's a Wild West as far as policy and regulation.
So for these companies, many of them, it's not surprising that they're going to push the limits: “oh well, then, if we're having conversations with people, let's simulate emotion. Let's persuade people perhaps to buy things or to do other things.” And also in this video, “The AI Dilemma,” they talk about, instead of “AlphaGo,” something like “Alpha Persuasion.”
Meaning that, for example, if I have a secret topic and you have a secret topic, and we're each supposed to get the other to say positive things about our topic, and whoever gets the other person to say the most positive things wins... I would think that a computer, now, with all these large language models, could get extremely effective at that game.
So what do you think about the ethical minefield that awaits us with respect to emotional simulation and persuasion in these large language models and chatbots?
DESMOND: Transparency, transparency, transparency. I don't have a problem with human like chatbots that are assistive tools to our healthcare system, in particular the fields of psychiatry and social work and psychology because these systems are challenged. There's lots of unequal treatment within these systems, in particular for marginalized groups that oftentimes don't have equal access to therapeutics.
And so I think that to have tools that can create new pathways for marginalized communities to get resources to manage stress, to identify coping mechanisms, I do think those things are really helpful if used in moderation and when we are clear and transparent about who you're talking to.
And I think we should never become so reliant that we trust that a chatbot can wrestle with the types of complexities particularly embedded within human language. I've spent a lot of time analyzing African American language, and one of the kinds of challenges that comes up is the distinction between what might be considered aggressive and what might be considered traumatic or a grief expression. And oftentimes that is layered in the words that are used, how events within a neighborhood shape language, and how words are used within a particular community.
There's also the hyper-local context of language, where institutions and streets take on different meanings. And these are things that I believe AI language systems still struggle with. When I take the examples of tweets from black communities in Chicago and embed them in ChatGPT, I get an error message, because it can't wrestle with kind of the merger of African American language and social media speaking.
So if I'm a young person trying to get help from a human-like chatbot, and I'm bringing my full self, my authentic self, and I'm really in need of help, and the chatbot keeps reacting to me as if I am speaking white privileged English, then that could be as harmful as not having the chatbot at all. And so I think that we have to, A, be transparent; B, think about these tools in moderation; and C, make sure that we are thinking about the variety of use cases where things are not always readily clear or apparent.
SCOT: Yeah, I think where you're going with the language analysis is huge, because I do agree that these are tools. And when I asked the question, it may have seemed a little leading, like I was completely against human-seeming chatbots.
But I do believe you're right that these are tools, and that they can potentially identify needed support; maybe not necessarily provide the actual emotional support that's needed, but identify needs.
Because, like you said, if someone is bringing their full self to a chatbot and it just doesn't understand, because it's tuned to one narrow lane of human communication, the white male Silicon Valley lane or whatever, when we are such a diverse, global community, especially now that we're all connected, then how is it going to identify that needed support in all the communities?
Well, before we wrap up here, I do want to give you some time to talk about your upcoming book, Life and Death on the Digital Streets, which I'd love to hear more about, and also your new center, the Penn Center for Inclusive Innovation and Technology. So if you don't mind, I'd love to hear a little bit about both of those.
DESMOND: Yeah, absolutely. I am wrapping up a book on a young black girl whose Twitter presence I have followed since her death in April 2014. Her name was Gakirah Barnes, and I have analyzed her tweets and the tweets of her network on Twitter for years. I've looked at pain and joy and sadness and aggression and gang violence and hurt and love. And this book is a reflection on what it means to be a black scholar looking at a black child in this context.
It's a look at this world of social media and gun violence and some lessons learned and some mistakes made that I hope other researchers and folks interested in this field can learn from as well. So that book, I'll be wrapping up that book this year, and hopefully it will be forthcoming next year.
And I'm also excited to talk a little bit about our new center at the University of Pennsylvania, the Penn Center for Inclusive Innovation and Technology. We really hope to be a resource for community folks in Philadelphia and beyond who have amazing ideas for innovations in their community that might tap into technology or other innovative tools, but who don't have resources.
So we want to leverage the resources of this great Ivy League institution and really support people who are looking for engineering support and need to identify a network of folks to support them or are struggling with some challenging DEI concept that is important to the development of their tool. Or folks who are looking for a more structured environment and connection and a convening space to work together.
And so that center will launch this fall, and we really want to connect with folks throughout the country that share similar interests and want to work together.
SCOT: Nice. Well, congratulations on the launch of that center.
DESMOND: Thank you.
SCOT: Good luck with that and with your upcoming book. And thank you so much for sharing with me today, Dr. Desmond Patton. It's been a pleasure to speak with you. I really appreciate your time.
DESMOND: Thank you, Scot.
SCOT: Bye.