GradLIFE Podcast

AI @ Illinois: Surveillance and Syntax with Kainen Bell and Antonio Hamilton

June 18, 2024 - Graduate College (UIUC)

This episode is part of our special GradLIFE series AI at Illinois, where we delve into the impacts of artificial intelligence technologies on our graduate students' research, teaching, and thinking.

On this episode, Bri Lafond (Writing Studies doctoral candidate and Graduate College Career Exploration Fellow) sits down with Kainen Bell (School of Information Sciences) and Antonio Hamilton (English) for a chat about how AI is impacting conversations around writing, education, and surveillance.

____

Show Notes:

Kainen Bell
Information Sciences at Illinois 
Community Data Clinic
Wikimedia Foundation

Antonio Hamilton
English at Illinois
Writers Workshop

Some Miscellany from the Show: 

Anti-Surveillance Campaigns in Brazil
- Get my face out of your sight! (Tire meu rosto da sua mira!) - National Campaign in Brazil
- No Camera in my face (Sem Câmera na minha cara) - Local Campaign in Recife, Brazil

From Kainen: To learn more about how global communities are resisting data colonialism, read the recently published book Resisting Data Colonialism – A Practical Intervention
- Created by a network of activists, scholars, and community organizers named Tierra Común
- Kainen wrote a submission titled "Resistance storytelling: Anti-surveillance campaign in Recife, Brazil" (page 63)

Institute for IP and Social Justice
Algorithms of Oppression, Safiya Noble
Gender Shades, Joy Buolamwini

GradLIFE is a production of the Graduate College at the University of Illinois Urbana-Champaign. For more information, and for anything else related to the Graduate College, visit us at grad.illinois.edu.


John Moist:

Hi, I'm John Moist, and you're listening to the GradLIFE podcast, where we take a deep dive into topics related to graduate education at the University of Illinois Urbana-Champaign. We're here with another installment of our AI @ Illinois podcast series, and I'm back with Bri Lafond. Bri, tell us a little bit about the conversation you just had.

Bri:

Thanks, John. So in this episode of AI @ Illinois, I talk with two graduate students who are specifically exploring issues of bias in AI and its impacts on both students and communities. Let me briefly introduce my guests to you. Kainen Bell is a PhD student in Information Sciences; his research uncovers algorithmic biases and follows the work of digital rights activists and organizers of anti-surveillance campaigns in Brazil. Kainen's goal is to learn how Afro-Brazilian communities collaborate to resist and prevent the abuse of surveillance technologies in their communities. He's a research assistant for the U of I's Community Data Clinic, and he's a Wikimedia Race and Knowledge Equity Fellow. Antonio Hamilton is a PhD candidate in English with a concentration in writing studies. His research centers on the impact that generative AI has on the writing process. Specifically, he's concerned with generative AI's impact on writing identity and how AI may promote more standardized English writing by deprioritizing diverse writing styles. He's also Assistant Director of the Writers Workshop, the campus writing center.

John Moist:

Let's take a listen.

Bri:

So we are starting off episode two of the AI @ Illinois podcast, and this is Bri Lafond speaking. So I'm going to ask our two guests to introduce themselves. Can we start with Kainen?

Kainen:

Hi, thanks for inviting me here. So I'm a third-year PhD student at the University of Illinois. I study information sciences, and I'm currently researching digital activism and how groups in Brazil and the US are basically protesting to resist surveillance like facial recognition. I'm from Seattle, went to the University of Washington, and currently I'm a Wikimedia Race and Knowledge Equity Fellow, funded through the Wikimedia Foundation and the Institute for IP and Social Justice. So happy to be here.

Bri:

Awesome. Thank you so much. And yeah, Antonio?

Antonio:

Thank you for having me here today. I'm Antonio Hamilton. I'm a fourth-year PhD student in the English department, in the writing studies concentration. My research deals with how AI technologies and automated technology, or generative AI, are impacting writing. Particularly, I'm interested in how it impacts writers' identity, and also how it potentially impacts language diversity in these systems. And I got my master's at Florida State University, and I got my bachelor's at the University of Alaska Fairbanks.

Bri:

Awesome. So yeah, let's jump into the conversation. Based on both of your backgrounds and the work that you're doing, your work is obviously intersecting with issues around AI, and I wanted to start specifically with this idea of the conversation around AI, because that's what we keep hearing: this buzzword that everyone's talking about. If we're thinking about public conversations about AI, or academic conversations about AI, who is being left out of those conversations, and how might that intersect with some of the work that you're doing? Let's go to Kainen first.

Kainen:

I think currently, as you're saying, a lot of the conversations about AI are held in academic spaces, at the universities, talking about the future of AI and its development. And I think with ChatGPT coming out, a lot more people that may not be in academia are experiencing AI just by testing ChatGPT's functions. However, there are a lot of challenges or negatives of AI; it could be abused by police for surveillance, for example. So I think activists and organizers and human rights groups are many times left out of the conversation, especially in academic spaces, on the consequences of AI, because we're kind of in a space where we see it like magic, and we're seeing all the positives of it, but not looking at some of the negative implications. So those human rights organizers, nonprofit organizations, and activist groups are not brought to the forefront of these conversations, and neither are the communities that can be impacted by the AI, like communities of color; if AI algorithms are used by police, for example, in facial recognition, those who are most impacted aren't at the conversations at all. And then lastly, I'll say the Global North, like the US, Europe, and even China, are the ones at the forefront of the conversations. Countries in the Global South, for example, aren't really at the forefront of the development or the conversations, and many times the technologies are basically exported to those countries and used in those contexts. So yeah, that's what I think.

Bri:

I think that's a really interesting point about this relationship between the Global North and Global South: it's not only exporting the technologies, but also Global North countries exploiting the labor of the Global South to do things like data cleaning on training sets. But yeah, Antonio, who's being left out of this conversation from your perspective?

Antonio:

So I definitely agree that academia has become the main place where these conversations are happening, and that's really not thinking of outside groups. For example, my best friend: I remember when I went to go visit her during the summer, she talked about using ChatGPT for her work, for things like writing emails and certain documents for her job. And I think a lot of the conversations haven't considered how this may impact certain workforces and certain occupations. I know, for example, a lot of conversations are happening within the law, in terms of how this may impact lawyers, but I don't think a lot of people are really focusing on how ChatGPT could potentially replace the work of creating a lot of the legal documents they have to produce. So, thinking about occupations in general: what type of implications do AI technologies have for the type of labor and the type of things that they have to produce? And in the same way, we're also not thinking about who is most vulnerable to these AI technologies. When we think about these general technologies that are coming out, we kind of assume there's a standard they're implementing, and there are a lot of people who don't meet those standards. So, when I'm thinking about writing, these technologies have a baseline that promotes writing that is standardized, in a way that may disadvantage people who don't write in those types of styles. And we're leaving out the conversations about who those other people may be in those situations. So I think, staying in an academic space, we're thinking more about the positive things it can do, and maybe a little bit of the negative at the same time. But most of it is looking at how it can help a lot of things, and less at who it is impacting. Who's vulnerable? Outside of just writing essays or papers, who else will be using these technologies, and how might that shift the type of work they may be doing?

Kainen:

You reminded me of some ideas around copyright and artists as well, because there are AI tools that can build art for you, and a lot of artists are saying, hey, my work was either copied or influenced this, and it wasn't with my consent. I was at a conference and they were talking about these intersections of IP and art, and a lot of people were also saying that with AI, you can say, write me a book like XYZ, or write me a song that sounds like this person, and their creative intellectual property isn't protected. So those are things that we're seeing also, and those voices are not being heard, because artists and musicians are already underpaid, and this is making it even harder for them. So your talk just reminded me of that.

Antonio:

Yeah, I think that's very relevant, because I think it was maybe a few years ago, or maybe last year, that there was an art award won by AI-generated artwork. And there was a lot of controversy about that: how does an AI win an award? And in terms of creation, AI is not creating anything new for the most part; it's just taking what's already in its data, putting things together in a new way, and reproducing it in a different light. So I think there's a lot of concern about that copyright issue. I think about a lot of the conversation around ChatGPT: there are currently lawsuits going on against ChatGPT about the type of data they collect, and whether they should be allowed to do that or not. So I definitely think the art thing is a big conversation, and they've been having this conversation, I think, way before this current bubble; even in the journalism world, they've been having this conversation before.

Bri:

Yeah, and some of what you both are pointing out is this burst over the last year: you would see articles that say they were generated or written with help from AI, and I know there were a lot of conversations about what that does in terms of work. In sports, for example, a lot of the reporting of game stats isn't usually, or isn't always, written by a human; it's written by AI or with help from AI. So there are already these slight ways we're seeing how these generative technologies are impacting different areas of the workforce, and the most vulnerable within the workforce: folks who are already potentially being underpaid for their work, or, again, the underpaid workers whose labor these technologies are built on for quality control and data cleaning. So I want to think a little bit more about this interrelationship between human-driven data and the systems it's feeding into. To shift a little, but still thinking about how it's humans interacting with these technologies that builds them up: what are some of the ways that algorithmic surveillance plays into what you all do? I know this is a specific term that your work, Kainen, is explicitly dealing with, so I think we'll go to you first, but I'm thinking more broadly about surveillance as well, in terms of sucking up data, human interaction, and the content that we put online. Kainen, can you talk a little bit about surveillance in relation to your work?

Kainen:

Yeah, in terms of surveillance, it's looking at how people are monitored or tracked, and police and law enforcement have been using algorithms for a while now to basically streamline the process. So I'm interested in how law enforcement uses facial recognition and other surveillance technologies like license plate readers, and with AI, it's basically scaling up their ability to monitor citizens, specifically with facial recognition. One of the dangers is that it's highly inaccurate for people of color, those who have darker skin tones, or who are transgender. Joy Buolamwini's study, called Gender Shades, basically showed that the leading facial recognition technologies had high error rates for these populations. However, police around the US were using them, and people were being misidentified. So AI is basically allowing police to try to use predictive policing tactics, using the algorithm and AI to say, oh, this is where crime might happen, or over-monitoring this person's behavior, the way they're walking, so they might commit a crime. And since these technologies have biases, AI is basically scaling that up and replicating it. So, yeah, AI surveillance has a lot to do with my research. We also don't really understand AI, truly, for multiple reasons. One, a lot of companies basically try to protect their intellectual property so that the algorithms and the AI can't be audited; we call this black boxing. So as we're using more AI, we understand it less, and then we're basically unable to take out or really check where the bias is happening. So that's very scary. For example, I follow the work in Brazil: in São Paulo, which is the largest city in Latin America, there's a project called Smart Sampa, and basically the city wants to implement 20,000 facial recognition cameras around the city, at schools, parks, hospitals. And again, in Brazil, it's been shown that police overuse facial recognition on Black Brazilians, and a lot of residents have already been misidentified. So this project is really scary, and I'm following the work of activists in an organization called Tire Meu Rosto da Sua Mira, which is Portuguese for "Get my face out of your sight." It's basically a national organization. So this is some of the work I'm following, and I'm trying to learn how I can be an ally to these movements, as a researcher and a person. Hope I answered your question.

Bri:

I think so, and I think it's interesting that there's this tension: these technologies are being implemented, and some of the implementation is disproportionately affecting people of color and people who are gender non-conforming. Some people might say the solution to that is, well, feed it more data featuring those populations. But there is this larger problem of the surveillance existing at all, or of trying to engage in this predictive policing. So, Antonio, I know your work isn't directly engaging with surveillance, but I'm thinking about some of the conversations we've had, maybe about the kinds of data that get taken up in these things. Can you speak to that?

Antonio:

Yeah, because I think the surveillance goes hand in hand with data quality, and I feel like the reason these issues are happening at an astronomical level with facial recognition is that the data these systems are based on already disproportionately affects people of color or people who are gender non-conforming. So when your data set says, for example, that in a particular community most of the people in jail are people of color, and that is what your data is basing everything off of, then it is going to misidentify people of color as the people that are actually committing the crimes, when that may not be the case. And so that is the issue of data quality: you don't have a diverse enough data set that can account for all of that, because, once again, we have humans creating these systems and encoding their own biases into them. And so it's a cycle that just gets blown up to a larger scale. In my area, it's kind of the same thing when we think of ChatGPT and all these writing technologies: it's the data quality thing. If you have a bunch of writing samples that are predominantly white, cisgender, male, traditional academic edited English, then anyone who does not write that way is automatically going to be written out of that system. And so yes, you can prompt ChatGPT, but even when you're prompting it, what kind of data is it basing the diverse writing off of? From my own experimenting with it, it is not the best. There are a lot of caricature-type interpretations, or a lot of just very simplistic understandings. And in order for it to get better, you have to surveil this writing style, but at the same time, what issues does that raise when you're thinking of communities that have a historical precedent of being harmed by technological development, and then you're saying, we're going to surveil them more? To me, there are ethical issues and conversations that should be had that I don't think are being had, beyond just "feed it more data" or "better data." Is that a good choice, though, if we're doing that to people who have never really benefited from technological advances and have always been the guinea pigs of technological advancements? So it becomes a very complicated issue, beyond just feeding it more data. And then, when you do feed it more data and it becomes bigger, I think it becomes an issue of: what is the purpose of this technology, and what do we get out of it? If it can theoretically write in every type of language, then what is the purpose of writing? When we surveil too much of everything, it makes us question the purpose of what we're using it for.

Kainen:

And the danger is that those who have the power to use it, like we said, law enforcement, aren't really being regulated, and no one is really tracking how often they're using these systems or the AI. So it's just really dangerous to have that much power with that lack of regulation or control. That's one of the main other issues with it.

Bri:

Yeah, I think that's definitely important, because we started off the conversation with who's left out of conversations around AI, and that's because the conversations are centered around how these tools are empowering big tech or law enforcement. So there's obviously going to be another side to that coin. We've spoken a little generally so far, but I'm wondering if I can hear from both of you about your specific research areas: specific examples that might demonstrate things for folks who are listening. Antonio, when you were talking, I was thinking about some of the results that you have found from experimenting with it. So could you talk a little bit about some of the inequities or biases that you have seen come out, and what kind of experiment you've done?

Antonio:

Yeah, so one of these examples is my own, and one I heard from someone else who told me their experience at a conference, I think after a presentation I was giving. So I'll start with mine first. Initially, with ChatGPT, I wanted to test its understanding without any overly elaborate prompting. So I asked it to write something in English, I told it to write something in Ebonics, and I told it to write something in broken English, because I wanted to see how it was interpreting these terms, and whether the results would be, as I assumed, very loaded and racist. When I did "write in broken English," it gave me something like "me speak no English, me no speak very well." And I'm like, well, that's scary, because that means it's getting it from the caricatures we see in movie scripts, like those parody movies where you have the stock Latin character who they make seem like they can't speak English. And then when I asked it to speak Ebonics, it gave, like, "Yo, what's up, homie?" Once again, a caricatured way of speaking, which is scary, because these are valid and very nuanced ways in which people communicate, and to chalk them up to these stereotypical movie answers doesn't show it has an understanding of how to actually construct sentences in a communicative way. But when I asked it to write in English, it responded and said, "Sure, what would you like me to write?" So that tells me, once again, what the standard is when you look at the system. So then I took an article that Vershawn Ashanti Young wrote in African American Vernacular English. It was a published article, and amazingly written, basically about students writing in their own languages. And I had ChatGPT edit it. I didn't prompt it further; I just said "edit this text" and gave it a paragraph or two. It took out every abbreviation, every stylistic choice that was indicative of African American English, and it made it more standardized American white English. That tells me that just by saying "edit," it's going to go to the default standard of what writing, quote-unquote, should look like. So I then took the response it gave me, put it back into the system, and said, "Write this in African American Vernacular English." And all it really did was abbreviate words and add a few apostrophes here and there, and that's it. It didn't do anything in regard to syntax, didn't do anything in regard to understanding how things are said in a different way. It just gives a very basic minimum. So once again, that points back to data quality: not having good data sets to be able to give a more nuanced response. And then at the conference, I was talking with this scholar, and she said that when ChatGPT first rolled out, she was toying around with it, and she asked ChatGPT to give a feminist perspective from the Quran. And ChatGPT told her, "Unfortunately, I can't do this; that's dangerous rhetoric," or something along those lines. That has now been fixed, apparently, but initially it wasn't. So that tells you, once again, that the system is built to privilege only one kind of person who interacts with it, and that you constantly have to continue to revise it in order for it to accommodate people who are not the standard, quote-unquote.
And so to me, this shows that there are a lot of inequities within this system and how it functions, and that the over-reliance on it, or the over-posturing of it as this most amazing thing, can be a little bit dangerous and short-sighted, not understanding the potential long-term effects. Yes, it could be a tool for, say, ESL students coming to this country and to the university system, to help bridge that gap. But, once again, what is the system teaching those ESL students about what writing is? It presents writing as very monolithic, in a way that doesn't try to understand the diversity of how things are written. So I think that is a big issue: if we just continue to use this, then we're just producing this very standardized writing style, and we're not really understanding the diverse voices that have existed, continue to exist, and that we're trying to promote, if we just resort to the system.

Bri:

It kind of illustrates pretty quickly the boundaries of working within a system that is built on finding norms, finding the average, and reproducing it. So, Kainen, you talked a little bit about some of the organizations that you are working alongside as you research, but could you give a specific example of something from what you've been researching so far that might demonstrate the impact of AI?

Kainen:

I was going to say, your last example, Antonio, where the algorithm said, "Oh, we can't do this," but it got fixed later, reminds me of the book Algorithms of Oppression by Safiya Noble. She actually went here to the University of Illinois, did her PhD here, and she talks about how algorithms from big data companies like Google are embedded with racism and sexism. She gave an example: I think when she was a student here, she looked up "black girls" on Google, and the images that showed up were pornographic results, or even gorillas. And then after she called this out, Google fixed the algorithm so that search result doesn't come up, but those hidden biases are still there. So that's what your last example reminded me of.

I was also going to give an example from my experiences in Brazil. I used to live in Brazil before I came to do my PhD. Something that inspired me to research facial recognition was when I was in the airport, going to Brazil. I was in line, and basically, to board the flight, we all needed to scan our faces. This was around 2021. So I scanned my face, and it showed all my information: my seat, my name, etc. So I started thinking, I wonder where else in Brazil facial recognition is being used? I went back to Brazil and learned that the government wanted to implement facial recognition in airports. They made this pilot program where you can board your flight by just using your biometric data, by just using your face. And it was an opt-in program. They pitched it as, "Oh, it's going to make travel so much easier. We're going to help the environment. It's paperless travel." That's kind of how it goes: when we talk about new technologies and AI, we talk about the benefits and how they're going to help and make things more efficient. However, on the back end, of course, it can lead to more surveillance. So a year or two later, I was back in Brazil for a conference, and I realized there had been such a big, dramatic change in how many more systems were using face recognition. At the airport, in the subway systems, but even to enter my friend's apartment building, we had to scan our faces. And I had an incident where my face wasn't in the database, even though my friends had emailed their apartment saying, "Hey, my friend's going to be staying here for the week. Here's his name and information." They didn't register me in the system. So every day, I would leave to go to my conference, I would follow someone out, and I had no issues; they would just unlock the gate for that person. Then one day I was by myself, it's like day five I've been there, and the security guard was like, "Hey, scan your face." It's in Portuguese, so I'm explaining, "Oh, sorry, I'm not in the database, but you have my information." And he said, "Oh, scan your face." So I scanned it, and then he said, "Oh, you're not in the database." It's like, yeah, that's what I said.

And basically, he was semi-interrogating me, like, "Why are you here?" And he wouldn't let me leave for like five minutes. And I'm African American, and at the apartment there were a lot of white Brazilians, and they had no issues leaving at all. So this is one example of how racism integrated with AI and technology can be used to further oppress vulnerable populations. That was just my personal experience. And these groups that I'm working with in Brazil, those are a lot of their concerns as well: the targeting of poor and Afro-Brazilian populations, as I said, by police or in stores, and of transgender individuals. Those are some things that I saw firsthand with my experiences being in Brazil.

Bri:

Yeah, I think it's interesting, too, to think about the fact that there's been such a quick proliferation of these technologies and their implementation, but it's hitting different places at different rates. It's disproportionate, right? So, for example, Illinois does have a law on the books protecting individuals from biometric data collection and use. But that's obviously a very unique situation; most US states don't have anything like that. And then, looking at countries around the world, many have as much as what you're talking about in Brazil, if not more, building their systems on these AI-driven models.

Kainen:

And then I have the privilege of being an American citizen. So if something happens, like if a police officer misidentifies me, I can say, "Oh no, I'm sorry, I'm American," and they'll believe me. But if I were Brazilian, I'm less likely to be believed and more likely to be abused. So I just wanted to add that my experience there is different from that of a Brazilian who is there as well.

Bri:

I think that gets back to these layers of oppression. We’ve talked a bit throughout about how the proliferation of these technologies is going to impact different classes of folks based on the kinds of labor they perform, but also based on racialized, gendered aspects—like all of those things are working together to contextualize people within these systems. We’ve already been talking about what people should know, or what they maybe don’t know now but need to learn about how these technologies are being used. I’m thinking about how both of your work is looking at education, whether that be literal education in academia or public education and advocacy, trying to teach people about these systems. What are some of the challenges of performing that kind of education and advocacy work around issues of AI and algorithmically driven technologies? Antonio, could we start with you?

Antonio:

I think there are many issues, one being that it’s new. It’s kind of hard to get people on board to understand the long-term impact of this. It’s like, ‘Oh, it’s not going to be that big of an issue,’ and then fast forward 15 years, it is a significant issue. Especially if you’re not in the research area, it’s kind of hard to make someone else connect to it in the same way you are looking at it. That’s one challenge. Another challenge is that because it’s new and exciting, and everybody’s talking about it as this hot topic right now, looking at the negatives is difficult for some people. Additionally, when you have corporations pioneering a lot of this stuff that are money-driven, they’re controlling the conversation a lot and touting the benefits of how this can help change your life. So when you try to fight against that, your voice is probably not breaking through in the same way as the voice saying, ‘You need this type of AI technology on your laptop, your computer, your phone. Use it in your classroom. It will make your job so much easier.’ It’s hard to fight against that unless you’re organizing in a way that challenges those corporations and power systems directly. But even then, if they don’t have a financial incentive to make a change, that’s a challenge to educate people about it. Also, because it’s still new, it’s hard to know where this is going to end up in the next five to ten years or how fast it’s going to evolve. That’s also a challenge in coming up with solutions or advice because it’s still very early in how we’re researching this. I don’t think anyone right now has a hard, true-tested way of what the best way to research this is. It’s a lot of trial and error, and because of that, it’s hard to be concrete. I know there’s a lot of stuff coming out now with generative AI technology regarding writing and teaching tools in the classroom. There are collections coming out, and people are just publishing things off the first thing they hear, which I don’t think is productive in some respects. Because it’s a hot topic, everyone wants to publish about it. So the level of rigor that they’re going to do for the research is not going to be as it should be, or as you would hope it would be, because they’re trying to get something published on it and be the first one out on it. Getting through all the fluff to actually get concrete information about this that’s substantive is a challenge as well. So I think it’s a combination of all of that that is going to be an issue. Technology is always something that people are excited about, like the next advancement, the next biggest thing. Think of the iPhone; they release them almost every year, and you have thousands of people lining up at the mall or the Apple store to get one. So technology is exciting, and when you have people criticizing it, it’s not the easiest position to be in.

Bri:

As I’m hearing you talk, I’m thinking about communications infrastructures and how the same corporations, or similar ones, that are developing these technologies are the ones that people engage with every day. It’s been a slow creep. People may not be thinking about all the auto-prediction that comes up when you’re working with Microsoft Word and how much that has developed over the past couple of years. You’re already using these same kinds of technologies, and you may not even realize it. So, how do you combat that messaging within this larger structure of these technologies? So yeah, Kainen, what are some of the struggles in thinking about, not necessarily an explicitly academic setting, but a public advocacy setting?

Kainen:

I guess it relates to our discussion at the very beginning, where we acknowledged that a lot of it is centered around academia. It takes a lot of privilege to be an activist and to lead some of these activist movements. Many times, the movements for digital rights are led by upper-middle-class academics, and they're not led by the people most impacted. Critical scholars, critical data scholars like Safiya Noble, Ruha Benjamin, and Simone Browne, have talked about how these movements don't represent the people that are most likely to be impacted. So, when you're an advocate telling people to resist AI, resist surveillance, a lot of vulnerable populations don't have that ability. For example, they can't just not go to work. Those who are truck drivers are being heavily surveilled now, or in healthcare settings, so they don't have the privilege to say, "Okay, well, I'm going to refuse work," because they have families to feed. So that might prevent people from participating, because either they don't see the power they have, or what we're asking them to do isn't feasible for them at the moment. Another thing is making the conversation relevant to their daily lives. Again, as we're centering it around academia, we're not relating it to the everyday person who isn't in academia. So they're like, "Why should I care about this? I have a lot of struggles in my day-to-day life. I don't have time to worry about AI right now." So those are the things I'm trying to learn: how do organizers in Brazil and the US collaborate with the communities impacted, and how do they ground their movements in the everyday person? Another thing that I've heard specifically from the organizers in Brazil is that a challenge is just getting exposure for the movements they're fighting for outside of their country. When I met with them in May, I asked, "So, how can I support your movement and be an ally?" And they said, "Yeah, we just need more exposure, so people in the US and other parts of the world know the issues we're fighting for, that these big projects are being implemented without our consent, and we just need more support." So that's what I'm trying to do with the work I'm doing now: provide more exposure. Because I do think in the US, we tend to pay more attention to what's happening here than globally, which is an issue. So those are some of the challenges that I've seen.

Bri:

Yeah, and Kainen, you’re kind of anticipating my final question. So, I think I’m going to move into a wrap-up here, which is, what draws both of you to researching this topic and what’s vital about it for you? So yeah, Kainen, I don’t know if you want to continue that thought specifically?

Kainen:

I'm proposing my dissertation next semester. One of the questions I'm asking is how Afro-Brazilians and Black people in the diaspora resist surveillance, and how we can collaborate together. The topic is personal for me as an African American, and also because I used to live in Brazil. I want to support Black-serving organizations and groups in Brazil, and essentially use the resources I have to support them and create a coalition. My personal experiences inspired me, and also seeing the relevance today. Like I said, that project where they're trying to implement 20,000 facial recognition cameras in Brazil shows that once you integrate infrastructure like these AI cameras, it's super hard to dismantle. Once you've started, it's hard to get off the train. That's why it's important right now to promote awareness around these projects and their harms. AI isn't all good. We don't understand it, as we've said, and as Antonio mentioned, the consequences will be seen immediately and in the future, even if we don't see them now. So those are some of the reasons I'm doing this work. And social justice and advocacy are really important to me.

Bri:

Antonio, yeah, what draws you to this work?

Antonio:

I think what draws me to this work is the way we think about writing and learning. Even if it may be considered a traditional way of thinking, I believe it's important. When we spend time with ourselves trying to process thoughts, whether it's writing on a marker board, chalkboard, notepad, or tablet, it's crucial for us to hold on to that. I feel like AI technologies can potentially threaten this process. If we're resorting to letting AI do our homework or write our papers, then what are we gaining from that? What are we learning? One aspect that motivates me is the existing issues with language diversity, which I've experienced throughout my education, from elementary school to college. If I'm already encountering these issues with humans, and now we're introducing AI into the mix, what about those voices that have never been heard or are trying to be heard? These AI technologies aren't supporting that mission. In our field, there's a push to acknowledge diverse voices and writing styles, and to uplift and promote them. When I teach, my first goal is to promote the students' writing styles, not to conform them to a standard. It's about helping them elevate their style of writing to make it sound the best. These technologies potentially threaten this. If they are going to be part of the future, I want to be more critical about how we can make them more productive, and not just accept them as they are without pushing back. Otherwise, all the work we're doing now, and have done before, will be pointless. I feel personally connected to this, and it's hard to convey this message sometimes in the research area. It's challenging to get the point across about language diversity until it happens; then, when it does, it's like, "Oh goodness, this all sounds the same." We're all producing the same content, and I don't want to reach a point where everyone is forced to conform to a standard because these technologies have become so ingrained in our lives. We already see this with everyday technology. For example, we're on Zoom right now, being recorded, and this technology is already ingrained in our lives, to an extent where we're somewhat coerced. So, I want to figure out the best way to not be coerced, but to work with the type of coercion that we are experiencing.

Bri:

Yeah, I think it's a crucial point that these technologies, as they become more and more prolific, are going to shape the way we sound and the way we behave. There will be normative expectations around what we do, how we talk, and how we communicate with one another. I think we're probably going to wrap up there. So, I want to thank you both so much for participating in this conversation and offering these perspectives to folks who may be only just starting to think about AI technologies.

Antonio:

Thank you.

Kainen:

Thank you so much.

John Moist:

GradLIFE is a production of the Graduate College at the University of Illinois. If you want to learn more about the GradLIFE podcast, blog, newsletter, or anything else Graduate College related, visit us at grad.illinois.edu for more information. Until next time, I'm John Moist, and this has been the GradLIFE Podcast.