GradLIFE Podcast

AI @ Illinois: AI and Accessibility with Kyrie Zhou

July 09, 2024
Graduate College (UIUC)

This episode is part of our special GradLIFE series AI at Illinois, where we delve into the impacts of artificial intelligence technologies on our graduate students' research, teaching, and thinking.

On this episode, Bri Lafond (Writing Studies doctoral candidate and Graduate College Career Exploration Fellow) sits down with Kyrie Zhou (Information Sciences) to chat about generative AI, education, accessibility, and researching bias in new technologies.

____

Show Notes:

Kyrie Zhou
Information Sciences at Illinois
Kyrie's Website
Kyrie's Publications

Madelyn Rose Sanfilippo - School of Information Sciences
Ted Underwood - School of Information Sciences
Rachel Adler - School of Information Sciences


University of Illinois Generative AI Solutions Hub - Office of the Provost
Generative Artificial Intelligence - Center for Innovation in Teaching and Learning


Some Miscellany from the Show:

Kyrie's Recent Piece: "Accessible Adventures: Teaching Accessibility to High School Students Through Games"

Kyrie's Recent Piece: "The Teachers Are Confused As Well: A Multiple-Stakeholder Ethics Discussion on Large Language Models in Computing Education"

GradLIFE is a production of the Graduate College at the University of Illinois Urbana-Champaign. For more information, and for anything else related to the Graduate College, visit us at grad.illinois.edu

Transcript:

John Moist:

Hi, I'm John Moist, and you're listening to the GradLIFE podcast, where we take a deep dive into topics related to graduate education at the University of Illinois Urbana-Champaign. I'm here with another installment of our AI at Illinois podcast series, and back again with Bri Lafond. Bri, tell us a little bit about this new conversation you've had.

Bri:

Thanks, John. So in this episode of AI at Illinois, I talk with PhD candidate Kyrie Zhou about his work in the iSchool. Kyrie's research interests are broadly in tech accessibility, tech ethics, and tech education. He aspires to design, govern, and teach about information and communication technologies, as well as AI experiences, for vulnerable populations. More recently, his research has focused on accessibility, design, and education, as well as the ethics and governance of generative AI.

John Moist:

Let's take a listen.

Bri:

So let's go ahead and just jump into a little bit of intro. This is our next episode of the AI at Illinois podcast, and today I'm talking to Kyrie. So Kyrie, can you give us just a basic introduction to who you are and what you do?

Kyrie:

I'm Kyrie, a fourth-year PhD candidate in the School of Information Sciences at the University of Illinois Urbana-Champaign. I work closely with Dr. Madelyn Sanfilippo, Dr. Rachel Adler, and Dr. Ted Underwood at the iSchool, and also Dr. Shinto at ...Chinese University. I obtained my bachelor's degree in computer science from Wuhan University in China. My research interests are broadly in tech inclusivity, tech ethics, and tech education. I aspire to design, govern, and teach about ICT and AI experiences for vulnerable populations. Here, ICT means information and communication technology in a broad sense. Yep, that's it for me.

Bri:

Yeah, sounds good. So obviously, AI is woven into some of what you're doing, but people in general have only really started to learn about what AI is and what it does in the past year or so. Can you talk a little bit about how, specifically, AI comes into your research and what you do?

Kyrie:

Sure. So I've been working on AI for several years. At the beginning, I started with some machine learning work, like theoretical machine learning and adversarial attacks, but later I found that I'm more interested in the interaction between AI and humans, and then I shifted into human-computer interaction research. But AI is still part of the discussion in my research: the ethical issues of AI, accessibility issues, and AI education for college students, high schoolers, and younger kids.

Bri:

So you're essentially studying AI, but you're really thinking about how people interact and engage with AI?

Kyrie:

Right, exactly, and how AI impacts people in the real world, yeah.

Bri:

So, speaking of a real-world context, and our own context here at the University of Illinois, AI is already starting to have a pretty big impact on education. Could you talk about what you've found in your research so far about how students are making use of AI in educational contexts?

Kyrie:

Yeah, of course. We have a preprint out recently. In the preprint, we explored the ethics and regulation of large language model use, actually mostly ChatGPT in our sample, in higher education, specifically computing education. We discussed ethics with multiple stakeholders, including undergrad students, grad students, professors, and also industry practitioners. The student mental models of large language models include a writing tool, a coding tool, and an information tool. We also discussed ethical implications in each aspect, including privacy concerns, inaccurate large language model responses, hallucinations, biases, and academic integrity issues. What we found is that students are actually, overall, cautious about large language model use. First, they are afraid the teachers will find out about large language model use in their writing or assignments or coding, because large language models often generate clearly AI-style responses. So students tended to apply their own thoughts after using ChatGPT to draft a paragraph or a code snippet. Also, many of them do want to learn something out of their degree, because tuition is expensive here in the United States, and students don't want to waste money. Also, finding a job in the IT field is more about polishing up your technical skills and soft skills like communication, rather than having a fancy GPA. So overall, I think students are doing fine with large language models in the educational context.

Bri:

Yeah. So it sounds like, based on your findings, students are thinking about the impacts on their own education, but from the perspective of teachers, there has been a lot of worry about how students are using these tools. So what would you say are some of the concerns that educators should, or should not, have about how their students are using these things?

Kyrie:

That's a good question. Well, personally, I don't think there's much to worry about regarding students' ChatGPT or AI use in higher education. It was the same situation when calculators were invented, or when Google search was invented, right? Will students still learn how to do basic calculation? Will students directly copy content from Google? But you see, now we're just fine with all those tools. So when a technology is invented, the more important thing is to rethink education, knowledge, assessment, and academic integrity. For example, assignments should be designed in a way that cannot be directly solved by ChatGPT. And also, soft skills valued in industry should be emphasized, like communication skills and teamwork, which are often lacking in the CS curriculum right now. And if there have to be some guardrails to help students learn, universities can probably implement some proactive education, such as AI workshops to help students, and also teachers, effectively and responsibly use ChatGPT. It's a new component of digital literacy, but I don't think there's much to worry about from the teacher's perspective.

Bri:

And your work so far has focused on students who are working in a computer science context, right? So, using it maybe to help generate code that they're then also working with. In terms of your future research plans, do you plan to continue looking at computer science contexts for education, or do you have interest in other contexts, like how other disciplines are taking it up?

Kyrie:

Right, yeah, that's something I wanted to do, actually. At the beginning of this project, I kind of wanted to compare ChatGPT use across disciplines: how CS students use ChatGPT, how English students use ChatGPT, because it's obviously different in different disciplines. And we did write about it in the paper. We say we should regulate ChatGPT use with flexible and contextual policies, and the policies should be department-level instead of university-level, since every subject is different. I think that's something I would do in the future: doing focus groups with students and professors from different backgrounds to see how they use ChatGPT and generative AI differently.

Bri:

So my mind kind of goes outside the frame of our conversation here; I'm going to jump ahead a little bit. My mind goes, with that, to this idea of guidance or regulation. At the stage we're currently at, like you were saying, it makes more sense to have maybe departmental-level policies versus much larger ones. So do you think it's possible at this point to have guidelines, or academic integrity rules, in relation to AI that are informed by research? What is possible at this stage in terms of top-down guidance, and what should people maybe not jump into?

Kyrie:

I think, according to the interviews with students and professors, it's a bit too early for universities to step in, because currently there's no clarification regarding what constitutes an academic integrity issue in the large language model era. There's also a lack of clarification regarding who should be held responsible for misuse of ChatGPT: whether the students are responsible, the teachers are responsible, or the university is responsible. There's currently no clarification. So this would be too early, and also it's not very feasible. Should the teachers check every student's assignment to see if it's ChatGPT-generated? That's just not very practical, I would say. So maybe we can just wait and see how people use ChatGPT in educational contexts, and if more clarification is achieved, guidelines can step in at that point.

Bri:

Yeah, so we've talked a little bit about some of the research that you're doing, but I also know that you've participated in some advocacy in terms of policy comments. So can you talk a little bit about that experience, and what is some of the work that you have done already in regards to policy and AI?

Kyrie:

Sure. I've submitted some policy comments to government departments with my advisors and my colleagues. For example, recently we submitted a policy comment titled "Copyright in Artificial Intelligence" to the Copyright Office. We discussed research that considers the implications of using copyrighted materials to train AI, especially generative AI models, which answers the input question. We also discussed research that considers the implications of managing copyright protection for material generated by AI models, which answers the output question. And then we discussed existing laws about copyright, and unlearning techniques, and how they can be adapted to address the copyright issues with generative AI. At the end of the comment, we emphasized that we believe existing copyright law is sufficient to protect interests and appropriately incentivize creative production involving generative AI, but at the same time, this needs to be made clear when applied by the courts in case law, and rights need to be clarified via guidance, since creators and users of these systems are not necessarily experts in copyright law. So, right, it's basically talking about this issue, discussing related research, and trying to generate some practical guidelines for government departments to regulate AI and technologies.

Bri:

Yeah, that's really interesting, kind of thinking about the bigger picture. I wanted to turn back a little bit to thinking about people's uses of these technologies, and I know another facet of your work is thinking about access, so questions of access and accessibility with these technologies. In terms of AI and accessibility, what are some of the implications that people should be aware of with regard to the accessibility of these technologies?

Kyrie:

I think a key takeaway from my accessibility research is that not everyone has access to AI, and not everyone can take full advantage of AI, given their socioeconomic status or disabilities. And equal access to AI is also important in education settings, as we talked about. The paid version of ChatGPT affords better privacy protection: it allows you to control how your data is used. It also generates higher-quality responses than the free version. So what are the implications for students who can't afford the paid version? Would their privacy be compromised to a large extent? Would they do less well academically? These are all questions we need to answer as researchers and educators.

Bri:

Yeah, that's another interesting facet of the conversation: instructors are potentially worried about how their students are taking these tools up, but should we maybe also be worried about students who are not taking these up, or who are not able to take these up? So could you maybe talk a little bit more about some of the research that you've done so far in relation to accessibility, and not necessarily AI, but maybe algorithmically driven technologies?

Kyrie:

Yeah, sure. In our previous research about crypto wallets, we found that little attention has been paid to understanding and improving accessibility barriers in crypto wallet software. It's the same case with many other emerging technologies, such as AI, too, obviously. But in that project, we asked both blind and sighted individuals to use MetaMask, one of the most popular crypto wallets, and we uncovered interrelated accessibility, learnability, and security issues with this wallet. We then redesigned MetaMask in terms of accessibility, education, and security to make crypto wallets more accessible. We used some accessibility features, like labeling buttons, adequately organizing web elements, and simplifying complicated tasks during crypto wallet use. We also embedded just-in-time educational resources, including videos and text, to help blind users understand crypto concepts and export wallets. In the end, we synthesized two key insights for accessibility from the user experiment and our iterative design process. First, adhering to accessibility standards and best practices such as WCAG could address some critical accessibility issues, like unlabeled buttons, and foster greater adoption of crypto wallets. So future designs of crypto wallets should explicitly consider accessibility and implement accessibility best practices. Our second takeaway is that blind users have unique challenges and needs compared to sighted users. Some operations that are simple for sighted users, for example, confirming the secret recovery phrase by rearranging 12 shuffled words into the original order, are extremely hard with a screen reader, because the user has to go back and forth to check each word and select them in the wallet. So crypto developers, and tech developers in general, are encouraged to consider accessibility as a key design requirement in their design process. So, yeah, that's this project about crypto wallet accessibility.
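
(A minimal sketch of the unlabeled-button issue Kyrie describes, assuming a web front end; the button and label below are hypothetical illustrations, not code from MetaMask or the study. Per WCAG 2.1 success criterion 4.1.2, an icon-only control needs an accessible name, such as an aria-label, so a screen reader can announce its purpose rather than just "button".)

```typescript
// Hypothetical illustration only; not from MetaMask or Kyrie's redesign.

// An icon-only button: a screen reader has nothing to announce but "button".
const unlabeled = document.createElement("button");
unlabeled.textContent = "➤"; // visual arrow icon, no text alternative

// The fix: give the control an accessible name (WCAG 2.1, SC 4.1.2).
const labeled = document.createElement("button");
labeled.textContent = "➤";
labeled.setAttribute("aria-label", "Send transaction");

document.body.append(unlabeled, labeled);
```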

Bri:

So, hearing you talk through that, really thinking about user experience specifically for blind or legally blind folks who are trying to access these technologies, makes me think about broader issues of algorithmic bias, in terms of people's experiences that aren't being accounted for in design processes. So could you maybe talk through how algorithmic bias, or issues of particular user positionality, intersect with your work?

Kyrie:

Right. Bias issues, and gender issues in particular, because I often research gender bias: I think these are persistent and underestimated in AI systems, as well as in other aspects of current society. In our recent research, we found that two of the world's most prominent large language models, ChatGPT in the US and Ernie in China, reflect gender bias. We did a qualitative analysis of social media discussions around these two large language models and identified different narrative patterns. For example, people more often complained about implicit bias in ChatGPT, like linking different genders to specific professional titles, you know, the typical woman-nurse, man-doctor thing. But people more often complained about explicit, concerning biases in Ernie, the Chinese equivalent of ChatGPT, for example, overly promoting women's pursuit of marriage over career, something like that. And we proposed governance recommendations to regulate gender bias in large language models. For example, it is important to create more concrete and contextual policies that account for cultural influences on gender bias in AI. Also, more broadly, the industry practices that have built upon social norms to protect user privacy can partly be attributed to legislation around privacy, such as GDPR, and to enforcement actions against actors who refuse to comply, such as Facebook, or Meta, whatever its name is. So we think norms to mitigate discrimination in AI systems might be similarly formed with legislative efforts and by engaging and educating users. Those are the main takeaways from that project. But recently, in another project, we developed hands-on tutorials to equip AI creators with awareness and knowledge of gender bias, which is often an overlooked topic in computer science and artificial intelligence courses. Also, a majority of AI creators nowadays are male, and the existing literature finds that they kind of lack an awareness of gender bias. That's another reason we wanted to develop those tutorials.

Bri:

Yeah, so your work is looking at the impacts of these things on end users' experiences, but your focus on education, particularly in computer science, is also trying to intervene early on, right? So you talked a little bit about this comparative analysis, thinking about ChatGPT in the US and the equivalent in China. The folks I've been talking to here at the U of I have been talking mostly about a US context. So could you talk a little bit about the context of China in relation to AI, and maybe what is happening with uptake of the technology there that we're not necessarily hearing about or seeing in the US?

Kyrie:

Yeah, sure. As I mentioned just now, Ernie, the large language model in China, exhibits explicit, concerning gender bias. So understanding AI ethics issues and pitfalls in conservative cultures is important, since women in those cultures are assigned more traditional gender roles. Also, AI developed in those cultures is given less consideration regarding anti-discrimination, accessibility, privacy, and so on. Such research should be promoted, in my opinion, but in reality, it's sometimes dismissed by reviewers in academic venues. Many HCI researchers have complained that their papers about ICT use in non-Western contexts have been criticized and rejected by reviewers for not being generalizable, or something like that. What's the point of that? But yeah, I'm personally willing to research, and find it fulfilling to research, ICT use in less-researched, marginalized regions, you know, which is lacking in the literature.

Bri:

Yeah, that's really interesting, because you're pointing out that you're not only looking at issues of algorithmic bias, but also running up against issues of publishing bias: a Western-centric publishing, or Western-centric academic, context that only wants to focus on the US.

Kyrie:

Right, yeah. That's something I'm trying to do.

Bri:

We've talked quite a bit about the different threads your research takes up: thinking about things from an educational perspective and the end-user experience, but also the folks who are learning to design these systems and how to potentially intervene with issues of bias. So what would you say are the threads that tie your work together? In terms of thinking about your own positionality as a researcher, how do you bring these things together, and what would you say is the coherent narrative of what you do as a researcher?

Kyrie:

Yeah, it's a long story, actually, with all my research goals. My research mainly revolves around three angles: tech ethics, tech inclusivity, and tech education. The thread that ties all the different aspects of my work together, which I happened to discuss with my advisor Madelyn Sanfilippo the other day, is how technology impacts people, especially vulnerable populations. I think part of the reason for my broad research interests is that I work with people from different backgrounds: computer science, information science, the social sciences. My research started with tech ethics issues like user privacy and security, and later I was involved in a project about teaching AI ethics and cybersecurity in high schools, so I started tech education research as well, which is a fulfilling thing to do. With these projects, the kids got to learn about tech and its impact on society early in their lives, and they can be more mindful of responsible tech use. Regarding my tech inclusivity research, I think that's an influence from my wife, because she's a feminist social media influencer promoting feminism in China, and that has significantly impacted my research agenda. Two or three years ago, I realized that promoting more inclusive ICT and AI for women, older adults, children, and other vulnerable populations is important, yet under-investigated. I also got to increase my own empathy when doing tech inclusivity research, which is important for the other lines of my research too. So, yeah, that's basically the story of my research journey.

Bri:

No, it's really interesting. And yeah, like I said, you're looking at a lot of different areas, but there is this core to it, which is about vulnerable populations: how they interact with, and are impacted by, these technologies that seem so pervasive. So I'm going to ask a little bit of a different question. We've been talking about your awareness of, and research on, these technologies, but I'm kind of curious: do you use AI technologies in your day-to-day life, or in your research, and if so, how?

Kyrie:

That's actually a great question, because as researchers, we not only need to understand how technology impacts other people, we also need to self-reflect on how we ourselves use technology and how it impacts us. So I've been using Grammarly a lot recently. I'm not sure if it's a generative AI tool, but it's an AI tool for sure. It has been my savior, I would say, because I have suffered from severe, undiagnosed obsessive-compulsive disorder for more than 20 years. I check every piece of my writing, emails, papers, assignments, multiple times before letting them go. I also check the door's lock multiple times before I leave my apartment. So I started using Grammarly several months ago, and what it brings me is more confidence and less anxiety. For example, after correcting the errors it highlights for me, I feel my writing is good enough to go. Besides that, I also mindfully avoid repetitive behaviors in other aspects of my life, like checking unread messages and unread social media feeds, and reviewing weekly and yearly plans, and so on. So, yeah, it has had a great impact on my mental health, I would say.

Bri:

And it sounds like something you're pointing to there is the potential of these tools to have an affective impact: they're not just sterile tools, but they can impact you emotionally. I don't know, have you thought about or looked at any of that in your research?

Kyrie:

I think a long time ago, I did a research project on understanding how people watch food shows: live streams or short videos featuring a person eating. Those videos impact people's eating behavior, sleeping behavior, and emotional status in a good way. Some people don't feel their food is good, but when they watch the food shows, they feel some vicarious satisfaction, and then they can better enjoy their food, something like that. That's the kind of direct impact of technology on people.

Bri:

It is interesting, because, again, I think for the broader public, issues of AI have only really come to the forefront with generative AI, and almost entirely ChatGPT, but that same kind of machine learning drives the algorithms that we are engaging with constantly, like social media or streaming platforms, things like that. So these things are around us; we live alongside them, and they do impact us, right?

Kyrie:

Exactly. Yeah, AI is the hype right now, especially generative AI and ChatGPT, but I think what we encounter more frequently in real life is not necessarily AI-based. Some of it is: sometimes there are AI-driven, AI-based recommendations. A lot of the time, we just use social media and online forums. Well, those still have recommendation systems; AI is everywhere. But I think we still need to understand how those technologies, the not-100%-AI technologies, impact people's real lives. That's still important research, and it should not be drowned out by the hype around generative AI nowadays.

Bri:

Yeah, this has been super fascinating. So I just wanted to know: where can people learn more about your work and maybe read some of the things that you're currently publishing on these issues?

Kyrie:

I have an up-to-date personal website and also a Google Scholar page. I try to ensure open access with preprint servers like arXiv, other research servers, or the university server, so people can have timely access to my work without being delayed by the peer review process. Also, people are encouraged to email me to discuss research; I'm always open to those chats.

Bri:

Yeah, and we will definitely include a link to your website in the show notes for this episode. But I wanted to thank you for participating today. Are there any last points that you wanted to offer, maybe something we didn't get to?

Kyrie:

No, I don't think so. I think we had a really nice discussion today. Thank you, Bri!

John Moist:

GradLIFE is a production of the Graduate College at the University of Illinois. If you want to learn more about the GradLIFE podcast, blog, newsletter, or anything else Graduate College-related, visit us at grad.illinois.edu for more information. Until next time, I'm John Moist, and this has been the GradLIFE podcast.