Ethical AI Podcast

Episode 1: Framing the Topic

September 1, 2022 | Season 1, Episode 1

Three faculty members, Vivian Halloran, Nikki Pohl, and Beth Plale, discuss the ethical and equity issues that arise with the application and use of AI technologies, and why it's essential to look at AI through the lens of equity and inclusion.

Vivian Halloran
Professor of English and Associate Dean for Diversity and Inclusion in the College of Arts and Sciences and Acting Director for Curriculum for the Liberal Arts and Management Program (LAMP)

Nikki Pohl
Professor and Joan & Marvin Carmack Chair of Chemistry and Associate Dean for Natural and Mathematical Sciences and Research in the College of Arts and Sciences

Beth Plale
The Michael A. and Laurie Burns McRobbie Bicentennial Professor of Computer Engineering and the Director of the Data to Insight Center

References
Elana Zeide, "Artificial Intelligence in Higher Education: Applications, Promise and Perils, and Ethical Questions," EDUCAUSE Review, August 2019. https://er.educause.edu/articles/2019/8/artificial-intelligence-in-higher-education-applications-promise-and-perils-and-ethical-questions

Safiya Umoja Noble, "Algorithms of Oppression: How Search Engines Reinforce Racism," New York University Press, 2018.

Transcript


INTRO MUSIC 1

Laurie Burns McRobbie:

Welcome to Creating Equity in an AI-Enabled World: conversations about the ethical issues raised by the intersections of artificial intelligence technologies and us. I'm Laurie Burns McRobbie, University Fellow in the Center of Excellence for Women and Technology. Each episode of this podcast series will engage members of the IU community in discussion about how we should think about AI in the real world, how it affects all of us, and, more importantly, how we can use these technologies to create a more equitable world.

INTRO MUSIC 2

Laurie Burns McRobbie:

Ethical AI is a term in widespread use, and it means different things to different people, different industries, and different disciplines. The Center of Excellence for Women and Technology is focusing on ethical AI alongside many other topic areas designed to equip and empower women and their allies with technical knowledge they can use now and take into their future careers. For us at the Center, ethical AI means equitable AI: the constellation of technologies, data, and human interactions that ensure a level playing field for everyone. The history of technology development has resulted in a truly extraordinary array of benefits, but they have not benefited everyone equally, and in some cases the deployment of certain AI-based decision-making capabilities actually perpetuates inequality. We want to help change that by supporting faculty, staff, and students in learning about AI in the real world, and by empowering them to be knowledgeable and responsible users and creators. One of the ways we hope to do that is with this podcast, which will explore how AI is being used, and should be used, at Indiana University, from different disciplinary perspectives. We're going to start by taking a broad, cross-disciplinary view to introduce a number of specific topics, many of which will be covered in future episodes. I hope this sparks your curiosity to keep tuning in.

With me today are three members of the IU Bloomington faculty, each of whom has spent a lot of time, in different disciplines, thinking about the implications of artificial intelligence for us as teachers, researchers, public servants, and, as well, human beings. Vivian Halloran is Professor of English and Associate Dean for Diversity and Inclusion in the College of Arts and Sciences, and Acting Director for the Liberal Arts and Management Program (LAMP). Vivian teaches students to recognize evidence and learn to analyze it by reading about AI, inequality, and algorithmic justice. Nicola Pohl is Professor and Joan and Marvin Carmack Chair of Chemistry and Associate Dean for Natural and Mathematical Sciences and Research in the College of Arts and Sciences. Nikki's research centers on automation in the chemistry space. Beth Plale is the Michael A. and Laurie Burns McRobbie Bicentennial Professor of Computer Engineering and the Director of the Data to Insight Center. Beth is a co-PI of a large federally funded AI institute, where she is working on issues of democratizing AI. Vivian, Nikki, and Beth have all been integral parts of the Center of Excellence for Women and Technology as faculty leaders and advisory council members, and, in Beth's case, as a co-founder of the Center. I should note that Vivian and Beth and I are in the studio today, while Nikki is joining us by Zoom. Welcome to all three of you.

Guests:

Thank you, happy to be here.

Laurie Burns McRobbie:

Let's start by deconstructing this term, ethical AI. It's in wide use these days, and lots of researchers and writers are focusing on it from different perspectives. For many of us, and certainly for all of us here today, ethical AI speaks to how AI technologies can perpetuate biases and exclusions, and, maybe even more crucially, how we can influence the use of these technologies to enable a more equitable environment. But let's start by having each of you describe your own approach to thinking about ethical AI. Beth, do you want to kick us off?

Beth Plale:

Perfect, yeah, thank you. Before I go into a framing for how I think about it, let me give a really quick example, one I use in my class. Suppose we've got sensors around Bloomington, and the sensors are picking up wild birds. A set of data gets collected, and then an AI model is trained on the birds around Bloomington. Later that model gets put out there, and we'll call it an AI service: someone can take an image, send the image to the AI service, and the AI service will come back and say, well, it's a cardinal. It's making a prediction for you based on all the training it's done on the data around Bloomington. That's what an AI service is. So when I think about the ethical obligations, I think about it from the point of view of a service. There's the obligation of those who build it: should that AI service have been built? Could it have been built better? Is there a misguided approach to building it? But there are also downstream actors who have an obligation. Those who acquire the AI service: should it have been bought for this use case, or is there a misapplication? Those who use the application: should it have been used in this setting, or is there improper use? And then those who use it later. Say you're using a prediction service that's trained in and around Bloomington, you take it to Oregon, you hand it an image, and you say, what is this? Well, it knows about cardinals, it knows about blue jays; it doesn't know about the birds in and around Oregon. You've misapplied a trained model in a different setting, and the person who's using it needs to understand that. And then again, you use the model now, you collect the data now; ten years later, after environmental changes, how useful is that model? So it's those who use it a decade later, too. It's not only the developers: it's those who purchase it, who use it, and who continue to use it, who all need to understand what it is they're using and how it's useful and not useful. I'll stop there.
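To make the example concrete, here is a minimal, hypothetical sketch of the failure mode Beth describes; the species names and feature sets are invented for illustration, and a real service would run a trained image model rather than this toy matcher.

```python
# Toy stand-in for an AI service trained on one region's birds. The key
# property it shares with a real classifier: it can only ever answer with
# labels that were in its training data; there is no "I don't know".

TRAINED_PROFILES = {
    # "Learned" from hypothetical Bloomington training images.
    "cardinal": {"red", "crest", "short-beak"},
    "blue jay": {"blue", "crest", "white-belly"},
}

def predict(observed: set) -> str:
    # Score each known species by feature overlap and return the best
    # match, even when nothing matches well.
    return max(TRAINED_PROFILES, key=lambda s: len(TRAINED_PROFILES[s] & observed))

print(predict({"red", "crest"}))          # in distribution: "cardinal", as intended
print(predict({"orange", "black-band"}))  # out of distribution (say, a bird seen
                                          # in Oregon): still answers "cardinal",
                                          # confidently and wrongly
```

The same sketch covers the decade-later point: if the environment drifts so the profiles no longer match reality, the service keeps answering anyway.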

Laurie Burns McRobbie:

Vivian, your thoughts here?

Vivian Halloran:

Well, I like Beth's explanation of how a data set may not apply in a context different from the one it was meant for. The same, I would say, goes for ethics. An ethical system is based on the assumption that there are some universal truths about how people should behave toward one another, toward the Earth, and so on. But these values change the more we know, the more we see the impacts or the effects of actions on people. So what's important when I think about ethical AI is the notion that it's not a permanent or universal thing. It's something that needs to be constantly revised and updated, and we need to speak up when we notice: hey, this AI application is not working for X people, but it is working for Y people, right? Why is that? So: questioning assumptions, and really looking out for ways in which AI systems can help augment or enable people to do what they do better, and to help one another function better, rather than just saying, oh well, that was the way somebody did it, and it was bad then. It's less important to find blame than to fix things to align with what we know to be most fair now, as we keep changing, expanding, and broadening that definition. That's what I think we should try to keep doing.

Laurie Burns McRobbie:

Nikki?

Nikki Pohl:

So, I completely agree with Vivian and Beth's take on ethical AI. Coming from the sciences, when I heard Vivian say, well, it can change over time, I thought: that's exactly what we don't want. Once you're analyzing your astronomy data, it should withstand the test of time. What's interesting to me, though, is that so much of the AI we've been developing has been modeled after human intelligence; it's supposed to be artificial human intelligence. To me, the more exciting part is thinking about truly artificial intelligence, things that a human being cannot do, so that it can augment our abilities. And part of ethical AI is also thinking about the ability of AI not to replace human beings but to augment what we can do very well.

Laurie Burns McRobbie:

Right. Thank you, all three of you; there's a lot to think about here. I'd like each of you to talk a little more specifically about how artificial intelligence, and aspects we might think of as being part of artificial intelligence, are affecting your disciplines: computer science and engineering, English and language and text, and chemistry. If you have comments as well on how you see it affecting the academic world more broadly, we can do a follow-up question on that; I'll come back to it in a minute. But let's start with how it's affecting your discipline. Vivian, want to kick us off with this one?

Vivian Halloran:

Sure. Well, now there's predictive technology that helps people write more clearly. It's like a built-in grammar teacher whenever you use services like Grammarly, or even just Word or Google Docs. A lot of the work we used to do in composition was explaining not just how to fix bad grammar, but how the system of language worked. So there is a way in which students will no longer sit for explanations of why number agreement makes the meaning of a sentence clearer. Instead, they just run their paper through Grammarly, or through Word or Google Docs, and it improves. We have to meet that challenge by making the explanations of why certain things happen more compelling, so that students can, in essence, do what Nikki said and use Grammarly, or the AI that helps with predictive word choice, to enhance what they already say, rather than relying heavily on these tools to do their thinking for them. So that's one thing. And then we have access to a lot of interesting technology in the classroom for literature, but there's a lot of resistance on the part of literature professors to encouraging the use of, you know, novels that have QR codes that will take you to a video or something like that. If you're teaching contemporary literature, that's really good. If you're doing archival research, you can use digital humanities tools to enhance the clarity of old manuscripts so you can see the text better. So there are ways in which people are using it, but it's not universally embraced.

Laurie Burns McRobbie:

And Nikki, where are you seeing these technologies changing or affecting chemistry?

Nikki Pohl:

So one of the great examples that I bring up in my undergraduate organic class is the idea of a computer doing the work of an organic chemist and figuring out how to make a molecule. Molecules are three-dimensional in space, and that has been very difficult to translate into zeros and ones in a machine. So for decades chemists have thought, we're safe, we don't have to worry about it; our skills are not replaceable or augmentable. But now programs have come out that can do that. They can't quite do it the way I do yet, but you can start seeing that they're getting toward being able to take all the literature we have produced over many, many decades and start to assimilate it in ways that would be impossible for a human being to do. So we're starting to have to grapple with the aspects of our jobs, the skill sets that we've taught historically to every set of graduate students, and even a little bit to the undergraduates, and to rethink: is this still the most valuable skill set? It's still an incredible brain-teaser puzzle, but now we need to learn more about all these other things too. And then there's the initial scare: are we all going to lose our jobs? I would argue no. If I don't have to do that kind of work, I can start looking at where the holes in the data are. There's a huge section of AI, of course, that we don't even see, because we have no data; we have no reproducible large datasets in the vast majority of human endeavors, and that is still where most of the sciences are. So we have to think carefully about where we go to get those datasets, and how we analyze them in a way that will actually augment what we do. And the first thing I would say in my classes, too, is that what we decide to spend our time and money and resources on is a completely human decision that has nothing to do with AI. What sort of applications, which drug we want to try to design, what kinds of materials and what properties: those are human constraints and desires that have nothing to do with the technology itself.

Beth Plale:

If I may, Nikki, I think that's a really important observation. Speaking from informatics, computer engineering, and computer science: digitization has gotten cheaper, social media has yielded a tremendous amount of data, and computational resources have grown to allow analytical methods over large data sets, such that this kind of analysis is really dominating the computing disciplines, because of the newness of it all and the discoveries that are possible there. In a sense it is opportunistic, Nikki, for the reasons you're saying: the data you get is the data that's been digitized. I saw this in working with HathiTrust, which is digitizing books, volumes from university libraries. The decisions about what got digitized were the decisions the individual libraries were making. So you end up with 17 million books, and what do you have as a corpus in a specific discipline? Well, you sort of have to figure that out, because the needs of analysis are very different from what the libraries are trying to do in selecting books for digitization; very different criteria. So one looks at a huge data set and says, great, I've got everything; it's like, no, you don't. And now what do you do with that? I do think the fact that it opens up research opportunities is really important. But I am concerned, because I look at issues of ethics, equity, and accountability, both in my research and in my classrooms, and I look at a field that's moving very, very quickly, and I'm sort of like, well, let's slow down, let's talk about some of this. There's some just running forward with what we can discover, but in some senses it's running forward without necessarily being aware of the history in other disciplines that provides a context around data sets, a history that data science people are having to relearn. So some of that slowing down is to work more deeply in concert with the disciplines, such that the developments are meaningful for both working together. It's exciting, and there are considerable issues there.

Vivian Halloran:

And I would add that something Beth said just reminded me that what the advent of the digital humanities has brought on is a change in the way that humanists work. Because the experts in the content know the content, but they don't necessarily know how to use the tools, or how to best take advantage of some of the technology that's there. And so we're learning to work in teams, which is not part of our disciplines; we're all trained to be solitary scholars looking at the archives or our books. So this is changing the way we're doing business, even if not all of us are incorporating AI technologies into how we teach or how we do research. We're all benefiting from a field that's making it less stigmatized to work with others.

Laurie Burns McRobbie:

Really interesting points. I'd like to ask you to say more about what you see your students experiencing, whether in the lab or the classroom, in terms of changes in the discipline and how that might affect how the discipline needs to evolve. Vivian, you spoke of the importance of curiosity, of helping students push past the result the tool produces; they don't necessarily question what goes on behind it. Are you finding that students are responsive to these calls to push past just what the technology says?

Vivian Halloran:

Not so much when they come into the classroom, but once we start talking about things. For example, I'm going to wear my management and human organization major hat: in the LAMP program, when I teach the Arts of Communication, students are shocked to find out that part of their interview process for jobs will include AI-mediated interviews. That just terrifies them. But the moment we talk about what those are and show them what that looks like, they say, oh, but I make videos of myself all the time, and they begin to realize that it's not unlike their normal experience of interfacing with media; it is simply now part of how employment and HR are handled. In that regard, I think they're realizing how much they already know. So it's a good kind of shock, but then they need to know more, and that sparks their curiosity, because the prospect at the end of it is a job. Since they have that intrinsic motivation, they want to find out more.

Beth Plale:

Yeah, I guess I can offer another perspective on your question, and it's an important one: AI is being used in our administrative functions. I have a role that has me interfacing with institutional and campus IT people, and AI is already present. It's present in marketing to prospective students, in estimating class sizes; it's at the institutional level. And I'm citing from an EDUCAUSE article that we can share the reference to. It's used in student support, not necessarily at IU, I haven't verified that, but in general it's used in student support: guidance counselors giving guidance on the classes students would take and getting their schedules worked out, which is arguably a hard problem; I think that's a good use of AI. But also in recommending courses that students can take, and recommending majors. And then finally, at the instructional level, we're seeing systems that allow for personalized learning platforms. So students are encountering AI not in direct classroom instruction, but in their interactions with other campus people, in ways that I think it's helpful for them to know about. And I also think it's important for the administrator, the guidance person who is using an AI tool to guide someone toward one major or another, to know the limitations of the tool they're using. So I do think it's incumbent upon campuses not only to identify where AI is being used, but to educate the people who are using those tools to guide our students going forward. There's that aspect that I think is directly impacting students, and it needs a little more clarity, perhaps in a deeper podcast than what we're doing today.

Laurie Burns McRobbie:

Yeah, I think that is a whole topic in its own right. Certainly when we really think about the experience of students in an educational environment that's enabled and augmented by these technologies, what does that mean? And of course we're still in this pandemic, unfortunately, where we've had an awful lot of online learning and a whole series of systems behind it; I think we'll definitely be exploring that in a future episode. I want to note that, as Beth mentioned the EDUCAUSE report: wherever references are made here, we'll be posting them alongside the episode itself on the CEWiT website, so you can go there and find the report Beth mentioned, as well as any others that come up in this episode. Now, thinking about the academic world, but turning from how we operate as an institution, or how any higher ed institution might operate, to the role that universities have in society with respect to ethical AI: what role should universities be playing for society at large when we think about these issues? I'll throw it out for any of you to jump in and take the first crack at that.

Nikki Pohl:

So one of the things that strikes me is that we are at a public institution, and we are a nonprofit, and so much of the money and funding in AI so far has come from the for-profit world. That means the values and the goals for their AI are going to be very different from what you would see from a public standpoint: the kinds of datasets that are collected, the kinds of AI that are invested in. So I feel like one of the major roles of universities is to shine a light on the other aspects of AI where it could be useful, that aren't already being pursued by corporate partners, for example, and also to point out the ethical issues and the potential problems with equity and outcomes, depending on how investments are made in AI. We have the luxury of not having to worry about pleasing a particular employer to get a product out very quickly, so we have the time to think, and also the atmosphere to think. I think that's a very important role for universities: that we all, as employees and students, generate and keep these discussions going about the use of AI technologies.

Beth Plale:

You know, if I may, I think our federal government has a role to play, particularly our funding agencies. The funding agencies will fund large-scale computers and storage for computational analysis with large-scale models. What is not in place are large resources for AI analysis, and the storage and availability of the large data sets that companies do have access to, so it is making academic researchers in the AI space less competitive. The federal government and the White House recognize that, and have established a task force to look into doing just that: establishing a National AI Research Resource that would allow AI researchers to be more competitive. With tools like deep learning, the compute needs are quite intense, because the data sets are large and the hardware is heavily specialized for that type of analysis. So again, if we're going to stay competitive, it has to be a cooperation between the academic researchers, who continue to do the kinds of work they do in support of science and engineering, and the federal government, which funds the computational resources that are needed.

Vivian Halloran:

And I think you bring up a really good point, that the federal government needs to help out academic researchers. One interesting aspect that I see as connected is the fact that academic researchers work in spaces where our students reside; we are a residential campus. So one of the ways I see my students using an AI product is using the app to find out where the campus bus is. That's great, because it lets them know when they can hop on the bus. It also has a downside, because we then have all sorts of data on where people go at any given time, tied to their phones and other ways of gathering data. So we need to be mindful that, as a university, we gather a lot of data to make things work at a large scale, and we also know way too much about our students, things we don't necessarily need to know. How do we keep those kinds of data points separate from, you know, where our students are supposed to be? And when do we need to bring them together, so that if somebody disappears mid bus route, you know what happened? Those are interesting questions, because our students are both living on campus and attending school, sometimes even working, so we know so many aspects of their lives.

Laurie Burns McRobbie:

Yes, and we do need to talk about data privacy protections and policies around personal data use, because they're really inseparable from the use of AI and other technologies for things like surveillance and monitoring, automated decision making, and so on; and, as you say, something as seemingly benign and useful as helping people catch campus buses. Like lots of big, complex organizations, whether in higher ed or other sectors, IU also has an obligation to protect the data it collects and to use it carefully and appropriately. Hopefully we can be a model for responsible policies in this area, as well as providing leadership through the research we do on ethical data and technology use. We will definitely be discussing these issues in a future episode. We're coming close to the end of our conversation, and I want to ask a general question of each of you: what's one thing you would like people to understand about how AI is functioning in our world today, from an ethical standpoint, an equity standpoint, and from your different disciplinary perspectives?

Beth Plale:

I can certainly start. I am all about accountability of AI. Let's use these services, like the bird prediction service I referred to, which is benign, as an example. This is an AI service, like the work I do in the AI Institute, and it needs to be accountable. One way it can be accountable is to be explainable; people may be familiar with the term explainability. This is a design and development decision: how does one make that tool such that it can explain the decisions it's making to whomever it's interacting with, at an appropriate level? That's one of the challenges the AI Institute is taking on. Couple that with the questions I'm big on: what does the developer do, what do the other stakeholders do, what can be done in a regulatory way? But there's also community action that I think has a role here. I'm really taken by the right to repair movement, a national movement that says, look, the electronics I bought, I should be able to take to my local shop, and they should have a manual to fix it, they should have the diagnostic tools to fix it, and they should have all the spare parts they need to fix it. Right now that's not happening. I see the same thing in AI: we need a right to question, where people facing AI tools that are influencing them, for instance, "you shouldn't go into pre-med because you failed chemistry," have the right to question that tool and a right to make that tool accountable. What this community activity would do is put pressure on legislators in each of the states to consider legislation that would force companies and developers to put more time into making these tools more accountable than they are right now. There is also a right-to-contest activity, but that's slightly different: the right to contest is that when something has gone wrong, I have the right to bring recourse. That's very different from the right to question: this thing needs to explain itself to me, and I have the right to demand that it does.
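As an illustration of the explainability idea, here is a hypothetical sketch in which a prediction carries its evidence and training scope along with its answer; the field names and the toy matcher are invented for illustration, not an API from the AI Institute.

```python
# Hypothetical sketch: a prediction that can "explain itself", giving a user
# exercising a right to question something concrete to interrogate.

from dataclasses import dataclass

@dataclass
class ExplainedPrediction:
    label: str            # what the service decided
    evidence: list        # the observed features that supported the decision
    training_scope: str   # where and when the training data came from

def predict_with_explanation(observed: set) -> ExplainedPrediction:
    profiles = {
        "cardinal": {"red", "crest"},
        "blue jay": {"blue", "crest"},
    }
    # Pick the best-matching label, and keep the matching features as evidence.
    best = max(profiles, key=lambda s: len(profiles[s] & observed))
    return ExplainedPrediction(
        label=best,
        evidence=sorted(profiles[best] & observed),
        training_scope="birds observed around Bloomington, 2022",
    )

p = predict_with_explanation({"red", "crest", "long-tail"})
print(p.label)           # "cardinal"
print(p.evidence)        # ['crest', 'red']: the basis a user can question
print(p.training_scope)  # the scope a user in Oregon could challenge
```

Even this much lets a user ask why "long-tail" played no role in the answer, which is the kind of questioning Beth argues people should be able to demand.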

Laurie Burns McRobbie:

That seems like it would have huge implications for criminal justice as well, speaking of what state legislators might do with those laws. Vivian?

Vivian Halloran:

So, I was thinking of Safiya Umoja Noble, who, in her book Algorithms of Oppression, talks about victims of revenge porn, and the fact that there is still no legislation robust enough to try to return some sort of privacy to victims of such crimes. That's something we should all be more aware of: how not to fall prey to it, but also how to recover if we do. So that's one thing I wanted to point to as something that might happen to people without their realizing it. The other thing: I love how Beth was talking about the right to question, because what it brings up for me is my parents, who are always questioning the GPS lady. That illustrates that the technology might be working fine, but if you don't trust it, you're not going to get where you're going, because you're going to do what you want. So we need to examine the ways in which we either trust or don't trust the technology we use, and either use it willingly or not; to make informed decisions about our own engagement with it. And then it also brings up the idea that in Europe, people are fighting for the right to be forgotten: how much do you actually want proof of having existed to be out there, or data about you to be collected without permission? Those are issues that I think are interesting illustrations of ethical AI, and of how people handle the after-effects, how people interface with AI services, or whether people opt out of AI altogether, and how feasible that might be.

Laurie Burns McRobbie:

Mhm. Nikki?

Nikki Pohl:

I completely agree with what Beth and Viv have said already. The only thing I would add is that everyone should remember that AI is ultimately a collection of zeros and ones, and when you think about this amazingly beautiful, complex, colorful, three-dimensional, fantastic life, there are a lot of abstractions that have to go in between that and zeros and ones. Humans have input into every one of those abstraction layers, and so that is something you should take into account, question, and be a part of, because in reality we are never going to completely capture that wonderful life.
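A toy sketch of the abstraction layers Nikki describes; the attributes and the codebook here are invented for illustration, and each human-designed step decides what survives on the way to zeros and ones.

```python
# Toy illustration: every layer below is a human choice, and each one
# throws information away on the way to bits.

bird_in_the_world = {
    "species": "cardinal",
    "plumage": "iridescent red, duller under the wing",
    "song": "cheer-cheer-cheer",
    "behavior": "feeding at dusk",
}

# Layer 1: a human decides which attributes get recorded at all.
recorded = {k: bird_in_the_world[k] for k in ("species", "plumage")}

# Layer 2: a human maps free text onto a fixed codebook.
encoded = {"species_id": 0, "is_red": 1}  # 0 means "cardinal" in someone's codebook

# Layer 3: the machine sees only the numbers.
bits = [encoded["species_id"], encoded["is_red"]]
print(bits)  # [0, 1]; the song and the behavior are gone
```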

Laurie Burns McRobbie:

Yeah, that’s wonderful.

Beth Plale:

Yeah, if I could just add on to that: I completely concur that there's abstraction after abstraction in these tools. Tying it back to what I mentioned earlier, the person who builds, those who do the acquisition, those who use the tool now, and those who use that tool a decade later: how are we educating them so that they're aware of the tools they use? I don't think we're doing a good job there. What I like about what the Center of Excellence for Women and Technology is doing with their AI for All is that they're acknowledging that the need to understand AI is pervasive and goes beyond our undergraduate and graduate students and their research and studies. I like that, and I think we need more, and we need more penetration into our academic administrative services, because AI is there now, and it's something that we need to be responsive to.

Laurie Burns McRobbie:

Yeah, absolutely. Well, some of the future episodes will get more into these conversations, and one that's coming up soon looks at how AI is used in K-12 classrooms, so we can start thinking about how we're bringing up the next generation to hopefully feel more empowered by the tools rather than powerless, rather than having to accept what the tool gives them as opposed to interacting with the tool and working to make it better, whether through the right to question or whatever techniques we find going forward. Thank you, all three of you, for your time today, your wonderful insights, and your great conversation. We're looking forward very much to picking up on several of the comments, topics, and points that you brought up in future episodes of this series, and we hope that you all tune in to those going forward. Thank you all.

Guests:

Thank you. 

Laurie Burns McRobbie:

This podcast is brought to you by the Center of Excellence for Women and Technology on the IU Bloomington campus. Production support is provided by Film, Television, and Digital Production student Lily Schairbaum and the IU Media School; communications and administrative support is provided by the Center; and original music for this series was composed by IU Jacobs School of Music student Alex Tedrow. I'm Laurie Burns McRobbie. Thanks for listening.

OUTRO MUSIC