Ethical AI Podcast

Episode 2: Demystifying the Technology

September 7, 2022
The IU Center of Excellence for Women & Technology

Discussion with David Crandall and Sam Goree about AI technologies themselves with an eye to helping a lay audience understand terminology, how the technologies function in the real world, and how we interact with them.   

David Crandall
Luddy Professor of Computer Science, Director of the Center for Machine Learning, and the inaugural director of the Luddy Center for Artificial Intelligence

Sam Goree
Luddy School Informatics PhD Candidate 

Reference:
Arab polymath Al-Khwarizmi, from whom we get the words “algebra” and “algorithm”: https://www.britannica.com/biography/al-Khwarizmi

Transcript

INTRO MUSIC 1

 Laurie Burns McRobbie:

Welcome to Creating Equity in an AI-Enabled World: conversations about the ethical issues raised by the intersections of artificial intelligence technologies and us. I'm Laurie Burns McRobbie, University Fellow in the Center of Excellence for Women & Technology. Each episode of this podcast series will engage members of the IU community in discussion about how we should think about AI in the real world, how it affects all of us, and, more importantly, how we can use these technologies to create a more equitable world.

 INTRO MUSIC 2

 Laurie Burns McRobbie:

I imagine that, like me, some of you have questions about what exactly artificial intelligence is, what kinds of technologies are included in those definitions, and, at a high level, how they work. To help demystify the terminology, I'm joined today by two scholars from the Luddy School of Informatics, Computing, and Engineering. David Crandall is Luddy Professor of Computer Science, Director of the Center for Machine Learning, and the inaugural director of the Luddy Center for Artificial Intelligence. Also with us is Sam Goree, informatics PhD candidate in the Luddy School. Both David and Sam focus on computer vision, and Sam's interests extend as well to computational design. We're going to talk specifically about AI technologies, what they are and aren't, and how those of us not involved in the technology itself can understand various terms and functions. And we're going to talk about how understanding a bit more about the technology itself helps us engage in efforts around creating equity. I should note that David is in the studio today, and Sam and I are coming in by Zoom. Welcome, David and Sam. We're going to get into the technology side of our topic in this episode, as I've said, but without getting too detailed or esoteric, you'll all be happy to know. I thought we could start with some basics. How would each of you define AI, or AI technologies? David, do you want to kick us off here?

 David Crandall:

Sure. And if we're not careful, we could spend the next hour trying to define AI, and you will get different definitions from everyone you ask. I think I might say something like: AI is the study and development of technologies that try to perform tasks that seem to require human intelligence. And that's sort of a word salad on purpose, because what I want to avoid saying is that we're trying to design techniques that think like humans, because that really isn't the goal. Usually the goal is to perform a task: have a car be able to drive safely down the road, recognize what someone is saying when they call into an automated helpline, or whatever. And the way that the technology works is generally very different from the way that people would do the same thing. And that's important, I think, for understanding some of the successes and also some of the failures of modern AI technology.

 Laurie Burns McRobbie:

Sam, what would you add to that?

 Sam Goree:

Thanks, Laurie. I guess I would add that AI means different things to different groups. Within computer science, people tend to think of AI in terms of the problem statements: it's not a well-defined, computable problem that we're trying to solve; it's something that requires some amount of intelligence. But then also, from the business perspective, AI technologies sometimes refers to a particular category of really recent developments, particularly cutting-edge techniques in machine learning, which I'm sure is a term we'll get to a little later. And so reconciling these two definitions is sometimes a bit challenging.

 Laurie Burns McRobbie:

Yes, and I think, as we've talked about, computer science is obviously very much invested in thinking deeply about the functions of artificial intelligence, and you're an informatics student, which gets us a little more into how those are applied. And so that brings different perspectives. And you mentioned machine learning. That's certainly one of the areas of artificial intelligence research and application that gets a lot of attention. Are there other technical areas that you think belong in a large, broad definition of AI?

 David Crandall:

Well, historically, and we could talk more about the history of AI, either here or in another episode you might want to consider, there have been a variety of different techniques that have been used for AI. The most recent, and by far the most fashionable and popular one right now, is machine learning, which is where, in order to create an algorithm that seems to do one of these tasks that requires intelligence, you collect a large amount of training data. So for example, if I want to create a program that can detect cats in images, and I'm making something up here, the way that we would get started these days is that you would collect a big collection of images of cats and images of other things and feed them into a machine learning model. And the machine learning algorithm would then try to automatically learn how to separate cats from non-cats. That's a recent development in the history of AI. So there are many other kinds of approaches that aren't learning-based: things like using logic, writing down logical rules about how to solve problems, using rules that are written by human experts in order to accomplish the AI task. There's a variety of other kinds of techniques as well. So those two terms, AI and machine learning, are often used interchangeably, but technically, machine learning is one part of AI, a very important part of AI.
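
For readers of the transcript who want to see what that workflow looks like in practice, here is a minimal, hypothetical sketch in Python using scikit-learn. The data is fake, just random numbers standing in for pixel values, and the labels are invented; the point is only the shape of the process David describes: gather labeled examples, let the learning algorithm find patterns that separate the labels, then ask it about new data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend training data: 200 tiny "images" flattened into 64-value feature
# vectors, half labeled 1 ("cat") and half labeled 0 ("not a cat").
X_train = rng.random((200, 64))
y_train = np.array([1] * 100 + [0] * 100)

# The learning algorithm searches for patterns that separate the two labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# A new, unseen "image" gets a prediction based on whatever patterns were found.
new_image = rng.random((1, 64))
print("cat" if model.predict(new_image)[0] == 1 else "not a cat")
```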

 Laurie Burns McRobbie:

And talking specifically about your field, computer vision, is what you do in the area of machine learning? In other words, being able to recognize faces, or speech, or movement, or objects.

 David Crandall:

That's right. Yeah, I'd say probably 98 percent, and I'm making up that number, but something like that, of the work that goes on in computer vision is machine learning-based at some level right now. And what happened in the community is this: computer vision is something that has been studied for decades, and initially I think we thought that we could write down rules to describe to a computer how to see. So, how do you describe what a face is? Well, a face is like a circle that has two little circles in it that are the eyes, and in between them is this, you know, nose-like thing, and so on; you can sort of write down these rules. And there are various other approaches, statistical approaches, probabilistic approaches, and so on, to do that. But over especially the last 10 to 15 years, we've discovered that these machine learning algorithms, where we collect a training data set and we give it to the machine learning algorithm and it figures out its own patterns about what makes a face a face, actually work way better in practice. And frankly, it's also more scalable, because now we don't have to write down those rules for every single new object that you'd like to detect in the world. Instead, we can collect training data and have this general machine learning algorithm learn what that object is.

 Laurie Burns McRobbie:

Right. Can I ask you one other, I guess, definitional question? Another area that comes up when we talk about AI technologies is neural nets. Can you give us a little more context for neural nets?

 David Crandall:

Sure. So neural nets are not a new idea; research into neural networks goes back many decades, at least into the 1950s. And they have this very exotic name; it sounds like maybe we are growing brains in the lab and connecting wires and computers to them or something like that. But that's not what it is. It's basically a mathematical model built on some very basic understanding of how the brain works. And so it's just a bunch of math; it corresponds to adding some numbers together, or multiplying some numbers together, and doing this at a very large scale. You can think of the neural network as having these little components called neurons. They're very, very simple components, but when you connect them together in a kind of complicated way, these days with many millions, billions, or even larger numbers of neurons, these very complicated networks are then able to do kind of amazing things. And again, the inspiration for this is a very rudimentary motivation from neuroscience, where we know that the brain consists of these little processing units that are connected together like this.
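
As a rough illustration of "it's just a bunch of math," here is a toy sketch, not any real network, of what a single artificial neuron does: multiply its inputs by weights, add them up, and pass the result through a simple rule. The weights and inputs below are invented; in practice they are learned from data, and modern networks chain millions or billions of these little units together.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then a simple nonlinearity (ReLU):
    # negative totals become zero, positive totals pass through unchanged.
    return max(0.0, float(np.dot(inputs, weights) + bias))

# Made-up numbers, purely to show the arithmetic.
x = np.array([0.2, 0.7, 0.1])  # three input values

# Two "hidden" neurons, each with its own weights, feeding one output neuron.
hidden = np.array([
    neuron(x, np.array([0.5, -0.3, 0.8]), 0.1),
    neuron(x, np.array([-0.6, 0.9, 0.2]), 0.0),
])
output = neuron(hidden, np.array([1.2, -0.4]), 0.05)
print(output)
```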

 Laurie Burns McRobbie:

Yeah, thank you; that's a nice kind of framing of the terminology. So artificial intelligence has been around for a long time, we know that, as a term to describe various areas of research and development. But today we're seeing it everywhere, it seems, in the real world, along with what seem like unprecedented amounts of investment and development. Can we talk a little bit about that? David, you're probably best placed to respond to this. Could you talk briefly about what's gone on historically, and why today is different?

 David Crandall:

Yeah, sure, I can try. I'm not a historian of computer science, so you'll probably get a somewhat biased answer here. But if you're not in computer science, you may not realize that artificial intelligence is something that people have been thinking about since really the dawn of computer science. In fact, Alan Turing, who is maybe considered the father of computer science, was thinking about artificial intelligence in the very, very early days. And the term artificial intelligence goes back to the late 1950s. So this is an area that we've been thinking about for 70 years. And throughout that time, there have been a lot of bubbles and busts along the way. There's been a lot of excitement: we talked about neural networks, for example, and there was a lot of excitement early on about neural networks. People thought that maybe within a few years these neural networks would be so powerful that our computers would be talking to us and walking around and doing our chores for us, and this was back in the 1950s and 1960s. And of course that didn't pan out, because the problem, it turns out, is way, way harder than that. Just because a computer can play chess relatively well against you isn't nearly enough to drive around the world in a safe way. And so over time different techniques have emerged, and this idea of the neural network is one that keeps coming up and then fading away. And actually, the resurgence of interest in AI, at least from a technical point of view, over the last 10 years or so is really because neural networks have started working really, really well on a really wide variety of problems. And I think there are several ingredients that made that happen. One is, of course, 70 years of technical work and engineering work. Another is just that the computers we have now are so much faster than ever before, and all of these neural networks require lots of calculations, so all of that computation power is really important. And then a third factor would be the large amount of data that's available for training these networks. You know, I worked at Kodak, the Eastman Kodak Company, the camera company, about 20 years ago, and back then it was actually really difficult to get images; for example, we would have to take them on film and scan them into the computer one by one. Well, these days we could download probably a million images from the web, or more, just in the short duration of this ramble I'm giving you. So the amount of data that's out there is just amazing. And then I think there are other factors as well: the fact that each of us is carrying a computer around in our pockets, which can be enabled with AI because it has a microphone in it, it has cameras, and so on. All of this has sort of converged, I think, so that AI is really working in a way that wasn't the case before. I'm sure Sam has some thoughts on this, too.

 Sam Goree:

Yeah, that's great. The one thing I'd like to add is that it's just impossible to overstate how important the internet has been for the current AI boom. Just the fact that 20 years ago there wasn't any volume of data available to an average researcher; people at Kodak were struggling to put together a hundred images. I've spoken to researchers who collected datasets by taking their digital camera out onto the street and just taking photographs for an hour. And because the internet exists, researchers can download millions of images in the time that David was talking. The flip side to that is that the photos that people post online are, well, they're probably better than the photos I could take, but they're not necessarily the perfect representation of the way the world works, right? The internet was created through DARPA funding; it's mostly a US-based thing, even though it's worldwide; and it sort of puts forward a view on reality that isn't necessarily true to the real world.

 Laurie Burns McRobbie:

Yeah, very, very selective. You're also, I think, pointing to something that we'll be talking about a lot more in other episodes as well as this one, which is the role that each of us plays in producing the data that is going into these machine learning models, whether we are aware of it or not, or whether we've even done it intentionally or not; this has become very much a kind of human-produced thing that these technologies are now able to work on in new ways. So thank you both for taking us through that. I know there have been times in the past, and my career in technology goes back not 70 years, but maybe more like 40, various times when AI was the thing, the next thing where people were saying all the smart money was going, and then of course nothing would seemingly happen. And I think now, of course, as David, you pointed out the factors that have led us to this place, and, Sam, with your point about the internet itself, things are very different today, and it is involving all of us. I want to talk about one piece that you both mentioned, which is algorithms and how they function in this AI-enabled environment, perhaps differently than they have functioned in computing environments in the past. I'll venture a very basic definition, which you're both free to disagree with, which is that an algorithm is a set of instructions for calculating certain outputs in a step-by-step fashion, based on certain inputs. Very basic. And they far predate the advent of computers as we know them today. One example I love to cite as a woman in technology is that the 19th-century mathematician Ada Lovelace is often credited as the earliest to see the potential in Charles Babbage's calculating engine for executing algorithms. But they predate even Ada, as I understand it; I'm no historian of technology either. But more to the point, as I understand it, algorithms function differently in an AI environment, maybe because of how machine learning works. I'd love to hear both of your views on algorithms in the context of AI, and especially how they interact with our notions about equitable outcomes.

 Sam Goree:

Yeah, sure. So the concept of an algorithm is very old, ancient, in fact; the Euclidean algorithm for computing the greatest common divisor of two numbers comes to mind. And even the word algorithm is quite old. It's actually based on a European mishearing of the name of an Arab scholar, Muhammad ibn Musa al-Khwarizmi. He wrote a book on methods for calculating and balancing equations by filling in unknowns, which ended up being the foundation for algebra. And all of the methods that he wrote in the book were, you know, methods from al-Khwarizmi, so people started calling them "al-Khwarizmi-isms," which eventually was misheard and shortened into "algorithms." And that interpretation, where an algorithm is a list of steps written in a book, I find to be really helpful when thinking about this. Because in that sense, an algorithm is a text: it's a set of instructions that a human put together to solve a problem. And that gives us a slick lens to think about algorithms more critically: they're not handed down from on high as the solutions to problems; they're really something that humans have come up with and that humans have decided are the right set of steps to solve a problem.
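
As a small aside for readers of the transcript, the Euclidean algorithm Sam mentions really is just a short, ancient list of steps; here is a sketch of it in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, remainder of a divided by b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12
```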

 Laurie Burns McRobbie:

And humans can interrupt, you know, perhaps can change and intervene. David, your thoughts here?

 David Crandall:

Yeah, so I really like what Sam said; I think that's exactly right. If you want to think about an algorithm in your daily life, or at least my daily life, I think of recipes. When I'm baking something, for example, I follow the recipe, and they're really very basic instructions; I don't have to use any creativity. It says, you know, measure out a certain number of teaspoons of salt, put it in, and then stir it for a certain amount of time. And just like the algorithms, going back to what Sam said, those recipes for cooking things are written by humans. And as I am cooking that thing, I am basically, just like the computer, applying the steps of that algorithm step by step. Now, unlike the computer, when I'm cooking I might try to do some things off script; I might try to change the amounts and see what happens, and so on. The computer doesn't do that; it's just told to multiply numbers together and add them together and check whether one number is larger than another, and so on, very simple, basic instructions like that. Now, thinking about AI specifically: AI, of course, like anything that uses a computer, has an algorithm underlying it. It's a series of steps that the computer is processing. But it changes a little bit when we start thinking about this machine learning world. Because what we might have thought of as an algorithm before, like earlier, when I was talking about how we might write rules to recognize what a face is, look for a circle, look for circles inside that circle that correspond to eyes, that could be an algorithm that I wrote in order to identify faces. But in this machine learning world, with machine learning kinds of techniques, where we're instead collecting a large set of training images and then asking the algorithm to find its own model of what makes a face a face, there's still an algorithm that is looking for the patterns in the data. But what we don't necessarily have direct control over is what pattern it has found. And so we have programmed the pattern-finding algorithm, but the pattern that it found in the data may or may not be what we expected. And so, going back to my face recognition example, for instance, if we collected data from faces only of people with black hair, for example, the machine learning algorithm might learn: okay, in order to be a face, or to be a human face, you have to have black hair. And that wouldn't be correct, but it was a pattern that it found in the data that explained the data that it was given. So underlying everything there are still algorithms going on; what's a little bit different now is that instead of a clear sequence of steps that a person has written, often today these algorithms are designed to find their own patterns in data, and then use those patterns in the future.
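
To make David's hair-color example concrete, here is a contrived sketch with invented data. Every "face" in the tiny training set happens to have black hair, so the learning algorithm can explain the data perfectly using hair color alone, and a perfectly good face without black hair is then rejected. The feature names and numbers are hypothetical, chosen only to illustrate the failure mode, not taken from any real system.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row is [has_eyes, has_nose, has_black_hair]; label 1 = face, 0 = not a face.
# In this made-up training set, every face happens to have black hair.
X_train = np.array([
    [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1],   # faces, all black-haired
    [0, 1, 0], [1, 0, 0], [0, 0, 0], [1, 0, 0],   # non-faces, none black-haired
])
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = DecisionTreeClassifier().fit(X_train, y_train)

# Hair color alone explains the training data, so a face with eyes and a nose
# but without black hair is now classified as "not a face."
print(model.predict([[1, 1, 0]]))  # prints [0]
```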

 Laurie Burns McRobbie:

I should note, by the way, that Sam referenced the origins of the word; you will be able to access this podcast, if you haven't already, off of the CEWiT website, and we will have references like the one Sam mentioned there, so you can follow up and look at some of those directly yourselves. I want to talk a little bit more about this question of how artificial intelligence, as we apply it in the real world, interacts with our concerns about equity and fairness. And I know we can all point to lots of examples where the output of a particular function, a particular algorithm, has perhaps produced an outcome that we don't think is fair. An example of this is, I think, a few years ago, Amazon discovered that its hiring algorithms, which it was using to sort applications for jobs, preferred men, and it had to do with historic data looking at who Amazon had hired; I'm probably oversimplifying the case. But again, we might look at that and say, well, this isn't what we were looking for; this isn't the outcome that we wanted. So we do think a lot about the issues of equity, and they become ethical issues, indeed. We're talking here today, though, about tools, right? I mean, these are the computers, the functions; they are the tools that allow us to do things, but the tools themselves are, we hope, neutral. But at the same time, in every era of technological leaps forward throughout human history, we have had to think about the safety of the tools. So my long-winded preamble to my question is: in your work, David as a faculty member, Sam as a faculty member in training but as a student, how do these questions of, if you will, safety, and I'll use safety as a stand-in for equity and fairness, how do those issues come up in the work that you do?

 David Crandall:

Well, there are so many different ways to answer that question. When I think about tools of technology and unforeseen consequences of things, I think, for example, of the like button on Facebook, or the retweet on Twitter. It's such an innocent little feature, where people see a post that they like and then they press the button, and then, if a lot of people like it, it's shown to other people. And when that feature was introduced, whatever it was, 15 years ago, I'm not sure, I certainly didn't see the consequences of it, and that 15 years later we would be talking about how that little feature could be used to manipulate elections or to generate misinformation. It just seemed like a really small little button. And I think that's sort of the issue that a lot of us in computer science face, maybe coming from that angle, because the reason that I got into AI, and computer vision specifically, is not really because I wanted to build robots that would be my friend or something like that. It was more because there are really cool technical problems to be solved. And specifically, when you think about computer vision, it touches all these related fields: it touches how the brain works, it touches optics, it touches learning, it touches neuroscience, like we've talked about, so many different things. And so there are just really cool kinds of technical challenges. And most of the papers that my students and I write are addressing some really small, tiny, maybe important but small sort of contribution that solves some technical problem. And in the grand scheme of everything that's going on in AI, it's a little bit hard to predict how that little innovation could potentially be used down the road. And so I think this is something that, honestly, at least I am still catching up to, and I think computer science in general might still be catching up to: what are the ethical concerns that we should have? How do we express those? And I don't really know exactly what the answer is yet. Certainly, I think the research community, and also the teaching community, the education community, is increasingly aware of these issues. So for example, many of the major machine learning and computer vision and AI conferences these days ask you explicitly to write a paragraph that discusses the ethical concerns that may arise from the work that you've done. But again, a lot of it is very speculative, because one is working on one small little detail and has sort of no idea how that will eventually be used. But I think in general we just need to think about this more. We need to think about this more deeply, and we need to be aware of the potential implications of the things that we do, even when, on a day-to-day basis, we're just trying to solve technical problems. Cool little technical problems.

 Laurie Burns McRobbie:

Yeah, and it's important that you do. Sam, what are your thoughts here?

 Sam Goree:

Yeah, it's easy to forget that when these sorts of problems emerge in AI systems, they're not emerging in some totally isolated system in a vat, right? All AI is embedded in society. Code running on a computer can't hurt you, but it can hurt you if you put it in control of your car, or if you put it in control of your social decision making, or if you put it in control of the way that you allocate resources in some way. And so very often these sorts of algorithmic bias problems don't necessarily emerge from the algorithm; they don't necessarily emerge from the data; they don't necessarily emerge from the decision makers who put the algorithms in place; they sort of emerge from the whole system as a whole. And that makes it really difficult to solve these sorts of problems, because the computer vision or machine learning researcher working on some tiny little technical problem isn't going to see any problem with their work. They're not going to think that they have any ethical concerns, but then maybe something emerges in the larger system. And that really requires us to look at things maybe more holistically than we have been before. And that's challenging.

 David Crandall:

I have sort of, oh, sorry, just a follow-up thought. Laurie, you mentioned the, maybe it was Amazon, I'm not sure which company it was, that had that system that basically, based on some historical hiring data, started to discriminate against women or whatever, which, of course, is horrifying. But I think maybe there is a small silver lining to it. And the silver lining is that discrimination is nothing new. People, when they're making hiring decisions, historically have discriminated, and we had to rely on those hiring managers; we had no idea what was going on in their brains as they were making these decisions. One good thing is that if we have AI systems that are doing this, and if we can really agree as a society on what the rules are, what are the things that we do want to be factors, what are the things that we don't want to be factors, then, at least in theory, those AI systems could be much more objective, and they could be used to reveal biases in the world, where right now we're relying on people's sort of objectivity. So I think it's very nuanced, right? Because, on one hand, it's horrifying that AI learned to do that. On the other hand, the reason that it learned to do that is that those biases were already there, and it sort of helped us reveal them. And maybe there's some way that it could help correct them.

 Laurie Burns McRobbie:

Yeah, I think, again, this sort of comes back to where, if you will, the rest of us users come in, and let's just very broadly say those of us who are interacting with technologies as we go about our daily lives: both our awareness of those factors and how these things can play out in automated processes, but possibly also, if you're in a position as a hiring person, or you're making some sort of decision that relies on an automated process, the importance of us becoming smarter about questioning and being critical thinkers about what that outcome is. So let me just, we're coming up, I think, to the end of our conversation here, but we've been referencing examples that we might all believe are sort of in the negative realm of things, and I think many of the important things about these technologies are what they enable us to do that's positive, and that improves human functionality and human society. And I guess I'd love to hear both of you talk about it from the standpoint of what you would like everyone to understand about AI technologies, and perhaps ways in which you'd like to see all of us interacting and helping to think about how they can be used in positive ways. What should everybody know?

 David Crandall:

So I think a lot of themes here have naturally emerged in the conversation, and it's been great along those lines. I'd just like to reiterate that these AI algorithms are not some exotic things that we have no control over; we know how they work, we can control them, and the biases that emerge because of them are because of biases in the training data that's collected, biases in the real world. And we can have a conversation now about how to correct that. I think we're at a very exciting time in AI because of all the technical developments, but I think we're also at a really exciting time, and also a really critical time, in doing what Sam was talking about: thinking about how we want AI to impact our lives and how we want it not to impact our lives. And that's something that we get to decide, right? I mean, as a society, we get to decide to what extent we're going to trust these technologies and to what extent we're not going to trust these technologies. And it will grow in tandem with the technologies themselves. And we're able to have conversations exactly like this one, and I think that's exactly what we need. We need people from all different disciplines, all different perspectives, to weigh in on what, ultimately, we want the world to look like, and how AI can help us achieve that goal.

 Sam Goree:

Yeah, I want to elaborate on the last part of that: as AI technologies start to work, computer science is sort of coming out of the basement. Computer scientists are no longer working on these isolated technical problems; they're really engaged with the world as it exists and dealing with more human, more social, more societal problems. And that requires a different sort of person to solve than a lot of the technical problems that have been traditionally studied in computer science. So there's a lot of need for really smart people with training in other areas to help us navigate these issues. And I think it's a two-way street, too: AI technologies allow us to deal with data in totally new ways. A lot of academic disciplines have been traditionally defined by scarcity, either scarcity of participants or scarcity of documents. Some of my research involves design history, and there are a lot of things being designed by a lot of people right now. We could do the same sorts of analyses that we've done in the past and try to pick a couple of perfect examples to analyze in depth, or we could leverage AI technology to think about things more holistically and less reductively. And that lets us see the world in totally new ways. And I think that's really worth doing, even if we have to, you know, regulate out the horrible biases along the way. So that's sort of my take on it.

 Laurie Burns McRobbie:

Yeah, these are great; thank you both so much for your perspectives here, and for all the great information that you brought to bear on this conversation. I think your point about the importance of involving people who come from different perspectives, different disciplines, is very much at the heart of what this whole podcast series is trying to get to. And I hope that as we go on, these conversations can continue and maybe build on each other as we go forward, and maybe in a little while we'll be having an even broader conversation about these things. And I think particularly the points that you both have made about what the potential is there, that we do need to think in terms of, and maybe this is the right way to put it, Sam, controlled abundance: we have abundance, and we need to learn how to use it wisely and fairly. So thank you both; I appreciate your time, and I wish you both the best.

 David Crandall:

Thank you so much. It was so much fun.

 Sam Goree:

Yeah, thank you so much for having us.

 OUTRO MUSIC

 Laurie Burns McRobbie:

This podcast is brought to you by the Center of Excellence for Women and Technology on the IU Bloomington campus. Production support is provided by Film, Television, and Digital Production student Lily Schairbaum and the IU Media School; communications and administrative support is provided by the Center; and original music for this series was composed by IU Jacobs School of Music student Alex Tedrow. I'm Laurie Burns McRobbie. Thanks for listening.