Tech'ed Up
What's happening on the frontlines of tech? Tune in for a zippy conversation about emerging technology hosted by industry veteran Niki Christoff. From the C-Suite and Capitol Hill to AI and crypto, quantum computing to the decentralized internet, Niki breaks down the trends in tech to help savvy listeners get even smarter. Guests include experts, enthusiasts, regulators, policymakers, CEOs, and reporters.
New episodes premiere bi-weekly on Thursdays. Subscribe for the latest episodes on YouTube or listen on your podcast app of choice.
Reframing AI • Chloe Autio
DC-based AI policy and governance expert Chloe Autio joins Niki in the studio to break down the state of play in Washington for AI regulation this year. They explore AI’s PR challenge, talk about the awkward optics of recent Congressional briefings, and Chloe explains why the term “artificial intelligence” conceals more than it reveals.
“It really should be more about not whether or not AI is going to take your job, but whether or not someone who uses and knows how to use AI will replace your job.” - Chloe Autio
- Learn More at www.techedup.com
- Check out video on YouTube
- Follow Niki on LinkedIn
[music plays]
Niki: I'm Niki Christoff and welcome to Tech'ed Up.
Today's guest is Chloe Autio. She's a DC-based independent consultant who focuses on AI governance and policy. It's an area that couldn't be hotter right now and I'm thrilled to have her here to talk about the state of play.
Chloe, welcome to the studio. Thank you so much for taking the time to come in, in person.
Chloe: I'm so excited to be here. It was cold. It's cold! And I'm from Montana, so like, I know cold, but it feels really cold here today.
Niki: But we've become weenies. [Chloe: Weenies!] I'm from Indiana, where [Chloe: yes!] I used to live at the bus stop [Chloe: laughs] in, y’know, frigid weather with wet hair in middle school, and now I'm, like, such a baby.
Chloe: [laughing] Yeah, me too! Also, our children. [Niki: I know!] We're raising our children to be a little weak, right? Like, the days that my stepdaughter has been out of school in the last few weeks are shocking to me. We have like five inches of snow on the ground.
I'm, like, we gotta toughen up people. Seriously!
Niki: I mean, I'm with you, although I don't know if that's like a popular concept: toughening people up.
Chloe: No, it's probably bad. That's like probably bad on my part. [laughing]
Niki: No, it's good! [chuckling] We should toughen, they should become heartier [both laugh] to be able to deal with those flurries of snow.
Well, thank you for making your way to DuPont Circle. We met through a mutual friend and colleague, Dorothy Chou, who actually has one of the most listened-to episodes ever of this show [chuckling].
Chloe: That does not surprise me at all. I'm a huge fan of Dorothy.
Niki: I am, too. And she's a huge fan of yours. So, we met up, and we were talking just career stuff. [Chloe: yup] And then I discovered what you do for a living, and AI couldn't be hotter in DC. So, I truly convinced you to come and talk about it.
Chloe: And here I am. And I'm really happy to be chatting with you today. Thanks for having me.
Niki: Why do you think suddenly we've become gripped in this town, in D.C., with the concept of regulating AI?
It's not new technology. Two years ago, I actually watched an interview you did, and you said, “Well, we're kind of through the hype cycle with the tech.”
[both laugh]
Chloe: How I wish I was right at that time!
Niki: Right?! So what's changed? Why are we back to it? I mean, Congress and the White House, we'll talk more, but they've dropped a lot of other tech issues to just hyper-focus on this. Why do you think that is?
Chloe: Yeah, so, I think the answer is actually pretty straightforward, and it's that generative AI and these tools like ChatGPT, or DALL-E, or Midjourney, or Stable Diffusion provide an interface by which ordinary consumers, not commercial customers, but, like, regular old folks, like my parents in Montana, can get their hands on these technologies and actually understand how powerful they are.
And so, that sort of rampant and widespread consumer engagement in and around these technologies has just overall boosted the awareness about them, and with that has come awareness about risks, about job displacement. Right? “Holy crap! This thing is so good at writing a memo or, y’know, analyzing a legal brief. What are the implications of this?”
As far as why and how, y’know, so many people are talking about AI in Washington, obviously those concerns have percolated. We have a big election season coming up. [Niki: Yes!]
And we know, y’know, what impacts technology can have on elections, positive and negative. And politicians themselves are understanding that their constituents are aware of and are using this technology, and that they need to be more proactive about thinking about the impacts, governing it, and understanding how it can be used in such consequential contexts as elections.
Niki: One of my observations in Washington is that we will sometimes have an issue where there's a ton of bipartisan support [Chloe: mm-hmm] that you think is going to get done. So, last spring it was TikTok. I thought they're definitely going to get something done. [Chloe: Totally. Right?] I was like, [Chloe: [laughing] Totally, of course] Oh, it's happening! Bipartisan support, you know, the House and the Senate. And at the end of the day, voters don't care. They don't care about regulating TikTok. [Chloe: Right]
And it was sort of the same thing where you don't have masses clamoring for H-1B visa reform. [Chloe: mm-hmm] Right? So that we can have high-tech immigrants come into the country. That was nearly unanimous in the Senate, and we never got it done. [Chloe: mm-hmm] And so, I think you're right: it's the presidential election year, and by having ChatGPT in their hands, people can see for themselves [Chloe: mm-hmm] what these machines are capable of. [Chloe: Right]
So, if the voters care in 2024, DC cares.
Chloe: Right! I think that's exactly right.
Niki: We were talking once before when we first met about the risks versus the potential positive use cases of AI. I have a thesis which is, well, AI has a PR problem, [Chloe: mm-hmm] I'll say that to start. Like, a terrible PR problem, [Chloe: Totally] not helped by the South Park episode on it, which was hilarious. It's a ChatGPT-
Chloe: Oh, I haven’t watched that!
Niki: I don't really watch South Park, but I do recommend the ChatGPT episode. But essentially, people see the harms or feel the harms. They're not really experiencing the harms yet. [Chloe: mm-hmm, mm-hmm] But the idea that you would have discrimination, the facial recognition, which we do already have that harm, which does a terrible job with people of color and different minority communities. Or the weird results [chuckles] when you put “woman” into anything on the internet, [Chloe: Yup! Yup!] it makes her younger and thinner. [Chloe: It is kind of weird] It's super weird.
So, people can kind of see these harms, but they can also extrapolate from what happened with social media [Chloe: mm-hmm] that maybe these will be job displacers, terrible for society, y’know, create election fraud. [Chloe: mm-hmm]
But those feel really real, and fear sells, but the positives, I feel like, have not been told in a compelling way. Like, people don't really get all the positives. And so, I think there's a major PR problem, which is part of the other reason we're seeing this in DC.
Chloe: Yeah, and I also think that the approach that a lot of companies have taken, y’know, flying their executives in, coming to, like, sit at a forum. These are targeted at policymakers, right, and their staff: to educate them, to talk about the benefits of AI, to talk about what they're doing, maybe even, to govern or propose regulations for AI.
Like, you would think that all of these things would be helping AI's PR problem, but they're not.
Regular people see, I think, this sort of dealing as well, this coming in, flying to D.C., we're going to have a panel, we're going to have this fancy event. Y’know, it's all well and good, but yeah, you've hit it right on the nose. I think it just doesn't translate to ordinary people who are trying to figure out, like, “What impact is this going to have on my life? On my job? On my kids? And what will the benefit be?”
Niki: Right! And so, I think you're right. It's interesting that you brought up these sort of seminars [Chloe: Yeah] that have been aimed at educating, y’know, Senators and members of Congress. And the optics of that, as a communications professional: it's a bunch of billionaires lecturing a bunch of Senators, [Chloe: mm-hmm] which I don't think is super relatable [Chloe: I don’t think it’s relatable] or makes people feel great.
Chloe: I hope you've seen this photo. It sounds like you have. Of, y’know, the members all sitting [Niki: yes] in the little galley, or the gallery chairs, like they were going to some sort of, I don't know, lecture? Meeting? [chuckling] [Niki: Yeah!] It was really sad! [Niki: being lectured by billionaires]
[crosstalk]
It was. I know. It was like, “Oh my god! Why did they-” That was the first thing that so many people in my circles passed around. It was like, “Oh, my god! Did they not think of the optics of this?”
[chuckling]
Niki: The photo op is bad.
Chloe: Literally!
Niki: Right. The intent was good.
Chloe: The photo is so bad.
Niki: Yeah. And that's what, that's all people see.
So, okay. Some positives. [Chloe: mm-hmm] When I think of AI and I think of the positive use cases, I think of things like: every emergency room in the world, when you walk in, they'll be able to speak every language. [Chloe: mm-hmm. Yeah] You're going to have much faster, better care and better communication. [Chloe: Absolutely] You're going to have, when you have a scan for cancer, you're going to get an automatic AI-perfect replica the next year. Like, there's all of these really positive things, but it doesn't have as much sizzle as, like, “the robots are going to take your jobs and then kill society.”
Chloe: Yeah, and I think that this goes to sort of the communications or PR problem that you mentioned.
I mean, even like three or four years ago, we were talking about how, y’know, when AI is discussed or promoted, it's, y’know, “faster, more productive, more efficient.” Like, these things mean a lot, again, in commercial settings, but they don't mean a lot for, again, regular consumers whatsoever. As we go into this year, we're going to see a lot more focus, as we already have in some settings, y’know, on marketing AI as safe or responsible. [Niki: mm-hmm]
We're starting to see that already a little bit. Just because something is labeled safe, I think a lot of consumers think [Niki: [chuckling] Right!], “Well, hang on a second. Let me actually understand and evaluate, like, what that really means.”
I've seen and heard a lot of firms, y’know, selling responsible AI solutions, or, like, y’know, “This is the responsible AI,” as if the AI they were selling before was, like, not responsible, right? [Niki: chuckling]
Niki: Right! [both chuckle]
Chloe: Like, should we have that discussion? But it can be a little bit misleading too, because from my background working on AI governance, building out what you want to call, quote-unquote, responsible AI programs, ethical AI, whatever you want to call it, those programs are still nascent. They're still early. They're still, y’know, getting worked out.
And even though, y’know, companies with a lot of resources are doing much better, Microsoft, Google, y’know, even OpenAI and some of these labs that are really focused on sort of model governance, governance of these systems, particularly thinking about, y’know, data governance and content moderation, it's a highly manual process still. And we're, like, just getting started on what really governing generative AI looks like, and I don't think that consumers really understand or have been involved with that process either.
Niki: Well, and I think this goes to one of the fundamental things about persuading the public to be in favor of anything: it takes a long time to build trust and you can lose it in a second. [Chloe: Totally] So, calling something responsible [Chloe: Totally!] when you know that there are going to be fiascos. And I'm not even saying like, y’know, catastrophic fiascos, but just things that are going to look terrible on the nightly news. [Chloe: right, right]
You know that's going to happen. [Chloe: right] So then if you label it that you're setting an expectation that you probably aren't going to be able to meet.
Chloe: Correct! That's such a good point. As we sort of look at this year, I think about, y’know, these questions: Where can we use AI? Where should we be thinking about using AI?
It's sort of in these, these areas where we actually need to realize what safe and responsible looks like for the consumer.
So, I'm thinking of this article that I read yesterday, actually, about how, in the last, y’know, five to ten years, road rage among Americans has significantly increased, something like year over year. Like, road rage, and, it's not called drag racing, but, like, street racing.
Niki: Oh, my gosh! Yes! In Washington, DC. This is sort of, I mean, 14th Street, right? For anyone living in DC.
Chloe: Street racing accidents and fatalities among young adults have gone up 20 percent every year for the last five years. And, y’know, I think it's because people like the thrill that they get from racing these cars, especially teenagers. Among adults, the sort of reckless driving charges, these accidents, y’know, fueled by alcohol, driving under the influence, just being inattentive, i.e., from using a smartphone or just, like, not paying attention while you're driving, have just skyrocketed. And it was really interesting because this article sort of covered how this actually is an area that is super ripe for AI and technology implementation.
Whether it's to track sort of distractedness in a car, or to help someone who may struggle with a substance abuse problem, y’know, getting in and out of a car and deciding whether to make a drive. But also the psychological implications, right? Just the fact that Americans feel disconnected from one another and lonely and angry about maybe the direction of the country, the state of the world. And they said they need a place to put these feelings, right? They need a place to put these emotions, and they're putting them on the road.
Niki: This is fascinating.
Chloe: It is scary.
Niki: It is scary. I think it's anecdotally evident [chuckles] when I drive around. [Both chuckle] And I think also related to this sort of aggression, thrill seeking, loneliness, it's this assumption of bad intent from other drivers.
So what can tech do to help us with this road issue?
Chloe: Yeah, you're right. To bring it back, like, this is just a clear example of where, y’know, there's a problem that affects everyone. It's consumers, every consumer, everyone who drives a car, right?
Making cars safer is, I think, a genuinely attractive proposal to any American, any mother, any father. And it's an area where, y’know, I wish and hope that more of these companies think, “Okay, instead of making our AI, y’know, quote-unquote, safer, quote-unquote, more responsible, and selling it so that we can commercialize and make more B2B, y’know, business-to-business transactions as a platform, let's try to figure out how we can develop AI for some of these problems, right?”
Looking at using AI in cars to make things safer. Helping with the emergency room, like you mentioned. I sense that, particularly now that we've, I don't want to say hit a plateau, but people and businesses are now aware of what these large language models can do, what we'll see this year is how they get implemented into businesses, right?
Everyone will have their sort of, y’know, en-suite chatbot, whether it's to analyze their compliance documents or do customer service faster. Those are all great things, but I'm really, really interested to see how organizations and government maybe can partner to understand how these, y’know, commercial tools can work in these other settings where they're providing direct impact to consumers.
Niki: So, I want to go back to the road thing for a second [Chloe: mm-hmm] because you said something that was really important. You said instead of focusing on quote-unquote responsible or safer AI, [chuckling] use AI to make people safer.
Chloe: [excitedly] Yes, exactly!
Niki: Use the tools.
Chloe: What an amazing change in the narrative.
Niki: Right?!
Chloe: That's exactly right.
Niki: And I can give a couple specific examples where this already exists but people don't think of it as AI.
So, if you're in an Uber and your driver is braking hard, [Chloe: mm-hmm] you'll get a notice that says, is your driver [chuckling] braking too hard?
Chloe: I didn't know that. Yeah, I've always had easy-braking drivers maybe.
Niki: Or sometimes it's a beta test that they're doing. But so you can say, “Yes, my driver is!” So they're, y’know, checking on the safety of your trip.
The same thing for if you haven't moved for a long time in an Uber, you'll get a notice: “Hey, are you and your driver okay? We noticed you haven't moved in, y’know, an abnormal amount of time.” That's AI. [Chloe: mm-hmm] Because it's pattern recognition, right? [Chloe: Totally] They're looking at however many trips; this is aberrational; we'll send a note.
But the other thing, you mentioned being from Montana, [Chloe: mm-hmm] but I was in Montana in August and for the first time in my life, because I rarely am driving on those country roads anymore, [Chloe: mm-hmm] I realized that my brights on the rental car went on and off automatically, like on when I needed them and off when a car was approaching. [Chloe: uh huh]
And I had this moment by myself where I was like, “Oh, it's AI.” [chuckling]
Chloe: [laughing] I don't know if I would call that one AI! But sure.
Niki: [surprised] You don’t think it is?
Chloe: I don't think so. I mean, I think it's just a sensor.
Niki: Okay, it's probably a sen- Well, that actually leads to another thing, which is, like, what in the hell is it? [cross-talk]
I read today, someone said that AI is not as smart as a cat. [Chloe: Um. Interesting]
Because robots can't jump up and land.
Chloe: Yeah, so I was actually thinking about this on the way here. Like, I actually had a professor back when I was first getting into this stuff at Cal. And he made this amazing comment, which was that the term AI conceals more than it reveals. [Niki: mmmh] It's a terrible term to use.
Because we're talking about this sort of sentient or, like, alive or, y’know, extremely powerful thing with just these two letters. AI is really, y’know, a combination of a bunch of different technological capabilities, y’know, optimization models; if you're talking about, like, large language models, y’know, they need APIs to actually get connected with the rest of the world.
They're deployed in many different contexts, which require, like, bigger technological systems. They all run on hardware. Compute, right, which is where I started my career working on this stuff at Intel. All of these AI systems, [Niki: mm-hmm] which I think is a better frame of reference or term to use, are made up of different technologies, and they're also deployed in different contexts.
I think that gets lost also in both these policy and, y’know, PR-problem discussions, as you mentioned, because people actually don't know what we're talking about.
Niki: No! Mark Zuckerberg doesn't know! [Chloe: 100 percent] Someone recently asked him to define it, and he said he really couldn't. [chuckling]
Chloe: Right!
[both laugh]
Chloe: Well, it's very telling when the, y’know, powerful developers of these systems themselves cannot actually explain what a thing is. Right?
Niki: Right. But you just did a good job. So, I think many people, when they hear artificial intelligence, they think robots. Like, I just said something mechanical. In my head, I'm like, “Well, it's pattern recognition, [Chloe: Yeah] and then they've adjusted it.” And it's a sensor, [Chloe: uh-huh] but you're right: that's mechanical.
What you've just described is really software. [Chloe: Yes!] Big data sets that are going to create the next step on their own. Like, they're going to come up with the next step on their own, [Chloe: mm-hmm] and this is software that's going to enable a lot of things through a system.
Chloe: Right. And I actually think that the distinction between, like [interrupts self] Artificial intelligence can be used in robotic and mechanical systems, but I think that it's really important to distinguish between AI software and robotics. [Niki: Right!]
Which in a lot of fields, especially a lot of academic fields, are sort of thought of as two separate things. And I think that when people think about AI, obviously because of all of this anthropomorphization and, y’know, the way that the media has sort of talked about AI, they immediately think about a robot. [Niki: right?] But really, it's much more sort of subtle, and, not subtle in its power, but it's just much more intangible, sort of, how AI works.
Niki: And this goes back to the point you were just making, which is, you're going to start to see, [Chloe: mm-hmm] so we talked about using AI to make us actually safer.
The other thing is making us more efficient, which, when people think of efficiency, it feels really abstract. [Chloe: mm-hmm] But in our jobs, if you're using AI systems to make things work better, that might be a tangible way that people can grab onto the positives?
Chloe: Yeah, absolutely! [sighs] I mean, if you think about, y’know, parts of a job that feel tedious or rote, right?
Like, y’know, I used to work in an office, and a lot of what I would do was just sort of, like, pull research about different cases or, y’know, what was in the news that day that might relate to what was going on with a certain case.
Or if you've worked on the Hill, right, you're opening mail and you're sort of going through a bunch of different content and letters, and just sort of, like, having to collect a lot of content that may or may not be unique, and it's sort of hard to find the nuggets. What difference would it make to have some sort of technology to collate and sort of amass that information, to make that part of your job a little bit easier so you can focus more on doing things like, I don't know, preparing a brief [Niki: Right!] or, y’know, taking phone calls, doing more in-person interactions?
“Gosh, if I didn't have like these three piles of mail to open, maybe I could spend more time, y’know, having face-to-face meetings with constituents and actually hearing about what they want to share with the member or whatnot.”
And, I know it's been said many times, but this fear about job displacement, it really should be more about not whether or not AI is going to take your job, but whether or not someone who uses and knows how to use AI will replace your job.
Niki: This is so interesting and important.
I actually think where people should be thinking is not, “Do I like it? Do I hate it?” But, “How can I learn to use it [Chloe: yeah] so that I am trained for the next phase of the workforce?”, which we're going to have.
Chloe: On workforce education: just a couple of days ago, I think Arizona State University struck the first academic partnership, actually, with OpenAI, to use ChatGPT as an assistant to help students in a first-year, sort of, English or literature course understand text better and analyze text better. Like, how cool is that?
It's fascinating that, y’know, there is a tool that can provide that kind of really elevated analysis about, y’know, different literary concepts, and it's, like, at people's fingertips, right?
The reality is that, like, people will be using these tools, and that's why I think this partnership is so important. It's kind of like cheating in school, right? There was all this concern, and there still is, right, consternation about whether or not you should be able to use ChatGPT or, y’know, any kind of language model, Claude, whatever, to cheat on tests or, y’know, write papers. And obviously, like, the answer is “No, cheating is bad.” But even if we ban these technologies, people will still cheat, students will still cheat, right? Whether, like, writing the essay on their leg [Niki: right] or the math formula, like, on the inside of their wrist, whatever, right? Like, we can't solve these problems, and so we need to teach people, and students particularly, how to use and improve and advance with these technologies, and not just act like they're not there.
Niki: I think that, right, that's a very reality-based approach to it. So, if you think of it as just enabling a better outcome and you're still learning the content.
Chloe: Right. I have a, I have an anecdote that's actually really interesting. So, my father just retired. [Niki: Okay] He had a very small private practice as a family practice physician, thirty-five years at the same job. He started taking some adult learning classes at the University of Montana.
He got a Master's in English Literature, just for the heck of it. Y’know, professional student, I guess. And one of his classes, I think, is called Ghosts, Guilt, and Guns.
Niki: I love this, [Chloe: Something like that] this is literally, like, if there was a podcast called that, I would listen to it every week.
[cross-talk]
[both laugh]
Chloe: It's, like, something about, y’know, just, like, all the old books or whatever. And they're reading The Turn of the Screw. I actually haven't read it myself, which is embarrassing; he'll probably be mad. He found, because I introduced him to Claude, actually, Anthropic's chatbot, earlier this year, [Niki: Yup] that he could go, y’know, ask Claude to interpret different passages or elements of The Turn of the Screw through different literary interpretation styles.
And he found that it was actually really, really good and very, very helpful. And while he could be, y’know, doing that on his own, sort of scribbling through the library, I thought it was really, really cool, particularly for someone his age, to be able to, y’know, input these kinds of questions into this really powerful tool and be able to think, “Hey, this is actually pretty good.”
And how much easier it was then for him to be able to sort of compare these different takes, y’know, these different analyses, for his class. And he said that he then went and, like, wrote a paper for the class about using the different tools and the things that they taught him, and then they all had, like, a long discussion about it, y’know, the merits of the technology, whether the analyses were right. But it proved to be a really, I think, rich learning experience. Having all these different analyses kind of made the discussion, at least for this group, richer around what they were talking about in relation to this book.
Niki: So, as a communications strategist, [Chloe: yeah] I love this, because you used education and essentially the same example. [Chloe: mm-hmm] Y’know, OpenAI versus Claude, but these two tools that people have. And we went from ASU students, like, being told what a text is, [chuckling] which is not super sympathetic, I think, to the average person, to your dad, who is incredibly sympathetic. Like, “Great! He's having a richer experience in this, y’know, advanced education and continuing education course.”
So, it's like such a feel-good story.
Chloe: It is a feel-good story. It is a feel-good story! And, y’know, I would be really interested to understand sort of the terms and discussions that happened at ASU around sort of widely opening up this tool for students and professors to use. In the story that I read, there weren't a ton of details.
As I suspect with so many of these different implementations and partnerships, the devil will be in the details, and we'll sort of see what happens as the rubber hits the road. And do we want to be doing that? I don't know. [Niki: Right] Like, this is kind of consequential, students going to school and, like, passing or not, or succeeding or not, in their first-year, y’know, writing course.
A lot of it is sort of hinging on the assumption, right, that these tools will be used for good and not nefariously, and will not preserve bias, and all of this stuff. And I actually don't think we know the answer to that question yet. So it's just going to be interesting to sort of see how that plays out in this more, y’know, serious or consequential educational context that isn't, y’know, a bunch of retirees hanging out at a coffee shop.
[both laugh]
Niki: I know, I actually think they're both important contexts, but I think the way people will relate, [Chloe: Totally] it's important for storytelling. But this sort of brings us full circle to what you do for a living, which is advising companies and teams on what might happen with government and what government should be doing.
And I think you just said something, and this is the cynic in me. I'm like, well, the reason they announced it without any details is that it gets them in the headlines [Chloe: mm-hmm] with, like, a very, y’know, sexy headline of partnering and being on the cutting edge of tech. [Chloe: mm-hmm]
And sometimes I think, over on Capitol Hill, you have people looking at this, not because we have new and novel regulatory issues with AI. It's a tech layer on top of existing problems of discrimination and bias [Chloe: mm-hmm] and potential hacking and all of these things. And maybe they're just looking at it because it's, like, a fundraising tool in an election cycle. That's hugely cynical. [Chloe: Sure] But what do you see happening in this town this year?
You're advising clients on how they should approach regulators and policymakers. What's your state of play for 2024?
Chloe: Yeah. I mean, great question. After a lot of hype in the last couple of years, particularly, y’know, different legislative efforts, you have, like, the Algorithmic Accountability Act, which I think was great, y’know, good to put it out there, and other bills on AI.
A lot of different, y’know, regulatory enforcement agencies saying, y’know, “We're going to think about how AI applies in the context of our enforcement authorities. What can we do?” We've seen this from the EEOC and others.
In 2024, I think a lot of the focus is going to be on implementation of the executive order, understandably so, right? Not just because it's an extremely consequential time in AI, but because it's an extremely consequential time for the Biden administration going into an election season, [Niki: right] knowing that, y’know, perhaps we may not have a second Biden administration; we may be going into a Trump administration. And Trump has been extremely vocal already that, y’know, he will repeal Biden's AI executive order on day one.
And so, there is so much focus, so much crunch, so much pressure from, I think, every public servant, really, who has any kind of responsibility in executing on the directives in the executive order to just get it done as soon as possible. And just to talk a little bit about those: I think it's important to sort of inform what will happen for the rest of the year.
We'll see a lot of reports and sort of guidance about different things that the EO mandated. One is, y’know, how to define really what open-source AI means. The NTIA, the National Telecommunications and Information Administration, will put out a report on the availability of what they call open model weights.
So, basically analyzing the risks and benefits of making these really powerful models more open and potential policy implications and societal implications of that. [Niki: mm-hmm] That report, like many others that will come out from different agencies, will inform, sort of, how the administration, maybe even Congress, will write different rules about AI governance and availability.
And then we'll see, y’know, other sort of more process-driven implementations of the executive order, or implementation threads, I guess, of the EO. One of them, obviously, is the OMB's, the Office of Management and Budget's, guidance on how agencies should be using and buying AI. And this actually will have a lot of implications for commercial actors, too, whereas, y’know, the executive order sort of broadly did not.
The OMB's guidance will really set sort of a new standard for how agencies across the federal government should be thinking about risk management when procuring AI. Meaning that, y’know, anyone that's wanting to sell to the government will have to adhere to these new standards. There's a huge gamut of things that the administration will be looking at as it implements this Executive Order.
And I think that, as sort of a follow-on step, we'll really see Congress being very, very watchful of how this all goes, right? What do we learn from these reports? What have we learned about industry's response to these new procedures or policies set forth by OMB in a procurement context? Which agencies are well equipped to deliver on their directives, and have they actually done so?
I'm not saying that some agencies will not, but that certainly happened in the Trump administration with Trump's AI executive order: y’know, certain agencies just not really having the resources, or even the wherewithal or staff, to be able to carry forth the things that they had been asked to do, and not a lot of enforcement.
So, I think that, y’know, while we have this big effort from Schumer and the sort of leading members of his quote-unquote AI gang. And that's actually what they call it, for those that don't know: “the Gang of Four on AI.”
[both chuckle]
Chloe: Y’know, I think they'll be watching really carefully to see how the EO implementation goes and how that should inform policymaking and sort of statutory attempts on AI, knowing that, y’know, pushing something through on bias or anti-discrimination or, y’know, transparency may not be all that realistic.
Niki: Yeah, I think it's not. And you made a really good observation that maybe we end on, which is: during an election year, this final year, [Chloe: right] potentially, of this administration, certainly the final year of this term, you see a lot of action in the agencies, especially in the lame duck. [Chloe: Yup]
After the election, it's like they race to get things done, and maybe some of that would be overturned in a new administration. But that is absolutely a pattern we see in Washington, this sort of lame-duck agency effort.
And then Congress, y’know. For all the things people say about Congress being totally stuck, they actually got a lot done last year. A lot. Not in tech necessarily, although they had the CHIPS Act, but they got a lot of very positive things done, including a gun reform bill. [Chloe: Right, right]
So, I think now we're facing the optics of an election year where a win for the other side or a win at all is just like not tenable politically, so I suspect nothing really gets done.
Chloe: Yeah. [sighs] I'm, I'm a total skeptic about Congress and I say that pretty, pretty freely.
[both chuckle]
Not to disrespect any of my amazing, y’know, colleagues and congressional staff and members, y’know, who are working on these issues and really, to their credit, have massively improved in terms of their literacy about these issues.
I mean, it's been amazing sort of witnessing how the fluency among members and staff alike has really increased in the last three years, y’know, having worked on AI policy-related stuff in Washington since 2017, when, gosh, it was really hard. [chuckling]
Niki: I mean, Will Hurd was the only person in Congress, [Chloe: Truly!] friend of the pod, Will Hurd was like the only person who understood AI!
[both chuckling]
Chloe: Yes, yes! There were, like, y’know, a pinky's worth of members who even, like, understood what it was and could have this sort of discussion, and staff as well, right?
What the American public is really looking for, maybe, from regulators, or, like, when they say regulation, right? It's, like, two-thirds of the American public believes that something should be done on AI regulation. [Niki: Right]
In their minds, they're not really thinking about an executive order. They're thinking about Congress doing something: laying down in statute, y’know, some new rules about what they can expect as consumers, and what sort of rules businesses need to adhere to to keep them safe or be responsible, right, these words that we're using, and actually doing that.
These are complicated issues and we shouldn't not work on them, but I just don't think that Congress, in this environment right now, is equipped to make any serious, serious progress on what that looks like. [Niki: Right]
They should try. They should. I have, like, a laundry list of things that I would love to go talk to Congress about and, y’know, say, “Hey, here's what we should do.” But we're talking just about predictions, right? Like, what will happen this year? I just don't think that's on the table.
Niki: Yeah. Well, and in Washington, everything's a long game. [Chloe: Yup] I mean, you have to go and do exactly what you're talking about, which is pound the pavement [Chloe: mm-hmm] and be on the Hill, talking to staffers, so that when it is a more favorable environment, they can get constructive things done.
Chloe: Yeah! I think that there needs to be a lot more communication. There already is. But I think there needs to be a lot more communication between, y’know, congressional staffers and public servants working in the agencies on implementing this executive order on what that is actually looking like.
How is it going? Like, what are the impacts? Because I think we can all agree, and I dealt with this certainly in my time at Intel, doing things like product reviews and developing impact assessments and working on, y’know, what it actually meant in practice, with data scientists and lawyers, to make a product, quote-unquote, more responsible or more ethical.
Congress is not going to be the organization that figures out what an impact assessment will look like, [chuckles] right? Or what a good framework for governing a model will look like. That's why they said, “NIST, you go figure this out. Develop this AI risk management framework,” right? “This is not within our bounds.”
But I think more work in Congress needs to be done to sort of understand what is and what isn't within the bounds, because a lot of this effort I see has been focused on, y’know, creating these frameworks, or developing an impact assessment, or saying, y’know, here are the documentation requirements we're going to require by statute, in law. And it's like, that will all change very quickly. [Niki: Right]
We can't pass a law like that and expect it to withstand, y’know, the next six months. [Niki: Correct] It's just, the technology moves too quickly.
And so, I think there needs to be sort of more consultation with organizations and people actually doing that work, to understand what is really within Congress's power to work on and solve, or maybe where to give new authority to different regulatory agencies to figure out what that looks like, where they have the expertise, where they have the time to do that, because they just don't in Congress.
Niki: And you could almost picture, like, going back to the optics, it might be nice to see some government workers who are actually doing this, have them [Chloe: Yes! Correct!] talking to members of Congress [Chloe: Yes] about what they're discovering [Chloe: Yes] as they're trying to make things safer for the American public.
Chloe: Totally! I, I want to see that this year. [chuckles]
Niki: Okay, that will be our vision. Chloe, thank you so much for coming on. This was a really fun conversation.
Chloe: I'm so glad. Thanks for having me. It was, it was a treat.