The Penta Podcast Channel

Artificially Intelligent Conversations with Nicole DeCario from AI2

May 16, 2024
Penta

This week on What's At Stake, tune into another episode of our Artificially Intelligent Conversations series. Each month, Penta Partners Andrea Christianson and Chris Mehigan will dissect the latest AI news and developments from each side of the Atlantic, helping you understand how AI will affect stakeholder management, policy, and more. This week, they host Nicole DeCario, Director of AI & Society at the Allen Institute for AI (AI2), to cover AI literacy, policy, and industry norms.

Nicole discusses how AI2's commitment to open-source language models aims to shape a more transparent and collaborative future in AI research, distinct from the closed doors of industry norms. Her insights reveal a pressing need for ethical frameworks and a nuanced understanding of AI technology in order for U.S. regulators to craft effective policy. You don't want to miss this episode!

Speaker 1:

Hi, I'm Andrea Christianson, a partner in Penta's DC office and head of our firm's AI task force.

Speaker 2:

And I'm Chris Mehigan, partner at Penta Group in the Brussels office.

Speaker 1:

Thanks for tuning in to Artificially Intelligent Conversations. This is a recurring series that's part of Penta's What's At Stake podcast.

Speaker 2:

We're facing a time of unprecedented technological innovation in the AI space. Each month, Andrea and I are going to dissect the latest AI news and developments from each side of the Atlantic, helping you understand how AI will affect stakeholder management, policy, and more.

Speaker 1:

Today, Chris and I are talking to Nicole DeCario from the Allen Institute for AI, or AI2. Nicole is the Director of AI and Society at AI2, which supports responsible AI development and deployment, focusing specifically on ethics, literacy, and policy. Before joining AI2, Nicole spent the bulk of her career in the philanthropic and nonprofit sectors. So thanks for being with us today, Nicole, and is there anything you want to add about your background and work before we get started?

Speaker 3:

No, I'm just delighted to be here. Thanks for having me.

Speaker 1:

Awesome. So the first question we want your opinion on is this AI roadmap that was just released by Senator Schumer and his bipartisan team. So, as listeners probably know, this roadmap is the result of almost a year of discussions with industry, academics, ethicists and others over nine AI Insight Forums, and a lot of folks, nicole, on the ethicist side, have expressed skepticism that the roadmap is geared a bit too much to industry and not enough towards mitigating risk. What is your take?

Speaker 3:

Yeah. So the roadmap came out yesterday, and I had time to read it because it was 30 pages, a third of which was an appendix. And I guess I just didn't have super high expectations of this roadmap anyway. I think there was nothing earth-shattering that was released in the report. I assumed it would be a summary. I wish that it was more action-oriented.

Speaker 3:

I am an action-oriented person, and I think we've seen a lot of these reports that have emerged that are, we should be doing this, and this person should be doing this, and this committee should be focused on this, and it was sort of more of the same, and I think that's why people are feeling this sort of dissatisfaction. I don't think I'm dissatisfied. I think that it is on par with kind of what I expected it would be. I would have hoped that it would have yielded clearer direction, particularly around some of the stickier issues that came up during the Insight Forums. We talked about things like transparency, explainability, copyright, jobs. That's all sort of covered, but in the same way it's kind of been covered previously. Again, I don't think there was anything new. I think people are at the point, especially in this week where Google had their big news and OpenAI released their latest cool thing, where people are craving more direction and looking to the government to lead in a more heavy-handed way. I think that we are at this pivotal point. The technology is still rapidly improving, and legislation and regulation is still kind of spinning its wheels.

Speaker 3:

Having said all of that, legislation is moving. There's legislation in committees, there's privacy bills, and states are doing quite a bit. So there are things building and moving, and I guess I just didn't view this report as the thing that was going to propel us forward so quickly. Having said that, to have 150 experts spend so many hours to yield a report, I can also understand why people are saying, that's it? So I guess my headline is: would have loved to have seen more action, would have loved to have seen more directive, would have loved to have seen, we need this law to do this thing, and that was lacking in the report. But overall, not far off from what I anticipated it might be.

Speaker 1:

Great take, Nicole. Let's kind of take a step back. I would love for you to talk a little bit about AI2 as a nonprofit research organization. You've also released your own LLM, OLMo. Can you talk a little bit about your goals with it and what's a little bit different about how you're working on this versus how some of the others are working on this?

Speaker 3:

Yeah. So AI2 is a nonprofit research institute that's been around for 10 years. It started in 2014, and it's one of the four institutes that Paul Allen put his time and energy behind, and AI2 always has been the institute that was doing research to advance the field of AI. AI is a field that's been built on open research, and then the breakthrough that came from that open research, and then the next researcher that picked that thing up, and so AI2 has really been a big piece of that flywheel in getting that research out.

Speaker 3:

Then, at the dawn of ChatGPT, everyone kind of woke up to this: wow, now the thing that we do is on the front page of every newspaper. And so it became a different way of looking at the work that's being done, and also recognizing that we've sort of flipped from AI being this open, research-focused community to now we don't actually understand these models that are coming out, because they're being released in this closed fashion where the recipe for how they were created is no longer being shared. Those things used to be published in papers and discussed at conferences, and it became kind of locked down. And so AI2 decided to create a model to open up that pipeline again, so that we can understand the data and see the inner workings of how the models were starting to make these different connection points to deliver certain outputs. And so that's been AI2's approach to this language model, and we're really building it for researchers. When it first came out, it was by scientists, for scientists. The goal is really to fuel the research community with the tools they need to solve the big challenges that we're seeing. AI2's led a lot of the work on bias research, gender bias.

Speaker 3:

There's just so much that's come out of research in the past that is not able to come out in the same way today, when you're working in this more closed system. Also, all of these challenges: we have personally identifiable information that's in the data, you know, all of the things that we're concerned about, hallucination. In order to solve those problems, we need researchers to have full access to things so that they can reproduce things, so that they can dig into these different connection points and understand what exactly is going on, and we're just not in a state where we can do that when things are closed and in a more challenging fashion.

Speaker 3:

Not only are things closed, they're closed and moving rapidly, and so there's going to be just a lot of catch-up work that needs to happen. So AI2 released OLMo, and Dolma, the data set that it was trained on, so that people could start to dig in and start to see it, and we've seen some cool things come out of it. People are playing around with it, people are using the tools now, and the team is continuing to iterate. We're going to release a different model later this year. It's going to be bigger. The research shows bigger is better. I don't know if that's sustainable, and we'll get into that, I'm sure, later on. But that's AI2's whole approach: open it all up, let the research community do the thing they've always done so well, and then we'll sort of see some of these questions that are arising around safety and challenges start to get solved.

Speaker 1:

Yeah, that's really great. And so let's kind of take this and move a little bit back into the policy discussion. So first I'd be interested in your view of how kind of open and closed models coexist, and then let's also put that into the broader policy discussion on AI globally and what's happening in the US, and what's your view on what should happen at a broader international level as this technology continues to progress.

Speaker 3:

Yeah, so, you know, it's not to say there's not a place for models to not be shared broadly. I think we have to get into what are the incentives that are around to encourage research, and what are the incentives around to encourage these closed systems. And right now the incentives seem very heavily skewed towards closing up the system.

Speaker 3:

We live in a country that has a capitalistic society that values IP, that values all of these things that really support not sharing that information, these proprietary pieces. And so, as we think about AI regulation and AI legislation, we have to recognize that there needs to be a different incentive to encourage and support the research community. Maybe it's safe harbor laws, maybe it's, you know, we have heard the term, these sandboxes, but we have to come up with something that allows for researchers to do their work, and sometimes that's doing their work on copyrighted material, on things that are considered proprietary and IP, which sit in this other bucket of incentive. So they could be seen as conflicting, and I think there are creative ways we can come up with to incentivize both. I think, you know, I've said this for a long time: AI has a definition problem, so we are also talking about the same thing but different things. We're talking about data as one homogenous thing, and data is actually different. Training data versus, you know, fine-tuning data, input data, output data, those are different, and we're kind of just whitewashing them all as the same thing, and I think we need to start to get a little more specific. So, as an example, hopefully the Schumer Insight Forums gave a little more clarity to the nuance that we actually need to be considering when we're thinking about legislation.

Speaker 3:

And AI in the US, you know, there's so much rapid development and innovation that's happening here. And again, I think we're at this moment where people are understanding more about it, which is critical. And, I should add, one of the highlights of the Schumer report for me was there was a section where they talked about AI literacy. I was like, okay, mic drop, I'm going to go home now, because I feel like I've been screaming from the rooftops about AI literacy.

Speaker 3:

We have to educate people about this, and that was in the report. And so the way that you describe the work that I do, the way that you heard me talk about it before, it's like, if we have more ethically developed AI in the first place, and we have policy that is supportive, and we have a population who understands this technology, then we're getting somewhere, then we're starting to mix it all up in a meaningful way. And I do think that there are, you know, opportunities for closed and open to exist. But even the definitions of open and closed are hotly debated within the AI community, and it does sometimes seem to me like we're missing the ultimate point, which is what I consider paramount: harm. What are we actually trying to do? We're trying to prevent harm. And so are we actually achieving that in the terms that we're using today?

Speaker 2:

That's really interesting, Nicole, and I think particularly the point you made there about harm. I suppose, if I can just ask you a little bit more about that, are we talking about harm, and we've explored this a lot in the European debate, in terms of harm to the likes of intellectual property rights holders? Are we talking about harm in terms of risk-based systems, in terms of what can happen to users or to people who have been processed through AI? You know, is that the area you're thinking about?

Speaker 3:

Yeah. So I think we have to consider all of it, and I think at AI2, as developers of a model, we need to think about how the model is going to perform in the real world, right. And so this ties in a little bit to a question we've kicked around in many conversations: ethics, right. So AI2 is not unique in that many of the researchers who work at AI2 never had to take an ethics course, so they're not even thinking about harm, and I think harm and ethics do go hand in hand. So one of the projects that I'm working on with my team is an ethics playbook to help embed ethical consideration into the AI pipeline. If we looked at what goes into creating a model, from training to evaluation to fine-tuning to release, all of the different pieces of the pipeline, where along that pipeline do the people who are building the model have the ability to think more ethically about the work that they're doing and the potential harm? And so we're creating a playbook that's built on case studies. It's largely built on Laura Weidinger et al.'s paper from 2021 on risks and societal harms and ethics, and, you know, thinking about things like discrimination, information hazards, misinformation, malicious uses, all of those different things. Researchers didn't have to think about those when they were just doing research to move the field forward. Why would they? But now we're at a point where it's non-negotiable, and so we have to start to think about how this research project, which was once just going to be a paper, is now actually going to impact somebody's life. And so there are myriad harms. There's harm to the economy, when you think about job loss. There's harm to IP. We're using different terms to think about all of these things and talk about all of these things, and I wish that we would get clarity on some of those terms so that there is less confusion.

Speaker 3:

And then another thing that I think was missing from the report that came out is accountability. I keep going back to this conversation about accountability, because it's like a baton pass. You know, the researchers are doing their research, and then they pass the baton at one point to the next group, and then the user gets it, and it's just this constant baton pass. But where does the accountability live? And who owns which piece, at what time? That all, I think, needs to be really fleshed out, and that's where this conversation about harm and ethics and societal impacts all kind of comes to a head, and those are the conversations that I think haven't been quite sorted out. We know about them, we know we need to do something about them, but we haven't drilled down into that accountability layer, and I think that's the missing piece in a lot of ways.

Speaker 2:

Very interesting. I completely agree with you about the terminology. I know it seems like such a basic thing, and there's lots of other more flashy and exciting things to talk about with AI policy. But I know the EU-US TTC, the Trade and Technology Council, did a lot of work in terms of developing a common taxonomy. And, you know, it's just, let's start with understanding each other, and then we can build out from beyond that, and it is so important that we all speak the same language in that regard.

Speaker 1:

Yeah, so what would you say your view is, Nicole, on kind of where the US is versus sort of where the EU is in terms of regulation? And do you think that there needs to be a broader sort of international agreement, whether it's on taxonomy, whether it's on harms? We've heard people say we need something similar to what we have for nuclear proliferation when it comes to AI. So that's a big question, but where are you on all that?

Speaker 3:

Yes, well, the EU has legislation. They passed the EU AI Act, after many years, after many rounds. I think there's one more round of lawyer review. Is that right, Chris? You'll probably know better than me, but it's like, they have legislation. And I think the other thing is, again going back to the EU and the Schumer report, nobody's going to be happy about all the things all the time, ever, right? So if anyone was expecting a party after the EU AI Act came out or after the Schumer report came out, there's not going to be a party. The EU undertook to create the risk-based system and define what they meant by each category, and it really put a stake in the ground, and someone needed to do it. So kudos for getting that out there.

Speaker 3:

One of the concerns originally that we had started to talk about with the EU AI Act was how open source model developers fit in, and in the final version it seems like they've ironed out a decent amount of that. I think that copyright is still a question mark. What does it mean for copyright and training data? We're just, like, waiting for someone to make a decision, I think, in the US about that. The EU has copyright law, and I'm not an attorney, but they have, you know, provisions in there that open source developers have to follow the copyright laws. And so that's a question mark: what does that mean for all of the work that's kind of gone in? I also think a lot of people are questioning, does that mean models that are already out, or models that are coming? Is this retroactive or is this forward-looking? And some of the state bills that are out now in the United States, for example in California, they're saying that they're going to create data restrictions that are retroactive, which would then mean, what, all the models have to go away? So there's just these very interesting, nuanced things in these legislations that are big question marks and that sort of raise eyebrows from different communities.

Speaker 3:

So the EU has legislation. That's great. It's going to take some time to put into practice and to get it all up and running, but there are fines, there's actual things attached to the legislation. And in the US we're still in the report phase. We're still making recommendations. On a global stage, broadly, you know, everyone's got principles for how we should be thinking about these things, but I think, again, what we're feeling and seeing in the general responses is we are past the point of reports and past the point of principles, and we want more action. And so I'm very curious and excited to watch how the EU AI Act unfolds. You know, it's gotten some pushback, it's gotten some praise, as is expected, but they did it. That's huge, and the US still has quite a bit of work to do.

Speaker 3:

The other thing is the EU AI Act is comprehensive, whereas the US is passing individual bills at this point. You know, they're introducing the data bill, they're introducing the CREATE AI Act, which would fund the NAIRR, the National AI Research Resource, which is crucial. That sort of brings resources to the research community so that they can participate on the same scale as some of these larger players, and by the same scale I don't actually mean the same scale, but on a scale that will get them to be able to innovate as industry does. So there's important bills, but things are being looked at in isolation, and the EU really looked at things pretty comprehensively. So I think that's another interesting distinction. And then I think globally.

Speaker 3:

You know, I always go back to this question. If we believe that AI is on par with nuclear weapons, for example, then somebody should take it all away, right? Somebody should shut it all down today. I just don't believe people actually think that. I think that it's something that's thrown out there as a talking point, but I don't actually believe people believe that, because if they did, then they would take all the models down. Someone would pass, you know, some sweeping executive order that says there are no more models.

Speaker 3:

And for all of the negative discussions that happen around AI, the potential of AI is tremendous, and I think we live in a news cycle that really loves to pay attention to the negative side effects of this technology. There are just use cases that come out regularly on healthcare. You know, detecting pancreatic cancer or predicting pancreatic cancer early, that is tremendous. My father-in-law passed away from pancreatic cancer last summer, and two weeks later there was this major development in how AI might be able to predict pancreatic cancer. That's going to change people's lives. And so I think that having a conversation that really looks at the full picture is also lacking in these legislation conversations.

Speaker 3:

I think we talk about innovation and we talk about harm and we also talk about safety and innovation.

Speaker 3:

We kind of put them on a collision course, and I think we need to start reframing that to say safety and innovation, not safety or innovation, because they both need to be part of a really rich and robust discussion.

Speaker 3:

And regulation, you know, and future regulation. But I think that globally, we absolutely need to start having conversations. You know, the UN put a report out in December, one of their first reports from their high-level body on AI. There's more to come from the UN, but we need to have conversations, because AI doesn't have borders. It's not something that's going to live within, you know, a state line or a country line, and so we have to come up with some agreements, particularly around things like using AI in weapons, using AI in wartime. We're seeing some of that play out now, and it's not going so well for anyone, and we really need to think deeply about what kind of world we all, as humans, want to live in, and where the technology makes sense and where it doesn't. And in the absence of the guardrails that are kind of needed for this, we're leaving people to make these choices, and they're going to be wrong choices, and so I think that's this urgency that everyone is kind of feeling.

Speaker 2:

There's a whole lot of questions I could come back to you with there. That was a very comprehensive answer, but I think we will just take a quick moment for a break. When we come back, we'll hear more about the latest AI developments.

Speaker 4:

Penta is the world's first comprehensive stakeholder solutions firm. We are a one-stop shop for the intelligence and strategy leaders need to assess a company's reputation and make decisions that improve their positioning. As executives in the C-suite must account for a growing set of engaged stakeholders, all with distinct, fast-changing demands, Penta provides real-time intelligence and strategy solutions. We work with clients solving complex global challenges across a variety of industries. Our clients span technology, financial services, energy, healthcare and more. To learn more about how Penta can support your company, check out our website at pentagroup.co, our Twitter at @PentaGRP, or find us on LinkedIn at Penta Group.

Speaker 2:

Welcome back to Artificially Intelligent Conversations, a What's At Stake series, Nicole.

Speaker 2:

Like I said just before the break, that was a really comprehensive answer, and the key thing that I took from that really was the importance of gathering different voices and perspectives, because if you're going to have a comprehensive debate, you need to be able to examine it from different viewpoints as well. And in Europe, this is something which they used quite effectively, I think, throughout the AI debate, with the initial establishment of the high-level expert group, which, coming back to an earlier conversation we had there, actually produced the first recommendations on ethics in AI as well, back in 2019.

Speaker 2:

And then, of course, we went through the entire process of passing the legislation itself, and included in the Act now, the final Act, there is a provision for the creation of another group of experts, where the Commission will appoint people from industry, from academia, from civil society and so forth, in order to engage with the Commission and be able to talk to them about how the AI Act is working and about developments with the industry, so that we continue to gather those different perspectives and help to fine-tune the policy approach at a European level. Noting that importance, if we take a step back from that piece at the moment and look at the policy discussions that are taking place in the US, and, again coming back to your point about maybe Schumer's report being a little bit too industry-focused, as Andrea mentioned, some criticisms have been made. But what do you think is missing from the conversation in the US? What do you think needs to be done to address that?

Speaker 3:

Yeah, I think there's a lot missing, but I'll try to hone in. I think that we are missing key voices from the general public in these discussions. You know, another point about the report that came out yesterday: there was really no mention of education or educators. The executive order was pretty light on that as well, and so that is a red flag for me, because I think that educating young people about this technology is the most important thing that we should be doing right now. This is a technology that they will grow up with, that they will need to understand, and that they will need to understand how to exist with.

Speaker 3:

I think, you know, we've known about challenges within our education system for many, many decades, and I often say that AI is a mirror. AI is just the thing that's shining the light back at us and all the problems we already know we have. You know, we know that we're biased, but now AI is like, let me just confirm that for you. We know the education system is not working so well; AI is just going to reflect that back to you.

Speaker 3:

So I think that's a huge miss in the conversation. I understand conversations are potentially happening, you know, with the House on that, but I think that's a conversation that needs to be front and center. It's education: the potential for AI to support and revolutionize education with efficiencies for educators, with personalized learning. There's so much potential there, and there's so much risk if we don't help people understand. I think, focusing on the executive function skills that we need in order to be discerning in a world where there's misinformation coming at us every day, and where we need to understand, you know, how to innovate and be creative, those skills, we need to do whatever we can to bolster those skills in the young people that we have in schools today.

Speaker 1:

Well, I was just going to ask you to expand a little bit on that, because I've heard you talk a lot, and you mentioned it earlier, about AI literacy and how important that is, and we've talked about civics literacy before in the age of social media. So when you talk about education and educators, talk a little bit more about what you would like to see happen as young people are growing up with Gen AI at their fingertips. I mean, there's Gen AI friends, there's all this kind of stuff. But what do young people need to know, and what should educators do as they think about AI literacy?

Speaker 3:

Yeah, and I should say, at AI2 I work with organizations like TeachAI and aiEDU, and they are the experts in this. I am not. However, I have opinions about lots of things. So I think that what we need to do is understand that this technology is not going anywhere, and we need to figure out how to not exclude it from classrooms, leaving kids on their own to figure this out. That's the recipe for danger, I think.

Speaker 3:

One of my favorite examples of a lesson, I saw someone speak about it and I wish I knew who it was so I could credit them. A teacher gave an assignment, and the assignment was: write your essay using ChatGPT or Gemini or Meta, or insert your favorite model, bring that in, and then we're going to edit it and rewrite it together in class. And that's such a great use of the technology, because then you get to talk about, well, what prompt did you use? What would it have meant if you used this other prompt? What if you got more specific? What if you gave it a persona? All of these different questions. And then you're starting to get new vocabulary introduced into the conversation, you're starting to give them hands-on access to the tool. You should not be doing this with five-year-olds, to be clear, right?

Speaker 3:

I think that all the models say 13 and up, but there are still ways to introduce AI, in the form of robotics or other science methodologies, in young people's curriculum. I also think, look, AI is not just a computer science course, it's in everything. So I think that we need to figure out how to layer this technology on top of all of the existing classes we have, and sort of what are the inflection points where it makes sense, and then help people understand where it doesn't make sense at all and where you shouldn't actually be using the technology. I think we need to look at teacher training. I think we need to look at how we're supporting administrators. I think right now 12 states have guidance on AI in education, and so I think we need to see more of that and more collaboration and more discussion. And I think education is really kind of embracing AI, that they understand it's not going away, and they are trying to find the most effective and efficient and useful ways to leverage it.

Speaker 3:

So I think that's a whole deep piece, and I'm delighted that there are brilliant people working on AI in education, because it is arguably the most important thing. Because, again, you, Andrea, have probably heard me say this: my mom is practically 75 years old. It would be great if she knew how to use this, but the main thing she needs to know is not to be scammed. But my five-year-old daughter absolutely has to understand what this is, and how it's impacting her, and when it's impacting her, and so I think that we need to really focus on that. But we have a history as a country of not really embracing the younger years of life in the way that we should, and so, again, AI is another example. We don't fund young people before they're in school, and even when they're in school, you know, there's challenges. So, again, we know these issues exist. AI is just sort of the next layer to reflect it back to us.

Speaker 3:

But to your question about who's missing from the conversation, that would be number one for me: the fact that education was missing from these two pretty big reports, I think, is a flag. But I can't be at AI2 pushing for open research and not acknowledge that the conversation around open research is often missing too. It has gotten better. I think we're seeing improvements on that. But the reason people have said industry is sort of driving the bus and being so reflected in these reports is because they are. That's kind of how business gets done with legislation.

Speaker 3:

Again, I'm not shocked by this, but I've been thinking a lot about how to bring together, you know, open-policy-focused organizations and thinkers to create this sort of unified voice to advocate for the things that we care about, like, you know, incentivizing open research, encouraging that more things are shared with the research community, and making sure we have better funding for public infrastructure to support academics who are working on these technologies. So I've been thinking a lot about this sort of AI open policy consortium and what that might look like, and this summer I actually have a wonderful intern joining my team who's going to start to think about what that can look like. What would it look like? What would it mean? It's a lot of work to be a small nonprofit research institute like AI2 and be able to respond to every request for information that the government puts out.

Speaker 3:

You know, the NTIA has one, the Copyright Office has one, this other office, there's like one a day, and so you need a team of people who can write those responses. That's one example. There's also, you know, for every one meeting that Sam Altman has, I feel like all of the open research orgs or academics need to have 75 meetings to make the same point. And so I'm being kind of glib, but you get the point: it is not balanced. That's just sort of the structure that we're in, and so I would love to see more open voices in the room when decisions are being made.

Speaker 1:

Yeah, that's great, and I'd kind of love to move us a little bit to forward-looking, to what's next and what's coming. You know, we hear a lot that these LLMs are only going to get better and smarter and faster and bigger, so talk a little bit about what you see there. Obviously, they require an immense amount of energy, so we're having a lot of conversations about all the data centers, and should there be laws that, like, require a certain amount of energy go to electricity and not to data centers? All of these conversations are really interesting, and we talk about the availability of chips and how are we doing this. So what do you see as the barriers to continued growth, or do they even exist, or should we be thinking about them?

Speaker 3:

Yeah, I think there are a handful of barriers. One, and this is just the one that makes me laugh, because I almost thought somebody was kidding, but you know, we heard that we need more compute, but there's a short supply of chips, and so now that's the challenge. And then somebody said, no, the chips are back in stock, but it's the cables. And I was like, I'm sorry, what?

Speaker 3:

So there are supply chain issues that are just preventing some of this from happening, and of course, some people may be having supply chain issues and some people might not. I think, you know, when Meta introduced their billion-plus-dollar cluster, they probably had all the cables they needed. But I was like, so, I'm sorry, you don't just go buy those from, like, Home Depot, I guess. We were having this whole funny conversation about the chips, and then somebody was talking about, well, and then there's the cabinets. Why can't you just go to Ikea and get the cabinets? So there is this infrastructure to build a cluster, or to have a cluster for cloud storage, and there are so many different components to that, and it's one of the things that was just so shocking to me. It's like, okay, the barrier is the screw on the door. And so the supply chain issues are an interesting hurdle to getting some things built. If you're building a data center, you need things to go in the data center, and there's a million little parts and pieces and things, and if one of those things is not available, then it sort of creates this ripple effect, and someone who knows much more about supply chain logistics would probably be like, well, yeah, duh, the cables need to be there. But for me, I was like, you've got to be kidding me. I had a good laugh about that. And then I think academic institutions are resource-constrained. Princeton announced their nine-million-dollar cluster the same week that Meta announced their billion-plus-dollar cluster. If that doesn't paint a picture, I don't know what does. And so we need things like the National AI Research Resource and the CREATE AI Act to move forward, in order to continue to bolster the support so that anyone with a laptop can start to participate in this research.

Speaker 3:

You also talked a little bit about energy. I think I read a report the other day that, you know, within two years, data centers aren't going to have enough power. Don't quote me on that, but I mean, we're getting to this point where energy is real, which is also making me feel very validated, because my husband has an electric car. I don't, and I was like, I don't think we should have two electric cars, I'm a little concerned. And so I'm like, see, I told you we shouldn't have two electric cars, because the power is going to be a problem. But there are resources that are needed.

Speaker 3:

Water is critical to cool the data centers, and so now there's this question.

Speaker 3:

You know, what about cities that are in droughts?

Speaker 3:

What about not even cities?

Speaker 3:

What about places that are in droughts or experiencing water shortages?

Speaker 3:

You know, we live in a world where the climate is just wild and changing, and so we're building according to today, and we don't necessarily know what's going to change tomorrow. We have a climate modeling team at AI2 as well who's doing some of the modeling to see what the climate will be like and how it's going to impact things. Again, that's a perfect example of a great use case of AI today. But there's just a variety of things that I think are barriers, and I think they're broad, and they're very tangible, from, you know, supply chain things to these bigger climate issues, and sort of, where do you even build a data center? Then you get into ethical discussions: are you building a data center in a neighborhood that's already got a power plant and, you know, that already has bad water? You're not going to build a data center in the middle of the Hollywood Hills, and so you have to start to think about the ethical implications of some of the decisions that are being made.

Speaker 1:

Yes, and that's a really important conversation to have. And I think your point on, you know, planning for today versus what is going to be there tomorrow, and then also, you know, when you talk about AI potentially helping detect pancreatic cancer sooner, or can AI help us get a breakthrough energy solution that is going to kind of solve all of these problems? I think that there's a lot of projections on what can happen, both on the really positive side and then some on the bad. So I'd kind of love to wrap up here with: what's your projection for the next two to three years on AI, and then maybe the 10-to-15-year range, and then maybe, finally, what is the coolest thing you've seen from an AI thus far?

Speaker 3:

Yeah, I think, you know, over the next couple of years I imagine we're going to continue to see the bigger-is-better approach. I was watching the OpenAI release where, you know, do the math problem, solve this linear equation, and I mean, that's just cool. The fact that you can talk to the, you know, whatever the OpenAI name is, I don't even know what they named that bot, but it's cool that you can interface in those different ways.

Speaker 3:

When you think about how people learn and how people interact, and some people, you know, prefer to talk versus write, it sort of introduces that personalized, you know, learning or personalized work. I think we're going to continue to see that. There's this race that's happening now: who's going to come out with what first? You know, Llama 3 came out, everyone wanted to get out before Llama 3, and so OpenAI wanted to get out before Google.

Speaker 3:

I don't think that's going to stop over the next couple of years. Ten to fifteen years from now, if we are still doing this, then I am going to move to an island and start selling coconuts, because there has to be a ceiling somewhere. And what I am imagining is that people are going to start to think about how to get more without just making it bigger. Is bigger better, or is it, you know, smaller and more efficient, multiple smaller and more efficient models and connecting those in unique ways? Any of the researchers on my team can speak much more intelligently about this than I can. But I think that we're going to see a shift away from this bigger, bigger, bigger, because, one, we can't sustain it from a resource standpoint, and the data, like, we don't have enough data. So synthetic data is now part of the discussion in a way that it wasn't four years ago, where now the outputs from these models are becoming the training data, and I would imagine eventually that's just going to get back to poor quality. I mean, I'm guessing here, but if you're feeding synthetic data and then you're getting output based on synthetic data, and it's like this loop, I think there's going to have to be something that shifts there, and people are thinking about it.

Speaker 3:

So I know that there are researchers working on these things now. But what I hope is also, in 10 to 15 years, we as humanity have come to some more stability in what we want this technology to do, because we are in control of what it can do. We are responsible for how it's evolving, and so nothing would make me happier than, within the next decade, that we say, we actually don't want it to do these things, we actually do want it to do these things, and we can hyper-focus based on those guardrails, and we can get out of this moment of just yet another report, yet another principle, this legislation here, that legislation there, and just have a more collective and cohesive agreement. I feel like that's a grand plan of mine. So I'm, again, cautiously optimistic that if I say it and manifest it in the world, it will happen. But I think that's what we need, because just because you can do something doesn't mean you should. And I think we need to start questioning, do we want this technology to, you know, be, what was it, the robot dog that could fire at will? Do we really want that? I would hope the answer is no, and then I would hope that we do everything we can to make that not the case, and then we focus on the fact that we do want the robot dog to be able to smell and find the missing child, and so let's maybe figure that out, right? So I think those are the things we need to start thinking about.

Speaker 3:

And the coolest thing I've seen, it's so boring, but the coolest thing that gen AI does for me today is, I have a five-year-old daughter and she loves stories about Lulu and Lila, because that's like the framework we have. So my daughter's name is Lila, and I tell her these stories about Lulu and Lila. And Lulu is always causing trouble, like she's always being mischievous, but she always tries to do something mischievous and it turns out being good for everybody. Like, you know, she painted the town's fences one day, but she painted them rainbow and everyone was so excited, and so she was mad that her plan didn't work out. And so I need content, and so I am often like, give me a story about Lulu and Lila. So that is by far the coolest thing that it has done for me. And yeah, it's just so funny, because I'm like, oh, I don't actually have to think about the story, I can think about the prompt. But even thinking about the prompt is sometimes challenging.

Speaker 1:

Well, I think I might steal that for my kids, because my daughter Stevie's always like, let's play a game, and I don't have any ideas for a game right now. So, on that positive note, and I hope people will use it for that kind of stuff, I want to thank you so much for joining us, Nicole. And to all our listeners, thanks for tuning in to this month's episode of Artificially Intelligent Conversations. Remember to like and subscribe wherever you listen to your podcasts, and follow us on LinkedIn at Penta Group.

Artificial Intelligence Policy and Innovation
Navigating AI Legislation and Ethics
Global AI Legislation and Perspectives
Importance of AI Literacy in Education
Future of Artificial Intelligence and Ethics