Edtech Insiders

AI Hype and Reality in Education with Ethan and Lilach Mollick

July 08, 2024 Alex Sarlin and Ben Kornell

Dr. Ethan Mollick is an Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship, and also examines the effects of artificial intelligence on work and education. He is the Co-Director of the Generative AI Lab at Wharton. His academic papers have been published in top management journals and his research has been covered by CNN, The New York Times, and other leading publications. His newest book on AI, Co-Intelligence, is a NYT Bestseller.

Dr. Lilach Mollick is the Co-Director of the Generative AI Lab at Wharton. Her work focuses on the development of pedagogical strategies that include artificial intelligence and interactive methodologies. She has worked with Wharton to develop a wide range of educational tools and games used in classrooms worldwide. She has also written several papers on the uses of AI for teaching and training, and her work on AI has been discussed in publications including The New York Times and Vox. She advises companies and organizations on the advantages and risks of AI in teaching.

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for Education Entrepreneurs. Founded by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work just as hard as you do.

Ben Kornell  00:00
Hello Edtech Insiders, we're so excited to have Lilach and Ethan Mollick here today to talk about AI and education. I don't think any introduction is necessary, so let's jump in. Ethan and Lilach, tell us a little bit about what you're most excited about as you look at this co-intelligence where education, AI, and human potential intersect.

Ethan Mollick  00:27
So there's a lot we could talk about. I think part of it is that there is just a lot of potential here to do things right. Edtech has always had potential that's never quite been realized, and I think part of that is that it's very hard to figure out what to do, and the creator is often quite distant from the classroom. And now we have a tool that handles a whole bunch of things that were really hard problems before, adaptation and customization and general knowledge, and it does this stuff out of the box. That doesn't mean it's ready to, you know, be used in classrooms everywhere today. But the potential here is so huge, and what you could accomplish right away is so dramatic.

Lilach Mollick  01:01
To build on what Ethan was saying, you know, a lot of our education system is really built on constraints: what can we do given one instructor or professor or teacher and, you know, any number of students? And we finally have line of sight to a potential for extending that intelligence and giving everyone a mentor, a tutor, a role player, to really extend the classroom experience.

Ben Kornell  01:33
I'm fascinated by your research around games and simulation that led into this. I actually think it's such a great window into AI and how it could intersect. Can you tell us a little bit about your research leading up to this, and then how it's really come to bear with AI?

Lilach Mollick  01:51
Yeah, so for many years now at Wharton, what we've been doing is building games and simulations for teaching, for teaching hard-to-practice skills, practical things like pitching a VC investor, negotiating, or doing a mock valuation interview. These are skills that, you know, you want to have practiced before you do them, but they're very hard to practice. And we've done this with a resource-heavy set of developers and instructional designers and narrative fiction writers, and it's taken time. We've learned a whole lot through that very manual, painstaking process. The goal of this was really to do something that was very, very adaptive, but because the technology was limited, we were limited. When ChatGPT came out, you know, we saw that 60 to 65% of what we were doing could really be done with a single prompt, because of the AI's improvisational and sort of cold-reading capacity.

Ben Kornell  02:55
So as practitioners, you've actually been using AI in the classroom with students even encouraging the students to use AI. What's been most surprising about that?

Ethan Mollick  03:07
I actually think, you know, nothing works very well in the classroom the first time around, so I've been kind of surprised at how well it's worked, especially how well it's worked pedagogically. Technology always has ups and downs, but we're doing experiments, right? That's what we do. And what I've been really surprised by is how much reflection this prompts, and how much deeper the thinking gets when you get an AI to work with someone, more than I was expecting. I've read thousands and thousands of essays over the years, and suddenly there's this performance improvement, and not just there but also in ideation: people are coming up with better ideas for projects. It just seems to be making a difference. And the students like it, which is never an indicator of anything. But by an objective quality standard, or at least as objective as we get when we're grading essays and not using AI to do it, the quality level goes up, and not just because we're using it for writing, but because the thinking is becoming more interesting. So to me, that this is working as well as it does, with the primitive tools we have, is actually a bit of a surprise.

Ben Kornell  04:07
In your book, Co-Intelligence, you're painting a picture of a world where, on a daily basis, humans are in a symbiotic relationship with AI. So you reject the false dichotomy that it's all human or it's all AI; it's really about this cohabitation. What do you find most compelling about this picture? And what do you find most concerning?

Ethan Mollick  04:28
I mean, I think it's not as weird when you think about it. You're basically a co-intelligence with your phone right now; it just uses Google and other tools to do it. And if you've used AI enough, and I don't know how much you personally use it, you start just using it as the general-purpose Swiss Army knife of the mind that solves lots of different problems for you. That creates a crisis and an opportunity. When the calculator was developed in the 1970s, a different form of intelligence, we had to redo how math classes work. It caused a crisis, and over decades we figured it out. The same kinds of things are going to happen here. There will be deskilling unless we pay attention to that and try to solve the problem, rather than wait around to realize the deskilling is going to happen, whether that's English writing or, you know, when you graduate from school and join a company and have that informal apprenticeship where people give you basic work to do and you learn from it. That's going to go away, because people are just going to give that work to AIs. I think that is a genuine concern we have to worry about. So there are downside risks, and I'm concerned about them, but I think they're solvable ones. And I think you're right about the false dichotomy: we do get to decide where to draw the line about what should be done by AI and what by people, especially in classrooms, where we get to decide, hey, guess what, you don't get to take out your phone, you have to fill out this test, you do this blue book exam. These are old techniques we developed a long time ago, and they still have value.

Ben Kornell  05:46
In one of our conversations, one of the shifts you talked about was around learner agency: in the past, curriculum was developed top-down and pushed out, and today, from a learning standpoint, learners have tremendous power in constructing or co-constructing learning experiences and applications. Since you mentioned that, I've been thinking about not just the implications for individual learners, but also the implications for our systems. How have you thought about that, both at the macro and the micro level?

Lilach Mollick  06:22
So just to jump in, there are a few things. The first is to recognize that the raw models, sort of out of the box, are capable of role-play, because they've, you know, seen lots of archetypes in their training data, but they do it on a very superficial level. So if, for instance, you want to set up a tutor or mentor for your classroom, it's not going to do a great job. It may hallucinate, it may do fine. But it will give you an explanation that is not tailored to the student, that doesn't take into account, you know, what the student already knows, that doesn't push the student to co-construct knowledge. But because you can really sort of code in prose, you can prompt the AI to act more like a good teacher (not exactly like one, and not as good as one, but more like one), including a teacher who knows where students usually fail on this particular topic, what some of the sticking points are, what evidence of understanding really means. So this is the kind of thing where educators are really empowered to build something, and build it only with words in the case of custom chatbots, for instance, something that takes into account what they know about their students and about their context. And we've never had that before, because edtech could not, and really can't, you know, tailor for every single classroom and every single student.
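
To make that concrete, here is a minimal sketch of what "coding in prose" can look like: a system prompt that asks a general-purpose model to behave more like a tutor. The API shape follows the OpenAI Python client, but the model name and the prompt wording are illustrative assumptions, not the Mollicks' actual prompts.

```python
# Minimal sketch: prompting a general-purpose model to act more like a tutor.
# The prompt encodes what the speaker describes: ask what the student knows,
# tailor the explanation, surface common sticking points, check understanding.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TUTOR_PROMPT = """You are a patient tutor for an intro statistics course.
Before explaining anything, ask the student what they already know about
the topic. Tailor your explanation to their answer, point out common
sticking points (e.g., confusing standard deviation with standard error),
and check understanding with one short question before moving on."""

def tutor_reply(history: list[dict]) -> str:
    """Return the tutor's next turn given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "system", "content": TUTOR_PROMPT}] + history,
    )
    return response.choices[0].message.content

print(tutor_reply([{"role": "user", "content": "What is a p-value?"}]))
```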

Ben Kornell  07:54
Yeah, I think it's such a great point. And this is another example of how, when we're looking at disruptive technology, it often underperforms what a human could do, but on the instrumentation side it can do things that are, from a constraint standpoint, physically impossible for humans to do. So how do we leverage that? I think your nuance here is so valuable about what AI can and can't do, and both the opportunities and the constraints. And Ethan, you just described it as a primitive technology. Right now, in the public sphere, we're having a wave of skepticism about AI in education. It's kind of a normal cycle, where everybody's really excited about something and then there's the reality of it. But do you think it's just part of a natural hype cycle, or are there concerns that you share with the skeptics?

Ethan Mollick  08:54
I mean, I'm a little weirded out by the nature of the hype cycle this time, because usually the hype cycle happens when people try to implement something and it doesn't work quite as well as they think, and then they get disillusioned, and then we have to build tools. And I think we'll get there, but right now this is mostly just debate. There's a lot of philosophizing that is reasonable, because we're in a changing time for education, and I'm glad there are people who are critical of this. But I think hype-versus-not-hype doesn't make a lot of sense, because there are literally almost no deployed tools out there. I haven't met many teachers who experimented with AI and found it not useful. The latest studies, the Pew, sorry, the latest Walton Family Foundation studies, found adoption rates just keep shooting up for students and for teachers. There's no sign that this is a bust and people are turning away from it. So I'm a little confused. I think people are worried about it, I think people are anxious about what it means, and I think they're fighting against somebody who says it's a panacea for everything. But I don't think anyone I talk to who is pro-AI in education is saying AI solves every problem education has, or should be universally used in classrooms, or should replace teachers. Nobody in our field is saying these things. So I worry a little bit that we're having straw-man fights about a world that doesn't exist. AI adoption is actually here in classes: most teachers are using it according to the studies we have, the numbers keep going up, and almost all the students are using it. It's not hype right now. The question is, how do we turn it to good use and avoid the bad uses? That's an important question we need to ask. But I find it a little frustrating to have conversations about whether this is hype or not, because it's clearly not hype. I'm sure there are some areas that are overhyped: we are not getting universal, personalized one-on-one tutors in the next two months, no matter how cool the models are, and no matter what, you know, the great stuff Sal Khan and other people are doing. But we do have a technology that does stuff that really matters. So, not to rant at you, but I'm finding the hype-versus-not-hype framing weird. There will be areas that are overhyped, but it's hard to imagine any other technology with adoption this fast, and this sticky, not turning out to be a big deal.

Ben Kornell  10:52
Lilach, you took all of this work and research, and now the two of you have founded the Generative AI Lab at Wharton. I'm sure it's focused on near-term utilization, but also on the longer-term arc. What are some of the projects people are undertaking at the Generative AI Lab, and how would you describe its mission?

Lilach Mollick  11:20
Yeah. So one particular project that's near and dear to my heart is taking what we know about building experiences, so simulations, mentors, tutors within games, and giving educators access to create that in a no-code way. We're working on an open-source platform that is essentially run by a series of AI agents that communicate, that argue, that collude, but in the end produce something that is usable in a classroom. You know, we're certainly not there yet, we're in the early days of experimentation, but we definitely think it's possible. And it will give educators and others access to create experiences they simply couldn't have created before, using their content knowledge and pedagogical knowledge within their context.
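
As a rough sketch of that agent pattern (hypothetical, not the Wharton platform's actual design; the roles and prompts below are invented for illustration), one agent can draft a classroom exercise, a second can critique it, and the first can revise:

```python
# Hypothetical sketch of agents that "communicate, argue, and collude":
# a designer agent drafts, a reviewer agent critiques, the designer revises.
from openai import OpenAI

client = OpenAI()

def ask(role_prompt: str, task: str) -> str:
    """One agent turn: a role-playing system prompt plus a task."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

draft = ask("You design classroom negotiation simulations.",
            "Draft a 20-minute negotiation exercise for undergraduates.")
critique = ask("You are a skeptical pedagogy reviewer. List concrete flaws.",
               f"Review this exercise:\n{draft}")
final = ask("You design classroom negotiation simulations.",
            f"Revise the exercise to address the critique.\n\n"
            f"Exercise:\n{draft}\n\nCritique:\n{critique}")
print(final)
```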

Ben Kornell  12:17
Yeah, that's great. And that's another great point around access and the accessibility of technology: with your programming language being English, it really opens things up.

Ethan Mollick  12:34
Or Hindi, or, you know, what have you. The universality of access is such a big deal, because there are always downside risks for any tool. There are bias issues in AI, we know they exist, and there are a lot of different kinds of bias we could talk about. But usually when we talk about equity of access, we mean, you know, rich kids or rich schools get access to tools that most people in the world don't have access to. And it's absolutely fascinating to me that you can get GPT-4o, or Claude 3.5 Sonnet, available for free in 160-plus countries around the world, and that that is the best model available. Every kid in Mozambique has access to the same quality of tool that everybody at Harvard-Westlake has access to. And I think that's a really unique situation.

Ben Kornell  13:16
And I've been watching the performance of the models in other languages, which continues to improve. I was talking to an educator from Haiti who was basically saying that 97% of the world's content was not accessible through Haitian Creole. And, you know, given that in Haiti they teach French and English if you're in school, you can still decode, but you still don't have that much of the world's content to consume. And now, purely as an accessibility play, AI is unlocking the world's information for people in Haiti in a game-changing way. Just a micro example.

Ethan Mollick  14:02
But an important one, right? Part of what's so exciting about this is we have a tool that can make a difference. And part of what worries me about the hype-versus-not-hype debate is: let's lean into the areas where there are obvious gains. The idea that this is going to come for English classes first, we don't need to do that; we can be deliberate about how we're handling these things. It doesn't have to be everything everywhere. But similarly, just saying no is a bad answer too.

Ben Kornell  14:26
Yeah, I'm a school board member on my local school board. And there was a tendency, when it first came out, for school districts to ban it outright as a cheating tool, and then it was like, no, we can't ban it, let's unleash it. And really what I found is that when we shifted the conversation away from "Is AI good or bad?" and toward "What are the problems you're trying to solve, like assessment or math or reading, and are there new opportunities with AI that we could incorporate into our strategy?", all of a sudden everybody's opening up, everyone's willing to try and experiment, and we're learning from each other. And it's been a really, really positive experience. But I think it's got to be not about the AI; it's about the solutions, you know, for learners. Absolutely.

Lilach Mollick  15:17
One thing that's core to our mission is this notion of transparency, of putting out models of positive AI use for other people to see, but also just in terms of how we built things. So we just put out a paper: we built a prototype of an AI-powered pitching simulator, and the paper outlines not only all of our prompts, but also, you know, we asked our dev team to write up how they built it, and the experiments: what was the AI good at, what did we have to add, what did we not have to add. It's these kinds of initiatives, putting out good use cases and explaining where the failure points were, that I think other people can build on.

Ben Kornell  16:05
Alright, so it's time for us to jump a little bit into the deep end, on theory. And just as a full caveat, I know this is speculative, so I would just love to hear how the two of you are even thinking about these things. The book, Co-Intelligence, really talks about humans living with AI in much the way we've lived with technology in the decades leading up to this; the car, for example, has changed the American physical landscape because we've co-lived with the technology. Do you feel like the intelligence we're approaching in artificial intelligence will feel like a singular technology? Or will it be more like an ant colony? You were talking about agents before, where it's actually not singular, not "this is my relationship with AI," but quite diffuse, with so many different agents doing different things. Our human perception then is not of a singular AI superintelligence but more of a sense of everything being connected, like in an ant colony, where there's collective action but the individual agency is somewhat diffuse. How do you think about that?

Ethan Mollick  17:20
So I mean, I think "ant colony" sounds almost scarier than it is. I think what we're going to have is hierarchies, or organizations of agents that interact with each other, but that's a short-term solution. I mean, we already see that happening, because that's what Apple's rolling out: your phone is going to have a pretty dumb AI on it that can do phone things; it will contact the smarter AI that is the centralized system for other requests, and pass things up to GPT-4 for really hard requests. That's already happening. It's very easy to build AI into any small tool and have them talk to each other. That's what Lilach was saying earlier about how we're using agents: our agents are basically coordination mechanisms among specialized AIs. Ultimately, though, all the philosophy comes down to one question, which is: how good does AI get, and how fast? And we don't have answers to that question. We talk to all the insiders, we talk to OpenAI and Anthropic and Google and Microsoft, and nobody really has clear answers. I think we have another year or two of exponential growth, but we don't know what happens after that, and how good these systems get. Right now they have what I called, in one of my papers, a jagged frontier: they're better at some things than others, so you can't just hand everything over to them, because they're going to mess up some tasks. And I'm sure we could share some of the fun examples we've found of things the AI is just kind of bad at. So the question is: how good does it get at everything? We don't know, and that's what's going to shape the future of education. If this becomes a really amazing one-on-one tutor out of the box that you can use with voice to teach you a topic, and it's patient and willing to do it, we have to rethink a lot more than if it stays the current kind of flawed system, but smarter.
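
The escalation pattern Ethan describes, a small local model handling easy requests and handing hard ones up to a larger model, can be sketched roughly as below; the routing heuristic and model names are invented stand-ins, not Apple's or anyone's actual design.

```python
# Toy version of the escalation pattern: route easy requests to a small,
# cheap model and hard ones to a larger model. The "hardness" heuristic
# and model names are illustrative stand-ins only.
from openai import OpenAI

client = OpenAI()

def answer(question: str) -> str:
    # Crude routing heuristic; a real system would use a trained classifier.
    hard = len(question) > 200 or "prove" in question.lower()
    model = "gpt-4o" if hard else "gpt-4o-mini"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(answer("What is 2 + 2?"))  # short and easy: routed to the small model
```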

Ben Kornell  18:52
And if you had to guess, do you feel like we will get to an asymptote, or do you feel like we're still very far away from the full potential of this technology?

Ethan Mollick  19:04
So that's the key question. That we are far away from the full potential is obviously clear, even if large language models stopped improving. Almost no one is putting work into building the educational tools around this; Sal Khan and a couple of other institutions are the only ones with public-facing work, Google's released some stuff recently, and there are other people doing neat stuff in this space. But we haven't built all the pieces that you need: software needs to be integrated, tools need to be integrated into our lives. You know, steam power doesn't do anything on its own until you have the gearing to connect it to something else. So I think we're in for five to ten years of change even if AI doesn't improve at all, because just getting it to work with everything will matter.

Ben Kornell  19:44
Lilach, I'm curious: one of the newsletters you all put out talked about this paradox that if you're trying to do interstellar travel, rather than leave now, you'd be better off waiting 15 years, because the odds are that fusion technology, or technological advancement generally, would be so much better in 15 years that you'd have a superior outcome leaving in 15 than leaving now. Do you think there's a reason for people to wait? I'm thinking mainly of institutions and organizations where there are really large implementation costs. Is there a rational argument to be made for waiting on some of this? Or do you think, for the listeners of the show, it's time to dive in? What do you think?

Lilach Mollick  20:33
So I think it's time to dive in. Just in terms of individuals and organizations, I think it's really important to find the things the AI doesn't do well. It's a generalist, so you, as an expert in whatever your domain is, really need to experiment, because there's no instruction manual. You really need to experiment and figure out, you know, how good is it at what I do? Where is the edge where my expertise is better than it? And I think there is no way through but to really just experiment, and experiment for hours, across the different tasks that you do. So I don't think anyone should wait. But at the same time, I would expect these technologies to get better; there's no reason to think they won't. And while it's really hard to see around the corner, I think building toward a future where these technologies are better, where they're more adaptive, where they know more, where we've ironed out some of the kinks, is to be expected. But immediate experimentation by everyone, and then sharing those experiments in a really open way (here's what worked for me, here's what didn't work for me, here's how I experimented), is super important.

Ben Kornell  21:51
All right. Well, this has been such a great conversation. Any final advice for our listeners? You know, we've got 40,000 edtech entrepreneurs, educators, and system leaders. Any parting advice or challenge the two of you would give us?

Ethan Mollick  22:09
I mean, I think you need to use it, and you need to use it with an open mind, but also one that can be critical: we can be critical of this thing, it's not great at everything. But what I see is people rejecting it on one of two grounds: either it fails at something, so it's terrible, or else it has to do everything. And I think we don't know yet what the answer is.

Ben Kornell  22:52
Lilach, any parting advice?

Lilach Mollick  22:54
Um, yeah, I would say to dive in, like Ethan said, but also to note that anyone who gives instructions to anybody else, and knows how to break through their own curse of knowledge and unpack it in step-by-step ways, the way that teachers do, the way that managers do, is already good at prompting, already good at adapting to what AI can do. So, you know, teachers out there definitely have a leg up.

Ben Kornell  23:21
Wonderful. Well, this has been such a pleasure. I'm sure we'll be in dialogue about this as a community, and we'll continue to look toward your leadership and your voice, Lilach and Ethan, and the Generative AI Lab, to really help us on this incredible frontier. Thank you so much for joining Edtech Insiders.

Ethan Mollick  23:42
Wonderful. Thanks for having us.