Edtech Insiders

Student Perspectives on AI with Siya Verma, Sophie Yang and Dev Krishnamoorthy, Research Assistants at Stanford Deliberative Democracy Lab

May 01, 2024 Alex Sarlin and Ben Kornell Season 8


Join us as we sit down with three high school seniors who are Research Assistants at the Stanford Deliberative Democracy Lab. They share their insights on the evolving landscape of artificial intelligence and its effect on youth today. In this episode, our guests discuss how AI is currently shaping their lives and education, their concerns and hopes for the future of AI, and their ideas for policies and tools to help young people navigate these changes.

Siya Verma of Quarry Lane School, focused on global public policy, technology policy, and economics.
Sophie Yang from Lynbrook High School, passionate about economics, public policy, and law.
Dev Krishnamoorthy from Saratoga High School, interested in computer science and political science.

Listen as Siya, Sophie, and Dev discuss how their work and academic interests inform their understanding of AI's role in shaping future policy and education frameworks!


Alexander Sarlin:

Welcome to Season Eight of Edtech Insiders, where we speak to educators, founders, investors, thought leaders, and the industry experts who are shaping the global education technology industry. Every week, we bring you The Week in Edtech: important updates from the edtech field, including news about core technologies and issues we know will influence the sector, like artificial intelligence, extended reality, education politics, and more. We also conduct an in-depth interview with a wide variety of edtech thought leaders, and bring you insights and conversations from edtech conferences all around the world. Remember to subscribe, follow, and tell your edtech friends about the podcast, and to check out the Edtech Insiders Substack newsletter. Thanks for being part of the Edtech Insiders community. Enjoy the show.

Ben Kornell:

Hello, Edtech Insiders listeners. We've got a special episode for you today. It comes from the Deliberative Democracy Lab: we've got three high school seniors, Siya Verma, Sophie Yang, and Dev Krishnamoorthy, all talking about their experience as high schoolers with AI. It also goes along with a report that was released around AI in education, and a poll of high school educators as well as high school students. So check it out. Excited to bring it to you. Hello, everybody. Welcome back to Edtech Insiders. We're doing a special episode on AI in education, and we have an incredible group of student leaders and ambassadors. Today, we have Sophie, Dev, and Siya. Thanks so much for joining Edtech Insiders.

Siya Verma:

Thank you.

Dev Krishnamoorthy:

Thank you so much for having us.

Ben Kornell:

So in this episode, we really just want to hear student perspectives on AI. And it's our understanding that you've actually been doing some research about AI and what's going on in classrooms around the country and around the world. Before we get too far into that, though, let's have you introduce yourselves to our listeners. Sophie, why don't you get started?

Sophie Yang:

Of course. Hi, my name is Sophie Yang. I'm a current senior at Lynbrook High School in San Jose, California. I'm very interested in economics, public policy and law.

Ben Kornell:

Siya.

Siya Verma:

Hi, my name is Siya, and I'm a senior at Quarry Lane School in Dublin, California. I'm interested in exploring global public policy, technology policy, and economics.

Ben Kornell:

And Dev?

Dev Krishnamoorthy:

Hey, everybody, my name is Dev Krishnamoorthy. I'm currently a senior at Saratoga High School in the Bay Area, in Saratoga, California, and I'm interested in exploring computer science and political science. All three of us are research assistants at the Stanford Deliberative Democracy Lab.

Ben Kornell:

So before we go too far, tell us a little bit about the Deliberative Democracy Lab. How did you get connected with it? And what does it do?

Dev Krishnamoorthy:

For us, what we do in the Deliberative Democracy Lab is work with deliberative polling, as well as help create these polls. The poll that we just did was on student perspectives on AI, where we kind of dissected student opinions on artificial intelligence in the classroom: how they feel about artificial intelligence being used as a teaching method, and how some schools, such as mine, have been cracking down on artificial intelligence tools such as ChatGPT and not really allowing them to be used.

Ben Kornell:

So yeah, how did you find out about it?

Siya Verma:

So we were connected with the lab early on, and I think the work that we're doing is really interesting. To speak broadly on deliberative polling: I think we're trying to combat key issues with public opinion that come from polarization. A lot of that, we've seen, comes from general ignorance of the facts, but it also comes from having phantom opinions and being in communication with people who have the exact same opinions as you. Deliberative polling puts participants in an environment where they're communicating with people who have completely different political or social beliefs, and that really gets them to think: okay, where did I get the information that is informing my opinion? Can I change that opinion? Should I change that opinion? Those are the sorts of things that deliberative polling is combating.

Ben Kornell:

And so can you tell us a little bit around why the decision was made to engage high school students, and why high school students are at the heart of this deliberative democracy movement?

Sophie Yang:

So all three of us actually joined the lab at different times, I believe, but we all met together one time and just had a casual conversation that became an idea about creating a national deliberative poll centered around high school students, since all three of us were part of the target demographic and were all greatly exposed to AI. We received encouragement from our mentors and eventually just began acting on our idea.

Alexander Sarlin:

It's a really exciting and obviously a very timely idea, because we're not that long into the generative AI revolution, and I think schools and students are all still just trying to figure out what to do with this. So you've done this big poll; let's talk about some of the results that you've found so far. Siya, can I ask you to just go through some of the things that you're finding in your poll, and then we'll dig down on each of them?

Siya Verma:

Of course. So overall, students definitely think that AI can be beneficial to them and their education, but they also recognize that there's a negative impact in play, especially if there's no guidance or direction. The most important policy related to that was that schools should provide guidelines and resources to teach students how to use AI responsibly, with four out of five students favoring it before and after the deliberation. That was one of the key ones. Then the most supported penalty proposal, and by penalty we mean the punishment, of sorts, for students if they use AI irresponsibly, was that students should be subject to a warning or a grade deduction; support for that actually increased from around 32% to 78% of participants. And the last key thing that we personally thought was really interesting was that students put the most trust in their communities: not necessarily, you know, the creators of AI tools, or people online, or social media platforms, but almost 77% of participants put most of their trust in their community. That was really key to our findings, because if teachers are the ones using AI in guiding their students, that's where the trust is being built.

Alexander Sarlin:

It makes sense. And it's really interesting to hear that students are seeing both sides of it: they realize there's so much potential, but they also see the potential for abuse, that there need to be systems in place, and that they're looking to their schools and communities to really make sense of that. You know, Dev, you mentioned that your school has been quote-unquote cracking down on AI, and this is something we've been seeing on and off for the last year and a half: some schools and districts have banned ChatGPT and then pulled the ban back, and every school has its own sort of way of thinking about it. Tell us a little bit about the crackdown that you've experienced, and what the teens that you've talked to are saying about the sort of punitive version of, hey, nobody can use this.

Dev Krishnamoorthy:

So in my school, what happened was, there was a group of students that were using ChatGPT in their AP US History assignments, using it as a way to help them out with essays, as well as to give them ideas and help formulate ideas for starting essays. The policy around that slowly became one where our school has this built-in Turnitin system, which basically had an AI detection feature that detects, okay, has any AI been used throughout this essay. What happened with a couple of my friends was that they were falsely flagged; the system was, in fact, flawed. They ended up being given an academic integrity violation for essentially a false flag, which they ended up disputing, and it worked out in the end. Another really key point when I was talking to them was that they had a lot of stuff going on in their home lives and extracurriculars, and sometimes students don't really have enough time to do these giant essays that the teachers are assigning, so they do inevitably resort to using ChatGPT or other resources as a way to help them out. And what I found from speaking with them further was that AI also helps them with generating new ideas: not necessarily writing the essays for them, but taking ideas that they might conceptualize in their head and putting them into actual words and onto paper, so they're able to get a better understanding of the concepts that they're learning. Something I've used it for myself is study guides, like summarizing long articles. Recently I had a few tests, and I ended up using ChatGPT as a study guide, to summarize and bullet-point important pieces of information. I feel like it could be a pretty key tool in studying.

Alexander Sarlin:

What you're bringing up, I think, is such a core issue in schools right now. There's this sort of arms race: schools are using tools like Turnitin or Honorlock, and there are all sorts of different proctoring tools or plagiarism tools being used. Those are edtech companies and tools. But nobody wants false positives; nobody wants to get students in trouble for something they haven't done. And then, meanwhile, you mentioned that the policies changed slowly over time, and I think that's also something that's happening. I know all three of you are really interested in public policy. Sophie, based on what you're hearing from your teen participants in your poll, how might policies start to be created that actually can work to limit the sort of plagiarism, but also allow students to do all the really positive things, like create study guides and save themselves time on things that are allowed? Sophie, I'd love to hear from you on this.

Sophie Yang:

So from what we have gathered in our data, the first thing we realized was that people's understanding of AI varies drastically depending on where they're from, how their curriculum is structured in the classroom, and what tools and resources are readily available to them. So policy-wise, we believe one of the first things to be done is to spread awareness of this tool, what it can do, and to sort of bust the myths surrounding it. Looking at our data, a huge concern from participants was that AI is going to misrepresent their work, or dock them by mistake, or flag them by mistake, and that is a very real issue. But at the same time, there are some concerns that are much less valid; we've actually heard in some of our deliberations that AI can hack into their computer system or retain a lot of their private information. If we're all using the same tool, our understanding should be holistic. So we believe that policies should first address that by spreading awareness of what AI can do; from there, everyone will have an objective understanding of what we could use AI for that is both productive and not a violation of anyone's privacy.

Alexander Sarlin:

I love how you're mentioning awareness of what these tools can do, and a shared, sort of holistic awareness across the school environment. I think one of the things that we've wrestled with on the podcast is that when you learn what AI can do, it can do almost anything, and it can be used in so many different contexts. That's actually what makes policy so hard: the line between a study guide and something that might be considered, you know, an aid is a moving line. Nobody's ever had this much power in the hands of learners. I'd love to hear all of you talk about that. You are all AI early adopters, and you've been using it yourselves. How do you see this new world, in which learners can do so many more things, so much faster, than they ever have before? Siya, I'd love to start with you.

Siya Verma:

Of course. I think something that I like to say about AI is that it should be a supplement to our learning; it should not be a replacement for our learning. So, like you said, it can definitely enhance students' educational journeys, and AI makes education a lot more accessible. We're based in Silicon Valley, in the Bay Area, and we have a different context for that: we are at the forefront of the AI revolution, so it's definitely top of mind for all my peers and my teachers and my administrators. But I've also spoken with students just in, like, Central California, who don't have as much access to the internet, or don't have as much access to learning what AI means. I think the moment you introduce certain tools, even if it's as basic as, you know, ChatGPT exists and you just need some internet to make it work, that's a moment where they realize: I can learn almost anything right here. It does not replace a teacher, of course, but to understand new concepts, to supplement a teacher's way of teaching, I think it's doing a great job of that. And just broadening the number of resources that students can use, it's definitely great at doing that.

Ben Kornell:

And just jumping in here real quick to one of your question areas. I know we've talked mainly around writing essays or brainstorming and things like that, but one of your questions was really around coding and using AI for code generation. And, you know, with computer science standards often not being met in high schools across America, in part because there aren't enough teachers to teach coding, how did students respond to those types of use cases, not just, you know, authorship?

Sophie Yang:

I can talk about that a bit from my perspective as a computer science student, in my AP Computer Science Principles class. I believe in some other computer science classes at my school, ChatGPT, or any generative AI for that matter, is completely off limits for students: inside the classroom, they're just not allowed to log into the website or use it as a tool. But in my class in particular, my teacher did specify at the beginning of the year that ChatGPT is not only allowed but also encouraged as a consulting tool to learn how to code, because with every new concept learned and every new language mastered, there are a lot of intricacies that one and a half hours of class just cannot cover, with regard to everyone's concerns. So if we bring our questions and ask them to AI, we could potentially learn a lot more than in class, and implement that knowledge during exams and tests and in real-world problem solving. That is how our teacher encourages us to use it, and I believe many students, such as myself, implement that tool in our studies.

Ben Kornell:

Yeah.

Dev Krishnamoorthy:

Yeah, to go off Sophie's point, I've also used AI as a way to help grade my own coding work and things like that. And I do feel like coding and artificial intelligence are inevitably going to be interlinked no matter what, in terms of learning in any sort of fashion, because even looking at GitHub, I've seen examples of coworkers and friends who have used artificial intelligence as a way to help edit and correct possible mistakes within their programs. I feel like it's just a really necessary tool to help build upon skills, and even the most professional and the most complex of programs can still use it as a possible method. So yeah.

Ben Kornell:

In your analysis, you all looked at ChatGPT, but you also looked at things like GitHub and some of the coding copilots. You know, Dev, you mentioned grading, and there's an entire section that you covered which was really around teacher use of generative AI. I have to confess, having been a former teacher myself, there's a lot of excitement about teachers being more efficient or effective, using AI to grade or to generate lesson plans. But it might feel different if you're on the receiving end of an AI-generated grade. I'd be curious: in the polling and the research that you looked at, how did people generally respond to educators using AI? What are the use cases that are favorable, and maybe what are the ones that you have questions or concerns about? Siya, let's go to you.

Siya Verma:

I think most students did support the proposal that teachers should use AI in their curriculums, as long as it's guided, again. But, generally, almost half of the participants, and after the poll there was increased support for this, said that teachers should not use AI detection tools to verify the authenticity of student work. One of the biggest factors with that, like Dev has mentioned, is false flags. I remember reading through the transcripts, and some students mentioned that their teachers said they were falsely flagged for their work. So in terms of the numbers, I think students don't want to be hypocrites, in the sense that they should be allowed to use AI tools but teachers should not be. And I think through all these discussions, students really started to understand the teacher perspective.

Alexander Sarlin:

Does anybody else want to build on that? I think that makes a lot of sense.

Sophie Yang:

I can add on to that real quick. Throughout our entire poll, seeing how participant opinion trends varied from before and after the poll, we realized that there's a large amount of skepticism in general surrounding how AI is used. People might trust AI but not trust the person who is using it, or vice versa. So we believe that mentality has greatly impacted how students view their teachers and what they presume their teachers will use AI tools for.

Alexander Sarlin:

It's so interesting hearing you all talk about the interplay between the teachers, the students, and the AI: the AI for empowerment, for things that students can do or that can save teachers time, and then the AI for sort of catching people, or false flagging. It's such a complex interplay between each of these. I want to double down and dig in on one of the things you've all hinted at. Siya, you mentioned that being in the Bay Area, you are at the forefront of AI, which is definitely true, and that even in other parts of California, let alone other parts of the country and the world, people may not be exposed to it. And Sophie and Dev, you're both computer science students, and you're already using it. One of the things people have worried about in this AI era is that people who already have a lot of cultural literacy, who know what's going on in the world, will start to use these tools and become, you know, super learners, super coders, and do a ton, while others who are not exposed to it may continue to fall behind, and it might actually exacerbate inequalities. Others say, no, it'll raise all boats: everybody suddenly has these incredible tools, and as long as we get them in front of people, some of these inequalities will actually disappear. I'm so curious how you all see this, as well as what you've seen from some of the people you've talked to. Dev, let's start with you.

Dev Krishnamoorthy:

From my own experience, I feel like ChatGPT and these other chatbots and tools are essentially vital to the universalization of education, allowing students not just from the Bay Area but from all over the world to be able to, with a click of a button, access ChatGPT, simply ask a question, and prompt a response from the program. I feel like it can really level the playing field in terms of the education discrepancies that are present in our society, and could bridge the gap in the quality of education as well, because although the access ChatGPT gives to GPT-3.5 is somewhat varied, it still could add a lot to students' learning and how they're able to fundamentally understand topics in ways that they might not be able to in the classroom with limited resources. For ChatGPT, all you have to do is sign in with your email, create a password, and sign up, and that's it.

Alexander Sarlin:

So that's a great case for the leveling-the-playing-field side of the argument. I'd love to hear others weigh in on it as well. Do you agree that this will be a force for equality? Or are there other unintended consequences?

Siya Verma:

So we've been speaking with school boards around California to see what their policies have been and whether our data can help them. Something we realized through those discussions is that a lot of students just cannot think about AI right now: there are schools, and massive communities, that are still recovering from the pandemic. A lot of us, you know, missed out on that key moment of social maturation, I guess, during middle school. If students were online in middle school and didn't have access to Wi-Fi, getting on to their classroom was super difficult, and they missed out on a year or two of real learning at such a young age. That was a really big topic for us, and it's close to home. There are schools that have moved on, especially in areas where the students are more privileged socioeconomically, but in other areas, students are still recovering, so they just can't think about AI right now, and teachers can't think about completely changing their curriculum. But I think there is definitely an argument to be made, like Dev said, that if we raise awareness about AI, it can definitely help with bridging that gap. If someone searches something up on Google and gets massive research papers, and they don't have the background knowledge to process those research papers, generative AI is that method to really boil down all that information and make it more accessible.

Ben Kornell:

I'm curious how you all are thinking about your careers and life prospects going forward. How has the dawn of the age of AI affected your thinking? Is it making you excited? Is it making you fearful? We've had an author on the show who basically said all entry-level jobs are going away and we need to have apprenticeships; other people have said this means basically everybody can start their own company from their basement as a micro company. So we've heard both optimism and pessimism. We're curious what the three of you think. I'll start with you, Sophie.

Sophie Yang:

Of course. That's something we actually discuss a lot amongst ourselves. To us, as people living in Silicon Valley, we don't see AI as something that would replace humans, because there is a very emotional, human trait in certain jobs and work that AI just hasn't been able to replace, as of what we've seen today. But we also do acknowledge a lot of the concerns surrounding automation in certain job functions; like you said, for entry-level jobs, there's a lot of overlap in what AI could do versus what a human could do. That's all I have for now; I can add on to it after.

Ben Kornell:

What do others think? Dev?

Dev Krishnamoorthy:

From my perspective, I do believe that the issue of automation is a very real issue, as it has been slowly progressing over the last 20 years in terms of technological advancements. But I feel like this also calls for more jobs, different types of jobs, and an expanding job market as a way to replace, I feel, some jobs; as our economy advances and technology becomes more advanced, it will inevitably cause more jobs to be created. So I feel like there will be an overall equilibrium in the job market, so to speak. But I also do realize that AI art, and AI creating video and false images, has become a very prevalent issue in current society and could continue to become more of an issue. So that is something I feel like I myself, and a lot of other up-and-coming students going to college, will have to be aware of in terms of research and founding and creating companies: just navigating through a lot of misinformation that AI could exacerbate.

Ben Kornell:

So I'm gonna move on. The Deliberative Democracy Lab's big project is called the One Room project, where basically every five years they get people who are voting in their first election, so 18-year-olds, into a room in Washington, DC. They bring 500 randomly selected 18-year-olds from across America to Washington, DC, and they go through this deliberative democracy process where you're not just asked survey questions, but you're really looking at both sides of the issue and then coming up with a conclusion. They try to time this with the federal election; the last one was in 2019. Given that we're talking about all this innovation, all of this transformational power of technology, and that we have some really old likely finalist candidates, whoever gets elected will basically be the oldest president in history, how do you think about our elections and our political landscape, given everything that you're seeing?

Siya Verma:

Yeah. So just to clarify: for the young American One Room, this is the first time we're doing it, but there have been America in One Room polls as well with voters in general, not just first-time voters. I did work a little bit on the last America in One Room, which was done this past summer. And I really would say, we often think, and part of why we think this is that the media is portraying it this way, that our country is immensely polarized and no one can agree on anything. It does seem that way, and it's definitely a problem, but personally, I do have faith in human conversation. We've seen through the polling that once you put people with different perspectives in a room together to speak on all these different issues, guiding that discussion is, I guess, the main thing that will push forward that shift, not just in opinion but also in general understanding, like I said before: why do we have these opinions? So I think, regardless of who our president is this coming election, if the constituencies that that president is representing, and I mean the American people, also have faith in that conversation and in being open-minded, I think that's the main thing.

Dev Krishnamoorthy:

Yeah, I'd like to add on to Siya's point. Basically, I've realized that the kind of human-to-human interaction you get in the America in One Room project is hard to replicate: we see a bunch of different perspectives on how different teenagers view different political questions in the upcoming election. It also raises the question of how human-to-human interaction affects how we inevitably think and how we act. When we look at the majority of American society, there's obviously a very loud and vocal group of people expressing their opinions, whether on the far right or far left or whatever side. But the human-to-human interaction, being able to interact with each other, that deliberative polling offers its participants is something that really can't be replicated in other spaces, and I feel it should be applied as more of a universal approach to fostering discussions, to work through differences in political views in order to find a compromise or middle ground.

Ben Kornell:

Sophie, building off that, I'm curious to hear your thoughts on the role of schools in facilitating that deliberative democracy. I've been charting the evolution of education from being a foundational way to inform and engage the citizenry to now, essentially, preparation for jobs. Many of our higher ed institutions have made that switch, and we're seeing liberal arts programs cut, the very surface areas where people with different ideas, ideological or social, might meet instead of going into their respective political corners. What do you think the role of schools, and particularly high schools, is in fostering that kind of deliberation?

Sophie Yang:

I think the role of schools, and especially high schools, right before people go off to college and dive into a particular discipline they're interested in, is very critical to how people present their opinions and how open-minded they are to being exposed to a variety of ideas. I can use my classroom as an example: we sometimes have Socratic seminars in class to discuss policy issues at the forefront of American politics, or AI. We've had Socratic seminars specifically about AI in classrooms, where students talk about their current concerns and the teacher guides the conversation with certain questions, which is very much analogous to the deliberative poll we did, but on a smaller, more casual classroom scale. And we find that once teachers and classrooms facilitate this kind of thinking, it really transcends the classroom into every aspect of life. We do think this is also a form of career preparation; being able to vocalize your ideas and being receptive to other people's ideas is very important.

Alexander Sarlin:

Siya, did you want to add something to that?

Siya Verma:

Yeah, sure. So I completely agree. We often talk about AI literacy, digital literacy, and overall media literacy, but something that really needs to come to the forefront is democracy literacy: really understanding how we can be civically engaged to ensure that our representatives, not just in the national or federal government but also on the local scale, are truly representing our beliefs and those of our peers. And I really think these discussions, like the Socratic seminars Sophie mentioned, or even deliberative polls, are the dream. If we had deliberative polls in classrooms every now and then, that would really transform the way students think about how to approach their opinions. When we write argumentative essays, or even in debate class or whatever it may be, we're often taught to represent our own opinion, and that is it. But this kind of thinking, where we're open-minded, considering other perspectives, and melding them with our own, is something students need to learn early on, so that when they go off into their careers, or just in general as citizens in the world, they have what is such an essential skill.

Alexander Sarlin:

Yeah, fantastic answers and really thought-provoking. I love how both of you are homing in on the idea that skills we might consider liberal arts skills, argumentation, persuasion, or, as you said, melding others' opinions with your own and understanding and accepting them, are actually career skills in a lot of ways. I think that blurs the line in a really interesting way. It also strikes me that the three of you have mentioned economics and political science as interests, which are liberal arts, and you've also mentioned computer science and AI, which are considered the most cutting-edge career and tech skills. I think you're being early adopters in melding those two types of thinking, because it feels like both are really relevant to the future. I want to ask one very 10,000-foot-view question. When Ben and I were around your age, that was actually the beginning of the internet. The 90s, when we were your age, was when the first web browsers were appearing and Google was launched, all of these things. I think at the time we all had a sense that something big was coming. But some people were saying, this is the future, get on board right now, and others were saying, I don't get it, or, some people will use it, but I'm always going to use my paper encyclopedias. And seeing the AI moment right now reminds me so much of that moment: some people, like yourselves, are saying, this is the future, get on board, this is so exciting, everybody should know about it, and others are saying, we're too busy, the pandemic is just over, we can't think about it.
So with that in mind, I'd like to ask each of you, and I know you haven't been through that kind of transformation yet, but I think you're going through one right now, to think about what the future might look like. I'd love to hear a positive and a little bit of a warning: what are you most optimistic about, and what are you worried about, if AI does prove to be as big and transformative as the internet, as many people think it might be? Looking maybe five years from now, when you will all be around the end of your college careers, what would be an amazing world of AI, and what would be a little bit of a scary world of AI? Let's start with you, Dev.

Dev Krishnamoorthy:

I've noticed that in the next five years, a big trend we're definitely going to see is what we've talked about already: AI in education. I do feel AI and machine learning can play a role as an educator for students. Let's say somebody lives in Senegal and might not have as much access to information as someone in the Bay Area; through ChatGPT or other services, they could potentially have resources that exceed those being given in a classroom in, let's say, Saratoga, where I'm from. The positive world I see is that the process of learning could change for the better and reflect more of the individual experiences of the students seeking to learn or gain more information, creating a more equitable space not just for education but for a more civil and understanding society. There's definitely a lot of potential in that aspect. But an issue we're already seeing now, and one I feel could grow further, is misinformation, which has totally been exacerbated by AI. Especially when I look on Twitter, I see deepfake images and fake screenshots, prompted by creators on these platforms to generate more engagement and more controversy, which I feel is an issue with the potential to affect world politics and world events as we know them, potentially for the worse. Let's say one person decides to pose as a United States ambassador and makes some claim about another country that could cause foreign disputes, and it turns out the screenshot was faked the whole time. That could be a potentially huge issue in the next couple of years.
But I do see potential in education, and also, as we're seeing with generative AI and the Sora model, in creating these beautiful, contextually based images and videos that look very similar to Unreal Engine 5 and other video game engines. It's all very interesting to watch, and it will be very interesting to see where it goes in the next five years.

Alexander Sarlin:

Yeah, Sophie, how about you?

Sophie Yang:

I think a huge concern regarding AI, on the flip side, is that it will demand a huge change in skill sets across various careers, just as any technological revolution does. Like you mentioned, when search engines came to be, there was a huge change in the skill set required of people who previously used paper and pen. And we do believe that in the coming five years or so, the same will likely be demanded of people who had no connection to AI previously and are now finding that this tool could greatly supplement their job function or their learning. One example I have: one of my teachers at my school recently showed me a very new tool that is essentially ChatGPT but geared for teachers; it's called Elena. He did a presentation to our school's board, and a lot of teachers came up to him afterwards saying, that is so amazing, this could help with planning curriculums, essay prompts, and potentially coming up with stimulating ideas for my students, how do I use this? So we find that in these cases, adapting to this tool would be a short-term investment in a very long-lasting trend and benefit across these different jobs.

Alexander Sarlin:

We're hearing some really positive and exciting use cases in education, and some fears around misinformation and skill sets changing very quickly under people. Really interesting thoughts. How about you, Siya, what do you think?

Siya Verma:

Yeah, I completely agree. I think misinformation is definitely a root cause of polarization, whether indirect or direct. In reality, a lot of people are now getting their opinions from social media, and we're seeing deepfakes and AI-generated content there. If students don't know how to discern real content from AI-generated content, it's going to really exacerbate our already key problem of misinformation. But on the flip side, I think AI is not going to replace us as humans. As I've been saying, it's a supplement to what we're already doing. With teachers, I think it's definitely going to help them. I know some people say AI may break the connection between teachers and students, but I think it's going to give teachers more physical time to actually spend with their students, and the ability to really make that connection. And that can be seen in other careers as well, not just on the productivity side. You can even compare it to a calculator, right? We all know how to add one plus one and two plus two, but by the time you're in high school or college, a calculator just makes that easier. You already know how to do it. So understanding the concept, and then using AI to really enhance your skill set, I think that's how it's going to be used in the future, and I'm really optimistic about that.

Sophie Yang:

Yeah, so just adding on to the more cautionary side of the spread of AI, I do think the lack of equal access poses a very big problem for how people from certain areas will come to use this tool. For us in Silicon Valley, it's a given that you know what AI is and what buttons to click to access it. But as we've seen, there are a lot of school boards and students who just don't have the time or the capacity to even discover this new tool, and years down the line, as it develops and becomes more and more powerful, there could be a very large disparity between people of different socioeconomic statuses. So we do believe that all of this benefit, this widespread technological boom, is partially conditional on equal access and on awareness of the tool.

Ben Kornell:

Well, this has been an incredible conversation; we're so inspired by the three of you. Siya, Sophie, and Dev, thank you for skipping first period, and potentially second period, to be with us here today, and thank you for all the work you've been doing on deliberative democracy. We're inspired and excited to follow your journeys as you bring in a future that's going to make the world a better place for all of us. So thanks so much for joining us here at Edtech Insiders. If folks want to find out more, the event is called Young Americans in One Room, and it's happening this July. It's the first gathering, as Siya mentioned, with first-time voters, but the project goes back to 2019 with adult voters of various ages. So expect some press around that, and you can also check it out on the Stanford Deliberative Democracy Lab website. Thank you all for joining, and we hope to hear from you soon.

Sophie Yang:

Thank you for having us!

Alexander Sarlin:

Thanks for listening to this episode of Edtech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.