Higher Listenings

This is Your Classroom on AI with José Antonio Bowen

Top Hat Season 1 Episode 2

This episode's guest's latest book is selling like hot cakes, and for good reason. It's been called a "veritable lifeline" for any educator looking to wrap their heads around the potential of AI to enhance their teaching practice. Buckle up as we sit down with the tireless José Antonio Bowen, distinguished academic and author of Teaching with AI: A Practical Guide to a New Era of Human Learning, for some sage advice on what AI can and cannot do for you and your students.

00:00: Teaching With AI

11:21: Navigating AI in Higher Education

22:44: Teaching Integrity and Empathy With AI

Follow us on Instagram: instagram.com/higherlistenings

Higher Listenings is brought to you by Top Hat

Subscribe, leave a comment or review, and help us share stories of the people shaping the future of higher education.

Speaker 1:

Welcome to Higher Listenings, a podcast from your friends at Top Hat, offering a lively look at the trends and people shaping the future of higher education. I'm Eric Gardner, Director of Educational Programming.

Speaker 2:

And I'm Brad Cohen, Chief Academic Officer.

Speaker 1:

A wise sous chef once said that knowledge is knowing that a tomato is a fruit, but wisdom is knowing you don't put tomatoes in a fruit salad. Which, like a lot of things these days, reminds me of AI. In the context of learning, AI isn't just good, it's uncannily good at writing, clarifying, researching, testing, simulating, coding. You get it. But when it comes to knowledge that makes us wiser, or, as the great John Dewey once said, that equips us to explore the potentialities inherent in experience, well, that is the special magic of a good educator. And magic is an appropriate descriptor for this week's guest. José Antonio Bowen has been leading change and innovation in higher education for more than 35 years as an academic, author, consultant and one of the most dynamic speakers you'll ever see. His latest book, Teaching with AI, has become a must-have for educators looking to maximize the benefits of AI while minimizing the negatives. So strap in as we explore AI angst, relationship-rich education and how AI and higher ed can come together to make the perfect fruit salad.

Speaker 2:

Well said, Eric.

Speaker 1:

So welcome to Higher Listenings, José. It's wonderful to have you with us. Now, we've worked together a few times, so we've got a pretty good sense of just how fast you move. But even so, Brad and I were both a little awestruck, because you and your co-author, Edward Watson, managed to write Teaching with AI: A Practical Guide to a New Era of Human Learning at a time when most of us were still Googling "What is ChatGPT?" and "Is the Terminator movie series actually a documentary?"

Speaker 3:

Yeah, so the way that happened is, you know, I'd been like everybody else. It's like, yeah, okay, I'll play around with this, this is interesting. And then, I guess in March 2023, when version four came, it got better all of a sudden, and so, you know, this needs a little more attention. And I said, well, if we're going to do a book, it can't be the usual, you know, 'cause a book is going to be out of date, we'll take two years to print it and all that kind of stuff. So I do keep a website at teachingnaked.com with prompts, and as I collect new prompts, I post new prompts. I'm constantly, you know, people are sending me stuff, or I'll see something and I'll post links. And so there is that that goes next to the book. The book is principles and basic ideas, but I am constantly updating the list of prompts with new ideas.

Speaker 2:

I do think the website is really quite helpful, and I'm amazed that you're continuing, you know, to maintain that kind of update. I'm wondering, before we get into some of the deeper questions that we have for you: simulation and gaming do seem to be made much easier by this technology. Are there other pedagogies that you see as made available more broadly and easier for faculty to adopt than in the past?

Speaker 3:

Well, I think you're right that simulations, gaming, and simulation also means "what if," right, the future. But I also think role-playing, right. So I teach design thinking, and so you do empathy interviews, right.

Speaker 3:

So normally I say, okay, you're going to need to go out and find, you know, a half dozen people for a focus group who have this problem, right. So you want to make it easier for high school kids to go to college? Okay. You're an entrepreneur? Great, so you want to make a better shopping cart, whatever, right. So you've got to go find people and say, well, what's wrong with the shopping cart, or what's wrong with the FAFSA, or whatever. And so, you know, I now have an example where I say, ask Claude 3.5 to be the person. Right, I want to interview you, you know.

Speaker 3:

So what's the problem with being a Latino entrepreneur and running a small business, right? What could I, as a bank, do better to help you? And I want to focus on the emotion, right, because that's one of the things that people forget to do. What did it feel like going into the bank? And I can do this instantly. I don't even need to set up a focus group. So is that research? Is that empathy building? Is it role-playing? I think it's all of those things. You know, what happens if I change my story so that the main character is Asian American? What happens if I move it to Detroit? What kinds of plot lines might change? What would this audience think? So anticipating readers' responses is another one, and getting editing feedback. So I do think creativity is a big area, but also editing, getting early feedback, and being able to talk to people that you couldn't normally talk to. So there's gaming and simulation, role-playing, a really wide range of things that fall into that.

Speaker 1:

So we're going to take a step back, and I think I'm not alone when I say that when ChatGPT 3.5 first arrived, it generated as much excitement as it did existential angst. Seemingly overnight, we found that the automation threat that had been lurking somewhere out on the horizon was suddenly taking aim at the highest earning and most creative jobs. So before I consider a career as a dog walker, José, you suggest that instead of focusing on the elimination of jobs, we should be thinking about the fact that AI will change every job, and I really like the optimism there. But I'm curious to know how you see this actually playing out. Or should I be exploring an AI-proof career in, say, small engine repair?

Speaker 3:

Yeah, I don't know, because one of my favorite examples is that the AI can now debone chickens. If you think about it, every chicken is different, and so that was one of those things where you always were going to need human beings to do that. Not anymore. So, you know, some of those kinds of mechanical things. So fixing small engines, I don't know, maybe it can fix my lawnmower too. But first of all, we've got to be honest that there is an existential threat here. There is real anguish. It takes time to learn a new tool, and I get it, I certainly understand that feeling. And, of course, faculty have just come off of COVID and a whole lot of other things, and so I do think that faculty are going to be looking for easy things they can do. You know, when it's built into your LMS, when it's built into something, are there easy things that I can do that will actually save me time?

Speaker 3:

I also think we've got to be hugely cognizant of the difference between, you know, being a full professor with tenure at a small college that has classes of 20 and being an adjunct that travels between institutions and has 600 students. And so people say, oh, it's too impersonal to have the AI grade. And it's like, sure, if you've got 20 students and you want to write personal comments and you know all of your students, then absolutely you shouldn't outsource that to AI. But if you're struggling to find more time with your kids and you've just driven between three colleges in one day and you can get better, faster feedback to 600 students? I mean, they say the average K through 12 teacher who has, you know, 25 kids in a class needs 56 hours a week to grade. That just doesn't exist. And so I think we've got to be really aware of different circumstances, different kinds of things, and both the emotional toll as well as just the time it's going to take to learn these new tools. So, to really answer your question, I think education is in the firing line.

Speaker 3:

I think Google and OpenAI are both looking at real educational tools. Right, the Learn AI that Google has was trained in pedagogy. So they took Gemini and they said, well, suppose we gave you these principles: don't give students the answer, try to increase engagement and curiosity, right? And then they trained it on real students, with professors giving feedback, and so, you know, it's starting to really think like a teacher. So the point of that story is not that it's going to be a great teacher, but they do see education as being a place where we can make money.

Speaker 3:

That said, education can be customized. It will be good to customize learning, but customization is not the same thing as personalization, because personalization implies relationships, and we know that you get better care from your doctor or your teacher when you like them and respect them. Right, if you believe that your teacher cares about your learning and believes that you can learn, you learn more, and that's psychology and that's the social nature of this. So figuring out what AI can do in education and do well will, A, depend on your circumstances, how many students you have, et cetera. It'll depend on the type of learning, the level, and it will also depend on, you know, what it can customize.

Speaker 3:

I still think you will need the personal connection. I don't think an animated version of me can do that, but it can do some things, and so I think we don't yet know what those things are going to be. We can judge because we're experts, right. It's like when we were first using Google: well, I don't know if this answer is good. Is this an answer from somebody I trust? It's why doctors tell you, you know, don't look up your symptoms online, and if you do, use WebMD and not, you know, Joe in his underwear in Nebraska who's just typing random stuff on the internet. We're able to analyze the results and say, that's not good. And so if there's any group of people that need to be interested in policy and what happens to AI, it needs to be teachers.

Speaker 2:

Let me ask you about higher ed more generally. As we're all really well aware, higher ed has really kind of taken it on the chin. The recent Gallup poll gives us more difficult news regarding the industry. You know, what is it now?

Speaker 2:

Only 36% of adults have a great deal, or quite a lot, of confidence in higher education, so that's a real shot. You're suggesting that AI may actually give us opportunities to reimagine the industry and sort of reclaim the value in some way. Do you see AI helping at that level? Not just me as an individual instructor, wrestling with the thousand challenges I have to meet the time constraints and give my students feedback, but something more substantial for the industry. Do you see universities being able to reclaim value, or represent themselves to our community in ways that regain that perception of value?

Speaker 3:

Well, I think I'd flip the question. I mean, the short answer is yes, it has to, or we're all out of business. And so I think, if you take AI out of the equation, the most important thing for institutions of higher education is that we need to do a better job of demonstrating value and delivering value. The public has lost confidence, rightly or wrongly, and we're not going to win this battle by simply, you know, talking about our value more, right? People always think facts will do it, but psychology tells us people are not convinced by facts, and you should know that. If you really don't understand that, I'm happy to send my father to your next Thanksgiving and you can try to give him some facts and change his mind, right? But it's about relationships. "Here is more data on X, Y or Z" is not going to change most people's minds.

Speaker 3:

So the first point is that universities, regardless of AI, have to do a better job of delivering learning and value to students, because reducing costs is going to be hard, although AI may help us with that.

Speaker 3:

So AI has to be an opportunity for improving quality. If it isn't, we're all out of business. If AI is just a way to increase efficiency, and people can learn with AI and they decide, oh, right, we don't need higher ed, that's what will happen. It's not going to be enough for institutions to say, oh well, we're better than that. Okay, prove it. Demonstrate that you're better because real people are involved, which is probably going to mean a greater focus on relationships and caring and understanding students' needs, and not just on tools. So that's a bit of a paradox, right? Because in some ways, I think the biggest use for AI in higher education is using AI to do stuff that faculty shouldn't have to do because it's tedious work, which then allows faculty to spend more time on relationships with students, right, to actually have smaller classes, more face-to-face time, et cetera. So I think there are ways that we want to figure out what AI should do so that we can make higher education better.

Speaker 2:

I think everyone has been spending much of their energy in the past year sort of coming to terms with this and beginning to answer these kinds of questions: what can we do now? What should we do now? I also know from my experience as an administrator and as a faculty member that there is a difference of opinion and perspective. We are still challenged, I think, in this dynamic between an institutional perspective, where there's a real pressure and imperative to move on these sorts of things, and a faculty desire to be really thoughtful about this. And of course, we have the early adopters and the laggards and all that sort of thing. Do you have some advice for how to navigate this? If you're in a position of leadership, what are some of the best things you should be doing? It's so hard.

Speaker 3:

So the first piece is, you know, we've had this before. We had the chalkboard controversy 130 years ago. We've had typewriters and writing and blue books and spell checkers, right. I remind people that dictionaries and a thesaurus, right, those are also intellectual property that we use to make our work better. And so we've had lots of these controversies. And the good news is that faculty are thoughtful people. We are skeptical, we want to pull everything apart, and this is absolutely what is needed at this time. What's different about this is that everything else is moving at light speed. I saw data last week that the majority of job postings, period, already involve some sort of AI, so we're seeing a really rapid change on the job side, and the technology is also moving at light speed. So the first thing I would say to administrators is that, A, we need to be skeptical. Skeptical is good, but you can only be skeptical if you've actually used the tool. So faculty development is absolutely key. Everybody needs to be experimenting with the tools. There need to be workshops. I think a lot of that can be done at the departmental level. When I do a workshop, I always say: go back to your department and show them one thing that you learned that's discipline-specific, because it's really different for the English department and the chemistry department and the math department. So people have got to start. And also, I would talk to students.

Speaker 3:

I am actually opposed to having a university policy, because I think faculty, for really good reasons, want better advice from their administrators: what should I do? But a policy has to be crafted by each discipline, by each type of course, and I think the most important thing is to talk to students.

Speaker 3:

So I actually think the better strategy, on the first day of class or before the first assignment is due, is to show students what AI does when you give it the assignment, and then say, well, what would this mean? Would you learn as much if AI did this for you? Does this mean you never have to do it? One of my favorite examples is to have it do something meaningless, like say, AI, write me a 500-word essay on why every college essay should have a character named Barbie in it, and the AI will do this, right? And then you say, so clearly this is idiotic, right? So what's the danger of using AI to do your essay? And so I actually think that, rather than having a top-down, university-wide policy, what we should do is have a workshop for everybody on how do I talk to students about AI on the first day of class.

Speaker 1:

We'll be right back. If you're enjoying the show, you can do us a favor by subscribing to Higher Listenings on Apple, Spotify or wherever you get your favorite podcasts. You can also write us a review, we'd love to hear your thoughts, or invite a friend. And last, most important of all, thanks for listening. Back to the show. So there's a lot of discussion around sort of the productive friction in the wake of ChatGPT, and you note that the challenge here is that technology has made it harder to see the benefit of doing things the hard way, and AI is obviously magnifying this. You suggest that the more unpleasant or uncomfortable something is, the more we need to understand the payoff. So can you talk a little bit about the emphasis on starting with why, and how that actually plays into getting students engaged in learning?

Speaker 3:

So when I go to class, they say that discomfort is good, and essential even, but pain is bad. So you want to find that zone where you're a little bit uncomfortable, and that's where the benefit is. And the same thing that's true for your muscles is true for learning: you have to be a little puzzled and a little bit uncomfortable, and there is some hard work that's essential for learning. And so relationships, right. Do I trust you? When you tell me I need to learn this, do I trust you? And that's really about: do I like you? Not, do I think you're an expert, but do I think you have my best interests at heart?

Speaker 3:

Students have extra radar for busy work, right? So they are very sensitive to that. They don't want to do that, and so motivation has become more important. Again, it was always important. And so there are really two parts to that. One is, do I really still need to teach this? Do students still need that level of skill with this? And the other is, if I'm going to ask students to do things the hard way, I need to make the path clear. What is the journey? Where are you going to be on the other side of this, and why is this necessary? And my yoga teacher does that really well every day, which is why I like the fitness example, because we understand that no pain, no gain. Or no discomfort, no gain, which would be better, it just doesn't rhyme.

Speaker 3:

Right. So we have to have some discomfort, but students are really sensitive to this and so we just have to do a better job. And what do we call that? We call that pedagogy, we call that curriculum, and it's not just a list of topics, it's what do you really need to know and how are you going to learn it, and why is this step essential to getting there? What is the journey? We really are curators of the journey. That's really what teachers do.

Speaker 2:

Yeah, I think, you know, the whole movement toward learning outcomes, making those explicit, was a step in that direction of trying to really make learning more explicit to students so that they might understand and be motivated. And I think what AI has surfaced is the gap between what we ask them to do, which culminates in an assessment, and why it matters. Whether the assessment itself is good or bad is one question, but why it's valuable to pursue excellence in this particular task, or through this particular task, is, I think, what you're pointing to, José, as work to be done. Where students may be resisting may be a signal that we are not motivating them with enough information about why this is good for them, in the way your yoga teacher is doing for you.

Speaker 3:

Yeah, and I'd also add, though, that on the one hand it's the motivation, but it's also identifying the journey, right. Again, there's this idea in yoga that it's a practice, that it's not a destination. And learning outcomes sound like a destination. So if I say, well, my outcome is to lose weight, well, could I take a pill? There might be other ways to get there, and if it turns out that there's an easier way to get there, I'm going to take it. I'm only willing to do the hard work of lifting weights if that's the only way I'm going to get to the outcome. So what's interesting is that learning outcomes are good, but they give us a false sense that I've done motivation right. So, you're right, they're well-intentioned, they're a good step, but what we really need to do is identify the journey and why this journey is going to be beneficial to you, and that is different than just identifying the end result or the learning outcome.

Speaker 1:

I remember you spoke at the Top Hat Engage conference in San Diego last year, which was amazing, and I'll never forget this: you said that what higher education calls cheating, the business world calls progress, and the irony of that really stuck with me. So you're obviously pretty bullish on the benefits of AI for teaching and learning, but you do acknowledge that we won't be able to outthink or out-AI students when it comes to, you know, students who do want to take a shortcut. You say that good pedagogy should always be our first consideration when it comes to academic integrity. I'd love for you to just expand on that a little bit.

Speaker 3:

Yeah, so I'm less bullish on AI than you think. I'm not pro or con, the way I'm not pro or con the calculator. But I do think that you are not going to put the genie back in the bottle, right? You are not going to say, I want to go back to a world where there are no calculators. That's just not going to happen. So I have to figure out, how do I respond to this new thing in the world? I could say to a student, no using a spell checker, only use a dictionary, spell checkers are cheating. But when that student goes to look for a job, if there's another student there who has practiced using a spell checker, who produces faster work because they use a spell checker rather than a dictionary, that student's probably going to get the job, and my student, who I told was cheating, is going to be at a disadvantage. So our first obligation is to do no harm, right? I don't want to disadvantage my students because I said this was cheating. True story, right?

Speaker 3:

The student who was working during the day and going to night school to take business communications told us that her boss came to her and said, you're the slowest person on the team, I need you to use AI to increase your speed and maybe the quality of your work. And then she goes to her business writing class hoping for some advice, and the professor says, no, no, no, no, AI is cheating. And she's like, I'm confused, my boss just told me I have to do this to improve my work. So I just think I want us to avoid getting in the trap of saying, you know, that's cheating. We just have to remind ourselves that what we call cheating, business might call progress or efficiency or any of those things. So I want us to be careful that we think about equity, and equity means we're not closing off possibilities for students' success after they leave us.

Speaker 1:

So how should we approach this issue of academic integrity? I mean, this has been a nut educators have been trying to crack since the invention of the clay tablet, and the clay tablet has nothing on what Claude or ChatGPT can do.

Speaker 3:

So when we talk about academic integrity, that just makes me crazy. I would much rather we talked about integrity, full stop, because the minute you say academic integrity, I think, oh well, in four years I can forget about it, right? But it turns out that integrity is integrity. Yeah, there are some special cases, and we should treat those as special cases. So I say we should start with talking to students about integrity and why it matters. What would happen to our community of learners if you were to do your math problems with AI and everybody else was doing them by hand? What would that do to your integrity as a human being, and your ability to get a job doing X or Y? Right?

Speaker 3:

I think students respond better to discussions about respect, integrity, et cetera. And if I say academic integrity, well, anybody with teenagers knows, the eyes are already rolling, you've lost the battle already. It's like, talk to the hand, I'm not listening. And so making academic integrity a special category, I think, is a generic mistake. So when we say start with pedagogy, start with integrity, start with what matters to students and is going to matter in all aspects of their life, not just in the classroom, you'll have a better chance of making that connection stick.

Speaker 1:

So, José, we like to involve ChatGPT in preparing for our podcast interviews, so I had ChatGPT help me with this next question. It said that you share some interesting findings around AI-powered chatbots and the fact that they're not only increasingly indistinguishable from humans but, in some cases, seemingly more caring and empathetic. So what ChatGPT would like to know is this: what do you think AI can actually teach us about the human domain?

Speaker 3:

So this is somewhat paradoxical, but AI is a better listener than most people. AI is more persuasive than 85% of people, according to this research.

Speaker 3:

And AI is a better helper to fix your phone or whatever, partly because it's a better listener. It doesn't get emotional. It's going to be more persuasive to my father. One of the researchers in this area says it's like Grammarly for empathy, which is a really funny thing to say, but it's an ability for me to get into somebody else's head.

Speaker 3:

So I think there are places where AI will be able to help. Again, better than average, maybe, but not better than the very, very best human; the most empathetic person that you know is going to be better than AI. But all of us know that person who's a really good engineer or contractor or, you know, landscaper, but is just a jerk, and so that person could be helped with AI. So if you're really a great lawyer but you have lousy bedside manner, then maybe AI is going to be able to help. So I think for the average person, there are going to be places where AI will be able to make us better human beings. Not that we should turn over, you know, student interactions to chatbots, which would be bad. But at three in the morning, it's good, right? If a student needs help with math or writing or any number of things, then a chatbot at three in the morning is going to be better than nothing. But we don't want to eliminate human beings from this, because I feel differently knowing it's a human.

Speaker 3:

But there's a really interesting study about vets, about when the vet calls to tell you your pet has died. Well, if your vet is really good and empathetic and has a lot of time, it's way better to have a human. But if your vet is like most vets, and they're busy and they treat a lot of animals and they don't remember, you know, what happened with Fido 10 years ago, the AI does. People actually liked it. The AI remembers that 10 years ago Fido chewed up your carpet. It makes you feel, and feel is the important word, it makes you feel better that the person you're talking to, who in this case is an AI, remembers that the dog chewed up the carpet 10 years ago, and the vet doesn't. It's a conundrum, and I think we have to be very careful, but I also think that there are lots of instances where it could make us connect better with each other.

Speaker 1:

So the Dalai Lama probably doesn't have much to learn from AI about empathy, but I imagine telemarketers and insurance claims adjusters could probably pick up a few tips. One can only dream.

Speaker 2:

José, I just want to say again how much we appreciate your time and your thoughts, and you're really helping to lead this conversation in our higher ed community. The book is fantastic and phenomenally valuable in helping instructors take this giant leap forward into understanding how they can put AI into play and learn with their students about this rapidly evolving technology. So we very much appreciate your taking an active role in this.

Speaker 3:

Well thanks. It's a very strange time to be in education and if I can play a small part in helping, I'm happy to do it.

Speaker 1:

So the good news is that there's plenty more where that came from. I'd encourage all of our listeners to pick up a copy of Teaching with AI: A Practical Guide to a New Era of Human Learning, by José Antonio Bowen and C. Edward Watson, available at a fine bookstore near you. With that, thank you all for tuning in. We'll talk at you again soon.

Speaker 1:

Higher Listenings is brought to you by Top Hat, the leader in student engagement solutions for higher education. When it comes to curating captivating learning experiences, we could all use a helping hand, right? With Top Hat, you can create dynamic presentations by incorporating polls, quizzes and discussions to make your time with students more engaging. But it doesn't end there. Design your own interactive readings and assignments that include multimedia, video, knowledge checks, discussion prompts, the sky's the limit. Or simply choose from our catalog of fully customizable Top Hat eTexts and make them your own. The really neat part is how we're putting some AI magic up your sleeve. Top Hat Ace, our AI-powered teaching and learning assistant, makes it easy to create assessment questions with the click of a button, all based on the context of your course content. Plus, Ace gives student learning a boost with personalized, AI-powered study support they can access anytime, anyplace. Learn more at tophat.com/podcast today.