Tech Travels

EP16: Innovative AI Integration: Enhancing Pharmacy Education with Dr. Kaitlin Alexander

May 28, 2024 Steve Woodard Season 1 Episode 16


Ever wondered how artificial intelligence can revolutionize higher education? Join us for an insightful conversation with Dr. Kaitlin Alexander, a clinical associate professor at the University of Florida and recipient of the AI Teaching Integration Award. From having no prior experience with AI to now pioneering its use in pharmacy education, Dr. Alexander takes us through her transformative journey. Through her engagement with a faculty learning community at UF, she has creatively blended AI into her teaching, especially within experiential rotations in the trauma ICU, making learning more engaging and personalized for her students.

Discover how AI tools like ChatGPT are being used to help students grasp the practical applications and limitations of artificial intelligence in evidence-based medicine. Dr. Alexander and I share experiences from our classrooms, where students analyze medical cases and critically evaluate AI-generated outputs against established guidelines and primary literature. This approach not only equips future healthcare professionals with essential skills but also fosters a deep understanding of how to utilize AI responsibly in clinical decision-making.

We also tackle the ethical challenges of incorporating AI into education and healthcare. From ensuring confidentiality under HIPAA and FERPA laws to addressing the biases and potential for misinformation in AI-generated content, Dr. Alexander and I explore the complexities and responsibilities that come with AI integration. Envision with us a future where AI enhances not only educational experiences but also patient care through real-time support and personalized learning, ultimately leading to more efficient workflows and insightful student reflections. Don’t miss this thought-provoking discussion on the intersection of AI, healthcare, and education.


About Dr. Alexander

https://pharmacy.ufl.edu/2024/04/22/dr-kaitlin-alexander-wins-inaugural-ai-teaching-integration-award/


Twitter
https://x.com/KAlexander4218

Support the Show.



Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/



Speaker 1:

We are willing to adapt to the current technology and change our approach to teaching and learning to hopefully make them better pharmacists in the future.

Speaker 2:

I love the idea of the "education personalized to perfection" phrase.

Speaker 1:

I think we could see personalized education happen, with programs that are tailored to the pace of the student, by utilizing AI and technology in that space.

Speaker 3:

Welcome to Tech Travels, hosted by the seasoned tech enthusiast and industry expert, Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your guide through the ever-evolving world of innovation. Join us as we embark on an insightful journey, exploring the past, present and future of tech under Steve's expert guidance.

Speaker 2:

Welcome back, fellow travelers. In today's podcast, we're discussing the role of artificial intelligence in higher education. We're thrilled to have the prestigious honor of hosting Dr. Kaitlin Alexander, a clinical associate professor in the Department of Pharmacy Education and Practice at the University of Florida. Dr. Alexander is deeply involved in critical care medicine and other prestigious pharmacy associations, with interests ranging from critical care management to infectious diseases, and her innovative use of AI in pharmacy education has set new standards, making her the perfect expert on today's topic of AI in higher education. Dr. Alexander, welcome to the show. It's a pleasure to have you.

Speaker 1:

Thank you so much for having me on, Steve. I'm happy to be here.

Speaker 2:

Welcome to the show. So I really want to dive into this. Recently you received the prestigious AI Teaching Integration Award, where you were recognized for using artificial intelligence in advanced pharmacy practice experiences in ways that haven't traditionally been used in the classroom. Can you share a little bit about how you first decided to integrate AI into critical care, and what really inspired this innovative approach?

Speaker 1:

Yeah, absolutely. So I'm really fortunate to be at the University of Florida, which has a lot of support and momentum behind artificial intelligence, and also behind integrating artificial intelligence for teaching and learning. I started this journey last summer, never really having interacted much with generative AI, and started to really gain an interest and see the possibilities and the uses it could have to advance education for students and for faculty. Through that I found a faculty learning community at UF that was focused on harnessing AI for teaching and learning. I participated in that last fall, and that was really a huge eye-opener and learning experience. Some of us described it as drinking from a fire hose, with all of the ideas and technology we were introduced to for how we can really bring AI into the classroom and into our teaching. From that I started to get a lot of creative ideas, network with others in the group who had similar interests, and develop this idea of how I could start to bring artificial intelligence to my experiential rotation.

Speaker 1:

So as a clinical faculty member, I have didactic teaching responsibilities in the classroom. I also take fourth-year pharmacy students and residents on rotation in the trauma ICU, where I practice as well, and so this is a really unique opportunity where I'm working with a small group of learners on rotations and we collaborate on our topic discussions, where I may have a group of five to seven learners during a rotation block and we focus on different topics that are related to the care of trauma ICU patients during that time to add to their learning experience, and I've been teaching the same topics for every single rotation block for a really long time and I thought AI would be a creative way to kind of reimagine what that topic discussion looks like.

Speaker 1:

So that was kind of the initial idea: how can I spice things up, make it more interesting, but also integrate AI for the students?

Speaker 2:

That's an incredible use case. And just to give me a little bit of background: before this journey started, what was your exposure to things like generative AI? Did you already have a foundational knowledge of it from being around it in the academic environment, or was it something you had to get into, learning some of the foundational building blocks and understanding what it is before you could learn how to apply it? Where did you really start with this?

Speaker 1:

I had no previous use, so I am by no means an expert, and I have been on this journey trying to learn as much as I can over the past year or so now, really.

Speaker 1:

And so that started with: what is generative AI, how does it work, how do these models work, and what can they really do, what are the possibilities? That's where I started from, and honestly, a lot of my own self-research, if you will, webinars, and other opportunities that have really started to come up within teaching and learning in higher education helped a lot to get some of that foundational knowledge. And again, the faculty learning community that I was part of really provided a lot of that. We heard from experts at the University of Florida who are AI experts, learned from them what it was, and then ultimately how we could use it. So I'm really an end user here, and I've gained a lot of my ideas and experience just through playing around with AI, seeing what it can do, and seeing what it can do for me and for my students and learners.

Speaker 2:

And from my look at the article, the students responded positively to the AI activities, but the goal was for students to understand the uses and limitations of AI in evidence-based medicine. Walk me through that: what was it about AI that they needed to learn, the limits and the use cases around it? What was the goal?

Speaker 1:

Yeah, so we're really in an interesting time here, right. Ai is pretty new for most people, my students included, and so these current cohorts that are in graduate programs, doctorate programs like our PharmD program they didn't have the same experience that undergraduate students are having now, where AI is being integrated into maybe their K through 12 curriculums undergraduate curriculums. It's very new to my student cohort still, and so they haven't had a lot of experience and exposure and, honestly, their past experience has been don't use it, it's cheating, right. So they've kind of closed their minds off to the uses of AI, especially when we're talking about classwork or an assignment from a professor. So, in this case, I wanted to expose them, though, to what AI can do, and also, though, that, especially in health care, there are a lot of limitations.

Speaker 1:

If you're asking for medical advice or a question, the information may not be completely up to date, it may not be fully accurate, and so you really need to trust but verify. And of course, we're seeing the models get better and better with the information they're providing and the accuracy and the sourcing and citing of information. But, especially initially, hallucinations were a huge concern. I took a journal article that I knew wasn't in the model previously, put it in, and asked it to give me a journal club or a synopsis of that article, and it completely made it up, right. And so I want the students to know that, yes, this is a tool that is out there that they can use and that will be very influential in health care during their careers, but they need to combine it with their own knowledge and expertise and verify the information, sources, and conclusions before they bring a recommendation to the bedside that would impact patient care.

Speaker 1:

So ultimately, that's what we worked through in this topic discussion together: giving them short cases that they could put into a generative AI tool, seeing what the response is, and then, in real time, facilitating a discussion with them. What do you think about that? Is that what you would recommend? Why or why not? What do our guidelines say? What does the primary literature say? Let's dig in and critique what you're getting out of your generative AI.

Speaker 2:

It definitely seems like that's a wonderful approach, because you're looking at AI as a tool and then you're also having the students evaluate whether, in fact, the information is correct. As you mentioned, the model is not always perfect and sometimes gives inaccurate information; sometimes there are hallucinations, and it's up to the person who is the subject matter expert, or the practitioner in the field, to really look at it and make that informed, data-driven decision. I wonder, how did the students react when they heard that? "I couldn't use ChatGPT or any of the other AI tools for classroom work, but here I get a chance to actually do some AI work. This is blowing my mind." What was their reaction to this?

Speaker 1:

I think that was their reaction. They were like, "You want us to use AI? What?" They were surprised, right? They'd never been told, "Yes, I want you to use this as a tool," and I think they thought it was really cool. I think it shows that we are willing to adapt to the current technology and change our approach to teaching and learning to hopefully make them better pharmacists in the future. I had some who were really excited, especially those that were already maybe using AI, and some who were apprehensive as well, because they had been told, you know, don't use this, stay away, and they had never even had the opportunity to play around with it, to know how it works or what to do with it or what to expect. So there was definitely a mix, and a mix of experience that students have right now with how to use it.

Speaker 2:

So it seems like there might have been a bit of a skills gap among the cohort, right? Some were probably very proficient with some sort of AI tool, and some were not, so I'm sure you probably had to have some sort of ramp-up period for some of those folks before they could start evaluating it. How did you narrow it down to a specific AI tool? Looking at the landscape of artificial intelligence tools and language models and different things, what was the decision process for narrowing it down to one, or just a few, that were really going to be fit for purpose for your use case in the classroom?

Speaker 1:

Yeah. So when I first started this idea, I immediately went to ChatGPT, just given that it's readily available using the free, unpaid version, the 3.5 model, because I wanted it to be accessible, or I needed it to be accessible, for all of our students, right? We can't expect them to pay for an additional service or AI tool. So that's one consideration: the cost associated with some of them, and that being a limitation when you're asking a student or a learner to engage with the tool. And then the other thing was the purpose, right?

Speaker 1:

There are other AI tools that I've played around with. One that I considered for this assignment, for instance, is Perplexity AI. That one is a little bit better at giving the sources of its information; it's meant to be more of a search tool for the literature, along with giving an overview. So that's another one to consider, and I think any of the openly available AI models could work well for at least the type of assignment that I adopted, and I didn't want to be too prescriptive.

Speaker 1:

If the students did have experience, they were welcome to use another tool of their choice, but to me ChatGPT is the one that's most common. Students know the name, and if I wanted them to get experience with one tool, that was the one I went with. Now, that's also already changed, right? The University of Florida now supports Copilot for both faculty and students, and that is secured once we're logged in behind our GatorLink login. That is the generative AI tool we as faculty are being encouraged to use with our students in the classroom going forward, and what I've been doing now is utilizing Copilot for these assignments.

Speaker 2:

I was just going to ask you about that. Now that you've been able to prove this prototype in the classroom, having a profound impact on the learning experience for the students and actually solving real-world use cases, I wonder what the implications have been across the college faculty. We know that AI is really starting to move into higher education, and I'm curious, now that we see the emergence of ChatGPT and Copilot being integrated and secured in specific environments like universities, what role it's been playing in enhancing things within the faculty, whether in efficiency, course development, or something like that. Could you talk a little bit about how it's now starting to bleed into other areas within the faculty of the university?

Speaker 1:

Yeah, I think that this is a huge topic with faculty. Take exam writing: it can be really challenging and very time consuming to come up with multiple-choice exam questions. It's one of the parts of being faculty you don't always think about. So getting ideas for questions, generating questions from AI based on content, has been really helpful. It can also be used in course development and course mapping, thinking about how you would create or approach new course content, goals, writing objectives, and assessment instructions.

Speaker 1:

So I've come up with the assessment I want my students to complete, but it can help me write up the instructions so that they're clear and the students know what they're supposed to be doing. It helps me summarize information for presentations: if I have a PowerPoint slide that's super wordy, I can put it into Copilot and tell it to make it into shorter bullet points. That saves me a lot of time compared with coming up with it myself. So there are a lot of uses just in the everyday tasks we're completing as faculty, and then I think there's a whole other realm of possibilities for how this is going to support research in the future: data analysis, and qualitative analysis of large volumes of feedback, for instance, which generative AI models are really good at summarizing, and in a short amount of time.

Speaker 2:

I love the idea of having education "personalized to perfection," as somebody once phrased it. Keeping an eye on how it's going to allow us to tailor-make these resources for a more adaptive learning experience that really has an ebb and flow matching the student's academic journey. We know that professors and university teachers have a lot on their plates. You mentioned things like multiple-choice questions, grading, and being able to synthesize large amounts of data and summarize it quickly, offloading that heavy lifting onto an AI model, which frees up time to focus on the things that really do matter in education.

Speaker 2:

And I know there have been some big players in the market. There's been huge growth in AI and education, and Microsoft, Google, and Facebook have really been pioneering a lot of development in the AI space. But I want to ask about the ethical use of AI in education, whether from your conversations with other university professors or from things the faculty or the university are talking about. In technology, it always seems like we use a tool and it's great, but then you get into education and it's a different conversation, with many more levels of complexity. Can you expand on a couple of things around this question of ethical use?

Speaker 1:

Yeah, I mean, there are a lot of concerns and challenges with implementation and making sure that you're implementing appropriately. One of the biggest things that comes to mind initially is just the confidentiality factor, especially when you're using an open AI model. I'm working in health care as well, so you obviously don't want to put any patient information into the model. It sounds obvious and straightforward to those in the know, but a student who is on rotation and has a patient-specific question may not think twice about putting that information into the model. The same thing applies in education: in higher education we have FERPA laws that we follow, and so we need to maintain student confidentiality with any information that we're putting out there into the model. So I think that's the number one ethical issue and concern with generative AI use and how students may use it. There are other challenges that faculty see.

Speaker 1:

Academic dishonesty is another big issue or concern that comes up of students utilizing AI to complete their assignments and not putting in the work, the time, the effort themselves, and we know that the AI checkers, if you will, are not accurate.

Speaker 1:

It's very difficult to tell whether a student completed an assignment themselves or whether it was written by ChatGPT.

Speaker 1:

We'll never know or be able to prove that, right, in an academic dishonesty situation, and so a lot of professors are turned off by that and don't know how to approach it.

Speaker 1:

And personally, I think the answer is educating our students about appropriate use: having them cite the use of AI when it is utilized, sharing how they used it for assignments, and being open and honest with them when I'm using AI so they can see that modeled for them. That way, we can help ensure an authentic student assignment and assessment. So those are some big challenges that we're facing. The bias that we see in responses from generative AI is another hot topic. And then there's the issue of potential misinformation, incorrect information, and hallucinations, and how that may impact students' learning and overall education. If we are encouraging use of these tools, we want to make sure that students are ultimately getting the information we want them to get to achieve their curricular outcomes.

Speaker 2:

You mentioned the biases in AI, and you talked a little bit about some of the biases you're trying to safeguard against. For our listeners, roughly 50% of whom may or may not be in the technical field, help them understand: what is an AI bias? What is it you're looking for when you're trying to guard against it?

Speaker 1:

I think a kind of simple example that comes to mind, right, is I utilize AI to help me come up with, maybe, for instance, patient cases on certain disease states that I'm trying to teach in the classroom and it can give me a starting point of a patient scenario, details to include that then I can expand upon and use that for case-based learning with our students.

Speaker 1:

And there are certain disease states that tend to occur more frequently in certain patient populations, certain genders, certain professions. If you prompt generative AI, that's when I see those biases come through, because the patient scenario it comes up with is always going to fit that mold. So that's one example, and I think we just want to be careful that our students understand and know that that's not the case 100% of the time, and that there may be certain bias introduced there. There's also bias in how you ask a question and in the types of information you're going to get out, which again may lead you down a path to a potentially incorrect answer for a patient who doesn't fit the general mold.

Speaker 1:

And I talk a lot about patient care because, ultimately, I'm teaching pharmacists, and we're preparing them to take care of patients upon graduation. So I really want to make sure that they are prepared to appropriately utilize the information they're getting to make the best decisions for their patients, in all of the teaching that we're doing.

Speaker 2:

When you mention patient care: as you look at the future and the landscape of AI and where this is headed, where do you see an ideal scenario, the perfect complement of AI with the right type of clinical analysis and clinical research that really drives an enhanced patient experience? What do you envision? What does the future look like for you?

Speaker 1:

I think that's a big question, Steve. There's a lot of opportunity. There are certainly a lot of AI platforms already live that are tailored to patient care, for instance, helping to come up with a diagnosis and a treatment plan on the spot if you put in some de-identified patient information. And I don't think we're that far away from seeing that go live in a widespread capacity, integrated into our electronic health systems, so that if I have a resident who is rounding on a patient and doesn't know the answer to a question, they could pull up a chatbot, essentially, within the EMR and ask: what medication am I supposed to prescribe? What dose is correct for this patient? What would you recommend? So I think there's a lot of opportunity there to help.

Speaker 1:

Another area in healthcare is writing and documentation. I'm sure you can understand that physicians and providers are incredibly busy. They see a lot of patients throughout the day, and all of those patient visits need associated documentation. What I see from my colleagues is that they're able to see all the patients and do the clinical work, but then they still have 50 notes to write or sign at the end of the day. AI can be really helpful there, and I think we're already seeing it implemented in healthcare to make the documentation piece of patient visits more efficient as well. So there are lots of things going on in this space. I certainly am not going to pretend to have all of the answers or ideas, and I'm sure there's a lot more happening too, but those are just some of the things that I'm seeing, or foreseeing, in practice.

Speaker 2:

And how do you see the evolution continuing within AI, toward more things you may have within your classroom setting? You mentioned pharmacology and patient care. This is a great win for you, because you've been able to showcase that AI is a very valuable tool and that you can have a very positive outcome when you include it in a proper classroom setting with the proper guardrails and instructions. What do you see as the next venture into prototyping something like this, but maybe a little bit different?

Speaker 1:

Again, it's the same thing in education: so many different opportunities. I have a ton of ideas for other ways that I'd personally like to incorporate AI into my teaching and my courses. One of the next projects I'm working on is potentially enhancing students' own self-reflective behaviors by utilizing AI, getting them to think more deeply about their own self-evaluations by interviewing with a chatbot to get to that deeper level. Initially, I thought that AI should be avoided for reflective writing, and I'm actually finding now the opposite: that it can be a really exciting and powerful tool there. So again, just lots of ideas.

Speaker 1:

On a higher level, I think we could see personalized education happen, with programs that are tailored to the pace of the student. Interacting with a persona via video or voice could be huge for education, among other possibilities of how we could implement it. For my students, I'm thinking about practicing their interactions: How would they interact with a patient? How would they counsel a patient? What questions would they ask? Right now we simulate all of that in a skills lab environment, sometimes with simulated patients, or with people we hire to come in and play that role, and I think in the future we could do that with AI. So, lots of possibilities and, I think, lots of changes for higher education and for learning in the future.

Speaker 2:

It's funny you mention, you know, more of the cognitive personas, about the difficulty of dealing with patients in a hospital setting, right. I think that's an interesting one, because you get all types of personalities, right? Happy, sad, and all the different ones. It's interesting. So you said that you get some people to come in and play the role of the patient, and you say, okay, go over here and try to diagnose this person, and the person's in a really bad mood or something. When you mentioned self-reflection, though, I'm not really too sure what that one is. So when you mention having students use AI for things like self-reflection, or not using AI for self-reflection, what does that really mean? Help educate me.

Speaker 1:

Yeah. So I mean, I want our students to be lifelong learners, which means that they have to have the ability to self-evaluate: what their strengths are, what their weaknesses are, where they need to continue to grow throughout the pharmacy curriculum, and then also, particularly, on rotations. When they're on their experiential rotations with me, they fill out their own self-evaluation alongside my evaluation of them, and what I see a lot is they just click through the boxes of the evaluation, and they don't add a lot of additional detail or examples to support, you know, why they think they're doing so awesome in this one area, why maybe they ranked themselves lower in another, and also what they're going to do about it if they aren't performing well in that certain area. And so what I foresee, or what I'm trying to implement, is I've created a prompt where they can interview with AI in these domains, right, that I'm evaluating them on, and the AI is prompting them back with questions of: tell me an example of when you did that well, what was the outcome, what would you do differently next time? And giving kind of that feedback that I'm hoping they can utilize and then put into their own self-evaluations, and again kind of reflect deeper than what I'm currently seeing, where students, unless I'm very specific, which I've learned in the past, about saying I want you to come with three strengths and three weaknesses that you're going to work on for the next three weeks or until the end of the course.

Speaker 1:

They tend to not come with a lot of specific examples or areas for growth, and so I'm hoping that AI can be a tool we can utilize to help them with that. What I was sharing about my hesitation, though, is that reflective writing, to me, really needs to be about you coming up with your own thought process on things, right? And so, using generative AI to write a self-reflection, I was like, well, that shouldn't be allowed. That was my gut instinct. But with this approach, I'm seeing that the students are the ones providing the examples. The AI is just asking them questions back to make them think more deeply about the example and, potentially, what they could have done differently to improve, or could do in the future to improve. So that's what I'm really excited about.

Speaker 2:

That's interesting. That almost dovetails right into my next question as we start to come to a conclusion. A majority of our listeners are technical people, though some of them are non-technical. What would you love to see from the tech community? People who are working on the large language models, people who are working with very different versions of artificial intelligence, people working in the generative AI space. What is your message to them? How can we improve AI in education? What are some things we could be looking at from a future perspective, things we should be focused on building into our programming now and also into the future?

Speaker 1:

Yeah, I think there are a lot of things out there that I probably have not tapped into, so I apologize if these things already exist and I just don't know about them or haven't used them yet. But support for giving feedback and grading students more efficiently, especially in really large courses, is something that is difficult in higher education, and really being able to give formative feedback to students, so utilizing AI to kind of help with that, is one area that I would foresee being very impactful with student assessment overall. Then I think growing AI applications where they can be tailored more to a specific course, right? That's another area, where I want to maybe take my course and make its own GPT. I know I have the capability of doing that, but right now I'm limited just from the cost standpoint of being able to make that available to all of my students and the cohort, and so making access more freely available.

Speaker 1:

Again, with this announcement from OpenAI, it sounds like they're looking at that model going forward, where hopefully we'll be able to make some of these GPTs for our courses.

Speaker 1:

That would be really impactful for student self-learning, right, where they can interact on their own outside of class, study with it, and do more there. And in the healthcare space specifically, I think just more accuracy is needed, honestly, tailoring those models to medical conditions and the medical literature. Again, I know that's out there and people are working really hard on it. It's just not something that's readily available right now to the end user in healthcare taking care of patients. And what I see right now is that there are still a lot of inaccuracies, where, as kind of the expert in the field, I don't trust the initial response, and I'm nervous at this moment in time that my learners aren't going to have the expert knowledge to critically evaluate that response if it's not the best choice for their patient, ultimately, right? So I think in that healthcare space, it's really more about the accuracy of the information that we're getting.

Speaker 2:

And that's a great point. I wanted to just double-click on this one here. When you mention healthcare, it seems to be a very broad lens that you can look at it from, and you say that, you know, there are data models that are very inaccurate, data models that have incorrect information or maybe some misinformation in there. From a healthcare perspective, who would you be looking at in terms of the trendsetters or industry leaders when it comes to AI? Would it be something like health insurance companies, healthcare institutes, pharmaceutical companies? You don't have to mention a specific company, but who would be that North Star, that shining light, for you to look at as an industry leader and say, this is what we're striving toward?

Speaker 1:

I think that's a great question. I don't know that I have an awesome answer for you, Steve. I can tell you what would probably be most impactful, though: working with our electronic health record and electronic medical record companies to implement the AI. When it comes to developing the model, I really want to see evidence-based medicine incorporated into it, right? Our latest guidelines, our primary literature, that's what we really base our decisions on in practice, and that's kind of my barometer for, you know, coming up with the best recommendation for patients. It's not looking to a health insurance company to tell me the answer based on their formulary. Right, we have to take that into consideration when we're making treatment decisions for patients, because obviously the financial aspect is a huge part of whether or not they're going to take their medication. But ultimately, I want it to be whatever is going to be best for the patient and their condition. Does that make sense?

Speaker 2:

Yeah, it does. You kind of want, like, an unbiased opinion, almost, right? So insurance companies, of course, are probably going to skew some sort of predictive or prescriptive analysis based upon current trends and what they would like to see, based upon their actuarials or something like that. So it's interesting, and I'm kind of curious now. You mentioned, you know, the electronic health records that are being kept, but then also still being able to use AI to look at a broad set of data and then still have almost kind of like a board or peer review of the data, where everyone can go back and forth to see if, in fact, this is the best outcome that we're looking for, for something that is very patient-centric, correct?

Speaker 1:

Yeah, and I think that's where the money is, right? If we can synthesize real patient data and outcomes into an AI model and then work from that, that's what's going to be huge in changing healthcare going forward. And I know at UF we have researchers who are working on building those models within, you know, specific patient populations and trying to collate all of that data into a model, and so I think those are the areas that are going to be really impactful for patient care and really help us look at outcomes, make informed decisions based on patient outcomes, and then ultimately improve care.

Speaker 2:

I think that's the biggest message, of course: just trying to improve patient care. Dr Alexander, I really want to thank you for your time today. Thank you for sharing your thoughts and insights with our listeners. I know this is a topic that I've been wanting to dive deeper into, and I hope to have you back on the show again. I would love to keep up with your journey and how you're progressing with AI in the classroom, with your students also having this great experience where they go, yay, we could use AI in class. Yes, Dr Alexander is awesome. This is the best class ever. Please take our class. I don't know, something like that, probably. Right?

Speaker 1:

Me too.

Speaker 2:

Good evaluations would be awesome. Wonderful. Well, again, Dr Alexander, thank you so very much for this opportunity. I look forward to the research you're doing. How can people follow you to keep up with some of the great things you're doing there at the University of Florida?

Speaker 1:

Yeah, absolutely. So, I mean, you can feel free to reach out to me via email. It's readily available on our website through my faculty profile, so that's probably one of the better ways, other than, you know, your typical channels like X and whatnot. I'm at KAlexander4218 if you want to look for me or give me a follow. And yeah, I would love to connect with anybody who's interested in chatting.

Speaker 2:

Wonderful. Well, Dr Alexander, thank you so very much for your time, and thanks everyone for listening. This has been an illuminating show. Cheers.

Speaker 1:

Thank you.

Artificial Intelligence in Higher Education
Enhancing Learning With AI Tools
Ethical Challenges in AI Education
AI in Healthcare and Education