ADCET

ILOTA Things: Episode 4 - Smart Summaries: AI and UDL Making Online Learning Inclusive

July 22, 2024 | Season 1 Episode 4
Darren Britten, Elizabeth Hitches, Joe Houghton

Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI.

In this episode, titled Smart Summaries: AI and UDL Making Online Learning Inclusive, we're going to take a look at how some of the modern AI tools can help educators and students summarize, clarify, and repurpose materials to make them more accessible and inclusive.

More information including episode notes and links are available on the ADCET website.

Transcript


Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning (UDL), artificial intelligence (AI) and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe.

Joe: Hello and welcome from whenever, wherever and however you're joining us and thank you for your time as we investigate ILOTA Things, that is Inclusive Learning Opportunities Through AI. My name's Joe Houghton and joining me once again on our artificial intelligence, universal design and accessibility merry-go-round are my co-hosts Elizabeth Hitches. 

Elizabeth: Hi there.

Joe: and Darren Britten.

Darren: Hello everyone.

Joe: So today's episode we've titled Smart Summaries: AI and UDL Making Online Learning Inclusive, and we're going to take a look at how some of the modern AI tools can help us summarize, clarify, and repurpose materials to make them more accessible and inclusive. We want to be able to reach different audiences, who might want to consume the material in different forms and through different ways of experiencing it, be it text-based, be it audio, be it video, and the tools we have available to us today make it easier than ever before to transform materials. So, I'm going to throw over to Darren to kick us off. You've been in the EdTech space now for a good while, Darren; is this idea anything new?

Darren: The short answer is no, there's nothing new. Everything you just said in your introduction has been possible in some way, shape or form, with one slight difference, and that is the amount of time and human input that's been required in the various steps and processes to get those various formats. What these new tools can do, and the speed at which they can repurpose information, is what I'd call new. In our last episode, Multimodal Miracles, we discussed the ability of these tools to transform information into different formats and different media. However, the things that we can do with that information, and the opportunities for interrogating and presenting it in different ways based on an individual's needs, present new ways of interacting with existing material that we didn't have before.

And this is what excites me from an accessibility and a student point of view: these tools can now take, say, an input from a lecture, a presentation, a slideshow, and allow it to be broken down, summarised, simplified, clarified and repurposed for an individual's needs. So, back to your original question: is this new? No, it isn't, but what can be done from an individual user's perspective is so much more than just converting material from one format to another. But before I ramble on about the myriad things you can do with these new tools, Elizabeth, can you provide some context on how summarising, clarifying, simplifying and repurposing information fits within the Universal Design for Learning framework?

Elizabeth: Absolutely, and I think what I'll start with is thinking through a few different scenarios just to get us imagining where we might actually consider using summaries. Where would it be relevant? And then what we'll do is think about how this can actually align with that UDL framework. Summaries themselves are not just about condensing information; they can also be about clarifying information or perhaps even simplifying information. So I want us to imagine we've got a really large piece of text, or perhaps a very large audio file and a large transcript of that audio file, that might be really overwhelming to come across. Or perhaps it could be something that is very complex, and the complexity of that piece of text or audio could be overwhelming itself. If we had a plain language summary, that could become a really good entry point for someone to build on. So they come across that plain language summary first, they know what that document is about, and then they can dive into that longer or more complex piece. So perhaps someone might have English as a second language, or perhaps a learning difficulty or disability. But it could also be really helpful if you might have missed a class, and you want to get a quick overview of what's taken place in that class before you dive in.

Let's also imagine that you may not have just missed that class; let's imagine you've been attending, but you might have experienced pain or energy drops throughout, or something might have broken your concentration. If we actually have a short summary, that might help to fill in some of those gaps of what might have been missed during that time, or even to refresh your memory. This could even happen if you're studying at home: you might have young children in the background, or you're in the middle of a really important meeting but you have an urgent phone call. If you can fill in those gaps quickly with that summary, you can be quickly caught back up to speed. Now, this can also be helpful if you're someone who might have working memory challenges, so small summaries could give you those bite-sized reminders of what's actually been covered. You could have gone through maybe 13 weeks of classes and realize that you're perhaps not too clear on a particular concept, but you can't quite remember where that concept was covered. Diving into those short summaries could mean you can find quite quickly where it is that you need to explore deeper; where was that taking place?

Now, if we take that universal design for learning lens, what we're doing is remembering that not every student and not every colleague is going to be using information in the same way or experiencing it in the same way. So a single summary could have many different uses and many different benefits, for a number of reasons and for a very diverse range of individuals. It's all about how the individual can use that summary that's provided. But if we provide it up front, we might actually be reducing barriers or even enabling greater access from the outset. So that's where that UDL framing can come in. But where might it actually align with those UDL guidelines? Well, at this point, if you're near a device, feel free to open up the CAST guidelines on the CAST website and we can go through those together.

So what you'll see is there's a section around providing options for comprehension. So, there's a checkpoint here about activating or supplying background knowledge. So, if you're going into a new meeting or you're about to go into a class, you may not remember what happened in the last one very clearly. You can very quickly access that summary and quickly revise and then be able to draw in that information as you go into that next meeting or that next class. 

Now we've also got a checkpoint around highlighting patterns, critical features and big ideas, and you can imagine that in a larger transcript, a large audio file or even just a large document, a summary highlighting those critical ideas could make salient the key points or the key takeaways.

But we know that it could also be used to guide information processing. Imagine that there's a document that people might possibly get lost in; you know you can get lost in that detail. That summary can help guide people's information processing to the key points and make those front and center.

So what we can also do is think about the role that summaries might play in supporting executive functioning. Perhaps we might be revising meeting notes, or revising various classes ready for an exam. We could possibly build in ways that support individuals to manage information and resources by drawing on those summaries as a bit of a guide to that information processing. So various different summaries might give a really good overview of what's happened and some of the key concepts to focus on and explore further. But, as you can see, there's not just one way to use those types of summaries through that UDL lens. There are many different ways we can use them, and providing that summary upfront could be beneficial to a number of different individuals.

Now, something I often think about is that if I were to do this for every single class, keeping in mind that the tutorials I teach are often two hours long, that could be a really big task to do manually. Summarizing each of those classes across about 13 weeks, summarizing various meetings, summarizing various resources, could take considerable time to do by hand, and that time isn't always able to fit into our day or allocated into workloads. So what I'm really excited about is to think what tools might actually support us to do this more efficiently and provide some of those benefits that we've just considered. So, Joe, I'd love to know: what tools are you coming across that can actually support us to do this and help us to create those summaries?

Joe: There's a growing range of tools that we can use as educators, and I think it's also important to note that many of these tools are free or very cheap to use. What I'm increasingly finding myself doing as an educator is using these tools to make sure that I create as many ways of representing the information as possible. Sometimes you just haven't got enough hours in the day. You know, you two have been doing the conference this week, and there just wasn't enough time to do everything, was there? But if we train our students in the use of these tools and the capability of these tools, we also give them agency, we give them executive function skills that perhaps they didn't have before, and we give them the tools to do their own conversions. Because maybe a student encounters a piece of material and thinks, oh, I wish I had the transcript of that, and it's not available. But if they know that there are tools that they can just drop the video or the audio into and it'll give them a transcript, absolutely fantastic. And a lot of the tools we're already using do this. Zoom, for instance; a lot of us are using Zoom, we're using Zoom today just to record this podcast, I'm here in Dublin and you two are in Australia. There's a thing in Zoom now called the AI Companion, but you have to go into the settings to turn it on, and that allows you, halfway through a meeting, if you came in late or you had a call or something happened, to come back in and just say: catch me up, was my name mentioned? summarize the last 20 minutes for me. And it'll just generate that kind of thing quickly, on the fly. So a lot of these capabilities are now built into tools that we're actually using day to day.

So don't overlook what you've already got, because the capability may already be there. I mean, there are so many tools; we'll have a lot of links to different tools in the show notes. From a text conversion point of view, we've got things like fireflies.ai, an AI note taker which you can just turn on in the background, and it will transcribe, summarize and analyze a meeting, a class, any kind of group activity. And you don't necessarily have to be in an online meeting to do this: fire a laptop up and the microphone will capture the discussion in the room. So very often now, if I'm in an in-person meeting, I'm still going to fire up Fireflies or something like it, and it will do the minutes; it will take the notes, and at the end we've got a transcript. And it doesn't matter whether you're on Zoom, Google Meet, Teams, whatever you're using; these tools are pretty much cross-platform now.

Otter.ai is another transcription and summarization tool; I've paid for the Otter subscription for the last 18 months and I use it all the time. Rev.ai is another one. Supernormal specializes in customized AI meeting summaries using templates, so you can set templates up that essentially say to the AI: right, we're going to cover these headings, so when you come across a point that belongs under a heading, drop it into the template. So that's quite nice, a customizable scenarios tool. And MeetGeek is another one of those. They all have weird names, these tools, don't they? And another one is Bloks.

But it's not just text. With the new AI tools, we've now got abilities we've never really had before. I mean, OCR, optical character recognition, has been around a long time, but it was never very good; now it's really good. So you can take a photo with your iPhone of handwritten text. Or, if you're traveling in a different country, you can snap the menu that's in Chinese or Korean or whatever your non-native language is, and it'll translate it on the fly, and the translations are getting really, really good. So converting from handwritten text to editable typed text is becoming very, very easy. Apple announced at their developers conference on Monday this week a deeper integration of lots of AI tools across all their products, the iPhone and the Macs and the iPads, and come autumn this year it will just be built in. We'll just be seeing this stuff working in the notes we take, in Word and all the rest of it. Microsoft are building this into Word with Copilot.

So what if you're a listener? Some people like to listen: I don't want my material in text form, I want it in audio form. That's come on in leaps and bounds in the last six months, particularly the voice engines. Siri, again, got a huge upgrade on Monday. Finally, we've only been waiting what, 12 years for that, and now it sounds like a real person. So you can point a text-to-speech engine at a PDF and say, read it to me. Up to now they've been a bit robotic and I didn't like them, but you can now get really good voices, and you've also got the ability, obviously, to speed them up as well. My wife listens to podcasts at two times speed. It all sounds very gabbly to me, but it works for her; she can get through more material more quickly. And that's agency, isn't it? That's executive function. She's choosing to do that. So really, really good.

So I've given you a few tools. But more importantly, and particularly from the ILOTA point of view, why we're here with this podcast: from an accessibility standpoint, how does this ability to transform material improve accessibility? Well, converting audio into text, or text into audio, can benefit people with different challenges. If you've got a hearing challenge, then maybe you don't want material in audio; you want it in text form, you want it more visual. But if you've got a visual challenge, then maybe you want it more in audio. Well, we can do both now, so we can transform, and the modality the material originates in mostly shouldn't matter.

Elizabeth, you mentioned attention deficit. You know, you're sitting on an online call and you have a pain attack, or you're really tired because you've got an autoimmune disease or whatever that just knocks you out for the morning, and you just can't pay attention. You go for a nap, you come back in an hour, and you ask for a summary. And that reduces cognitive load, that reduces anxiety, that reduces stress, because you're worried about missing the information. You've been looking forward to this conference, this meeting or whatever it is, and all of a sudden you're just not up for it; you just can't do it. But if you know these tools can provide this kind of summary, then that pressure lifts. And this is what it's about, isn't it? It's about making our lives better.

So, motor impairments: you can initiate this stuff now with voice commands, if you set your machine up in different ways. And neurodivergence: AI writing assistants can help people better express ideas. They can take complex ideas and simplify them, or they can take lots of ideas, pull them together and summarize them in a more concise way.

So there's so much different stuff here, and I'm going to throw this over to Elizabeth now. You're teaching people; you've got your role at Griffith and other universities as well. How do you pass this information on, not just to your students, but maybe also to colleagues? Because I think that's something we need to think about as well, isn't it? That's why we're doing the podcast; the audience here is probably mainly educators rather than students. But does everybody know about this stuff?

Elizabeth: Oh, it is such a great question, and, you know, colleagues, students, I don't think everybody knows about this stuff. If we look specifically at students, the students that I teach are just learning about inclusion and UDL for the very first time, so they're very fresh to thinking about why we might need to reduce barriers in the first place, why we might need to be creating inclusive resources and learning environments. So I think we really need to bring in these newer conversations about technology and how some of those more manual tasks can be made more effective through the use of technology. Because what's going to happen for our students when they finish up their degrees? We're not just hoping to prepare them for the first day on the job; we're hoping to prepare them for what might eventuate in the next five years and beyond. So how can we set up that foundation to build on that knowledge, and make sure that they have that foundation to then keep advancing and keep up to date with these changes?

So, alongside the conversations about UDL and accessibility and inclusion, I think we need to be sure that we're building in conversations about those AI tools and how they might actually open up avenues for accessibility and help us to strengthen inclusion in various different ways. So, Darren, can I ask you, how are students starting to use these tools to help with their learning? 

Darren: Just like educators are starting to tinker with these tools, so are the students themselves. I'm seeing more and more students use these tools to help them with their studies, from essay planning and organising information, even to review and reflection. Beyond the headlines of academic integrity, students are looking at these tools as study aids, and for breaking down and repurposing information into structures and formats that work for them. For instance, while there are many essay planning tools available, AI now offers the ability to craft an essay plan based on your own strengths and weaknesses. I'll give an example. Using, say, Perplexity.ai, I can enter a prompt like 'I have a 2,000-word essay due on X date, can you help with an essay plan for researching, writing and editing?' and it will spit out an eight-stage breakdown with bulleted information on each stage. Now, I may look at that plan and say, I really need more time for writing and I want the key dates listed. So I can then ask a further prompt like 'can you provide more time for writing, list key dates for each stage, and add how many words I should be writing each day'. That could be really useful for some students, and it's really important, I think, for students to learn how to use these tools and work out which ones work best for their needs.
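
For anyone who would rather script that kind of prompt than type it into a chat window, here is a minimal sketch in Python using the openai client pointed at Perplexity's OpenAI-compatible API. The endpoint URL, model name and key placeholder are assumptions based on Perplexity's public API documentation; check the current docs before relying on them.

```python
# Minimal sketch: asking an LLM for a personalised essay plan.
# Assumes Perplexity's OpenAI-compatible API; the base_url and model
# name are assumptions, so check the current documentation first.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder, use your own key
    base_url="https://api.perplexity.ai",  # assumed endpoint
)

plan_request = (
    "I have a 2,000-word essay due on X date, "  # 'X date' kept as a placeholder
    "can you help with an essay plan for researching, writing and editing? "
    "Provide more time for writing, list key dates for each stage, "
    "and add how many words I should be writing each day."
)

response = client.chat.completions.create(
    model="sonar",  # assumed model name; substitute whatever the docs list
    messages=[{"role": "user", "content": plan_request}],
)
print(response.choices[0].message.content)
```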

Learning how to get the most out of these tools can make a huge difference if you're using them yourself to support your own learning, and that new knowledge and those skills come from playing with the tools and working out what you need and what the tools can do. So, with that student lens, I think it's really important for students to play and explore for themselves, rather than for us to just provide them with a set of prompts. We want students to discover what's possible and work out what works for them and their individual circumstances. I've seen students just come alive, you know, and be aghast at what this technology can actually do to assist them, things they didn't even know were possible.

Elizabeth and Joe, you both touched on some of the things these tools can do, from a technical point of view and also through that UDL lens. So I'll just offer another example here. I have the privilege of working with some students on just this: designing prompts that work for them. So here's an example of that transformative power of these tools that students can use.

As Joe has mentioned before, many of these tools allow you to upload source information that you can then do something with. For this example, I'm going to use Claude.ai. So we have a transcript of a lecture. The student has watched and listened to the recording and now wants to make notes and test their understanding. They can upload the transcript or cut and paste it, depending on the tool, and then prompt the tool to, quote, 'summarize this lecture transcript', and each tool will summarize it in a different way.

However, this is where it can get interesting, and we encourage people to explore the different ways the tools can give you information based on your prompt. There's a difference between prompting 'summarize this' as opposed to 'can you provide a summary of', or even asking for a 'brief' or 'detailed' summary. And this is where the good stuff comes in, or, as Joe refers to it, the magic. I can enter a prompt to give me more than just a summary, such as 'provide a short overview of the attached transcript, followed by a breakdown of key topics with bullet point information', and it will do just that: provide an overview of the lecture, key topics with sub-bullet points, and so on. Multiple things all at once. Imagine the time that would take to do manually, and the different processes you would have to go through as a student. I could go even further after that prompt and ask for more or less detail if needed, ask it to expand on a particular topic, highlight specific subject terms, provide a glossary, and so on. So I can end up with a prompt like this: 'Provide a short overview of the following transcript, followed by a breakdown of key topics, with bullet point information highlighting subject-specific terms, and provide a glossary of these terms at the end', and it will do just that. So you can see how powerful these tools are and how much time they could save, along with the flexibility and opportunities they provide for summarising and repurposing information in a way that works for an individual.
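
As a rough illustration of exploring those prompt variations side by side, here's a minimal sketch that runs each phrasing over the same saved transcript. The file name and model name are assumptions, and any chat-style tool could stand in for the API used here.

```python
# Minimal sketch: comparing how different prompt phrasings change a summary.
# Assumes an OpenAI-style chat API and a transcript saved as plain text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = open("week07_lecture.txt", encoding="utf-8").read()  # assumed file

prompts = [
    "Summarize this lecture transcript.",
    "Can you provide a brief summary of this lecture transcript?",
    "Provide a short overview of the following transcript, followed by a "
    "breakdown of key topics, with bullet point information highlighting "
    "subject-specific terms, and provide a glossary of these terms at the end.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any capable chat model should work
        messages=[{"role": "user", "content": prompt + "\n\n" + transcript}],
    )
    print("PROMPT:", prompt)
    print(response.choices[0].message.content)
    print("-" * 60)
```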

Each tool has a different way of presenting that information and may need a slightly different prompt, but you can see how these tools can be extremely useful in assisting students in reviewing, in this instance, just a single lecture. I can then go further and test my understanding of the lecture with a prompt like 'create 10 multiple choice questions based on the key themes of the lecture', and it will do just that: give me 10 multiple choice questions. I did not specify that I wanted answers, so it hasn't actually provided any, but if I want to check those answers, I can then prompt 'can you provide the correct answers to each question' and it will do just that.
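
That follow-up works because these chat tools keep the conversation history, so the model knows which questions it has just written. A minimal sketch of the two-turn exchange, with assumed file and model names, might look like this:

```python
# Minimal sketch: a two-turn exchange. First ask for questions, then ask
# for the answers, keeping the history so the model remembers its own
# questions. File and model names are assumptions.
from openai import OpenAI

client = OpenAI()

transcript = open("week07_lecture.txt", encoding="utf-8").read()
history = [{
    "role": "user",
    "content": "Create 10 multiple choice questions based on the key "
               "themes of this lecture:\n\n" + transcript,
}]

questions = client.chat.completions.create(model="gpt-4o", messages=history)
print(questions.choices[0].message.content)

# Append the model's reply, then ask the follow-up in the same thread.
history.append({"role": "assistant",
                "content": questions.choices[0].message.content})
history.append({"role": "user",
                "content": "Can you provide the correct answers to each question?"})

answers = client.chat.completions.create(model="gpt-4o", messages=history)
print(answers.choices[0].message.content)
```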

Of course, the more detailed prompts need to be crafted and considered. They come from playing with the tools, working out what they can do, and then adding in your individual needs, step by step. Now, that final prompt: take it, save it, store it somewhere, so that you can do the same thing with your next lecture and hopefully get the same results. I say hopefully because these tools are constantly updating, with new versions and features as well. So if that same prompt does not work in a newer version, you may have to build a new prompt, or you can go back and use the previous version, which most of these tools allow you to do.

So there's an example of a week-by-week scenario where AI can assist a student with their studies. However, that's just the beginning. Imagine we're now at the end of the teaching period and there's the final assessment, possibly an exam in some cases. So imagine combining all of those transcripts from your week-by-week study and all of your notes, and putting those in as the basis for a prompt such as 'provide me 50 multiple choice questions', and it can do just that. Or even 'list all the times the word exam was mentioned', and it can look through all of those transcripts and scan for wherever the lecturer may have said 'this will be on the exam' or some such comment; the things we used to highlight and circle and go, this is important, remember this, because the lecturer mentioned it. So, again, my working memory may not have been great at the time I highlighted it, and I've forgotten it now that it's come time to study for the exam. But because the word exam was mentioned, I can go and find that within those notes.
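
Finding every mention of the word exam across a set of transcripts doesn't even need AI; a plain text search will do it. Here's a minimal sketch, assuming transcripts saved as week01.txt, week02.txt and so on in a transcripts folder. Note that a naive search like this will also match words such as "example", so refine it as needed.

```python
# Minimal sketch: scanning a folder of weekly transcripts for every
# mention of "exam", with a little surrounding context. The folder
# layout and file names are assumptions.
from pathlib import Path

for path in sorted(Path("transcripts").glob("week*.txt")):
    text = path.read_text(encoding="utf-8")
    lowered = text.lower()
    start = 0
    # A simple substring search; it will also match "example", "examine",
    # and so on, so tighten the match if that becomes a nuisance.
    while (hit := lowered.find("exam", start)) != -1:
        context = text[max(0, hit - 80): hit + 80].replace("\n", " ")
        print(f"{path.name}: ...{context}...")
        start = hit + 1
```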

Now this is where I will, as usual, suggest a note of caution. I am not suggesting students don't need to listen to or attend lectures; there's so much more than just what was spoken and what's in a transcript. Also, remember that AI can hallucinate and make things up, or even miss key information. And, more importantly, you can't assume that the transcript of a lecture or tutorial is accurate, especially if it's been automatically generated. The quality of these can vary significantly depending on the speaker, where it was recorded, the audio quality, the quality of the microphone, and so on. And, just as importantly, the subject matter itself: it may have colloquialisms, abbreviations, specific terminology. The list goes on.

Given that most of these tools require input in text format, audio and video need to be converted into text first. And while automatic transcription has improved considerably in recent years, it should still be scrutinized; you need to know that your source of truth really is the truth. You also need to consider the privacy of the information that you're putting into these systems, what those tools do with it and where it's stored. And I'll throw to you, Joe, because I know governments, educators and industry, everybody's scrambling and playing catch-up in this space, aren't they?
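
One practical way to scrutinise an auto-generated transcript is to generate a second one yourself and compare the two. As a minimal sketch, the open-source whisper package (pip install openai-whisper) can do this locally; the file path and model size below are placeholders.

```python
# Minimal sketch: generating your own transcript locally with the
# open-source whisper package, so you can compare it against an
# auto-generated one before trusting either. Requires ffmpeg installed.
import whisper

model = whisper.load_model("base")        # small and fast; larger models are more accurate
result = model.transcribe("lecture.mp3")  # placeholder path to your recording

print(result["text"])
```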

Joe: Yeah, and there's the global issue as well, that certain parts of the world have different rules around privacy. The European Union has this thing called GDPR, which is quite a restrictive set of rules that came in a few years ago around data and privacy and what can and can't be shared. In practical terms, what that's leading to at the moment is that a lot of these AI tools and features become available in the US straight away, and maybe in Australia at the same time, but they hit the guardrails when they come over to Europe, so we have to wait two, three, four months for things to move into Europe. So some tools may be available in some parts of the world and not in others. And, like you said, this is a rapidly evolving field.

There were two stories this week. I mentioned Otter previously, and I'll throw that over to Darren in a minute, but the Adobe story broke last week, where they had made a change to their terms of engagement. Adobe, you know, a big global corporation: Photoshop, Illustrator, all the design tools. And their new terms, which people had to actually agree to before they could log into the tools.

When you read the fine detail, they were basically saying that Adobe has rights over all your materials and all your creations. There was a huge hoo-ha over the last few days about this, and they're rolling back very quickly on it, saying no, we're not trying to assert IP over your creations and all the rest of it.

But it just goes to show that we do have to be careful about attribution and about who owns the intellectual property when we are creating information and putting it into these tools. And what we're going to see over the next year, and I've already started playing with this, is private AIs: AIs that run on your machine, that don't necessarily have to send information to the cloud, and you can point them at a data set that is local to you. Then the AI will be able to give you all these summaries and transformations, but it won't have to put them onto some multinational corporation's servers where you don't know what's going to happen to that data. You were talking about Otter.ai and a similar issue recently, Darren.
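
As a rough sketch of that private, on-device idea: a tool such as Ollama runs models entirely on your own machine, so nothing is sent to the cloud. The model and file names below are assumptions; you would first need Ollama installed (https://ollama.com) and a model pulled locally.

```python
# Minimal sketch of a "private AI": summarising a local file with a model
# running entirely on your own machine via Ollama (pip install ollama).
import ollama

notes = open("meeting_notes.txt", encoding="utf-8").read()  # assumed file

response = ollama.chat(
    model="llama3",  # assumed model name; run `ollama pull llama3` first
    messages=[{"role": "user",
               "content": "Summarise these notes in five bullet points:\n\n" + notes}],
)
print(response["message"]["content"])
```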


Darren: Yeah, in terms again of companies, what they do with the data we send them and what rights they have over that data, those terms of use that we tick the box on. A colleague was talking with me recently about their institution: Otter.ai's terms may have changed such that the company owns the information, which for their institution is not good; they didn't want that. So they actually lost hundreds upon hundreds of licenses that the institution had, because of how that data was being stored. So we need to be really careful about where that data is going, how it's being used, and so on.

Consider also that that information may have students' names, students' details, identifiable information, or information that you don't want public or stored somewhere. There's a lot of sensitive information that gets discussed in some tutorials. Or look at the ethical nature of research data and the like, where that use may not have been covered by ethics approval. Look, I know that discussion is certainly being had in academic spaces, particularly around postgraduate work and the use of these tools. Elizabeth, I suppose I'll throw to you then: with these ethics and pitfalls of using AI, is that an academic concern?

Elizabeth: It definitely is an academic concern, and I think it's one that every single one of us is really still working to understand. And, you know, even if you get an understanding of the current AI capabilities, as those capabilities evolve, so will the benefits and the pitfalls. Things are getting updated quite frequently, quite rapidly, so there's a real need for us to upskill and make sure our understanding is as up to date as possible. I don't think any of us can really say that we've got a firm grasp on this, and each tool has a different license agreement and different capabilities. So there's definitely a need for us to be having some really deep conversations to upskill educators, and also to upskill our students at university to truly know what you can ethically do and what you can't. And these conversations need to go much deeper than just academic integrity; the accuracy that you mentioned before, and also privacy, need to be really key parts of these conversations. Especially if we're going to be sending students out into the workforce, hoping that they're fully equipped to manage what that workforce is going to involve, we need them to have the understanding of what these tools are and how to consider them in an ethical way going forward.

Darren: Yeah, I totally agree. I think we're all learning how to work with these tools and how to get the most out of them, and I'm not sure that I fully agree with IBM's sobering view recently, which was that 'AI won't replace people, but people who use AI will replace people who don't'. Look, there's some truth in that, and this is an evolution; we and the tools will keep evolving. But that's providing us with a lot of things (ha ha) and, more importantly, ILOTA things, that is, those inclusive learning opportunities for us to continue to explore and discuss.

Elizabeth: I think it is important, too, that we start to build in these conversations about the AI tools, how they can open up avenues for accessibility, and how we can strengthen inclusion through using these tools, so that, as this area progresses over the months and years ahead, our new educators and our current educators can keep building on that understanding. So we can think about how we're building it into students' everyday use, how we can make sure that we're transparent in showing them if, when and how we're using it, and also how we can equip them for the future in using those tools too.

So, speaking of that evolution and everything changing, Joe, you're more up to date on this front, so please, in just a minute, can you tell us some things that are new? What has stood out for you in this last week or so?

Joe: A few things going on; there's always a few things going on. Last week, and we're recording this on the 14th of June 2024, OpenAI, the creators of ChatGPT, announced an educational version of ChatGPT for universities, with all the guardrails that everybody's been calling for built in, and also a similar version for non-profits as well. So universities, non-profits and educational institutions, keep an eye out for that, because it will obviously gain traction pretty quickly, and we'll put a link to it in the show notes.

Perplexity is an AI tool that's been getting a lot of press recently; they've got a very good, well-developed search engine now, and they've just released a new feature called Pages, which I actually used to pull together notes for this podcast today. It's a game changer for content creators, and also for teachers and students who want to pull together a quick summary on a particular topic. You go into Perplexity, you type in a search, and then you ask it to create a page; it generates a structured web page which you can use yourself, but you can also just share the link and make it available to everybody else. Notion does a similar thing as well, and I'm now using Notion and Perplexity hand in hand. So again, we'll put some links in the show notes for those. But they're magic, absolutely magic tools. So that's it from the newsroom, I think. Elizabeth, wrap us up.

Elizabeth: Thank you. Now, as you mentioned, there is so much happening in this space, so much to cover, and one of the best ways to learn is really to have a bit of a play with these tools. So please do go and explore some of these tools yourself and try some of the things that we've discussed, obviously keeping in mind those principles of privacy and ethics as you go, so play safely. You can find the links to the tools that we've discussed today, and the text prompts, in this episode's show notes on the ADCET website at www.adcet.edu.au/ilotathings.

Joe: And, of course, we always love hearing from you. So please do drop us a line. Let us know how you're using AI, how your students are using it, but also ask questions. We're looking for new ideas for new episodes, so if you discover a new AI tool that you're finding useful, or you've got an insight into accessibility that you'd like us to explore, please drop us an email at feedback@ilotathings.com.

Darren: And unfortunately, that's our time for this episode. I hope that we've given you just a slight insight into how AI can help you and your students to deliver inclusive learning opportunities. So, thank you for listening and we hope that you can join us in our next episode as we continue to explore ILOTA things. Till then, take care and keep on learning. 

Joe: See you next time.

Elizabeth: Bye.

Announcer: Thank you for listening to this podcast brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita, upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia, and globally from wherever you may be listening to this podcast, and pay our deep respect to Elders past, present and emerging, recognizing that education and the sharing of knowledge has taken place on traditional lands for thousands of years.