ADCET

ILOTA Things: Episode 3 - Multi-modal Miracles? - A format shifting evolution

July 04, 2024 | Season 1, Episode 3
Darren Britten, Elizabeth Hitches, Joe Houghton

Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI.

In this episode, titled Multi-modal Miracles? - A format shifting evolution, we're going to dive into the ability of AI and other tools to create opportunities for content to be rendered and shifted into different formats, and how this can support educators in providing inclusive learning environments through universal design for learning.

More information including episode notes and links are available on the ADCET website.

Transcript

Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning (UDL), artificial intelligence (AI), and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe.

Elizabeth: Hello, and welcome from wherever, whenever and however you're joining us. And thank you for your time as we investigate ILOTA things. That is, Inclusive Learning Opportunities Through AI. My name is Elizabeth Hitches, and joining me on the artificial intelligence, universal design, and accessibility expedition are my co-hosts, Joe Houghton,

Joe: Hi from Dublin

Elizabeth: and Darren Britten.

Darren: Hello from Australia.

Elizabeth: In this episode, titled Multi-modal Miracles? - A format shifting evolution, we're going to dive into the ability of AI and other tools to create opportunities for content to be rendered and shifted into different formats, and how this can support educators in providing inclusive learning environments through universal design for learning. Now, this is going to be a very broad banquet of what's available in the AI space, and depending on how we use it and for what purposes, there are many ways that we could use this to support a UDL approach. So it might be things like altering formats to provide multiple means of representation, it could be providing ways for students to act on information and express what they know, or adjusting formats to support engagement. Each of these could be explored in depth on their own, but today we want to open up the wide world of possibility first. So taking that UDL approach, we're really emphasising moving from reactive inclusion, where we might encounter a barrier or see a student encounter a barrier and then work to reduce it, towards proactive inclusion, where we're anticipating the many diverse ways that individuals will interact with a resource or a learning experience, and how we can proactively make it as inclusive and accessible as possible. So to really get us started on this topic, over to you, Darren: what should we know about format shifting?

Darren: First off, I suppose format shifting or alternative formats is nothing new in the educational sector. It's been more commonly associated with supporting students with disability by providing learning resources and content in more accessible formats. And this has been a fundamental part of reasonable adjustments, as termed in the Disability Standards for Education in Australia, for decades, and it's used in a similar way across the world when providing equitable access to education. In many situations, however, a reasonable adjustment is, as you mentioned, more often than not a reactive adjustment, something that's done after the fact. An adjustment for an individual, rather than an option for all students. So, traditionally, these adjustments and format shifting, if we will, would have had benefits for individuals, such as a Braille version of a key resource or a tactile diagram for students who may be blind or have a vision impairment, transcripts or closed captions for students who may be deaf, hard of hearing or hearing impaired, and audiobooks, large print, high contrast, the list goes on, to help a whole range of students. So, this converting or shifting of content from one format to another has traditionally been very much a manual process. However, in recent years, with the explosion of digital content, both the speed and availability of tools to assist in doing this work have changed dramatically. Now, of course, format shifting is not exclusively the domain of supporting students with a disability; it has proven useful in so many other ways. And as technology and how we consume information have changed, so too has the range of formats and containers that this information now comes in. One of the most obvious of these format shifts has been the digitisation and conversion of hard copy books to electronic formats. Another is the upheaval of traditional print media, such as newspapers and magazines that have gone digital, and, of course, the massive increase in digital content that's now online. Couple all of that with the powerful computing tools we now have available on our desks, in our hands and even on our wrists, and there are more options for presenting information in various formats than we've ever had. I guess I'll throw to you, Joe, and ask, given the explosion of these different technologies and devices and the emergence of AI, can you give us an overview of the transformational power of this and the ability of AI to be multi-modal?

Joe: Yeah, multi-modal is a relatively new term in AI. And it's particularly hit the headlines in the last few weeks, and we're recording this at the end of May 2024, because OpenAI recently announced their new version of ChatGPT, GPT-4 Omni (GPT-4o), which is natively multi-modal. What that means is that up till now a lot of the AIs have primarily read and spat out text, but now they can read, process and also create images and audio, with video coming. So this is what multi-modal means. It's all these different ways of representing information rather than just text.

I mean, you mentioned a little earlier on converting from paper to electronic, and Google has been doing this for a good number of years now. You know, a lot of us think this AI thing has only just happened in the last 18 months. It hasn't, it's been going for many years behind the scenes, and the big companies like Google have been working on this for years. And one interesting kind of project is Google Books, and Google Books was set up with the aim of digitising old books to make them available to everybody. So you can go to books.google.com, and the project is there. Making older books that were created before we did everything on a word processor digitally accessible is a job that's been going on for many years, and thousands and thousands of books have been converted, and the technology for doing that is improving all the time, with optical character recognition now able to get more and more accurate. But I mean, coming down to a micro level, we've all got these amazing computers now in our hands, you know, our smartphones. I mean, I've got an iPhone 12. I can take a picture of something with text on it. It could be a street sign, it could be a menu, and the camera will then automatically recognise the text in the image, and will not only recognise that it's text, but will convert that to editable text that I can then paste into a document. Not only that, but it will also now translate it. So if I'm reading a Chinese menu, it will convert it straight into English for me.

So I can take old documents, and I can take a picture of them, and the AI tools now will convert them straight into a PDF for me, in many cases an editable PDF, and then allow me to interact, you know, in a different way with that information. So this is what we mean by this kind of multi-modal format shifting. And, you know, we were talking about alt text in the last podcast, and this is just kind of an extension of that, moving us forward with different ways of representing our information.

So I'm going to throw this over to Elizabeth now. Elizabeth, can you fill us in on this from a UDL perspective?

Elizabeth: Thanks so much. I think what's really exciting about this from a UDL perspective is that these tools are really opening up those options to have multiple means of representation for materials that might previously have been really quite difficult, time-consuming, or even resource intensive to shift into another format.

So I'm sure we've all got one of those folders with PDFs that are not really readable text; it's almost like a photocopy of text. You know, it's that old scanned copy that, if we wanted to actually convert it into text, we would probably have to do manually. We'd have to retype the whole document, and that would take a lot of time, and we don't often have time allocated for that type of thing. But this technology is really opening up a way to make it accessible, to have it in a different format, and to do that in quite a speedy way. So that makes me quite excited thinking about those types of possibilities.
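
For anyone who wants to try this on their own folder of scans, here is a minimal sketch in Python using the open-source OCRmyPDF library. The library choice and file names are just illustrative, not tools mentioned in the episode:

```python
# A minimal sketch: add a selectable, searchable text layer to a
# scanned, image-only PDF. Assumes the open-source OCRmyPDF package
# and its Tesseract OCR dependency are installed:
#   pip install ocrmypdf
import ocrmypdf

# "scanned_reading.pdf" is a hypothetical image-only PDF; the output
# keeps the page images but overlays recognised, selectable text.
ocrmypdf.ocr("scanned_reading.pdf", "searchable_reading.pdf")
```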

Now something that's also really interesting to me from that UDL perspective is thinking about shifting from one file format to another to also enable individuals to manage information themselves, and to perhaps turn a document that you couldn't really work with into something that works for you, so that you can take notes on a document in a way that best suits your needs. And in this way, I can see it supporting those multiple means of action and expression. So Joe, I'd love you to talk through a particular experience that you had with Penny and an unusual file type that she came across, and how you worked around that.

Joe: Yeah, so this week Penny, my wife, who is a psychotherapist, came across an article that she downloaded, and it was in a format that neither of us had ever heard of. It was .DJVU. And, you know, I've been in IT for years and I consider myself pretty IT literate, so I was a bit stumped by this. But anyway, Penny was reading this article on her iPad. And what she normally does when she's reading these articles is annotate them and, you know, maybe highlight areas to then copy and paste into her notes later on. But she was complaining that this was just an image and she couldn't do anything with it, and could I help her?

So I asked her to email me the document, so she did. I have an app that I use to manage all my ebooks called Calibre, and it's a fantastic application, you know, because it'll import files in almost any electronic format that's text-based. And I use this to store all my ebooks, and I've got 27 to 30 thousand books in there, and I can load them up onto my Kindle or onto my iPad or onto my computer or whatever. But one of the things that Calibre does is it will also allow you to take any book in any format and convert it to any of the other formats. Now I didn't know whether it would even know what a DJVU format file was, but I imported it into Calibre, bang, straight in. And then I said, okay, can you give me this in MOBI, EPUB and PDF formats? Because I wasn't quite sure which one would be the best one for Penny. So I thought, right, I'll ask for all three. Twenty seconds later, in the folder along with the DJVU file, I had an EPUB file, a MOBI file and a PDF file, and I sent those back to Penny. And half an hour later, I got a big hug and she said, oh, that's fantastic Joe, I've opened it up and I can now annotate it and the text is all text and everything is fine. So I mean, it was just a little story, but I think this is what this podcast is about. This podcast isn't just about talking about theory, it's talking about how these tools can make a difference in your own life to make things easier. So does that answer your question, Elizabeth?
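
A side note for tinkerers: Calibre also ships a command-line converter, so Joe's one-off conversion can be scripted. A minimal sketch in Python, assuming Calibre is installed and its ebook-convert tool is on the PATH (the file names are hypothetical):

```python
# A minimal sketch: batch-convert one source file into several other
# formats using Calibre's bundled ebook-convert command-line tool.
# Assumes Calibre is installed and ebook-convert is on the PATH.
import subprocess

source = "article.djvu"  # hypothetical input file
for target in ("article.epub", "article.mobi", "article.pdf"):
    # ebook-convert infers input and output formats from the extensions
    subprocess.run(["ebook-convert", source, target], check=True)
```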

Elizabeth: Oh, exactly. And I really, really love this example because, you know, we can think about format shifting from that lens of multiple means of representation. So how can we actually make sure that a diverse range of individuals can perceive information in different ways? But I think this shows us a really interesting example that fits under that action and expression guideline. And, you know, it's really showing us how we can support executive functioning. If someone is used to note taking in a particular way, there's a way that works best for you, and we can convert a file into the format that best matches how you work and how you organise and manage information. Isn't that incredible? You know, you're not locked into one particular document. You have the ability to change that format up. Now, I'm also interested, Darren, and this is probably where I have less experience and understanding, and I know you have much more. From an accessibility perspective, are there certain files that might be better suited to different types of assistive technology? Is that also something that format shifting might be useful for?

Darren: Oh, it certainly is, and look, Joe, great example. Calibre is one of those tools that's been around for a while now. And it's really useful, particularly in the accessibility space, for shifting things between different formats. And so to your question, Elizabeth, are certain types more readable, it's largely a case of, you know, rubbish in, rubbish out. If you've got a well-structured document to start with, a well-structured set of text that might, you know, have headings or hierarchy, whatever it needs to have, then you can convert it much more readily into different formats. And the answer to what works best for an assistive technology really comes down to what the person needs. And that's what's interesting to me, certainly from an accessibility perspective, and that's the ability now for these tools to tailor content, you know, to different audiences so quickly.

However, this is just one side of that equation. What gets me, I suppose, more excited from an accessibility lens is the ability of these new tools to assist people in expressing themselves and expressing ideas in different ways. So that output of ideas, if I can put it that way. And I know discussions have already started, there's been many of them, there's lots of thought online and reckoning about the threats that some of these tools now pose: how, even for image generation, everybody's now an artist, and what happens for musicians when anybody can write a song, anybody can do this with these tools. Because these tools give you the ability to do that. And while, of course, this is somewhat true, I think it also opens up a world of new possibilities that we haven't really looked at.

I think back and consider the same thing that was said about a whole range of, you know, different industries and areas, and traditional photography is one of those that comes to mind: the move to digital cameras and digital photography. Most people now have an extremely powerful camera at their fingertips. And while there's been an explosion of photography and digital images, and that's certainly challenged traditional photography and the photographic profession, there are still professional photographers and artists in the field, and the reality is that professional photography is more than just point and click. There's composition, light, aperture, focal length, all of those things, the lens you choose. But, you know, I think the same is true in a whole range of existing fields: we will adapt to these tools and change, and there'll be new opportunities that open up from it. It's not just about moving content from one text-based format to another text-based format anymore. There are now so many options for input and output that we're really spoiled for options, which is probably another thing to consider. Too many options can certainly be a bad thing for some students. So while there are many options now for input and output, we need, I think, a note of caution on what's actually realistic in the space. And again, we've spoken before about those biases, checking the output, etc. And Joe, you've certainly seen, played with and reported on so many of these new and emerging technologies. What sort of multi-modal and format shifting capabilities, I suppose, have got you excited? We've really spoken about text to text and text to some images, but there's so much more, isn't there?

Joe: Oh, there absolutely is. Before I go on to that, though, let me riff back to a point that you just made where you were talking about things like headings. Yesterday I was talking to a very experienced UI designer who was putting a PowerPoint deck together for a bid. And she knew that I'd written a book on accessibility recently, and she rang me up and she said, you know, this is for the public service, and one of the things in the requirements is that we've got to make all this information accessible. What do I do? Well, you know, it's a pity we haven't actually released this podcast yet, so I will be sending her the episodes once we have. But I just talked her through some basic stuff, like alt text that we covered last week, and also this thing of headings. And, you know, she didn't understand that you need to use the headings in Word or in PowerPoint: the title is heading one, the main bullet points on the slide are heading two, and then the sub bullet points are heading three. If you just put text boxes in, that is processed very differently by accessibility tools like screen readers. So even with very basic stuff like that, most people don't actually understand how important it is to use those things. So she went back to her slides and made sure that she was using headings, because in a lot of cases she said she'd been using text boxes. Okay, so that's kind of one thing.
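
That placeholder-versus-text-box distinction can even be audited programmatically. A rough sketch in Python with the python-pptx library, where the deck name is hypothetical:

```python
# A rough sketch: flag free-floating text boxes in a PowerPoint deck.
# Text in proper title/body placeholders carries the outline structure
# that screen readers rely on; ad hoc text boxes do not.
# Assumes: pip install python-pptx; "bid_deck.pptx" is hypothetical.
from pptx import Presentation

prs = Presentation("bid_deck.pptx")
for number, slide in enumerate(prs.slides, start=1):
    for shape in slide.shapes:
        if shape.has_text_frame and not shape.is_placeholder:
            text = shape.text_frame.text.strip()
            if text:
                print(f"Slide {number}: text box, not a placeholder: {text[:60]!r}")
```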

What tools are exciting me? Well, I love the example of photography, and one of my other hats is that I run a photography training business. And, you know, I'm fairly expert in Adobe Lightroom and do a lot of training courses on that. And Lightroom is probably one of the world's top image editing programs, used by many, many amateur and professional photographers. And in their latest release a few weeks back, they've brought in a new thing called lens blur. What this allows you to do is take a photo that has, say, everything in focus from front to back, we call that wide depth of field, and then, after the fact, adjust the focus of the photo. And this is all due to AI and computational generation, because the camera isn't actually capturing this information. So let's say I've got Elizabeth standing there, you know, and she's got trees and bushes 10 yards behind her. And I have taken the shot, and she is in focus and the bushes are in focus. Back in Lightroom, I can say, well, I just want Elizabeth in focus and I want to blur the background. So I can just tell Lightroom, focus on Elizabeth now and blur the background. And that completely changes the photograph, because now we're bringing the attention onto Elizabeth. Or I can say I want to blur Elizabeth and I want to focus on that rare tree behind her or whatever, because that's the thing that's, you know, of most interest in the photograph. And this is something we've never had the ability to do until the smartphones came along, because smartphones actually capture depth information, because they have two lenses. So cameras with a single lens can't capture that information, but the AI application is now allowing us to do a simulated version of that, which is very, very good. So that's just one example, but I mean, let's get into, you know, other stuff that is perhaps of more use to many of the listeners.

Any of the AI chatbots, whether you're using Claude, whether you're using Copilot or ChatGPT or, you know, Llama or whatever you're using, okay, they can all now convert material into different formats. Most of the chatbots now have a little paperclip or something on the bar where you type your text in, which will allow you to upload a file. And these multi-modal chatbots now allow you to upload an image or a text file or a PDF or whatever it is, and then you can either ask it questions about what's in the image or what's in the file, or you can now ask it to convert this from one format to another. ChatGPT, because OpenAI is now closely linked with Microsoft, will generate you Excel files now. So if you ask ChatGPT to generate a table of information, you can say, generate this table of information as an Excel file, and it will generate the table of information and spit that out on the screen, but then you also get a link, and it's generated the Excel file, and you can click on the link and open the Excel file and download it. So there's lots of stuff we can do.
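
Behind that link, ChatGPT's data-analysis tooling is typically writing and running a few lines of Python to build the file. A minimal sketch of the same table-to-Excel shift done locally, with invented table contents:

```python
# A minimal sketch: turn a small table of information into an Excel
# file, much as ChatGPT's data-analysis tooling does when asked for
# a table "as an Excel file". The table contents here are invented.
# Assumes: pip install pandas openpyxl
import pandas as pd

table = pd.DataFrame({
    "Format": ["EPUB", "MOBI", "PDF"],
    "Reflowable text": ["Yes", "Yes", "Usually not"],
})
table.to_excel("formats.xlsx", index=False)  # writes the Excel file
```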

I had a class yesterday, and some of my students had done some brainstorming around putting a playbook together for a project. And it was just unstructured text, just thrown-together lines of text, ideas or whatever. And I said, right, take that file, upload it into one of the AIs, and ask it for a structured project playbook using the material that you've just brainstormed. And they uploaded it, and out came this beautifully structured document with headings, some of them in bold at the top, and then the subheadings and all the rest of it. It reorganised the material so that it was in a narrative flow that was suitable for what we'd asked it to do. And that was a really good example of converting unstructured material that had some use but wasn't very usable yet, and all of a sudden we've got something that's far more useful, and we can expand on each of these sections, and they're flowing in a narrative format.

So I mean, that was just one example. We can convert images, we can convert, you know, charts and stuff like that. So, I mean, the world is our oyster now with this stuff. You just have to go and try. And I think that's the problem. A lot of people don't go and try. They don't actually realise that this stuff is even possible. Elizabeth, what do you think about all this?

Elizabeth: I think that possibility you raise is so true. And there are so many different ways that different files can be converted, whether it's a text file or an image file. And, you know, one example that I'd just like to pick out and relate back to the UDL framework, for anyone thinking, I've got all these options, but why would I do this? One of the examples we could really think about could be, you know, perhaps you have a really long document, a very long document, and you know that students are going to look at that document and perhaps be a bit overwhelmed before they start diving in. Well, perhaps we could use AI to create a summarised list of bullet points, and perhaps those bullet points could be provided right at the start of that document to give a bit of an overview of the structure of the document and also to pick out what the key ideas of that text are.
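
For anyone who would rather build that summarising step into a script than paste into a chat window, here is a rough sketch against OpenAI's Python API. The model name and file name are assumptions for illustration; the other chatbots mentioned in this episode offer similar APIs:

```python
# A rough sketch: ask a chat model for an orienting bullet-point
# summary of a long document. Assumes: pip install openai, an
# OPENAI_API_KEY set in the environment, access to the "gpt-4o"
# model, and a non-sensitive, non-copyrighted source document.
from openai import OpenAI

client = OpenAI()
with open("course_reading.txt", encoding="utf-8") as f:  # hypothetical file
    long_document = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Summarise this document as 5-7 bullet points giving "
                    "students an overview of its structure and key ideas."},
        {"role": "user", "content": long_document},
    ],
)
print(response.choices[0].message.content)  # review and refine before sharing
```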

Now, again, with that critical reflective lens, what we'd also want to be doing, once AI has generated this particular list, is thinking about what we want our students to take away from that document, whether the AI-generated bullet points capture that, and whether we need to do some refining of whatever's been generated. The other thing we really should consider whenever we're using AI and uploading documents or images is the ethical use of that document. So does the data get stored? How is it used? Are we uploading copyrighted material that actually can't be fed into AI, especially if AI might store this data and then draw upon it? So there's lots to think about, lots of things in that ethics space. But I think, Joe, you were talking a little bit about Omni as well, and I'd love to throw back to you for some updates about that.

Joe: Well, with the Omni update last week, there's one thing that I think has slipped under a lot of people's radar. So ChatGPT has come under a bit of fire recently because, you know, anything that you uploaded into a ChatGPT chat was used for their training data. So, up until last week, you mustn't upload anything sensitive or, you know, copyrighted or whatever, because that would end up in the corpus of training data that ChatGPT was using. And they've come in for a lot of criticism over that. But there's now an option on the drop-down at the top of the ChatGPT window, where it says ChatGPT 3.5 or ChatGPT 4, to make your chat private. So when you click on that little toggle, it starts you a new chat, but that chat doesn't get stored in your history, and nothing that you upload or, you know, generate in that chat will be stored as part of OpenAI and ChatGPT's training data. So this actually gives you a safe place, if you like, where you could upload more sensitive files that you don't want shared out with the general world. And then, you know, when you finish your chat, they say it's retained on the server for 30 days, but then it's deleted, and it's not used for training data. So that's just one example of an advance that you might have missed recently.

You mentioned earlier on, Elizabeth, about multiple means of representation. And, you know, that brings up loads of possibilities for us as educators. So, in classes and in workshops, what I'm asking students to do is upload documents and then go play with them, just like you were talking about there, Elizabeth: summarising, or asking for a set of bullet points, or asking it to produce a counter argument for the document. And if we're doing research into something, you know, very often you want to provide different points of view. So maybe you read a document and it's advocating one point of view, but then you go to ChatGPT or Claude or whatever and say, okay, I want you to argue the other side of this debate. I want you to give me references that support the alternative view, or maybe there are multiple alternative views. Give me the current major streams of thought in this area. So again, it's about the questions you ask.

These tools don't just have potential, they have the capability of bringing materials to life, of giving students a sense of agency, allowing them to explore in far more autonomous ways than perhaps they've ever had the chance to do before, as they co-create learning experiences. Now, that's not to say the old ways of doing things aren't still very valuable. I spent the whole of yesterday doing a design sprint with some Masters students, and it was three sheets of paper and a pencil. You know, they did nothing on the computer. I mean, we were on a Zoom call, and we'll put a link in the show notes, but the Stanford d.school, the Stanford Design School, has this amazing exercise that you can run with students to do a design sprint. I've used it for a few years and it's absolutely fantastic.

And then there are things like reading text aloud. You know, screen readers have been around for many years, but there are very, very good natural language voices now that can read a PDF or read something that you've just typed; even our intro uses an AI voice. I still don't know how Darren did that. Maybe he can explain it in a little while. And coming in a number of months are apps like Sora and Veo. They're just about to be released, and you'll be able to type in a description and it will generate you a video that looks like it was shot by a Hollywood director. I mean, these things are that good. They're absolutely incredible. Even Canva: Canva had their big annual conference a couple of weeks ago, and now one of their magic tools is Magic Video. So if you go into Canva now and play with Magic Video, you can type in text or upload a couple of images or files, and it will generate a video for you around the theme that you want. It's early days, but the speed of change is just phenomenal.
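
As a small taste of the read-aloud side, here is a minimal sketch using the offline pyttsx3 library in Python. The library and file name are our illustrative choices; the natural-sounding voices Joe mentions are mostly cloud services with their own APIs:

```python
# A minimal sketch of a text-to-audio format shift with the offline
# pyttsx3 library (pip install pyttsx3). Cloud text-to-speech services
# sound far more natural; this just shows the basic shape.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # speaking speed in words per minute
engine.say("Welcome to ILOTA Things.")                        # speak aloud
engine.save_to_file("Welcome to ILOTA Things.", "intro.wav")  # hypothetical output file
engine.runAndWait()
```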

So, you know, that's just a very, very quick skim of what's going on. There's so much more. But that kind of gets us started, I think.

Elizabeth: Yeah, I think we really have to start with a broad overview before we do any deep dives. We need to know, you know, what's actually even possible if we wanted to turn one format into another. And, as we can imagine, there could be so many different ways in which we could use this to really support our UDL practice, and so many ways we can use these tools for really specific purposes, in ways that help drive what it is that we want in that learning experience. So, we can start with one format and end with another. We've now got flexibility in what our inputs are and what our outputs are. And there are opportunities to consider how we can support perception and comprehension. There are different ways that we might be able to enable action and expression to happen. And, you know, depending on what we're converting that format into, we're perhaps even thinking about those guidelines around engagement.

So, where do we come in? Where does the educator come in? Or the resource developer? What then is our role? I would say, you know, we can't just let AI reformat something and then deliver it straight to students. We need to be thinking about what that new format is. We need to be thinking about the purpose of the format and what information is intended to be conveyed. So, if we've created a resource and we're wanting to put it into various different options, is the information that we intended going to be conveyed across those different options? We want to make sure that our students are going to be getting equitable learning experiences and quality education across each of those. And that's not saying that we can't use AI. It's saying we're going to have an educator lens over it, and we can also refine it further.

For educators and subject matter experts, it's also becoming crucial, or becoming more obvious, that at the moment we need to really be thinking about the accuracy of what's produced. And, you know, I was reading a really interesting example of what's called an AI hallucination, which is where AI draws on those patterns, draws on that data, and somehow makes up information or puts together something that we wouldn't typically put together. And this particular AI, I'm not sure what was asked of it, but it had told a user, I believe, to put glue on their pizza. And I think it told someone else that geologists were recommending they eat one small rock a day. Now, these sound like funny examples, but the reason they're quite humorous is because, you know, we know they're not true. We know it's silly, and just because we've read that, we're not going to rush out and put glue on our pizza or start eating rocks. I'm sure many of us wouldn't do that. But I think what this flags is that it's great when our knowledge base can catch these things, but what happens if our knowledge of the topic is at quite a basic level, or, you know, if we don't quite have that depth of understanding and one of the AI hallucinations is quite subtle?

So let's imagine that you're working through some really difficult chemistry material, and it's very complex, and, you know, you think, really, I need a more narrative approach to reading this procedure in chemistry. What if it hallucinates and tells you to mix two chemicals together that shouldn't be mixed, or, in summarising a really complex process, it misses out some key safety information? You know, what if it tells you to observe a chemical reaction that you don't realise you actually shouldn't be observing with the naked eye? We need to be checking over what's being created by AI. We need to be thinking about, you know, is it context and purpose specific for what we intended that educational material to be? Is it going to be hitting those intended learning outcomes, those learning objectives? And also, is it safe? Because that is going to be crucial, particularly in those early years classrooms, and particularly if any of those hallucinations are quite subtle and we may not be able to pick them up as readily as those other examples.

So I don't think we can ever get rid of that educator or resource creator. We're going to be very much involved in that process, and at this stage at least, AI can't replace that. But, you know, don't let those words of caution dampen your curiosity, because there are many opportunities opening up even just in this space of multi-modal and alternative formats. So I think where we can really start with this is to think about, you know, what are the principles of that UDL framework, what are the AI tools available, and really begin to brainstorm where the two might complement each other in this quest for quality and inclusive learning experiences. And if you go about that, if you do some of that brainstorming work for your own educational environment, your own students, please do feel free to share those thoughts with us. It would be great to learn from them.

Darren: That's a really good point. But I would also throw in a note of caution: just because we have these tools doesn't mean that everyone necessarily wants to use them. I alluded earlier to the fact that, for some students, too much choice can be just as much a barrier as too little. So again, it's about finding that middle ground, and going and playing with these things is the best way to do that. As the saying goes, with great power comes great responsibility, and these tools are very, very powerful. But again, they can also hallucinate. And speaking of hallucinations and bias, don't take our word for it either. We encourage you to go explore, play with these tools yourself and try some of the things that we've mentioned in this episode. Put in some documents, ask them to change them to a different format, prompt them to do summaries for your weekly lecture so that you can put that out in advance. All of those things, just go and play. You can't break anything. What you can do, however, is find the links to some of the tools and the software that we've mentioned today in this episode, along with some text prompts to help you with those things, on the ADCET website at www.adcet.edu.au/ilotathings

Elizabeth: And of course, we would love to hear from you, as we want this podcast series to really be an ongoing conversation. We're not in every classroom, we're not in every higher education meeting or all the different spaces in which AI is being used, but what we can do through this podcast is really harness that collective learning. So please be part of this ongoing conversation with us and help share it with the community as well. Now, if you have a question or a comment about AI, UDL, accessibility or anything that we've discussed, you can also contact us via our old school email at feedback@ilotathings.

Joe: And also, if you've got any requests for other episodes of the podcast. I mean, we've got some ideas of what we think we could cover, but if you're finding an accessibility problem, if you've come across a really good tool, or if there's a challenge that you're encountering that you think AI might be able to help with, throw it in the comments or drop us an email, and we'll consider it as an episode, because we will be looking for episode ideas as we go along. So I hope that in this episode we've given you some more insights into how AI can help you deliver your learning and resources in multiple ways, and give flexibility and extra agency to your students. So thanks all for listening, and we hope you can join us next episode as we continue to explore ILOTA things. Till then, take care and keep on learning.

Darren: Bye.

Elizabeth: Bye.

Announcer: Thank you for listening to this podcast brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita, upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia and globally, from wherever you may be listening to this podcast, and pay our deep respect to Elders past, present and emerging, and recognise that education and the sharing of knowledge has taken place on traditional lands for thousands of years.