Hello World

What is the role of AI in your classroom?

May 27, 2024 Season 6 Episode 6

Explore how AI can be applied in the classroom, and find out how educators are teaching their students about how AI works and how to use it.

Ben Garside:

We're in a phase of what feels like really rapid development in AI.

Helen Copson:

Our students will be using AI in their everyday jobs.

Ben Garside:

We must educate people about AI development.

Jane Waite:

We have to support our learners to become critical consumers, but mainly because there's just too much to lose.

James Robinson:

Welcome back to the Hello World podcast, a show for educators interested in computing and digital making. I'm James Robinson, computing educator and aspiring AI aficionado.

Ben Garside:

Hello, I'm Ben Garside, a senior learning manager at the Raspberry Pi Foundation, former classroom teacher of computing, and enthusiastic yet amateur prompt engineer. So James, I'm very excited to be here today with you and the guests, because in this episode we're exploring an area that has a lot of people talking at the moment. And as ever, we really value your comments and feedback, which you can share at helloworld.cc/podcastfeedback.

James Robinson:

Thank you, Ben. As you say, this week we're discussing a really interesting and really relevant topic: the landscape of AI and its role in the classroom. Ben, you've been working on lots of AI content for the Foundation recently, for the last two years. How would you summarise the current landscape of AI as it is today? Obviously this might change tomorrow, but right now, what does the landscape look like?

Ben Garside:

Yeah, well, that's a really good point. I think the first thing to recognise is that we are in a phase of what feels like really rapid development in AI, certainly since ChatGPT became a thing. It really has made us sit up and pay more attention to the opportunities that AI can bring, but also be a little bit more aware of the risks behind it. But the first thing I like to tell people is that AI has actually been around for quite a while. It was the 1950s when the famous British mathematician Alan Turing proposed this idea of thinking machines, and he thought that we would have achieved real artificial intelligence by now. What I mean by that is computers having true human-like abilities, where we wouldn't be able to tell if we were talking to a human or a machine. So you might have heard of the Turing test, and you might have actually done this with the students in your classroom. But let me tell you, James, I think that Turing test is completely blown out of the water. If I didn't know I was talking to a chatbot, things like ChatGPT or Bard could do a really good job of convincing me that I was talking to a human. And that brings me back to a really important point: we must educate people about AI development, so that we know these systems can be convincing but don't have human-like capabilities. ChatGPT can't drive my car or train my dog how to play fetch. I'm not particularly great at those things either, to be honest with you, but it's a good simile. We've actually done a lot of work at the Foundation on the language we use when talking about AI and machine learning: we avoid what's called anthropomorphisation, which is when we attach human-like capabilities to things that aren't human. We think that's really important, particularly when talking about AI while young people are developing their mental models of this AI world that they're in at the moment, so that they don't see their smart speaker at home as being human, even though it sounds and speaks like a human being. If young people see these machines as human-like, it risks them missing the fact that it's actually humans who make these systems, humans who are responsible for their outputs. We want to empower people to look under the hood of these machines, maybe to play a role in the future development of these systems, but as a minimum, to feel empowered to question any unfairness that they might be victim to. So James, that's my opinion; you've been less close to this work that we've been doing. What's your perspective? Have you used any cool AI applications recently?

James Robinson:

Just to pick up on a couple of points there: I think you're right that things are moving very, very quickly, and if you're not really immersed in this space it can be quite hard to keep up. Personally, I've trialled a few different kinds of applications and tools for doing a bit of heavy lifting for me in certain places. In a previous episode I've spoken about using it to help me write a template for some code that I've then had to go in and adapt and fix, and that's been a really useful application of AI, as well as analysing text and potentially drafting text too. But I find that, at the moment, it still requires that additional human intervention to validate and verify, to make sure that you're getting the output that you want. And I was just thinking, as you were talking, of a really niche but really helpful application of AI outside of work. As you know, Ben, I'm a real big fan of Lego, and I do lots of Lego sorting and sifting and identifying of bricks. As I get a little bit older my eyesight is not great, so I can't read the little numbers printed on the Lego bricks. And there is an app you can use which identifies a Lego brick: you hold the brick up, you take a picture, and it tells you, here are the five things I think this could be, and you can very quickly identify it. Which is really great for slight variations in faces on Lego minifigures and that kind of thing. So that's a really fun application where I've seen AI really enhance something that I'm trying to do outside of work. We should probably stop talking about Lego though, it's not the Lego podcast after all, and invite our guests on. First of all, joining us is Helen Copson, an ex-head of music and now subject leader for computer science and media at Co-op Academy Priesthorpe in Leeds. So Helen, my first question, I guess, is: where are you seeing AI being used in the classroom now?

Helen Copson:

Well, we're teaching it through lessons. We've got the Raspberry Pi AI unit that we're teaching to year 9, and in year 8 we've done some work with Google's Teachable Machine, which falls into one of the other schemes of work. In terms of teaching and learning, loads of us are looking at TeachMateAI to plan curriculum and lessons and mark things. And we've now got a whole AI strategy as a multi-academy trust: how is AI going to manifest across the curriculum and in all areas?

James Robinson:

And maybe we could just delve a little bit deeper into that. So when you talk about teaching your students about AI using the resources from Raspberry Pi and elsewhere, what parts of AI are you exploring through that? What has the focus been in what you've been teaching the learners about AI?

Helen Copson:

Essentially, to begin with, we ask them what they think AI is. And largely it comes back with "AI's more intelligent than humans." And it's "No, AI is only as intelligent as humans. It's just more logical." Then with year 8, which is within our 11 to 14s, so 12 to 13 year olds, we looked at Teachable Machine and we taught it to recognise gender. So we did male, female; we took pictures of all the kids in the classroom. Our population is boy-heavy and around 60% Asian, and we tested the model. It could recognise boys and girls quite easily. It recognised me as a girl. But when a boy put a blazer over his head it recognised him as a girl, because it associated a blazer with a headscarf. So it's that bias that's built in, because there's not enough data in the models at the moment. So we're looking at bias in AI as well, and the pitfalls of AI, and why not to trust those kinds of models too readily.
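
A model trained on the Teachable Machine website can be exported and run locally, if you want to repeat Helen's activity and look under the hood with older students. Here is a minimal sketch in Python, assuming Teachable Machine's standard TensorFlow (Keras) export, which bundles a model file (keras_model.h5) and a labels.txt file; check the exact filenames in your own download, and note that the test image name here is a placeholder.

    # Minimal sketch: test one image against a Teachable Machine export.
    # Assumes the standard TensorFlow (Keras) export: keras_model.h5 + labels.txt.
    import numpy as np
    from PIL import Image, ImageOps
    from keras.models import load_model

    model = load_model("keras_model.h5", compile=False)
    class_names = [line.strip() for line in open("labels.txt")]

    # Teachable Machine image models expect 224x224 RGB, scaled to [-1, 1].
    image = ImageOps.fit(Image.open("test_photo.jpg").convert("RGB"), (224, 224))
    data = (np.asarray(image, dtype=np.float32) / 127.5) - 1.0

    # The model returns one confidence score per class.
    prediction = model.predict(data[np.newaxis, ...])[0]
    for name, score in zip(class_names, prediction):
        print(f"{name}: {score:.2f}")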

James Robinson:

And you also talked a little bit there about not just teaching students about AI, but using it as part of teaching and learning, and I think we're going to get into this a little later on when we introduce our second guest as well. But how have you identified whereabouts AI is having an impact on teaching and learning, or on how you approach teaching and learning? You mentioned TeachMate; what are the things that you are using AI for, for those who might not be familiar with this?

Helen Copson:

So with TeachMate, you tell it what subject you want, or what scheme of work you want, and how many lessons you want. You pop it in and voila, you have a scheme of work that you can adapt to your own setting, your own year groups, etcetera, and planning and preparation is now so much easier. Using ChatGPT, we can now give feedback that sounds a lot better than it used to, or write reports: give it a bit of data and it can give us an output that we can use to inform parents. It can write minutes, it can reword documents. I've used it for some of the case studies in business, and at key stage 5 we do the Cambridge Technicals; when the case studies come out for unit 2, we've put them into ChatGPT to see what questions it comes up with, and then the students can base their responses around those questions using the case study. So it gives them a bigger platform, because they can't use it in the exam, as they can't take it in. But it gives them those thought processes in terms of points and ideas and things like that. It's going to revolutionise how data input is done, how we send emails, how we mail merge; it's going to make it so much easier.

Ben Garside:

So you've obviously used these kinds of large language model tools in a really great way, and I imagine that your students are also very enthusiastic about using them. So how have you seen them using it? Have they made any mistakes? And how have you gone about teaching them how to use these tools appropriately?

Helen Copson:

When we did the whole-class model they were really like "Oh, take my picture." They think it's great, they think it's hilarious. When we looked at fruit, they only used a few images, and the AI didn't know that a green apple was the same as a red apple. And they kind of started seeing that it's going to need so much data to get this right. But also, in their heads, they think it's programmable; they think it's instructions rather than data-based. So when we talk about chess they go straight in to say, "Well, that's an algorithm. That's just a set of rules." It's not; that's a set of data. And when you look at how much data is behind that chess model, what is it, there are more possible games of chess than there are atoms in the universe, their minds are like "What?" That's how much data it needs to be able to win. So then they think, actually, AI isn't where it needs to be yet for us to use.
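
Helen's green apple confusion is easy to recreate outside Teachable Machine too. Here is a small, hypothetical sketch using scikit-learn: a nearest-neighbour classifier trained only on red apples and yellow bananas, represented by made-up average RGB colours, misclassifies a green apple simply because the training data never showed it one.

    # Hypothetical sketch: a classifier that has only ever seen red apples.
    # Features are average (R, G, B) colours in 0-255; the data is made up.
    from sklearn.neighbors import KNeighborsClassifier

    X_train = [
        [200, 30, 40], [190, 45, 50], [210, 25, 35],     # red apples
        [230, 210, 60], [240, 220, 70], [225, 205, 55],  # yellow bananas
    ]
    y_train = ["apple", "apple", "apple", "banana", "banana", "banana"]

    model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

    # A green apple sits closer in colour space to the bananas than to
    # any red apple the model has seen, so the prediction is wrong.
    green_apple = [[100, 180, 60]]
    print(model.predict(green_apple))  # -> ['banana']: wrong, for lack of data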

Ben Garside:

So have your students used ChatGPT to do their homework, for example?

Helen Copson:

It's banned in school, so they can't access it. Teachers can, so I've modelled it in lessons. We were looking at things like paraphrasing, using media, getting citations, getting definitions. And we asked this massive question, something like "Give me a definition for...", I think it was networks or networking, and it came up with three paragraphs when all we needed was the first sentence. I said, "Look, if you're going to use this, it's great, it's going to give you all the answers, but you need to be able to unpick what the information is that you need, and then that's how you're going to use AI." Because ultimately, while I've never had to use AI in my whole career or childhood, our students will be using AI in their everyday jobs. It's about how they use it effectively, the same way we teach them how to safely navigate the internet.

Ben Garside:

Okay, so I think it's time to welcome our second guest. I would like to welcome Dr Jane Waite. Jane is a senior research scientist at the Raspberry Pi Computing Education Research Centre, a joint initiative between the University of Cambridge and the Raspberry Pi Foundation. Jane was a developer for 20 years and a teacher for ten years, and now researches computing education on AI and machine learning. She and her team have conducted various research studies in the area, including looking at teaching resources, developing models to help with teaching and learning, and exploring how large language models can be used to provide explanations of programming error messages. So Jane, you and I have worked quite closely together over the past two years, and I know that you've done a lot of work in the AI education space. Now, I was at the Bett show recently, and AI was the hot topic. But to me it felt as though people were sometimes talking about AI and education in very different ways. Is there a way of framing or categorising what we mean when we talk about AI in education?

Jane Waite:

Well, first of all, thank you so much for inviting me to talk about this subject. And I just want to say to Helen: I'm from Leeds, from Middleton originally, so I'm really quite excited to be talking to someone else from Leeds. But going back to your question of how we might categorise this topic. This is quite difficult for me, because I need to use my hands to explain these overlapping circles. First of all, there's productivity. Helen was talking there about how AI can be used with TeachMate and other products to write reports, answer emails, and do all of those kinds of things that improve teacher productivity. So that's one pancake in the mix. The next thing, again something Helen was talking about, is AI literacy. How do we help students to learn about AI being fair, accountable, transparent? How do we help them use AI as critical consumers? Maybe we call that AI literacy. So that's the next pancake, sitting next to our first productivity one. Then there's something specific to each and every subject. I'll talk about teaching computer science, but of course, as Helen said, it could be business or history or art. If we think about teaching computer science, and specifically programming, we can also use these kinds of tools to support students as they're sitting and learning. Like a more knowledgeable other sitting in the classroom with them, a kind of super TA, or maybe not such a good TA. And I think that's the important thing: how can we change pedagogy to use these tools to support the teaching and learning of programming? When I say programming, I mean the normal kind of programming that we already teach, so procedural programming with Python, or object-oriented, and so on. So there's this whole question of how we might change pedagogy. If you use PRIMM at the moment, which is predict, run, investigate, modify, make, what might PRIMM plus AI look like? And there are curriculum changes that need to sit at the back of that, because we're going to have to teach children prompt engineering. That's a new topic for them to learn, a new thing to go on the list of all the things the DfE want us to teach. And also feedback literacy, because these kinds of tools are just massive prediction machines: garbage in, garbage out. There will be hallucinations; it won't always be accurate. So how do we support students to change their mental models? Just as you said, Helen, they used to think in terms of algorithms: it's always going to be accurate, it's always going to give the same result. But now it's data-driven, it's probabilistic. So there's this thing called feedback literacy. And then there's another pancake. We've got three pancakes so far, and now we've got a fourth. Out of the back of AI literacy we move into data science. Will computer science teachers actually be teaching data science next? And will it not be developing programs in procedural programming languages, but actually interacting with large language models, where you're going to have to tune your embeddings and all these technical things? So we're moving from this computational thinking 1.0 to a brand new version of computational thinking, entirely changing the way that students work from being five, six years old. As they're interacting with these new kinds of technologies, they have to think in a really different way. So for me, there are four pancakes. And in a sense, I think at the moment teachers are just looking into this big vat of pancake mixture, and I think it is pancake day soon. It's all muddled up, and often when I hear teachers talking about things, there's a bit of productivity, there's a bit of AI literacy, a little bit of data science. Anyway, that's the way that I think about it, and if that's useful for others, then that would be really cool. So, Ben?
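
To make Jane's "super TA" idea concrete, here is a small, hypothetical sketch of asking a large language model to explain a programming error message without handing over a finished fix. It assumes the OpenAI Python client; the model name, prompt wording, and pedagogical guardrails are illustrative assumptions, not the setup used in Jane's research or a recommendation of a particular tool.

    # Hypothetical sketch: an LLM as a cautious programming TA.
    # Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    code = 'age = 16\nprint("Next year you will be " + age + 1)'
    error = 'TypeError: can only concatenate str (not "int") to str'

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; substitute your own
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant for beginner programmers. "
                        "Explain the error in plain language. Do not write the "
                        "corrected code; end with one question that nudges the "
                        "student towards fixing it themselves."},
            {"role": "user", "content": f"My code:\n{code}\n\nThe error:\n{error}"},
        ],
    )
    print(response.choices[0].message.content)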

Ben Garside:

Yeah, I mean, my mental model is a little bit stuck on pancakes now; it's getting towards dinner time. So, you talked about this computational thinking 2.0, and I know that's based on research. I'm always fascinated by how the research community keeps up with the pace of AI development. How do you go about that?

Jane Waite:

Ergh, I'm going to be absolutely honest: it's like the Wild West right now. We can't keep up with it. I'm reviewing for a conference at the moment. If you imagine what happens: people write lots of papers, they all get submitted to a conference, and then other people have got to review them. Two years ago I never had papers to review about AI and machine learning; now I think something like 10 to 15% of the papers are on AI and machine learning, and we've put a paper in ourselves. There's just so much research going on, but mostly it's in higher education. It really is in universities where this is an enormous problem. As Helen was saying, it's banned for younger learners. It is not banned in higher education, or they try, but students are just using these tools to complete their summative as well as formative assessment tasks. So it's a massive issue in HE. So how do we keep up? Well, we're not, really. It's just a really fast-moving area of research.

Helen Copson:

But aren't we going to get to the point where they're not going to have to write a dissertation anymore? Is this not where we're ending up?

Jane Waite:

Excellent question. Absolutely brilliant question. There is much talk in higher education about how assessment is going to change, and it's interesting [...] So I'm from Queen Mary originally, that's Queen Mary University of London. And luckily, much of our assessment for our 500 students, who turn up on day one to do intro to Java programming, is not just creating the same kind of program. They have to actually explain every single stage that they go through; they have to go and talk to a TA and explain their program. We use another theory for that called Semantic Waves. They have to explain what they're doing. So even if they were to use, say, Copilot, ChatGPT, any of these tools, they would have to be able to explain what it's doing, and hopefully they're not just listening in one ear to what Copilot's telling them, kind of thing. But that was very anthropomorphised, so I apologise.

Ben Garside:

I think it's really interesting how we've gone, maybe even in the last 12 months, from a place of being worried about these tools and thinking how they are going to replace what we're doing, to now thinking how we can use them to augment the work that we're doing. For example, with teaching programming, it's not that they're going to replace the need to program; people are now thinking, how is this going to help the pedagogy that we already apply? How can we use AI to help us with Parsons Problems, for example, when learning to program? I think that's really great. And you touched on it then: there's a common phrase, the genie is out of the bottle. It's there, so it's about how we adapt to that. So yes, assessment will need to change, and I think that's a good thing, a positive thing. But also, how can we make the best use of these tools, and how can we change and adapt our own practices around them?

Helen Copson:

Talking about how assessment models change, with my music head on: I've had a lot of conversations with art teachers and musicians asking "Are we dumbing down creativity?" But in fact, are we going the other way, in that we have to be more creative in our assessment models? Students have to demonstrate their understanding in a creative way, so that whilst their dissertation or piece of work could be written for them, they need to be able to show their working through it. And that brings back creativity. So AI in practice is not getting rid of creativity but making us more creative.

James Robinson:

And I think that's true of almost any innovation, right? Any innovation causes you to need to be a bit more creative in how you adapt to it. But AI is proving to be so far-reaching, and you can use it for so many things, that it's really challenging lots of different aspects of how we teach and how we learn, all at the same time. Jane?

Jane Waite:

I was going to agree that it is about us as educators taking responsibility and not just letting it happen. We have to support our learners to become critical consumers, mainly because there's just too much to lose. Becoming critical consumers and taking responsibility is really important, because otherwise students are going to be facing an incredibly biased world, where fake news is already going to influence the elections that are coming up both in the UK and the States. And I think there's a real risk of a degree of apathy, of assuming that the pedagogy will sort it out, and that students will gain lots of learned helplessness and miss lots of important stages in building mental models. I think that's already starting to happen in HE, where they see students coming through to their final exams and they can't do what they need to do. So creativity, yes, but pedagogy, that's the most important thing right now.

Helen Copson:

You've touched on the ethics, in fact the dangers, of AI. With year 9, thirteen to fourteen year olds, we were introducing AI this week and talking about the Taylor Swift news story at the weekend, how AI had been used on her image and objectified women, etcetera. And one of the girls just went, "Why aren't we stopping this? Why aren't we pausing? Why aren't we waiting?" And if you've got a thirteen, fourteen year old girl going, "This is going way too fast for us. We need to stop," then I'm scared for their future more than mine.

James Robinson:

It's a good point. Things are moving so fast, and it is really hard to keep pace. And if we're having challenges keeping pace as professionals, imagine what the world is like for the young people who are seeing all of this happen. I wanted to bring us back a little bit to something we started talking about earlier on: I think concepts and language are really important in this space. And it connects with one of the things that I reflect on as someone who designs curricula and oversees learning experiences: being really purposeful about what we're trying to learn. Once we understand what we're trying to teach and learn, we can then think about the core concepts, think about the pedagogy, and be very deliberate in the way we present those ideas. Like something you mentioned earlier on, Jane: when I framed computing with my learners, I often talked about systems being very deterministic, right? And that was almost like a nice safety net. Everything is deterministic: you put some rules in, you get reliable answers out. And now that's kind of gone out the window, so we've got to not only change our own thinking, but actually unpick that a little bit with the learners. And we made a reference to language earlier on too, so I'd be really interested to hear everyone's views on what the really important concepts are that we need to get across, and what the role of language is. Who wants to come in on that?

Jane Waite:

Can I just start by giving us a framework to talk about this? I love a nice framework; as everybody knows, I like a diagram.

James Robinson:

Is it more pancakes, Jane, or?

Jane Waite:

No, this is layers.

James Robinson:

Layers, like a cake?

Jane Waite:

Could be layers of a cake, yeah let's do layers of a cake.

James Robinson:

Cool, cakes and pancakes.

Jane Waite:

Cool. So this is called the SEAME model, which is based on some work that I did with, again, Paul Curzon at Queen Mary. And it's this idea that at the top of the cake you've got social and ethical. That's all the things Helen was talking about, with the students saying "Come on, stop. This is too much," with the fake news and the way that images of celebrities are being misrepresented. Then you've got the applications that we build, the next layer of the cake. I don't know what flavours we would have: vanilla at the top, then chocolate. It's interesting, because the applications we build will be a combination of both rule-based and data-driven; they will call algorithms, but they'll also call these models that may sit beneath them. So as programming teachers, we're going to have to have students who can sit with both CT 1.0, computational thinking version 1.0, in their heads, and also this probabilistic, data-driven view of the world. So we have all these different applications, and then we move down into the model layer. That's where we hoover up data, or create data, that we can use to train a model and test a model, and we go through the kind of, and I really do like, Ben, the life cycle that we've got in the Experience AI resources to support that thinking. Then at the bottom we've got the most dense layer. I don't know what this layer is made of, maybe liquorice or something really heavy-duty. And that's the engine. We've called it the engine on purpose, to say it's a machine; it's the mathematical expression of neural networks and other types of models as they're implemented, and we might do lots of unplugged activities to teach that layer. So there's SEAME, and we might have language which is different according to what level we're at, and we're definitely going to have different concepts at those levels. So SEAME, I think, is quite a useful way of asking: what concepts do we need to teach, what language are we going to use, and what pedagogies? Because there might be different pedagogies. It might be all discussion up at the vanilla level, and it might be unplugged and quite different ways of learning about the engine.

James Robinson:

I was listening to everything, Jane, but there was one particular bit that caught my attention, that [...] sort of resonated, because we've spoken before about the SEAME model. It's the bit about students switching between the two. When we think about traditional rule-based programming, we use the levels of abstraction framework to help students move up and down the different types of activities involved in programming, from design all the way down to execution and testing. And you can see parallels with the SEAME model: you've got these different layers, but you've also got this moving between the two quite seamlessly. I think that is going to be a really interesting challenge for our next generation of learners: to be able to float up and down the levels of abstraction, move up and down the SEAME layers, and jump between the two, potentially within the space of 20 minutes. It's going to require really careful and explicit scaffolding from educators to help them navigate that shifting between the two.

Helen Copson:

I think this has to start in primary school, because at the moment we've got learners who can't do a Google search successfully. They'll never be able to use an AI model easily, because they can't even choose a search term. But it's not happening in primary school. They're learning Scratch in primary school, which is not helping them navigate AI.

James Robinson:

And I think that's a really important question, and we'll come on to that in a moment. Well, actually, we can maybe address this now. [...] How do teachers embark upon this? At what point do we all think we should be engaging with AI? What are the safety concerns? What are the things we should be learning at a primary age? Where do we all sit on that? I'm going to go to Ben first, actually, because I realise I've not gone to Ben for a little while. Ben, what's your thought on that?

Ben Garside:

Well, I think the first myth to debunk is the idea that this is a subject just for computing, because I don't think it is. If we look to the future and think about how AI is going to affect most industries, most jobs, then finding time in the computing curriculum is something we can do, but there's also a lot of space where this fits into so many different subject areas. We've created a lesson on AI and biology, for example, but there are certainly areas in things like PSHE, and art, and music; all of these have really strong links to AI, where there are opportunities for us to talk about it. But to go to the other part of your question, about when we introduce this: I think as early as possible. As early as young people are starting to interact with the world around them, when they're starting to build their mental models of the world, I think it's important that we start introducing things. And if we look at Jane's SEAME model, that doesn't mean we need to delve straight in with five-year-olds and talk about the engines, the liquorice level as Jane called it. I think we need to start with the social and ethical issues. Helen, you mentioned your students questioning the things they're seeing with Taylor Swift: there's a great conversation to have with them there, whether that be in form time or any time you're speaking to young people. Start those ethical conversations with them and encourage them. If they're having those conversations and questioning these things, that's really important and a really good thing, because they're going to play a part in the future. We want to empower young people to question these decisions, question these models that are being created, and not just accept the reality around them.

Jane Waite:

You can't believe how much I agree with you, Ben. And I think it is the CT 2.0. We've not done too bad a job getting the idea of computational thinking into some primary schools, but I think it's going to be quite shocking, because now we've got this new version of CT. Children are using Alexa and other tools. They are maybe using language which is highly anthropomorphised, seeing them as trusted, knowledgeable others rather than objects, not understanding that the output can be entirely fabricated and can be manipulated. And the safeguarding issues, it makes me feel a little bit shivery to think about how very young learners can be manipulated through these kinds of tools. So we've got to start as soon as they're in schools. It's an early years issue as well as a key stage 1 issue, so students who are in kindergarten, who are four and five years old. But it is at the top; it's the social and ethical. So in terms of the cake, it's like a cake that's got a tiny, tiny point at the bottom and then a great big vanilla bit at the top.

James Robinson:

So we've now got a triangular cake slice.

Jane Waite:

Correct.

James Robinson:

Layered with liquorice in the middle.

Jane Waite:

At the bottom, liquorice at the bottom.

James Robinson:

Yeah, yeah, yeah. We've mentioned anthropomorphisation, if I've said that word correctly, a couple of times. And Ben, from observing you immersing yourself in this world of AI over the past year and a half, or whatever it's been, I've seen you correct yourself and develop your way of speaking about AI in particular. So for our listeners who maybe aren't quite sure what the issue is in relation to anthropomorphisation: what are the key trip hazards, as it were, to be mindful of when you're describing an AI system? Things that you have routinely tripped up on?

Ben Garside:

Yeah, I think my worst mistake is to keep referring to it as a countable noun, saying it is an "it". You wouldn't say "the biology", but we do say "the AI". So what I tell people is to move slightly away from these human-like terminologies. When we're talking about smart speakers, for example, it's very easy for us to describe a smart speaker as listening or hearing, and there is an argument that maybe they do. But I think for young people, what we're doing there is giving it a human-like characteristic. So rather than using words like "hears" or "sees", we should be using terminology like "it detects", "it takes inputs", "it pattern-matches", "it generates". We're not really using technical language, but we're using language that's not human-like, if that makes sense. And even when we're describing how AI systems are built, rather than saying "this AI does this", we can say "they are designed to do this", or "developers have made an AI system that does this". You're adding in the human element, and it makes people realise that it's humans who are behind these systems, and that we have control over them. So it's simple guidance really, but like I say, even though I've been talking about this for probably two years now, I still make the mistakes, so don't give yourself a hard time about it. But I would say that if you're talking to young people and you hear them give a human trait to these systems, just pick them up on it: "Does it actually see?", "Does it listen?" Just so you've planted that seed of doubt, and it makes them think about how it actually works.

James Robinson:

Mmm. It's about that purposeful intent, isn't it? You're going to make mistakes, but just be mindful of it as you're making them.

Jane Waite:

Yeah, and I think for educators, we model the use of language in every single subject. We talk about phonemes, we talk about graphemes. And I think this is a place for posters that have got input, process, output, and then a big cross through the brain that is normally shown. No more eyes, no more ears being shown on these posters. No more robots that seem to have particular intentions. They're just machines: a machine takes an input, and based on vast amounts of data there is an output, and that output is merely a prediction. It's a prediction based on the input, and it's a bet: it might be right, it might be wrong; the output may or may not be accurate. So I think it is going back to the input, process, output model, almost, to simplify our language around it rather than anthropomorphising.

Ben Garside:

I think we are completely up against it though, don't you? Because science fiction loves a robot, loves a walking, talking robot. But you know what, we spoke to one of the people we work with at Google DeepMind, and she said that when people find out she works in AI, the most common question she gets asked is "Is Skynet going to become a real thing?"

Jane Waite:

Well, I was a primary teacher, and I know how well children learn about genres in language, in literacy. They know the difference between a fact book and a fiction book. So we can teach this. As educators we can do amazing jobs, and children do learn about genres; genre theory is a great big area that we've used really effectively in the teaching of literacy. So I think we can do this. But as James says, it just needs really careful thought.

Helen Copson:

A child's first experience of AI is through a device. Like my three-year-old will ask Alexa to play Baby Shark repeatedly.

Unspecified:

Ergh.

Helen Copson:

Yeah, absolutely. And then my ten-year-old uses Alexa to tell the weather, using it in a different way. Then you go into school, where we're teaching it in an 11 to 14 curriculum. When I asked them "Where's AI used?", and that's part of the Raspberry Pi unit's first lesson, "Is this AI?", they thought a spreadsheet was AI because it worked out the maths, but a spreadsheet's a calculator. And they thought Netflix didn't use AI. They can't see it, because they only see that human-like model. So it's about teaching them that it's everywhere, it's in everything you do, and that it's not in a spreadsheet, because that's a calculator.

Jane Waite:

Can I go back to primary as well? In primary we would go around with a little label and ask, "Is this a computer? Is this a computer? Is this alive?" We need to go back to those initial activities, where we have a little piece of cardboard or a little sticky note and we start to label things and say, "This uses a large language model. This calls an AI." We can do it though! I really do believe we can do it, but we just need lots and lots of pedagogy, lots of teacher training, and some brilliant resources. So lots of teacher training.

Helen Copson:

It's like technology skipped a generation. I'm that little generation that was born analogue and grew up digital, and then everyone missed that bit. And now they're all digital and they haven't got a clue, and now we've got AI. And it's a whole other new world.

James Robinson:

Yeah, and it's just the current new world we're dealing with. Let's not talk about quantum computing and other things that might change the world again in this episode. But thank you, everyone. I personally found that conversation really interesting and engaging, and I've taken away quite a lot of thoughts about AI and its place in education. So if you, our listeners, have a question for us or a comment about our discussion today, then you can email us via podcast@helloworld.cc or tweet us @HelloWorld_edu. My thanks to Jane and Helen for sharing their time, experience, and expertise with us today. So Ben, what did we learn today?

Ben Garside:

Well, I always get a lot from listening to classroom practitioners who are actually using AI with their classes. And like any teacher does, it's about experimenting, trying out new ideas, and constantly refining them. I don't know about you, James, but even minor things, like speaking to my colleagues at the Raspberry Pi Foundation and hearing what kinds of prompts they use with ChatGPT, really help me learn and get new ideas. But as we've talked about today, I think it's really important that we do so from a position of knowledge and are mindful of the risks. My takeaway message to everyone listening is just to have a go, talk to your colleagues, and share best practice. And I'm going to go away and make one of Jane's vanilla, chocolate, and liquorice cakes.

James Robinson:

I would say yummy, but I'm not entirely sure... No, I think my takeaway, if anything, is that, like most ideas in computing, we can describe AI through the medium of food.