Ed-Technical
Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the Ed-Technical podcast series about AI in education. Each episode, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like - how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about?
How & why did Google build an education specific LLM? (part 2/3)
This episode is the second in our three-part mini-series with Google, where we find out how one of the world’s largest tech companies developed a family of large language models specifically for education, called LearnLM. This instalment focuses on the technical and conceptual groundwork behind LearnLM. Libby and Owen speak to three expert guests from across Google, including DeepMind, who are heavily involved in developing LearnLM.
One of the problems with out-of-the-box large language models is that they’re designed to be helpful assistants, not teachers. Google was interested in developing a large language model better suited to educational tasks, one that others might use as a starting point for education products. In this episode, members of the Google team talk about how they approached this, and why some of the subtleties of good teaching make it an especially tricky undertaking!
They describe the under-the-hood processes that turn a generic large language model into something more attuned to educational needs. Libby and Owen explore how Google’s teams approached fine-tuning to equip LearnLM with pedagogical behaviours that can’t be achieved by prompt engineering alone. This episode offers a rare look at the rigorous, iterative, and multidisciplinary effort it takes to reshape a general-purpose AI into a tool that has the potential to support learning.
Stay tuned for our next episode in this mini-series, where Libby and Owen take a step back to look at how to define tutoring and how to assess the extent to which an AI tool actually delivers it.
Team biographies
Muktha Ananda is an Engineering Leader for Learning and Education at Google. Muktha has applied AI to a variety of domains, including gaming, search, social/professional networks, online advertising, and most recently education and learning. At Google, Muktha’s team builds horizontal AI technologies for learning that can be used across surfaces such as Search, Gemini, Classroom, and YouTube. Muktha also works on Gemini Learning.
Markus Kunesch is a Staff Research Engineer at Google DeepMind and tech lead of the AI for Education research programme. His work is focused on generative AI, AI for Education, and AI ethics, with a particular interest in translating social science research into new evaluations and modelling approaches. Before embarking on AI research, Markus completed a PhD in black hole physics.
Irina Jurenka is a Research Lead at Google DeepMind, where she works with a multidisciplinary team of research scientists and engineers to advance Generative AI capabilities towards the goal of making quality education more universally accessible. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an Experimental Psychology student at Westminster University. This was followed by a DPhil at the Oxford Center for Computational Neuroscience and Artificial Intelligence.
Join us on social media:
- BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
- Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical
- Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
- Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design