Incongruent
Podcast edited by Stephen King, an award-winning academic, researcher and communications professional.
Chester | Dubai | The World.
Correspondence email: steve@kk-stud.io.
AI ACADEMY: Prompt Wizards and Data Guardians: Making AI Work for Schools (DfE, June 2025)
We take a comprehensive look at using artificial intelligence safely and effectively in education by examining guidance from the UK Department for Education and related sources. This deep dive unpacks the opportunities, challenges, and key considerations for anyone dealing with AI in educational settings.
• Understanding generative AI as a tool that creates new content based on massive datasets and pattern recognition
• The critical importance of crafting detailed, clear prompts to get quality output from AI systems
• Maintaining human oversight as AI lacks true understanding and is simply making predictions based on training data
• Safeguarding concerns including exposure to harmful content and the creation of convincing deepfakes
• Data protection as a paramount consideration under UK GDPR with strong warnings against using free AI tools for student data
• The surprising environmental impact of AI systems which consume electricity equivalent to entire countries
• Strategic implementation requiring proper planning aligned with school development goals
• Practical applications including generating teaching resources, personalizing learning, and streamlining administrative tasks
• The need for schools to develop clear AI policies covering usage, data handling, and academic integrity
How do we best equip everyone - students, teachers, leaders - with the critical thinking skills they need to navigate this complex new landscape responsibly, ensuring the technology serves learning and not the other way around?
Introduction to AI in Education
AI 1: Welcome to the Deep Dive. Today we're taking a really close look at a whole stack of sources about using artificial intelligence safely and effectively in education. We've gathered up recent UK Department for Education guidance plus, you know, related videos, transcripts, documents, trying to get a clear picture of things right now.
AI 2: That's right, and our job really is to sort of cut through all the noise, unpack these sources and pull out the most important bits.
AI 1: Exactly: the opportunities, the challenges, the things you really need to think about if you're dealing with AI in schools or colleges.
AI 2: Yeah, so think of this as, like, your shortcut to getting up to speed.
AI 1: Right, whether you're leading a school, in the classroom or just really curious about how AI is changing learning. Okay, let's dive in. When we talk about AI in education, what do we actually mean? Because it feels like it's kind of everywhere already, but there's a specific focus now, isn't there?
AI 2: Absolutely, you're spot on. AI is in loads of familiar stuff: spam filters, predictive text on your phone, things we don't even notice. You might not label it AI, but it is. For this deep dive, though, we're really zoning in on generative AI. That's the AI that actually creates new stuff: text, images, audio, video, even computer code.
Understanding Generative AI Fundamentals
AI 1: Oh, got it. The kind of AI that can, you know, write an essay draft or make a picture from a description. So how does that work, like, in simple terms, based on the sources?
AI 2: Well, the sources break it down pretty simply. It's basically built on machine learning: computers learning from data. And then there's deep learning, using these complex, sort of brain-like neural networks, which lets them process huge amounts of information and generate something new.
AI 1: And that leads us to the large language models, the LLMs, things like ChatGPT, Gemini, Copilot.
AI 2: Exactly those. They're the big examples. They get trained on absolutely massive data sets, learning patterns in language or images or code.
AI 1: Right.
AI 2: And that lets them predict what comes next: the next word, the next pixel. That's how they generate stuff that feels so human-like or really creative.
AI 1: It almost sounds like they understand it, but the sources used a helpful way to think about it: like a black box.
AI 2: Yeah, that's a really useful analogy. You put something in, that's your prompt, something complicated happens inside the box, you don't see how exactly, and then you get something out: the AI's response.
AI 1: Okay, and this is where it gets really critical for educators, right? Because the quality of what comes out hugely depends on what you put in. The sources really stressed this point.
AI 2: They really did. A detailed, clear prompt makes all the difference. We saw examples, you know, like asking for a quiz.
AI 1: Yeah.
AI 2: Just saying "make a quiz about light" gives you something, well, pretty generic, right. But if you specify, say, "create a 10-question multiple-choice quiz, include the answer key, this is for UK Year 3 science, based on the national curriculum, the topic is light", then you get something much more useful.
AI 1: Ah, much more targeted.
AI 2: Exactly, and the sources even mention tools like Aila from Oak National Academy that are specifically designed to help teachers with this, generating stuff that's already aligned to the curriculum.
AI 1: So it's not just the AI's power, it's our skill in asking the right questions.
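The difference between a vague and a specific prompt, as just discussed, can be sketched as a small helper that bakes the audience, curriculum and format into the request. This is purely illustrative: the function name, parameters and defaults are our own assumptions, not anything from the DfE guidance.

```python
# Illustrative sketch only: composing the kind of detailed quiz prompt the
# guidance recommends. All names and defaults are our own assumptions.

def build_quiz_prompt(topic: str,
                      n_questions: int = 10,
                      audience: str = "UK Year 3 science",
                      curriculum: str = "the national curriculum") -> str:
    """Turn a bare topic into a specific, curriculum-anchored prompt."""
    return (
        f"Create a {n_questions}-question multiple-choice quiz on {topic}. "
        f"Include the answer key. This is for {audience}, "
        f"based on {curriculum}."
    )

vague_prompt = "Make a quiz about light."     # likely to produce generic output
specific_prompt = build_quiz_prompt("light")  # much more targeted
print(specific_prompt)
```

The point is not the code itself but the habit it encodes: the specifics (question count, answer key, year group, curriculum) are what move the output from generic to usable.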
AI 2: Precisely. And that links straight to another really critical point from the sources: AI is not a human expert, fundamentally. Its output is just based on prediction from its training data. It doesn't understand things like a person does.
AI 1: Which means you can't just take its word for it.
AI 2: Exactly that. You always need human oversight. You've got to check the outputs. Are they accurate? Is there bias? Is it actually relevant for your students, your context?
AI 1: Right, because the sources warned about American spellings creeping in, or outdated teaching ideas.
AI 2: Or stuff based on, say, the American education system. The AI doesn't know what's current or right for your specific school, it's just giving you the statistically most likely answer based on its training. Okay, and the sources are crystal clear: as the educator, you're professionally responsible for the prompt you use and for the output, how you check it, adapt it, use it.
AI 1: So cross-check with your curriculum, official guidance, use your own expertise. Indispensable.
AI 2: Your knowledge is key. Okay.
Key Risks and Safeguarding Concerns
AI 1: So understanding that input-output thing and the absolute need for human oversight, well, that immediately brings the risks into view, doesn't it? It does. The sources spend a lot of time on risks and safeguarding. One expert even said, you know, lots of opportunities but definitely risks to navigate.
AI 2: Yeah, and the advice was quite pragmatic: learn fast, but act more slowly. Don't feel pressured to just jump in and adopt everything until you've really understood it and figured out your strategy.
AI 1: That feels like solid advice, especially when you see how much kids are already using this stuff.
AI 2: It's vital. The sources mentioned an Ofcom study from 2024: something like 54% of 8 to 15-year-olds, wow, and 66%, so two-thirds, of 13 to 15-year-olds in Britain had used generative AI in the past year.
AI 1: That's huge.
AI 2: It is, and over half of those 8 to 15-year-olds said they used it for schoolwork. So, you know, it's already happening. Educators need to be informed to guide it properly.
AI 1: Definitely makes this conversation essential. Okay, so the sources flag some key risk areas. What's the first big one?
AI 2: Exposure to harmful content. That's a major worry.
AI 1: How so?
AI 2: Well, generative AI makes it super easy to create really realistic images and avatars. So there are concerns about kids potentially seeing inappropriate stuff, or even grooming risks through chatbots that seem very human.
AI 1: And this connects to wider security issues too.
AI 2: It does, yeah. The UK's counter-terrorism strategy was mentioned, highlighting the risk of AI being used to create and spread fake propaganda and deepfakes on a massive scale, potentially overwhelming the systems trying to moderate content.
AI 1: So educators are kind of on the front line, helping students spot fakes and misinformation.
AI 2: Exactly. Teaching those critical thinking skills is vital: how to question what you see, recognize misinformation, challenge extremist ideas. That fits right in with the Prevent duty. Safeguarding students and good filtering and monitoring systems are obviously crucial too.
AI 1: Okay, what's another risk? I know inaccuracy and bias came up a lot.
AI 2: Yes. Because AI learns from these enormous data sets, often just scraped from the internet, its outputs can be wrong or biased, reflecting the biases already out there in the world: racial and gender stereotypes, that kind of thing.
AI 1: And bias can be in the algorithm itself, or even in the question we ask.
AI 2: Both. Bias can be sort of baked into the algorithms, and there's definitely prompt bias: how you phrase the question can steer the AI.
AI 1: Ah, like that example they gave about maths.
AI 2: Yeah, that was a good one. Asking "why do students struggle with maths?" sort of assumes they do struggle, right? Whereas asking "what factors influence students' experiences with learning maths?" is more neutral.
AI 1: It allows for successes and challenges. A much better prompt. That really shows how our own assumptions can shape the AI's answer. It does highlight the need for critical thinking from us too.
AI 2: And then there are hallucinations.
AI 1: Where the AI just makes stuff up?
AI 2: Basically, yes. It generates content that sounds plausible but isn't actually true or accurate, sometimes with real confidence.
AI 1: So the takeaway again is: check everything.
AI 2: Cross-reference with trusted sources, curriculum plans, official guidance. Use your professional judgment. Don't just copy and paste. Never just copy and paste.
AI 1: Okay, then there's data protection, UK GDPR. That must be a huge consideration with student data.
AI 2: Paramount, absolutely paramount. Under UK GDPR, if you process personal data, names, photos, assessment results, anything identifiable, you must have a lawful basis. Using those free, public AI tools is really risky because you don't control where that data goes. It might get stored somewhere else, maybe even used to train the AI model without your OK, and it could be accessed by people who shouldn't see it. And children's data needs extra special care. They have rights of access, correction and, importantly, the right to be forgotten, especially if they agreed to something as a child without fully grasping the risks. Using their data, particularly sensitive stuff, in general AI tools could be a serious GDPR breach.
AI 2: So the really strong advice from all the sources is: only use AI tools that your school or college has officially approved and provided.
AI 1: The enterprise versions?
AI 2: Usually, yes, because those have been checked out. They've likely had data protection impact assessments done. They have better safeguards. The message is loud and clear: don't put sensitive or personal data into general AI tools unless your institution has specifically said it's safe after a proper assessment.
AI 1: That sounds like a hard and fast rule. Okay, linked to data: intellectual property, IP infringement.
AI 2: Yeah, different but related. This is about creative work, lesson plans, resources and, importantly, student work. Copyright belongs to the creator. So if you put student work, say an essay for AI marking, into a tool that learns from the data you feed it, you could be infringing that student's copyright, unless you have their permission, or their parents' if they're minors.
AI 1: And the AI itself could spit out copyrighted material.
AI 2: That's the other risk: secondary infringement. If the AI learned from stuff it wasn't licensed to use and its output then includes that, like copying text from another school's website or generating an image based on copyrighted art, you could be liable if you use it.
AI 1: So the mitigation is: use tools that don't train on inputs, get permissions, be transparent, and be careful sharing AI-generated stuff publicly.
AI 2: Exactly. Transparency is key, especially with student work.
AI 1: Okay, the last big risk area they covered was academic integrity: students using AI for assignments.
AI 2: Yeah, a massive challenge, and the sources were pretty blunt: those AI detection tools are just not reliable. They throw up false positives, unfairly flagging students, maybe those for whom English isn't their first language.
AI 1: Yeah.
AI 2: And they often miss well-hidden AI use anyway.
AI 1: So relying on detectors isn't the way forward.
AI 2: Not really, no. The sources strongly emphasize using your professional judgment. Knowing your students' usual work, spotting inconsistencies, is much more effective. JCQ guidance is clear: work must be the student's own; unacknowledged AI use is malpractice. But it's not just about cheating in the old sense.
AI 1: It's about whether they're actually learning anything.
AI 2: Precisely. If you just rely on AI, you bypass the actual learning, the critical thinking. Schools need really clear policies on AI use, and assignments need to be designed differently, perhaps focusing more on process and reflection, things AI can't easily fake.
AI 1: And talking to students about it? Essential.
AI 2: Discussing the risks, the ethics, helping them see that just getting an AI answer isn't a substitute for genuine understanding and effort. And again, giving them access to approved, safe tools helps guide them towards responsible use.
Environmental Impacts of AI Usage
AI 1: That's a really thorough rundown of the risks. But there's another angle the sources brought up, one that might catch people by surprise: the environmental cost.
AI 2: Yes, this is sustainability. Generative AI has a surprisingly significant environmental footprint.
AI 1: How so?
AI 2: Well, these systems need huge amounts of electricity, powering all those servers in massive data centers around the world.
AI 1: Thousands of them.
AI 2: Around 7,000 globally, apparently, and they need constant cooling. Altogether, they use more energy than many entire countries.
AI 1: Wow. That puts a massive strain on energy supplies, even renewables.
AI 2: It does. It makes it hard even for the big tech companies to hit their own carbon goals, because AI use is soaring.
AI 1: And it's not just energy, it's water too.
AI 2: Right, for cooling again. The sources mentioned an average large data center uses something like 2.1 million liters of water every single day.
AI 1: That's staggering, often in places already short on water. Exactly. And even small AI tasks add up, like a search query.
AI 2: Comparatively, yeah.
AI 1: Yeah.
AI 2: An AI search can use maybe 10 times the energy of a normal Google search, and generating one image could use as much energy as charging your phone halfway. It really makes you stop and think.
AI 1: Is there any positive news on this front? Mitigation?
AI 2: Well, tech companies are investing a lot in renewables, and there's potential for AI itself to help tackle climate change, maybe through complex modeling, but the immediate energy and water demand is definitely a big concern right now. Okay, the sources did mention, though, that smaller, maybe more efficient AI models are expected around 2025, which should help.
AI 1: But until then, it sounds like we need to be conscious users.
AI 2: Definitely. Just being mindful: do I really need AI for this specific task, or would a standard search, or using a resource I already have, be more efficient and, well, better for the planet? It's about thinking about the wider impact of our choices.
Implementation Strategies for Schools
AI 1: Okay, so we've looked at what AI is, the pretty significant risks, the environmental side. With all that on the table, how did the sources suggest schools and colleges actually go about using AI safely and effectively?
AI 2: Well, the DfE guidance is quite positive really. It says AI can genuinely transform things, help teachers focus more on teaching, but it needs safe, effective implementation and the right infrastructure. Leaders really need to grasp both the potential and the pitfalls.
AI 1: Yeah, it sounds like something that needs a proper plan, not just, you know, buying some new software.
AI 2: Absolutely. It needs a strategy. It should tie into your school's wider digital plan, your development plan. The guidance even suggests checking it against the DfE's existing digital and technology standards.
AI 1: What kind of practical things should leaders be thinking about, based on the sources?
AI 2: Quite a few key things came up. Obviously, ensuring you're meeting safeguarding duties; Keeping Children Safe in Education is fundamental. Making sure AI use aligns with your school's educational philosophy. Developing or updating policies: data protection, IP, safeguarding, ethics; that's crucial. Planning for any infrastructure upgrades needed, setting up support teams, evaluating tools before you commit, monitoring how AI is being used, maybe even setting up an AI steering group to guide the whole process.
AI 1: I remember seeing some specific tips for college leaders from JISC too.
AI 2: Yes, JISC had five clear actions. Lead by example: use the tools yourself. Set boundaries: clear guidelines for exploring AI. Invest in staff training: that's vital. Create an AI culture: encourage curiosity and critical thinking about it. And collaborate with industry: understand how AI is changing the workplace students are heading into.
AI 1: All sounds very sensible, but running through all of that, the absolute core message seemed to be about keeping humans in control.
AI 2: That's the golden thread, absolutely. You always maintain human oversight. You never outsource your professional judgment, your thinking, your decisions to an AI. AI is positioned as a tool to support humans: support expertise, interaction, judgment. That human element remains totally central to education.
AI 1: So it's about empowering people with AI, not replacing them.
AI 2: Precisely, and the sources suggest a kind of phased approach to rolling it out: explore first, assess your needs, check your tech, talk to everyone; then prepare; then deliver, train people, monitor how it's going, get feedback; and then sustain, embed it in your strategy, keep policies updated, review tools, keep talking about it. They even mentioned using audit tools to help figure out where you are now and plan the next steps.
Practical Classroom Applications
AI 1: Right, let's switch gears a bit. What does this actually look like in practice, in the classroom, in the school office? The sources gave some really interesting examples of safe and effective uses.
AI 2: Yeah, they broke them down into sort of supporting teaching, personalizing learning, and admin tasks.
AI 1: Okay, supporting teachers first.
AI 2: Some great examples there. AI generating lesson resources quickly, like that photosynthesis plan that included differentiation ideas. Creating quizzes from, maybe, a block of text you have. Breaking down complex text for different reading levels; simplifying that geography text was a good example. I like the creative stuff too. Yes, getting AI to generate, like, a rap about the planets, or mnemonics, or maths problems set in fun contexts, like that Avengers fractions problem. And drafting routine things: emails home, adapting the tone, helping draft policies, even helping plan the logistics for a school trip. Lots of workload reducers.
AI 1: And personalized learning. That sounds like a big potential area.
AI 2: Huge potential, yes.
AI 1: Yeah.
AI 2: Helping teachers adapt resources for kids with specific needs, like that computer science lesson adaptation mentioned.
AI 1: Okay.
AI 2: Generating personalized learning plans, but always with a teacher overseeing it, because they know the child.
AI 1: Right.
AI 2: There was one really powerful example: an automotive teacher using an AI tool trained only on his own curated teaching materials.
AI 1: Ah, so a closed system.
AI 2: Exactly. Safe data. It meant students could ask it questions and get tailored info based only on what the teacher approved. It could even generate podcast summaries for accessibility. Really clever.
AI 1: That's a fantastic use case, assuming all the data handling and permissions were solid.
AI 2: Absolutely, safeguards are paramount. What about admin tasks? That seems like a natural fit for AI.
AI 1: Definitely. Things like getting first drafts of policies, summarizing long documents, checking policies against new laws, maybe helping with timetabling or structuring development plans; all stuff that could save a lot of time.
AI 2: And using data for insights?
AI 1: Yes, and this warning came up again and again: never put individual students' personal data into general AI tools. Okay, critical point.
AI 2: But using anonymized data with approved, secure tools, that can help spot patterns: attendance trends, performance across cohorts, like visualizing GCSE results against Key Stage 2 scores, perhaps, to see overall patterns, not to track individuals.
AI 1: So data analysis is possible, but with extreme caution on personal info.
AI 2: Absolute caution. Transparency and a lawful basis under GDPR are essential.
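The pattern-spotting idea just described can be sketched in a few lines. The records below are synthetic and carry no names or identifiers, only a cohort band and a score, so no personal data is involved; the field names are our own illustration, not from any real school system.

```python
# Illustrative sketch: spotting cohort-level patterns in anonymized data.
# Records carry no names or identifiers, so no personal data is processed.
# All field names and figures here are made-up assumptions.
from collections import defaultdict
from statistics import mean

def cohort_averages(records):
    """Average GCSE points per Key Stage 2 band across anonymized records."""
    bands = defaultdict(list)
    for rec in records:
        bands[rec["ks2_band"]].append(rec["gcse_points"])
    return {band: mean(points) for band, points in bands.items()}

synthetic = [
    {"ks2_band": "higher", "gcse_points": 58},
    {"ks2_band": "higher", "gcse_points": 62},
    {"ks2_band": "middle", "gcse_points": 44},
    {"ks2_band": "middle", "gcse_points": 48},
]
print(cohort_averages(synthetic))  # {'higher': 60, 'middle': 46}
```

The design point is that aggregation happens over bands, never over named individuals, which is what keeps this kind of analysis on the right side of the "no personal data in AI tools" rule.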
AI 1: Okay, and finally, what about students themselves using AI safely?
AI 2: Well, if the tools are provided by the institution with the right safeguards, like not training on student inputs and having monitoring, then students can use LLMs for things like research; that EPQ student example, using a college tool for research questions, was mentioned. But the key is teaching them to verify the facts themselves and to credit the AI properly. Digital literacy.
AI 1: And giving them safe tools helps bridge that digital divide too, right, if they can't afford premium tools.
AI 2: Exactly, and students can use AI creatively with guidance, like that image generator for creative writing prompts. Some places are being really upfront, explaining AI policies at enrollment, setting expectations early.
AI 1: And there was that framework for writing good prompts, FACTS? Yeah, that's a handy one.
AI 2: Focus the prompt: clear, concise. Analyze the output: check it carefully. Check for bias: actively look for it. Tailor suitability: make sure it fits your context. And strengthen the prompt: refine it based on what you got back. A good little checklist.
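The checklist just described is easy to keep to hand as data, for example in staff-training materials. The wording below paraphrases the discussion and is not official text; the structure and function are our own sketch.

```python
# Illustrative only: the prompt-writing checklist from the discussion,
# paraphrased, as a structure a school could adapt for staff training.
PROMPT_CHECKLIST = [
    ("Focus the prompt", "Keep it clear, concise and specific about the task."),
    ("Analyze the output", "Check it carefully for accuracy."),
    ("Check for bias", "Actively look for biased framing or content."),
    ("Tailor suitability", "Make sure it fits your curriculum and context."),
    ("Strengthen the prompt", "Refine it based on what you got back."),
]

def print_checklist(checklist=PROMPT_CHECKLIST):
    """Print the checklist as a numbered list."""
    for i, (step, detail) in enumerate(checklist, start=1):
        print(f"{i}. {step}: {detail}")

print_checklist()
```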
Critical Thinking in the AI Era
AI 1: So, wrapping this all up, what's the big picture from this deep dive? It seems AI offers some genuinely exciting ways to improve education.
AI 2: Definitely. Easing teacher workloads, creating dynamic resources, personalizing learning.
AI 1: The potential is clearly there. But, and it's a big but, using it safely means really getting to grips with the risks: safeguarding, data protection, IP, academic integrity, even the environment. These aren't side issues, they're central.
AI 2: Absolutely. The core message really, from all the sources, is: be strategic, be considered. Human oversight and judgment have to stay front and center. Be transparent. Use approved tools with proper safeguards. Develop clear policies.
AI 1: It's obviously moving incredibly fast, this whole area, but it sounds like if we stick to those core principles, keeping students safe, focusing on real learning, we can hopefully harness the good bits responsibly.
AI 2: That's the goal. Which, I suppose, leaves us with a really important question for you, the listener, to think about. As AI gets more woven into education, and we know young people are using it a lot already, how do we best equip everyone, students, teachers, leaders, with the critical thinking skills they need to navigate this complex new landscape responsibly, making sure the technology serves learning and not the other way around?
Aayushi Ramakrishnan
Co-host
Arjun Radeesh
Co-host
Imnah Varghese
Co-host
Lovell Menezes
Co-host
Radhika Mathur
Co-host
Sukayna Kazmi
Co-host
Stephen King
Editor
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
The Agentic Insider
Iridius.ai Inc
Brew Bytes
Nadeem Ibrahim