Incongruent

UAE Green Paper 2024 - What Does It Mean to Know Something in the Age of AI?

The Incongruables


We explore the UAE's comprehensive green paper on generative AI in education, unpacking a multi-level framework that addresses both opportunities and challenges of AI integration in learning environments. This three-tiered approach examines policy, institutional implementation, and classroom experience to create a balanced roadmap for educational transformation.

• The UAE demonstrates leadership in AI education through national strategies and concrete actions like developing AI tutors for their curriculum
• International frameworks including UNESCO's AI Competency Framework inform the UAE's ethical approach to educational AI
• The Human-Machine Interaction quadrant provides institutions with a structured way to implement different types of AI assistance
• Professional development for teachers must focus on practical benefits and ease of use, not just technical training
• Academic integrity requires clear policies on appropriate AI use alongside verification methods like oral examinations
• Critical questions emerge about equitable treatment across subjects and potential cultural biases in AI systems
• The future of education will require rethinking what it means to "know" something in an age of instant AI assistance


Speaker 1:

Welcome to the Deep Dive. Today we're jumping into something huge and evolving fast: generative AI in education. We're looking at it through this really comprehensive green paper from the UAE. So our mission, basically, is to unpack the key insights, the opportunities, which are massive, but also the challenges. Think of it as your shortcut to getting up to speed on AI and the future of learning.

Speaker 2:

Yeah, and what's great about this green paper is its structure. It uses these 3M levels, macro, meso and micro, to break down GAI's impact. And the UAE? Well, that's a really relevant place to look, isn't it? They've had things like the UAE Council for Artificial Intelligence and Blockchain for a while now, plus appointing a Minister of State for AI way back in 2017.

Speaker 1:

That really put them on the map, kind of at the forefront of this whole AI push. Also, the absolute need for a solid framework to handle the tricky stuff, and there is tricky stuff: making sure access is fair, keeping learning human-centered, protecting intellectual growth, the psychological side of things and, crucially, needing humans to actually check what the AI puts out. You know, fighting bias, making sure it's accurate. So let's start at the top, the macro level. What's the UAE's big picture, the grand vision for generative AI? How's it shaping policy, ethics, that sort of thing?

Speaker 2:

Well, their vision isn't just a vague idea. It's tied into national strategies like Vision 2021 and Centennial 2071. These are serious plans aiming for tech leadership and making sure pretty much everyone has some AI literacy. And we're seeing real action. They've launched a GAI guide, which is pretty detailed, even covering education uses, plus big pushes to train teachers and give them AI skills. And what's really interesting, I think, is the Ministry of Education's own initiative developing and launching AI tutors. They've even partnered with ASI (you might know them as Digest AI before) to build an AI tutor specifically for the UAE's national curriculum. So the takeaway is this very proactive, top-down integration, making AI literacy core, not just an add-on.

Speaker 2:

And on ethics and governance, the green paper is really thorough here. It pulls in major international frameworks. You've got UNESCO's 2024 AI Competency Framework, which covers the basics: ethics, practical skills, being adaptable. And it leans heavily on UNESCO's 2021 Recommendation on the Ethics of AI, emphasizing, you know, keeping it human-centered, respecting rights and cultural diversity. Also, the 2019 Beijing Consensus on AI and Education gets a mention, calling for fair access and getting everyone involved in making policy. So these aren't just name-dropped guidelines. They seem to be genuinely shaping the UAE's approach from the classroom up, which is maybe different from places where ethics feels more like a top-down memo.

Speaker 1:

Yeah, and that connects to the real world, doesn't it? Like, the World Economic Forum keeps talking about AI for personalized learning, and that IBM index from 2023 found 42% of UAE companies already using AI. That's a lot. It really highlights this need for different skills in the workforce, doesn't it? Critical thinking, yeah, but also that ethical understanding feels more important than ever. Is the paper hinting we need to rethink what fundamental skills even are?

Speaker 2:

Absolutely. And stepping back, that brings us to the core ethical pillars. They focus on preventing bias, keeping data private, and being transparent and accountable. So, for instance, that AI tutor project from the ministry specifically has built-in privacy protections and aims for culturally appropriate content fitting local values. The paper even floats the idea of an ethical AI toolkit, like a practical guide for teachers on how to use AI responsibly. It's all about making AI a tool that helps teachers, not something that clashes with their values or adds complexity without benefit.

Speaker 1:

OK, but that raises a tough question, I think: what does equity in AI really mean here? Is it just "okay, everyone gets a login," or is it something deeper? Like, does the AI treat subjects fairly? Math seems straightforward, maybe, but what about history or literature? Could biases in the training data subtly skew things, depending on the subject?

Speaker 2:

That's a really good point, and it's complex. The paper uses a neat analogy. It compares AI in education now to early mobile phones. You know those first brick phones? They worked, they were valuable, but they didn't totally transform everything like smartphones did later. AI is kind of like that now in education. It's useful, definitely, but it hasn't fully woven itself into the fabric and changed everything yet. And because it's still relatively early days, that makes universal access tricky and also makes it harder to ensure it works equally well across all subjects, especially where, yeah, cultural biases might creep in. We're still figuring it out, basically: scaling up from useful tools to genuinely transformative systems.

Speaker 1:

Right, okay. So if macro is the big strategy, meso is where the rubber meets the road: the institutions, schools, universities. It makes me think of the calculator again. First it was banned as cheating; now it's essential. So the question for these institutions is: how do you tell the difference between AI use that's genuinely transformative and AI that's just the latest gadget?

Speaker 2:

Exactly. And the paper doesn't shy away from the big questions, like: could these intelligent tutoring systems (ITSs) actually replace human teachers entirely? And the ethics of that, wow. It explores ideas like: should AI behave ethically like a human, or should we aim for some kind of ultimate moral machine with higher standards? But, and this is key, it always comes back to needing humans in the loop for oversight, for quality control. So, while replacement is maybe a theoretical possibility down the line, the paper really grounds it in keeping educators in charge, focused on the learner, using AI, not being replaced by it.

Speaker 1:

Keeping humans in the loop makes a lot of sense, and the green paper offers this practical tool: the human-machine interaction, or HMI, quadrant typology. How does that actually help institutions figure out policy?

Speaker 2:

It gives them a framework, a map, really. It breaks down AI use into four types. You've got full automation, where AI runs the show, like generating reports automatically. Then collaborative interaction, where AI helps students work together, maybe facilitating group projects. Then full human control, where AI helps the teacher, say with lesson plans, but has zero autonomy; the teacher drives everything. And finally, assisted by AI, where AI handles routine tasks like grading multiple choice or tracking attendance, freeing up the teacher. So institutions can use these quadrants to think about pilots and develop guidelines, as in the sketch below. It makes the whole process less chaotic, more intentional.
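As a minimal sketch of how an institution might operationalize the four HMI quadrants, here's one way to encode them as a policy lookup in Python. The enum names, example tasks, and rollout statuses are our own illustration, not taken from the green paper:

```python
from enum import Enum

class HMIQuadrant(Enum):
    """The four human-machine interaction modes described in the green paper."""
    FULL_AUTOMATION = "AI runs the task end to end, e.g. auto-generated reports"
    COLLABORATIVE = "AI facilitates students working together, e.g. group projects"
    FULL_HUMAN_CONTROL = "AI assists the teacher with zero autonomy, e.g. lesson-plan drafts"
    AI_ASSISTED = "AI handles routine tasks, e.g. multiple-choice grading, attendance"

# Hypothetical rollout policy: which quadrants a school pilots first,
# and which require extra oversight. The statuses are illustrative.
pilot_policy = {
    HMIQuadrant.AI_ASSISTED: "approved for pilots",
    HMIQuadrant.FULL_HUMAN_CONTROL: "approved for pilots",
    HMIQuadrant.COLLABORATIVE: "pilot with teacher oversight",
    HMIQuadrant.FULL_AUTOMATION: "all outputs require human review",
}

for quadrant, status in pilot_policy.items():
    print(f"{quadrant.name}: {status}")
```

The point of a map like this is that every proposed AI use case lands in exactly one quadrant, so the oversight question gets asked up front rather than after deployment.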

Speaker 1:

Okay. Implementing is one thing, but what about quality? How do schools make sure they're using AI well and, you know, ethically?

Speaker 2:

That's critical. The paper emphasizes needing clear benchmarks. We need to measure if GAI is actually improving learning outcomes, not just making things quicker. It's about moving past convenience to see real educational value. And alongside that, strong ethical guidelines on transparency and data privacy, making sure AI helps learning without creating new problems or risks. Which ties straight into getting teachers ready. The paper really stresses professional development, and it links it to the technology acceptance model, TAM. Basically, teachers need to see that these tools are genuinely easy to use and actually help them in their real classrooms. It has to have practical benefits; otherwise, why bother? So: training programs, yes, but also getting feedback, maybe through surveys, to make sure the training and tools actually work for them. Make AI intuitive, not another burden.
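Since the paper ties teacher readiness to TAM, here's a hedged sketch of how survey feedback could be scored against TAM's two core constructs, perceived usefulness and perceived ease of use. The construct names come from TAM itself; the survey items, ratings, and threshold are illustrative assumptions:

```python
from statistics import mean

# Hypothetical Likert responses (1-5) from one teacher cohort, grouped by
# TAM's two core constructs. Items and threshold are our own assumptions.
responses = {
    "perceived_usefulness": [4, 5, 3, 4],   # e.g. "This tool saves me grading time"
    "perceived_ease_of_use": [2, 3, 2, 3],  # e.g. "I can set up a lesson unaided"
}

ADOPTION_THRESHOLD = 3.5  # below this, follow up with targeted training

for construct, ratings in responses.items():
    score = mean(ratings)
    verdict = "ok" if score >= ADOPTION_THRESHOLD else "needs attention"
    print(f"{construct}: {score:.2f} ({verdict})")
```

In this toy cohort, usefulness scores fine but ease of use doesn't, which is exactly the signal that training should target setup friction rather than selling the tool's benefits.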

Speaker 1:

And all of this has to feed back into the curriculum, right? Building in AI literacy, critical thinking skills. Are we talking new courses, or...?

Speaker 2:

Yeah, more like weaving it in. The paper suggests things like dedicated AI ethics modules becoming standard, so students really grapple with the implications, but also using GAI simulations, say in STEM subjects, for hands-on problem solving. So they're learning with AI within their existing subjects, not just about AI separately.

Speaker 1:

Okay, let's zoom right in now to the classroom, the micro level, the student experience. And the paper flags this really fascinating, almost weird loop: AI-on-AI assessment. Student uses AI to write an essay; AI grades the essay. My first reaction is: isn't that just cutting out the learning bit? How does the paper suggest we stop students just relying on it and actually push them to think?

Speaker 2:

That's a huge concern, definitely, and it connects to that earlier point about equitable treatment across subjects. It's maybe easier to spot AI writing in, say, a technical report than in a history essay, where nuance and interpretation matter more. And should AI even try to be neutral in subjects like history or literature, where cultural context is so important? What if the training data has biases? The paper flags this as needing serious thought for policy. How do we manage those potential biases from, you know, non-local training data? How do we make sure students still engage critically with sources? It's about making AI culturally aware or at least acknowledging its limitations, not just accepting a global default.

Speaker 1:

But despite the challenges, there's also the flip side: GAI's power for adaptive, personalized learning and real-time support. That fits perfectly with ideas like constructivist learning theory, right? Letting students build knowledge at their own pace. So it's more than just a fancy e-book. It's about platforms that genuinely adapt, maybe simulations for real-world problems that adjust to the student.
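To make "genuinely adapts" concrete, here's a minimal sketch of the kind of adaptive-difficulty loop such platforms are built around. The simple step-up/step-down rule is our own simplification, not a method described in the green paper:

```python
def next_difficulty(current: int, answered_correctly: bool,
                    lowest: int = 1, highest: int = 5) -> int:
    """Step difficulty up after a correct answer, down after a miss."""
    step = 1 if answered_correctly else -1
    return max(lowest, min(highest, current + step))

difficulty = 3
for correct in [True, True, False, True]:  # one student's simulated answers
    difficulty = next_difficulty(difficulty, correct)
    print(f"next question at difficulty {difficulty}")
```

Real systems model much richer learner state than a single number, but the core idea is the same: the next task is chosen from the student's responses, not from a fixed sequence.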

Speaker 2:

Absolutely. But then academic integrity comes roaring back: assessment. The paper stresses needing crystal-clear policies on what's OK and what's not OK regarding GAI use in assignments and exams. And not just rules, but ways to check, like maybe using viva voce, more oral exams, if misuse is suspected, to see if the student truly understands the work, or using authorship-authentication software to check originality, making sure it's really the student's own thinking, even if AI tools were used appropriately as aids.
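As a toy illustration of the authorship-checking idea, one simple stylometric approach compares a submission's word-frequency profile against a sample of the student's known writing. Real authorship-authentication tools use far richer features, so treat this purely as a sketch of the underlying comparison:

```python
from collections import Counter
import math

def word_profile(text: str) -> Counter:
    """Lowercased word-frequency profile of a text."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

known_writing = "the student builds an argument step by step and cites sources"
submission = "step by step the argument cites the sources the student found"

score = cosine_similarity(word_profile(known_writing), word_profile(submission))
print(f"style similarity: {score:.2f}")  # a low score might prompt a viva voce
```

Note this kind of check is a prompt for human follow-up (like the oral exam the paper mentions), not a verdict on its own.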

Speaker 1:

And a massive piece of this puzzle is the students themselves: their AI literacy, their skills. It's not just "can you use ChatGPT?", it's "do you understand it?"

Speaker 2:

Precisely. The green paper says we need to embed ethical AI use right into the curriculum: get students doing collaborative projects using AI tools responsibly, regular training on AI ethics and, interestingly, it also mentions raising awareness about AI's environmental footprint, thinking about responsible use in a broader sense. Zooming out slightly, GAI is also changing research, isn't it? Literature reviews, synthesizing info. It offers amazing tools for finding patterns and summarizing vast amounts of text. But big challenges too: reliability, authenticity. How do we make sure students are still digging into primary sources? How do we guard against AI confidently presenting misinformation? How do they cite AI use transparently? And again, mitigating those historical or cultural biases. These are huge questions for developing real research skills today.

Speaker 1:

So, as we wrap up this deep dive, it feels like this green paper from the UAE offers a really solid, multilayered way to think about generative AI in education. It doesn't just hype the benefits; it gives a very clear-eyed view of the risks too. A balanced perspective, which is refreshing.

Speaker 2:

Yeah, I agree. And looking at the bigger picture, it provides such a strong foundation for, well, actual policy discussions and real strategies. It really positions the UAE's education system to be a leader in integrating GAI responsibly and, honestly, preparing students for the future that's already arriving. It's definitely a blueprint others could look at. And it leaves us with a really interesting question to mull over, doesn't it? As AI becomes more and more part of education and we really push for these human-centered approaches, what parts of our human creativity, our critical thinking, are going to be amplified? And maybe what new ways will we need to think about knowledge itself? What does it mean to know something in the age of AI?

Speaker 1:

That is a profound thought to end on, something for all of us to think about. Thank you so much for joining us.