
Incongruent
Experimental podcast edited by Stephen King, senior lecturer in media at Middlesex University Dubai.
Correspondence email: stevekingindxb@gmail.com or s.king@mdx.ac.ae.
Turnitin 2024: Navigating Education's AI Frontier: Students, Teachers, and the Ethics of Learning
Spill the tea - we want to hear from you!
Artificial intelligence isn't just changing education—it's creating a fascinating paradox that affects everyone involved. Our deep dive into a comprehensive global study reveals a striking contradiction: while 78% of students, educators, and administrators feel optimistic about AI's educational potential, an overwhelming 95% believe it's being misused at their institutions. This fundamental disconnect exposes the urgent need for clearer guidance in our AI-integrated classrooms.
The most surprising revelation? Students are significantly more concerned about AI's impact than their teachers or administrators. Far from blindly embracing new technology, today's learners are actively seeking guidance on ethical AI use and worry about its effects on their critical thinking abilities. Meanwhile, educators find themselves in an impossible position—expected to guide students through this new landscape while lacking adequate training and institutional support themselves.
The confusion around academic integrity reveals how unprepared our educational systems are for this transformation. When administrators, teachers, and students can't even agree on what constitutes cheating with AI, how can we establish consistent policies? Only 45% of administrators consider using AI to write an entire assignment to be cheating, compared to 63% of students—a disconnect that undermines trust throughout the educational ecosystem.
As we navigate this complex frontier, one thing becomes clear: the future of education isn't about rejecting AI or blindly embracing it. It's about redefining what makes education valuable in an AI-powered world. If machines can handle information processing, we must double down on developing uniquely human capabilities—critical thinking, creativity, and empathy—skills that AI cannot replicate. The question isn't whether AI belongs in education, but how we preserve the human connection that makes learning truly transformative while leveraging AI's enormous potential.
Welcome to the Deep Dive, your shortcut to being truly well-informed. Today we're diving into something really central to our lives: how artificial intelligence is rapidly changing education. We're not just looking at the headlines. We want to unpack the actual experiences, the perspectives of students, educators, academic administrators. Our mission really is to understand how we can navigate this new frontier. How can we make sure AI genuinely enhances learning without, you know, sacrificing that essential human element?
Speaker 2: And to guide us, we've got a really powerful source. It's a comprehensive global research report from April 2025 by Turnitin, and this isn't like a small study. It's based on a survey of 3,500 respondents: we're talking 500 academic administrators, 500 educators and a big chunk, 2,500 students, and they're spread across Australia, India, Mexico, New Zealand, the UK, Ireland and the US. So it gives us this uniquely broad look at AI's real-world impact.
Speaker 1: Right, and our goal here isn't really to pick sides, is it? It's more about peeling back the layers of this, well, incredibly complex landscape. We want to uncover maybe some surprising insights, understand the core tensions and pinpoint where guidance and support are just desperately needed for you, the learner, and for, well, everyone in education. And straight away, something really jumped out at me. It kind of challenges common assumptions. Students are actually more concerned about AI's impact than educators or administrators are and, what's maybe even more striking, they're actively looking to educators for guidance. What do you make of that, just as an opener?
Speaker 2: It's a profound way to start, isn't it? It completely reframes who we might think is just jumping on the AI bandwagon. What's fascinating is the report actually found that a strong 78% of students, educators and admins feel positive about AI's impact. That's a huge wave of optimism about its potential.
Speaker 1: 78%? That's really high. But then you flip the coin, right? A staggering 95% of all participants believe AI is being misused somehow at their institution. So almost everyone sees the good, but almost everyone also sees the bad. It feels like a fundamental contradiction. Is this just, you know, excitement about the potential hitting the messy reality of now? Or is it something deeper, like ethical worries people haven't squared yet?
Speaker 2: I think you've hit the nail on the head there. It really suggests that while the vision for AI in education is grand and mostly optimistic, the actual doing of it, it's full of ambiguity, and that leads straight to that feeling of misuse, because the clear lines, the shared understanding, they just aren't there yet.
Speaker 1: And part of that ambiguity, it seems, comes from just a basic disagreement on what cheating even means with AI. The report shows a really significant difference in perception there, especially about using AI to write, like, a whole assignment.
Speaker 2: It's a critical point, yeah. Only 45% of academic administrators think using AI for the whole thing is cheating. Compare that to 55% of educators, and then students at 63%. They're more likely to see it as cheating than the people setting the policies.
Speaker 1: Wow, that's a huge gap. I mean, how can you have clear rules when the people making them, enforcing them and the students under them, how can they define the main problem so differently? It feels like everyone's got a different rule book.
Speaker 2: It really does. And this confusion, it isn't just about words, right? It undermines trust. It creates this environment where students are unsure what's okay and educators, well, they struggle to be consistent. It absolutely screams for clearer policies, for a shared understanding of how and when AI can be used. That's really step one to tackling this widespread unease and, frankly, distrust.
Speaker 1: That deep confusion around cheating definitely fuels the unease, and it sounds like that unease is part of this bigger overwhelm factor. The report talks about this feeling that AI just exploded and everyone's scrambling. They even call it, well, a sense of "we don't know what we don't know."
Speaker 2: That's a perfect way to put it. The sheer number of tools, the volume of information, it's overwhelming almost everyone. We're talking 80% of educators, 73% of students, 72% of administrators feeling this way. It's not some small issue, it's systemic, it's affecting pretty much everyone.
Speaker 1: And, like we mentioned, that student anxiety point is major here. Everyone feels overwhelmed, sure, but students seem particularly worried: 64% of students expressed worry about AI use. Compare that to 50% of educators, 41% of admins. It just flies in the face of the idea that students are blindly embracing AI. They're really concerned about their learning.
Speaker 2: And this widespread unease, especially from students, it links directly to a fundamental lack of clear guidance for the educators themselves. If educators aren't fully equipped or confident, well, that naturally limits how well they can use AI for students or the institution. There's a guidance gap, you know, and students are right in the middle of it.
Speaker 1: Which leads us straight to the educator's burden, doesn't it? Sounds like teachers are really at a crossroads. They're being asked to do a lot.
Speaker 2: That's the core challenge, isn't it? The report's clear: educators are seen as the key to helping students navigate AI, both for their studies now and for, you know, being AI-ready for work later. They're expected to be the guides on the ground in this new landscape.
Speaker 1: But there's this huge knowledge gap. The report says 50% of students admit they don't know how to get the most out of AI, and they look straight to their teachers for that help. Yet over half of educators, 55%, also say they lack the knowledge to use AI effectively for teaching, even for their own admin tasks. So if students are looking up and teachers are struggling too, where does the actual support come from?
Speaker 2: It really throws a spotlight on a systemic issue, doesn't it? A failure, perhaps, to provide the right professional development, a clear institutional strategy around AI. It's a major blind spot, and it impacts everything: curriculum, integrity, you name it. And it's not just about grades, it's about jobs. 90% of educators and 89% of admins believe AI readiness is essential for future careers. 70% of students agree. So this demand for AI skills makes the current knowledge gap a really critical problem. It needs urgent attention, really.
Speaker 1: What's also worrying is the lack of institutional support. 37% of educators said their institution just doesn't have the resources for them to use AI effectively, which I guess pushes them to find their own solutions outside the system.
Speaker 2: Exactly, and that can lead straight to inconsistency, unfairness across different classes, different schools. That inconsistency then feeds right back into academic integrity concerns, another core theme. Despite AI adoption booming, the guidelines for proper use are lagging. Addressing these risks is just crucial for maintaining academic integrity. It's not just about, you know, blatant copying. It's subtler stuff too, like students using AI for ideas without citing it, or refining work so much it's barely their own thinking anymore. The lines get really blurry when AI feels like an uncredited co-author, not just a tool. That's where integrity takes a hit.
Speaker 1: And it's not just about misuse, it's about the learning process itself. A key student worry really stands out: 59% are concerned that relying too much on AI could actually reduce critical thinking skills. And that's not just a small worry, is it? If students use AI to generate ideas or even drafts, are they skipping the hard mental work that actually builds those thinking skills? Is AI becoming a crutch?
Speaker 2: Precisely. That struggle, wrestling with complex ideas, building your own arguments, pulling different sources together, that's fundamental to critical thinking. If AI does the pre-digesting for them, the risk of, well, intellectual atrophy is real. It's less about the AI itself and more about how it's used, or not used, maybe. How do we make sure it enhances thinking, not replaces it? And this concern about critical thinking, it pops up across the board. Both secondary and higher education institutions reported the same top two challenges: academic integrity, and a lack of expertise and training on AI.
Speaker 1: So it's not just a college problem or a high school problem, it's systemic, affecting all levels. And they also flag the same top two risks: misinformation or misuse of AI, and this loss of innovation and critical thinking. That alignment, that widespread agreement on the core worries, it feels like an opportunity, maybe, for a unified approach.
Speaker 2: It does seem that way. The report backs this up: 57% of everyone surveyed sees AI as a threat to academic integrity. And look, if critical thinking isn't being taught effectively, if students aren't being prepared properly, it really calls into question the core purpose of education itself. The whole mission is potentially at stake.
Speaker 1: Okay, as we start to wrap this up, let's touch on the global picture briefly. The report mentioned some differences, right? And then maybe we can pivot to what the report suggests as the path forward. It's clearly not the same everywhere.
Speaker 2: That's right. The level of positivity about AI's impact, it varies quite a bit. India, for instance, reported 93% positivity, Mexico 85%. Much higher than the US at 69% or the UK and Ireland at 65%. Could be down to different exposure levels, policies, maybe even cultural views on new tech.
Speaker 1: Interesting. So the enthusiasm varies, but that feeling of being overwhelmed, the uncertainty, that sounds more universal, even if they're positive overall.
Speaker 2: That's spot on. The overwhelm is widespread. India actually had the highest concern level at 85%, despite the high positivity, and globally, half of students, 50%, just don't know how to get the most benefit from AI. That uncertainty is high in Mexico at 51%, India at 50%, the UK and Ireland at 47%. It really shows a global guidance gap.
Speaker 1: And institutions are definitely playing catch-up with strategy. Only 28% have fully integrated AI into their strategic plans. That seems really low; it suggests a lot of reaction, not much proaction, in this fast-moving area.
Speaker 2: It points to a significant lag, yes. The way forward, according to the report, really hinges on urgent, open communication and a real collaboration between everyone: students, educators, administrators. Plus, crucially, developing clear guidelines on acceptable AI use. Tailored guidelines too, because using AI for coursework is different from exams, which is different from revision. Right? Context is key.
Speaker 1: On a hopeful note, I saw that 33% of students in higher ed say they are involved in making new AI policies. That sounds really positive, like students are willing to step up and be part of finding solutions.
Speaker 2: It's a very positive sign, yeah. It suggests a collaborative future is possible if institutions open that door. And looking ahead, the expectation is huge: 92% of educators and 88% of students expect AI's role to expand significantly in the next two to three years. The future is definitely AI-infused, ready or not.
Speaker 1: Wow, okay. So let's just recap the core tension we've explored. There's this widespread positivity about AI's potential in education, but it's sharply contrasted by an equally widespread belief that it's being misused. And underneath it all, there's this significant knowledge gap, this resource gap, especially for students and teachers, plus just basic confusion about what misuse even means.
Speaker 2: And the responsibility for bridging that gap? It seems pretty clear: 86% of respondents agree it's up to institutions to educate students on how to use AI ethically and effectively. This isn't something you can just leave to chance or hope individuals figure out on their own.
Speaker 1: Absolutely, and one educator in our source put it so well: we need the human touch, always. The report notes that without enough human interaction, that guidance, students might just feel disengaged, shortchanged even. It's not just about the tech.
Speaker 2: It's about connection, critical thinking, human development. That's real learning. That human element is just paramount, and it leaves us with a big question, doesn't it? If AI can handle a lot of the basic information processing, which it clearly can, what are the new higher-order skills that will really mark out a well-educated person in the future, and how will schools and universities evolve to actually prioritize and teach those essential human skills? Things like critical thinking, creativity, empathy, the stuff AI can't replicate. How do we make sure those remain central to what it means to be truly educated and capable in this new AI-powered world? It's definitely something to think about as we look ahead.