Incongruent

AI, Classrooms, and Real Skills: Yara Alatrach

The Incongruables Season 5 Episode 9


What if AI literacy wasn’t about prompts or platforms, but about thinking that endures? We sat down with AI strategist and educator Yara Alatrach to unpack how schools can move past the hype and build true fluency—skills that help students question systems, evaluate data, spot bias, and decide when human judgment belongs in the loop.

Yara’s journey from reservoir engineering to consulting and AI leadership at Microsoft gives her a rare vantage point: end users, tech teams, and leaders speak different languages. She explains how that gap shows up in classrooms and why a cross-curricular, teacher-friendly approach matters. We get into the UAE’s bold AI mandate, the idea of treating technology as nation building, and what it looks like to design a weekly, one-hour curriculum that fits real schedules while raising the bar on critical thinking.

We also tackle the hard line between education and product training. Yara makes the case for vendor-agnostic AI education that still welcomes public–private collaboration, so students learn concepts they can transfer across platforms—whether that’s Azure, Google, or open-source stacks. From embedding ethics in every lesson to preparing 10-year-olds for a 2035 workforce, this conversation maps the practical path from curiosity to confidence.

If you care about edtech that empowers teachers, AI literacy that sticks, and a future-ready generation that can frame problems and ask better questions, this one’s for you. Follow the show, share with a colleague, and leave a review to tell us how your school is approaching AI—and what you want us to explore next.

Links:

https://www.ednas.academy/


Stephen King:

We're back for yet another episode of the Incongruent, and joining me once again is the OG, the G-O-A-T, Arjun Radeesh.

Arjun Radeesh:

Hello, hello, hello. How's it going?

Stephen King:

Everything is going phenomenally well. Right, so today we're talking education and AI edtech, which is a very controversial and complex topic. Who did we speak to today?

Arjun Radeesh:

So we spoke today to Yara Alatrach. She works with Microsoft, and she's also one of the co-founders of EDNAS, an education platform committed to equipping KS1 to KS3 students with the skills needed to thrive in an AI-powered world.

Stephen King:

It was a really, really interesting conversation. And EDNAS is here in the UAE with you, correct?

Arjun Radeesh:

Yes, it is an Emirati product.

Stephen King:

EDNAS is inspired by the great policies and the direction being taken there on AI across the entire country, and in education in particular. So here is a wonderful, wonderful conversation. I hope you're ready for it. Here we go.

Arjun Radeesh:

Hello and welcome to a brand new episode of Incongruent. Today we are joined by Yara Alatrach, an AI and digital transformation professional with over 10 years of professional experience across technology, consulting, and energy. She began her career as an engineer at ADNOC before moving into business management and AI strategy roles at Boston Consulting Group. Currently she leads Europe, Middle East, and Asia AI strategy for energy and resources as an industry director at Microsoft, where she helps governments and private-sector organizations design national AI transformation, adoption, and upskilling strategies. Alongside her role at Microsoft, she also has her own venture, EDNAS, where she oversees platform operations, curriculum localization, and delivery, ensuring that every module is both globally benchmarked and regionally relevant. With a Master of Science in Data Science and over three decades living in the UAE, she bridges product, education, and policy to make AI literacy accessible and scalable across Key Stage 1 to 3 and K-12 schools. So, Yara, welcome to the show. How has it been going?

Yara Alatrach:

Thank you, Arjun. I do apologize; that sounds like a very exhausting intro to have made across everything. Very excited to be participating in this podcast, and honestly, I love talking about AI in education, so I'm very excited to get started.

Arjun Radeesh:

You've moved from engineering at ADNOC to AI strategy at Microsoft to education with EDNAS. How has this cross-sector journey shaped how you see AI's role in society?

Yara Alatrach:

Yeah, it's actually surprisingly cross-curricular and cross-industry, because of my role and what I've done. When I started at ADNOC and worked as a reservoir engineer, I focused a lot on the operations perspective, and at one point I started using AI in my role, so I was an end user. When I moved into consulting, it was more focused on strategy, and when I moved to Microsoft, I elevated it a bit more with strategies for large enterprises like ADNOC and other oil and gas and energy companies. What I noticed, which was very interesting because I had that experience from a different stakeholder's perspective every single time, is that people engage and speak about AI completely differently depending on where they're standing. Everyone speaks about it in a different language, so a lot gets lost in translation. You have the end user speaking from their own perspective, and you have people from a tech company speaking in very deep tech conversations. A lot of the time, and this is where my main role is, I try to bridge that gap. And with AI over the last two years moving from hidden infrastructure to being front-facing, where everyone can engage with it in natural language, and with the mandate in the UAE to teach AI in education, we noticed that we need to shift how we teach AI in schools. It's no longer about passive introductions to common models; it has to become a bit more advanced. We need to look at it from a critical thinking perspective. That thought is what led to EDNAS. We want to focus on fluency, not on the hype of AI.
We want to give students and teachers the capability, as AI is introduced more into schools, to ask: What is it? What is it not? How do I use it? When do I use it? And if I'm using it, how can I be aware of the bias that may exist within these technologies? So EDNAS exists to turn that curiosity into confidence for both learners and teachers, so AI becomes more understandable, more usable, and more responsible in their hands. Because these students will be part of the next generation of the workforce, they need to understand, as end users, how to question the tools being provided or sold to them, and how to go beyond simply saying, "I want AI in my own operations or workflows." And from a leadership perspective, those who go on to lead their own companies, organizations, and processes will have a better understanding of how AI fits and where it should fit. It goes beyond the idea of just programming and coding, because AI is not just programming; that's the reality of it.

Arjun Radeesh:

Okay, that's quite an interesting story. Well, having lived in the UAE for 30 years, what unique opportunities or challenges do you see in this region's approach to AI adoption?

Yara Alatrach:

It's quite insane, but it's also, how do I say this, very UAE, because the growth the UAE has achieved over the last 30 years is insane across all the different industries. When they moved towards AI, it's quite interesting how they were one of the first countries to start that mandate. And I love how they see this: they don't see it as a challenge, they see it as an opportunity, and they're treating technology as nation building. The UAE is the hub now for most startups, competing with the likes of Silicon Valley, if not more. They're going beyond just tool adoption; they want to create the runway for responsible AI education. That's the opportunity they're leading with, and that's the message we're aligning ourselves with. With EDNAS, we're trying to align with the national priorities on AI and digital literacy. We're trying to support schools in growing critical thinkers who can become the future economy. And honestly, we look at it through a very simple lens: country needs first, and classroom practicality always. How can we shape the concepts in schools so students understand how AI works, how to design it, and why it matters, not just the literacy of how to prompt ChatGPT to get the research I need for this thesis or that topic?

Stephen King:

How are you helping teachers become more welcoming of, and champions for, this kind of literacy, this kind of pedagogy?

Yara Alatrach:

Yeah, so this is why at EDNAS we want it to be teachable by any teacher; you don't have to come from a computer science background. Our curriculum is designed to be cross-curricular, so it builds on what's happening at that year level. Let's say in Year 1 the focus is on certain topics; the curriculum builds on those, on how AI is being used in writing books, how AI is being used in specific industries, and so on. And we're not forcing AI tools into our curriculum either. What we're trying to do is work with the existing ecosystem of these schools, because even before the UAE mandate, a lot of schools were already embedding AI tools in some way, and kids use them on a daily basis. So we try to make sure it weaves into the classroom in a much more subtle manner, and we've also designed it to take one hour in the week to teach a simple concept and build on it across the year. It's understandable, of course, as you mentioned, that teachers are busy, and this topic can be daunting for people who don't interact with it daily. And one of the things we want to do is the training portion as well: when we train teachers on AI, we don't want to train them on what a neural network is or what an autoencoder is. We're trying to go past that.

Stephen King:

Yes, it can be very, very confusing. But I'm seeing this now as replacing home economics with computer economics, if you remember that particular course, because each year you're going to have to learn something different. And you mentioned the use of the phone, the smart TV, the Alexa: as the young person grows up, they're going to be interacting with all these different technologies. So I can see that. Arjun, what do we have next?

Arjun Radeesh:

So again, we're talking all about AI, and with AI there's also that thought about the ethics, the risks, the social impact, which comes into play. How do you balance teaching technical AI concepts with critical thinking about its ethics, risks, and social impact?

Yara Alatrach:

Yeah, so I think this goes back to what I was saying a bit earlier. We're not trying to focus on just the use; we're trying to look at these technologies from every single perspective. We believe the technical and the ethical should go hand in hand; they must grow together. Separating them will create blind spots, right? It's going to create passive adopters and passive users, and that's what we're trying to steer away from. From our perspective, we have our own AI competency framework. I'm not going to deep dive into it on a podcast; I think people would get bored. But in general, it focuses on systems: What are the systems we deal with? What data do we have to consider when we're designing something? How do we design it? How do we think about how it's accessible to people, and who might we be leaving out when we're designing this specific model? And communication as well. So students learn to build, test, and ask who benefits, who's being excluded, and what might still need a human call. That's a very controversial topic, of course, and I think no one has reached a consensus on whether we should go fully autonomous in certain things or keep a human in the loop. But the idea is to build that habit. It's not a single-lesson thought; it's embedded in every single lesson, in which they always analyze and ask: what data is driving this, who benefits, who's excluded, and how should our judgment still matter?

Arjun Radeesh:

Yes, there's one big question I've got to ask now: what skills or literacy do you think today's 10-year-olds will need when they enter the workforce 10 years down the line?

Yara Alatrach:

I think the true skills we're seeing now, among the people who are surviving this complete change and the acceleration of these technologies, belong to those who are adaptable, the people with critical thinking. Technology is evolving way too fast, and who knows what it's going to be in 10 years. The tools are going to change, but the mentality and the skill to understand and adapt to new technologies is what's going to endure and be more essential. I think there was even a study done by Microsoft on this, in which they said the focus is actually more about empowering people with the mentality of how to become more fluent when they speak about technologies, how to go beyond simple functionalities, and how to frame problems better by knowing how to ask better questions. This is one of the things I've experienced when I work with people: when we sit and talk about a specific solution, I tell them, don't think about the technology. I know I'm from Microsoft and I'm speaking to you, but I don't want you to speak to me from a technology perspective. Let's talk about how you would think about this problem, the questions you would like to ask, the outcome you're trying to achieve, and we'll weave the AI into it. That's how you become more native in thinking about a specific solution.

Stephen King:

Yeah, it's good that you brought that up, because our very last question is all about edtech's influence, and there is some negativity about the power edtech is exerting over teachers at the moment. You mentioned Microsoft, and I'm on the Microsoft AI fluency program, which is brilliant; I will happily say it is brilliant. But it does tailor me, it does guide me into the Azure foundry, right? And so in order for me to become literate in machine learning generally, I will then have to go and do the Google one, the Lambda one, and so many others. So what is the role of the private sector and big technology in supporting AI literacy without it becoming training rather than education? Because in education we have to be balanced, we have to be critical. With too much technology, all the private companies, you know, they have sales targets to hit. Where's the line?

Yara Alatrach:

I am going to put a very hard line on this. I do believe that any focus on AI in education should be agnostic; it should not be driven by a single vendor. But let's talk about realities: public-private partnerships are needed. Technology companies are leading AI innovation; they are the ones who know what's happening, what's the latest, what's on the roadmap. And by public, I mean here in education: systems, schools, governments, ministries. They're the ones trying to make sure that whatever generation we're bringing up is able to capture the knowledge that's happening in the world, and that needs to be dynamic, able to adapt to everything that's changing. So we need to leverage that and have open discussions, forums if possible, with tech companies. EDNAS, by design, is tech agnostic, for example. When we're teaching about AI, we're not referencing a specific brand or a specific tech company; we look at it from the core AI topic rather than anything else. And we're also not enforcing any AI tools, because we want to work with whatever the school has: some schools are using ChatGPT, some are using Gemini, and so on. And it doesn't matter. The reality is it doesn't matter what tool you're using, because what we're trying to teach is the concept itself: how to interact with it, how to assess it, and how to evaluate it. So we always welcome partners who would love to come and collaborate with real-world context, provide their tools, provide the resources, but like you said, in my opinion, it needs to be agnostic. It cannot be driven by being forced to only work with a single vendor.
Because the reality is you're going to graduate, you're going to go into the workforce, and you need to be able to jump between different companies. And I do have to say one last thing, I apologize, one last thing: I see this even from a company perspective, because when I work with companies, their approach now is "I don't want to be stuck with one vendor, I want to be agnostic, I want something interoperable," and that's the message we need, even in AI education.

Stephen King:

Right, I think that's been a wonderful final note. Actually, would you like to close us down for today? Thank you very much, Yara. It's been wonderful.

Yara Alatrach:

Thank you, Stephen.

Arjun Radeesh:

But yes, thank you so much, Yara, for coming on. And thank you so much for tuning in to another episode of Incongruent. Please do like, share, and subscribe, follow us on your favorite podcast platform, and send us some feedback.
