Incongruent
Experimental podcast edited by Stephen King, co-founder of KK Studio and founder of the Steve K. King AI Academy.
Correspondence email: steve@kk-stud.io.
Beyond Algorithms: Humanity's Role in AI Development - Alistair Lowe-Norris
Spill the tea - we want to hear from you!
Alistair Lowe-Norris, Chief Responsible AI Officer at Iridius, discusses how AI can be a positive force for change when built on ethical foundations, safety measures, and benefits for both people and planet.
• Recipient of the President's Lifetime Achievement Award for his volunteer work addressing food insecurity
• Three pillars of responsible AI: ethical AI that follows guidelines, safe AI that minimizes risks, and beneficial AI for people and planet
• Growing complexity of AI regulation with hundreds of non-harmonized regulations worldwide
• Transition from humans operating systems to supervising autonomous AI agents
• Corporate regulation of AI through procurement standards (Netflix, Microsoft examples)
• Critical need for AI governance frameworks within organizations
• Change management principles for navigating technological transformation
• Career opportunities in responsible AI as companies seek to navigate complex regulatory landscapes
• Importance of AI literacy and critical thinking skills when using AI tools
If you're interested in learning more about responsible AI, there are many open classes and resources available. This is the perfect time to get involved as AI governance becomes increasingly crucial in the coming years.
Stephen King
And we're back, and welcome to the next and latest episode of The Incongruent in our AI themes, season 5. I'm joined today by a new voice. Who is my new voice today?
Radhika Mathur
Hi, I'm Radhika Mathur.
Stephen King
Lovely to meet you, Radhika. When did you graduate? It was about 500 years ago, wasn't it?
Radhika Mathur
No. So I graduated from my undergrad in 2020, which actually does feel like 500 years ago. It was right in the middle of COVID. But I recently also completed my master's, so two graduations done.
Stephen King
So you're now getting close to being as old as I am. Does that scare you? Tell me, what have we been talking about today? Who did we meet?
Radhika Mathur
So we met Alistair Lowe-Norris, who is the Chief Responsible AI Officer for Iridius, and we had the most inspiring and insightful conversation about responsible AI driving positive change, and AI being a positive force for change for people and the environment in general.
Stephen King
Good, and he also recommended that we become specialists in a particular field, and gave us some very, very good career advice as well, didn't he?
Radhika Mathur
Yes, he did.
Stephen King
Good. So buckle up everybody, because this is a really good conversation and we're ready to roll. Here we go.
Radhika Mathur
Hi, and welcome to a new episode of The Incongruent. Joining us today is none other than Alistair Lowe-Norris, Chief Responsible AI Officer at Iridius.
He is a visionary leader in responsible AI, organizational transformation and change management. As a board advisor and CEO coach, he guides leaders on responsible AI adoption, standards and regulatory compliance, and enterprise-wide transformations. He served as the Chief Change Officer for Microsoft, where he held direct accountability for driving company-wide transformations and responsible AI initiatives, building multi-billion dollar organizations during his career. And in 2023, he was honored with the President's Lifetime Achievement Award by President Joe Biden, the highest level of recognition for volunteer service in the United States. Welcome to the broadcast. Before we get into the technology, our listeners would love to know more about you, and I think the perfect place to start is that award. Tell us how your life path led you to this unique recognition, Alistair.
Alistair Lowe-Norris
Thanks for inviting me. I appreciate the chance to chat. I mean, for me, really, this isn't about me. This is about others, and I think that's what voluntary service is really all about. I've always been passionate about helping other people, throughout my entire life and my career. One of the biggest areas I focus on is food insecurity and hunger around the world, and especially, now that I'm living in the Seattle area, around the local area here as well. So I spend a lot of time at places that are responsible for taking food that would otherwise be going to a landfill, repurposing it, and bringing it to food banks for people to bring home to their table. And I think food insecurity is crucial no matter where you live; whether this is across the third world or the first world, there are people who are hungry everywhere. This is one of those things where I have a huge passion for trying to make sure that everyone not only has enough to eat, but also has the shelter that they need, and can actually have those things to feel comfortable with their life and their environment.
Stephen King
And maybe tell us a little bit more about how you got into that. Why was that so interesting to you? Why that particular field? Because there are many, many issues in the world. Homelessness, poverty and food shortages are three very, very important ones; they're UN Sustainable Development Goals. But what was it that made you choose these particular topics to pursue?
Alistair Lowe-Norris
I think it really stemmed from my university time in the UK. I was at the University of Leicester, and I spent a lot of time there realizing that there was a lot more poverty around than I had necessarily been exposed to before. I started helping people with soup kitchens and things locally, organized student union activities around that area, and then grew out from there to go out to different countries, help, and see what was happening on the ground. So for me, it's something that started a long time ago, really, from when I was in my teens all the way through. I look back and I think this is something that impacted my life quite significantly in terms of my passion for helping. I felt that it's not enough just to have enough yourself; you have to ensure that other people do. So it started at university, and then it grew into the companies that I ran and that I set up. I made sure that every single time we got together for an offsite or an activity, or my team were in town, we would get out and do something, whether it was going around and helping paint the walls of an old people's home or doing the gardening for something locally. Or, if we were traveling to a different location as part of a company event, what could we do locally to impact the community rather than just turning up for an event and enjoying it? There has to be more to it, and I try to advocate and drive for that. I think it's just one of those things where education and foundational things are crucial for us as a planet to move forward.
Stephen King
Super.
Radhika
Yeah, that's really interesting. It sounds like all those values are just at the core of everything you do currently. My next question is: you have previously held senior positions at Microsoft and now play a critical role at Iridius. Could you tell us more about what it is you do today, and also the mission that Iridius was created to achieve?
Alistair Lowe-Norris
Sure. I mean, Iridius is a platform that enables enterprises to build and deploy AI-driven systems, especially agentic AI, with things like security, compliance and governance, transforming what used to take months into a process that can now happen in days. The focus really is on delivering AI that's not only fast and powerful, but also trustworthy and safe. So for us, as we build that out, responsible AI has got to be at the heart of this. We have a series of pillars and principles around this that are deeply embedded in our approach, so that we can make sure that if we're delivering solutions to enterprise companies, they're enterprise grade, not least in the privacy and security that you would expect,
but also around responsible AI. The role that I have now is heavily focused on this area. At the company, we really look at three pillars of responsible AI. One, we want to make sure that AI is ethical. It has to follow specific guidelines or principles to ensure that its actions are fair, transparent, accountable and explainable, those sorts of things. This is about making sure that it's adhering to established rules, and we want to make sure that the AI abides by those rules. Two, at the same time, we want to make sure that AI is safe. It has to minimize risks and harms to people and society by being secure, reliable and robust, and by respecting data protection and privacy principles. And the third pillar is really that we want AI to be good for people and good for the planet. That goes beyond those sorts of rules, aiming to benefit society broadly and individuals. How can it improve healthcare? How can it improve education? How do we create positive impacts with AI? And I know, Radhika, that you're very focused on the Sustainable Development Goals from the UN, and this is the same sort of thing. As AI continues to expand in terms of the amount of power it needs, if we're going to be good citizens and good stewards, then we have to make sure that environmental sustainability is crucially part of that as we move forward. That means that as we build out solutions, and as companies use our platform to build out solutions, we want to make sure that AI development and deployment considers those environmental impacts, promoting practices that minimize carbon footprint and waste, optimize efficiency, and so on. So as Chief Responsible AI Officer, my job is to look at those three things: AI is ethical, AI is safe, AI is good for people and the planet.
How do I bring that into the work that we do? And how do I ensure that the platform and the products that we build have that at their core so that companies can trust that the AI that they are building is maintained with the highest standards of responsible AI?
Stephen King
That's excellent. So let's dig into this just a little bit more. Now, from my personal experience, not all companies are at the stage where they have the budget, or management teams with the capacity, to understand all of the AI changes and all the transformations that are coming. They're being overwhelmed by it, but they're still wanting the productivity gains. They're not willing to invest so much in the responsible side that you're talking about, but they want their staff to be delivering the same kind of performance as you would expect. And so now we're getting into terms such as bring your own AI, which will soon be bring your own agent, or having zero trust in all forms of AI. Those are two different approaches, and I've seen organizations go each way. How does Iridius, it's almost like your American aluminum and aluminium type of a name there, see those two concepts, zero trust or bring your own agent, and how might you fit with either of those?
Alistair Lowe-Norris
Sure, I think about it this way. If we go back to 2024, or even back to 2023, let's go back a couple of years, people were starting to build with large language models, LLMs. It was really the first time that they were coming into enterprises. It was all new and exciting, although AI has been around for a very long while, as has machine learning, and machine learning is not going away either, but it started being used a lot more significantly. You get into 2024, and now we're talking about the first time that you really hear about copilots of any sort. You have enterprise versions of ChatGPT. So now suddenly people are concerned: hang on a minute, everyone's typing stuff into ChatGPT or Perplexity or Mistral or whatever the models are, which is great for those companies, because then they can train using all of the inputs that are now being provided. So people are going, hey, you know what I'm going to do? I'm going to send all of my marketing data directly into ChatGPT and then ask it if it could rewrite it for me. And at that point you're passing all of your company data out of your firewall into somebody else's system. So people started to say, okay, what if we bring ChatGPT, or large language models, foundation models, behind the firewall, and then we can do that. So that all works well. But it's starting to change now as we get into 2025, which is really the year of the agent. And I think there's a big fundamental shift when you talk about zero trust and about bring your own agent.
So now organizations are thinking about building agents. They want to create small autonomous pieces of software that are capable of doing tasks and that have goals. The more complex of those you can give a compass and a map to and say, head off in that direction, and I don't care if there are dragons in the way, it's entirely up to you to figure out a path. Others you're really bounding down, saying: here are the inputs coming in, this is what I want you to think about and how to work on it, and here are the outputs I'm looking for. It's a little bit more deterministic. But we're moving from a point where humans were operating things to humans being supervisors. Previously I had to go into a factory and I was moving all of the dials on all of the different older machines. Now I'm more of a supervisor: I have some agent that is manipulating all those dials, and when there are problems, it's going to escalate them to me.

And that's where we're getting to with agents this year. But the way that we think about agents, and the way that agents are being proposed out there, is very, very different. A lot of the time there are solutions out there right now that build agents for you, and I would call them kitchen-sink-type agents, where you're throwing in everything you possibly can, so that now this agent is capable of building you a new marketing plan at the same time as redoing your company finances, figuring out exactly what your next product should be, and sending out the dry cleaning. It's everything possible. So when you bring those agents along, you're a company: you're either building your own AI of some sort, or you're buying AI from somebody else that you're going to use, or you could be selling services or products that include AI, whether that's stuff you built or stuff you bought.
And at that point, when you're thinking about that level of AI, the question from a zero-trust perspective is: how much are you trusting that the AI is not going to make mistakes? How much hallucination are you OK with? And when you're bringing your own agent, it's a bit like bringing your own laptop. There are security protocols out there that say that if I bring my own laptop to a company and connect it to the Wi-Fi, as long as the company allows it, that's fine, because what it's going to do is impose a large number of restrictions on what that laptop can and can't access and what you as a user can and can't access. It's the same thing with agents. The bigger you make them, the more complex you make them, the more chance there is of some sort of failure or deviation, especially because the use of these foundation models is very nascent at the moment.

Then the question is around authentication and authorization. How much do you trust, and where do you put that boundary? An agent, or the agents in an AI system, have a set of entitlements. What is it allowed to do? What are the boundaries around its behavior? How exactly is that going to be governed? And how do I know what it is doing? As I step away as an operator, as a human, and start becoming more of a supervisor, all of those different agents that are joined together in whatever series of workflows I'm using, how do I know that each one individually can be trusted? And when you orchestrate them into a system, how do I know that the system can be trusted? Just because all of the 51,000 agents in this system are individually trusted doesn't mean that when you put them all together you get transitive trust that all of them are working. So there's a level of complexity and concern around zero trust and bring your own agent that companies are just starting to get their heads around. And it's about to get a lot worse than it currently is. Satya Nadella, CEO of Microsoft, stands on stage talking about the fact that you can have potentially dozens of agents, and companies we're talking to, a bank, for example, with 15 million accounts, may well have hundreds of millions of agents running. Suddenly we're at a scale problem that is very, very different.
And if I add one last thing here: most of the time your CRM system, when you install it, doesn't make things up, but AI will. If it wants to please you, it'll definitely do that. That hallucination isn't all bad. If you want to come up with new creative ways of doing things, you want it to hallucinate. If you and I, Stephen or Radhika, are trying to brainstorm some new ideas, we are hallucinating, in the way that AI thinks. So you don't want to block that, but you do want to give it some guardrails.
And I think that's the whole concept here that really has to be thought through a lot, especially in the larger enterprises.
Stephen King
Now, when we talk about these guardrails, this takes us into the complex issue of government regulation and national regulation, whether it's the US, the UK or the EU. What do you see as being the most serious lapses or problems that we're going to see ahead with regard to these agentic tools that we're noticing widely deployed?
Alistair Lowe-Norris
I think the biggest concern right now is a lack of governance. I think that there are people inside organizations who are using AI that is not being controlled. You wouldn't want somebody to be able to install loads and loads of software inside an organization and just use it without somebody ensuring that it met the procurement guidelines, and that it met security and privacy guidelines. And there's a lot of AI that is being built internally and being used, shadow AI, where people are using AI that they shouldn't be, or at least it's happening under the table. So the biggest problem is governance. The EU AI Act is probably the most robust piece of legislation out there: 500 pages, very, very strong. And beyond the EU, you're over in Europe, I'm over in America at the moment, states like Colorado and Utah have some pretty strong protections that have just been put in. Texas just did the same.
Stephen King
Yeah.
Alistair Lowe-Norris
California has a few and the list is growing. So when you have a federated collection of states like America does, each one of those starts adding this in. You have whatever the federal government, from the Trump White House, is putting in place as well. And so you end up with global multinational organizations that need to ensure that they are applying the regulations correctly inside their organization. So if I have a company that wants to do business in Singapore, I need to abide by what Singapore tells us is a requirement. And every single one of these regulations, of which there are hundreds around the world, is not harmonised. So when somebody says you need to be transparent, what Singapore means by transparency is not what Australia means, is not what the UK means, is not what the EU means. Harmonising these is an absolute nightmare. And I guess, as part of that as well, regulations are great,
Stephen King
Yeah.
Alistair Lowe-Norris
but there are also standards out there. There are over 200 or 300 AI standards from different organizations right now, and each of those inherits from some other ones. All of us have been in academia in different forms, and I think one of the most common, well-known standards is called ISO 42001. It tells you how you govern AI. It's about 70 pages long, and it gives you about 20 to 30 things you really need to do.
Stephen King
Yeah.
Alistair Lowe-Norris
What you don't really realize when you read through this, unless you go into the detail, is that it inherits from about 70 other documents. So I've got one 70-page document that costs $250 or 250 euros to buy, and it inherits from about 70 other documents that all need to be used if you want to be compliant. So now I need to buy another 69 or 70 documents at 250 euros each to go and do that, each of which refers to white papers and academic papers. And the level of complexity, this sort of AI hairball of regulations and standards and so on, is an absolute nightmare to try and figure out.
Stephen King
That makes me think I want to become an AI lawyer. Sounds like I'd make a fortune. So, bringing what you're saying into my field, which is predominantly education: I'm looking at the state of Utah, which has made it mandatory for schools to teach AI fluency of some description. I'm now looking at the European Union, which considers AI to be a high-risk tool in the field of education, particularly within
Alistair Lowe-Norris
Absolutely.
Stephen King
marking, because it has a permanent impact on your life. And then the UK, which has a very wishy-washy approach to it all: here is some guidance. So I can absolutely, 100 %, see where this is going. Let's get to something we talked a little bit about before: Netflix as a corporate governance structure, as a corporate regulator. Because this is another stream: we have the nations, we have the states, but then there are these big content aggregators that have responsibility themselves. And Netflix just last week, or the week before, issued their own regulations. Is that a trend that we're likely to see continue?
Alistair Lowe-Norris
Yeah, I think so. I mean, two days ago, I think, as we were recording this, Netflix clarified how creators can use generative AI. So what are they trying to do? They're trying to put their own standards around what is allowed, what needs approval, and how to keep those final cuts safe and compliant. This is all about responsible AI. It's basically saying good governance is important, and we have standards, and when we procure or purchase content, we want to ensure that it meets the requirements that we are laying down, because we do not want to break into jail as a company. Microsoft did the same thing. Going back to when I was involved with Microsoft, I left just over three years ago, the standards we were putting in place around responsible AI were focused on exactly this area: how do we know about what we're buying, or what we're building ourselves, or what we're selling? And in Netflix's case,
it's buying content which it is then selling to consumers through a subscription. So it is liable, and it doesn't want to be liable. To your point about armies of lawyers, this is exactly the case. So the easiest solution, and Microsoft takes the same approach, is to say: this is very, very hard, so we're putting the burden on the people making the content. Microsoft's point is: you are a company that wants to sell Microsoft a service, for example. You can deliver that service, and probably have been for decades if you're a supplier, without using AI. But if you want to deliver that service using AI, whether because you want to make better images using an image-processing pipeline, or you want to create copy in a certain way, or whatever it is that you're doing, if your people are going to be using AI to deliver a service, then Microsoft has a huge number of standards around AI that you need to meet. You need to do things like AI red teaming, black-hat and white-hat attacks on the infrastructure, to ensure that whatever you're doing is right. So marketing companies that come along and want to do just a little bit of work here and there with AI have a huge compliance burden that hits them like a ton of bricks. And it's the same thing with Netflix. What Netflix is saying is: if you are coming to us with a piece of content, we clearly need to know what you got approval to do
and who owns the IP. So what Netflix is doing is saying: here's what is allowed and what isn't allowed; here's what you have to declare upfront, and that includes the transparency around all of this. Netflix is very clearly laying it out and saying: this is our standard that you now have to meet. I'm surprised it took this long, but this is absolutely what every company will and should be doing. Netflix buys content; if we were talking about a company like Microsoft, it buys goods and services. Every single company out there will have a responsible AI policy of some sort that is going to push people to do the same thing with whatever they're procuring. So procurement from enterprises worldwide is bearing down heavily, heavily on this.
Radhika
Previously, you shed some light on the dynamic nature of regulations and of AI itself. Could you tell us how you ensure that your business is flexible in responding to these dramatic changes?
Alistair Lowe-Norris
Sure. I mean, it's an absolute nightmare for any organization to understand the AI regulations that are out there, and I say that as a person whose job it is to understand them. I struggle, and this is my specialty area. So it's not easy. You have to have a way of understanding what the regulations are for every geography that you're going to work in, and how they apply to you. You also have to understand every standard and where it applies, because it goes beyond the regulations and into the standards. And here's why, Radhika: a regulation is really a restriction on what you can and can't do. It's basically the criteria that you must meet. A standard, an independent standard from a body like ISO, the International Organization for Standardization, comes along and says: hey, if you want to do AI well, you should follow these policies, and you should make sure you do this. Same with security. When you have a Wi-Fi network, it would be a really good idea if it wasn't open to everybody to watch the traffic going by. You should encrypt it, because encryption is good, and here's a series of standards around encryption. So for security, that means that everybody now focuses on encryption at rest, encryption in transit, and encryption on disk. You do those three, everything's great. There are standards around it, and there are many, many pages of documentation. If you want to be secure as an organization, you need to say: okay, we are going to focus on ensuring that every single part of this security applies to our organization. We don't want to be 60 % secure and leave 40 % open; we want to be 100 % secure.

What people are struggling with right now is: how do you make sure you're 100 % compliant with AI? Because if you want it to be unbiased, there are reams of documentation on how not to be biased, but somebody has to take ownership of that and ensure that whatever is being done isn't biased. And so I think the answer to all of this comes back to the same thing, Radhika, over and over: there has to be an AI charter.
Alistair Lowe-Norris
There has to be responsible AI governance. There has to be an organizational committee that guides AI inside an organization, with heavy sponsorship at the top. And there have to be teams of people whose job it is to ensure that this is being used effectively. In the largest organizations, this is what you have to do.
Radhika
Yeah, that's actually a very interesting point, because, like you said, there's so much to responsible AI, and it's crucial to eliminate bias. So: you are a leading player in change management, and many of our listeners are just leaving university and are yet to experience change of the magnitude that you have led. What advice or top tips can you offer them to survive and thrive in the turbulent future that lies ahead of them?
Alistair Lowe-Norris
Sure, so there's a couple of things here. First of all, I actually think that the youngest generation, I have young kids at the moment, one who's 13 and one who's 10, their ability to handle constant and ongoing change is going to be much better in some ways than it was with previous generations like mine. So I'm encouraged that the grit and determination, and the resilience, to get through this is going to come along anyway. When you think about change management, we're talking about human change management, and every human goes through change of some sort. It doesn't have to be significant. It doesn't have to be somebody getting married or getting a divorce or a death in the family, although those things happen; change of all sorts is very human. If you want to go to university, you have to make a choice. What's the change? Which university are you going to go to? Which college is the right one? What about colleges in other countries? These are all decisions that have to be made, but ultimately they are changes. And when you're talking about the younger generation, what we're saying here is: if you have five or six people and you want them to change, you can grab them all, go down the pub, take them to a park, have a walk, talk to them, and you can help them through the change. It's a small number. When you get into tens, hundreds, thousands, tens of thousands of people to change, you can't do that sort of local work anymore, certainly not at that level. So there needs to be some sort of organizational way of managing this at the scale of the enterprise, and that's what enterprise change management, or organizational change management, is about.

The great thing here is that if you think about project management as a comparative discipline, project management has a structured organization called the Project Management Institute, or PMI, which was created in 1969, and project management has been around for more than 100 years. Its principles are very well known and very well understood. Change management, although it's been talked about as a discipline for over 100 years, is very nascent. When we created the Association of Change Management Professionals, or the ACMP, we only did it in 2009, 2010. It's very new. So the discipline is new.
Alistair Lowe-Norris
People are going, oh yes, I've got 50 years of change management. It's a bit like, really? That would be impressive, because there aren't that many people with that much experience in it. And I think for the younger generation it's easier to connect into. I encourage anyone who goes to a college or university, comes out, and goes into the workforce to gravitate towards things like project management and have a look at that, because it's well understood. Change management is a huge thing, because you're talking about the psychological and behavioral change that's necessary to nudge humans in a direction that they may not want to go and that they will have resistance against. Resistance is natural, and there's a way of helping people through it. The structured process is published on websites, and you can use ChatGPT and others to guide you in this, but it's a discipline, a professional discipline, that has a huge amount of value and is going to come more and more to the forefront of all of these changes that are happening. As change is constant and getting more and more complex, the need for change management is not only ever-present, it's going to be needed significantly more.
Stephen King
I'm gonna... I'm sorry. Go ahead, Radhika.
Radhika
Yeah, I was just saying that it was interesting what you said about being open to change, because when we were talking about regulation earlier, you brought up the fact that whereas one country might be very hesitant about AI and see it as sort of an evil, another country is more open to it and is teaching students in schools how to use it to their advantage. And I think that's really essential with the younger generation coming up now.
Alistair Lowe-Norris
I mean, I think, Radhika, when you think about this, and we're all involved with education to some extent here, using ChatGPT in the classroom is a nicely contentious topic to go down rat holes on: how do you adjust the way that you teach to ensure that this is happening? But I think there's a different conversation to have here. ChatGPT and the others are tools; AI systems are tools, much like calculators, which teachers didn't want in the classroom because they would stop people being able to do mental arithmetic. Having it as a tool is important. If I were going to guide anyone, and what I guide my kids and others on, it's two things. One, AI is constantly changing, so it doesn't matter how much of it you know; you need to continue to use it day by day, because every model brings new things, and this technology is moving in a very rapid direction. Next year there are going to be household robots and a number of things coming out that head in a fundamentally different direction from where we have been over the past few years. Two, it's not just using it; it's knowing how to prompt it, knowing how to give it enough guidance that you can get the answers you're seeking out of it in a structured way, and not giving it so much information that it does a very poor job of coming back to you. Understanding how to guide the AI to give you what you want is absolutely crucial. But that clearly comes with a comparable set of knowledge, which is around discernment. You have to be able to look at what is being provided and use critical thinking skills to understand whether it is right, whether it's on message, whether it's giving you what you're looking for, or whether it's answering you because it thinks it knows what you want to hear. So there's a combination there with being able to ask the right questions. And I know, Radhika, that as a former journalist and a fierce debater, you're very strong on asking the right questions to pull things out. These are the skills that are going to be crucial for organizations. I mean, go back just a few years: Netflix was paying a million dollars a year for prompt engineers. For these sorts of things, the ability to write and investigate and gather that information and nail it is huge, and then being able to understand whether what you're being given is good enough to use, and then to tweak your approach, is huge.
Stephen King
I totally agree with that. I've been mulling over in my head that for every AI tool out there, you need two humans to review its content. I'm serious, because when I'm using AI to produce my content, I am so in love with it. It's partially my input and partially my output; because I put so much into it, I am so positively biased towards the AI's output that I can't see all the mistakes. And it's only when I share my output with someone else and they go...
Alistair Lowe-Norris
Hahaha!
Stephen King
"What is this?" that I suddenly go, my God, I would never have missed that if it had been my own mistake. But because I helped produce it... it's crazy. We're coming, sorry, we're coming to the end of this really inspiring conversation, and I really enjoyed it; I'm sure Radhika has as well. You've added so much value here. And I really wanted just to ask: you have produced so much great, fantastic content, you've written several technology books, handbooks on
Alistair Lowe-Norris
I agree.
Stephen King
on different Microsoft technologies, I believe, you've created so much valuable knowledge, and you've got your own podcast. What would you advise someone who wishes to follow Alistair's path and become just like you? If we were the hobbits coming out of our little home, what's the first step, so we keep walking onwards?
Alistair Lowe-Norris
I will tell you that this is the right question at the right time. Responsible AI is nascent; it's really at its birth. With the idea of AI ethics and AI governance, nobody really knows exactly how to do it well right now, because the models, and what's happening at the foundation level with the foundation models, are changing so fundamentally every few months that getting your arms around this is very difficult.
So I think there's an opportunity here for people to really understand what's going on with responsible AI and find a niche that interests them. Maybe this is about environmental sustainability, and you're passionate about AI for good, good for people, good for the planet: how do you drive that? Maybe this is about making better legislation, you were talking about armies of lawyers, maybe that's something you're interested in. Maybe it's advocacy. Maybe it's about the fact that parts of society are being disenfranchised, because people are putting AI out there that is stopping people getting jobs because they haven't thought through the bias side of it, or that is assuming people in jail are never going to be good citizens and so predicts higher recidivism rates than it should. There are different avenues that you can take in this, and it is so nascent that no one listening to this podcast is behind, because I guarantee you, people out there really don't know much in this area. It is so new that anyone is able to step into this and become a leader very quickly.

If I had to give one piece of guidance, and I can give both of you this and you can share it with your audience: I gave a presentation recently to the Association of Change Management Professionals about responsible AI, how do you drive that as a change management professional? One of the things I gave them was this. You don't know anything about AI right now? That's fine. Or you know a little bit about it. Here's a long prompt that allows you to very quickly come up with a training curriculum for yourself over the course of the next six weeks, one that will very quickly tell you what's happening in the current market and give you an opportunity to learn about those areas that you don't know. So turn things from unknown unknowns into known unknowns at that point, and move into it. And I think
that then allows you to really move forward pretty quickly in this area. There are enough open classes at the Sorbonne and other AI ethics classes around that are absolutely perfect for dipping your toe into this. But you're not behind; nobody here is behind. This is the right time, right now. Even two years from now will be the right time. It's a huge area, and AI governance is going to become so crucial over the coming years. It's going to become a huge discipline. So this is the right time, for sure.
Stephen King
You know, we mentioned the UN Sustainable Development Goals. I'm now inspired to come up with a whole series of AI development goals, because you can really tie it into the subjects: no poverty, zero hunger, as we mentioned, jobs, power, partnerships. It's every single one of those 17 goals. You just put AI at the front of it, and suddenly you've got...
Alistair Lowe-Norris
Exactly. Well, actually, the UN has done that. All joking aside, the UN has decided that what's missing in this is more recommendations and standards and guidance. So they have spun up a new approach, which will not be harmonized with any of the other ones out there, because why would you want to do that? I'm being a bit facetious to make a point, but either way, the UN is now moving heavily into this area. They have been for a couple of years now and will absolutely keep moving. There is so much out there; it's very, very difficult to get your arms around all of this. So anyone who thinks that they can help: you are going to be the people who really make a difference. When there's an AI gold rush, the people making the money are the people selling the picks and the shovels. And AI governance around all of this, and ensuring that companies don't break into jail, is exactly that. For large enterprises subject to the EU AI Act, the fine if you do something wrong is seven and a half percent of global revenue. That's a large number. You don't want to pay a fine of seven and a half percent of global annual revenue. So any way you can stop that happening is absolutely going to be crucial.
Stephen King
That's amazing. Thank you very much, Alistair. Radhika, would you like to close us off?
Radhika
Yes. Thank you for joining us today, Alistair, and thank you everyone for tuning in. Please subscribe, like, and comment or rate if that's an option. Bye for now, and join us on the next episode.