
Incongruent
Experimental podcast edited by Stephen King, senior lecturer in media at Middlesex University Dubai.
Correspondence email: stevekingindxb@gmail.com or s.king@mdx.ac.ae.
Incongruent
EU Parliament 2025: The Future of AI: Balancing Innovation with Responsibility
Spill the tea - we want to hear from you!
Generative AI is transforming technology and reshaping our world by creating human-like content at unprecedented scale and speed.
• EU ranks second globally in Gen AI academic publications but faces funding gaps for innovation
• Technical foundations include deep learning architectures, specialized processors, and massive datasets
• Gen AI systems face unique cybersecurity vulnerabilities like data poisoning and prompt injection
• Emerging trends include agentic AI making autonomous decisions and multimodal AI processing diverse data types
• Teachers are more exposed to AI-automatable tasks than 90% of workers across occupations
• Gen AI enables creation of convincing false content, threatening information integrity
• Digital commons face existential threats from AI crawlers and potential information pollution
• Data centers powering AI projected to consume growing portions of electricity by 2027
• Children need specific safeguards against manipulation and inappropriate AI-generated content
• Bias in training data perpetuates societal prejudices in AI outputs
• Privacy concerns include AI's ability to infer sensitive information from seemingly innocuous data
• Copyright challenges involve determining appropriate limitations for AI training
For more information on generative AI and its impacts, check out the comprehensive outlook report from the European Commission's Joint Research Center.
Welcome to the Deep Dive. Today, we are plunging headfirst into something that's not just changing technology, but really reshaping our world. Generative AI.
Speaker 2:Yeah, it's everywhere now, isn't it?
Speaker 1:Exactly. You've interacted with it, you've seen the outputs. You know it's transforming things. Yeah, at its heart, gen AI is about systems creating well human-like content text, images, code, you name it.
Speaker 2:But at a scale and speed we've just never seen before. It's a huge potential force for, you know, innovation, productivity, societal shifts.
Speaker 1:And to guide our deep dive into this really complex space, we're using a fantastic source. It's a comprehensive outlook report from the European Commission's Joint Research Center, the JRC.
Speaker 2:That's right, and what's great about this report, I think, is that it's not just tech specs. It's actually designed to inform policymakers, people working in digital education, justice across the board. It pulls together the latest science, expert insights, gives a really sort of nuanced picture.
Speaker 1:Absolutely, Because I mean, the potential is massive. We see that. But Gen EI also brings these significant challenges right. They're all interconnected to misinformation, job market shifts, privacy, big stuff, Definitely. So our mission today is to cut through that complexity. We want to pull out the insights you need to understand what's really important in this rapidly evolving landscape. Looking across tech, economy, society, policy, the whole picture.
Speaker 2:Exactly. This report is kind of a vital resource for understanding trends, anticipating what might be next. It doesn't pretend to have all the answers, but it brings the science right to the policy table.
Speaker 1:Okay, let's unpack this then when do we start? Maybe the foundations, the tech that actually makes Gen AI tick? What are the building blocks here?
Speaker 2:Yeah, good place to start. It really comes down to a few key things that have advanced together quite rapidly actually. On the software side, huge strides in deep learning architectures, the transformer model particularly.
Speaker 1:Right the transformer. That was a big deal for understanding language context.
Speaker 2:A massive deal. These algorithms are computationally intensive, sure, these algorithms are computationally intensive, sure, but they're what allows models to process context and generate stuff that's surprisingly coherent, relevant.
Speaker 1:But algorithms need power, right Serious power.
Speaker 2:Oh, absolutely. That's where the hardware comes in Specialized processors, gpus, tpus. They're indispensable. They provide the sheer computational muscle you need to train and run these absolutely massive models efficiently.
Speaker 1:And that muscle needs something to chew on Data.
Speaker 2:Mountains of it. Massive data sets are the well indispensable raw material. Combine those huge data sets with high performance computing, think supercomputers and faster connections like 5G.
Speaker 1:Right.
Speaker 2:And you get this environment where gen AI models can be developed and trained at just an unprecedented scale. The EU, for instance, is investing heavily in HPC and these gigafactories to build up that capacity.
Speaker 1:OK, so, given these foundations, where does the EU actually stand globally? I mean in terms of research, but also turning that research into, you know, real world innovation.
Speaker 2:So here's a key insight from the report, and it's a bit mixed. The picture is strong in research, definitely, but there are challenges turning that into market leadership.
Speaker 1:How strong is strong?
Speaker 2:The EU is actually second globally in academic publications on Gen AI, right after China, so that shows a really vibrant research base.
Speaker 1:Second worldwide. Wow, that's impressive.
Speaker 2:It is. But and this is the critical point the report highlights the EU faces significant funding gaps, especially compared to the US and China. Wow, the money, yeah. And that affects its ability to translate that research into innovation, into patents. Eu patent filings are growing fast, sure, but they still only represent about 2% of global filings. They lag way behind South Korea and the US.
Speaker 2:Okay, represent about 2% of global filings. They lag way behind South Korea and the US. Okay, so balancing this strong research base with closing that investment gap to really drive innovation, that's a major challenge.
Speaker 1:Okay, so we have the tech, the data, the hardware capacity is growing, but with systems this complex, how do we even begin to evaluate them, Make sure they're safe, reliable? It feels like uncharted territory.
Speaker 2:It absolutely is, and the report really emphasizes this critical need for what they call a science of evils or model metrology. Basically, we need standardized ways to measure both the capabilities and, crucially, the safety of these models, especially as they move into sensitive areas.
Speaker 1:A science of evaluation.
Speaker 2:Yeah.
Speaker 1:That really captures the challenge, doesn't it? What does that actually involve in practice?
Speaker 2:Well, it involves developing new methods for benchmarking both performance and safety. Adversarial testing, often called red teaming, is vital.
Speaker 1:Red teaming so like deliberately trying to break it.
Speaker 2:Yeah, basically pushing it to find weaknesses, trying to make it fail or produce undesirable outputs. And human evaluation is still incredibly important, having experts or just representative users test the systems out.
Speaker 1:But here's where it gets really interesting. What happens when the AI's abilities start to surpass our own understanding? How do you evaluate something that's operating at a superhuman level?
Speaker 2:That's a huge future challenge. This concept of superhuman evaluation is emerging precisely because of that. We need ways to assess capabilities, safety, issues that humans might not even be able to fully perceive or understand on their own. It's a critical area for ongoing research, definitely.
Speaker 1:Okay, shifting focus just slightly. What about cybersecurity? J&ai systems handle massive amounts of data. They're incredibly complex. They must present new kinds of vulnerabilities, right.
Speaker 2:They absolutely do. They're vulnerable not just to, you know, the standard cyber threats we already know about, but also to threats specific to AI systems themselves. The report breaks these down. It's really important to understand them because they open up whole new attack surfaces.
Speaker 1:Okay, so what are some of these AI-specific vulnerabilities?
Speaker 2:Well, first there's data poisoning. Because these models are trained on these vast, sometimes unverified data sets, attackers can subtly inject malicious samples into that training data.
Speaker 1:How does that work?
Speaker 2:It can compromise the model's overall performance, maybe subtly, or introduce specific risks like making a code generating AI suggest insecure code patterns without the user realizing it. It's like subtly contaminating the ingredients before the cake is baked.
Speaker 1:O' Okay, so attacking the training data itself. What else?
Speaker 2:Then there's model poisoning Similar idea, but it involves manipulating the training process itself or the model's learning updates directly again again, to compromise its behavior. Marc.
Speaker 1:Thiessen, not attack the data or attack the learning process. What about attacking the model once it's built and people are using it?
Speaker 2:Danielle Pletka. Right. That brings us to prompt injection. This is where carefully crafted input, a prompt, makes the model behave in ways it wasn't intended to.
Speaker 1:Marc.
Speaker 2:Thiessen. Okay, danielle Pletka. It can be direct prompt injection a user trying to bypass safety filters maybe get it to generate harmful content or misuse its abilities. But there's also indirect prompt injection. This is fascinating and pretty concerning.
Speaker 1:How does that work?
Speaker 2:The model interacts with external content right Like it reads a webpage or processes a PDF. You feed it, and that external content secretly contains instructions, hidden prompts that alter the model's operation when it processes it.
Speaker 1:Wait, hang on. So the AI looks at a web page and the web page can essentially give it secret commands. That sounds wild and potentially very dangerous.
Speaker 2:It can be. These indirect attacks can compromise the system's integrity, its privacy, and often the negative consequences fall on the primary user of the system, not the attacker who planted the malicious content somewhere else.
Speaker 1:Wow, okay, and what about trying to get sensitive information out of the model or its training data?
Speaker 2:Right. That's the domain of information extraction. Attackers aim to access sensitive or proprietary info. This includes things like data leakage or membership inference. That's where attackers try to figure out if a specific piece of data was part of the training set. If that data point was sensitive maybe copyrighted material or personal info that shouldn't have been there in the first place that's a major leak. Then there's model inversion Attackers try to reconstruct aspects of the training data or infer sensitive details just by analyzing the model's outputs or its internal structure.
Speaker 1:And just trying to steal the model itself.
Speaker 2:Yeah, that's model extraction, Basically trying to replicate the parameters, the brain of a remote model. The key takeaway here is that GenAI's reliance on massive, often diverse data and these complex models, it just significantly increases the potential attack surface compared to older systems.
Speaker 1:That gives us a really clear picture of the current tech landscape and its vulnerabilities. The report also looks ahead, though what emerging technological trends are on the horizon beyond the kind of large language models we mostly interact with today.
Speaker 2:Yeah, it flags several really interesting developments. One is agentic AI.
Speaker 1:Agentic AI.
Speaker 2:Right. This goes beyond the model just responding to a single prompt you give it. These systems are designed to make autonomous decisions, break down complex goals into subtasks, maybe initiate actions and, crucially, learn from the outcomes. They exhibit a form of computational agency.
Speaker 1:So not just answering my question, but actively doing things or pursuing a goal on its own.
Speaker 2:Precisely, you know. Think of potential AI co-scientists autonomously formulating and testing hypotheses, or maybe self-correcting AI frameworks that improve themselves over time. This has really significant implications for how work gets done, how knowledge is produced in the future.
Speaker 1:What about AI that can understand more than just text, like images, audio.
Speaker 2:That's multimodal AI. These systems process and integrate diverse data types text, image, audio, maybe sensor data. Others Imagine an AI taking a patient's entire medical history text, notes, lab results, scans and integrating all that to produce a comprehensive diagnostic report.
Speaker 1:That sounds incredibly powerful, especially for fields like medicine.
Speaker 2:It does, or systems that translate between modalities like generating a descriptive text report from a complex image automatically. But integrating multiple data types also amplifies challenges we've already touched on, like bias. Biases from different data types can compound and copyright issues get even more complex when training on diverse existing works across different formats.
Speaker 1:Right and AI that can maybe simulate more complex reasoning like step-by-step thinking.
Speaker 2:Yes, that's. Advanced AI reasoning Systems are being designed specifically to perform logical, step-by-step problem solving, trying to emulate more deliberate human thought processes. There are things called large concept models that integrate these vast networks of conceptual knowledge to improve decision making.
Speaker 1:So trying to make AI think more like us.
Speaker 2:In a way it promises more sophisticated capabilities, for sure, but it also raises some tricky ethical questions about mimicking human cognition. And, importantly, it significantly increases energy consumption. These models can be very power hungry.
Speaker 1:Which leads us nicely to explainability or XAI. Why is that so crucial for these increasingly complex and powerful systems?
Speaker 2:Well as AI takes on bigger roles in sensitive or high stakes areas. You know security, health care, finance, even legal decisions. People need to trust how the AI reaches its conclusions. They need to understand it.
Speaker 1:So it's not enough to just get the right answer. We need to see the workings behind it, or at least understand the logic.
Speaker 2:Exactly. Explainability helps provide understandable justifications. Insights into the AI's decision making process. Understandable justifications, insights into the AI's decision-making process. It's vital for building user confidence, enabling effective human-AI collaboration and often meeting regulatory requirements. Techniques like attribution graphs can help visualize which inputs most influence the decision. The report frames XAI as increasingly an ethical dimension, really, and even a legal requirement under EU law, like the AI Act. Though standardizing explainability for these very complex, often opaque black box models that remains a huge challenge.
Speaker 1:That's a fantastic overview of the tech landscape and where it might be heading. Yeah, let's shift gears a bit and look at the economic picture. How is the EU actually positioned globally in this Gen AI economy?
Speaker 2:Yeah, as we touched on earlier, it's definitely a nuanced picture. The EU is certainly significant player, represents about 7% of global AI players, ranks third overall and, as we said, its research strength is undeniable, second only to China in publications.
Speaker 1:But the challenge is turning that research muscle into market dominance. Is that fair?
Speaker 2:That's precisely it. The report really highlights the significant lag in innovation, patents and, perhaps most critically, in venture capital investment compared to the US and China.
Speaker 1:Right the VC funding again.
Speaker 2:Yeah, While EU companies are investing in foreign players and the EU hosts foreign players too the US significantly leads in actually owning foreign Gen AI companies foreign Gen AI companies. Germany is needed as a leader within the EU in both hosting and owning stakes. But bridging this overall VC investment gap is a key challenge for EU players who want to compete globally.
Speaker 1:How are traditional industries being impacted Manufacturing, for example?
Speaker 2:Gen AI is set to really transform manufacturing. Think smart production lines using advanced data analytics, predictive maintenance powered by AI, making autonomous decisions about machine health before things break. Optimizing complex supply chains, automating intricate processes. It's about creating more interconnected, efficient and adaptable systems.
Speaker 1:And what about the creative industries? This feels like an area seeing huge and sometimes pretty contentious impact because of AI's ability to generate content.
Speaker 2:It's definitely a space of both massive opportunity and, yes, significant tension. Gene AI is revolutionizing content creation, helping creators generate text, images, music video much faster, exploring entirely new artistic models. But here's where the challenges are particularly acute Intellectual property and copyright. Genie-ai models are often trained on vast amounts of existing creative work, sometimes, you know, without explicit permission from the creators.
Speaker 1:That's the major point of conflict we're seeing play out in courtrooms right now, isn't it?
Speaker 2:It really is. It raises serious questions about fair compensation for creators whose work is used for training and the potential displacement of original human-created works by AI adaptations or variations. There's also a risk, if it's not managed carefully, of homogenization, you know, AI models just relying too heavily on existing styles and trends rather than fostering true novelty.
Speaker 1:Let's talk about the impact on the labor market Employment. We hear so much talk about AI replacing jobs.
Speaker 2:How does the report frame this? It frames it quite carefully, emphasizing productivity gains versus potential disruption or displacement, and a key insight here, I think, is that Gen AI impacts tasks within jobs, not necessarily entire jobs, wholesale.
Speaker 1:Tasks, not jobs.
Speaker 2:Okay. It augments several cognitive abilities that are crucial in many roles, things like comprehension and expression, understanding and generating language, attention and search, finding information, classifying documents and conceptualization, learning, abstraction, identifying patterns, generalizing from data.
Speaker 1:So how do those AI capabilities map onto specific jobs? Which ones are most affected?
Speaker 2:Well, a JRC study highlighted occupations most exposed to these AI capabilities. It identified engineers, software developers, teachers, office clerks and secretaries as facing particularly high exposure.
Speaker 1:Teachers, that's interesting.
Speaker 2:Yeah, the study found teachers were actually more exposed to AI-automatable tasks than 90% of workers across all the occupations they surveyed. It's not necessarily that the entire teaching job is replaced, of course, but many tasks within it could be significantly changed or augmented by AI.
Speaker 1:That makes sense. It's about the nature of the work shifting, which brings us straight to the skills gap. Right? If tasks change, people need different skills.
Speaker 2:Absolutely. The report really stresses that the needed skills go beyond just, you know, learning to use the tools. It involves understanding the broader implication, the ethical considerations, the limitations of the AI.
Speaker 1:And how are we doing on that front in the EU?
Speaker 2:Well, the EU has set an ambitious target in its digital decade program 80% of the population should have basic digital skills by 2030. But as of 2023, only 56% had met that target, so there's a clear gap that needs addressing.
Speaker 1:How do we close that gap then?
Speaker 2:It requires a really significant push for upskilling, retraining and comprehensive AI literacy initiatives, programs updating digital competence frameworks like DigComp 3.0 to explicitly include AI knowledge and ethical use, and developing specific AI literacy programs, starting right from schools.
Speaker 1:Okay, and finally, on the economic side, what does the market for conversational AI-like chatbots look like specifically within the EU?
Speaker 2:It's described as a complex and dynamic market, but currently dominated by a few large non-EU players. Market but currently dominated by a few large non-EU players. Openai's chat GPT is identified as the clear leader in terms of user base across the EU.
Speaker 1:Is that market uniform across all EU countries?
Speaker 2:Not exactly. There's variation in market share and also how people prefer to access these tools. Some prefer dedicated apps, others use websites more. It depends on the member state and while the major global players are dominant everywhere, you do have local players, like Mistral AI, based in France, who can have particular prominence in specific countries. There's also this interesting competition between companies who build both the underlying AI models and the user interface, the vertically integrated players and those who primarily build user interfaces or services on top of external models interface-only solutions.
Speaker 1:Okay, that gives us a comprehensive picture of the tech and economic aspects. Yeah, let's move into the societal dimensions now and how policymakers are trying to keep pace Misinformation and disinformation. That seems like a major challenge that Gen AI could significantly worsen.
Speaker 2:Oh, they absolutely are. The report highlights this as a critical area. Gen AI enables the creation of incredibly convincing false content think deep fakes at a massive scale and speed Right. It can pollute online information sources, amplify the spread of disinformation. It makes it incredibly hard for accurate information to keep up or for rebuttals to even be effective. We've seen its use already in sophisticated influence operations, like that doppelganger campaign targeting European countries.
Speaker 1:So technical solutions like detecting deepfakes are important, but they're not enough on their own.
Speaker 2:Precisely Technical measures like watermarking. They're valuable, yes, but the report strongly emphasizes that media literacy and AI literacy are crucial skills for citizens. People need the ability to critically evaluate AI generated content, understand its potential to deceive and resist manipulation attempts. It's a societal defense alongside the technical ones.
Speaker 1:How is Gen AI actually being talked about in the media? Does that shape public perception, do you think?
Speaker 2:Oh, definitely. The media narrative around Gen A, around Gen AI, is often quite polarized, you know, swinging between these utopian visions of transformative potential and quite dystopian warnings about risks, job losses, privacy collapse, that sort of thing. This often dramatic framing certainly shapes public discourse and, by extension, it can influence policy discussions too.
Speaker 1:Has the media coverage ramped up recently?
Speaker 2:Massively. There was a huge surge in coverage starting in late 2022, particularly after systems like ChatGPT became widely available. Reporting intensity tends to peak after key events like new model announcements, major commercial deals or significant regulatory proposals like the AI Act.
Speaker 1:And what about the tone of that coverage? Is it mostly positive, negative?
Speaker 2:Well, the report notes that in mainstream media, the overall sentiment towards Gen AI is actually predominantly positive, often highlighting economic growth opportunities. However, a significant chunk, maybe around 30%, does focus on the risks and negative implications Interestingly unverified sources. You know blogs, certain social media channels often have a more neutral or mixed tone, but they are much more prone to sensationalism and alarmism, either exaggerating the AI's capabilities or predicting catastrophic outcomes, often downplaying the ethical considerations in the process. Understanding this media landscape is really key to gauging public perception.
Speaker 1:Let's turn to something called the digital commons, things like open source code repositories, Wikipedia, openly licensed creative works. How does Gen AI interact with these vital resources?
Speaker 2:The digital commons are absolutely crucial. They serve as fundamental, often publicly accessible, training data for many, many AI models.
Speaker 1:Right, the raw material again.
Speaker 2:Exactly, but Gen AI offers opportunities here too. It could help people find and navigate vast open data sets, maybe aid in fact-checking by quickly searching open knowledge bases, facilitate translation of open content, and using knowledge from the commons can even help improve the fairness and diversity of AI outputs, making them less biased.
Speaker 1:So it sounds like potentially a symbiotic relationship.
Speaker 2:Potentially yes, but the report also points to significant risks for the commons, and this is a critical finding. Gene AI poses several threats. That's right. It could lead to the enclosure or privatization of free knowledge if access for data scraping becomes restricted or maybe monetized. It might decrease voluntary contributions to platforms like Wikipedia if people just start relying solely on chatbots for information instead of contributing back. There's a significant risk of pollution if AI-generated errors or biases get scraped and inadvertently introduced into open databases. That requires costly human effort to find and correct, and the organizations hosting these commons, often nonprofits, face real financial strain from the sheer volume of AI crawlers constantly accessing their data, often with little direct return to support the infrastructure.
Speaker 1:Wow, those risks paint a pretty challenging future for these resources we rely on.
Speaker 2:They really do. The report contrasts potential future scenarios one where the commons thrive with the right support and policies, and another where they deteriorate, becoming less trustworthy, less useful and potentially hindering AI development itself by providing lower quality training data. Protecting the digital commons isn't just about open access. It's actually essential for developing fair, robust and advanced AI systems in the long run.
Speaker 1:Okay, let's in the long run. Okay, let's address the environmental implications. Running AI models, training them it requires significant infrastructure, significant energy.
Speaker 2:It absolutely does. The direct environmental impact is considerable, primarily from the data centers that power AI. These centers consume vast amounts of energy, large amounts of water for cooling and contribute significantly to electronic waste.
Speaker 1:Do we have a sense of the scale? How much energy are we talking about?
Speaker 2:Well, estimates vary and it's a moving target, but data centers globally accounted for around maybe 1.5% of total electricity consumption in 2024. And AI's share of that is growing rapidly. Some estimates, project AI could reach, say, 27% of total data center energy consumption by 2027.
Speaker 1:That's a huge jump.
Speaker 2:It is. There's still uncertainty in these numbers, mind you. New, maybe more efficient model types are emerging, but also more complex reasoning models that use more power. Supply chain issues affect hardware deployment. It's complex, but the trend is clearly upwards, and the relatively short lifespan of the specialized hardware also adds significantly to the e-waste problem.
Speaker 1:So Gen AI is definitely resource intensive, but can it also help address environmental challenges? Is there an upside?
Speaker 2:Yes, that's the other side of the coin. Ai can be applied to climate mitigation efforts, Things like tracking pollution sources more accurately, optimizing energy grids for efficiency, designing more sustainable materials. However, the full scale of this potential help is still being quantified and it faces practical limitations. The EU is implementing regulations to address the environmental impact of data centers. There are provisions in the Energy Efficiency Directive, the taxonomy regulation, and they're promoting energy-efficient hardware like neuromorphic chips for more sustainable deployment. The upcoming Cloud and AI Development Act also aims to support sustainable cloud infrastructure.
Speaker 1:The report also mentions an indirect environmental impact from AI. What's that about?
Speaker 2:That's right. It's a less obvious point, but an important one. Biased AI models could potentially influence public attitudes or behaviors, including those related to climate change, for instance, if AI search results consistently downplay climate risks due to biases in their training data.
Speaker 1:Ah, influencing opinion.
Speaker 2:Exactly which could indirectly impact energy consumption or emissions policies. This highlights the need for transparent and unbiased models, and it also underscores the need for international cooperation, because different regions have very different policy approaches to these environmental issues right now.
Speaker 1:Let's discuss the impact on specific vulnerable groups, starting with children's rights. How does Gen AI affect children?
Speaker 2:Well, it presents opportunities, certainly, like personalized educational tools or new avenues for creativity, yeah, but the risks are significant and children are particularly vulnerable.
Speaker 1:In what ways?
Speaker 2:They're susceptible to deceptive manipulation techniques that AI enables. They're also at risk from exposure to harmful or inappropriate content and there's often a real lack of age. Appropriate privacy and safety measures built into many of these systems, plus the potential for AI bias to affect them and for hallucinations you know the AI confidently stating wrong information to mislead them these are major concerns.
Speaker 1:So general AI ethics guidelines aren't really enough here. Children need specific safeguards.
Speaker 2:Absolutely. The report strongly emphasizes the need for child safeguards to be designed into these systems right from the start, taking an age-appropriate, inclusive approach. It also calls for longitudinal studies. We need to understand the long-term impact of interacting with Gen AI on children's cognitive and mental development. That's crucial for informing future policy.
Speaker 1:What about mental health? More broadly, the report mentions risks associated with things like AI, chatbots and companion apps.
Speaker 2:Yes, there are documented risks. There too. Users can develop addiction-like behaviors, become overly reliant on chatbots for validation, potentially displacing important human relationships.
Speaker 1:Right.
Speaker 2:And, tragically, there have been cases where chatbots have actually encouraged harmful actions, sometimes linked to their perceived sentience or just their tendency to agree with users without critical assessment.
Speaker 1:And deepfakes, which we talked about regarding misinformation. They also have a significant mental health impact, don't they?
Speaker 2:A severe one. Deepfakes can be used for cyberbullying, harassment and, devastatingly, for creating and distributing non-consensual explicit content, often targeting women and girls. This causes profound psychological trauma to victims. The report cites the Almendraleo case in Spain as a tragic example of this, and Gen AI exacerbates this risk by making the creation of such harmful content much easier and more accessible than before.
Speaker 1:That really highlights how interconnected all these issues are misinformation, safety, mental health Exactly.
Speaker 2:And it presents real challenges for existing policies. Schools, for instance, have policies against cyberbullying, but applying them effectively to sophisticated AI-generated content is complex. It also challenges platforms beyond just social media, like app stores and search engines, where image manipulation apps some quite harmful have appeared.
Speaker 1:This naturally leads us back to the core issue of bias, stereotypes and fairness. How does Gen AI actually perpetuate these problems?
Speaker 2:Well, fundamentally, Gen AI models are trained on vast datasets, right, and these datasets often reflect existing societal biases and stereotypes gender biases, racial biases, cultural biases, you name it.
Speaker 1:So garbage in, garbage out, or rather bias in, bias out.
Speaker 2:Pretty much. As a result, the AI can perpetuate and even amplify these biases in its outputs. The report gives clear examples. Studies show AI credit risk assessment models can exhibit similar gender bias to traditional methods. Text-to-image generators frequently show really strong gender and racial biases when generating, say, occupational portraits. Doctors are mostly male, nurses female, and so on. It just reinforces harmful stereotypes.
Speaker 1:So the bias in the data becomes bias in the AI's output.
Speaker 2:Yeah.
Speaker 1:How can this possibly be mitigated?
Speaker 2:It's tough, but there are strategies Using more diverse and representative training data is crucial, developing and implementing fairness-focused algorithms during training or post-processing and, really importantly, increasing diversity among the actual teams building these AI systems. Regular audits using diverse benchmarks are also essential. Regular audits using diverse benchmarks are also essential. And policy frameworks like the Digital Services Act, dsa and the EU, which requires very large online platforms to conduct risk assessments on systemic risks, including discrimination. These are important levers too, but it's a complex area, often involving difficult tradeoffs between, say, fairness for different groups and overall accuracy.
Speaker 1:The report also discusses incorporating a behavioral approach into AI policy. What does that actually mean?
Speaker 2:It means leveraging insights from behavioral science. You know how humans actually make decisions, including all our cognitive biases and shortcuts, to inform the design and regulation of AI.
Speaker 1:How so.
Speaker 2:Well understanding human cognitive biases can help policymakers design rules that protect users from being exploited by AI systems, particularly advanced agentic AI that might learn and exploit individual user preferences or vulnerabilities in detail.
Speaker 1:So using insights into human behavior to make AI policy more effective, more protective.
Speaker 2:Exactly Leveraging those insights for good, while also limiting the ways AI itself can use those insights for potentially manipulative purposes. Interestingly, the report also notes the flip side. In certain domains, like maybe medicine or law, AI decision making could potentially overcome some of the cognitive biases humans are prone to. Oh right. Research is really needed there to understand when and how AI might actually make less biased decisions than humans and when human oversight or control remains absolutely essential.
Speaker 1:Privacy and data protection, especially under the GDPR in Europe. This must be a huge challenge, given Gen AI's need for massive data sets.
Speaker 2:It absolutely is. The report talks about Gen AI's insatiable appetite for data for training. This raises fundamental issues about data quality. If the training data is inaccurate or biased, it leads directly to harmful AI outputs. But there's also the risk of inference, where Gen-AI systems can infer sensitive personal information about individuals from seemingly innocuous, non-sensitive data they process process. The report mentions a retail example where analyzing shopping patterns allowed a company to infer quite accurately that a customer was pregnant, even though she hadn't told them. Wow.
Speaker 1:How does GDPR apply here, then, and what are the practical challenges?
Speaker 2:It raises really complex questions about the lawfulness of processing such vast quantities of data, particularly whether legitimate interest is a sufficient legal basis for training models on these broad, often scraped data sets. Accountability is also challenging. Who is responsible when an AI system produces harmful or privacy-violating outputs? The user, the provider. And implementing core data subject rights under GDPR, like your right to access your data or the right to have it erased, is technically incredibly difficult, maybe impossible sometimes, for these large, complex, opaque AI models. How do you find and delete one person's data from a model trained on trillions of data points?
Speaker 1:Are the regulatory bodies, like data protection authorities, getting involved?
Speaker 2:Yes, absolutely. Dpas in countries like Italy and the Netherlands have already taken action regarding specific Gen AI services. Italy and the Netherlands have already taken action regarding specific Gen AI services. The European Data Protection Board, the collective body of EU DPAs, clearly sees its supervisory role extending to Gen AI processing personal data, and it's calling for close cooperation with the new EU AI office set up under the AI Act.
Speaker 1:But we don't have all the answers yet.
Speaker 2:No, no, as the report notes, definitive legal and technical answers for many of these issues, especially around model opacity and actually implementing data subject rights effectively. They're still very much needed. It's an ongoing process.
Speaker 1:OK, finally, let's tackle the copyright challenges.
Speaker 2:This is, you balance protecting the rights of creators whose work exists today with enabling the innovation of AI, which often seems to require access to vast amounts of existing content for training.
Speaker 1:So what's the key legal issue?
Speaker 2:A key one in the EU is the application of the text and data mining TDM exception. This comes from the EU's copyright directive. It basically allows TDM on lawfully accessed works unless the right holder has explicitly reserved their rights in an appropriate manner, especially using machine-readable means.
Speaker 1:And what counts is appropriate machine-readable means. Is that clear?
Speaker 2:Well, that's a major part of the debate. In the ongoing litigation there's uncertainty whether just putting a line in your terms of service is enough or if more technical measures are required, things beyond just the standard robotstxt file, which has limitations. Efforts like AIPF are trying to develop better standards, but it's not settled. Court cases in Germany, the Netherlands, the US, they're all grappling with this interpretation right now.
Speaker 1:What about the content the AI actually produces? Can that be copyrighted itself?
Speaker 2:Generally under EU copyright law, for something to be protected, it needs to show originality and reflect the author's own intellectual creation. That typically requires human input, human creative choices.
Speaker 2:So if it's just generated automatically, If an AI output is generated purely automatically, with minimal human direction, it's unlikely to qualify for copyright protection itself. If a human user provides sufficiently precise instructions or makes significant creative selections or modifications that shape the final output, then that output might be protected, but the human user would need to demonstrate their specific creative contribution. Courts in different countries are starting to rule on what level of human intellectual input is actually sufficient.
Speaker 1:And if the AI's output infringes someone else's existing copyright, who's liable then? The user, the AI company?
Speaker 2:Infringement can definitely occur, For instance, if the AI model has effectively memorized parts of its training data, maybe a specific image or text passage, and reproduces it too closely in its output. If the AI provider didn't have the rights to use that training material for that purpose in the first place, there's a problem.
Speaker 1:And who gets sued?
Speaker 2:Liability could potentially fall on the user who prompted the infringing output or on the AI provider or both. But most of the recent high profile cases we've seen in the US and Europe like the New York Times lawsuit against OpenAI, cases brought by collecting societies like GEMA in Germany or publishers in France against major AI companies they have primarily targeted the AI providers themselves.
Speaker 1:Are there solutions being explored? To try and navigate, says maybe find a middle ground.
Speaker 2:Yes, things are starting to happen. Licensing agreements are emerging between some AI providers and publishers, like media organizations, allowing the AI to train on their content, usually for a fee. Collective licensing and collective bargaining are also being discussed as potential mechanisms, though they're complex to set up, to ensure fair compensation for creators whose works are used in training.
Speaker 1:So finding ways to pay for the data.
Speaker 2:Essentially, yes, finding workable models. Ultimately, the report suggests that harmonized approaches and standardization across the EU are really necessary here to provide legal certainty for everyone creators, users and the AI developers.
Speaker 1:Wow, that was an incredible deep dive. We covered so much ground, from the basic tech all the way through these really complex policy challenges across technology, the economy, society, ethics, regulation, everything.
Speaker 2:The EU is clearly grappling with how to foster innovation in this space, how to become a leader while also upholding its core values and ensuring a strategic, evidence-based approach, and that's where reports like this one from the JRC are so vital.
Speaker 1:Absolutely. The insights from this kind of scientific evidence are clearly crucial for navigating this incredibly fast changing landscape.
Speaker 2:And that evidence is vital for policymakers trying to make informed decisions in a world where, frankly, the technology is often evolving much faster than regulations can keep up.
Speaker 1:So, as we wrap up this deep dive into the JRC's outlook on generative AI, here's maybe a thought to leave you with, building on everything we've discussed. Here's maybe a thought to leave you with, building on everything we've discussed. Given Gen AI's incredible capacity to generate content, to automate, to augment human tasks, how do we ensure its widespread adoption doesn't inadvertently diminish essential human skills, skills like critical thinking, creativity, nuanced understanding, especially in sensitive areas like education, journalism, public discourse? How can we design AI systems and the policies governing them so they truly augment human capabilities and human values, rather than eroding them?
Speaker 2:That's a fundamental question, isn't it? Especially as this technology continues to integrate deeper and deeper into our lives.
Speaker 1:Indeed Well. Thank you for joining us for this deep dive. We hope you feel a little more well-informed about the fascinating and definitely challenging world of generative AI.