Intertek's Assurance in Action Podcast Network
Navigating ISO 42001 Standard for Ethical and Responsible AI Management
This episode explores how the ISO 42001 standard guides organizations in managing AI ethically. Our experts discuss the importance of responsible data inputs, preventing bias, and ensuring transparency in AI outputs to protect society from misuse. Tune in to learn how organizations can align AI practices with ethical principles for a fair and accountable future.
Speakers:
- Angelique Brouillard, NA Program Manager, IT & Data Security at Intertek Business Assurance
- Sofia Liebon, Europe & Asia Program Manager, IT & Data Security at Intertek Business Assurance
Follow us on: Intertek's Assurance in Action | Twitter | LinkedIn
Host: Hello and welcome to Intertek’s Assurance in Action podcast!
I’m your host, Natalia Farina, and today we’re talking about the societal impact of AI—specifically, how we can manage AI in a fair and human way. We often hear about data protection, but what about the ethical, moral, and human aspects of AI?
To help explore this, we have two experts with us: Angelique Brouillard, NA Program Manager, IT & Data Security at Intertek Business Assurance, and Sofia Liebon, Europe & Asia Program Manager, IT & Data Security at Intertek Business Assurance. Welcome to both of you!
Angelique Brouillard: Thanks, Natalia. Glad to be here.
Sofia Liebon: Yes, thank you! We’re excited to dive in.
Segment 1: Why AI Ethics Matters
Host: Let’s start with the basics. We’re hearing a lot about AI ethics lately. Many tech companies have adopted their own AI ethics codes or guidelines. But from a broader perspective, why is it so important to have ethical guidelines in place for AI?
Angelique Brouillard: AI has immense potential to improve our lives, but it also carries significant risks if not managed properly. These risks aren’t just about data privacy but about fairness, bias, and societal impact. That’s where something like ISO 42001 comes into play. One of its key goals is to ensure that the data feeding an AI system is used responsibly and doesn’t have a negative societal impact.
Sofia Liebon: Exactly. It’s not just about the AI system itself but the entire lifecycle—from the data that’s input into the system to how the AI is implemented and used. Adopting a framework like ISO 42001 helps organizations mitigate risks and ensure that their AI systems are fair, transparent, and accountable.
Segment 2: Controlling What Goes Into AI
Host: So, let’s dig into this a bit more. Angelique, you mentioned the lifecycle of AI. Can you give us an example of why it’s so important to control what we put into these systems?
Angelique Brouillard: Sure. Take the example of an AI system used for hiring. Imagine a company uses AI to screen job candidates. If the data feeding the AI system—like past hiring decisions—is biased, then that bias can be embedded in the AI, leading to unfair outcomes. This is where ISO 42001 can make a big difference. It’s designed to ensure that organizations are aware of the risks at every stage of the lifecycle, including data input. This is covered in Annex A.6 of the standard, which focuses on understanding and mitigating the risks associated with AI systems and their algorithms.
Sofia Liebon: And it’s not just about identifying the risks. ISO 42001 ensures that there’s a process in place for handling these issues if they arise. For example, if the hiring AI starts to make biased decisions, the company should have a plan for how to address that—whether it's correcting the algorithm, reviewing the data, or even halting the system’s use until the issue is resolved.
Host: That’s a great point. It’s not just about creating AI, but about responsibly managing it throughout its lifecycle.
Segment 3: Controlling How We Use AI
Host: Now, let’s talk about the other side of the coin—how we use AI. We’ve all heard about deepfakes and other types of AI-generated content being used in harmful ways. How does ISO 42001 address this?
Sofia Liebon: That’s a huge concern. The spread of misinformation through AI-generated images, videos, and audio is a growing threat. The line between real and fake is becoming increasingly blurred, and that’s dangerous for democracy and for society as a whole. ISO 42001, specifically in Annex A.9, addresses the responsible use of AI systems. This includes looking at how AI outputs are used and ensuring that they aren’t causing harm.
Host: What exactly does A.9 cover?
Angelique Brouillard: A.9 focuses on how companies use AI systems. It emphasizes accountability, transparency, and preventing harm. For example, organizations must ensure that AI-generated content, like fake images or videos, isn’t used to mislead people or cause societal harm. Companies have to be responsible for the outputs of their AI systems, which includes monitoring and making sure they aren’t misused.
Sofia Liebon: Exactly. It also stresses transparency. People need to know how these AI systems work, especially when they influence decision-making or content. A.9 helps companies ensure their AI is used ethically and aligns with societal values. It’s not just about building AI; it’s about how it impacts society.
Segment 4: The Future of Ethical AI
Host: So, what’s the future of AI ethics? How can companies ensure they’re using AI responsibly moving forward?
Angelique Brouillard: We’ll likely see more companies adopting standards like ISO 42001. These standards provide a structured way for companies to think about AI not just in terms of what it can do for them, but in terms of the societal impact it can have. It’s about embedding ethics into the core of AI development and use.
Sofia Liebon: And more regulations will probably follow. As AI continues to evolve, governments and international bodies will likely introduce more regulations to ensure that AI is used responsibly. ISO 42001 could serve as a foundation for these regulations, helping companies get ahead of the curve.
Host: It sounds like a lot of progress is being made, but there’s still a long way to go.
Conclusion
Host: That brings us to the end of today’s episode. We’ve covered the importance of ethical AI management, how ISO 42001 helps ensure AI is used responsibly, and the role of Annex A.9 in preventing misuse. A big thank you to Angelique Brouillard and Sofia Liebon for joining us today.
Angelique Brouillard: Thanks, Natalia! It was a great discussion.
Sofia Liebon: Yes, thank you! It was a pleasure.
Host: Thank you to our listeners for tuning in to this episode. If you have questions or need assistance in establishing a solid AI Management System, feel free to reach out to us at business.assurance@intertek.com. Until next time!