EU AI Act Lead Implementer
The EU AI Act's core principles revolve around ensuring trustworthy and responsible AI development and use. This course is designed to help delegates meet this challenge.
Description
The EU AI Act's core principles revolve around ensuring trustworthy and responsible AI development and use. These principles include human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination, fairness, and social and environmental wellbeing. The Act aims to protect fundamental rights, democracy, and the rule of law while fostering innovation and supporting the internal market.
About This Course
Delegates will learn how to interpret the Act's principles and to identify and implement solutions across its full scope. This includes how to:
1. Ensure human agency and oversight:
- AI systems should support human decision-making, not replace it.
- Human oversight is crucial to prevent harmful outcomes and ensure accountability.
2. Ensure technical robustness and safety:
- AI systems must be safe and secure, with robust measures to prevent harm.
- This includes risk management, data quality, and cybersecurity requirements.
3. Protect privacy and data governance:
- The Act emphasises the protection of personal data and privacy, aligning with the GDPR.
- It sets rules for data quality and governance, particularly for sensitive information.
4. Ensure transparency:
- AI systems should be transparent in their operations, allowing users to understand how they work.
- This is crucial for building trust and ensuring accountability.
5. Promote diversity, non-discrimination and fairness:
- The Act calls for measures to prevent discrimination and ensure equity in AI outcomes.
- AI systems should be designed to avoid bias and deliver fair outcomes for all.
6. Support social and environmental wellbeing:
- AI systems should be designed with their social and environmental impacts in mind.
- This includes ensuring AI contributes to a more sustainable and equitable future.
7. Establish a risk-based approach:
- AI systems are classified into risk levels, with varying regulatory requirements depending on the level of risk.
8. Prohibit certain practices:
- The Act prohibits AI uses that pose unacceptable risks, such as social scoring and harmful manipulation.
9. Focus on accountability:
- Providers and users of AI systems are held accountable for their actions and the consequences of their use.
Through alignment with a range of recognised standards such as ISO 27001, ISO 27701, ISO 38500 and ISO 42001, delegates will learn how to structure and implement a programme of works aligned to the requirements of the EU AI Act.
Assessment
Delegates are tasked with completing a combination of in-course quizzes and a final 12-question, essay-style examination on the afternoon of Day 3. Delegates must achieve a combined score of at least 70% to pass. Successful delegates are awarded a Certificate of Achievement.
Prerequisites
A foundational understanding of AI and the regulatory landscape would be beneficial.
What's Included?
- 350-page Participant Guide
- Examination fees
- A copy of the Act and associated standards (ISO 42001 and ISO 27001)
Who Should Attend?
- Managers with responsibility for Compliance
- Lawyers advising clients on Compliance
- Data Protection Officers
- Risk specialists
- Individuals wishing to understand the link between AI, Data Protection and Risk Management