AI Transparency and Explainability Training
Unlock Trust and Compliance in Your AI Solutions
Why This Training?
In the evolving landscape of artificial intelligence, transparency and explainability are paramount. As the EU AI Act emphasizes clear and understandable AI models, businesses must build transparency into their systems to foster trust and meet regulatory standards. This training offers a deep dive into constructing AI solutions that are both transparent and explainable, ensuring that your AI team stays ahead of the curve and remains compliant.
Duration: 1 day (8 hours), delivered as a live online/virtual session

Who Should Attend?
AI Developers and Engineers
Data Scientists and Analysts
Compliance Officers and Legal Teams
Product Managers overseeing AI solutions
CTOs, CIOs, and Tech Leadership
Anyone interested in understanding the intricacies of transparent AI

Course Highlights
Introduction to AI Transparency and Explainability
- Importance in today's AI landscape
- Overview of the EU AI Act mandates
Technical Dive into Transparent AI Models
- AI model architecture for transparency
- Features and decision-making processes
Interpreting AI with SHAP and LIME
- Techniques for model interpretation
- Practical applications and examples
Challenges and Ethical Implications
- Trade-offs between model performance and transparency
- Ethical considerations for transparent AI
Hands-on Workshops
- Build and evaluate transparent AI models
- Real-world case studies and analysis

Prerequisites
Basic understanding of AI and machine learning concepts
Familiarity with AI development processes
Training Materials Needed by Participants
Laptop with Python environment set up
AI development tools (suggested list will be provided prior to training)
Pre-training reading materials (to be provided upon registration)
Training Content
AI Transparency and Explainability Training
Objective: Equip your AI team with the knowledge and tools needed to construct transparent and explainable AI models. Dive deep into the methodologies, ensuring alignment with the EU AI Act and fostering trust and compliance in your AI solutions.
Session 1: Introduction to AI Transparency and Explainability
- Demystifying Transparent AI: Understanding the need and importance.
- The EU AI Act: An overview of mandates on transparency and explainability.
- Real-world Implications: Case studies where AI transparency made a difference.
Session 2: The Technical Landscape of Transparent AI
- AI Model Architecture: Key features that enable transparency.
- Decisions in AI: Unpacking how AI models make decisions.
- Interactive Activity: A walkthrough of a transparent AI model.
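To give a flavour of this walkthrough, here is a minimal sketch of an inherently transparent model, using a shallow scikit-learn decision tree on the public Iris dataset. The dataset and model are illustrative choices for this brochure, not the course's official exercise:

```python
# A minimal sketch of an inherently transparent model (illustrative only).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A shallow tree keeps every decision path short enough for a human to read.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Print the model's complete decision logic: each split is an explicit,
# auditable rule on a named feature.
print(export_text(model, feature_names=list(data.feature_names)))
```

Because every prediction follows a readable chain of if/else rules, the model's decision-making process can be inspected directly, with no post-hoc explanation layer needed.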
Session 3: Methods for Achieving Explainability
- Model Interpretation with SHAP and LIME: Techniques overview.
- Trade-offs: Performance vs. transparency in AI models.
- Hands-on Workshop: Crafting models with SHAP and LIME.
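As a taste of the workshop, the sketch below applies SHAP and LIME to a scikit-learn classifier on a public dataset. It assumes the shap and lime packages are installed, and the model and dataset are illustrative, not the course's official lab code:

```python
# A minimal sketch of SHAP and LIME explanations for a tabular classifier.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# SHAP: additive feature attributions, computed efficiently for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: fit a simple local surrogate model around one individual prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names, class_names=data.target_names
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top features driving this single prediction
```

SHAP attributes each prediction to feature contributions with consistent additive guarantees, while LIME approximates the model locally with an interpretable surrogate; the workshop explores when each technique is the better fit.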
Session 4: Challenges and Ethical Implications in Transparent AI
- Navigating Complexities: Achieving balance in transparency.
- Ethical Implications: What happens when AI models aren't transparent?
- Group Discussion: Ethical challenges faced by attendees in their domains.
Session 5: Real-world Application and Case Studies
- Industry-wise Breakdown: How different sectors achieve AI transparency.
- Case Study Analysis: Dive deep into specific instances of transparent AI.
- Interactive Workshop: Analyzing and discussing real-world AI models.
Session 6: Regulatory and Compliance Considerations
- Understanding the EU AI Act: A deeper dive.
- Achieving Compliance: Steps and measures to ensure alignment.
- Group Activity: Designing a compliance checklist for AI models.
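One way to capture the output of this group activity is to encode the checklist in code, so it can be versioned alongside the model and reviewed automatically. The item names below are illustrative placeholders, not an official EU AI Act checklist:

```python
# A minimal sketch of a machine-readable compliance checklist
# (item names are hypothetical placeholders, not official requirements).
COMPLIANCE_CHECKLIST = {
    "intended_purpose_documented": True,
    "training_data_provenance_recorded": False,
    "model_decisions_explainable": True,
    "human_oversight_process_defined": False,
    "user_facing_transparency_notice": False,
}

def report_gaps(checklist):
    """Return the checklist items that still need attention."""
    return [item for item, done in checklist.items() if not done]

print(report_gaps(COMPLIANCE_CHECKLIST))
```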
Session 7: Hands-on Workshop: Crafting Transparent AI Solutions
- Activity Brief: Working on a given AI scenario.
- Group Tasks: Constructing transparent AI models.
- Feedback Rounds: Sharing and refining the models developed.
Session 8: Concluding Thoughts and Engaging Q&A
- Recap of the Training: Highlighting key takeaways.
- Open Floor Q&A: Addressing any lingering questions.
- Path Forward: Encouraging implementation of learnings and best practices in attendees' projects.
Meet the Trainers
Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, London, UK. He has an extensive track record in systems assurance and safety, security, sustainability, and knowledge assessment/management methodologies, with a background in the design and development of advanced control systems for business and safety-critical industrial applications.
Hessami represents the UK on the European Committee for Electrotechnical Standardization (CENELEC) and International Electrotechnical Commission (IEC) committees for safety systems and hardware and software standards. CENELEC appointed him convener of several working groups for the review of the EN 50128 safety-critical software standard and for the update and restructuring of CENELEC's software, hardware, and system safety standards.
Ali is also a member of the Cyber Security Standardisation SGA16, SG24, and WG26 groups, and he founded and chairs the IEEE Special Interest Group in Humanitarian Technologies and the Systems Council Chapters in the UK and Ireland Section. In 2017, Ali joined the IEEE Standards Association (SA), initially as a committee member for the landmark IEEE 7000 standard, "Addressing Ethical Concerns in System Design." He was subsequently appointed Technical Editor and later Chair of the P7000 working group. In November 2018, he was appointed Vice-Chair and Process Architect of the IEEE's global Ethics Certification Programme for Autonomous & Intelligent Systems (ECPAIS).
Trish advises and trains organisations internationally on Responsible AI (AI/data ethics, policy, and governance) and Corporate Digital Responsibility.
Patricia has 20 years' experience as a lawyer in data, technology, and regulatory/government affairs and is a registered Solicitor in England and Wales and in the Republic of Ireland. She has authored and edited several works on law, regulation, policy, ethics, and AI.
She is an expert advisor on the Ethics Committee of the UK's Digital Catapult Machine Intelligence Garage, working with AI startups, and an expert advisor ("Maestro", a title held by only three people worldwide) on the IEEE's CertifAIEd (previously known as ECPAIS) ethical certification panel. She sits on the IEEE's P7003 (algorithmic bias), P2247.4 (adaptive instructional systems), and P7010.1 (AI and ESG/UN SDGs) standards programmes, is a ForHumanity Fellow working on Independent Audit of AI Systems, chairs the Society for Computers and Law, and serves as a non-executive director on the boards of iTechlaw and Women Leading in AI. Until 2021, Patricia was on the RSA's online harms advisory panel, whose work contributed to the UK's Online Safety Bill.
Trish is also a linguist and speaks English, French, and German fluently.
In 2021, Patricia was listed among the 100 Brilliant Women in AI Ethics™ and named on Computer Weekly's longlist of the Most Influential Women in UK Technology.