Jan 6 | Naz
Why Do We Have (Harmful) Bias in AI?
Ever wondered why an AI might depict a darker-skinned woman as a cleaning person and a light-skinned man as a CEO?
AI, in its essence, is a mirror of our inputs and processes. When these inputs carry historical biases or the processes lack diversity, the AI reflects these flaws. It's not just about the code; it's about the context in which that code is written and the data it learns from.
The attached study on bias in generative AI sheds light on the systemic issues ingrained in the technology we are rapidly integrating into our lives.
Here’s my perspective:
1. The Diversity Gap: The article reaffirms the urgent need for diversity in AI development. When a homogeneous group designs AI, it is likely to perpetuate the same limited perspectives and biases.
2. Data Quality Matters: As highlighted in the Bloomberg piece, the quality and type of data used in AI training are pivotal. Biased data equals biased AI – it's that simple.
3. Learning from the Past or Building for the Future?
The reliance on historical data is a double-edged sword. We need AI models that don't just replicate past biases but are equipped to anticipate and adapt to future societal shifts.
And my most radical approach: we replace all AI developers with women 😊
WOMEN AI ACADEMY
Women AI Academy is a gender-equality- and technology-driven learning & development organization.
We are driven by the vision of making AI both ethical and accessible to everyone.
Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, London, UK. He has an extensive track record in systems assurance and safety, security, sustainability, and knowledge assessment/management methodologies, and a background in the design and development of advanced control systems for business and safety-critical industrial applications.
Hessami represents the UK on the European Committee for Electrotechnical Standardization (CENELEC) and International Electrotechnical Commission (IEC) safety systems, hardware, and software standards committees. He was appointed by CENELEC as convener of several Working Groups for the review of the EN50128 Safety-Critical Software Standard and the update and restructuring of the software, hardware, and system safety standards in CENELEC.
Ali is also a member of the Cyber Security Standardisation SGA16, SG24, and WG26 Groups, and founded and chairs the IEEE Special Interest Group in Humanitarian Technologies and the Systems Council Chapters in the UK and Ireland Section. In 2017, Ali joined the IEEE Standards Association (SA), initially as a committee member for the new landmark IEEE 7000 standard focused on “Addressing Ethical Concerns in System Design.” He was subsequently appointed Technical Editor and later Chair of the P7000 working group. In November 2018, he was appointed Vice-Chair and Process Architect of the IEEE’s global Ethics Certification Programme for Autonomous & Intelligent Systems (ECPAIS).
Trish advises and trains organisations internationally on Responsible AI (AI/data ethics, policy, governance), and Corporate Digital Responsibility.
Patricia has 20 years’ experience as a lawyer in data, technology and regulatory/government affairs and is a registered Solicitor in England and Wales, and the Republic of Ireland. She has authored and edited several works on law and regulation, policy, ethics, and AI.
She is an expert advisor on the Ethics Committee of the UK’s Digital Catapult Machine Intelligence Garage, working with AI startups, and an expert advisor (“Maestro”, a title given to only three people in the world) on the IEEE’s CertifAIEd (previously known as ECPAIS) ethical certification panel. She sits on IEEE’s P7003 (algorithmic bias), P2247.4 (adaptive instructional systems), and P7010.1 (AI and ESG/UN SDGs) standards programmes, is a ForHumanity Fellow working on Independent Audit of AI Systems, is Chair of the Society for Computers and Law, and is a non-executive director on the Board of iTechlaw and on the Board of Women Leading in AI. Until 2021, Patricia was on the RSA’s online harms advisory panel, whose work contributed to the UK’s Online Safety Bill.
Trish is also a linguist and speaks English, French, and German fluently.
In 2021, Patricia was listed among the 100 Brilliant Women in AI Ethics™ and named on Computer Weekly’s longlist of the Most Influential Women in UK Technology.