Jan 6 / Naz

Why Do We Have (Harmful) Bias in AI?

Ever wondered why an AI might depict a darker-skinned woman as a cleaning person and a light-skinned man as a CEO?
AI, in its essence, is a mirror of our inputs and processes. When these inputs carry historical biases or the processes lack diversity, the AI reflects these flaws. It's not just about the code; it's about the context in which that code is written and the data it learns from.
The attached study on bias in generative AI sheds light on the systemic issues ingrained in the technology we are rapidly integrating into our lives.
Here’s my perspective:
1. The Diversity Gap: The article reaffirms the urgent need for diversity in AI development. When a homogeneous group designs AI, it tends to perpetuate that group's limited perspectives and biases.
2. Data Quality Matters: As highlighted in the Bloomberg piece, the quality and type of data used in AI training are pivotal. Biased data yields biased AI – it's that simple (see the toy sketch after this list).
3. Learning from the Past or Building for the Future? Reliance on historical data is a double-edged sword. We need AI models that don't just replicate past biases but are equipped to anticipate and adapt to future societal shifts.
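To make point 2 concrete, here is a minimal, purely illustrative sketch (not taken from the study or the Bloomberg piece): a classifier trained on synthetic "historical hiring" data that favoured one group simply reproduces that gap in its predictions. The dataset, feature names, and thresholds are all made up for illustration.

```python
# A toy sketch of how skew in training data resurfaces in a model's
# predictions. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical hiring" data: group 0 and group 1 are equally
# skilled, but past decisions favoured group 0.
group = rng.integers(0, 2, size=n)              # protected attribute
skill = rng.normal(0, 1, size=n)                # same distribution for both groups
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, size=n)) > 0.5

# Train on features that encode group membership, as real-world proxies often do.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns and reproduces the historical gap between the two groups.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted hiring rate for group {g}: {rate:.2f}")
```

Nothing in the model is "malicious"; it is simply faithful to the biased record it was trained on, which is exactly the problem.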
And my most radical approach: we replace all AI developers with women 😊