Feb 6 / Naz
How do we get spammy ideas into the AI?
(One) Garbage In. (Millions of) Garbage Out.
Linda Gottfredson and Richard Lynn, controversial figures in our "real world," were never especially famous and have not affected most of us directly; their influence has been felt mainly within scientific-racist networks. Yet their questionable ideas now reach us, our children, and our daily lives through the AI technologies we use, particularly generative AI systems.
"Academic freedom is essential, yet it demands responsible use, especially in areas such as race and intelligence research.
Linda Gottfredson's studies, partially funded by the Pioneer Fund, which is known for supporting scientific racism, underscore the need for rigorous ethical and scientific scrutiny. These factors necessitate a critical assessment of the motivations and potential societal impacts of such research. Understanding the implications of this work is crucial, especially concerning how it might influence social policies and the pursuit of racial equality."
To understand how controversial or "problematic" ideas can end up in the data used to train AI systems, it helps to look at the mechanics of AI training and the nature of the data involved.
AI systems, particularly those based on machine learning and natural language processing, are trained on extensive datasets. These datasets typically consist of a vast array of texts from the internet, including books, articles, websites, and other publicly available material. The objective is to expose the AI to a wide spectrum of human language, encompassing various styles, contexts, and opinions, enabling it to understand and generate human-like responses.
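To make those mechanics concrete, here is a minimal, hypothetical sketch of how raw web text might be gathered and tokenized before training. The directory name and the whitespace tokenizer are assumptions for illustration only; real pipelines crawl billions of pages and use subword tokenizers.

```python
# Illustrative sketch only: gathering scraped text into a training corpus.
# The path pattern and tokenizer are invented placeholders, not any
# vendor's actual pipeline.
import glob

def load_corpus(pattern="scraped_pages/*.txt"):
    """Concatenate raw text files scraped from the web into one corpus."""
    corpus = []
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8", errors="ignore") as f:
            corpus.append(f.read())
    return "\n".join(corpus)

def tokenize(text):
    """Naive whitespace tokenization; real systems use subword tokenizers."""
    return text.lower().split()

if __name__ == "__main__":
    tokens = tokenize(load_corpus())
    print(f"{len(tokens)} tokens ready for training")
```

Note that nothing in this step inspects what the text *says*; whatever the crawl collected, the model will be trained on.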
However, this broad exposure also means that AI systems can encounter biased, incorrect, or controversial content. When such content is part of the training data, the AI may learn to replicate similar patterns in its responses. It's crucial to note that the inclusion of such content doesn't imply endorsement by the AI or its developers; rather, it reflects the diversity and complexity of human thought and language.
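A toy model makes the "garbage in, garbage out" dynamic visible. The corpus below is invented for this example; the point is that a statistical model reproduces whatever associations dominate its data, with no endorsement or intent involved.

```python
# A toy bigram "language model": whatever claims appear most often in
# the training text reappear, statistically, in the model's output.
# The corpus is invented for illustration.
import random
from collections import defaultdict

corpus = (
    "the study claims group a scores lower . "
    "the study claims group a scores lower . "
    "the survey finds group b scores higher ."
)

# Count bigram transitions: which word follows which.
transitions = defaultdict(list)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    transitions[prev].append(nxt)

# Generate text. The repeated (biased) claim dominates the output
# simply because it dominates the data.
random.seed(0)
word = "the"
out = [word]
for _ in range(8):
    word = random.choice(transitions[word])
    out.append(word)
print(" ".join(out))
```

Scaled up from bigrams to billions of parameters, the same principle holds: frequency in the data becomes probability in the output.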
To mitigate the risks associated with harmful or misleading content, AI developers use several strategies:
✔ Curating Training Data: Developers curate training data to exclude or minimize harmful, biased, or low-quality content, combining automated filters with manual review (see the sketch after this list).
✔ Algorithmic Adjustments: Refining the AI's algorithms helps it identify and avoid replicating problematic content.
✔ Ethical Guidelines and Policies: Setting ethical guidelines and policies for AI development and usage is critical. This includes guidelines on handling sensitive topics and the type of content that should be avoided.
✔ Continuous Monitoring and Updates: AI systems are continually monitored and updated to ensure they align with ethical standards and societal norms. This includes retraining the AI with new, cleaner datasets and refining response generation mechanisms.
✔ Transparency and Accountability: Ensuring transparency in AI training processes and maintaining accountability for the outcomes of AI interactions is vital.
✔ Community Feedback: Input from users and the community can help identify areas where the AI might be underperforming or replicating undesirable content, allowing for targeted improvements.
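As a concrete illustration of the first point, curating training data, here is a minimal sketch of a pre-training document filter. The blocklist terms, thresholds, and heuristics are invented placeholders; production pipelines combine trained classifiers, deduplication, and human review.

```python
# Illustrative curation step: drop documents that match a blocklist or
# fail crude quality heuristics before they enter the training set.
# All terms and thresholds below are invented for this sketch.
BLOCKLIST = {"racial hierarchy", "race science"}  # illustrative terms

def passes_filters(doc: str) -> bool:
    text = doc.lower()
    if any(term in text for term in BLOCKLIST):
        return False                      # flagged phrase: exclude
    words = text.split()
    if len(words) < 20:
        return False                      # too short to be useful
    if len(set(words)) / len(words) < 0.3:
        return False                      # highly repetitive spam
    return True

docs = [
    "A long, well-sourced article about model evaluation methods " * 3,
    "buy now buy now buy now " * 10,
]
clean = [d for d in docs if passes_filters(d)]
print(f"kept {len(clean)} of {len(docs)} documents")
```

Even a filter like this is blunt: it catches obvious spam and flagged phrasing, but subtly biased prose passes through, which is why curation is only one layer among the strategies above.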
For more detailed insights into these processes and the challenges involved, exploring "Artificial Intelligence: A Guide for Thinking Humans" by Melanie Mitchell and "Rebooting AI: Building Artificial Intelligence We Can Trust" by Gary Marcus and Ernest Davis might be beneficial. These books offer a comprehensive overview of how AI systems are trained and the complexities involved in ensuring they are ethical and unbiased.