Naz · Jan 1
Deepfakes: Some AI Applications Must Be Made Illegal
Whenever I consider the potential risks of AI, my thoughts inevitably turn to those closest and most vulnerable to me: my mother (77), who owns a smartphone yet has no real grasp of its complexities, and my daughter (16), deeply immersed in TikTok and other social media platforms. Influenced by online role models, she endured a two-year battle with anorexia, a struggle that outweighed even the guidance of her own parents and teachers.
Imagine a world where deepfakes target not just public figures but also our most vulnerable: children, teenagers, unsuspecting grandparents, even altruistic public servants. This technology holds the power to manipulate and harm in deeply personal ways.
One of my favorite Middle Eastern adages reminds us: 'If we could learn from history, we wouldn't repeat it.' Yet I fear that AI may follow the path of past technological advances, particularly nuclear technology, where significant harm preceded control.
Reflect on the tragic lessons of Hiroshima and Nagasaki (1945), a man-made catastrophe that eventually led to control mechanisms like the Nuclear Non-Proliferation Treaty (NPT, opened for signature in 1968).
This history poses a crucial question: can we establish effective controls for AI before experiencing comparable disasters? I fear we are not taking the question seriously enough.
The challenge with AI lies in its accessibility and speed. Unlike nuclear technology, which requires substantial investment and time, AI can be misused by anyone with internet access. Creating and disseminating a fake representation of reality (a deepfake) can happen at lightning speed and reach a global audience. Once it is out there, rectifying the misinformation becomes a daunting task.
Regulating AI globally is a complex and lengthy process. As a short-term measure, making the development and use of technologies like deepfakes illegal should be a priority to mitigate immediate risks.
Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, London, UK. He has an extensive track record in systems assurance and safety, security, sustainability, and knowledge assessment/management methodologies, and a background in the design and development of advanced control systems for business and safety-critical industrial applications.
Hessami represents the UK on the European Committee for Electrotechnical Standardization (CENELEC) and International Electrotechnical Commission (IEC) committees for safety systems and for hardware and software standards. CENELEC appointed him convener of several working groups for the review of the EN 50128 safety-critical software standard and for the update and restructuring of CENELEC's software, hardware, and system safety standards.
Ali is also a member of the Cyber Security Standardisation SGA16, SG24, and WG26 Groups, and he founded and chairs the IEEE Special Interest Group in Humanitarian Technologies and the Systems Council Chapters in the UK and Ireland Section. In 2017, Ali joined the IEEE Standards Association (SA), initially as a committee member for the landmark IEEE 7000 standard, “Addressing Ethical Concerns in System Design.” He was subsequently appointed Technical Editor and later Chair of the P7000 working group. In November 2018, he was appointed Vice-Chair and Process Architect of the IEEE’s global Ethics Certification Programme for Autonomous & Intelligent Systems (ECPAIS).
Trish advises and trains organisations internationally on Responsible AI (AI/data ethics, policy, governance), and Corporate Digital Responsibility.
Patricia has 20 years’ experience as a lawyer in data, technology, and regulatory/government affairs and is a registered Solicitor in England and Wales and in the Republic of Ireland. She has authored and edited several works on law, regulation, policy, ethics, and AI.
She is an expert advisor on the Ethics Committee of the UK Digital Catapult’s Machine Intelligence Garage, working with AI startups, and serves as an expert advisor (“Maestro”, a title held by only three people worldwide) on the IEEE’s CertifAIEd (previously ECPAIS) ethical certification panel. She sits on IEEE’s P7003 (algorithmic bias), P2247.4 (adaptive instructional systems), and P7010.1 (AI and ESG/UN SDGs) standards programmes, is a ForHumanity Fellow working on Independent Audit of AI Systems, is Chair of the Society for Computers and Law, and is a non-executive director on the boards of ITechLaw and Women Leading in AI. Until 2021, Patricia sat on the RSA’s online harms advisory panel, whose work contributed to the UK’s Online Safety Bill.
Trish is also a linguist and speaks English, French, and German fluently.
In 2021, Patricia was listed among the 100 Brilliant Women in AI Ethics™ and named on Computer Weekly’s longlist of the Most Influential Women in UK Technology.