Whenever I consider the potential risks of AI, my thoughts turn to those closest and most vulnerable to me: my 77-year-old mother, who owns a smartphone yet has no real understanding of how it works, and my 16-year-old daughter, deeply influenced by TikTok and other social media platforms. Shaped by online role models, she endured a two-year battle with anorexia, a struggle whose pull outweighed even the guidance of her parents and teachers.
Imagine a world where deepfakes target not just public figures but our most vulnerable: children, teenagers, unsuspecting grandparents, even well-meaning public servants. This technology holds the power to manipulate and harm in deeply personal ways.
One of my favorite Middle Eastern adages reminds us: 'If we could learn from history, we wouldn't repeat it.' Yet I fear AI may follow the path of earlier technological advances, particularly nuclear technology, where significant harm preceded control.
Reflect on the tragic lessons of Hiroshima and Nagasaki (1945), man-made catastrophes that led to the establishment of mechanisms like the Nuclear Non-Proliferation Treaty (NPT, opened for signature in 1968).
This history poses a crucial question: can we establish effective controls for AI before experiencing a comparable disaster? I fear we won't care enough until we do.
The challenge with AI lies in its accessibility and speed. Unlike nuclear technology, which requires substantial investment and time, AI can be misused by anyone with internet access. Creating and disseminating a fake representation of reality (a deepfake) can happen at lightning speed and reach a global audience. Once it is out there, rectifying the misinformation becomes a daunting task.
Regulating AI globally is a complex and lengthy process. As a short-term measure, outlawing the malicious creation and distribution of deepfakes should be a priority to mitigate the most immediate risks.