Feb 23
Naz
Google Gen vs ChatGPT Gen
Feels like a covert pact among data bandits, doesn't it?
I can't wrap my head around why Google, the titan hoarding humanity's collective wisdom, is so keen on plundering our 'wealth'—our data. They were the gatekeepers of the internet's expanse, a role they played well until ChatGPT burst onto the scene, challenging their throne with a so-called 'enhanced' model. Or should I say, 'CheatGPT'? It's become the dark muse for those seeking shortcuts, the unconscious motivation for cheating our way through complexities.
For my generation, Gen X, Google was a treasure trove. We reveled in the hunt for data, transforming it into personalized knowledge with a dash of creativity. But the younger generations, Millennials and Gen Z, seem to have lost that zest. Their mantra? Why bother with the grind when instant knowledge can be handed to them on a silver platter, courtesy of our friend, CheatGPT.
It's almost as if the brains behind LLM/ChatGPT solutions crafted these marvels out of sheer laziness, catering to their own kind. Picture this: a bunch of entitled dudes, too indifferent to pursue formal education, yet audacious enough to redefine how we access information.
I'm reminded of a German saying: "Not macht erfinderisch" or "Necessity breeds invention." It's ironic, considering the current scenario. Reddit, a cesspool of questionable data, is now Google's latest feast. They plan to gorge on this 'junk' to refine Gemini, their AI brainchild, in hopes of avoiding laughable mishaps.
But here's the kicker: AI solutions like Gemini are being spoon-fed internet data, which, let's face it, is predominantly rubbish. Add to that a dash of socially awkward CEOs and tech nerds, and what do we get? A 'revolutionary' product so half-baked, we're now scrambling to slap on layers of regulations, standards, and guidelines just to make it somewhat palatable.
The irony? In our quest for convenience, we're inadvertently signing up for a mess that demands even more effort to untangle. So much for progress, huh?
My take on this: Yes, AI will replace some human effort. But we humans need to learn both how to build useful AI and how to use it.
Recently, I encountered a senior marketing expert who had been replaced by AI and was navigating a career transition. I'm convinced that companies opting to replace human experts with AI-only solutions will ultimately incur higher expenses in addressing unforeseen issues. According to industry veterans, the rationale behind substituting human talent with AI isn't as solid as presumed. Beyond the substantial initial investment, the ongoing maintenance costs present a significant financial challenge.
Have you heard about the Chevrolet dealer's chatbot that agreed to sell a car (worth at least $60k) for $1? I loved that story and enjoy rereading it every time.
Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, London, UK. He has an extensive track record in systems assurance, safety, security, sustainability, and knowledge assessment/management methodologies. He has a background in the design and development of advanced control systems for business and safety-critical industrial applications.
Hessami represents the UK on the European Committee for Electrotechnical Standardization (CENELEC) and International Electrotechnical Commission (IEC) safety systems, hardware, and software standards committees. He was appointed by CENELEC as convener of several Working Groups for the review of the EN 50128 safety-critical software standard and the update and restructuring of CENELEC's software, hardware, and system safety standards.
Ali is also a member of Cyber Security Standardisation SGA16, SG24, and WG26 Groups and started and chairs the IEEE Special Interest Group in Humanitarian Technologies and the Systems Council Chapters in the UK and Ireland Section. In 2017 Ali joined the IEEE Standards Association (SA), initially as a committee member for the new landmark IEEE 7000 standard focused on “Addressing Ethical Concerns in System Design.” He was subsequently appointed as the Technical Editor and later the Chair of P7000 working group. In November 2018, he was appointed as the VC and Process Architect of the IEEE’s global Ethics Certification Programme for Autonomous & Intelligent Systems (ECPAIS).
Trish advises and trains organisations internationally on Responsible AI (AI/data ethics, policy, governance) and Corporate Digital Responsibility.
Patricia has 20 years’ experience as a lawyer in data, technology and regulatory/government affairs and is a registered Solicitor in England and Wales, and the Republic of Ireland. She has authored and edited several works on law and regulation, policy, ethics, and AI.
She is an expert advisor on the Ethics Committee to the UK's Digital Catapult Machine Intelligence Garage, working with AI startups; an expert advisor "Maestro" (a title given to only three people in the world) on the IEEE's CertifAIEd (previously known as ECPAIS) ethical certification panel; and a member of IEEE's P7003 (algorithmic bias), P2247.4 (adaptive instructional systems), and P7010.1 (AI and ESG/UN SDGs) standards programmes. She is a ForHumanity Fellow working on Independent Audit of AI Systems, Chair of the Society for Computers and Law, and a non-executive director on the Board of iTechlaw and on the Board of Women Leading in AI. Until 2021, Patricia was on the RSA's online harms advisory panel, whose work contributed to the UK's Online Safety Bill.
Trish is also a linguist and speaks English, French, and German fluently.
In 2021, Patricia was listed on the 100 Brilliant Women in AI Ethics™ and named on Computer Weekly’s longlist as one of the Most Influential Women in UK Technology in 2021.