Feb 2 · Naz
How Much Can We Trust "Black Box" AI Solutions?
I'm a big fan of technology, but when it comes to personal or confidential information, I'm wary of AI note-taking and similar tools. The privacy concerns, both legal and ethical, are significant when using AI solutions for personal tasks.
In our daily interactions, whether we're talking, writing, drawing, taking photos, making videos, or simply moving, AI solutions can record our actions, depending on how they are used.
The problem is that we, as regular users, often aren't aware of the implications. We don't know how the data collected by AI apps or solutions is used by their providers, nor are we certain whether these tools meet ethical and quality standards.
For instance, imagine if the health data we collect through our smartphone apps were shared with health insurance companies or employers, or if conversations with our digital assistants were collected and used to create inappropriate content from our voices or deepfake images.
When buying a device, we thoroughly check its safety and quality. However, we don't apply the same level of diligence to AI solutions, which are arguably more critical. In the case of a device, we can personally verify its safety and check for any signs of it being stolen or illegally acquired.
With AI, the situation is quite different. We often have no insight into what's inside the "black box": how the AI was developed, whether the data used to train it was obtained ethically, or what exactly happens to our data once we input it. All we see is the output, and we can never be completely certain of its accuracy or reliability.
In summary, while we can assess and understand the risks of physical objects like cars, AI solutions present a more complex challenge due to their opaque nature and the uncertainty about how our data is processed and used.
Let's stay conscious of the risks involved in using "black box" AI solutions. It's advisable to opt for those that are open source and comply with ethical and quality standards.
Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, London, UK. He has an extensive track record in systems assurance and safety, security, sustainability, and knowledge assessment/management methodologies, with a background in the design and development of advanced control systems for business and safety-critical industrial applications.
Hessami represents the UK on the European Committee for Electrotechnical Standardization (CENELEC) and International Electrotechnical Commission (IEC) committees for safety systems and hardware and software standards. He was appointed by CENELEC as convener of several working groups for the review of the EN 50128 safety-critical software standard and for the update and restructuring of CENELEC's software, hardware, and system safety standards.
Ali is also a member of the Cyber Security Standardisation SGA16, SG24, and WG26 groups, and he founded and chairs the IEEE Special Interest Group in Humanitarian Technologies and the Systems Council Chapters in the UK and Ireland Section. In 2017, Ali joined the IEEE Standards Association (SA), initially as a committee member for the landmark IEEE 7000 standard, "Addressing Ethical Concerns in System Design." He was subsequently appointed Technical Editor and later Chair of the P7000 working group. In November 2018, he was appointed Vice-Chair and Process Architect of the IEEE's global Ethics Certification Programme for Autonomous & Intelligent Systems (ECPAIS).
Trish advises and trains organisations internationally on Responsible AI (AI/data ethics, policy, governance), and Corporate Digital Responsibility.
Patricia has 20 years’ experience as a lawyer in data, technology and regulatory/government affairs and is a registered Solicitor in England and Wales, and the Republic of Ireland. She has authored and edited several works on law and regulation, policy, ethics, and AI.
She is an expert advisor on the Ethics Committee of the UK's Digital Catapult Machine Intelligence Garage, working with AI startups, and is an expert advisor ("Maestro," a title given to only three people in the world) on the IEEE's CertifAIEd (previously ECPAIS) ethical certification panel. She sits on the IEEE's P7003 (algorithmic bias), P2247.4 (adaptive instructional systems), and P7010.1 (AI and ESG/UN SDGs) standards programmes, is a ForHumanity Fellow working on Independent Audit of AI Systems, is Chair of the Society for Computers and Law, and is a non-executive director on the boards of iTechlaw and Women Leading in AI. Until 2021, Patricia served on the RSA's online harms advisory panel, whose work contributed to the UK's Online Safety Bill.
Trish is also a linguist and speaks English, French, and German fluently.
In 2021, Patricia was listed among the 100 Brilliant Women in AI Ethics™ and named on Computer Weekly's longlist of the Most Influential Women in UK Technology.