Welcome back to Voices of Trusted AI! In our December edition, we share more about the types of people necessary for Trusted AI decisions, new AI legislation proposed by the European Union, and more.
Don't forget: At the bottom of this digest, you have the opportunity to ask a data scientist any question on your mind. If you have a question about Trusted AI, chances are, someone else does too.
Thanks for joining us on our journey to design and develop more responsible AI systems!
Cal Al-Dhubaib, CEO
What Do We Mean by Trusted AI?
Trusted AI is the discipline of designing AI-powered solutions that maximize the value of humans while being more fair, transparent, and privacy-preserving.
What it's about: The Department of Defense released a set of responsible AI guidelines intended to promote transparency and hold tech developers to ethical standards on all future government AI projects.
Why it matters: With access to highly sensitive information, like surveillance video footage and personal records, it only makes sense that the government set ethical standards for the design and use of AI. But, will these guidelines actually be used? Or are they simply a means to placate those with ethical concerns?
What it's about: In this three-part series, DataRobot and Amazon Web Services dive into the three essential components of any Trusted AI system: people, process, and technology. This article focuses on people.
Why it matters: Business leaders and data scientists alike must realize that humans are a critical part of any successful Trusted AI solution. People must build AI with humans in mind, and this article does a great job of explaining the types of individuals needed when designing Trusted AI.
What it's about: Earlier this year, the European Union proposed new artificial intelligence legislation focused on regulating and reviewing AI systems, especially those deemed "high risk" in fields like education and healthcare.
Why it matters: If you're building an AI system that could affect an EU citizen, then you need to pay attention. This will be like GDPR—it initially caused a lot of headaches, but also led to meaningful changes across websites that make it easier to understand how data is collected and how to opt out of it. A similar wave of change is coming to AI accountability.
Meet Pamela Jasper, Founder of Jasper Consulting.AI
Pamela is the founder of Jasper Consulting.AI, a technology firm providing AI ethics governance advisory and audit services to the AI community.
She is an expert in Model Risk Management Governance, with decades of experience developing capital markets trading and quantitative risk management systems for investment banks and stock exchanges in New York, Tokyo, London, and Frankfurt. Pamela also provides AI Ethics executive advisory and program management as an outsourced AI Ethics Officer.
While most of the work Pamela's firm performs is confidential to her clients, two of her contributions have been publicly recognized.
Her policy proposal for Squawk Sierra™, a smart-models method for AI regulatory and end-user transparency, was recently recognized by Stanford's Institute for Human-Centered AI (HAI).
Drawing on her experience developing leading bank and clearinghouse programs for model risk, model inventory, model ops, model board governance, and policies and procedures, Pamela also created FAIR™, a Framework for AI Risk. FAIR™ builds on the Federal Reserve's SR 11-7 model risk guidance and was presented at NeurIPS in 2020 and 2021.
Some of the components of FAIR™ include:
Model Risk Management – History, Definition, etc.
Model Risk Principles, Framework
AI Risk Global regulatory landscape – Policies, Gaps, Tools
Developing the Model Inventory – Model and Data objects, relationships, interconnectivity
Model Risk and AI Ethics Tools
Model Tiers
Model Grade
Model Validation – Effective model validation techniques
Responsible AI (RAI) and Model Risk Management (MRM)
Model and Data Risk Documentation
AI Transparency – SmartModels, Events Databases, XAI, etc.
AI Risk vs AI Ethics
Ultimately, Pamela brings expertise in AI governance, model risk, program management, product management, and technology audit to the AI industry.
What questions should I "ask" my data before building a Trusted AI solution?
Before starting with your data, it is important to think through the question you are asking: Who are the end users? How will they use the solution? Who will be affected by its decisions? Where are the opportunities for disparate impact or bias?
Once you have a clear vision of the end goal, it is crucial to sit down with your data for some due diligence. Below are some questions to consider, categorized into the three pillars of Trusted AI: Transparency, Privacy, and Fairness.
Transparency: What is the source of your data? What was disclosed to the individuals in the data set about current and future uses? What are the near- and long-term plans for data collection and validation?
Privacy: How is the privacy of individuals protected during data collection, storage, and use? What kind of informed consent and opt-in/opt-out process exists? Which regulations, such as GDPR or HIPAA, come into play?
Fairness: What groups may be under- or over-represented in your data set? How was the data collected, and what potential biases, societal or statistical, might that have introduced? How is fairness being measured and monitored?
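As a concrete starting point for these fairness questions, here is a minimal sketch of a representation check in Python with pandas. The file name and sensitive attributes below are hypothetical placeholders for your own data, and the 5% flag is an arbitrary illustration rather than an accepted standard.

```python
import pandas as pd

# Hypothetical example: "applicants.csv", "gender", and "age_group" are
# stand-ins for your own data set and its sensitive attributes.
df = pd.read_csv("applicants.csv")
sensitive_attributes = ["gender", "age_group"]

for col in sensitive_attributes:
    # Share of each group, keeping missing values visible; a large share
    # of missing values is often the first sign of a collection bias.
    shares = df[col].value_counts(normalize=True, dropna=False)
    print(f"\nRepresentation by {col}:")
    print(shares.round(3))

    # Flag groups below an arbitrary 5% threshold; small groups tend to
    # be both poorly modeled and poorly evaluated.
    small = shares[shares < 0.05]
    if not small.empty:
        print(f"Under-represented groups in {col}: {list(small.index)}")
```

A check like this does not measure fairness by itself, but it quickly surfaces the groups for which your model's behavior deserves the closest scrutiny.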
While these questions are by no means exhaustive, sitting down with your data for this kind of initial examination is a good early step toward reducing risk down the road.