Welcome back to Voices of Trusted AI! In our November edition, we uncover the unfortunate consequences of unchecked machine learning algorithms (read: Facebook whistleblower), why data privacy is more important than ever, and more.
Reminder: At the bottom of this digest, you have the opportunity to ask a data scientist any question on your mind and we'll feature the response in an upcoming edition.
Thanks for your continued commitment to creating and deploying more responsible AI systems!
Cal Al-Dhubaib, CEO
What Do We Mean by Trusted AI?
Trusted AI is the discipline of designing AI-powered solutions that maximize human value while being fair, transparent, and privacy-preserving.
What it's about: MIT Technology Review's Karen Hao breaks down the latest news in the Facebook whistleblower case. If you haven't been following what's happening with Facebook's machine learning algorithms, this article will bring you up to speed.
Why it matters: Facebook is an unfortunate example of the very real dangers and consequences of machine learning algorithms left unchecked by humans. Facebook's use of AI has grown so fast that not even its own engineers can fully explain how content is served to users. As a result, when issues arise (especially in non-English-speaking countries), the team has no viable way to fix them.
What it's about: In response to Twitter's recent findings of unexplained bias in its AI algorithms, VentureBeat dives deep into the challenges data scientists and business leaders face when attempting to avoid bias in their own AI solutions.
Why it matters: Human intervention is necessary to create Trusted AI solutions, yet it is often at these intervention points that we unintentionally introduce bias into training data. Data scientists must carefully mine for bias in their AI solutions both during development and after deployment to avoid the kind of unexplained discriminatory results Twitter experienced.
What it's about: Digital transformation has been a hot buzzword for years, but the pandemic has significantly accelerated the need for sound digital infrastructures, especially among governments.
Why it matters: Cloudera eloquently explains how digital government is expanding rapidly. What the article doesn't emphasize is that this growing use of personal data brings an ever-increasing need for privacy and ethical standards. Even if you're not a government entity, this article highlights just how critical an ethical data infrastructure is to digital transformation.
Meet Amy Neumann, Executive Director of Resourceful Nonprofit
Amy Neumann is a social impact entrepreneur, keynote speaker, author, trainer, technology strategist, consultant, and artist. After two decades in technology with organizations like AT&T, Yahoo, and Case Western Reserve University, she founded the nonprofit Resourceful Nonprofit (and its subsidiary, Technology Inclusion) to help 501(c)3 nonprofits accomplish their goals faster while also being proactive about diversity, equity, and inclusion.
Currently, Amy is earning a master's degree in Law, Justice, and Culture at Ohio University, with a research emphasis on reducing bias in artificial intelligence, and she holds a certificate in Diversity & Inclusion from Cornell University. Her 2018 Simon & Schuster book, “Simple Acts to Change the World: 500 Ways to Make a Difference,” is a tribute to the many great ideas she has discovered on the topics of social good, social justice, equity, technology for good, and volunteering through her work and philanthropy. Connect with Amy.
How do I know if I am building a Trusted AI solution?
AI is an extraordinarily powerful technology that is becoming more pervasive every day, but AI shouldn't replace human decision-making; it should augment it.
Ask yourself:
Is your AI transparent?
Do you understand where your data comes from and how your AI makes decisions, or does it live in a black box?
Is your AI fair?
There are numerous examples of AI discriminating against a class of people in completely unintentional ways. If you don't look for bias, you will never be able to mitigate it.
Do you trust the people making your AI? The most important aspect of building Trusted AI is working with a team that lives its principles. You can't build Trusted AI without a team that works with you to create explainable, fair solutions.
AI makes so many important decisions that we have to treat every decision we make when creating AI as equally important.