Here's what industry experts have to say about the dangers of creating too much hype around AI advancements. 

Hello,

In recent months, we’ve seen the dangers of AI overhype unfold on a very public scale. The race to build generative AI models has led to oversights in user security and privacy. The pressure to deploy advancements prematurely has also resulted in unchecked bias and discrimination. And most noticeably, overhype has led to unrealistic expectations about what the tech can really do.

Here’s the truth: Overhype isn’t helping anybody. But don’t take our word for it. Here’s what others in the industry are saying.

From MIT Technology Review senior reporter Melissa Heikkilä: “The systems that are being rushed out today are going to cause a different kind of havoc altogether in the very near future.

“Tech companies are embedding these deeply flawed models into all sorts of products, from programs that generate code to virtual assistants that sift through our emails and calendars.

“In doing so, they are sending us hurtling toward a glitchy, spammy, scammy, AI-powered internet.”

From AI author Meredith Broussard: “This kind of imagined endgame of ‘Oh, we’re just going to use AI for everything’ is not a future that I cosign on.”

Let these words of wisdom carry over into the way your business introduces and adopts AI. Don’t oversell the capabilities of the technology, but rather educate and prepare your teams for how it can realistically augment your business. 

What Do We Mean by Trusted AI?

Trusted AI is the discipline of designing AI-powered solutions that maximize the value of humans while being more fair, transparent, and privacy-preserving.

Learn More

    The Latest in Trusted AI

    Core Views on AI Safety: When, Why, What, and How


    What it's about:
    In this article from AI safety and research company Anthropic, the team discusses why they anticipate rapid AI progress and very large impacts from AI, and how that led their company to be concerned about AI safety.

    They argue that ensuring AI safety requires not only technical solutions, but also an understanding of the social, economic, and political context in which AI is developed and used. The article also explores a range of topics related to AI safety, including the need for transparency and explainability in AI systems, the importance of alignment between human and machine values, and the potential risks posed by advanced forms of AI, such as superintelligent AI.

    Ultimately, Anthropic emphasizes the need for interdisciplinary collaboration and ongoing research to ensure AI is developed in a way that benefits humanity.

    Why it matters: Anthropic offers a pretty smart outlook on AI’s progress and impact in the coming decade. We’ve already started to see evidence of “competitive AI races” with generative AI, and can only assume that more will occur in the near future—ultimately resulting in hastily launched, unchecked systems.

    Now is the time for leaders—especially those in high-risk industries—to step up and take responsibility for the potential risks and ethical implications of using AI in their operations.

    The article emphasizes the need for AI systems to be aligned with human values and to prioritize safety, transparency, and explainability. These principles are critical for ensuring that AI technologies are developed and deployed in a responsible and ethical manner.

    As a leader, understanding these principles can help you make the right decisions about the adoption and use of AI systems. By prioritizing safety and ethical considerations, you can minimize the risks associated with AI and help ensure that these technologies are used in a way that benefits both your organization and society as a whole.

    View Their Core Views

    AI Experts’ Plea For Less Hype


    What it's about:
    A number of experts have been calling for less hype around generative AI and other AI advances. The following interviews are two notable examples.

    This interview with Meredith Broussard, author of "Artificial Unintelligence: How Computers Misunderstand the World," highlights her views on the limitations of artificial intelligence and the need for a more nuanced approach to the technology. Broussard emphasizes that AI is not a panacea and that it is essential to consider the social and cultural context in which it is deployed. She also discusses the importance of ethical considerations in AI development, including the need for transparency and accountability. Overall, Broussard argues that we need to approach AI with a critical eye and be mindful of its limitations and potential biases.

    Ahead of GPT-4’s release, OpenAI’s chief technology officer Mira Murati repeatedly called for less hype. She believes that the technology should be approached with caution and that its limitations and potential biases should be recognized. Murati also argues that the development of GPT-4 should prioritize societal benefits and ethical considerations, rather than simply chasing breakthroughs.

    Why it matters: Even though GPT-4 was launched on March 14, Murati’s and Broussard’s messages still ring true as we continue to see an explosion of advancements in AI.

    Overhype is dangerous.

    It leads people to form unrealistic expectations about what AI can do, ultimately resulting in disappointment and loss of trust in the technology (something we often see among AI-building companies). Overhype can also spread misinformation, making it harder for everyone to understand how AI works. And, as Murati notes, the race to develop AI should not come at the expense of other important areas like cybersecurity, privacy, and ethics.

    While these messages were primarily directed at the public and the media, you should be considering them in the context of your own business as well. Are you overpromising how AI can improve business processes? Will your stakeholders be disappointed in the results? Will they understand how the model arrived at those results?

    Meredith Broussard's Interview | Mira Murati's Interview

    New Voluntary Generative AI Guidelines

     

    What it's about: In an effort to create and share AI-generated content more responsibly, major companies like OpenAI, TikTok, and Adobe have subscribed to a set of voluntary guidelines issued by the Partnership on AI (PAI).

    The goal of the nonprofit’s recommendations is to “ensure that synthetic media is not used to harm, disempower, or disenfranchise, but rather to support creativity, knowledge sharing, and commentary.” One of the most important guidelines is a commitment to research and build ways to tell users when they’re interacting with something that’s been generated by AI.
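
    To make that disclosure guideline concrete, here’s a minimal sketch, in Python, of what a user-facing label could look like. The class, field names, and label format below are our own illustrative assumptions; PAI’s guidelines don’t prescribe any specific implementation.

        # Hypothetical sketch of disclosure: labeling AI-generated text
        # before it is shared. All names here are illustrative and not
        # part of PAI's recommendations.
        from dataclasses import dataclass
        from datetime import datetime, timezone

        @dataclass
        class SyntheticMediaItem:
            content: str
            model_name: str  # hypothetical field: which generator produced this

            def with_disclosure(self) -> str:
                # Prepend a plain-language label so users know the content is
                # AI-generated and when it was labeled.
                stamp = datetime.now(timezone.utc).date().isoformat()
                return f"[AI-generated by {self.model_name}, {stamp}]\n{self.content}"

        item = SyntheticMediaItem(content="Draft copy ...", model_name="example-model")
        print(item.with_disclosure())

    Even a label this simple makes the “tell users” guideline actionable; the harder, still-open research questions are around disclosures that survive cropping, re-encoding, and re-sharing.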

    Why it matters: The word that’s bothering most ethical AI advocates is “voluntary.” As Hany Farid, a professor at the University of California, Berkeley, put it: “Voluntary guidelines and principles rarely work.”

    And we’d have to agree. While these recommendations are a great starting point for generative AI solutions, it’s hard to enforce compliance without more stringent legislation in place.

    The guidelines from PAI also fail to mention mitigating toxicity in the models’ datasets, one of the most significant sources of harm in generative AI systems. At the end of the day, this could be a step in the right direction, but AI-building companies must adopt a more proactive and serious approach to ethical AI design if they want to truly prevent harm.

    Read More

    60% of Patients Uncomfortable With AI in Healthcare

     

    What it's about: A recent study by the Pew Research Center surveyed 11,000 U.S. adults and found that 54% were not comfortable with the idea of using AI to analyze their medical records, while 52% were not comfortable with the idea of receiving a diagnosis or treatment recommendation from AI. And generally, older adults and those with less education were less comfortable with AI in healthcare than younger and more educated individuals.

    Pew also addressed four specific applications of AI in healthcare, asking respondents how comfortable they’d feel if the technology was used for skin cancer screenings, pain management recommendations, mental health chatbots, and surgical robots.

    While a majority said they would want AI technology for skin cancer detection, large shares said they would not feel comfortable being the subject of the other use cases.

    Why it matters: The findings of this study underscore the crucial need for healthcare providers and AI developers to build patients’ trust in AI. Without it, we will continue to see low adoption and success rates. It’s also important to address the concerns the public has expressed about AI in healthcare: focus on developing transparent and explainable models that can provide clear insights into how the technology arrives at its recommendations.
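
    If you’re unsure where to start, the sketch below shows one simple, model-agnostic way to surface what drives a model’s outputs: permutation importance. The dataset, model, and scikit-learn calls are illustrative stand-ins of our own, not anything drawn from the Pew study.

        # Illustrative sketch: rank the features a trained model relies on by
        # shuffling each feature and measuring the drop in test accuracy.
        # The dataset and model below are placeholders for your own.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        # A large mean importance means predictions lean heavily on that feature.
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
        for name, score in ranked[:5]:
            print(f"{name}: {score:.3f}")

    A ranking like this won’t explain an individual recommendation on its own, but it gives stakeholders a concrete view of what a model pays attention to, which is a first step toward the transparency patients are asking for.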

    This study also highlights the need to consider demographic differences in the perception of AI in healthcare. If those with less education are less comfortable with AI, then building awareness of its benefits and limitations could help establish the trust you need for acceptance. Your messaging and other efforts may also need to be personalized to each of your audience segments and their comfort levels with AI.

    Read the Full Findings

    🐼 Team Panda Picks 🐼

     

    Looking for more trustworthy AI news? Here are a few other good reads, hand-picked by our team of data scientists.   

    • The Healthcare Dive Outlook on 2023 
    • Why Ethical AI Is Critical at the Data and Modeling Layers To Prevent Bias 
    • Open Letter Calling for AI ‘Pause’ Shines Light on Fierce Debate Around Risks vs. Hype

    Pandata, 2515 Jay Ave, Ste 101, Cleveland, Ohio 44113, United States, 440-580-0132

    Unsubscribe | Manage preferences