Here's what you need to know about this year’s Stanford AI Index, the new AI Resource Center from NIST, and AI Now Institute’s 2023 report.
View in browser
logo transparent
PAN_Email Newsletter Header_04-1

Hello,

In the absence of standardized AI rules and regulations, where can we turn for guidance when it comes to designing AI that we can trust?  

For now, reliable reports and resource centers may be our best option. 

In this Voices of Trusted AI edition, we highlight several new reports and guidelines published by reputable organizations, including the National Institute of Standards and Technology and Stanford Institute for Human-Centered Artificial Intelligence, that can serve as excellent guardrails for your next AI project.

👉 P.S. The Voices of Trusted AI digest will be moving to a bimonthly frequency after this edition. You can expect to see us in your inbox every other month with the most important trustworthy AI news and resources. 

What Do We Mean by Trusted AI?

Trusted AI is the discipline of designing AI-powered solutions that maximize the value of humans while being more fair, transparent, and privacy-preserving.

Learn More

    The Latest in Trusted AI

    NIST Launches Trustworthy AI Resource Center


    What it's about:
    The National Institute of Standards and Technology (NIST) recently launched their Trustworthy AI Resource Center to provide information, tools, and resources for the development of AI systems that are reliable, secure, and transparent. The center will also promote the ethical use of AI and provide guidance on risk management and evaluation of AI systems. The resource center aims to support stakeholders in government, industry, and academia in developing and implementing trustworthy AI solutions.

    Why it matters: This resource center is an excellent, reliable source of guidance for AI builders looking to develop and deploy trustworthy AI systems. Not only does it provide the tools needed to assess the risks associated with your models, but it also enables you to accurately identify areas for improvement.  

    In the absence of AI regulation, NIST is giving you some assurance that your AI systems are designed with bias, privacy, and fairness in mind—ultimately ensuring you avoid unintended consequences of hastily deployed models. Most importantly, following these guidelines can help your business build trust with customers, investors, and other stakeholders. 

    Although we may not have standard rules and regulations for AI design yet, they're coming. Taking advantage of free resources like this Trustworthy AI Resource Center, can help you stay ahead of what’s to come.  

    View The Resource Center

    Stanford AI Index Report


    What it's about: 
    The Stanford Institute for Human-Centered Artificial Intelligence released its annual AI Index aimed to track and measure the progress of AI research, development, and deployment across academia, industry, and government. The report covers various aspects of AI, including technical progress, education and research, workforce, diversity, ethics and policy, and the economic impact of AI. Here a few key findings from this year’s index:  

    • AI development is now being led by corporations rather than academia, with industry producing 32 significant machine learning models in 2022 compared to just three produced by academia.
    • AI hiring is on the rise, with increased demand for machine learning-related roles across many industrial sectors.   
    • The report highlights a continued lack of diversity in the AI workforce, with women and people of color underrepresented in AI-related roles. 
    • The report also emphasizes the importance of ethical and responsible AI development, with discussions of issues around bias, fairness, and privacy. 
    • AI has the potential to generate significant economic benefits, with estimates that AI-related activity could contribute up to $16 trillion to the global economy by 2030. 

    Why it matters: As AI builders, we can draw a number of conclusions from this year’s AI Index. The fact that AI development is now being led by corporations rather than academia indicates that businesses are increasingly investing in AI and seeing its potential to drive innovation and growth. There is a major likelihood that businesses that fail to adopt AI will fall behind their competitors.  

    The rise in AI hiring implies that there is growing demand for professionals with AI-related skills across many industrial sectors. In the next few years, businesses will need to invest in upskilling their employees to ensure they have the necessary expertise and literacy to make decisions based on AI-powered predictions and compete in an increasingly digital economy.   

    When it comes to diversity in the AI workforce, businesses must do better to ensure that their teams are inclusive and reflect the diversity of the communities they serve. It is also incredibly difficult to test for bias and fairness in AI models when those working with the models are not diverse.  

    View the Report

    Prompt injection: What’s the Worst That Can Happen?

     

    What it's about: In his article, British programmer Simon Willison dives into the dangers of prompt injection. Prompt injection occurs when a prompt is merged with untrusted user input. For some large language model (LLM) applications, prompt injection may not be much of a problem, but for others that have additional capabilities like ReAct pattern, Auto-GPT, ChatGPT Plugins, it can become a dangerous vulnerability.

    For example, an AI assistant prototype that uses ChatGPT API prompts to perform actions like searching email for answers to questions and even sending replies based on dictated instructions can be vulnerable to prompt injection. If an attacker sends a message to this assistant to forward the three most interesting recent emails to their email address and delete them, the assistant will perform the instruction, as it cannot tell that the instruction was not part of the original prompt.  

    Why it matters: While prompt injection may be relatively harmless for low-risk applications, it can become a serious concern for those using LLM applications in high-risk industries, like finance and healthcare. Simon points out that he “hasn’t yet seen a robust defense against this vulnerability which is guaranteed to work 100% of the time”, but there are a number of steps you can take to prevent prompt injection from occurring.

    As with many cybersecurity concerns, one of the easiest actions you can do is educate your teams and stakeholders about the risks and warning signs of prompt injection. Emphasize the importance of securing LLM applications and encourage everyone to take this risk seriously.  

    It’s also a smart idea to implement security controls like input validation, output sanitization, and limiting the permissions granted to LLM applications. Pair these with frequent security assessments to identify vulnerabilities and ensure all controls are working properly.  

    Lastly, to minimize the risk of prompt injection, consider limiting the amount of user input that LLM applications accept. This can involve using pre-defined prompts or restricting the types of inputs that are accepted.

    Read More

    AI Now Report: 2023 Landscape Confronting Tech Power

     

    What it's about: The AI Now Institute, a research institute focused on understanding the social implications of AI and related technologies, published its annual report analyzing the current state of AI. This year, the report includes insights from experts in a variety of fields and covers topics such as AI in healthcare, the impact of AI on work and labor, and the intersection of AI and climate change. 

    Why it matters: This year’s report is a long one (103 pages), but it gives leaders a number of approaches to consider when taking a responsible and ethical approach to AI design and development. Here are a few key themes:  

    • Algorithmic bias. The report highlights ongoing concerns around algorithmic bias and discrimination in AI systems. Be aware of the potential for AI systems to perpetuate and even amplify existing biases, leading to significant legal, reputational, and financial consequences. 
    • Ethical considerations. Similar to the Stanford AI Index, this report also emphasizes the need for ethical awareness and meaningful audits during all stages of AI design.
    • Labor and automation. The report examines the impact of AI and automation on work and labor. Building trust between AI and your workforce starts with education around how it works and what can be done with it. Without this trust, you’ll be left with employees that don’t want to or don’t know how to make decisions with your model.  

    View the Report

    🐼 Team Panda Picks 🐼

     

    Looking for more trustworthy AI news? Here are a few other good reads, hand-picked by our team of data scientists.   

    • Iowa Becomes Sixth State With Its Own Data Privacy Law 
    • Ethical Openness at Hugging Face 
    • Three Important Measures To Ensure Ethical Practices In A World Of Emerging AI
    Facebook
    LinkedIn
    Twitter
    YouTube

    Pandata, 2515 Jay Ave, Ste 101, Cleveland, Ohio 44113, United States, 440-580-0132

    Unsubscribe Manage preferences