Morning Blast: Biden, Tech Agree on AI Development Standards


The White House announced Friday morning that the Biden administration has reached an agreement with seven companies to “seize the tremendous promise and manage the risks” associated with artificial intelligence systems. The seven companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) have voluntarily committed “to help move toward safe, secure, and transparent development of AI technology.”

According to the statement, protecting Americans’ rights and safety is the driver of the agreement, and the administration is encouraging industry leaders “to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

So, what are those standards? Here is the list.

  • Ensuring Products are Safe Before Introducing Them to the Public

      • Test the internal and external security of their AI systems before their release.

      • Share information across the industry and with governments, civil society, and academia on managing AI risks.

  • Building Systems that Put Security First

      • Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

      • Facilitate third-party discovery and reporting of vulnerabilities in their AI systems.

  • Earning the Public’s Trust

      • Develop robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system.

      • Report publicly their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.

      • Prioritize research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination and protecting privacy.

      • Develop and deploy advanced AI systems to help address society’s greatest challenges.

The first two standards outline pretty much what any software development process is or should be. It is the third, earning the public’s trust, that is, dare we say it, innovative.

A watermarking system that flags AI-generated content might help people identify AI-generated images, but watermarking does not solve the problem of AI-generated mistakes or outright lies in written material.

Reporting weaknesses and offering guidance on appropriate use is harmless, but that is all it is.

Then there is avoiding bias and discrimination and protecting privacy. If this were easy, these companies would already be doing it. That is not to say it cannot or will not be done, but it will take time and it will be costly.

Finally, using AI to address the world’s greatest challenges is now de rigueur for any tech company doing anything. Get back to us when AI generates workable solutions to the climate crisis.

The White House’s statement and links to more detailed documents are available at


Thank you for reading! Have some feedback for us?
Contact the 24/7 Wall St. editorial team.