The executive order issued by the US president is the administration's most sweeping step yet to address the risks posed by AI, on issues ranging from national security to competition and consumer privacy. The measure aims to mobilize agencies across Washington, including the Departments of Commerce, Energy, and Homeland Security.


Companies whose artificial intelligence models could pose a threat to U.S. national security will be required to explain how they are securing their tools under Joe Biden's sweeping executive order aimed at limiting the risks associated with the technology.

“To unleash AI’s potential and avoid the risks, we have to manage this technology; there is no way around it,” Biden declared.

The White House will invoke the Defense Production Act, a Cold War-era law most recently used at the height of the Covid-19 pandemic, to compel companies developing AI models that pose a serious risk to national security, economic stability, or public health to notify the government about how these systems are being trained and to share the results of safety tests.

Since the launch of GPT-4 early last year, global leaders have been trying to work out how to keep AI development under control. Back in May of last year, the Biden administration met with numerous AI and tech giants, many of which are now part of the consortium: OpenAI, Google, Microsoft, Nvidia, Anthropic, Hugging Face, IBM, Stability AI, Amazon, Meta, and Inflection all pledged to develop AI responsibly.

Four months after issuing an executive order on the safe development and use of artificial intelligence, the Biden administration today announced the formation of the United States Artificial Intelligence Safety Institute Consortium (AISIC) to ensure that innovation in AI does not come at the expense of safety. More than 200 companies and industry leaders have joined Biden's AI safety squad.

None of us can build the perfect AI on our own. We are thrilled to join forces with other leading AI companies in supporting these commitments, and we pledge to keep collaborating, sharing information, and contributing our best expertise.

Kent Walker, Google’s President of Global Affairs. Source: Google

Introducing the U.S. AI Safety Institute Consortium (AISIC)

The new consortium boasts more than 200 members, including leading AI rivals such as Amazon, Google, Apple, Anthropic, Microsoft, OpenAI, and NVIDIA. They are joined by representatives from healthcare, academia, labor unions, and the banking sector, among them JP Morgan, Citigroup, Bank of America, Carnegie Mellon University, Ohio State University, and the Georgia Tech Research Institute, as well as state and local government authorities.

The list of participating firms is so long that it may be more useful to note which companies have not joined. Among the top-ten tech giants missing in action are Tesla, Oracle, and Broadcom. TSMC did not make the list either, though it is not an American company to begin with. Cooperation with international partners is nonetheless expected.

The consortium is the largest collection of testing and evaluation teams assembled to date, and it is set to lay the foundation for a new measurement science in AI safety. The consortium will work with like-minded organizations around the world and is poised to play a key role in developing interoperable and effective safety tools across the globe.

The Department of Commerce

Initiatives like AISIC will play a critical role in shaping the future trajectory of artificial intelligence and mitigating potential risks to national security, economic stability, and public welfare.


By combining insights from across sectors and fostering international collaboration, AISIC is expected to establish robust safety standards and promote the ethical use of AI on a global scale.
