The administration of United States President Joe Biden released an executive order on Oct. 30 establishing new standards for artificial intelligence (AI) safety and security.
The order builds on previous actions, including voluntary AI safety commitments from 15 leading companies in the industry. It lays out six primary standards for AI safety and security, along with plans for the ethical use of AI in government and steps for protecting consumers’ privacy.
The first standard requires developers of the most powerful AI systems to share safety test results and “critical information” with the government. Second, the National Institute of Standards and Technology will develop standardized tools and tests for ensuring AI’s safety, security and trustworthiness.
The administration also aims to protect against the risk of AI being used to engineer “dangerous biological materials” by establishing new biological synthesis screening standards.
Another standard targets protection against AI-enabled fraud and deception: standards and best practices will be established for detecting AI-generated content and authenticating official content.
The order also builds on the administration’s AI Cyber Challenge, announced in August, by advancing a cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software. Finally, it orders the development of a national security memorandum to direct further actions on AI security.
The order also touched on the privacy risks of AI, stating:
“Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”
In response, the president called on Congress to pass bipartisan data privacy legislation and to prioritize federal support for the research and development of privacy-preserving techniques and technologies.
Officials in the U.S. also plan to focus on advancing equity and civil rights with regard to AI, employing the responsible use of AI to benefit consumers and monitoring the technology’s impact on the job market, among other social issues.
Lastly, the order laid out the administration’s plans for engagement with AI regulation worldwide. The U.S. was one of the Group of Seven (G7) countries that recently agreed on a voluntary code of conduct for AI developers.
Within the federal government itself, the order calls for clear standards to “protect rights and safety, improve AI procurement, and strengthen AI deployment,” as well as AI training for all employees in relevant fields.
In July, U.S. senators held a classified meeting at the White House to discuss regulating the technology, and the Senate has since held a series of “AI Insight Forums” to hear from top AI experts in the industry.