US President Joe Biden signed an executive order Monday setting landmark safety and security standards for artificial intelligence (AI).
The order will:
- force major AI developers to share safety test results with the US government;
- create standardized safety testing;
- limit biological AI testing;
- issue best practices for AI fraud prevention;
- establish AI tools to find weaknesses in software via competition (a continuation of a previous program);
- fund research in privacy tools;
- evaluate how government agencies store private information;
- address algorithmic discrimination in leasing and criminal justice;
- protect the rights of AI professionals; and
- fund the hiring of more AI professionals in the US government.
The order authorizes several agencies, including the Departments of Commerce, Defense, Agriculture, and Education, to begin the process of creating consistent working definitions and regulations surrounding AI. It also gives the Department of Justice one year to create a full report on all the ways that AI is currently used within the criminal justice system, from initial arrest to incarceration.
In a statement, the White House celebrated the order, writing:
The actions that President Biden directed today are vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.
The Biden administration has made AI a policy priority, previously holding a competition for hackers to find and address vulnerabilities in AI-powered software, securing voluntary pledges from several prominent technology companies to address AI safety and privacy concerns, and issuing a blueprint for a proposed AI Bill of Rights.
AI safety has become a global concern over the last decade, with dozens of AI scientists and prominent technology figures signing a letter urging global governments to take AI security and privacy issues seriously as the technology continues to develop. The order comes shortly before the AI Safety Summit in the UK, a global conference on AI safety and privacy issues.
AI is currently used across industries including automotive, banking, and agriculture. However, concerns have been raised about some AI use cases, including self-driving cars failing to respond to jaywalking pedestrians, facial recognition software discriminating against darker skin tones, and inaccurate crime-prediction software used by law enforcement.