Cyber Pros Praise Biden Executive Order On Artificial Intelligence

President Joe Biden on Monday released an ambitious executive order (EO) on artificial intelligence that aims to have AI companies share their red-team test results, establish a program to develop AI tools that find and fix vulnerabilities in critical software, protect consumers against fraud, and build up the AI workforce.

The new EO, primarily a response to ongoing public concerns about the fast pace of AI development and its potential risks, gives practically every agency of the federal government a role in the nation’s attempt to securely harness the promise of AI.

“As part of the Biden-Harris administration’s comprehensive strategy for responsible innovation, the EO builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies (including OpenAI, AWS, and Meta) to drive safe, secure, and trustworthy development of AI,” the EO states.

EOs are issued by the executive branch of the government, normally directly from the president. While an EO is not a law, in the sense that it does not go through the legislative process in Congress, executive branch employees are subject to its requirements, and, like the May 2021 EO on Cybersecurity, these orders set a framework for public and private cooperation.

“The EO is levelheaded — it avoids the use of phrases like ‘existential risk’ and focuses on concrete problems — security and safety, privacy, and discrimination,” said James Lewis, a senior researcher at the Center for Strategic and International Studies. “Its approach to managing risk is increased transparency and the use of testing, tools, and standards. The EO also places a great emphasis on developing standards for critical infrastructure, and a second emphasis on using AI tools to fix software reinforces goals set in the National Cybersecurity Strategy.”

Here are some of the security highlights of today’s EO:

  • Red team testing: The National Institute of Standards and Technology (NIST) is required to set rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security (DHS) will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. Finally, the Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
  • Fix vulnerabilities: The EO calls for the establishment of an advanced cybersecurity program to develop AI tools that will find and fix vulnerabilities in critical software, building on the administration’s ongoing AI Cyber Challenge sponsored by the Defense Advanced Research Projects Agency (DARPA).
  • Consumer protections: The Department of Commerce (DOC) was ordered to develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, the goal of which is to set an example for the private sector and governments around the world (a minimal sketch of message authentication follows this list).
  • Workforce development: The EO seeks to accelerate the hiring of AI professionals as part of a governmentwide AI talent initiative led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies are also required to offer AI training for employees at all levels in relevant fields.
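The EO leaves the specific authentication mechanism to Commerce’s forthcoming guidance. Purely as a hypothetical sketch of the underlying idea, the Python snippet below uses a keyed hash (HMAC) to show how an issuer could attach a verifiable tag to a message so that recipients can detect tampering; the key and message names are invented for illustration.

```python
import hmac
import hashlib

# Hypothetical sketch only: the EO does not prescribe this mechanism,
# and the key and message below are invented for illustration.
SECRET_KEY = b"example-agency-signing-key"


def tag_message(message: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce an authentication tag that binds the message to the key."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()


def verify_message(message: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag and compare in constant time; True means unaltered."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    notice = b"Official notice: your benefits statement is ready."
    tag = tag_message(notice)
    print(verify_message(notice, tag))               # True: tag matches
    print(verify_message(b"Tampered notice.", tag))  # False: content altered
```

A production scheme would more likely rely on asymmetric signatures, so that anyone can verify a message without holding the signing secret, but the verification workflow looks much the same.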

The industry responds to the Biden order on AI

Overall, security pros are glad to see the Biden administration take on the AI issue in a responsible way, one that also involved significant input from the industry.

“President Biden’s EO underscores a robust commitment to safety, cybersecurity, and rigorous testing,” said Casey Ellis, founder and CTO of Bugcrowd. “The directive mandates developers to share safety test results with the U.S. government, ensuring AI systems are extensively vetted before public release. It also highlights the importance of AI in bolstering cybersecurity, particularly in detecting AI-enabled fraud and enhancing software and network security. The order also champions the development of standards, tools, and tests for AI’s safety and security.”

Randy Lariar, AI security leader at Optiv, said the red teaming and safety testing of new models has long been requested by the research community. Lariar added that OpenAI has not been entirely open about what it has been doing to train and secure its latest versions.

“I worry that many open-source models, which are derived from the big foundational models, can be just as risky without the burden of red teaming — but this is a start,” said Lariar. “NIST has already put out an AI Risk Management Framework and is continuing to build upon that. I am not surprised that they’re well-positioned to continue to be a leader in the space for defining safety. To me, their existing tooling does feel too high-level, so this directive to develop standards, tools, and tests is greatly needed. I think NIST will continue to play a key role in defining what AI safety means and how to align to that.”

Ashley Leonard, chief executive officer of Syxsense, added that it will be very interesting to see how the vulnerability program gets implemented and whether those tools will be open source and voluntary, or proprietary and government-mandated. Leonard said that over the last 30 years, we’ve seen how code degrades over time: it’s why new vulnerabilities and bugs are disclosed every day.

“It takes real resources — budget, time, and staff — for even the most advanced companies to keep up with vulnerabilities and bug fixes, so it makes sense to see if we could use AI tools to find and fix these vulnerabilities,” said Leonard. “The other side of this directive, though, is whether AI can check AI. We are already seeing broad use of generative AI as part of the software development process. GenAI has enabled everyone to become a coder — but if you use AI to code, can you use it to test as well?”

Marcus Fowler, chief executive officer of Darktrace Federal, added that he and his team firmly believe the industry can’t achieve AI safety without cybersecurity.

“Security needs to be by-design, embedded across every step of an AI system’s creation and deployment,” said Fowler. “That means taking action on data security, control and trust. It’s promising to see some specific actions in the EO that start to address these challenges.”
