Defending Infrastructure, Securing Systems Key To CISA’s New AI Guidelines

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) released new guidelines aimed at protecting critical infrastructure systems in a threat landscape increasingly impacted by artificial intelligence.

The 28-page document (PDF) covers critical infrastructure risk and security considerations from three distinct perspectives: defending against attackers armed with AI-enabled tools, protecting AI-powered systems from attack, and developing secure and failsafe AI systems.

The guidance is the latest in a series of initiatives rolled out by CISA’s parent organization, the Department of Homeland Security (DHS), in response to last October’s wide-ranging executive order on AI issued by President Joe Biden.

“These resources build upon the Department’s broader efforts to protect the nation’s critical infrastructure and help stakeholders leverage AI,” DHS said in an April 29 statement announcing the guidelines.

“Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk,” CISA Director Jen Easterly said in the statement.

Guidance based on AI framework issued by NIST

The guidelines outline a “govern, map, measure, manage” mitigation strategy and are based on the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework.

In short, owners and users of critical infrastructure are encouraged to establish an organizational culture of AI risk management; understand their specific AI use context and risk profile; develop systems to assess, analyze, and track AI risks; and prioritize and act upon AI risks to safety and security.

“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks,” said Secretary of Homeland Security Alejandro Mayorkas.

Among other steps taken in the six months since the executive order was signed, DHS last week revealed a new AI Safety and Security Board whose members include tech sector CEOs Sundar Pichai of Alphabet, OpenAI’s Sam Altman, and Advanced Micro Devices’ Lisa Su.

Tim Rawlins, director and senior advisor at NCC Group, said while it was essential organizations were able to make use of the enormous efficiency benefits AI could offer, they needed to remain cognizant of the vulnerabilities the technology could introduce to their operations.

“The introduction of these new guidelines is an excellent step in the move to provide advice to organizations with very sensitive systems [wanting to] make best use of the growing tendency to introduce AI capabilities into corporate networks and systems,” he said.

Joseph Thacker, principal AI engineer at AppOmni, said while the guidelines were an important and positive development, they needed to strike the right balance between risk management and enabling innovation.

“I think the board should prioritize releasing more specific, actionable implementation recommendations alongside the high-level guidelines,” he said.

“Don’t get me wrong, the strategic recommendations are useful. But given how fast AI is moving, critical infrastructure operators will need concrete technical guidance they can put into practice. The board and organization should provide hands-on tools like reference architectures, configuration checklists, and code samples that translate the principles into real-world safeguards.”