Best practices for AI security risk management

Today, we are releasing an AI security risk assessment framework as a step to empower organizations to reliably audit, track, and improve the security of their AI systems. In addition, we are providing new updates to Counterfit, our open-source tool for assessing the security posture of AI systems.

There is a marked interest in securing AI systems from adversaries. Counterfit has been heavily downloaded and explored by organizations of all sizes—from startups to governments and large enterprises—to proactively secure their AI systems. From a different vantage point, the Machine Learning Evasion Competition we organized to help security professionals exercise their muscles defending and attacking AI systems in a realistic setting saw record participation, doubling the number of participants and techniques compared with the previous year.

This interest demonstrates both the growing momentum and the opportunity in securing AI systems. But how do we turn that interest into action that raises the security posture of AI systems? When the rubber hits the road, how should a security engineer think about mitigating the risk of an AI system being compromised?

AI security risk assessment framework

The deficit is clear: according to the Gartner® Market Guide for AI Trust, Risk and Security Management published in September 2021, “AI poses new trust, risk and security management requirements that conventional controls do not address.”1 To address this gap, we did not want to invent a new process. We acknowledge that security professionals are already overwhelmed. Moreover, we believe that even though attacks on AI systems pose a new security risk, current software security practices are relevant and can be adapted to manage this novel risk. To that end, we fashioned our AI security risk assessment in the spirit of existing security risk assessment frameworks.

We believe that to comprehensively assess the security risk of an AI system, we need to look at the entire lifecycle of system development and deployment. An overreliance on securing machine learning models through academic adversarial machine learning oversimplifies the problem in practice. This means that to truly secure an AI model, we need to account for securing the entire supply chain and management of AI systems.

Through our own operational experience building and red teaming models at Microsoft, we recognize that securing AI systems is a team sport. AI researchers design model architectures. Machine learning engineers build data ingestion, model training, and deployment pipelines. Security architects establish appropriate security policies. Security analysts respond to threats. To that end, we envisioned a framework that would involve participation from each of these stakeholders.

“Designing and developing secure AI is a cornerstone of AI product development at Boston Consulting Group (BCG). As the societal need to secure our AI systems becomes increasingly apparent, assets like Microsoft’s AI security risk management framework can be foundational contributions. We already implement best practices found in this framework in the AI systems we develop for our clients and are excited that Microsoft has developed and open sourced this framework for the benefit of the entire industry.”—Jack Molloy, Senior Security Engineer, BCG

As a result of our Microsoft-wide collaboration, our framework features the following characteristics:

  1. Provides a comprehensive perspective on AI system security. We looked at each element of the AI system lifecycle in a production setting: from data collection and data processing to model deployment. We also accounted for AI supply chains, as well as the controls and policies for backup, recovery, and contingency planning related to AI systems.
  2. Outlines machine learning threats and recommendations to abate them. To directly help engineers and security professionals, we enumerated threat statements at each step of the AI system building process. Next, we provided a set of best practices that overlay and reinforce existing software security practices in the context of securing AI systems.
  3. Enables organizations to conduct risk assessments. The framework lets an organization gather information about the current state of security of its AI systems, perform gap analysis, and track the progress of its security posture (one way such an assessment could be tracked is sketched after this list).
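To make the gap-analysis idea above concrete, here is a minimal, hypothetical sketch of how a team might record per-stage control assessments and surface the largest gaps. The stages, controls, and maturity scale are illustrative assumptions, not part of the published framework.

```python
# Hypothetical illustration only -- not part of Microsoft's framework or any API.
# Records the current state of controls per AI lifecycle stage and lists gaps.
from dataclasses import dataclass


@dataclass
class ControlAssessment:
    stage: str            # e.g., "data collection", "model training", "deployment"
    control: str          # control or practice being assessed
    required_level: int   # maturity the policy calls for (0-3)
    current_level: int    # maturity observed during the assessment (0-3)

    @property
    def gap(self) -> int:
        return max(self.required_level - self.current_level, 0)


assessments = [
    ControlAssessment("data collection", "provenance tracking for training data", 3, 1),
    ControlAssessment("model training", "access control on training pipelines", 3, 3),
    ControlAssessment("deployment", "logging and alerting on anomalous queries", 2, 0),
]

# Gap analysis: list the controls that need attention, largest gaps first.
for a in sorted(assessments, key=lambda a: a.gap, reverse=True):
    if a.gap:
        print(f"[gap {a.gap}] {a.stage}: {a.control}")
```

Rerunning the same assessment over time gives a simple way to track whether the security posture is improving.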

Updates to Counterfit

To help security professionals get a broader view of the security posture of their AI systems, we have also significantly expanded Counterfit. The first release of Counterfit wrapped two popular frameworks—Adversarial Robustness Toolbox (ART) and TextAttack—to provide evasion attacks against models operating on tabular, image, and textual inputs. With the new release, Counterfit now features the following:

  • An extensible architecture that simplifies integration of new attack frameworks.
  • Attacks that require access to the internals of the machine learning model, as well as attacks that need only query access to it (illustrated in the sketch after this list).
  • Threat paradigms that include evasion, model inversion, model inference, and model extraction.
  • Common corruption attacks through AugLy, in addition to the algorithmic attacks already provided.
  • Support for attacking models that accept tabular data, images, text, HTML, or Windows executable files as input.

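As a rough illustration of the query-access scenario above, here is a minimal sketch using the Adversarial Robustness Toolbox (ART) directly, one of the frameworks Counterfit wraps; it is not Counterfit's own interface. The dataset, model, and attack parameters are arbitrary choices for the example.

```python
# Sketch of a query-only (black-box) evasion attack with ART, the library
# Counterfit wraps. Model and data choices here are assumptions for illustration.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the trained model so ART can query it the way an attacker would.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# HopSkipJump is a decision-based attack: it uses only the model's predicted
# labels, matching the "query access" threat model.
attack = HopSkipJump(classifier=classifier, targeted=False,
                     max_iter=10, max_eval=1000, init_eval=10)
x_adv = attack.generate(x=X[:5].astype(np.float32))

print("original predictions:   ", model.predict(X[:5]))
print("adversarial predictions:", model.predict(x_adv))
```

Counterfit automates this kind of workflow across targets and attack frameworks so that security professionals do not have to script each attack by hand.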
Learn more

These efforts are part of a broader investment at Microsoft to empower engineers to securely develop and deploy AI systems. We recommend using the framework alongside the following resources:

  • For security analysts to orient to threats against AI systems, Microsoft, in collaboration with MITRE, released an ATT&CK-style Adversarial ML Threat Matrix complete with case studies of attacks on production machine learning systems, which has since evolved into MITRE ATLAS.
  • For security incident responders, we released our own bug bar to systematically triage attacks on machine learning systems.
  • For developers, we released threat modeling guidance specifically for machine learning systems.
  • For engineers and policymakers, Microsoft, in collaboration with Berkman Klein Center at Harvard University, released a taxonomy documenting various machine learning failure modes.
  • For security professionals, Microsoft open sourced Counterfit to help with assessing the security posture of AI systems.
  • For the broader security community, Microsoft hosted the annual Machine Learning Evasion Competition.
  • For Azure machine learning customers, we provided guidance on enterprise security and governance.

This is a living framework. If you have questions or feedback, please contact us.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


1 Gartner, Market Guide for AI Trust, Risk and Security Management, Avivah Litan, et al., 1 September 2021. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.