
Trend Micro: Leading the Fight to Secure AI

AI threats on the rise

According to one estimate, generative AI (GenAI) could add the equivalent of $2.6 trillion to $4.4 trillion annually to the global economy. But as more organizations build out AI infrastructure and embed the technology in business-critical processes, they also expose themselves to new avenues for sensitive data compromise, extortion, and sabotage.

We’ve highlighted this in the past, noting countless vulnerabilities and misconfigurations in AI components such as vector stores, LLM-hosting platforms, and other open-source software. Among other things, organizations fear that threat actors could steal training data for profit, poison it to compromise an LLM’s output and integrity, or steal the models themselves.

In the research behind our new MITRE ATLAS case study, AML.CS0028, we uncovered some disturbing trends:

  • Over 8,000 exposed container registries were found online—double the number observed in 2023.
  • 70% of these registries allowed push (write) permissions, meaning attackers could inject malicious AI models.
  • Within these registries, 1,453 AI models were identified, many in Open Neural Network Exchange (ONNX) format, with vulnerabilities that could be exploited.

This sharp growth reflects a broader trend: attackers are increasingly targeting the underlying infrastructure supporting AI, not just the AI models themselves.
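To make the exposure concrete: the Docker Registry HTTP API v2 (now the OCI Distribution Specification) lets a client test whether a registry accepts anonymous reads and, critically, anonymous uploads. Below is a minimal defensive sketch, assuming a registry you own or are authorized to test; the host names and repository name are placeholders, not targets from the research.

```python
# Defensive sketch: check whether a container registry you control allows
# anonymous pull and (worse) anonymous push, using two standard endpoints
# from the Docker Registry HTTP API v2 / OCI Distribution Specification:
#   GET  /v2/                        -> 200 means anonymous API access
#   POST /v2/<repo>/blobs/uploads/   -> 202 means an anonymous upload
#                                       session was accepted (push allowed)
# Run only against registries you are authorized to test.
import urllib.error
import urllib.request


def classify(api_status: int, upload_status: int) -> str:
    """Map the two probe status codes to an exposure level."""
    if api_status != 200:
        return "not exposed"  # /v2/ demands auth (401) or is unreachable
    if upload_status == 202:
        return "exposed: pull AND push"  # upload session accepted without auth
    return "exposed: pull only"


def probe(url: str, method: str = "GET") -> int:
    """Return the HTTP status for a request; errors map to their codes, 0 if unreachable."""
    req = urllib.request.Request(url, method=method)
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
    except (urllib.error.URLError, OSError):
        return 0


def check_registry(base: str, repo: str = "test/probe") -> str:
    """Probe a registry base URL (e.g. 'http://localhost:5000') for exposure."""
    api = probe(f"{base}/v2/")
    upload = probe(f"{base}/v2/{repo}/blobs/uploads/", method="POST")
    return classify(api, upload)
```

Usage against a local test instance would look like `check_registry("http://localhost:5000")`. A registry that returns `"exposed: pull AND push"` is in the dangerous 70% bucket described above, since anyone could overwrite the AI model images it hosts.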

Turning research into action

Fortunately, Trend Micro’s global team of forward-looking threat researchers is always on the hunt for new threat actor tactics, techniques, and procedures (TTPs). The more we know, the more we can help network defenders enhance cyber resilience and improve their detection, protection, and response efforts.

We’ve submitted our latest discovery to MITRE ATLAS. The case study (AML.CS0028) is based on a real-world data poisoning attack against a container-hosted AI model in the cloud, and builds directly on the registry exposure findings described above.

This is the first ATLAS case study to involve both cloud and container infrastructure in a sophisticated supply chain compromise. Only 31 studies have been accepted by the non-profit since 2020, so we’re thrilled to be making a positive contribution to the security community with this submission.

Fighting the good fight together

As one would expect from Trend’s star team of expert researchers, this case study stands out from the crowd in both scope and technical depth. We’re confident that its publication in MITRE ATLAS will help make the digital world safer, for several reasons:

  • The study is encoded in ATLAS YAML, allowing easy integration into tools already aligned with MITRE ATT&CK.
  • It provides a reproducible scenario that defenders can simulate to improve threat detection and incident response planning.
  • It contributes to MITRE’s Secure AI initiative, encouraging others to share anonymized incidents and help grow a collective understanding of AI threats.
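As a rough illustration of the first point, an ATLAS-style case study entry has the general shape below. This is a hypothetical sketch only, not the published AML.CS0028 file; the field names, tactic and technique labels, and descriptions are approximate placeholders.

```yaml
# Hypothetical sketch of an ATLAS-style case study entry.
# Field names and labels are illustrative, not the actual AML.CS0028 record.
---
id: AML.CS0028
object-type: case-study
name: Data poisoning of a container-hosted AI model
summary: >
  An Internet-exposed container registry with write access allowed an
  attacker to push a tampered model, poisoning downstream inference.
procedure:
  - tactic: Initial Access
    technique: Exploit Public-Facing Application
    description: Locate an Internet-exposed container registry.
  - tactic: ML Supply Chain Compromise
    technique: Model Poisoning
    description: Push a modified model image to the writable registry.
```

Because the format mirrors the structured, machine-readable style of MITRE ATT&CK data, tooling that already parses ATT&CK content can ingest case studies like this with little extra work.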

At Trend Micro, we never forget that cybersecurity is a team sport. That’s why we apply our threat research and product development efforts not just to protecting our customers, but to protecting all technology users. It’s the same philosophy that spurred us to create a specialized Pwn2Own AI competition later this year, which will help surface new vulnerabilities in some of the world’s most popular AI components.

With MITRE ATLAS, we have another way to make a positive impact on the global cybersecurity landscape.
