Trend Secures AI Infrastructure with NVIDIA
Together, we are focused on securing the full AI lifecycle—from development and training to deployment and inference—across cloud, data center, and AI factories. Read More HERE…
Together, we are focused on securing the full AI lifecycle—from development and training to deployment and inference—across cloud, data center, and AI factories. Read More HERE…
Get a sneak peak into how Trend Micro’s Pwn2Own Berlin 2025 is breaking new ground, focusing on AI infrastructure and finding the bugs to proactively safeguard the future of computing. Read More HERE…
What is PLeak, and what are the risks associated with it? We explored this algorithmic technique and how it can be used to jailbreak LLMs, which could be leveraged by threat actors to manipulate systems and steal sensitive data. Read More HERE…
Trend Micro has become a Gold sponsor of the OWASP Top 10 for LLM and Gen AI Project, merging cybersecurity expertise with OWASP’s collaborative efforts to address emerging AI security risks. This partnership underscores Trend Micro’s unwavering commitment to advancing AI security, ensuring a secure foundation for the transformative power of AI. Read More HERE…
Effective April 2025, Microsoft is launching their Azure vTAP and integrating it with Trend Vision One Network Detection and Response solution. This integration allows organizations to gain deep visibility into cloud network traffic without compromising performance. It ensures real-time detection, faster incident response, and an enhanced security posture while reducing operational complexity. Read More HERE…
From quantum leaps to AI factories, GTC 2025 proved one thing: the future runs on secure foundations. Read More HERE…
International cooperation, reporting, and capacity building are critical to enhance cybersecurity defenses. Effective governance in an increasingly risky landscape requires visibility as well as coordinated vulnerability disclosure. Read More HERE…
This entry explores how the Chain of Thought reasoning in the DeepSeek-R1 AI model can be susceptible to prompt attacks, insecure output generation, and sensitive data theft. Read More HERE…
This article explains the invisible prompt injection, including how it works, an attack scenario, and how users can protect themselves. Read More HERE…