{"id":42044,"date":"2021-07-29T16:00:21","date_gmt":"2021-07-29T16:00:21","guid":{"rendered":"https:\/\/www.microsoft.com\/security\/blog\/?p=94938"},"modified":"2021-07-29T16:00:21","modified_gmt":"2021-07-29T16:00:21","slug":"attack-ai-systems-in-machine-learning-evasion-competition","status":"publish","type":"post","link":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/","title":{"rendered":"Attack AI systems in Machine Learning Evasion Competition"},"content":{"rendered":"<div><img decoding=\"async\" src=\"https:\/\/www.microsoft.com\/security\/blog\/uploads\/securityprod\/2021\/07\/MSC21_Getty_officeMeeting_1178691055.jpg\" class=\"ff-og-image-inserted\"><\/div>\n<p>Today, we are launching MLSEC.IO, an educational <a href=\"https:\/\/mlsec.io\/\" target=\"_blank\" rel=\"noopener\">Machine Learning Security Evasion Competition<\/a> (MLSEC) for the AI and security communities to exercise their muscle to attack critical AI systems in a realistic setting. Hosted and sponsored by Microsoft, alongside NVIDIA, CUJO AI, VM-Ray, and MRG Effitas, the competition rewards participants who efficiently evade AI-based malware detectors and AI-based phishing detectors.<\/p>\n<p>Machine learning powers critical applications in virtually every industry: finance, healthcare, infrastructure, and cybersecurity. Microsoft is seeing an uptick of attacks on commercial AI systems that could compromise the confidentiality, integrity, and availability guarantees of these systems. Publicly known cases documented by <a href=\"https:\/\/atlas.mitre.org\/studies\" target=\"_blank\" rel=\"noopener\">MITRE\u2019s ATLAS framework<\/a>, show how with the proliferation of AI systems comes the increased risk that the machine learning powering these systems can be manipulated to achieve an adversary\u2019s goals. 
While the risks are inherent in all deployed machine learning models, the threat is especially acute in cybersecurity, where machine learning models are increasingly relied on to detect threat actors\u2019 tools and behaviors. Market surveys have consistently indicated that the security and privacy of AI systems are top concerns for executives. According to <a href=\"https:\/\/my.ccsinsight.com\/reportaction\/5044\/Marketing\" target=\"_blank\" rel=\"noopener\">CCS Insight\u2019s survey of 700 senior IT leaders in 2020<\/a>, security is now the biggest hurdle companies face with AI, cited by over 30 percent of respondents<sup>1<\/sup>.<\/p>\n<p>However, many security practitioners are unsure how to clear this new hurdle. A recent <a href=\"https:\/\/arxiv.org\/pdf\/2002.05646.pdf\" target=\"_blank\" rel=\"noopener\">Microsoft survey<\/a> found that 25 out of 28 organizations did not have the right tools in place to secure their AI systems. While academic researchers have been studying how to attack AI systems for close to two decades, awareness among practitioners remains low. That is why one recommendation for business leaders in the 2021 Gartner report <a href=\"https:\/\/blogs.gartner.com\/avivah-litan\/2021\/01\/21\/top-5-priorities-for-managing-ai-risk-within-gartners-most-framework\/\" target=\"_blank\" rel=\"noopener\">Top 5 Priorities for Managing AI Risk Within Gartner\u2019s MOST Framework<\/a><sup>2<\/sup> is that organizations \u201cDrive staff awareness across the organization by leading a formal AI risk education campaign.\u201d<\/p>\n<p>It is critical to democratize the knowledge needed to secure AI systems. 
That is why <a href=\"https:\/\/www.microsoft.com\/security\/blog\/2021\/05\/03\/ai-security-risk-assessment-using-counterfit\/\" target=\"_blank\" rel=\"noopener\">Microsoft recently released Counterfit<\/a>, a tool born out of our own need to assess Microsoft\u2019s AI systems for vulnerabilities, with the goal of proactively securing AI services. For those new to adversarial machine learning, NVIDIA released MINTNV, a hack-the-box style environment where they can explore and build their skills.<\/p>\n<h2>Participate in MLSEC.IO<\/h2>\n<p>With the launch of MLSEC.IO today, we aim to highlight how security models can be evaded by motivated attackers and to allow practitioners to exercise their muscles attacking critical machine learning systems used in cybersecurity.<\/p>\n<blockquote>\n<p><em>\u201cThere is a lack of practical knowledge about securing or attacking AI systems in the security community. Competitions like Microsoft\u2019s MLSEC democratize adversarial machine learning knowledge for the offensive and defensive security communities, as well as the machine learning community. MLSEC\u2019s hands-on approach is an exciting entry point into AML.\u201d\u2014<\/em>Christopher Cottrell, AI Red Team Lead, NVIDIA<\/p>\n<\/blockquote>\n<p>The competition involves two challenges beginning on August 6 and ending on September 17, 2021: an Anti-Malware Evasion track and an Anti-Phishing Evasion track.<\/p>\n<ol>\n<li>Anti-Phishing Evasion Track: Machine learning is routinely used to detect a highly successful attacker technique for gaining initial access via phishing. In this track, contestants play the role of an attacker and attempt to evade a suite of anti-phishing models. 
Built by CUJO AI, these phishing detection models are purpose-built for this competition.<\/li>\n<li>Anti-Malware Evasion Track: This challenge provides an alternative scenario for attackers wishing to bypass machine-learning-based antivirus: change an existing malicious binary in a way that disguises it from the antimalware model.<\/li>\n<\/ol>\n<p>In addition, for each of the Attacker Challenge tracks, the highest-scoring submission that <a href=\"https:\/\/www.microsoft.com\/security\/blog\/2021\/05\/03\/ai-security-risk-assessment-using-counterfit\/\" target=\"_blank\" rel=\"noopener\">extends and leverages Counterfit<\/a>\u2014Microsoft\u2019s open-source tool for investigating the security of machine learning models<em>\u2014<\/em>will be awarded a bonus prize.<\/p>\n<blockquote>\n<p><em>\u201cThe security evasion challenge creates new pathways into cybersecurity and opens up access for a broader base of talent. This year, to lower barriers to entry, we are introducing the phishing challenge, while still strongly encouraging people without significant experience in malware to participate.\u201d<\/em>\u2014Zoltan Balazs, Head of Vulnerability Research Lab at CUJO AI and cofounder of the competition.<\/p>\n<\/blockquote>\n<h3>Key details about the competition<\/h3>\n<ul>\n<li>The competition runs from August 6 to September 17, 2021. 
Registration will remain open for the duration of the competition.<\/li>\n<li>Winners will be announced on October 27, 2021, and contacted via email.<\/li>\n<li>Prizes for first place and honorable mentions, as well as a bonus prize, will be awarded for each of the two tracks.<\/li>\n<\/ul>\n<h2>Learn More<\/h2>\n<p>To learn more about the 2021 Machine Learning Security Evasion Competition:<\/p>\n<ul>\n<li><a href=\"https:\/\/mlsec.io\/\" target=\"_blank\" rel=\"noopener\">Register now<\/a> to begin participating on August 6, 2021, and exercise your offensive security muscle.<\/li>\n<li>Visit the <a href=\"https:\/\/github.com\/Azure\/counterfit\" target=\"_blank\" rel=\"noopener\">Counterfit GitHub repository<\/a> to learn more about Counterfit.<\/li>\n<li>If you are new to adversarial machine learning, practice attacking AI systems via <a href=\"https:\/\/ngc.nvidia.com\/catalog\/containers\/nvidia:product-security:mintnv\" target=\"_blank\" rel=\"noopener\">NVIDIA\u2019s MINTNV<\/a> hack-the-box style challenge.<\/li>\n<\/ul>\n<p>This competition is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems. 
We recommend using it alongside the following resources:<\/p>\n<ul>\n<li>For security analysts to orient to threats against AI systems, Microsoft, in collaboration with MITRE, released an ATT&amp;CK style&nbsp;<a href=\"https:\/\/github.com\/mitre\/advmlthreatmatrix\" target=\"_blank\" rel=\"noopener\">AdvML Threat Matrix<\/a>&nbsp;complete with case studies of attacks on production machine learning systems.<\/li>\n<li>For security incident responders, we released our own&nbsp;<a href=\"https:\/\/docs.microsoft.com\/en-us\/security\/engineering\/bug-bar-aiml\" target=\"_blank\" rel=\"noopener\">bug bar<\/a>&nbsp;to systematically triage attacks on machine learning systems.<\/li>\n<li>For developers, we released&nbsp;<a href=\"https:\/\/docs.microsoft.com\/en-us\/security\/engineering\/threat-modeling-aiml\" target=\"_blank\" rel=\"noopener\">threat modeling guidance<\/a> specifically for machine learning systems.<\/li>\n<li>For engineers and policymakers, Microsoft, in collaboration with Berkman Klein Center at Harvard University,&nbsp;<a href=\"https:\/\/docs.microsoft.com\/en-us\/security\/engineering\/failure-modes-in-machine-learning\" target=\"_blank\" rel=\"noopener\">released a taxonomy<\/a> documenting various machine learning failure modes.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/mlsec.io\/\" target=\"_blank\" rel=\"noopener\">Register now<\/a> to participate in the Machine Learning Security Evasion Competition that begins on August 6 and ends on September 17, 2021. Winners will be announced on October 27, 2021.<\/p>\n<p>To learn more about Microsoft Security solutions,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/security\/business\/solutions\" target=\"_blank\" rel=\"noopener\">visit our&nbsp;website<\/a>.&nbsp;Bookmark the&nbsp;<a href=\"https:\/\/www.microsoft.com\/security\/blog\/\" target=\"_blank\" rel=\"noopener\">Security blog<\/a>&nbsp;to keep up with our expert coverage on security matters. 
Also, follow us at&nbsp;<a href=\"https:\/\/twitter.com\/@MSFTSecurity\" target=\"_blank\" rel=\"noopener\">@MSFTSecurity<\/a>&nbsp;for the latest news and updates on cybersecurity.<\/p>\n<hr>\n<p><sup>1<\/sup>CCS Insight, <a href=\"https:\/\/my.ccsinsight.com\/reportaction\/5044\/Marketing\" target=\"_blank\" rel=\"noopener\">Senior Leadership IT Investment Survey<\/a>, Nick McQuire et al., 18 August 2020.<\/p>\n<p><sup>2<\/sup>Gartner, <a href=\"https:\/\/www.gartner.com\/document\/3995616\" target=\"_blank\" rel=\"noopener\">Top 5 Priorities for Managing AI Risk Within Gartner\u2019s MOST Framework<\/a>, Avivah Litan et al., 15 January 2021.<\/p>\n<p> READ MORE <a href=\"https:\/\/www.microsoft.com\/security\/blog\/2021\/07\/29\/attack-ai-systems-in-machine-learning-evasion-competition\/\">HERE<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Today, we are launching MLSEC.IO, a new machine learning security evasion competition as an educational effort for the AI and security communities to exercise their muscle to attack critical AI systems in a realistic setting.<br \/>\nThe post Attack AI systems in Machine Learning Evasion Competition appeared first on Microsoft Security Blog. 
READ MORE HERE&#8230;<\/p>\n","protected":false},"author":2,"featured_media":42045,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_layout":"default_layout","footnotes":""},"categories":[276],"tags":[347],"class_list":["post-42044","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-microsoft-secure","tag-cybersecurity"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.8 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Attack AI systems in Machine Learning Evasion Competition 2026 | ThreatsHub Cybersecurity News<\/title>\n<meta name=\"description\" content=\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security &amp; Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Attack AI systems in Machine Learning Evasion Competition 2026 | ThreatsHub Cybersecurity News\" \/>\n<meta property=\"og:description\" content=\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security &amp; Cyber Threats Analysis Hub. 
100% Free OSINT Threat Intelligent and Cybersecurity News.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/\" \/>\n<meta property=\"og:site_name\" content=\"ThreatsHub Cybersecurity News\" \/>\n<meta property=\"article:published_time\" content=\"2021-07-29T16:00:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"799\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"TH Author\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@threatshub\" \/>\n<meta name=\"twitter:site\" content=\"@threatshub\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"TH Author\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/\"},\"author\":{\"name\":\"TH Author\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476\"},\"headline\":\"Attack AI systems in Machine Learning Evasion Competition\",\"datePublished\":\"2021-07-29T16:00:21+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/\"},\"wordCount\":1007,\"publisher\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg\",\"keywords\":[\"Cybersecurity\"],\"articleSection\":[\"Microsoft Secure\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/\",\"url\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/\",\"name\":\"Attack AI systems in Machine Learning Evasion Competition 2026 | ThreatsHub Cybersecurity 
News\",\"isPartOf\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg\",\"datePublished\":\"2021-07-29T16:00:21+00:00\",\"description\":\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage\",\"url\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg\",\"contentUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg\",\"width\":1200,\"height\":799},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.threatshub.org\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Cybersecurity\",\"item\":\"https:\/\/www.threatshub.org\/blog\/tag\/cybersecurity\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name
\":\"Attack AI systems in Machine Learning Evasion Competition\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#website\",\"url\":\"https:\/\/www.threatshub.org\/blog\/\",\"name\":\"ThreatsHub Cybersecurity News\",\"description\":\"%%focuskw%% Threat Intel \u2013 Threat Intel Services \u2013 CyberIntelligence \u2013 Cyber Threat Intelligence - Threat Intelligence Feeds - Threat Intelligence Reports - CyberSecurity Report \u2013 Cyber Security PDF \u2013 Cybersecurity Trends - Cloud Sandbox \u2013- Threat IntelligencePortal \u2013 Incident Response \u2013 Threat Hunting \u2013 IOC - Yara - Security Operations Center \u2013 SecurityOperation Center \u2013 Security SOC \u2013 SOC Services - Advanced Threat - Threat Detection - TargetedAttack \u2013 APT \u2013 Anti-APT \u2013 Advanced Protection \u2013 Cyber Security Services \u2013 Cybersecurity Services -Threat Intelligence Platform\",\"publisher\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/#organization\"},\"alternateName\":\"Threatshub.org\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.threatshub.org\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#organization\",\"name\":\"ThreatsHub.org\",\"alternateName\":\"Threatshub.org\",\"url\":\"https:\/\/www.threatshub.org\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg\",\"contentUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg\",\"width\":432,\"height\":435,\"caption\":\"ThreatsHub.org\"},\"image\":{\"@id\":\"https:\/\/www.threat
shub.org\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/threatshub\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476\",\"name\":\"TH Author\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g\",\"caption\":\"TH Author\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Attack AI systems in Machine Learning Evasion Competition 2026 | ThreatsHub Cybersecurity News","description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/","og_locale":"en_US","og_type":"article","og_title":"Attack AI systems in Machine Learning Evasion Competition 2026 | ThreatsHub Cybersecurity News","og_description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 
100% Free OSINT Threat Intelligent and Cybersecurity News.","og_url":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/","og_site_name":"ThreatsHub Cybersecurity News","article_published_time":"2021-07-29T16:00:21+00:00","og_image":[{"width":1200,"height":799,"url":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg","type":"image\/jpeg"}],"author":"TH Author","twitter_card":"summary_large_image","twitter_creator":"@threatshub","twitter_site":"@threatshub","twitter_misc":{"Written by":"TH Author","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#article","isPartOf":{"@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/"},"author":{"name":"TH Author","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476"},"headline":"Attack AI systems in Machine Learning Evasion Competition","datePublished":"2021-07-29T16:00:21+00:00","mainEntityOfPage":{"@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/"},"wordCount":1007,"publisher":{"@id":"https:\/\/www.threatshub.org\/blog\/#organization"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage"},"thumbnailUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg","keywords":["Cybersecurity"],"articleSection":["Microsoft 
Secure"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/","url":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/","name":"Attack AI systems in Machine Learning Evasion Competition 2026 | ThreatsHub Cybersecurity News","isPartOf":{"@id":"https:\/\/www.threatshub.org\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage"},"thumbnailUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg","datePublished":"2021-07-29T16:00:21+00:00","description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity 
News.","breadcrumb":{"@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#primaryimage","url":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg","contentUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2021\/07\/attack-ai-systems-in-machine-learning-evasion-competition.jpg","width":1200,"height":799},{"@type":"BreadcrumbList","@id":"https:\/\/www.threatshub.org\/blog\/attack-ai-systems-in-machine-learning-evasion-competition\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.threatshub.org\/blog\/"},{"@type":"ListItem","position":2,"name":"Cybersecurity","item":"https:\/\/www.threatshub.org\/blog\/tag\/cybersecurity\/"},{"@type":"ListItem","position":3,"name":"Attack AI systems in Machine Learning Evasion Competition"}]},{"@type":"WebSite","@id":"https:\/\/www.threatshub.org\/blog\/#website","url":"https:\/\/www.threatshub.org\/blog\/","name":"ThreatsHub Cybersecurity News","description":"%%focuskw%% Threat Intel \u2013 Threat Intel Services \u2013 CyberIntelligence \u2013 Cyber Threat Intelligence - Threat Intelligence Feeds - Threat Intelligence Reports - CyberSecurity Report \u2013 Cyber Security PDF \u2013 Cybersecurity Trends - Cloud Sandbox \u2013- Threat IntelligencePortal \u2013 Incident Response \u2013 Threat Hunting \u2013 IOC - Yara - Security Operations Center \u2013 SecurityOperation Center \u2013 Security SOC \u2013 SOC Services - Advanced Threat - Threat Detection - TargetedAttack \u2013 APT \u2013 Anti-APT \u2013 Advanced Protection 
\u2013 Cyber Security Services \u2013 Cybersecurity Services -Threat Intelligence Platform","publisher":{"@id":"https:\/\/www.threatshub.org\/blog\/#organization"},"alternateName":"Threatshub.org","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.threatshub.org\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.threatshub.org\/blog\/#organization","name":"ThreatsHub.org","alternateName":"Threatshub.org","url":"https:\/\/www.threatshub.org\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg","contentUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg","width":432,"height":435,"caption":"ThreatsHub.org"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/threatshub"]},{"@type":"Person","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476","name":"TH Author","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g","caption":"TH 
Author"}}]}},"_links":{"self":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts\/42044","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/comments?post=42044"}],"version-history":[{"count":0,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts\/42044\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/media\/42045"}],"wp:attachment":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/media?parent=42044"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/categories?post=42044"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/tags?post=42044"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}