{"id":55856,"date":"2024-04-16T14:03:46","date_gmt":"2024-04-16T14:03:46","guid":{"rendered":"https:\/\/packetstormsecurity.com\/news\/view\/35785\/AI-Watchdog-Defends-Against-New-LLM-Jailbreak-Method.html"},"modified":"2024-04-16T14:03:46","modified_gmt":"2024-04-16T14:03:46","slug":"ai-watchdog-defends-against-new-llm-jailbreak-method","status":"publish","type":"post","link":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/","title":{"rendered":"AI Watchdog Defends Against New LLM Jailbreak Method"},"content":{"rendered":"<div><img decoding=\"async\" src=\"https:\/\/files.scmagazine.com\/wp-content\/uploads\/2024\/04\/AdobeStock_694405008_Editorial_Use_Only.jpg\" class=\"ff-og-image-inserted\"><\/div>\n<p>Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security <a href=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2024\/04\/11\/how-microsoft-discovers-and-mitigates-evolving-attacks-against-ai-guardrails\/\" target=\"_blank\" rel=\"noreferrer noopener\">in a blog post Thursday.<\/a><\/p>\n<p>Microsoft first revealed the \u201cCrescendo\u201d LLM jailbreak method <a href=\"https:\/\/arxiv.org\/abs\/2404.01833\" target=\"_blank\" rel=\"noreferrer noopener\">in a paper<\/a> published April 2, which describes how an attacker could send a series of seemingly benign prompts to gradually lead a chatbot, such as OpenAI\u2019s ChatGPT, Google\u2019s Gemini, Meta\u2019s LlaMA or Anthropic\u2019s Claude, to produce an output that would normally be filtered and refused by the LLM model.<\/p>\n<p>For example, rather than asking the chatbot how to make a Molotov cocktail, the attacker could first ask about the history of Molotov cocktails and then, referencing the LLM\u2019s previous outputs, follow up with questions about how they were made in the past.<\/p>\n<p>The Microsoft researchers reported that a successful attack could usually be completed in a chain of fewer than 10 interaction turns and some versions of the attack had a 100% success rate against the tested models. For example, when the attack is automated using a method the researchers called \u201cCrescendomation,\u201d which leverages another LLM to generate and refine the jailbreak prompts, it achieved a 100% success convincing GPT 3.5, GPT-4, Gemini-Pro and LLaMA-2 70b to produce election-related misinformation and profanity-laced rants.<\/p>\n<h2>Microsoft\u2019s \u2018AI Watchdog\u2019 and \u2018AI Spotlight\u2019 combat malicious prompts, poisoned content<\/h2>\n<p>Microsoft reported the Crescendo jailbreak vulnerabilities to the affected LLM providers and explained in its blog post last week how it has improved its LLM defenses against Crescendo and other attacks using new tools including its \u201cAI Watchdog\u201d and \u201cAI Spotlight\u201d features.<\/p>\n<p>AI Watchdog uses a separate LLM trained on adverse prompts to \u201csniff\u201d out adversarial content in both inputs and outputs to prevent both single-turn and multiturn prompt injection attacks. Microsoft uses this tool, along with a multiturn prompt filter that looks at the pattern of a conversation rather than only the immediate interaction, to reduce the efficacy of attempted Crescendo attacks.<\/p>\n<p>In addition to direct prompt injection attacks, Microsoft\u2019s recent blog goes over indirect prompt injection attacks involving poisoned content. For example, a user may ask an LLM to summarize an email that, unbeknownst to them, contains hidden malicious prompts. If used in the LLM\u2019s outputs, these prompts could perform malicious tasks such as forwarding sensitive emails to an attacker.<\/p>\n<p>AI Spotlighting is a technique Microsoft uses to separate the user prompts from additional content, like emails and documents, the AI is asked to reference. The LLM avoids incorporating potential instructions from this additional content in its output, instead using the content only for analysis before responding to the user\u2019s prompt.<\/p>\n<p>AI Spotlight reduces the success rate of content poisoning attacks from more than 20% to below detection threshold without significantly impacting the AI\u2019s overall performance, according to Microsoft.<\/p>\n<p>Earlier this year, Microsoft released an open automation framework for red teaming generative AI systems, called the <a href=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2024\/02\/22\/announcing-microsofts-open-automation-framework-to-red-team-generative-ai-systems\/\" target=\"_blank\" rel=\"noreferrer noopener\">Python Risk Identification Toolkit for generative AI<\/a> (PyRIT), that can aid AI developers in testing their systems against potential attacks and discover new vulnerabilities.<\/p>\n<p>In February, <a href=\"https:\/\/www.scmagazine.com\/news\/microsoft-openai-reveal-chatgpt-use-by-state-sponsored-hackers\" target=\"_blank\" rel=\"noreferrer noopener\">the company discovered<\/a> that LLMs, including ChatGPT, were being used by state-sponsored hackers to generate social engineering content, perform vulnerability research, help with coding and more. And <a href=\"https:\/\/www.scmagazine.com\/news\/chatgpt-jailbreak-prompts-proliferate-on-hacker-forums\" target=\"_blank\" rel=\"noreferrer noopener\">a report by Abnormal Security<\/a> earlier this month found that a variety of LLM jailbreak prompts remained popular among cybercriminals, with entire hacker forum sections dedicated to \u201cdark AI.\u201d<\/p>\n<p>In late March, the U.S. House of Representatives <a href=\"https:\/\/www.scmagazine.com\/news\/us-house-forbids-staff-members-from-using-ai-chatbot-microsoft-copilot\" target=\"_blank\" rel=\"noreferrer noopener\">voted to ban the use of Copilot<\/a> by House staff, citing the risk of leaking sensitive data to unapproved cloud services.<\/p>\n<p>READ MORE <a href=\"https:\/\/packetstormsecurity.com\/news\/view\/35785\/AI-Watchdog-Defends-Against-New-LLM-Jailbreak-Method.html\">HERE<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>READ MORE HERE&#8230;<\/p>\n","protected":false},"author":2,"featured_media":55857,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_layout":"default_layout","footnotes":""},"categories":[277],"tags":[5505],"class_list":["post-55856","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cybersecurity-blogs","tag-headlinehackermicrosoftflaw"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.8 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI Watchdog Defends Against New LLM Jailbreak Method 2026 | ThreatsHub Cybersecurity News<\/title>\n<meta name=\"description\" content=\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security &amp; Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Watchdog Defends Against New LLM Jailbreak Method 2026 | ThreatsHub Cybersecurity News\" \/>\n<meta property=\"og:description\" content=\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security &amp; Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/\" \/>\n<meta property=\"og:site_name\" content=\"ThreatsHub Cybersecurity News\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-16T14:03:46+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/files.scmagazine.com\/wp-content\/uploads\/2024\/04\/AdobeStock_694405008_Editorial_Use_Only.jpg\" \/>\n<meta name=\"author\" content=\"TH Author\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@threatshub\" \/>\n<meta name=\"twitter:site\" content=\"@threatshub\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"TH Author\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/\"},\"author\":{\"name\":\"TH Author\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476\"},\"headline\":\"AI Watchdog Defends Against New LLM Jailbreak Method\",\"datePublished\":\"2024-04-16T14:03:46+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/\"},\"wordCount\":610,\"publisher\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg\",\"keywords\":[\"headline,hacker,microsoft,flaw\"],\"articleSection\":[\"CyberSecurity Blogs\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/\",\"url\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/\",\"name\":\"AI Watchdog Defends Against New LLM Jailbreak Method 2026 | ThreatsHub Cybersecurity News\",\"isPartOf\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg\",\"datePublished\":\"2024-04-16T14:03:46+00:00\",\"description\":\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage\",\"url\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg\",\"contentUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg\",\"width\":800,\"height\":386},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.threatshub.org\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"headline,hacker,microsoft,flaw\",\"item\":\"https:\/\/www.threatshub.org\/blog\/tag\/headlinehackermicrosoftflaw\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"AI Watchdog Defends Against New LLM Jailbreak Method\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#website\",\"url\":\"https:\/\/www.threatshub.org\/blog\/\",\"name\":\"ThreatsHub Cybersecurity News\",\"description\":\"%%focuskw%% Threat Intel \u2013 Threat Intel Services \u2013 CyberIntelligence \u2013 Cyber Threat Intelligence - Threat Intelligence Feeds - Threat Intelligence Reports - CyberSecurity Report \u2013 Cyber Security PDF \u2013 Cybersecurity Trends - Cloud Sandbox \u2013- Threat IntelligencePortal \u2013 Incident Response \u2013 Threat Hunting \u2013 IOC - Yara - Security Operations Center \u2013 SecurityOperation Center \u2013 Security SOC \u2013 SOC Services - Advanced Threat - Threat Detection - TargetedAttack \u2013 APT \u2013 Anti-APT \u2013 Advanced Protection \u2013 Cyber Security Services \u2013 Cybersecurity Services -Threat Intelligence Platform\",\"publisher\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/#organization\"},\"alternateName\":\"Threatshub.org\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.threatshub.org\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#organization\",\"name\":\"ThreatsHub.org\",\"alternateName\":\"Threatshub.org\",\"url\":\"https:\/\/www.threatshub.org\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg\",\"contentUrl\":\"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg\",\"width\":432,\"height\":435,\"caption\":\"ThreatsHub.org\"},\"image\":{\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/threatshub\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476\",\"name\":\"TH Author\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g\",\"caption\":\"TH Author\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI Watchdog Defends Against New LLM Jailbreak Method 2026 | ThreatsHub Cybersecurity News","description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/","og_locale":"en_US","og_type":"article","og_title":"AI Watchdog Defends Against New LLM Jailbreak Method 2026 | ThreatsHub Cybersecurity News","og_description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.","og_url":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/","og_site_name":"ThreatsHub Cybersecurity News","article_published_time":"2024-04-16T14:03:46+00:00","og_image":[{"url":"https:\/\/files.scmagazine.com\/wp-content\/uploads\/2024\/04\/AdobeStock_694405008_Editorial_Use_Only.jpg","type":"","width":"","height":""}],"author":"TH Author","twitter_card":"summary_large_image","twitter_creator":"@threatshub","twitter_site":"@threatshub","twitter_misc":{"Written by":"TH Author","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#article","isPartOf":{"@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/"},"author":{"name":"TH Author","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476"},"headline":"AI Watchdog Defends Against New LLM Jailbreak Method","datePublished":"2024-04-16T14:03:46+00:00","mainEntityOfPage":{"@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/"},"wordCount":610,"publisher":{"@id":"https:\/\/www.threatshub.org\/blog\/#organization"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage"},"thumbnailUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg","keywords":["headline,hacker,microsoft,flaw"],"articleSection":["CyberSecurity Blogs"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/","url":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/","name":"AI Watchdog Defends Against New LLM Jailbreak Method 2026 | ThreatsHub Cybersecurity News","isPartOf":{"@id":"https:\/\/www.threatshub.org\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage"},"thumbnailUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg","datePublished":"2024-04-16T14:03:46+00:00","description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.","breadcrumb":{"@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#primaryimage","url":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg","contentUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2024\/04\/ai-watchdog-defends-against-new-llm-jailbreak-method.jpg","width":800,"height":386},{"@type":"BreadcrumbList","@id":"https:\/\/www.threatshub.org\/blog\/ai-watchdog-defends-against-new-llm-jailbreak-method\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.threatshub.org\/blog\/"},{"@type":"ListItem","position":2,"name":"headline,hacker,microsoft,flaw","item":"https:\/\/www.threatshub.org\/blog\/tag\/headlinehackermicrosoftflaw\/"},{"@type":"ListItem","position":3,"name":"AI Watchdog Defends Against New LLM Jailbreak Method"}]},{"@type":"WebSite","@id":"https:\/\/www.threatshub.org\/blog\/#website","url":"https:\/\/www.threatshub.org\/blog\/","name":"ThreatsHub Cybersecurity News","description":"%%focuskw%% Threat Intel \u2013 Threat Intel Services \u2013 CyberIntelligence \u2013 Cyber Threat Intelligence - Threat Intelligence Feeds - Threat Intelligence Reports - CyberSecurity Report \u2013 Cyber Security PDF \u2013 Cybersecurity Trends - Cloud Sandbox \u2013- Threat IntelligencePortal \u2013 Incident Response \u2013 Threat Hunting \u2013 IOC - Yara - Security Operations Center \u2013 SecurityOperation Center \u2013 Security SOC \u2013 SOC Services - Advanced Threat - Threat Detection - TargetedAttack \u2013 APT \u2013 Anti-APT \u2013 Advanced Protection \u2013 Cyber Security Services \u2013 Cybersecurity Services -Threat Intelligence Platform","publisher":{"@id":"https:\/\/www.threatshub.org\/blog\/#organization"},"alternateName":"Threatshub.org","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.threatshub.org\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.threatshub.org\/blog\/#organization","name":"ThreatsHub.org","alternateName":"Threatshub.org","url":"https:\/\/www.threatshub.org\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg","contentUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg","width":432,"height":435,"caption":"ThreatsHub.org"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/threatshub"]},{"@type":"Person","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476","name":"TH Author","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g","caption":"TH Author"}}]}},"_links":{"self":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts\/55856","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/comments?post=55856"}],"version-history":[{"count":0,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts\/55856\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/media\/55857"}],"wp:attachment":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/media?parent=55856"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/categories?post=55856"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/tags?post=55856"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}