{"id":51382,"date":"2023-04-07T14:00:00","date_gmt":"2023-04-07T14:00:00","guid":{"rendered":"https:\/\/www.darkreading.com\/vulnerabilities-threats\/bad-actors-will-use-large-language-models-defenders-can-too"},"modified":"2023-04-07T14:00:00","modified_gmt":"2023-04-07T14:00:00","slug":"bad-actors-will-use-large-language-models-but-defenders-can-too","status":"publish","type":"post","link":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/","title":{"rendered":"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too"},"content":{"rendered":"<div><img decoding=\"async\" src=\"https:\/\/eu-images.contentstack.com\/v3\/assets\/blt66983808af36a8ef\/blt4bddad5870a21606\/61b1908e49813175aa360f05\/cyberattack_Anucha_Cheechang_shutterstock.jpg\" class=\"ff-og-image-inserted\"><\/div>\n<p><span>AI is dominating headlines. <\/span><a href=\"https:\/\/www.darkreading.com\/omdia\/chatgpt-artificial-intelligence-an-upcoming-cybersecurity-threat-\" target=\"_blank\" rel=\"noopener\">ChatGPT<\/a><span>, specifically, has become the topic&nbsp;du jour. Everyone is taken by the novelty, the distraction. But no one is addressing the elephant in the room: how large language models (LLMs) can and will be weaponized.<\/span><\/p>\n<p><span>The Internet has become incredibly large and complex, exposing our most precious crown jewels. Whereas a decade ago a company had a single website, today it may have dozens \u2014 filled with unknown and untracked assets and subsidiaries \u2014 that an attacker can use to exfiltrate IP and\/or breach into their network and systems. 
Recent research we conducted provided an eye-opening glimpse into this reality:<\/span><\/p>\n<ul>\n<li><span>Companies&#8217; attack surfaces fluctuate by 9% in size each month, making security gaps harder to detect.&nbsp;<\/span><\/li>\n<\/ul>\n<ul>\n<li>Organizations have, on average, 104 subsidiaries (i.e., entities owned by a parent company, which might be business units, brands, or standalone companies), and the core security team is unaware of 10 to 31 of them.<\/li>\n<\/ul>\n<ul>\n<li>Invisible or hard-to-detect subsidiaries contain an average of 56% of the critical and high-priority vulnerabilities affecting customer assets.<\/li>\n<\/ul>\n<p><span>In short, companies&#8217; attack surfaces have never been larger or more vulnerable. And security leaders are in constant fear that another issue like <\/span><a href=\"https:\/\/www.darkreading.com\/attacks-breaches\/log4j-vulnerabilities-are-here-to-stay-are-you-prepared-\" target=\"_blank\" rel=\"noopener\">Log4j<\/a><span> is going to cripple their business.<\/span><\/p>\n<p><span>And as if we didn&#8217;t have enough security threats to contend with, large language models like ChatGPT have entered the mainstream, shining a light on language AI as a potential weapon for cyberattacks. Should we be worried? The short answer is yes. But there is a bright side, which I&#8217;ll address later.<\/span><\/p>\n<h2 class=\"regular-text\">Large Language Models Can and Will Be Used Against You<\/h2>\n<p><span>There are several stages of cyberattacks where LLMs can give bad actors a major advantage in scale, scope, reach, and speed. Here are a few:<\/span><\/p>\n<ul>\n<li><span><strong>Automated reconnaissance.<\/strong><\/span><span>&nbsp;Map and discover any assets (devices, files, etc.), subsidiaries, brands, and services associated with your organization. 
Find sensitive information such as exposed credentials in AWS directories.<\/span><\/li>\n<\/ul>\n<ul>\n<li><strong>Vulnerability discovery.<\/strong>&nbsp;Find weaknesses in the targeted network.<\/li>\n<\/ul>\n<ul>\n<li><strong>Exploitation.<\/strong>&nbsp;Initial exploitation uses a technique such as phishing to gain access to a network; targeted exploitation might then use watering-hole attacks to develop and exploit vulnerabilities within the network.&nbsp;<\/li>\n<\/ul>\n<ul>\n<li><strong>Data theft.<\/strong>&nbsp;Copy or exfiltrate sensitive or valuable data from the network.<\/li>\n<\/ul>\n<p><span>Also, consumer applications based on LLMs, most notably ChatGPT, can be used both intentionally and unintentionally by employees to leak company IP, simply by using the free public version. Companies like JPMorgan caught on to this early and&nbsp;<\/span><a href=\"https:\/\/www.cnn.com\/2023\/02\/22\/tech\/jpmorgan-chatgpt-employees\/index.html\" target=\"_blank\" rel=\"noopener\">were swift to ban corporate use of ChatGPT<\/a><span>.<\/span><\/p>\n<p><span>Spear-phishing campaigns provide another use case.&nbsp;High-quality phishing is based on a deep understanding of the target; that is precisely where large language models excel, because they process large volumes of data very quickly and customize messages effectively. Emails created by a large language model can impersonate a boss, co-worker, friend, or reputable organization with increasing precision and believability. 
Since&nbsp;<\/span><a href=\"https:\/\/www.verizon.com\/business\/en-gb\/resources\/reports\/dbir\/\" target=\"_blank\" rel=\"noopener\">82% of data breaches involve a human element<\/a><span>, including phishing and the use of stolen credentials, this will be an area to watch as hackers use LLMs to ramp up such attacks.<\/span><\/p>\n<h2 class=\"regular-text\">Security Teams Can Turn the Tables on Attackers<\/h2>\n<p><span>There is good news: Security teams can also use machine learning and LLMs to do reconnaissance on their own companies and remediate vulnerabilities before attackers get to them. They can use these tools to quickly and cost-effectively scan and map their own attack surfaces deeply to find exposed sensitive assets, personally identifiable information (PII), files, etc. By contrast, performing the same feat with manual methods could take months and\/or cost hundreds of thousands of dollars.<\/span><\/p>\n<p><span>Knowing the business context of any given asset is the only way security teams can effectively prioritize risk \u2014 and machine learning can help. For example, machine learning could recognize that a database holds PII and plays a role in revenue transactions.&nbsp;<\/span><\/p>\n<p><span>Machine learning can also determine the business purpose of an asset, distinguishing between a payment mechanism, a critical database, and a random device \u2014 and classify its risk profile. This context allows exponentially better risk prioritization and a higher level of threat intelligence. Without proper prioritization, security teams confront endless lists of vulnerabilities with labels like Urgent and Critical that are often, in fact, not correct.&nbsp;<\/span><\/p>\n<h2 class=\"regular-text\">Preparing for a New Era of Attacks<\/h2>\n<p><span>There is every reason to expect attackers will make the most of large language models to automate reconnaissance and map your attack surfaces. 
It is time for security teams to embark on yet another learning curve: Find the best, most effective uses of large language models for defensive purposes. Right now, someone somewhere is looking for your organization&#8217;s vulnerabilities, and it&#8217;s just a matter of time before they use this newly popular type of tool to find them.<\/span><\/p>\n<p>Read More <a href=\"https:\/\/www.darkreading.com\/vulnerabilities-threats\/bad-actors-will-use-large-language-models-defenders-can-too\">HERE<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Security teams need to find the best, most effective uses of large language models for defensive purposes.Read More <a href=\"https:\/\/www.darkreading.com\/vulnerabilities-threats\/bad-actors-will-use-large-language-models-defenders-can-too\">HERE<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_layout":"default_layout","footnotes":""},"categories":[151],"tags":[],"class_list":["post-51382","post","type-post","status-publish","format-standard","hentry","category-darkreading-ti"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too 2026 | ThreatsHub Cybersecurity News<\/title>\n<meta name=\"description\" content=\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security &amp; Cyber Threats Analysis Hub. 
100% Free OSINT Threat Intelligent and Cybersecurity News.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too 2026 | ThreatsHub Cybersecurity News\" \/>\n<meta property=\"og:description\" content=\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security &amp; Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/\" \/>\n<meta property=\"og:site_name\" content=\"ThreatsHub Cybersecurity News\" \/>\n<meta property=\"article:published_time\" content=\"2023-04-07T14:00:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/eu-images.contentstack.com\/v3\/assets\/blt66983808af36a8ef\/blt4bddad5870a21606\/61b1908e49813175aa360f05\/cyberattack_Anucha_Cheechang_shutterstock.jpg\" \/>\n<meta name=\"author\" content=\"TH Author\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@threatshub\" \/>\n<meta name=\"twitter:site\" content=\"@threatshub\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"TH Author\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/\"},\"author\":{\"name\":\"TH Author\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#\\\/schema\\\/person\\\/12e0a8671ff89a863584f193e7062476\"},\"headline\":\"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too\",\"datePublished\":\"2023-04-07T14:00:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/\"},\"wordCount\":811,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/eu-images.contentstack.com\\\/v3\\\/assets\\\/blt66983808af36a8ef\\\/blt4bddad5870a21606\\\/61b1908e49813175aa360f05\\\/cyberattack_Anucha_Cheechang_shutterstock.jpg\",\"articleSection\":[\"DarkReading |TI\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/\",\"url\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/\",\"name\":\"Bad Actors Will Use Large Language 
Models \u2014 but Defenders Can, Too 2026 | ThreatsHub Cybersecurity News\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/eu-images.contentstack.com\\\/v3\\\/assets\\\/blt66983808af36a8ef\\\/blt4bddad5870a21606\\\/61b1908e49813175aa360f05\\\/cyberattack_Anucha_Cheechang_shutterstock.jpg\",\"datePublished\":\"2023-04-07T14:00:00+00:00\",\"description\":\"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#primaryimage\",\"url\":\"https:\\\/\\\/eu-images.contentstack.com\\\/v3\\\/assets\\\/blt66983808af36a8ef\\\/blt4bddad5870a21606\\\/61b1908e49813175aa360f05\\\/cyberattack_Anucha_Cheechang_shutterstock.jpg\",\"contentUrl\":\"https:\\\/\\\/eu-images.contentstack.com\\\/v3\\\/assets\\\/blt66983808af36a8ef\\\/blt4bddad5870a21606\\\/61b1908e49813175aa360f05\\\/cyberattack_Anucha_Cheechang_shutterstock.jpg\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/bad-actors-will-use-large-language-models-but-defenders-can-too\\\/#breadcrumb\",\"itemLis
tElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/\",\"name\":\"ThreatsHub Cybersecurity News\",\"description\":\"%%focuskw%% Threat Intel \u2013 Threat Intel Services \u2013 CyberIntelligence \u2013 Cyber Threat Intelligence - Threat Intelligence Feeds - Threat Intelligence Reports - CyberSecurity Report \u2013 Cyber Security PDF \u2013 Cybersecurity Trends - Cloud Sandbox \u2013- Threat IntelligencePortal \u2013 Incident Response \u2013 Threat Hunting \u2013 IOC - Yara - Security Operations Center \u2013 SecurityOperation Center \u2013 Security SOC \u2013 SOC Services - Advanced Threat - Threat Detection - TargetedAttack \u2013 APT \u2013 Anti-APT \u2013 Advanced Protection \u2013 Cyber Security Services \u2013 Cybersecurity Services -Threat Intelligence 
Platform\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#organization\"},\"alternateName\":\"Threatshub.org\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#organization\",\"name\":\"ThreatsHub.org\",\"alternateName\":\"Threatshub.org\",\"url\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/Threatshub_Favicon1.jpg\",\"contentUrl\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/Threatshub_Favicon1.jpg\",\"width\":432,\"height\":435,\"caption\":\"ThreatsHub.org\"},\"image\":{\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/threatshub\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.threatshub.org\\\/blog\\\/#\\\/schema\\\/person\\\/12e0a8671ff89a863584f193e7062476\",\"name\":\"TH Author\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g\",\"caption\":\"TH Author\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too 2026 | ThreatsHub Cybersecurity News","description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/","og_locale":"en_US","og_type":"article","og_title":"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too 2026 | ThreatsHub Cybersecurity News","og_description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.","og_url":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/","og_site_name":"ThreatsHub Cybersecurity News","article_published_time":"2023-04-07T14:00:00+00:00","og_image":[{"url":"https:\/\/eu-images.contentstack.com\/v3\/assets\/blt66983808af36a8ef\/blt4bddad5870a21606\/61b1908e49813175aa360f05\/cyberattack_Anucha_Cheechang_shutterstock.jpg","type":"","width":"","height":""}],"author":"TH Author","twitter_card":"summary_large_image","twitter_creator":"@threatshub","twitter_site":"@threatshub","twitter_misc":{"Written by":"TH Author","Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#article","isPartOf":{"@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/"},"author":{"name":"TH Author","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff89a863584f193e7062476"},"headline":"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too","datePublished":"2023-04-07T14:00:00+00:00","mainEntityOfPage":{"@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/"},"wordCount":811,"commentCount":0,"publisher":{"@id":"https:\/\/www.threatshub.org\/blog\/#organization"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#primaryimage"},"thumbnailUrl":"https:\/\/eu-images.contentstack.com\/v3\/assets\/blt66983808af36a8ef\/blt4bddad5870a21606\/61b1908e49813175aa360f05\/cyberattack_Anucha_Cheechang_shutterstock.jpg","articleSection":["DarkReading |TI"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/","url":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/","name":"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, Too 2026 | ThreatsHub Cybersecurity 
News","isPartOf":{"@id":"https:\/\/www.threatshub.org\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#primaryimage"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#primaryimage"},"thumbnailUrl":"https:\/\/eu-images.contentstack.com\/v3\/assets\/blt66983808af36a8ef\/blt4bddad5870a21606\/61b1908e49813175aa360f05\/cyberattack_Anucha_Cheechang_shutterstock.jpg","datePublished":"2023-04-07T14:00:00+00:00","description":"ThreatsHub Cybersecurity News | ThreatsHub.org | Cloud Security & Cyber Threats Analysis Hub. 100% Free OSINT Threat Intelligent and Cybersecurity News.","breadcrumb":{"@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#primaryimage","url":"https:\/\/eu-images.contentstack.com\/v3\/assets\/blt66983808af36a8ef\/blt4bddad5870a21606\/61b1908e49813175aa360f05\/cyberattack_Anucha_Cheechang_shutterstock.jpg","contentUrl":"https:\/\/eu-images.contentstack.com\/v3\/assets\/blt66983808af36a8ef\/blt4bddad5870a21606\/61b1908e49813175aa360f05\/cyberattack_Anucha_Cheechang_shutterstock.jpg"},{"@type":"BreadcrumbList","@id":"https:\/\/www.threatshub.org\/blog\/bad-actors-will-use-large-language-models-but-defenders-can-too\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.threatshub.org\/blog\/"},{"@type":"ListItem","position":2,"name":"Bad Actors Will Use Large Language Models \u2014 but Defenders Can, 
Too"}]},{"@type":"WebSite","@id":"https:\/\/www.threatshub.org\/blog\/#website","url":"https:\/\/www.threatshub.org\/blog\/","name":"ThreatsHub Cybersecurity News","description":"%%focuskw%% Threat Intel \u2013 Threat Intel Services \u2013 CyberIntelligence \u2013 Cyber Threat Intelligence - Threat Intelligence Feeds - Threat Intelligence Reports - CyberSecurity Report \u2013 Cyber Security PDF \u2013 Cybersecurity Trends - Cloud Sandbox \u2013- Threat IntelligencePortal \u2013 Incident Response \u2013 Threat Hunting \u2013 IOC - Yara - Security Operations Center \u2013 SecurityOperation Center \u2013 Security SOC \u2013 SOC Services - Advanced Threat - Threat Detection - TargetedAttack \u2013 APT \u2013 Anti-APT \u2013 Advanced Protection \u2013 Cyber Security Services \u2013 Cybersecurity Services -Threat Intelligence Platform","publisher":{"@id":"https:\/\/www.threatshub.org\/blog\/#organization"},"alternateName":"Threatshub.org","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.threatshub.org\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.threatshub.org\/blog\/#organization","name":"ThreatsHub.org","alternateName":"Threatshub.org","url":"https:\/\/www.threatshub.org\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg","contentUrl":"https:\/\/www.threatshub.org\/blog\/coredata\/uploads\/2025\/05\/Threatshub_Favicon1.jpg","width":432,"height":435,"caption":"ThreatsHub.org"},"image":{"@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/threatshub"]},{"@type":"Person","@id":"https:\/\/www.threatshub.org\/blog\/#\/schema\/person\/12e0a8671ff8
9a863584f193e7062476","name":"TH Author","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/066276f086d5155df79c850206a779ad368418a844da0182ce43f9cd5b506c3d?s=96&d=mm&r=g","caption":"TH Author"}}]}},"_links":{"self":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts\/51382","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/comments?post=51382"}],"version-history":[{"count":0,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/posts\/51382\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/media?parent=51382"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/categories?post=51382"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.threatshub.org\/blog\/wp-json\/wp\/v2\/tags?post=51382"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}