Generative AI: What Every CISO Needs to Know

The ‘disruptive’ part of disruptive innovations often comes from the unexpected consequences they bring. The printing press made it easy to copy text, but in doing so re-wove the social, political, economic, and religious fabric of Europe. By revolutionizing human mobility, the car reshaped community design, spawning suburbs and a 20th-century driving culture. More recently, the world wide web completely transformed how people connect with each other and access information, reframing questions of privacy, geopolitical boundaries, and free speech in the process.

Generative AI seems poised to be every bit as transformative as all of these, with large language model chatbots like ChatGPT and Google Bard and image generators like DALL-E capturing outsize interest in the span of just months.

Given the rapid uptake of these tools, CISOs urgently need to understand the associated cybersecurity risks—and how those risks are radically different from any that have come before.

Unbridled uptake

To say companies are excited by the possibilities of generative AI is a massive understatement. According to one survey, just six months after the public launch of ChatGPT, 49% of businesses said they were already using it, 30% said they planned to use it, and 93% of early adopters intended to use it more.

What for? Everything from writing documents and generating computer code to carrying out customer service interactions. And that’s barely scratching the surface of what’s to come. Proponents claim AI will help solve complex problems like climate change and improve human health, for example by accelerating radiology workflows and making X-ray, CT, and MRI interpretation more accurate, with fewer false positives and better outcomes.

Yet any new technology brings risks, including novel vulnerabilities and attack modalities. Amid all the noise and confusion surrounding AI today, those risks are not yet well understood.

What makes generative AI different?

Machine learning (ML) and early forms of AI have been with us for some time. Self-driving cars, stock trading systems, logistics solutions, and more are powered today by some combination of ML and AI. In security solutions like XDR, ML identifies patterns and benchmarks behaviors, making anomalies more detectable. AI then acts as a watchdog, monitoring activity and sniffing out potential threats against that ML baseline of normal, non-threatening behavior, triggering automated responses when needed.
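To illustrate the pattern, here is a minimal sketch of that baseline-and-flag approach. It assumes scikit-learn and an invented set of login-event features; it is not drawn from any particular XDR product.

```python
# Minimal sketch: learn "normal" behavior with an unsupervised model,
# then flag outliers for automated follow-up. Feature names, values,
# and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical login-event features: [hour_of_day, failed_attempts, mb_transferred]
normal_activity = np.array([
    [9, 0, 12.4], [10, 1, 8.7], [14, 0, 15.2], [11, 0, 9.9], [16, 1, 11.3],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)  # benchmark what routine activity looks like

new_events = np.array([
    [10, 0, 10.1],   # looks routine
    [3, 9, 980.0],   # 3 a.m., many failed logins, large transfer
])

for event, label in zip(new_events, model.predict(new_events)):
    if label == -1:  # -1 means the model treats this event as an outlier
        print(f"ALERT: anomalous event {event} -> trigger automated response")
    else:
        print(f"OK: event {event} within learned baseline")
```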

But ML and simpler forms of AI are ultimately limited to working with the data that’s been fed into them. Generative AI is different because its algorithms aren’t necessarily fixed or static as they usually are in ML: they often keep evolving, building on the system’s past ‘experiences’ as part of its learning, which allows it to create completely new information.

Up to now, bad actors have largely avoided ML and more limited forms of AI because their outputs aren’t especially valuable for exploitation. But combining the data-processing capacity of ML with the creativity of generative AI makes for a far more compelling attack tool.

Security risks: Key questions

The British mathematician and computer scientist Alan Turing conceived of a test in the 1950s to see whether a sufficiently advanced computer could be mistaken for a human in natural language conversation. Google’s LaMDA AI system was reported to have passed a version of that test in 2022, highlighting one of the major security concerns about generative AI: its ability to imitate human communication.

That capability makes it a powerful tool for phishing schemes, which up to now have relied on phony messages often rife with spelling mistakes. AI-created phishing texts and emails, on the other hand, are polished and error-free and can even emulate a known sender such as a company CEO issuing instructions to her team. Deepfake technologies take this a step further, mimicking people’s faces and voices and creating whole ‘scenes’ that never happened.

Generative AI can do this not only on a one-to-one basis but also at scale, interacting with many different users simultaneously to maximize both efficiency and the odds of a successful breach. And behind those phishing schemes could be malicious code, also generated by AI, for use in cyberattacks.

Many companies have piled onto the AI chatbot bandwagon without fully considering the implications for their corporate data—especially sensitive information, competitive secrets, or records governed by privacy legislation. In fact, there are currently no clear protections for confidential information that gets entered into public AI platforms, whether that consists of personal health details provided to schedule a medical appointment or proprietary corporate information run through a chatbot to generate a marketing handout.

Inputs to a public AI chatbot become part of the platform’s experience and could be used in future training. Even if that training is moderated by humans and privacy-protected, conversations still have potential to ‘live’ beyond the initial exchange, meaning corporations do not have full control of their data once it’s been shared.
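One common mitigation is to keep sensitive data from ever reaching a public chatbot in the first place. The sketch below is illustrative only: the regex patterns and the send_to_public_chatbot() stub are assumptions standing in for a vetted DLP or redaction service, not a recommendation of any specific tool.

```python
# Minimal sketch: redact obviously sensitive strings before a prompt leaves
# the corporate boundary. Patterns and the stub function are illustrative.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of known sensitive patterns with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def send_to_public_chatbot(prompt: str) -> None:
    # Placeholder for the actual API call; only the redacted prompt is sent.
    print("Sending:", prompt)

if __name__ == "__main__":
    raw = "Email jane.doe@example.com about the refund to card 4111 1111 1111 1111."
    send_to_public_chatbot(redact(raw))
```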

AI chatbots have famously proven susceptible to so-called hallucinations, generating false information. Reporters from The New York Times asked ChatGPT when their paper first reported on artificial intelligence and the platform conjured up an article from 1956—title and all—that never existed. Taking AI outputs on faith and sharing them with customers, partners, or the public, or building business strategies on them, is clearly a strategic and reputational corporate risk.

Equally concerning is the susceptibility of generative AI to misinformation. All AI platforms are trained on datasets, making the integrity of those datasets vitally important. Increasingly, developers are moving toward using the live, real-time Internet as a continuously updated dataset, putting AI programs at risk of exposure to bad information—either innocently erroneous or else planted maliciously to skew AI outputs, possibly creating safety and security risks.
