Taking the art of email security to the next level

Sponsored Feature Email is a popular target for cybercriminals, offering an easy way of launching an attack disguised as an innocent message. One moment of inattention on the part of the recipient and the door is open to malware, spam, phishing, perhaps even a dose of the dreaded ransomware. Entire organisations can suffer, not just individual victims.

Recent research from Deloitte found that 91 percent of all cyberattacks start with an email that appears to come from a trusted source but is actually sent by a bad actor intent on theft or damage on an enterprise scale.

You might imagine from the considerable number of email security tools on the market that the problem is being kept relatively in check, but you’d be wrong. The threat is actually getting worse, not better. That’s because the bad guys are deploying new and more sophisticated methods, and traditional defences have not caught up. Too many existing email security tools are based around analysis of historical attack data, so can only spot what they’ve seen before. By the time IT departments become aware of a new type of threat, it may already have been active for several days.

One particularly ominous new trend is the increasing use of artificial intelligence (AI) by attackers to enhance the believability of malicious emails. Carlos Gray is Cybersecurity Product Specialist at Darktrace. He says the company has noted a substantial uptick in AI-driven, email-borne attacks dating back to the 2022 launch of ChatGPT.

“We haven’t recorded AI being used by attackers at this scale before,” he says. “In the last three months we’ve seen something like a 135 percent increase in attacks that use social engineering. These attacks are more personalised and sophisticated than we’ve seen, with a greater accuracy of language usage, all coinciding with the advent of ChatGPT.”

Gray points out that attacks based on social engineering – the psychological manipulation of victims driving them to carry out actions or offer up confidential data – are hard to pick up with conventional email security tools, as they don’t always have classic payloads in the form of links and attachments that give them away.

But not only is AI great at mimicking genuine human interactions, it’s also very easy for criminals to adopt and deploy: “Personalised, well thought out attacks backed by careful reconnaissance used to be quite rare because they required massive resources,” Gray explains. “AI and other technologies have lowered the threshold for the launch of these sophisticated attacks. Weaknesses that were becoming apparent in existing email security have been accelerated by AI. It is easier and cheaper for criminals from all over the world to get involved.”

Off the shelf hacking tools proliferate

The black economy of cybercrime increasingly functions like modern digitally-focussed commerce, with tools available on-demand and as-a-service. The means of enabling social engineering, phishing and ransomware are all available off the shelf. There has never been an easier time to break into the cyberthreat sector.

“There’s not just increased sophistication in terms of the content of these emails, but also in their delivery,” says Gray. “We’re seeing abuse of legitimate services like SharePoint and Mailchimp and the use of these services to conceal nefarious intentions.”

It comes as a relief then to find out that AI is also a tool in the armoury of the good guys: “AI offers defenders the opportunity to check every email independently, comparing it to the expected behaviour of a sender or recipient,” explains Gray. “Even if a trusted third party has been compromised, your analysis will succeed if it is based around a zero trust mentality.”

AI can be thrown at Big Data to reduce the false positives that distort the results of traditional analytics. It can also be used to analyse attacks to see if patterns of normal and abnormal behaviour can be identified and used in the detection process.

“Instead of just looking at attacks and attackers, defences need to be focussed on users,” argues Gray. “With the application of AI, anything unfamiliar should automatically become suspicious and a potential threat. You can’t simply establish ‘normal’ patterns of behaviour once and treat them as a fixed baseline. Everything must be continuously learned, and AI is good at that.”
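The idea of a continuously learned baseline that treats anything unfamiliar as suspicious can be illustrated with a deliberately simple sketch. This is a toy model invented for illustration, not Darktrace’s actual algorithm: it tracks which senders a user normally hears from and flags any sender whose share of observed traffic falls below an assumed novelty threshold.

```python
# Illustrative sketch only: a toy, continuously updated per-user sender
# baseline. The class name and threshold are invented for this example
# and do not represent any vendor's real detection model.
from collections import defaultdict


class SenderBaseline:
    """Tracks which senders a user normally hears from and flags novelty."""

    def __init__(self, novelty_threshold=0.05):
        self.counts = defaultdict(int)   # sender -> messages observed
        self.total = 0
        self.novelty_threshold = novelty_threshold

    def observe(self, sender):
        """Continuous learning: every message updates the baseline."""
        self.counts[sender] += 1
        self.total += 1

    def is_suspicious(self, sender):
        """Anything unfamiliar is suspicious until the baseline learns it."""
        if self.total == 0:
            return True
        familiarity = self.counts[sender] / self.total
        return familiarity < self.novelty_threshold


baseline = SenderBaseline()
for _ in range(50):
    baseline.observe("colleague@example.com")
baseline.observe("stranger@evil.example")

print(baseline.is_suspicious("colleague@example.com"))  # False
print(baseline.is_suspicious("stranger@evil.example"))  # True
```

A real system would of course model far richer signals (tone, timing, links, login context) and decay old observations, but the shape is the same: the baseline is never frozen, it is re-learned with every message.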

Complete email security needs to go well beyond simply scrutinising what’s coming into email inboxes: “You must look at the wider context,” Gray continues. “In order to protect the user, you need to protect the environment they live in – their inbox, but also their Microsoft accounts, their cloud applications. You need to be collecting intelligence on all of those data sources, leveraging that information and feeding it into email threat detection. Threats are not simply contained within emails, but wherever users are found. The entire user lifecycle needs to be understood.”

Going deep to combat complex cyberthreats

Only the right kind of deep behavioural analysis will improve detection in the face of vastly more complex threats. Appropriate email security will not only allow defenders to understand full intelligence around the user but also be sufficiently granular to balance productivity with risk mitigation: “You can’t just block everything because then you just stop the business,” points out Gray. “You must take very precise actions, for example flagging up and removing a suspicious attachment but allowing the email through.”
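Gray’s example of a “precise action” — removing a suspicious attachment while still delivering the email — can be sketched as a simple triage function. Everything here (the function name, the risky-extension list, the flagging convention) is a hypothetical illustration of the principle, not a real product API:

```python
# Hypothetical sketch of "precise actions": rather than blocking a whole
# email, strip only the risky parts and deliver the rest. The extension
# list and flagging convention are invented for illustration.

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr"}


def triage(email):
    """Return (delivered_email, actions_taken) instead of a binary block."""
    actions = []
    kept = []
    for name in email.get("attachments", []):
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            actions.append(f"stripped attachment: {name}")
        else:
            kept.append(name)
    delivered = dict(email, attachments=kept)
    if actions:
        delivered["subject"] = "[flagged] " + email["subject"]
    return delivered, actions


msg = {"subject": "Invoice", "attachments": ["invoice.pdf", "payload.exe"]}
clean, actions = triage(msg)
print(clean["attachments"])  # ['invoice.pdf']
print(actions)               # ['stripped attachment: payload.exe']
```

The design point is the granularity: the business-critical message still arrives, only the risk is removed, which is exactly the productivity-versus-mitigation balance the article describes.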

Darktrace’s own unique AI-led approach, used by over 3,000 organisations worldwide, brings every piece of the jigsaw together to provide defenders with one set of intelligence. It is special in providing connectivity between many different environments, allowing for highly precise actions to be taken. With Darktrace, defenders are able to detect new types of email attack on average 13 days earlier than old school email security tools that are trained only on knowledge of past threats, estimates the company.

Other solutions, including native, cloud and ‘static AI’ tools, are likely to leave defenders vulnerable for lengthy periods if relied on. That’s plenty of time for bad actors to get in, do their damage and move on to the next victim before anything is detected.

“Darktrace aims to switch the conversation from ‘Should I use AI in email security’, to ‘What AI should I use and how should I deploy it’,” says Gray. “Attackers will continue to use AI, so defenders must use it in more innovative and creative ways to counter that. An AI-native solution like ours is constantly learning and correcting itself. Traditional security takes hours and hours to build a library of information. We offer a way to reduce the needed man hours, important in a landscape where skills are in short supply.”

New capabilities recently added to Darktrace combine account takeover and email protection in a single product. By analysing behaviour, Darktrace can also detect misdirected emails, preventing intellectual property or confidential information from being sent to the wrong recipient in error. It has also perfected a way to leverage insights at the level of individual employees to inform its AI and bring Darktrace’s power to users, providing real-time, in-context insights and security awareness for all.

Intelligent mail management is another new feature, improving productivity by filtering out graymail, spam and the kind of unwanted newsletters that can clutter inboxes. Optimised workflows and integrations for security teams are now included in the Darktrace mobile app, and automated investigations of email incidents are bundled with other coverage areas in Darktrace’s Cyber AI Analyst.

On the new frontier of email security, AI is rapidly emerging as the main weapon in the hands of both attackers and defenders. By ensuring they make full use of it, organisations can master the art of email security and avoid learning the hard way what it means to rely on old solutions to new problems.

Sponsored by Darktrace.
