Exponential risk: The mathematical case for an AI toolkit in enterprise cybersecurity

Keeping IT assets secure has never been more challenging. But breaking down potential vulnerabilities by the numbers can leave even seasoned IT pros breathless.

I reached out to security expert Gaurav Banga, founder of vulnerability assessment and management firm Balbix, to give me the skinny. His threat insights shine a light on the growing use of AI and machine learning tools in the broader digital security strategies of large and mid-sized businesses.

By the numbers

The simple math here is what's so eye-opening. I asked Banga to give me a breakdown of the threat map faced by large organizations, such as Fortune 500 companies.

His answer:

Take just the asset type of line of business (LOB) apps and just the attack vector of shared passwords. A Fortune 500- or 1000-size company will easily have 750+ apps and 1500 users for each app. The risk multiplies to 1M+ potential shared passwords (e.g., a user having the same password for Facebook or LinkedIn as they do for Salesforce.com or Office 365). Imagine the full scale of the ‘attack surface’ when you do similar math for hundreds of asset types (especially when you add in the growing number of IoT, BYO and other non-traditional, non-managed assets, as well as the assets of supply chain, reseller and other business partners) and 260+ attack vectors!
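To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The LOB-app figures are the ones Banga cites; the asset-type, asset-count and vector numbers beyond that are illustrative placeholders, not Balbix data.

```python
# Back-of-the-envelope attack-surface arithmetic using the figures Banga cites.
# Counts beyond the LOB-app example are illustrative placeholders.

lob_apps = 750           # line-of-business apps in a Fortune 500/1000 company
users_per_app = 1500     # users per app
shared_password_points = lob_apps * users_per_app
print(f"Shared-password points for LOB apps alone: {shared_password_points:,}")
# -> 1,125,000, i.e. the "1M+" figure

# Scaling the same logic across the whole environment:
asset_types = 100        # "100s of asset types" -- lower bound
assets_per_type = 1000   # placeholder; varies wildly by asset type
attack_vectors = 260     # "260+ attack vectors"
attack_surface_points = asset_types * assets_per_type * attack_vectors
print(f"Rough attack-surface points: {attack_surface_points:,}")
# -> 26,000,000 -- tens of millions of time-varying factors to analyze
```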

The attack surface he’s describing is gargantuan, something on the order of tens of millions of cybersecurity factors requiring analysis for even small firms. For the big boys (Banga mentioned the U.S. DoD, for example), that attack surface grows to hundreds of billions of time-varying factors, as illustrated in the embedded graph.

[Image: digital-risk.jpg]

The sheer workload involved becomes even more daunting when evaluating breach risk for each of these threat points.

For each attack surface point, four calculations need to be performed to evaluate the breach risk at that point: a) the security configuration, b) the threat level, c) the business impact/criticality of the corresponding asset, and d) the effect of any deployed mitigating controls, such as relevant security products and processes. These calculations yield the effective risk heat map, which identifies areas of concern where action is needed. Using risk rather than raw security posture is very important for aligning security projects and decisions with business objectives.
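As a rough illustration of that per-point calculation, here is a generic sketch (not Balbix's actual scoring model) that combines the four factors into a score and evaluates it over every asset/vector pair; asset names and weights are hypothetical.

```python
# Illustrative breach-risk score for a single attack-surface point.
# Generic sketch only; not Balbix's actual scoring model.

def breach_risk(config_weakness: float,       # a) security configuration (weakness), 0..1
                threat_level: float,          # b) threat level, 0..1
                business_impact: float,       # c) asset criticality, 0..1
                mitigation: float) -> float:  # d) effectiveness of deployed controls, 0..1
    """Higher is riskier; mitigating controls discount the raw risk."""
    return config_weakness * threat_level * business_impact * (1.0 - mitigation)

# The heat map is this score evaluated over every (asset, attack vector) pair.
assets = {"payroll-app": 0.9, "test-server": 0.2}            # business impact
vectors = {"shared-passwords": 0.8, "unpatched-cve": 0.6}    # threat level
heat_map = {(a, v): breach_risk(0.7, threat, impact, 0.5)
            for a, impact in assets.items()
            for v, threat in vectors.items()}
for point, score in sorted(heat_map.items(), key=lambda kv: -kv[1]):
    print(point, round(score, 3))
```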

[Image: heat-map.jpg]

Banga’s conclusion is that enterprises need to build security infrastructure leveraging the power of AI, machine learning, and deep learning to handle the sheer scale of analysis and reporting. A key proviso here is that the analysis has to be relevant to human IT pros. Banga uses the phrase “human consumable.”

AI-enhanced security platforms are needed to find, analyze and recommend how to fix/mitigate the risk in such a large ecosystem and resulting attack surface (all assets x all attack vectors). Human teams, however skilled (and that skill is itself a challenge to hire and retain at scale), cannot address the volume of work required at this scale…and certainly not 7x24x365!

AI-enhanced systems can, and that’s why the ecosystem for automated digital security has been growing so quickly over the past couple of years. Machine learning in particular has become central to the threat-countermeasure strategies of big enterprises.

Proactively, companies are using machine learning to identify important areas in the organization’s extended network that might be vulnerable. “When a new threat begins to emerge,” explains Banga, “for example due to activity in the dark web, or vendor disclosures, machine learning can very quickly (within seconds) identify the risk change for the enterprise and pinpoint the most efficient and prioritized compensating actions or mitigations that are needed to reduce the risk to acceptable levels.”
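A toy version of that prioritization step might look like the following; the risk model, action list and numbers are purely illustrative, not Balbix's method.

```python
# Toy re-prioritization when a vector's threat level jumps.
# Risk model, actions and numbers are illustrative only.

def risk(threat: float, impact: float) -> float:
    return threat * impact

asset_impact = 0.9                   # criticality of the affected asset
old_threat, new_threat = 0.3, 0.8    # e.g. a vendor disclosure raises the threat level
current_risk = risk(new_threat, asset_impact)
print(f"Risk change: +{current_risk - risk(old_threat, asset_impact):.2f}")

# Candidate compensating actions: (name, fraction of risk removed, effort in hours)
actions = [("force password reset", 0.50, 4),
           ("enable MFA", 0.80, 40),
           ("patch affected hosts", 0.90, 120)]

# Prioritize by risk reduced per hour of effort.
for name, reduction, effort in sorted(actions, key=lambda a: a[1] * current_risk / a[2],
                                      reverse=True):
    print(f"{name}: {reduction * current_risk / effort:.4f} risk/hour")
```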

Reactively, machine learning can help identify abnormal behavior across devices and users, which is often a breadcrumb trail to a security compromise.
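One common way to sketch that kind of behavioral anomaly detection is an isolation forest over per-event features; the feature set and data below are hypothetical, and real systems use far richer telemetry.

```python
# Sketch of reactive anomaly detection over login events.
# Feature set and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per event: [hour of day, MB transferred, failed login attempts]
normal = np.column_stack([rng.normal(13, 2, 500),    # daytime logins
                          rng.normal(50, 10, 500),   # typical transfer volume
                          rng.poisson(0.2, 500)])    # occasional failed attempt
suspicious = np.array([[3, 900, 7]])                 # 3 a.m., huge transfer, many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 means the event is flagged as anomalous
```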

To be sure, Banga isn’t proposing taking humans out of the IT security loop. For one thing, there’s still no substitute for human decision making in digital security. For another, it’s not possible to envision an approach that relies on AI alone.

Right now, AI needs a fair amount of help, but not in the way most people think. For example, to deploy AI in an enterprise of any scale, business judgment has to guide where in the network the AI-based system is deployed first and where it comes later.

Human users are needed to provide some level of supervision to AI, particularly when it comes to specifying business impact. For example, AI cannot distinguish the relative business value of two websites that a company may be operating. With a little human supervision, AI can learn these values and propagate business-value numbers to the multitude of IT components each website depends on (switches, routers, domain controllers, DNS servers, SAN arrays, and so on), which are quite difficult for humans to keep track of.
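A minimal sketch of that propagation idea, assuming a hypothetical dependency map and human-assigned website values:

```python
# Sketch: propagate human-assigned business value from websites to the
# infrastructure they depend on. Asset names and values are hypothetical.

dependencies = {
    "store.example.com":   ["router-1", "dns-1", "san-array-2", "domain-controller-1"],
    "careers.example.com": ["router-1", "dns-1"],
}
# Human supervision: relative business value of the two websites.
business_value = {"store.example.com": 0.95, "careers.example.com": 0.30}

# Each component inherits the highest value among the assets that depend on it.
component_value = {}
for site, components in dependencies.items():
    for c in components:
        component_value[c] = max(component_value.get(c, 0.0), business_value[site])

for c, v in sorted(component_value.items(), key=lambda kv: -kv[1]):
    print(f"{c}: {v}")
```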

Last but not least, there is a fair amount of cybersecurity technology that is not AI at all, just labeled AI or ML by marketers. Many such techniques are simply heuristics or rules-based expert systems, repackaged.

Images were provided by Balbix. Thanks to Gaurav Banga for the security insights.
