Shielding the data that drives AI

Sponsored Feature Every organisation must prioritise the protection of mission-critical data, applications and workloads or risk disaster in the face of an ever-widening threat landscape.

So much of our digital activity depends on high levels of reliability, availability, performance and security within our IT infrastructure that any disruption can have potentially serious consequences for a business, for its customers, and for wider society.

If the existential risk posed by a bad data breach were not worry enough, there is also an expanding regulatory framework for managing sensitive data, with serious sanctions for non-compliance.

Adding to the challenge is the fact that today’s enterprise applications routinely store and process information of a sensitive and personal nature, including details of financial transactions and confidential health records, all of which represent classic targets for theft, extortion and sabotage. This necessitates a proactive approach to protecting against cyberattack and data loss and the implementation of security throughout an organisation’s technology stack.

The profound impact of AI

The growing importance of Artificial Intelligence (AI) has magnified the challenge still further. AI is having a game-changing impact on many areas of business, and it is adding a new twist to the cybersecurity narrative in the process. As organisations continue to transform digitally and migrate to the cloud, it is incumbent on them to find new and better ways to protect the data that AI relies on to deliver results and drive value.

“AI lives off data,” explains Stephan Gillich, Director of Artificial Intelligence – GTM in Intel’s AI Center of Excellence. “There is no AI without it. If you handle a lot of data then security and privacy and data protection automatically become important subjects. If you are using data to gain insights, then some of that data is likely to be sensitive, personally identifiable information such as medical records and banking transactions. This could be exploited either on purpose or accidentally. That’s something nobody wants. But you need that data for your insights, so there’s a conflict.”

What’s needed, says Intel, is a way of securing AI data while allowing AI models the freedom they need to deliver results. Confidential computing looks like a solution which can square that circle. It is all about providing a layer of protection around data not only as it moves over a network and between computing systems, but also while it is being processed. Encryption and authentication help to ensure that AI data is accessible only to authorised individuals and systems, keeping even the most sensitive data away from bad actors.
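
To make that idea concrete, the sketch below shows the general principle in Python: a record is encrypted and authenticated before it leaves the trust boundary, so only holders of the key can read it and any tampering is detected. It uses the open-source cryptography package's AES-GCM primitive purely as an illustration; Intel's confidential computing protections are implemented in hardware rather than in application code like this.

```python
# Illustrative only: authenticated encryption of a record before it leaves
# the trust boundary, using AES-GCM from the 'cryptography' package.
# This sketches the general principle described above; it is not Intel's
# hardware-based confidential computing implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(record: bytes, key: bytes, context: bytes) -> tuple[bytes, bytes]:
    """Encrypt and authenticate a record; 'context' binds metadata (e.g. a tenant ID)."""
    nonce = os.urandom(12)                      # unique per message
    ciphertext = AESGCM(key).encrypt(nonce, record, context)
    return nonce, ciphertext

def unprotect(nonce: bytes, ciphertext: bytes, key: bytes, context: bytes) -> bytes:
    """Decrypt; raises InvalidTag if the data or its context was tampered with."""
    return AESGCM(key).decrypt(nonce, ciphertext, context)

key = AESGCM.generate_key(bit_length=256)       # in practice, managed by a KMS or a TEE
nonce, ct = protect(b'{"patient_id": 123, "result": "negative"}', key, b"clinic-A")
assert unprotect(nonce, ct, key, b"clinic-A").startswith(b'{"patient_id"')
```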

This approach allows data to be pooled and shared, not just within an organisation but between different businesses, all the while preserving its confidentiality. This is great for AI-powered applications that thrive on collaboration, many of which are already being used extensively in sectors like healthcare and banking which are subject to stringent data protection rules and regulation.

Protect data in storage, transit and processing

Confidential computing is at the heart of Intel’s product portfolio, explains Gillich. The 4th Generation Intel Xeon processors include built-in security capabilities that help to prevent attackers from stealing high-value information from computer systems.

The chips help to secure and isolate the most sensitive data, and are designed to offer a trusted foundation for any organisation looking to deploy AI-powered applications and workloads while meeting strict security and confidentiality requirements. Intel has also deliberately optimised its 4th Generation Intel Xeon processors to deliver high levels of application performance, platform capability and workload acceleration.

The 4th Generation Intel Xeon Scalable processors feature more built-in accelerators than any other CPU on the market, which help to improve performance efficiency for emerging workloads, especially AI. That includes workloads for safeguarding data, potentially offering a backbone for any zero-trust security strategy. Financial services organisations are already using confidential computing enabled by Intel® Software Guard Extensions (Intel® SGX) to detect anomalies and flag fraud attempts, and it can also be harnessed to improve the accuracy and effectiveness of video surveillance.

Here’s a more detailed rundown of some of the innovations introduced with this latest generation of processors:

– Intel Software Guard Extensions (Intel® SGX): Traditional security measures may be good enough to protect data when it is at rest and in transit, but often fall short of protecting it while it is in active use in memory. Intel SGX helps protect data in use via unique application isolation technology, enforcing isolation at the application level. Its trusted execution environment, called an enclave, only allows trusted, verified application code to access confidential data. This very tight trust boundary is attractive to users with high security requirements because only a minimal amount of code can access the data. Organisations can therefore use Intel SGX to help protect selected code and data from modification by untrusted sources and to prevent access to confidential information. Applications may need to be modified to run inside an Intel SGX enclave, however.

– Intel Trust Domain Extensions (Intel® TDX): This innovation delivers the architectural elements needed to deploy hardware-isolated virtual machines (VMs) called trust domains (TDs). Intel TDX isolates TDs from the virtual machine manager (VMM) or hypervisor and from any other non-TD software on the platform, protecting them from a broad range of software. Because Intel TDX enforces isolation at the VM level, confidential data is not accessible to the host OS, hypervisor or other VMs typically found in multitenant cloud services; it is designed so that only the guest OS and the applications inside the VM can access the data. Although it has a larger trust boundary than Intel SGX, Intel TDX is more amenable to existing applications and rarely requires code changes. (A simple check for which of these technologies a platform exposes is sketched after this list.)
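
As a rough illustration of the difference in deployment models, the sketch below checks which of these technologies a Linux platform appears to expose. The device paths are assumptions based on the in-tree kernel drivers (/dev/sgx_enclave on SGX-capable hosts, /dev/tdx_guest inside a TDX trust domain) and may vary by kernel version and distribution; production software would rely on platform tooling and attestation rather than a check like this.

```python
# Illustrative only: a coarse check for which confidential-computing path a
# host or guest exposes. The device paths are assumptions based on the
# in-tree Linux SGX driver (/dev/sgx_enclave) and TDX guest driver
# (/dev/tdx_guest); they may differ by kernel version and distribution.
import os

def detect_tee() -> str:
    if os.path.exists("/dev/sgx_enclave"):
        return "sgx"        # host can launch Intel SGX enclaves (application-level isolation)
    if os.path.exists("/dev/tdx_guest"):
        return "tdx"        # running inside an Intel TDX trust domain (VM-level isolation)
    return "none"           # fall back to conventional at-rest/in-transit protection

if __name__ == "__main__":
    print(f"confidential computing support detected: {detect_tee()}")
```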

Additional layers of protection

These hardware-embedded data security capabilities are backed up by additional tools and services designed to offer organisations the assurance that data cannot be accessed or tampered with at the system level. Intel remote attestation, for example, allows users to verify whether an enclave is trusted before sharing data with it. Organisations can verify that a platform has the latest security updates and see information about the software running in an enclave, giving granular control and protection at both the enclave and application level.
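
The sketch below reduces that relying-party pattern to its essentials: evidence about the enclave is checked for authenticity, freshness and a known-good measurement before any data or keys are released. It is illustrative only; the evidence format, allow-list and verification key are hypothetical stand-ins, whereas real Intel SGX and TDX attestation verifies Intel-signed quotes, typically through a service such as the Intel Trust Authority described below.

```python
# Illustrative only: the relying-party side of an attestation check, reduced
# to its core pattern. Real Intel SGX/TDX attestation verifies an Intel-signed
# quote (for example via Intel Trust Authority); the evidence fields,
# allow-list and verification key below are hypothetical stand-ins.
import hashlib, hmac, json, os

TRUSTED_MEASUREMENTS = {"a3f1...enclave-build-hash..."}   # enclave builds we trust (hypothetical)
VERIFICATION_KEY = b"shared-verification-key"             # stand-in for a real signing chain

def verify_evidence(evidence_json: bytes, signature: bytes, expected_nonce: bytes) -> bool:
    """Return True only if the evidence is authentic, fresh, and from a trusted build."""
    expected_sig = hmac.new(VERIFICATION_KEY, evidence_json, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected_sig):
        return False                                      # not signed by a trusted party
    evidence = json.loads(evidence_json)
    if evidence.get("nonce") != expected_nonce.hex():
        return False                                      # stale or replayed evidence
    return evidence.get("measurement") in TRUSTED_MEASUREMENTS   # reject unknown enclave builds

# Relying party: generate a fresh nonce, challenge the workload's TEE with it,
# and only release data-encryption keys if verify_evidence(...) returns True.
nonce = os.urandom(16)
# ...send 'nonce' to the enclave, receive (evidence_json, signature) back, then verify...
```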

Intel also recently introduced Intel Trust Authority, an independent attestation service which attests to the validity of compute assets across the network, the edge and the cloud that are running Trusted Execution Environments (TEEs) based on Intel SGX and Intel TDX.

Federated Learning (FL) enables an additional layer of protection designed specifically for sensitive data shared by AI/ML applications and workloads. AI improves through training over vast data sets, and where this involves centralising those data sets in one location, problems can arise, especially when the training involves personal data.

Federated learning is a distributed machine learning approach that supports collaboration between organisations without exposing sensitive data or ML algorithms. It is designed to help securely connect multiple systems and data sets, and to remove barriers that might otherwise prevent data from being analysed collectively. Industries such as retail, manufacturing, healthcare and financial services can use federated learning to gain valuable insights from data.
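
A minimal sketch of the underlying federated averaging idea is shown below, using NumPy and synthetic data: each site trains on data that never leaves its premises, and only the resulting model weights are sent to a coordinator for averaging. Real deployments, for example with Intel's open-source OpenFL framework, add secure aggregation, TEEs and much more on top of this basic loop.

```python
# Illustrative only: federated averaging (FedAvg) on a toy linear model.
# Each site trains locally on data it never shares; only weight updates
# travel to the coordinator, which averages them into a global model.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_site_data(n=200):
    """Synthetic private data held by one hospital/bank/factory site."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    """Plain gradient descent on the site's own data; raw records never leave."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

sites = [make_site_data() for _ in range(4)]    # four collaborating organisations
w_global = np.zeros(3)

for _ in range(10):                             # each round: broadcast, train locally, average
    local_weights = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)   # only weights are aggregated, never data

print("recovered weights:", np.round(w_global, 3))   # approaches [2.0, -1.0, 0.5]
```

The point of the pattern is that insight is shared while raw records stay put, which is what makes it attractive in regulated sectors such as healthcare and banking.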

The deployment of AI demands an ever-evolving approach to securing the whole technology stack, from the software layer down to the bits and bytes, and from the CPUs in the data centre to the nodes and sensors at the network edge. The right architecture is essential to provide a trusted foundation for every organisation, giving them the confidence to deploy AI-powered applications and workloads in accordance with security and compliance requirements.

Sponsored by Intel.

DISCLAIMER: Performance varies by use, configuration and other factors. Learn more on the Performance Index site. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.
