Trusted execution environments shield proprietary data against the very cloud providers that host it. See how confidential computing works today.
Today’s tech industry needs to stay a step ahead of attackers. Confidential computing is part of that conversation, but as with edge computing, there is some confusion over what it actually means.
AWS defines it as hardware and firmware that separate an inside, often customer data, from an outside, often the cloud provider. It includes elements of tiered zero trust, allowing organizations that work with a cloud provider to further segment data according to its security needs. It can secure data in use and strike a balance between collaboration and data ownership.
SEE: Hiring Kit: Cloud Engineer (TechRepublic Premium)
Because we’re talking about current-generation tech, let’s use this definition: Confidential computing is an initiative to create more secure, hardware-based execution environments, most often used to protect data in use across multiple environments.
Protecting data at rest or in transit is generally considered easier than protecting data in use. According to IEEE, the problem is a paradox: data has to be exposed in order to be processed, so how do you stop malware from sneaking in at the “in use” stage? The answer is a trusted execution environment (TEE), which provides real-time, hardware-level encryption accessible only to approved code.
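To see that paradox concretely, here is a minimal Python sketch (using the third-party cryptography package; the record contents are made up) of why encryption at rest alone doesn’t cover the “in use” stage: the data has to be decrypted into ordinary memory before anything can be computed on it.

```python
# Minimal sketch of the data-in-use gap; this is not a real TEE.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # key protecting the data at rest / in transit
box = Fernet(key)

record = b"patient_id=123,result=negative"   # hypothetical sensitive record
stored = box.encrypt(record)                  # safe at rest: only ciphertext is written

# To process the record, the plaintext has to exist somewhere in memory.
plaintext = box.decrypt(stored)               # the "in use" window attackers target
field_count = plaintext.count(b",") + 1       # any computation runs on exposed data
print(field_count)
```

In a TEE, that decrypt-and-process step happens inside hardware-isolated, encrypted memory rather than ordinary RAM.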
History of confidential computing
In 2020, the Confidential Computing Consortium convened its Technical Advisory Council to set out standards, with input from companies like Meta, Google, Huawei, IBM, Microsoft and Tencent.
At that time, the idea was that by isolating protected data, confidential computing could allow different organizations to share data sets without granting full access, or it could cut down on energy needs because high-bandwidth or high-latency data like video could be stored in the TEE rather than locally.
The TEE is a secure area within a CPU, isolated by embedded encryption keys that only authorized application code can access. During computation and decryption, the data is invisible even to the operating system or hypervisor. Along with protecting proprietary business logic and applications, a TEE is also a possible home for analytics functions and AI/ML algorithms.
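The “authorized code only” part typically rests on attestation: the hardware measures (hashes) the code asking for access and releases keys only if that measurement matches an expected value. The Python sketch below is a conceptual illustration of that check, with made-up names; real TEEs such as Intel SGX or AMD SEV-SNP perform it in hardware with signed attestation reports.

```python
# Conceptual sketch of TEE-style code measurement; not a real enclave API.
import hashlib
import hmac
from typing import Optional

# Measurement of the one binary that is allowed to touch the secret.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved_enclave_binary_v1").hexdigest()

def release_key_if_trusted(code_blob: bytes, sealed_key: bytes) -> Optional[bytes]:
    """Hand the decryption key only to code whose hash matches the allow-list."""
    measurement = hashlib.sha256(code_blob).hexdigest()
    if hmac.compare_digest(measurement, EXPECTED_MEASUREMENT):  # constant-time compare
        return sealed_key            # approved code: data can be decrypted and used
    return None                      # anything else: the data stays encrypted

print(release_key_if_trusted(b"approved_enclave_binary_v1", b"secret-key"))  # b'secret-key'
print(release_key_if_trusted(b"tampered_binary", b"secret-key"))             # None
```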
For cloud providers that also offer confidential computing, one goal is reassurance: customers can trust that the provider itself cannot see their proprietary information.
How does confidential computing work?
There are as many ways confidential computing can work as there are companies building it, but recall the definition noted above. Google Cloud uses Confidential VMs, built on the Secure Encrypted Virtualization extension of 3rd Gen AMD EPYC CPUs. Data remains encrypted in memory with dedicated, node-specific keys that are generated and managed by the processor: the keys are created within the hardware when the node is created and never leave that hardware.
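For a sense of what that looks like in practice, here is a hedged sketch of requesting a Confidential VM with the google-cloud-compute Python client. The project, zone, machine type and image values are placeholders, and the generated-client field names (notably ConfidentialInstanceConfig and enable_confidential_compute, which mirror the public Compute Engine API) should be checked against current documentation.

```python
# Hedged sketch: requesting a Confidential VM via the google-cloud-compute client.
# pip install google-cloud-compute; values below are placeholders, not recommendations.
from google.cloud import compute_v1

def create_confidential_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        # Confidential VMs require an AMD EPYC (SEV-capable) machine family such as N2D.
        machine_type=f"zones/{zone}/machineTypes/n2d-standard-2",
        confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
            enable_confidential_compute=True,  # memory encrypted with CPU-generated keys
        ),
        # SEV-encrypted guests can't live-migrate, so maintenance must terminate the VM.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts",
                ),
            )
        ],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    compute_v1.InstancesClient().insert(project=project, zone=zone, instance_resource=instance)

# create_confidential_vm("my-project", "us-central1-a", "confidential-vm-demo")
```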
Today, IBM claims to be on the fourth generation of its confidential computing products, which began with IBM Cloud’s Hyper Protect Services and Data Shield in 2018. The centerpiece of Hyper Protect Services is a FIPS 140-2 Level 4-certified cloud hardware security module. Both products are rated for regulations such as HIPAA, GDPR, ISO 27K and more.
IBM also offers HPC Cluster, a portion of IBM Cloud where customers’ clusters are made confidential through “bring your own encrypted operating system” and “keep your own key” capabilities. IBM’s Secure Execution for Linux allows customers to host a high volume of Linux workloads within a TEE.
AWS’s Nitro System undergirds its Elastic Compute Cloud (EC2) services, an infrastructure-on-demand offering that by nature requires some walls and doors between Amazon and the customer using it. Nitro provides those walls and doors through a dedicated security chip that cryptographically measures and validates the system.
Intel’s Software Guard Extensions (SGX) anchor the company’s hardware-based security. In 2021, Intel focused on providing TEE services tailored for healthcare, finance and government.
Microsoft Azure also offers confidential virtual machines as well as confidential Kubernetes containers. Its TEEs form the backbone of Azure confidential ledger, a “tamperproof, unstructured” data store verified using blockchain. Tampering shows up immediately in the trusted computing base, Microsoft says. A hardware root of trust digitally signs each transaction within the confidential layer, and certificate-based authorization ensures the cloud provider can’t see into the data hosted there.
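That tamper evidence comes from the append-only, hash-chained structure of the ledger. The Python sketch below is a generic illustration of why altering any past record breaks verification; it is not the Azure SDK, and in the real service each entry is additionally signed by the hardware root of trust.

```python
# Generic illustration of a tamper-evident, hash-chained ledger (not Azure's API).
import hashlib

def append(ledger: list[dict], payload: str) -> None:
    prev = ledger[-1]["digest"] if ledger else "genesis"
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    ledger.append({"payload": payload, "digest": digest})

def verify(ledger: list[dict]) -> bool:
    prev = "genesis"
    for entry in ledger:
        expected = hashlib.sha256((prev + entry["payload"]).encode()).hexdigest()
        if expected != entry["digest"]:
            return False             # tampering breaks every later link
        prev = entry["digest"]
    return True

ledger: list[dict] = []
append(ledger, "tx-1: grant access")
append(ledger, "tx-2: revoke access")
print(verify(ledger))                # True
ledger[0]["payload"] = "tx-1: grant ADMIN access"
print(verify(ledger))                # False: the chain no longer verifies
```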
What’s next for confidential computing?
Confidential computing has a lot of crossover with other cloud services and security methods, such as blockchain. Is it a revolutionary initiative, or a hodgepodge of existing current-generation security considerations rolled up into a term that fits neatly on a budget line?
While there isn’t anything wrong with making it easier for the higher-ups to understand what you’re doing with the IT budget, there are also plenty of hackers, whatever the color of their hats, taking a close look at TEEs.
The Confidential Computing Consortium is also growing. A 2021 market study from the Everest Group and the Consortium predicted that the confidential computing market would grow to $54 billion by 2026.
“While the adoption of confidential computing is in the relatively nascent stage, our research reveals growth potential not only for enterprises consuming it but also for the technology and service providers enabling it,” said Abhishek Mundra, practice director at Everest Research.
TechRepublic recently noted confidential computing as one of 7 trends dominating infrastructure innovation. For more, see Intel’s new independent trust assurance initiative and how the latest version of Ubuntu supports confidential computing.