August 12, 2024
It seems that most of the emails, articles, and webinar invites I receive lately have Gen AI and [in]security in the headlines. I was also reading some of the reports coming back from Black Hat, and it seems those same headlines (along with the CrowdStrike debacle) were the major items of interest. Looking around for answers, it seems to me that Confidential Computing comes closest to addressing some of these problems. So I was very happy that an old friend of mine from IBM, Ron Perez, agreed to do an interview / podcast on this topic. You can listen to the podcast here, and/or read the interview below, as Ron discusses the evolution, use cases, and benefits of Confidential Computing. Ron is currently the chair of the Confidential Computing Consortium as well as an Intel Fellow and Chief Security Architect. From the cloud, to the edge, to the endpoint, Confidential Computing is seeing an uptick in adoption and new applications as industry and governments recognize the need for hardware-based separation and attestation to build a trustworthy stack [for AI] – and one that is secure by design.
Spotlight on Mr. Ron Perez
» Title: Intel Fellow and Chief Security Architect at Intel Corporation
» Website: https://www.intel.com/content/www/us/en/security/overview.html
» LinkedIn: https://www.linkedin.com/in/ron-perez-security
Read his bio below.
Chris Daly, Active Cyber™ – Ron, give my visitors some background about yourself and your role at Intel and how you got involved in Confidential Computing.
Ron Perez, Intel Fellow and Chief Security Architect at Intel Corporation – I’ve been with Intel now for seven years. I’m an Intel Fellow and Chief Security Architect, which is a title that I’m a little uncomfortable with. It basically means I have responsibility for Intel’s security technologies from a technology roadmap standpoint, to ensure that we really have a pipeline of compelling and valuable technologies in the security space. We have a number of technology domains and chief architects in those domains, and I’m the one for security. I’ve been involved in what we now call Confidential Computing for quite some time. My background includes over a dozen years at IBM, where you and I met; I worked in IBM Research there. I’ve also spent some time at Advanced Micro Devices (AMD) and a few other places in my career.
I’d probably say that, for me, a lot of the concepts that we now associate with Confidential Computing really first materialized during my time at IBM Research. I was able to advance some of those concepts while at AMD, and certainly now at Intel, where a lot of the technologies that we have – and of course AMD has and IBM has – are really hitting their stride in this Confidential Computing space. You can argue a lot of the initial work in Confidential Computing really began in academia at MIT and other leading universities, as well as at companies like IBM with the original Xbox design and hardware security modules. Later it developed with Arm TrustZone in the embedded space. I think all of those have contributed to what we now call Confidential Computing, which I think we’re going to talk more about.
Active Cyber™ – Yes, I remember the days at IBM with Secure Blue and the Cell processor and Trusted Computing and all that stuff. So it seems to me that’s where a lot of this stuff originated. But you bring a much deeper background and context to that. So let’s talk a little bit about the primary benefits that you see for Confidential Computing. What and who are driving the technology, and what are you seeing on the consumer side in terms of benefits?
Mr. Perez – Arguably a lot of the incentive behind Confidential Computing is cloud computing – this idea of having globally available compute resources that are managed and owned by large providers, almost like a utility, if you will, and available on demand. This concept of having enormous amounts of computing resources available anywhere in the world at any time, day or night, is really compelling. Of course it drives huge efficiencies, which makes it even more compelling. Now, the issue with cloud computing and this globally available computing infrastructure, of course, is that somebody else owns and operates it. That’s a good thing from a cost standpoint, from an efficiency standpoint. But as a workload or data owner, you may have some concerns about whether your data is safe in this environment. Obviously, cloud providers go to great lengths to secure their environments, and because they are operating at that scale, they often do a much better job than a smaller enterprise might do. But it’s still where the money is, to borrow a phrase. If all the valuable data and computing is in the cloud, well, that’s where the bad guys are going to be too. It makes it a very big target. So as a workload provider, as a data owner, you may have some concerns about that.
You might also be restricted in some cases from using this very efficient cloud computing environment because of data security or other security concerns, maybe even regulatory obstacles that will prevent you from moving to this cloud environment. In these cases, I think Confidential Computing provides a unique business case, since it really does try to separate the infrastructure owner and operator from the workload and data owner, and that’s the value. Can you actually provide that separation? And what I mean by that is: can you assure the confidentiality and the integrity of both the data and the actual workload itself – the software that’s running in this shared environment that’s owned and operated by somebody else?
Confidential Computing provides you some assurances that even if the cloud operator is compromised, or there’s some insider threat, they can’t access your data and they can’t change your software – at least not without you being able to detect it. So that’s tremendously powerful. That’s really the value proposition for Confidential Computing. And it’s not just for use in the cloud; there are many cases at the edge and even in the enterprise where Confidential Computing makes sense and brings value.
Active Cyber™ – So Ron, you talk about data protection and isolation, but there are things like hypervisors and containers that can provide some level of isolation, as well as encryption and the like. So what’s special about Confidential Computing that goes above and beyond those typical protection and isolation mechanisms?
Mr. Perez – That’s a great question actually. That is the question, right? As far back as I can remember – and actually going back to the 1960s and 1970s – computing security concepts were first developed as we were developing time-sharing systems. Any sort of resource-sharing computer had to deal with access control, et cetera. And every security model that we have built from those days up to now has been hierarchical: security kernels, secure operating systems, hypervisors – all these separation mechanisms are hierarchical. “Hierarchical” means that you at the highest level – your application and your data running in user mode at the top of the software stack – have to depend on all the software, all the firmware, and all the hardware below, and of course everything alongside as well: all the other applications that are running in this complex computing environment.
The Confidential Computing concept basically carves out a space for you to run your software and process your data that is not dependent on the underlying application software, the operating system, the hypervisor, or anything else in the system – all the hierarchical software. All it depends on is some fundamental capabilities in the hardware. This ability to carve out this space is what we call the trusted execution environment, or TEE, and that’s what Confidential Computing is all about. It’s breaking this hierarchical model that has existed for many, many decades and allowing you to have a much smaller “trusted computing base.” Keeping the trusted computing base small has been a key security challenge, because computing gets more and more complex – it never gets simpler. Basically you have billions of lines of code, including system-level code as well as firmware. Even the hardware designs are getting more complex. That just expands the trusted computing base and makes it more difficult for you to assure the separation of workloads and data. Confidential Computing tries to take a step back and carve out only the pieces that you really do have to pay attention to. And ideally those are a much smaller set of hardware and software. So that’s how it differs. It’s basically breaking the old hierarchical model. With Confidential Computing you have just your software and your data, of course, and the underlying hardware providing the separation mechanism.
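To make the trusted-computing-base point concrete, here is a minimal, purely conceptual sketch in Python (not any Intel API; the layer names are just illustrative) contrasting what a workload must trust in the hierarchical model versus inside a TEE:

```python
# Conceptual sketch only: the layers a workload implicitly trusts in the
# traditional hierarchical model versus inside a hardware-backed TEE.

hierarchical_tcb = [
    "hardware",
    "firmware / BIOS",
    "hypervisor",
    "host operating system",
    "co-resident applications",   # everything running "alongside"
    "my application and data",
]

tee_tcb = [
    "CPU hardware (the root of trust)",
    "my application and data",    # only what I place inside the TEE
]

print(f"hierarchical model: trust {len(hierarchical_tcb)} layers")
print(f"confidential computing: trust {len(tee_tcb)} components")
```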
Active Cyber™ – So with a hardware basis, I guess Confidential Computing lands right into Intel’s sweet spot. So what are Intel’s offerings now with Confidential Computing?
Mr. Perez – Yes, definitely our sweet spot. So from a hardware standpoint – and there’s more than just the hardware – Intel has two technologies. One is kind of the original Software Guard Extensions, or what’s commonly called SGX. I think Intel first productized this technology for their PC client platforms back in 2016. In 2021, it first made its entry onto server platforms, Xeon CPUs, and it’s what some people call application isolation, or I would say it’s really fine-grained separation. So you can essentially take a part of a process – even a single line of code, to take it to an extreme – and isolate just that piece of software and any data that you might want to operate on. It can be that fine-grained – it’s super powerful. The folks who designed it spent many years on it. There are literally hundreds, if not thousands, of research papers that talk about SGX or use SGX to solve problems that were just too difficult or impractical to solve before. So that’s kind of the first instantiation of a truly Confidential Computing capability – Software Guard Extensions, or SGX.
Most recently we’ve come up with another technology called TDX, or Trust Domain Extensions. What TDX does is very similar to a technology that I worked on at AMD, now called SEV or Secure Encrypted Virtualization. It basically takes the abstraction of a virtual machine as the isolation boundary. And the advantage of this is that there are so many workloads, especially in the cloud environment, that today really are packaged and deployed in virtual machines, VMs. So this is an abstraction that we all understand – from a business standpoint, from a software standpoint, from a systems standpoint, we understand what virtual machines are. Hardware has supported virtualization for many years. It’s just easy to use. The potential downside, of course, is that VMs can be very small or they can be very big. There are both pros and cons there, but you can put a tremendous amount of software into a virtual machine, which can be great if you’re really trying to broadly secure your solution, your application, et cetera. But it puts more onus on the workload owner to ensure that everything you’re putting in that now secure, or trusted, virtual machine is secure itself. You can put bad software into a trusted or confidential virtual machine, and it’s still going to be bad software. It’ll still be subject to some attacks. The separation capability that TDX or AMD’s SEV provides is wonderful for protecting the memory – the integrity and the confidentiality of what’s in that virtual machine. But again, if you have software running that’s leaking data, then there’s not a whole lot we can do about that. But those are the two technologies that we have – SGX and TDX – fine-grained isolation and what I’ll call coarser-grained but easy to use.
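As a rough, hypothetical illustration of the trade-off Ron describes (the helper and field names below are made up, not an Intel API), a workload owner’s decision between the two granularities might look like this:

```python
# Illustrative only: choosing between fine-grained enclave isolation (SGX)
# and VM-level isolation (TDX), based on the points made above.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    packaged_as_vm: bool       # already built and deployed as a VM image?
    needs_fine_grained: bool   # isolate just a small piece of code/data?

def pick_isolation(w: Workload) -> str:
    if w.needs_fine_grained:
        return "SGX enclave: fine-grained, smallest possible TCB"
    if w.packaged_as_vm:
        return "TDX trust domain: coarser-grained but easy to adopt"
    return "either; depends on how much software goes inside the TEE"

print(pick_isolation(Workload("db-service", packaged_as_vm=True, needs_fine_grained=False)))
print(pick_isolation(Workload("key-handling", packaged_as_vm=False, needs_fine_grained=True)))
```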
Active Cyber™ – So is there any possibility of actually nesting those two capabilities together in a more complex environment?
Mr. Perez – Yes, there definitely is. We’ve had a number of people ask about that, but nothing to announce today. It is something we’ve looked at – whether and how it would make sense to have those technologies operate together in the same environment. Today, the SGX developers have been around for a while and that ecosystem is still growing. A lot of times they want to run on what we call bare-metal systems – without virtualization, for example – which is where SGX is wonderful. It has no dependency on virtualization, right?
Obviously, TDX being based on that virtual machine abstraction really requires virtualization. So we see a lot more adoption in virtual machine type environments, but as the overall ecosystem grows, we do see customers that are recognizing – “Hey, this virtual machine abstraction is great. It provides separation between my workload and every other workload on the platform, as well as from the cloud provider themselves, but I may actually need even finer-grained separation within my workload. I may want to separate my admins from the data, from my customers’ data.” So those are the types of use cases, I think, that are driving interest in having SGX and TDX work together.
Active Cyber™ – Interesting. Okay. From an adoption perspective, is Confidential Computing expensive and how hard is it to set up and administer? Give me your thoughts on that.
Mr. Perez – I don’t know about “expensive.” There are a number of ways you can qualify that. I think the one thing that most people are interested in is the performance standpoint. Underlying pretty much every Confidential Computing implementation, whether it’s from Intel or somebody else, is encrypted memory, and encryption is not free. So people are concerned about what the performance impact is. For the most part, I would say that it’s a very low overall impact. The kinds of impact you would see are memory latency and bandwidth reductions. And because you can do inline encryption with hardware assist fairly economically and efficiently, we’re seeing single-digit percentage overheads for both latency and bandwidth. And we are trying to drive that down to really low single-digit impact to make it basically a non-issue for anybody who’s really concerned about it. So from an “expense” standpoint, I think that’s becoming less and less of an issue. It’s already a small issue. But the other aspect you asked about is how difficult it is to set up and run.
One of the values of Confidential Computing – and trusted computing has the same concept – is attestation. What our customers want is to have this strong separation capability, but also to be able to verify it. And that’s where attestation comes in: verifying that the software running in this remote environment – software that you don’t have any direct control over – is the right software, that the data you think you’ve put in there is the right data, and that the environment itself is the right one you’re running on. For example, it’s a genuine Intel SGX or TDX system, it’s enabled with the latest patches, its software stack is up to date, et cetera. Only attestation gives you this cryptographically verifiable mechanism. And for that, you do need an attestation infrastructure. There’s a lot of work in open source and in the standards bodies to develop the standards and the data formats to support attestation so that we can get more uniformity across the industry. In our case, we’ve developed a product that helps you manage this attestation information, and I suspect other people will develop similar products as well. But this is something that you need – whether you want to buy it from us or somebody else, or use open source, or build it yourself – because the real value of Confidential Computing really comes out with this attestation capability. So there is some setup and management required, but what you get from it is, I think, so tremendous that hopefully everybody sees the value there.
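For readers new to attestation, here is a minimal sketch of the relying-party checks Ron lists – genuine hardware, the expected software measurement, and an up-to-date platform. The field names and values are hypothetical; real quotes are hardware-signed binary structures verified against vendor certificate chains:

```python
# Sketch of relying-party attestation checks (hypothetical quote format).

EXPECTED_MEASUREMENT = "a3f5..."   # hash of the software you intend to run
MIN_TCB_LEVEL = 7                  # stand-in for "platform is patched"

def verify_attestation(quote: dict) -> bool:
    # 1. Signature must chain back to a genuine hardware root of trust.
    if not quote["signature_chains_to_hw_root"]:
        return False
    # 2. The measured software must be exactly what you expected.
    if quote["measurement"] != EXPECTED_MEASUREMENT:
        return False
    # 3. The platform's security (TCB) level must be current.
    if quote["tcb_level"] < MIN_TCB_LEVEL:
        return False
    return True

# Only after these checks pass would you release keys or data to the TEE.
example_quote = {
    "signature_chains_to_hw_root": True,
    "measurement": "a3f5...",
    "tcb_level": 8,
}
print("release secrets:", verify_attestation(example_quote))
```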
Active Cyber™ – Yes. So did you guys borrow anything from the Trusted Computing Group from a standards perspective or anything else like that when it comes down to attestation?
Mr. Perez – I think the basic concepts certainly come from the Trusted Computing Group – this idea of measuring software before you run it, and a hierarchical model for doing so. In fact, the TPM, or Trusted Platform Module, is the basic capability that came out of the Trusted Computing Group and underlies a lot of everything else. What they call virtual TPMs are pretty widely used today in cloud environments to attest non-Confidential Computing virtual machines, and they are being used with Confidential Computing virtual machines as well. So that’s a way of measuring everything that’s in the Confidential Computing environment, in this trusted execution environment. So we definitely see those two technologies (Confidential Computing and Trusted Computing) working together. You don’t have to trust the entire system – the hypervisor, et cetera, and all the other systems software – but you do want to be able to measure and verify what’s in this trusted execution environment.
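The TPM idea Ron references – measure each piece of software before it runs, chaining the measurements together – can be sketched in a few lines. This mirrors the shape of a PCR “extend” operation; it is an illustration, not TPM code:

```python
# Sketch of TPM-style measured launch: each stage folds a hash of the next
# component into a running value, like a PCR extend.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # PCR_new = H(PCR_old || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCRs start at zero
for stage in [b"firmware", b"bootloader", b"kernel", b"workload"]:
    pcr = extend(pcr, stage)

# A verifier comparing this value against a known-good chain learns whether
# any stage was altered, without trusting the stages themselves.
print("final measurement:", pcr.hex())
```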
Active Cyber™ – Nice. Sounds great. So we talked about cloud processing as the big use case, the uber use case for Confidential Computing. There are some other ones in my mind that we can talk about as well. One – and you’ve kind of mentioned it already a little bit – has to do with authenticity of content. I guess this use case would be driven by the attestation environment that you have, but also by the memory protections that you provide. Would you give me your thoughts on authenticity of content and how Confidential Computing plays a role there?
Mr. Perez – Yes, authenticity. I would put integrity in that same category – anywhere that you need or want to ensure that the right thing is running, in the right configuration and the right environment. That’s where Confidential Computing helps, I think. So does trusted computing – I will say that too. But one example where authenticity and Confidential Computing come together is federated machine learning. This is a non-cloud use case scenario where you may just have too much data in the enterprises, and you want to create a machine learning model based on, for example, medical research data.
In this use case a number of universities and medical institutions get together and they really want to build a good model to detect and prevent cancer or other diseases, but in some cases they can’t share the data they have. Maybe there are regulatory issues around sharing, or they just have too much data and they can’t be shipping it around. So you want to be able to use federated machine learning in this environment, where you run the training on the data locally, in the medical institutions themselves, and then you aggregate the model in some centralized facility. But each participant in the model development wants to make sure that the training process everyone is running is using all the right software. So you want to make sure that nobody’s cheating, or at least that nobody’s been compromised. And I think running Confidential Computing in those edge computing use cases at the medical institutions is a perfect example of that. You’re doing it because you want that authenticity, that integrity assurance – not necessarily because you’re worried that somebody in the hospital will hack the data there; that may not be your primary concern. It’s about ensuring the integrity of the data and the environment so that the end model also has some authenticity and integrity provenance, if you will.
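A hypothetical sketch of that federated flow (the measurement values and helpers are invented for illustration; real deployments would verify hardware-signed attestation evidence):

```python
# Sketch: each hospital trains inside its own TEE and ships only a model
# update plus attestation; the aggregator accepts updates only from the
# approved training code, so raw patient data never leaves the site.

APPROVED_TRAINER = "9c41..."   # hash of the training code everyone agreed on

def site_update(site: str) -> dict:
    model_delta = [0.01, -0.02]                     # illustrative numbers
    attestation = {"measurement": APPROVED_TRAINER, "site": site}
    return {"delta": model_delta, "attestation": attestation}

def aggregate(updates: list[dict]) -> list[float]:
    accepted = [u for u in updates
                if u["attestation"]["measurement"] == APPROVED_TRAINER]
    n = len(accepted)
    return [sum(u["delta"][i] for u in accepted) / n
            for i in range(len(accepted[0]["delta"]))]

updates = [site_update("hospital-a"), site_update("hospital-b")]
print("aggregated delta:", aggregate(updates))
```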
Active Cyber™ – I love the concept of a federated model or federated use case for Confidential Computing. I can see lots of other possibilities – smart contracts come to mind as well. Can you talk a little bit about how smart contracts might be applied in that kind of environment using Confidential Computing?
Mr. Perez – Yes, absolutely. Blockchain is a wonderful set of technologies put together in a really interesting and compelling way. Smart contracts are this ability to execute code that you’ve agreed on – that is, this is the code that will run, and these are the circumstances under which it will run. But you want to make sure that this agreement, which everyone using the blockchain has implicitly or explicitly agreed to, executes correctly, because everybody is depending on the results – you’ve all made commitments based on those results. So in that case, you want to have something like a Confidential Computing environment, so that when the smart contract executes you get attestation measurements from the hardware saying “yes, this is the hardware and software environment, and this is the contract that was executed.” So Confidential Computing gives you more assurance in the smart contract execution results. You’re not relying on any of the parties participating in the blockchain. You’re just relying on the hardware and the mechanisms that were put in that hardware, and you have cryptographically verifiable proof that this is the software that ran.
You can also benefit in many cases from the confidentiality aspects that Confidential Computing provides. Not only do you want assurances that the right software ran, but maybe you don’t necessarily want everybody participating in the blockchain to see the data that was processed. All you’re concerned about is that the results are trustworthy and everybody gets to see the results. So yes, smart contracts and blockchains are fantastic use cases for Confidential Computing as well.
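Here is a toy sketch of that idea: a contract executed inside a TEE returns its result together with a measurement of the code that ran, so any party can verify the agreed contract produced the result without seeing the inputs. (Hypothetical; real systems bind the measurement into a hardware-signed quote.)

```python
# Toy smart-contract-in-a-TEE sketch: result + measurement of the code that ran.
import hashlib

AGREED_CONTRACT = b"def settle(a, b): return a + b"   # code all parties agreed on

def execute_in_tee(contract_code: bytes, a: int, b: int) -> dict:
    # Inside the TEE: run the contract and measure exactly what ran.
    result = a + b
    measurement = hashlib.sha256(contract_code).hexdigest()
    return {"result": result, "measurement": measurement}

def verify(report: dict) -> bool:
    # Any participant can check the agreed contract produced the result,
    # without relying on the party that operated the hardware.
    return report["measurement"] == hashlib.sha256(AGREED_CONTRACT).hexdigest()

report = execute_in_tee(AGREED_CONTRACT, 40, 2)
print("result:", report["result"], "verified:", verify(report))
```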
Active Cyber™ – Now, are you also seeing an uptick in the use of Confidential Computing as a result of machine learning and AI? I mean, you already mentioned the federated learning concept – the medical use case – but overall, are you seeing a lot more use of Confidential Computing as a result of the issues around protecting the data used for AI and the related privacy concerns?
Mr. Perez – Absolutely. I’m always leery of talking about killer apps because, in my experience, that’s hardly ever the case. But certainly we’re seeing a tremendous amount of interest now in the AI set of scenarios. Today, most Confidential Computing technologies – like SGX and TDX that I mentioned from Intel, or SEV from AMD for that matter – exist on CPUs, which is great. You can do a lot of machine learning on CPUs, especially on the inferencing side, but obviously the elephant in the room here is NVIDIA and GPUs, which are really driving the growth of AI and the interest in Confidential Computing. So expanding Confidential Computing to these distributed and heterogeneous environments is something that we’ve been anticipating for a while. It’s really taken off in the last year and a half or so with NVIDIA and the boom in AI in general. We are developing capabilities for our own AI accelerators, as has NVIDIA, and we’ve worked very closely with NVIDIA as well. Their H100, for example, works with Intel’s TDX. So it allows you to not only process the data in a trusted execution environment on an Intel Xeon platform using TDX, but also to share that data and seamlessly get those same Confidential Computing assurances using NVIDIA’s attached H100 GPUs for training or large language model processing.
Active Cyber™ – So that’s really interesting to me. So you’re starting to see at least bilateral, if not open, sharing of standardized mechanisms to be able to share data between different infrastructure models of Confidential Computing?
Mr. Perez – Yes, absolutely.
Active Cyber™ – That’s great. And I assume the attestation infrastructure could cross over as well?
Mr. Perez – Yes, we’ve worked closely with NVIDIA on that. This is just an example too, right? One that’s already public. I mentioned the Intel server-based attestation framework capability. NVIDIA has provided attestation capabilities for their GPUs. We worked with them to ensure that those attestations can be consumed by our attestation framework so that customers get one view of the environment, both the CPU piece and the GPU piece and whatever workload is sharing both those environments.
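Conceptually, the “one view” Ron describes means the relying party gates its decision on both pieces of evidence. The sketch below is hypothetical (field names and checks are invented), not the actual Intel or NVIDIA attestation formats:

```python
# Sketch: combine CPU (TDX) and GPU attestation evidence into one decision.

def cpu_ok(evidence: dict) -> bool:
    return evidence["type"] == "TDX" and evidence["measurement"] == "expected-td"

def gpu_ok(evidence: dict) -> bool:
    return evidence["type"] == "GPU" and evidence["firmware_current"]

def workload_trusted(cpu_evidence: dict, gpu_evidence: dict) -> bool:
    # Data is released only if both the CPU TEE and the attached GPU check out.
    return cpu_ok(cpu_evidence) and gpu_ok(gpu_evidence)

print(workload_trusted(
    {"type": "TDX", "measurement": "expected-td"},
    {"type": "GPU", "firmware_current": True},
))
```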
Active Cyber™ – So is Intel going to push this over to a GPU version of their own?
Mr. Perez – Yes, definitely. There’s a set of technologies that we helped drive through the standards bodies, in PCI-SIG, that support the expansion of Confidential Computing capabilities across the PCI bus to connected devices like NVIDIA’s GPUs. So those are the underlying standards. Obviously there are also always some proprietary aspects that go on top of that, but those underlying standards are used by AMD, and presumably by Arm as well very soon. So basically the whole industry will be using those standards that we helped develop and push through, and then of course we differentiate on top of those. But if those standards are not there, then it makes it really difficult for customers and users to adopt the overall technology and to take advantage of Confidential Computing at scale. So we understand that.
Active Cyber™– Now, I noticed that Apple had their announcement recently about extending their version of Confidential Computing to the cloud and endpoint. Do they play into these standards and what do you think of their announcement about Confidential Computing?
Mr. Perez – I think the announcement was very interesting. I’d love to hear and to see more of the details. Apple has been pretty good at that in the past, so I’ll be looking for even more details going forward. This certainly shows that they recognize the value of connecting the clients, the phone in their case, or the MacBook to this AI cloud environment where there’s so many more resources available. Like I said, it’s hard for me to tell from just the announcements what the underlying technologies are.
I don’t see Apple participating in some of the standards working groups. They’re not part of the Confidential Computing Consortium. We’d love to have them there. By the way, I’m the chair of that Linux Foundation organization, which includes Microsoft, Google, Arm, AMD, and a lot of other member companies. We’d love to have Apple participate there as well, but so far we haven’t seen them do that. So I guess I’ll have to reserve judgment until I see more details about their underlying technology. But it’s encouraging that they see the value of at least calling what they’re doing “Confidential Computing.” I’ll be looking to see if it really is – does it really provide these properties where you, as a data owner or workload provider, don’t have to rely on Apple for your security properties?
Active Cyber™ – That brings to my mind the fact that you see a lot of attacks on the endpoint these days, and there are a lot of endpoint security tools now that help with that. But it would seem to me that having some level of Confidential Computing at the endpoint may help remedy some of those problems. It would provide you an end-to-end view of attestation and Confidential Computing from endpoint to cloud. When you start getting into IoT environments, I think that would be a bit of a hard push for Confidential Computing, but at least for your normal phone or laptop – the MacBook, whatever – it would seem to me to be a possibility. What’s your view on that?
Mr. Perez – Yes, absolutely. I think this applies directly to what I’ll call the edge environment, whether that’s clients or any sort of server sitting on a telephone pole, for example – base stations, things like that. Wherever we start to push computation farther out to the edge, and even to your phone or your PC, it makes absolute sense to provide this sort of Confidential Computing environment to protect the valuable machine learning models that are being pushed out to these environments. And obviously vice versa, to protect the user’s data, whether it’s your queries or the model-tweaking data that is customized for your own use cases. Those all have to be protected. So you can call it privacy or confidentiality – it kind of depends on whether you’re an individual or a company – but absolutely, Confidential Computing is made for those sorts of situations.
Active Cyber™ – So we talked about some of the providers of Confidential Computing capabilities. Let’s talk about who’s adopting it right now – we’ve touched on that a little. Who does Intel see as the major adopters – the cloud providers? Some of these edge providers? Where’s it going?
Mr. Perez – Yes, definitely every cloud provider, from the top tier to the next wave, is either already providing some sort of Confidential Computing capability – whether it’s from Intel or somebody else – or has it in trial, or is planning to very soon. It varies by the size of the cloud service provider. There are also telcos with their edge use cases, and OEMs – we’re working very closely with them to meet those use cases as well.
For enterprises, it’s banks in general. In terms of industry, I would say the regulated industries are probably the first wave of adopters that we’re seeing. GDPR in Europe, for example – there are a lot of privacy regulations there. We’re seeing a lot of interest not only from the governments there, but from industry in those countries seeking to comply with the letter as well as the spirit of the regulations to protect their citizens’ information. We have a lot of use cases in the healthcare space too – I mentioned some of those before – and government, defense, et cetera. Obviously a lot of interest there. So all of those are the early adopters, but it’s also extending out to anywhere that data is valuable. As we talked about with the AI boom, there’s a lot of interest in protecting these very valuable models.
Active Cyber™ – One thing you hear about a lot these days is critical infrastructure being a soft underbelly, and how adversaries like China have been infiltrating some of the telecom infrastructure and other things like that. How do you think Confidential Computing will play out as far as being able to secure critical infrastructure? Does it have a major role there, and are you seeing adoption by the critical infrastructure folks? I’m also thinking of autonomous systems, such as self-driving automobiles, drones, and all that stuff too.
Mr. Perez – Yes, I think so. Certainly from the cloud service provider standpoint, and we’re seeing interest elsewhere. It goes back to those fundamental properties. If you remember the iPhone situation with the FBI years ago, where Apple was saying – “Hey, look, we can’t get the data even if we want to.” I think everybody would like to be in that situation. If a government is demanding data on users in their jurisdiction, I think a lot of the infrastructure providers, whether it’s cloud or automobiles, would love to be able to say, look, we just provide the MIPS. We just provide the compute capabilities. We don’t have access to the data.
That kind of capability also comes into play with the hackers and bad guys – they can’t get access to your data either. That’s the dual edge of that sword. But I think it fits the cloud provider business model – they are in the business of providing computing resources, not in the business of deciding who gets to see your data.
Active Cyber™ – That confidentiality aspect is also part of the reason I’m looking at Confidential Computing at the endpoint as a real benefit. I’m really interested in semantic web technology and distributed identifiers and stuff like that, and Confidential Computing on the endpoint would be a really helpful capability.
Mr. Perez – Yes, absolutely.
Active Cyber™ – And then this is kind of a far-off question, but the federal government obviously could be a big user of this too. I was wondering if we are seeing space-based assets using any of this Confidential Computing technology yet? Do you know of any?
Mr. Perez – I wouldn’t be surprised. Nothing comes to mind immediately. We’ve certainly had discussions – or I’ve been in discussions personally – with different governments and agencies that represent broad swaths of government, including NASA for example. So yes, I wouldn’t be surprised if this is one of the use cases they have in mind. I think they may have even mentioned it, but I’m not currently involved in any sort of engagement like that right now. But certainly it makes complete sense. Anytime you’re performing computation and putting data out into an environment that is potentially hostile, where you don’t know who has access to it, then I think you’re interested in the types of protections and assurances that Confidential Computing provides.
Active Cyber™ – Yes, I agree. So speaking of things coming around the corner a little bit, we’ve been hearing about quantum computing and how RSA-2048 could potentially be broken and things like that. So what impact does Confidential Computing have on quantum computing and vice versa? I mean, is it quantum-safe encryption that it’s using today? Are you going to have to see some wholesale changes to the technology that you use? What’s going on?
Mr. Perez – Yes, certainly Confidential Computing uses encryption quite extensively, whether it’s memory encryption using symmetric-key crypto like AES, or in the attestation case, where you’re actually doing signature verification and using asymmetric crypto. You’ll see that all of the Confidential Computing capabilities and technologies out there will very quickly start to add support for AES-256, for example – resulting in quantum-resistant symmetric-key processing for memory encryption. We’re still sort of waiting on the finalization of the public-key quantum-resistant algorithms. These algorithms have been identified, but now we have to settle on the parameters. As soon as that’s done, we’ll see the asymmetric algorithms being adopted. There are overheads that come with these new technologies too. Going from 128-bit to 256-bit keys obviously involves more processing. We can address a lot of that in hardware, of course, but there’s always going to be some extra overhead. But like I said earlier, most of the symmetric-key processing can be hidden in the pipeline, so you won’t see a lot of impact there. On the public-key side, when you’re doing key exchanges and signatures, there may be a bigger impact. We’ll see what the final parameters look like, what performance looks like, and how we can help accelerate that on the hardware side as well. But Confidential Computing is a big, big user of encryption. So yes, Intel is going to be adopting all the post-quantum algorithms and capabilities fairly soon. I think you’re already seeing that in some cases, but certainly within the next couple of years, every Confidential Computing technology will have that capability.
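On the symmetric side, the change Ron describes is essentially a key-size bump. A minimal sketch, assuming the third-party Python cryptography package (hardware memory encryption uses dedicated inline engines, not this library; AES-GCM here is only to show the 128- vs 256-bit difference):

```python
# Sketch: 256-bit AES keys are the quantum-resistance margin on the
# symmetric side; asymmetric (attestation signature) algorithms await
# final post-quantum parameters.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_128 = AESGCM.generate_key(bit_length=128)   # common baseline today
key_256 = AESGCM.generate_key(bit_length=256)   # larger quantum-resistant margin

nonce = os.urandom(12)
ciphertext = AESGCM(key_256).encrypt(nonce, b"confidential workload data", None)

print(f"key sizes: {len(key_128) * 8} -> {len(key_256) * 8} bits;",
      f"ciphertext length: {len(ciphertext)} bytes")
```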
Active Cyber™ – Okay. So will Confidential Computing also change when quantum computing really starts to matter?
Mr. Perez – Yes, that’s a really good question. I don’t have the answer to that. I think, at least from my own standpoint, we’re really just starting to look at that. Quantum computing obviously is a very different paradigm. We’ll have to look at how we can get those same sorts of confidentiality and integrity assurances in that environment. It may be that it’s exactly the same from a Confidential Computing standpoint as it is today, but I don’t think we’ve done enough work so far to say that.
Active Cyber™ – Okay. So if I go to DEF CON, what are the hackers going to say about Confidential Computing?
Mr. Perez – Yes, I think that’s an issue. When you put a security technology out there, whether it’s Confidential Computing or anything else, that’s where everybody’s going to go – all the security researchers in the industry, the hackers, et cetera. Everybody wants to make sure that these technologies really are strong, or at least try to define the limits of those technologies. With cloud, you are still running in a shared computing environment, and as we’ve all learned, side channels can be pretty powerful. That whole area of vulnerabilities and attacks in the side-channel space has been popular in the security researcher / hacker community for a while. A lot of people have looked at SGX over many years, which helped us correct areas we may have missed, making it a better technology and a better product. The same thing is going to happen for TDX, and for AMD SEV as well. What can we do to mitigate side channels with the minimum amount of overhead? You can always have side-channel-resistant algorithms, but many of those take extra processing power. So ultimately it’s going to be up to the workload owner, the data provider themselves, how much they are willing to sacrifice in terms of performance to address these side channels.
Active Cyber™ – Interesting. Yes, and we’re also hearing a lot about zero trust these days. So how does zero trust and Confidential Computing play together? What’s the complement there?
Mr. Perez – I think zero trust certainly has gained a lot of attention, and rightfully so. In the distributed computing world, we need that sort of mindset, those same concepts, where you aren’t making assumptions about what is trusted and what’s not. You can’t simply say, “if it’s in my building, I trust it,” because there really are no perimeters anymore in a distributed, cloud computing-based world with edge computing and everything else. So you need to be able to verify that the computing resources are what you think they are and what they need to be, that the software running there is what it’s supposed to be, and that the data hasn’t been accessed by somebody who shouldn’t have access to it or modified in a way you didn’t agree with. Whether it’s trusted computing, which does that wholesale for entire platforms, or Confidential Computing, which takes a more targeted look at just the workloads you’re concerned about, I think they both provide that cryptographically verifiable mechanism we call attestation, which is the basis for zero trust.
Active Cyber™ – And I could also see that you could get attestation and anonymity, which may be a thing that you’re interested in when you’re talking about zero trust as well.
Mr. Perez – Yes, absolutely.
Active Cyber™ – So you mentioned the Consortium. Can you talk a little bit about that since you’re the chair, what’s it about?
Mr. Perez – Yes, so the Confidential Computing Consortium is a Linux Foundation organization. The Linux Foundation helps to support a bunch of different open source organizations, and while Confidential Computing isn’t necessarily all about open source, that’s certainly a big piece of it and something that we all promote. The founders – Intel, Microsoft, Google, et cetera – all really wanted to create and support this growing ecosystem, so we took our proposal to the Linux Foundation back in 2019 and created this consortium.
There are a number of open source projects supported by the organization, as with many Linux Foundation organizations, but we’re also driving market definition from a technology standpoint so that people really understand what is Confidential Computing and what is not. Helping to clearly answer those questions and define those terms and security properties for the community is a key mission of the consortium. Other activities include responding to government requests regarding the post-quantum space, for example, as well as just getting the word out.
We also sponsor the Confidential Computing Summit, which has been held for the last two years, to really bring the industry together around this topic. We’ve seen that grow tremendously over the past two years, and we’re looking forward to next year’s summit as well. But it’s really just a consortium for anybody – any company, any individual – who is interested in these technologies and in advancing them, to get together and either work on an open source project, help the outreach committee get the word out, or look at governance and use cases. We published a summary of the major use cases – I think there are six or seven of them. It’s just a wonderful place. We produce a number of webcasts that cover each of these technologies from different companies as well as open source initiatives. It’s really a community gathering place, and I would encourage anybody interested to look into it.
Active Cyber™ – So how do they find out more about it?
Mr. Perez – www.confidentialcomputing.io is the website.
Active Cyber™ – When’s your next summit going to be and where?
Mr. Perez – So we just had a summit in June. They haven’t set the date and location for the next one, but I imagine it’ll be about that time, maybe a little earlier, in the May timeframe. The last two were held in June in San Francisco – kind of a central location for a lot of people. We haven’t decided yet where the next one will be or what the timing is, but we had it as a one-day event the first year and expanded it to two days this last year, so there’s a growing amount of interest.
Active Cyber™ – Well, I hope this interview helps to generate some more interest for you and let me know if there’s anything else I can do to get the word out.
Mr. Perez – Yeah, I definitely will. Thank you. Appreciate it.
Active Cyber™ – And so one last question – we’re about to run out of time here. Let’s talk CrowdStrike, okay? Recently their technology had a global impact as a result of a memory error. I know this is a little different from what Confidential Computing can do, but tell me your thoughts on that in particular. I guess bad code running in a Confidential Computing environment is still going to have potentially bad impacts like this did.
Mr. Perez – Yes, I think that’s a fair analysis. With respect to CrowdStrike in particular, I can’t say that I’ve studied it thoroughly enough to understand exactly what the root causes were. It seems like there were a number of problems that led to it, and typically when we see events like this, it’s a series of things – it’s never one thing. But in terms of the actual technology piece of it, yes, I think memory safety capabilities would certainly seem to have had a role to play in mitigating or preventing this sort of situation, and there’s obviously been a lot of interest in that. Memory safety vulnerabilities in general seem to be, if not the top root cause, pretty close to it for many software vulnerabilities and incidents. So having those capabilities, whether they’re hardware-supported or not, is certainly an area where we see a lot of interest. We’ve even seen calls from the White House, from the US government, and elsewhere for the industry to do more around memory safety. There’s a lot of work going on at Intel and elsewhere.
Active Cyber™ – So could I set up my Confidential Computing cloud environment to only run memory safe programming languages?
Mr. Perez – Yes, absolutely.
Active Cyber™ – Are you doing an attestation of that nature, so to speak?
Mr. Perez – Yes. Whether it’s language-based or memory-tagging kinds of technologies, there are so many existing ways to address it right now – it’s kind of embarrassing that we’re not using some of these languages and other capabilities to their full extent.
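One way to picture the combination discussed above: an admission policy that only releases data to workloads whose attested measurement matches an approved, memory-safe runtime image. The measurements and helper below are hypothetical:

```python
# Sketch: admit only workloads whose attested measurement is on an
# allow-list of approved memory-safe runtime images (hypothetical values).

APPROVED_MEMORY_SAFE_IMAGES = {
    "rust-service-v1.2": "b1e2...",
    "jvm-21-runtime":    "77aa...",
}

def admit(attested_measurement: str) -> bool:
    return attested_measurement in APPROVED_MEMORY_SAFE_IMAGES.values()

print(admit("b1e2..."))    # approved image -> True
print(admit("deadbeef"))   # unknown image  -> False
```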
Active Cyber™ – Nice. Well, this has been a great journey through Confidential Computing, and I appreciate the opportunity to have this discussion with you today, Ron. I’m really excited for the potential of this technology, and I want to keep track of this, so please keep me informed, keep the Active Cyber zone listeners informed as this moves forward.
Mr. Perez – Well, thank you for having me here today. It was a pleasure. I always liked talking about computing, and it’s even better when I get to talk to an old friend.
Thanks Ron for your insight into this powerful security technology. I am excited about its potential to make things better, especially as industry and governments move out on generative AI and the resulting need to protect the data and the models. I am also hopeful, as I know you are, for a collaborative and innovative ecosystem that keeps the market for this technology fresh and maturing in a robust fashion. And thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, authenticity, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing OT / IIoT and IoT systems, Augmented Reality, or other emerging technology topics. Also, email chrisdaly@activecyber.net if you’re interested in interviewing or advertising with us at Active Cyber™.
About Mr. Ron Perez
Ron Perez is an Intel Fellow and Chief Security Architect in the Office of the CTO at Intel. He is responsible for overall Intel security architecture and cross-business unit security technology roadmap alignment, with a focus on Confidential Computing and platform security for the disaggregated, heterogeneous, and distributed data center. He leads a team of senior technology leaders driving Intel’s foundational security technologies for server platforms and roots-of-trust, supply chain and life cycle integrity, and trusted execution environments in collaboration with Intel Labs and world-wide strategy, planning, engineering, design, and validation teams. An industry veteran, Ron joined Intel in 2017 with a breadth of experience spanning security, semiconductors, and cloud computing. Most recently, he was Vice President of Security Research at Visa Inc., and has held roles as Fellow and Chief Technology Officer in the Cryptography Research Division of Rambus Inc., Senior Fellow and head of security architecture at Advanced Micro Devices Inc., and earlier in his career, as a senior manager and senior technical staff member at IBM’s T.J. Watson Research Center.