
Tuesday, November 3, 2020

Security Tradeoffs In Chips And AI Systems - SemiEngineering


Semiconductor Engineering sat down to discuss the cost and effectiveness of security in chip architectures and AI systems with Vic Kulkarni, vice president and chief strategist at Ansys; Jason Oberg, CTO and co-founder of Tortuga Logic; Pamela Norton, CEO and founder of Borsetta; Ron Perez, fellow and technical lead for security architecture at Intel; and Tim Whitfield, vice president of strategy at Arm. What follows are excerpts of that conversation, which was conducted live at the Virtual Hardware Security Summit. Part one of this discussion is here.

SE: Looking at different markets, none has a definition of what constitutes “good enough” security. Does it vary by application or by layer? And do you allocate more resources to some systems versus others, particularly if they’re all connected?

Perez: ‘Good enough’ is sometimes defined by government regulatory agencies, but most often by what customers and users are willing to pay for and to accept. But your question digs deeper, down to the component level in the computing stack. In the past it was easier for a lot of us in the hardware space to draw a boundary around what was acceptable. As we’ve learned from side channels in the last few years, those boundaries don’t exist anymore. As an industry, we’re struggling to define new boundaries, which are no longer where we set them. Outside of the government space, we had said that side channels were out of our scope. Now they’re not. Given the variety of side channels, many of which we don’t know about yet, it’s a challenge to know where to draw that line.
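As a concrete, if simplified, illustration of the kind of leakage Perez is describing, here is a toy software-level timing side channel in Python. It is only an analogy for the microarchitectural channels (caches, speculative execution) the panel has in mind, but the principle is the same: behavior that correlates with secret data is observable from outside the boundary the designer thought they had drawn.

```python
import hmac
import time

# Toy illustration of a timing side channel (not the microarchitectural
# channels discussed above, but the same principle: secret-dependent
# behavior that an attacker can observe).

def naive_compare(secret: str, guess: str) -> bool:
    # Early-exit comparison: runtime grows with the length of the matching
    # prefix, so repeated timing measurements leak the secret byte by byte.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_compare(secret: str, guess: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ, closing this particular channel.
    return hmac.compare_digest(secret.encode(), guess.encode())

def timed(fn, secret, guess, trials=200_000):
    start = time.perf_counter()
    for _ in range(trials):
        fn(secret, guess)
    return time.perf_counter() - start

if __name__ == "__main__":
    secret = "hunter2hunter2hu"
    # In practice the gap is small and noisy, but with enough samples the
    # long-prefix guess is measurably slower under naive_compare.
    print("naive, no match   :", timed(naive_compare, secret, "XXXXXXXXXXXXXXXX"))
    print("naive, long prefix:", timed(naive_compare, secret, "hunter2hunter2hX"))
    print("constant time     :", timed(constant_time_compare, secret, "hunter2hunter2hX"))
```

On real silicon the observable is typically cache state, power draw, or electromagnetic emissions rather than wall-clock time, which is part of why the boundary is so hard to pin down.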

Oberg: It brings up risk management, the business implications of security, and how to know if you’ve done enough from both a technical perspective and for your risk profile. On one level, you’re making sure you’ve implemented the security features and mitigations needed so your product is not vulnerable to a certain class of attacks. On a higher level, based on the markets you’re serving, you ask what your risk profile is. That’s a business type of metric. If you look at the broader cybersecurity domain, that is definitely the mindset that has taken hold there. You have CISOs and CSOs who look at their IT infrastructure, their systems, and ask, ‘What is my risk with respect to someone breaking into my system?’ That’s how they fully justify their spend. The semiconductor industry can adopt that same type of mindset, saying, ‘Here are the features I’m building into my product. Here are the things I’m trying to protect against based on the markets I’m serving.’ And then they can roll that up into a broader risk-management type of analysis. You’re never going to get there completely, even if you spend an infinite amount of money. But there is an optimal place, based on the markets you serve. If you change the mindset to risk management or risk reduction, that will go a long way for the industry.

Whitfield: It’s about threat modeling. It’s looking at the attack surface on an application-by-application basis and doing the right threat modeling. You need a holistic approach to security. It’s not just the chip and the device. It’s all the way through the physical layers and the software to the outside. And yes, it’s going to be dependent on application, because it has to be. But it’s also about that first stage: looking at the threat model and deciding what you need to protect, against which attack surfaces, and what countermeasures you need.
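To make the threat-modeling step concrete, here is a deliberately reduced sketch of how a team might enumerate attack surfaces, score them, and rank where countermeasures buy the most risk reduction. The surfaces, scores, and countermeasures are hypothetical examples chosen for illustration, not recommendations from any of the panelists.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    surface: str        # where the attacker interacts (debug port, DRAM bus, ...)
    category: str       # STRIDE-style label
    impact: int         # 1 (low) .. 5 (high)
    likelihood: int     # 1 (rare) .. 5 (expected)
    countermeasure: str

    @property
    def risk(self) -> int:
        # Simple impact x likelihood score; real programs use richer scoring.
        return self.impact * self.likelihood

# Hypothetical entries for a connected device; real threat models are far
# larger and are revisited as the attack surface changes.
threats = [
    Threat("JTAG/debug port", "tampering", 5, 3, "fuse-disable debug in production parts"),
    Threat("external DRAM bus", "information disclosure", 4, 2, "inline memory encryption"),
    Threat("OTA update channel", "spoofing", 5, 4, "signed firmware plus anti-rollback counter"),
    Threat("shared cache", "information disclosure", 3, 2, "partitioning, constant-time crypto"),
]

# Rank by risk so the area/power/schedule budget goes to the worst exposures first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  {t.surface:<20} {t.category:<24} -> {t.countermeasure}")
```

The point is not the scoring formula but the discipline: the model is written down, argued over, and revisited whenever the application or its connectivity changes.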

SE: It’s not just the cost of it, though, in dollars, right? It’s also about performance and power overhead. If you add all the security measures you want, your chip may be slow and power hungry.

Perez: I would add another challenge, too, based on what I said before. It’s not just the regulatory environment or what your customers or CSOs are demanding today. Given the long lead times to develop products, we’re trying to decide today what will be acceptable four, five, or six years from now, because our product will be in the market for another 10 years after that — or longer in the automotive space. That’s now a crystal-ball type of exercise.

Whitfield: I agree on the PPA point. It’s becoming PPAS, with security as well. For years, we’ve been making tradeoffs among performance, power, and area. And we’re now adding security. The DARPA AISS program that we’re involved with, along with others, is looking at how you create the right tools and the right frameworks to be able to make those tradeoffs. That’s really important.

Oberg: Those tradeoffs are a lot easier to make if you think about them at the beginning. What tends to happen is that there’s a focus on time to market: ship the product, make it small, make it fast, make it low power. And then there’s, ‘We need a security feature for this market because we want to have some fancy marketing branding. It will help with our competitive positioning.’ But if you actually make those tradeoffs at the beginning, you can implement security in a way that’s very reasonable, and the costs are not as significant. If you try to add it on at the end, where you just buy a large IP block, stick it in there, or send it off to a lab, the cost to your system is going to be a lot more significant, both from a monetary and a power/performance standpoint.

Kulkarni: It reminds me of a set of engineers designing a bridge. They follow all the rules of construction for the stress and strain analysis and so on, and the bridge still collapses. Just like with a bridge, there are failure mechanisms no one anticipated. It’s very admirable how the recent Spectre attack was resolved, because no one could replace all the hardware. The focus quickly shifted to the OS and application layer, and the community saw that problem and addressed it very effectively. But in terms of what is good enough, there is no such thing. Hackers are getting smarter than us.

Whitfield: Yes, the reaction to Spectre was great, but there was a cost in terms of performance, and a financial cost to making those fixes. Also, nobody predicted 10 or 15 years ago that this was going to be a problem. If you know at the start, you can do a ground-up design and it has less of an impact.

Perez: And you can argue why speculative execution side channels like Spectre may not have been known. But side channels have been known since the ’70s and ’80s, the early days of computing, and we chose as an industry not to address those in commercial products.

Norton: One of the things that we’ve embedded is a quantum-proof random number generator. The reason is that, looking five years out, we know all of our current two-factor authentication is going to be hacked by these new quantum computers. It’s really looking at what elements we can pre-design in, knowing that we are going to be at risk with our current authentication process. It’s past its prime. We know it, and it has been around since the 1970s.

Oberg: That’s a really good point. Remediation is a lot easier in the software domain. One of the key things about security is the concept that you’re never going to figure it all out up front. You want to have a process and a way of reacting. But having a way of updating hardware is hard. There are ways of doing it by updating certain firmware, and FPGAs obviously have a really unique opportunity there. From a security standpoint, there are a lot of benefits you can get from those types of deployments. But that has to be part of the strategy. And in some instances it can be really challenging — especially with the Meltdown/Spectre type of issues. Those things are just not updatable. You can mitigate certain areas, but it’s tough to update. That’s one of the scary things about hardware, and it further emphasizes the importance of making more investment early, because you can’t update your way out of a lot of these problems.

Perez: And to the extent that you can make it updatable or more configurable, you may be introducing new attack vectors, as well.

SE: It gets more difficult as we start adding in intelligence almost everywhere. The whole idea behind AI and machine learning is that these systems will self-optimize automatically. But as they do, one may be different from another. So how do we make these systems secure? And is it different for every device?

Norton: As we get these neural networks processing, you’ve got training networks that are going to be learning and training in real time. So it’s personalized to me — whatever it is augmenting for me in my life. And then Siri and Google Home are listening to my conversations, knowing all my information. It becomes a privacy issue around who is the trusted custodian of this chip that is managing a robot in my house, or whatever. It knows everything about my life and my kids. Who are the trusted custodians that are managing the data, updating it, and ensuring that my personal identity is de-identified? They’re still munching and crunching on the data, getting what they need. We’re introducing more and more security around personal identities, but all the third-party data providers still want access to this data.

Perez: Pamela is right about privacy, which we sometimes forget about when we’re talking about security. But that definitely should be more top-of-mind these days, especially in a lot of the consumer products that we all work with and that have an impact on our lives. But in general, this is one reason why explainability, and the research around explainability for AI, are so important. For the most part, we know intuitively and by experimentation that these technologies work. But it’s pretty scary that we don’t exactly know why. We can’t explain every detail of why the results were the way they were, or how susceptible these schemes are to data poisoning. This emphasizes the importance of data integrity and confidentiality.

Kulkarni: How do we create a massive number of training data sets with different workloads and different conditions? If you think about drones or fighter planes, or even a tank or a soldier, all that data is coming at us. But how do we get those training data sets to create an industrywide understanding of inferencing for AI, as well as look at these cause-and-effect relationships, to avoid such attacks and prevent as much as we can?

SE: To make matters worse, these systems are black boxes. As they evolve and adapt, how important is it to say why it changed or exactly what changed? Do we have any insight into that?

Norton: That’s why it’s important to look at the data that’s being processed right at the hardware level. We’re encrypting it within a trusted execution environment. You’re hashing the data, so you have a transaction record of what’s happened. You have the ability with those ledgers — you can have millions and millions of ledgers embedded — to provision who has access to that. And then you add on homomorphic computing, which allows you to compute on the data while keeping it private. You’re de-identifying. You have the ability to rate that chip to say, ‘Okay, this chip is not only encrypting, but it is processing data using homomorphic encryption, which ensures there are no personal identifiers.’ That, I believe, is where the market is going. We’re getting better performance with homomorphic computing, and there has been some very significant progress, which is exciting because the concept has been around for 40 years. It’s just been an issue of high compute and cost. So when we look at bringing that cost down, is that a way to be able to say this is a trusted AI inferencing chip? We can ensure that no person’s personal information is exposed, whether they’re being contact traced, or at the airport, or whatever it might be. And in that trusted execution environment, it’s encrypted, so only the FBI, or whoever needs access, can see that person’s face or identity to run the data, if they’ve got permission. I’m encouraged by what we have seen, and by some initiatives we are launching in the next month around private AI and privacy, specifically around homomorphic computing, and how we can encourage and create more of those opportunities in the industry to ensure that our privacy and our data are secure.
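Norton packs several mechanisms into that answer. The hash-and-ledger piece, at least, is simple to illustrate: the sketch below builds a minimal hash-chained audit log in which each entry commits to the previous one, so tampering with any recorded transaction is detectable on verification. This is a generic illustration of the concept, not Borsetta’s actual design, and it omits the trusted-execution and provisioning pieces she describes.

```python
import hashlib
import json
import time

# Minimal sketch of a hash-chained audit ledger: each entry commits to the
# previous one, so later tampering with a recorded transaction breaks the
# chain. Hypothetical structure for illustration only.

def record(ledger, event):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "timestamp": time.time(),
        "event": event,          # e.g. "inference request from device 42"
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)

def verify(ledger):
    prev_hash = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
record(ledger, "model loaded")
record(ledger, "inference on de-identified record 17")
print(verify(ledger))        # True
ledger[0]["event"] = "tampered"
print(verify(ledger))        # False -- the chain no longer verifies
```

Anchoring such a log inside a trusted execution environment, as she describes, is what keeps the chain itself from being rewritten wholesale.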

SE: Just to be clear, homomorphic computing means you’re actually not decrypting anything. All the computing is done while encrypted, right?

Norton: Yes, and it’s improving. We’re doing 300,000 transactions per second.

Perez: It’s getting better rapidly since [computer scientist Craig] Gentry’s breakthrough in 2009. DARPA now has some pretty audacious goals.
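For readers unfamiliar with the idea, the sketch below shows computation on encrypted data using the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The fully homomorphic schemes descended from Gentry’s 2009 construction support arbitrary computation and are far more expensive; Paillier is used here only because it fits in a few lines, and the parameters are toy values that are not secure.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, meaning the product of
# two ciphertexts decrypts to the sum of the plaintexts. The tiny hard-coded
# primes are for illustration only and are NOT secure.

p, q = 1789, 1931
n = p * q
n_sq = n * n
g = n + 1                                          # standard generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)              # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

if __name__ == "__main__":
    a, b = 1234, 5678
    ca, cb = encrypt(a), encrypt(b)
    # Whoever holds only ca and cb can produce an encryption of a + b
    # without ever decrypting either value.
    c_sum = (ca * cb) % n_sq
    print(decrypt(c_sum))                          # 6912
```

The party doing the arithmetic never sees a, b, or the sum in the clear, which is the property Norton is pointing to when she talks about processing data without exposing personal identifiers.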

—Susan Rambo contributed to this report.

