Thursday, November 5, 2020

Dealing With Security Holes In Chips - SemiEngineering

Semiconductor Engineering sat down to discuss security risks across multiple market segments with Helena Handschuh, security technologies fellow at Rambus; Mike Borza, principal security technologist for the Solutions Group at Synopsys; Steve Carlson, director of aerospace and defense solutions at Cadence; Alric Althoff, senior hardware security engineer at Tortuga Logic; and Joe Kiniry, principal scientist at Galois, an R&D center for national security. What follows are excerpts of that discussion, which was held live at the Virtual Hardware Security Summit.

SE: One of the big changes now is that devices are supposed to last a lot longer than they did in the past. German carmakers are asking for chips that function properly for 18 years. The problem with that is they also need to be kept current with updates. How do we do that? Who’s responsible when something goes wrong? And how do we trace that back to the root cause?

Handschuh: Every vendor shares a bit of the responsibility. They need to be as certain as possible about what they provide. Then it becomes more of a shared responsibility, sitting down together with the customer, particularly when they put together a new security system, to figure out the actual threat model and what each IP in each piece brings to the table. Then they can figure out if the entire system is secure. But you can never avoid something breaking somewhere. You have to trace it back to what actually failed. Was it a single piece that broke, or a concurrent problem where several things each contributed? Then you use that model to figure out what the new solution is.

Carlson: In automotive you have functional safety standards like ISO 26262, but you can’t have safety without security. If you can hack into somebody’s brakes, you don’t have a safe car. That standard brings a nice degree of traceability to the processes, but it doesn’t really do a lot for prevention. It’s great for finger pointing, but how is remediation going to happen? Helena gave a nice polite answer — all engineers working together — but it’s really about who’s going to pay the cost. Lawyers are going to be involved. There will be indemnification, warranties, all those kinds of things.

Borza: That is going to be one of the keys. Traditionally, a lot of these questions get settled through questions of indemnity, which inevitably leads to the courts. There’s going to be a period of time where that’s how things get settled, because traditionally people have been able to ship products and walk away from them fairly soon after the warranty period. Now you’re going to be facing a situation in which you have long-life products that are continually under threat. As we go forward you will see more and more cooperative automation among vehicles on the road, which means you need to deal with new threats all the time, and you need to close those threats off. You can’t put the entire driving population at risk because somebody’s chip is being hacked in some vehicle and nobody is there to take responsibility for it. That is a traditional North American answer. You’d like to see something a bit more proactive, something that says people are actually going to own the responsibility for this. But it also means there needs to be an ongoing maintenance activity that the owners of vehicles ultimately end up being responsible for. They either pay for it upfront in the cost of the vehicle, or they pay for it as part of the ongoing maintenance of the vehicle.

Kiniry: There’s a lot to be learned from what more advanced vendors like Tesla are doing with automotive updates. Tesla actually has cryptographers on staff, and they are doing a reasonable update scheme. With our customers in the DoD, updates in the field are not mainstream. It’s more like, ‘Don’t touch it.’ Only in recent years have they started to explore the idea of being able to do updates over the lifetime of a product. And that has spawned entire new fields of work around composability and reasoning about these systems with updates over a multi-year timeframe, instead of a 20-year timeframe. It’s remarkable how often, when they evaluate existing mechanisms for providing updates, attestation, and the like, especially ones coming over the fence from fields such as IoT, they may find that the underlying update mechanisms are wholly insecure to begin with, so you’re just opening up a huge backdoor into your system in the first place. My biggest, deepest concern is making sure those underlying update mechanisms are in fact correct and secure.
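
Kiniry’s worry about the update path itself can be made concrete. Below is a minimal sketch, assuming an Ed25519 vendor public key provisioned at manufacture, of what a sane update check looks like: verify a signature over the image and enforce a monotonically increasing version number so old, vulnerable firmware cannot be replayed. The key (taken from the RFC 8032 test vectors), the message layout, and the function names are illustrative assumptions, not anyone’s actual scheme; a real device would also measure the accepted image into a hardware root of trust for later attestation.

```python
# Minimal sketch of a signed-firmware-update check. Hypothetical names
# and message layout; the public key is an RFC 8032 test vector.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

VENDOR_PUBLIC_KEY = bytes.fromhex(
    "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
)

def verify_update(image: bytes, signature: bytes, version: int,
                  installed_version: int) -> bool:
    """Accept an update only if the vendor signature checks out and the
    version number increases (a basic anti-rollback guard)."""
    if version <= installed_version:
        return False  # reject rollback to older, possibly vulnerable firmware
    key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    try:
        # Bind the version into the signed message so an attacker cannot
        # replay an old image under a new version number.
        key.verify(signature, version.to_bytes(8, "big") + image)
    except InvalidSignature:
        return False
    return True
```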

Althoff: To tie this together, it seems like we’re talking about a ‘market forces versus legislation’ question. In the United States nobody wants to pay for security, but everybody wants to benefit from it. The side making the standards also has to be responsible for setting up enforcement; those standards have to be enforced, and that really has to come from above.

Handschuh: As soon as we start talking about liability, indemnification, standards, and enforcement, that means we’re going to talk about certification. Somebody is going to have to pick up the task of becoming an automotive security and safety certification lab. On the safety side, you already have that. Security might be a little less obvious. It exists for other market segments, but not necessarily automotive. The banking sector has it, and many others as well. But automotive certainly would benefit from having professionals take a look at all the implementations and everything that was put together, make sure all the pieces fit together, and then put some kind of a stamp on your release.

Borza: Certification gets you started, but it doesn’t provide an ongoing, value-added stamp that this continues to be secure. And that is really what we’re talking about when we talk about long-lived products that need to survive and stay functional in an evolving threat environment. Regulation is one way to get that. Liability is another, but you still have to deal with the fact that companies come and go. Depending on the industry, the lifetime of companies can be shorter than the lifetime of products. That’s just the environment in which we operate, so it still needs to be dealt with.

Carlson: Talking about the carrot and the stick, certification and legal actions are the stick. On the carrot side is a business opportunity. I want to give a shout-out to Galen Hunt over at Microsoft and the Microsoft Azure Sphere effort. They’re doing some really interesting things in terms of taking on who is going to own the update process and who’s going to be responsible. They’ve set up a nice lifetime security system based on Azure, building in design-for-security techniques and updates over the lifetime of the product. They have a pretty reasonable business model for that, where companies can sign up to have their products registered, monitored throughout the lifecycle, and updated, with analysis of security flaws found in real time. That’s a really interesting idea. We’ll have to see how quickly they, and other parties that enter the market, can grow. The carrot part is people starting to make money on security. That’s when things will move rapidly. The stick part only gets you so far. We see that in the aerospace industry, where they have security certification. But I ran into a program recently, three years after the chip was completed, where they were still in security certification for that particular device. That’s not going to work very well in the commercial world. There isn’t really a best practice there that translates well to the larger market.

SE: As you start layering on patch after patch, though, does it become more insecure? And does it become more insecure because not everybody is on the same level of whatever the latest update is?

Carlson: That’s where you come to the realization that system security starts at the hardware layer. If you have an inherently unstable foundation, you can keep patching and patching, but you are going to run into a house-of-cards effect. That’s one of the other reasons why Microsoft keeps talking about it. I’m pretty impressed with the holistic view they’ve taken. They start with the hardware foundation. They make sure there’s a good, solid hardware root of trust, like Rambus provides, and all those basic features in there. And there’s an expanding group of things you can put into the hardware to help support system-level security as you go.
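
The “foundation” Carlson describes is usually a verified-boot chain: immutable ROM checks the first mutable stage, which checks the next, so every patch layered on top still rests on an anchor that cannot itself be patched away. Here is a toy sketch of the pattern, with hypothetical digests standing in for values burned into ROM or fuses:

```python
# Toy verified-boot chain: each stage checks the next stage's digest
# against an expected value anchored in ROM. Purely illustrative;
# the digests below are placeholders, not real firmware hashes.
import hashlib
import hmac

EXPECTED_DIGESTS = {
    "bootloader": bytes.fromhex("aa" * 32),  # hypothetical ROM/fuse values
    "kernel":     bytes.fromhex("bb" * 32),
}

def verify_stage(name: str, image: bytes) -> None:
    digest = hashlib.sha256(image).digest()
    # Constant-time compare, out of habit even for public digests.
    if not hmac.compare_digest(digest, EXPECTED_DIGESTS[name]):
        raise RuntimeError(f"{name} failed verification; halting boot")

def boot(bootloader: bytes, kernel: bytes) -> None:
    verify_stage("bootloader", bootloader)  # ROM verifies the bootloader...
    verify_stage("kernel", kernel)          # ...which verifies the kernel
    # control would transfer to the verified kernel here
```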

Althoff: That sounds like a great opportunity to track compatibility between techniques. If you’ve got traceability across all these layers, you have the ability to track which things interact well with one another and which are prone to error, essentially. We have been developing wisdom around each of the layers, and now we need something about their connections.

Carlson: The CWE (Common Weakness Enumeration) stuff that you guys are working hard on is a great beginning to that process.

Handschuh: You also have other ingredients. At each layer, you can try to make sure your designs are as secure as possible. For now, we’ve always started with a hardware root of trust. Then a secure software environment for update mechanisms helps. We’re starting to see new things arrive and become more common, such as tools that allow you to verify at the code level in hardware, if you want. You can now use tools that make sure specific hardware security properties are maintained through your design before you even start shipping, and you can start applying things like what Galois and Tortuga Logic are working on. You can also start applying formal security models, hardware security properties that are proven in some form, and then develop according to that. So that’s an all-new layer that goes one level down, before production, before synthesis and all of that. That’s pretty new and pretty cool.
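
The property-checking tools Handschuh mentions commonly verify information-flow properties of the form “this secret never reaches that output.” They operate on RTL rather than software, but the idea can be sketched in miniature: label the secret, propagate the label through the design’s driver graph, and assert it never reaches a public signal. The netlist below is a made-up example, not any real design or tool’s input format.

```python
# Toy information-flow check: propagate a 'tainted' label through a
# dataflow graph and assert the secret never reaches a public output.
# A software analogue of an RTL property like "key does not flow to
# debug_bus"; real tools check this on the hardware design itself.

# Hypothetical netlist: each signal lists the signals that drive it.
DRIVERS = {
    "aes_key":    [],                   # secret input
    "plaintext":  [],
    "aes_core":   ["aes_key", "plaintext"],
    "ciphertext": ["aes_core"],
    "debug_bus":  ["plaintext"],        # must NOT include key material
}

def taints(signal: str, secret: str) -> bool:
    """True if `secret` can flow to `signal` through the driver graph."""
    if signal == secret:
        return True
    return any(taints(d, secret) for d in DRIVERS[signal])

assert taints("ciphertext", "aes_key")     # key reaches the cipher output, as expected
assert not taints("debug_bus", "aes_key")  # property: key never leaks to debug
```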

Kiniry: The struggle we have around explaining those techniques and demonstrating them is that there’s this word ‘layer.’ You end up talking with people who are experts on one side of the fence or the other, but they don’t talk to each other well. And system security spans all those layers. And so it’s a constant struggle to talk about things like co-design, co-engineering, co-verification, when the people you’re talking to don’t know what a ‘co’ means. This is one of the big struggles we have in terms of building these tools, demonstrating them, and convincing the market that they are useful and applicable for their security assurance.

Borza: Even though software-hardware co-design has been around for 20 years or more, it’s still a relatively novel thing in many places. The software team almost never talks to the hardware team. And the two of them talk at cross purposes when they do talk. Then you layer on top of that security complexity for teams that are not necessarily experts at security but have to supply a secure environment, and you end up with that kind of finger pointing going on.

Carlson: There is cause for hope, though. For more and more SoC design and systems companies that really control the whole stack (the library development, the SoC, the firmware, the operating system, and some of the applications in that environment) we see the use of technologies like emulation. You’re shifting left on the software development. Once you’ve done that, you’ve got a virtual model of the system. It’s actually not a model, but a representation of the system where you can execute the hardware and the software together. It primarily came into being for functional verification purposes, but you can re-purpose that capability for some early security analysis. You can look at attack surfaces that are logical in nature, transpose those into the physical analysis domain, and look at side-channel effects involving power, thermal, and timing kinds of issues.
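
Of the side channels Carlson lists, timing is the easiest to show in miniature, and it is the kind of leak an executable pre-silicon model can surface long before power or thermal analysis. The toy below deliberately exaggerates an early-exit comparison with a sleep; on real hardware the same structure leaks through cycle counts, and the standard fix is a constant-time compare such as hmac.compare_digest.

```python
# Toy timing side channel: an early-exit compare leaks, byte by byte,
# how much of a guess matches the secret. The sleep stands in for
# per-byte work so the leak is visible at wall-clock resolution.
import time

SECRET = b"hunter2!"

def leaky_compare(guess: bytes) -> bool:
    for a, b in zip(SECRET, guess):
        if a != b:
            return False      # early exit: runtime reveals match length
        time.sleep(0.001)     # stand-in for per-byte processing
    return len(guess) == len(SECRET)

def observed_time(guess: bytes) -> float:
    """Attacker's view: only the wall-clock time of the check."""
    start = time.perf_counter()
    leaky_compare(guess)
    return time.perf_counter() - start

# More correct leading bytes -> measurably longer runtime.
print(observed_time(b"xxxxxxxx"))  # ~0 s
print(observed_time(b"huntxxxx"))  # ~0.004 s
print(observed_time(b"hunter2!"))  # ~0.008 s
```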

Althoff: This seems like another area where education really comes into play, too, because application-specific architectures permeate out into mainstream devices, and then into the curricula of students who are building or learning how to build high-performance systems. At the same time, they’re becoming more familiar with hardware-specific issues. This is an opportunity through education to eliminate that layered and compartmentalized mindset that we’ve fallen into.

—Susan Rambo contributed to this report.

