Sandro Cerato, senior vice president and CTO of the Power & Sensor Systems Business Unit at Infineon Technologies, sat down with Semiconductor Engineering to talk about fundamental shifts in chip design with the rollout of the edge, AI, and more customized solutions. What follows are excerpts of that conversation.
SE: The chip market is starting to fall into three distinct buckets, the end point, the edge, and the cloud. What’s the impact of this?
Cerato: There is an evolution of the products that we are making or that we have to design. It doesn’t matter whether you’re doing a sensor, or just the computing part of a sensor, or whether that’s connected to the cloud or the edge. You need to know all three of these, because if you change something in any of these it affects the functionality of all of them. When you design a product, you’re designing a function. If you have gas sensors, for example, they are calibrated in the network. This is a big focus for us, because now we have to create knowledge for the network operators.

SE: So you’re thinking in terms of a bigger system rather than just the chip, right?
Cerato: Exactly, and you cannot escape this. If you think about AI, is it in the cloud or on the edge? At the edge, there are many different AIs. And there are many different functions, depending on where they are located. This allows me to communicate with my vacuum cleaner, which is connected to a network, and there is one part of this network that is very intelligent.
SE: That also means that each one of these designs is unique. Design teams also have to improve the performance, limit the power, often using a heterogeneous mix of components and over a longer lifetime. Those goals don’t go together very easily.
Cerato: It’s not easy. For example, we are designing a radar chip with an integrated antenna. We are handling the antenna, the radio, and the fast interfaces like SPI (Serial Peripheral Interface). So how do you use this radar chip? It’s a 60 gigahertz radar in the size of a small chip. If you look at the manual, and read all 500 pages, you still don’t know how to use it. So for us, the only way to do this is to make an application that uses radar. But the application of using radar is not just creating a microcontroller, maybe with AI. We have developed AI for that because it extends what the radar can detect. So with COVID, we developed an application for our canteens in Munich and Singapore that tells people when they are allowed to go in or out, depending upon occupancy. In the past, we would develop a reference design that was one board. Now we have to develop everything: the software that counts the people, the dashboard, the complete system. That even includes software in the cloud. We get data from other sensors. We use recalibration measures. And now, we also have to understand all the blocks and develop the training software for the AI, the overall AI infrastructure, and make that available to Amazon or to Alibaba, for example. The expansion is incredible. Even with power tools, you could add Bluetooth. But where is the Bluetooth connected? What kind of information is given to the cell phone or to the cloud? You have to learn everything and then decide what piece of the business you want.
SE: That certainly becomes a much bigger problem. But do you have customization capabilities when you do that? Can you build this in a modular way so you can sell part of a solution or all of the solution?
Cerato: So if you think about a device like radar, the customization is in the software. We have a data acquisition lab in Dresden for AI training. We have a camera that works in combination with the radar. The camera uses a standard algorithm, and then you classify and combine that with the images that you can get from radar. Then you store it, and we have dataset management. We train the neural net that we choose for the radio with the information classification process from the camera, and then we develop use cases. This can be a gesture or someone entering a room and moving around, or it can involve more people. You have to enable the customers to use this in their applications. Some people are using it for monitoring what is happening on a desk in an office. Others are using it for mobile. The piece of hardware is always the same, but all of these applications are software. That’s how we develop in a more modular way. We have an infrastructure. We develop applications on that infrastructure. Data is a big topic, and we have to standardize the way that we collect data. If we don’t do that, we cannot manage it.
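The cross-modal labeling flow described above, where a standard camera classifier supplies the labels for time-aligned radar frames, can be sketched roughly as follows. This is a hedged illustration: the function names and data shapes are hypothetical placeholders, not Infineon's actual pipeline.

```python
# Hypothetical sketch: use a camera classifier to label time-aligned radar
# frames, so the radar neural net can be trained without manual annotation.

def build_radar_dataset(camera_frames, radar_frames, camera_classifier):
    """Pair each radar frame with the label the camera sees at the same time.

    camera_frames, radar_frames: time-aligned streams of raw frames.
    camera_classifier: callable mapping a camera frame to a class label.
    """
    dataset = []
    for cam, radar in zip(camera_frames, radar_frames):
        label = camera_classifier(cam)     # e.g. "person", "gesture", "empty"
        dataset.append((radar, label))     # radar sample, camera-derived label
    return dataset
```

The resulting `(radar, label)` pairs would then go into dataset management and training of whichever neural net is chosen for the radar.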
SE: There are different data types, and you’ve got to partition and prioritize traffic. And then you also have to look at that in the context of drift, interference, aging and security issues. How does that work?
Cerato: Yes, there are a lot of problems. You can combine the same sources in the same box with sensor fusion, or you can combine all of this in the network. At the end of the day, the way that you combine it only changes the place. You can combine all these things at a cloud level or you can combine them in the box. With an alarm system, you may want to know if someone is breaking a window. To do that, we combine a pressure sensor and microphone. If you break the window, you change the pressure, and the sound can be analyzed by your filter to be sure somebody has broken the window. But with the same type of sensor, you also can detect if the window is open or closed. If you only have the sound, you cannot distinguish whether glass is breaking because it has fallen on the floor, or whether it’s glass from a broken window. We also can combine radar with ultrasound, or with time of flight, and we can increase precision. This combination can be local, it can be on a network, or it can be in the cloud.
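The pressure-plus-microphone fusion described above can be sketched as a simple decision rule. This is an illustrative assumption, not Infineon's actual algorithm; the thresholds and score names are invented for the example.

```python
# Hypothetical sketch of fusing a pressure sensor with an acoustic
# glass-break classifier, per the alarm-system example above.

def detect_window_break(pressure_delta_pa, audio_glass_score):
    """Combine a sudden pressure change with an acoustic glass-break score.

    pressure_delta_pa: pressure change over a short window (Pa)
    audio_glass_score: 0..1 output of an acoustic glass-break classifier
    """
    pressure_event = abs(pressure_delta_pa) > 2.0   # illustrative threshold
    audio_event = audio_glass_score > 0.8
    if pressure_event and audio_event:
        return "window broken"          # both modalities agree
    if audio_event:
        # Sound alone cannot distinguish a dropped glass from a broken window.
        return "glass sound, no pressure change"
    if pressure_event:
        return "window opened or closed"
    return "no event"
```

The same rule could run locally in the box, on the network, or in the cloud; as the answer notes, fusion only changes the place where the combination happens.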
SE: How about calibration?
Cerato: If you put CO2 sensors around a building, or you put other types of sensors in street lamps in a city, the more you have the more you can rely on averaging those measurements for calibration. These are algorithms you can develop once you understand the functionality and the aging effects. If you put radar sensors in a building, during the day you can use them to determine room occupancy. If you download different software with an over-the-air update, you can change the functionality. It can become an alarm system or something else, and it can be combined with a video camera. Once the radar detects motion, the camera will turn on. And then the AI part kicks in for identification. And from time to time, the algorithms or software can be updated, usually with unsupervised training. The training can be done in the cloud before it is used on the edge, and the neural networks do not need to change.
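The averaging-based calibration described above can be sketched as a simple offset update. This is a minimal illustration under assumed conditions (all sensors momentarily see roughly the same ambient concentration); the drift model and update rule are assumptions, not a described Infineon algorithm.

```python
# Hypothetical sketch: recalibrate drifting CO2 sensors against the fleet
# average, relying on many deployed sensors as the answer above describes.

def recalibrate(readings_ppm, alpha=0.1):
    """Return per-sensor offsets nudging each sensor toward the fleet mean.

    readings_ppm: dict of sensor_id -> latest CO2 reading (ppm), sampled when
    all sensors should see roughly the same ambient concentration.
    alpha: fraction of the deviation corrected per calibration cycle.
    """
    mean = sum(readings_ppm.values()) / len(readings_ppm)
    return {sid: alpha * (mean - value) for sid, value in readings_ppm.items()}

# Example: three street-lamp sensors, one reading low and one reading high.
offsets = recalibrate({"lamp-1": 410.0, "lamp-2": 430.0, "lamp-3": 420.0})
```

Applying a small correction per cycle, rather than the full deviation, keeps a single outlier reading from overcorrecting a healthy sensor.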
SE: This raises some interesting issues. If you leverage one application versus another, does something age differently? Does it take advantage of the sensors differently? And are there more security issues?
Cerato: Security is a big topic, and one that is not fully resolved. With the typical security in communication, there are various methods for authentication. Some are cheaper, some are more effective, and all of these are evolving. But if you think about when you deploy AI, for example, that makes the edge more autonomous. And through the communication system, eventually you pass top-level communication. That means every wrong command can have bigger consequences in the edge. Imagine an automatic guided vehicle (AGV) that runs in a factory. You give it commands like, ‘Move from point A to point B.’ At point B there is a human to make sure it works. But suppose you’re using 5G to connect the machine directly to servers in the cloud, and there is a virus in the cloud that gets past all the firewalls and it tells the AGV where to go. That can be a disaster. So how you secure this — either in the cloud or on the edge to make sure you are not creating problems — is not resolved. On the edge, you don’t know what happened in the cloud. You can authenticate that the cloud is good and the communication is good, but you may not detect a problem. One solution is to have agents. You can have security agents that run on the cloud that talk to other agents at the edge.
SE: What do these agents do?
Cerato: They define a safe operating area. You define a safe operating area on one side, a safe operating area on the other, and they exchange communications. You can change firmware multiple times a day. In the past, you had an app that was always doing the same thing throughout its lifetime. If you changed the software, you changed the functionality, and you got updates with new features. And when you put a product in the market, you got a new release that fixes things. It was the same for IoT devices. But now, they are AI IoT, which allows them to increase the number of features and what they can do. For the same money, you get more. And there is an AI part that is purely in software, because now we’re learning how to reduce the code. So now you can classify gases using an AI algorithm, and that runs on a low-cost ARM6 processor with autonomy and multiple features.
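An edge-side agent of the kind described above might vet each incoming cloud command against its locally defined safe operating area, so a compromised cloud cannot drive the AGV outside known-safe limits. This is an illustrative sketch only; the command format, bounds, and limits are hypothetical.

```python
# Hypothetical sketch of an edge security agent enforcing a safe operating
# area on AGV commands received from the cloud, per the discussion above.

SAFE_AREA = {"x": (0.0, 50.0), "y": (0.0, 30.0)}   # factory floor bounds (m)
MAX_SPEED = 1.5                                     # local speed limit (m/s)

def vet_command(cmd):
    """Accept a move command only if it stays inside the safe operating area."""
    x, y = cmd["target"]
    (xmin, xmax), (ymin, ymax) = SAFE_AREA["x"], SAFE_AREA["y"]
    if not (xmin <= x <= xmax and ymin <= y <= ymax):
        return False                    # destination outside the safe area
    if cmd.get("speed", 0.0) > MAX_SPEED:
        return False                    # exceeds the locally enforced limit
    return True
```

The point of the agent pair is that this check runs on the edge with its own notion of "safe", so even an authenticated but compromised cloud command is bounded in the damage it can do.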
SE: But now you also need to understand potential interactions and optimizations, right?
Cerato: Yes, and this is where aging fits in. Aging in a gas sensor is important because that can change the accuracy. First of all, you need to characterize these devices so you can compensate for that aging. To do that, you need to know the electrical behavior, which allows you to calibrate it. We’re planning to do the same thing for touch sensors. You do that through the network and the cloud, and you have a service for these calibrations. That’s for sensing. There also is aging in power, and we see this in the switch power supply. And you can calibrate this precisely, but you also need to understand that the rest of the device can change, and you need an algorithm to compensate for that, too.
SE: When a chip is shipped into the field, you need to understand all of the possible interactions because often it’s part of a more complex system, right?
Cerato: Yes, and we are moving into more digital types of components, which are a tremendous improvement over the analog systems. We have sold 200 million-plus digital controllers for the chargers in adapters. This traditionally was done in analog, and we got a few parts per million of returns. Most of those returns were for drift or products at the edge of the spec. And because they weren’t all used the same way, some of them were falling out. With analog, there is drift deviation as the device ages. When you go into digital, the threshold is much higher, so you have much better quality. In analog, you also have to have enough margin to compensate for that drift and allow you to recalibrate.
SE: In the past, we also used to have pretty good differentiation between memory on a board and the processor. Now we’re starting to see in-memory and near-memory processing, and various different types of memories. How does that impact your designs?
Cerato: There are products we’re designing right now based on Arm’s M55, which is a combination of a RISC processor with an AI accelerator. On top of that we add another much smaller block to optimize memory and to pre-process the information. So the memory is no longer just a big block. It’s now an integral part of the functionality of your products. If you put this memory too far from the processor, it requires a lot of power and you end up with performance issues. So you build the memory really close by and customize it to the neural net. This is a completely different approach than a standardized system, where you optimize the configurations of different blocks and connect everything to the bus. You’re no longer thinking of the memory as a big block. Now you have different types of memory for different functions. You also have to think about different types of memory for security.
SE: How much of this effort is centered around AI?
Cerato: Within my R&D organization, 70% is focused on AI. When we started that work, we put AI in power and AI in sensors. At the beginning of all of this, we were thinking we would have a terabyte of data, high computations, and machines with acceleration. We have since learned that, depending on the categories of things we have to do, it can be done with a few megabytes of memory. The miniaturization part of AI is a combination of the type of neural net that you choose and the tool that compresses the data. We’ve found that a specific AI application actually can fit, free of charge, in what we are currently paying for. So if you think about the gas sensor, for example, if you’re looking for three gases in our graphene sensor, we only use 5 kilobytes. There are two instructions in the neural net that go into a low-cost ARM processor. Even the M0 can be used for that. There is a big impact on what you can do in the software by using simulation, what you quantify, and what you squeeze.
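A back-of-the-envelope calculation shows why a gas-classification net can fit in a few kilobytes, as described above. The layer sizes below are illustrative guesses, not the actual graphene-sensor model.

```python
# Hypothetical sketch: memory footprint of a small dense neural net,
# assuming int8 quantization (1 byte per weight), per the discussion above.

def model_size_bytes(layer_dims, bytes_per_weight=1):
    """Sum weights + biases for a fully connected net of the given layer sizes."""
    total = 0
    for n_in, n_out in zip(layer_dims, layer_dims[1:]):
        total += n_in * n_out + n_out   # weight matrix + bias vector
    return total * bytes_per_weight

# A tiny net: 8 sensor features -> 32 hidden units -> 3 gas classes.
size = model_size_bytes([8, 32, 3])     # 387 parameters, well under 5 KB at int8
```

Even at 4 bytes per weight (unquantized float32) this example stays around 1.5 KB, which is why such a net can run on a low-cost Arm core like the M0.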
SE: That also means you understand the problem well enough to be able to do that, right?
Cerato: Absolutely. What differentiates two companies working in the IoT these days isn’t the tools, which are now available to everybody for AI, or the simulation — or even skilled people. It’s the understanding of the application. You then realize what features you need to use for training your neural net, and you also understand the level of complexity of the neural net. At the end of the process, you also know how much you can squeeze the data without losing the information. So the key value of implementing AI is understanding the application. The person who is closest to the application is the one who is capable of making small neural nets using small amounts of memory and still have everything working. There is a lot of classification data available for voice, but you cannot find that for radar, for example. We have to create a database for that, and that database has a value because it’s classifying a feature we know we need. Today, if you have a high-definition movie, you can see all of the details on a big screen. But if you put the video in a watch, you don’t need the same level of information.
SE: How do you determine what you need?
Cerato: In the past, we were selling components to a company, and that company was making the application. Now we are in front of the customers, but we don’t necessarily know what the customer wants. It can’t be so technical that only engineers can use it. If you talk about user experience, we talk about intuitive sense. We have to go to a different level and learn the user approach, and that’s new. There is a disconnect between engineers and users. Now we also have to sell to smaller companies, not just through distributors. In the past, we went to maybe three customers with reference designs. Now we have thousands of customers and they are connected directly to us, and we are asking about the products they need.
SE: So what changes from the design side, because it’s no longer just about hardware or software? It’s really a multi-dimensional team.
Cerato: Yes, and you have to define the way you combine all of these things together and make them easy to use for yourself and for the customer. For instance, with the acquisition of Cypress, we got a product called a programmable SoC. So you have standard blocks, and you can configure different functions. Plus, there is a microcontroller base with a configurable FPGA. You can imagine how many things you can do with that, and it’s good for 10,000 units, or maybe even 1 million, but after that you need to do something else. Here, you’re really looking at total cost of ownership, and in this case what’s important is how long it takes to develop an application for a specific market rather than the cost of the chip. You need a library. It has to be organized, and there has to be a system for changing the blocks and to debug it.
Customizing Chips For Power And Performance - SemiEngineering
June 10, 2021