
Tuesday, April 6, 2021

Ice Lake has arrived: Intel launches 10-nanometer Xeon server chips with 40-core flagship - SiliconANGLE News


Intel Corp. today refreshed its flagship Xeon server processor line with more than three dozen new chips based on its latest 10-nanometer manufacturing process and featuring a raft of specialized features, including a high-speed artificial intelligence accelerator. 

The new chips, which are launching under the code name Ice Lake, represent the biggest update to Intel’s server portfolio in recent memory. They will power an upcoming generation of compute instances in the major public clouds, new supercomputers and high-speed 5G infrastructure, among many other systems.

“The data centers of the future will look very different from the way they do today,” Navin Shenoy, executive vice president and general manager of Intel’s Data Platforms Group, said during an online launch event today. “Intel now builds every component of what enterprises need.”

Intel says the Ice Lake series, which Shenoy called “the greatest processor we’ve ever built,” provides a 20% improvement in the number of instructions that can be carried out per clock cycle, a key measure of chip performance. That’s mainly thanks to the series’ use of a new core design known as Sunny Cove. Intel has packaged no fewer than 40 of those Sunny Cove cores into the fastest Ice Lake central processing unit, the Xeon Platinum 8380, which can support 80 threads at a base frequency of 2.3 gigahertz.
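
As a rough illustration of how those figures combine, peak throughput scales approximately with core count times clock frequency times instructions per clock. The minimal Python sketch below works through that back-of-envelope arithmetic; the previous-generation baseline (28 cores at 2.7 GHz) is an assumption for illustration, not a figure from the article.

```python
# Back-of-envelope throughput comparison: cores x base clock x relative IPC.
# Only the 40-core / 2.3 GHz / +20% IPC figures come from the article;
# the previous-generation baseline below is a hypothetical 28-core, 2.7 GHz part.

def relative_throughput(cores: int, base_ghz: float, ipc_factor: float) -> float:
    """Crude peak-throughput proxy: core count x clock x relative IPC."""
    return cores * base_ghz * ipc_factor

old_gen = relative_throughput(cores=28, base_ghz=2.7, ipc_factor=1.0)   # assumed baseline
ice_lake = relative_throughput(cores=40, base_ghz=2.3, ipc_factor=1.2)  # Xeon Platinum 8380

print(f"Estimated generational speedup: {ice_lake / old_gen:.2f}x")
```

With those assumed baseline figures the ratio comes out to roughly 1.46x, in the same ballpark as the average speedup Intel cites below, though real-world gains depend on memory, I/O and workload mix rather than this simple product.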

Intel says customers should expect an average 46% speedup in “popular data center workloads” compared with its previous-generation server CPUs. Compared with a five-year-old server, machines powered by Ice Lake will perform computations 260% faster.

The 20% boost in instructions per clock cycle is only one of the contributors to Ice Lake’s performance. Intel’s chip designers have also increased the size of the onboard caches, the memory circuits in which CPUs keep the data they are currently processing. The more cache there is, the more data the cores can keep close at hand instead of fetching it from slower main memory. The Sunny Cove cores that power the Ice Lake series have a 50% bigger L1 cache, the smallest and fastest memory bank sitting closest to each core, than previous-generation Xeon chips, and a 25% bigger L2 secondary cache.
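
Readers who want to see these numbers on their own hardware can read them from sysfs on Linux. A minimal sketch (Linux-only; the exact entries can vary by kernel and platform):

```python
# Print the cache hierarchy reported by Linux for CPU 0 (Linux-only sketch).
from pathlib import Path

for index in sorted(Path("/sys/devices/system/cpu/cpu0/cache").glob("index*")):
    level = (index / "level").read_text().strip()   # 1, 2, 3 ...
    ctype = (index / "type").read_text().strip()    # Data, Instruction or Unified
    size = (index / "size").read_text().strip()     # e.g. "48K" or "1280K"
    print(f"L{level} {ctype}: {size}")
```

On an Ice Lake core the L1 data cache would be expected to report 48K and the L2 1280K per core, versus 32K and 1024K on the prior generation, figures consistent with the 50% and 25% increases described above (stated here as an illustration, not taken from the article).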

Intel has also enhanced the speed at which its CPUs can fetch data from random-access memory. In a notable improvement over previous-generation Xeon chips, all Ice Lake processors are shipping with eight memory channels. That means more data can be transferred to and from RAM at once.
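
The practical effect of more channels is higher theoretical peak memory bandwidth, which is simply channels multiplied by transfer rate and bus width. A minimal sketch of that arithmetic, assuming DDR4-3200 DIMMs (the memory speed and the six-channel comparison point are assumptions; the article only states the eight-channel count):

```python
# Theoretical peak DRAM bandwidth = channels x transfers per second x bytes per transfer.
# DDR4-3200 (3200 MT/s) with a 64-bit, i.e. 8-byte, bus per channel is assumed here.

def peak_bandwidth_gbs(channels: int, mega_transfers: int, bytes_per_transfer: int = 8) -> float:
    return channels * mega_transfers * 1e6 * bytes_per_transfer / 1e9

print(f"6 channels: {peak_bandwidth_gbs(6, 3200):.1f} GB/s")  # hypothetical six-channel layout
print(f"8 channels: {peak_bandwidth_gbs(8, 3200):.1f} GB/s")  # Ice Lake's eight-channel layout
```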

The top-end Xeon Platinum 8380 also pulls information significantly faster from other types of peripheral hardware, such as graphics cards, networking devices and flash storage, thanks to support for the new PCIe 4.0 hardware interconnect technology. PCIe 4.0 is twice as fast as the previous PCIe 3.0 standard. 
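
The doubling follows directly from the per-lane signalling rate: PCIe 3.0 runs at 8 gigatransfers per second per lane and PCIe 4.0 at 16, both with 128b/130b line encoding. A quick sketch of the resulting bandwidth of a 16-lane link:

```python
# Per-direction PCIe link bandwidth = lanes x signalling rate x encoding efficiency / 8 bits.
ENCODING = 128 / 130  # 128b/130b line coding used by both PCIe 3.0 and PCIe 4.0

def pcie_gbs(lanes: int, gigatransfers: float) -> float:
    return lanes * gigatransfers * ENCODING / 8

print(f"PCIe 3.0 x16: {pcie_gbs(16, 8):.1f} GB/s")   # roughly 15.8 GB/s each way
print(f"PCIe 4.0 x16: {pcie_gbs(16, 16):.1f} GB/s")  # roughly 31.5 GB/s each way
```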

“With a backdrop of fierce competition, Intel is leading with its strengths with its 3rd Gen Xeon processors,” commented Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “The company is offering a platform approach to provide its partners solutions incorporating CPUs, storage, memory, FPGAs and networking ASICs. This is in addition to its ability to leverage resources for co-marketing and co-development.”

Another element of Intel’s pitch to partners such as server makers is a set of specialized processing features shipping with Ice Lake. They mainly focus on boosting cybersecurity and artificial intelligence applications, which process data using a different set of mathematical operations than standard workloads. 

In the cybersecurity department, the flagship Xeon Platinum 8380 and other top-end Ice Lake chips ship with a memory isolation technology called SGX. It allows the CPU to turn parts of a server’s RAM into “secure enclaves” that lend themselves to storing sensitive data such as encryption keys. Data in secure enclaves is inaccessible to other applications running on the same server, even if they otherwise have full administrator-level access to the machine.

Common cryptographic algorithms such as AES and SHA run faster on Ice Lake silicon thanks to new instructions built into the chips, including the Galois Field New Instructions, or GFNI. Part of the speedup is the result of Intel having parallelized some encryption operations so that they can run side by side rather than one after another. In one instance, the company’s engineers found a way to compress two separate algorithms into a single workflow to reduce processing requirements.
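
The best-known example of fusing two cryptographic steps into a single pass is AES-GCM, which interleaves AES encryption with GHASH authentication; the article does not say whether that is the specific combination Intel means, so treat the following as an illustration only. This minimal Python sketch uses the third-party cryptography package, whose OpenSSL backend uses the CPU’s AES instructions when the hardware provides them:

```python
# Authenticated encryption with AES-GCM, which combines encryption (AES-CTR)
# and authentication (GHASH) into a single pass over the data.
# Requires the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                      # 96-bit nonce, the recommended size
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"sensitive payload", b"associated data")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"associated data")
assert plaintext == b"sensitive payload"
```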

For applications that use AI models, Ice Lake provides an implementation of Intel’s DL Boost machine learning accelerator. The accelerator is one of the main highlights of the processor series. Intel is promising 74% faster AI performance than previous-generation chips and up to 150% faster speeds than rival Advanced Micro Devices Inc.’s EPYC 7763 CPU. The chipmaker reached the latter figure by comparing the two processors across what it describes as 20 popular AI workloads. 
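
Much of DL Boost’s advantage comes from performing inference math on 8-bit integers and accumulating the results in 32-bit registers, rather than working in 32-bit floating point. The NumPy sketch below illustrates that quantized dot-product pattern; it shows the arithmetic such hardware accelerates, not Intel’s implementation, and the scale factor is hypothetical.

```python
# Quantized inference sketch: int8 inputs with int32 accumulation, the arithmetic
# pattern that 8-bit inference accelerators such as DL Boost speed up in hardware.
# This NumPy version only illustrates the math; it does not use those instructions.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.integers(0, 128, size=(1, 256), dtype=np.int8)   # quantized inputs
weights = rng.integers(-128, 128, size=(256, 64), dtype=np.int8)   # quantized weights

# Accumulate in int32 to avoid overflowing the 8-bit range.
acc = activations.astype(np.int32) @ weights.astype(np.int32)

scale = 0.02  # hypothetical dequantization scale factor
outputs = acc * scale
print(outputs.shape)  # (1, 64)
```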

According to Moorhead, Intel is “differentiated with its on-chip ML inference and cryptographic capabilities versus its closest competitors.” 

Though Ice Lake only officially launched today, Intel has already sold more than 200,000 units to early customers. Those chips were purchased by leading cloud service providers, which Intel says are already working on new Ice Lake-powered instances, as well as by high-performance computing labs and server makers. Telecommunications equipment makers have lined up to purchase Ice Lake chips too, in part because the series includes multiple carrier-focused processors specifically optimized to power 5G infrastructure.

With reporting from Robert Hof

Photo: Intel/livestream

