Intel’s “Built for Wonderful” event at MWC Barcelona 2022 showed how the company, together with its partners and customers, is enabling software-defined, fully programmable infrastructure spanning the cloud, the internet and 5G networks, and the intelligent edge. The event, hosted by Nick McKeown, senior vice president and general manager of Intel’s Network and Edge Group (NEX), focused on significant shifts and transitions, with software as an essential technology for the transformation.
To reveal new products and architectural approaches across the 5G network and edge, McKeown was joined by Dan Rodriguez, corporate vice president and general manager of the Network Platforms Group, and Sachin Katti, chief technology officer of the Network and Edge Group. Leading communications service providers and industry partners joined Intel to discuss these developments and share examples of successful collaboration.
Intel unveiled today, via its MWC Barcelona 2022 kick-off virtual keynote, new programmable hardware and open software as global networks become software-defined and edge inference transforms every sector. These two advancements will enable Intel’s customers and developer community to create unprecedented innovation at scale while positioning Intel for future growth and expanding its current market leadership.
The announcements ranged from architectural enhancements and future central processing units (CPUs) in Intel’s next-generation Intel® Xeon® Scalable processor, code-named Sapphire Rapids, to a new Intel® Xeon® D processor, Intel’s latest SoC designed from the ground up for the software-defined network and edge. On the software side, Intel unveiled new features including OpenVINO™ 2022.1 and new software modules in Intel® Smart Edge, all of which will help developers innovate at the edge with greater speed and flexibility.
Intel’s new Xeon D processor is designed from the ground up with network- and edge-specific features, including integrated AI and crypto acceleration, built-in Ethernet, support for time-coordinated computing and time-sensitive networking, and industrial-class durability. The new Intel Xeon D processor shines in use cases such as security appliances, enterprise routers and switches, cloud storage, wireless networks, and AI inferencing where compute needs to be performed close to where data is generated.
Intel unveiled new software modules in its Smart Edge portfolio as more developers build new services at the edge. The modules are fully tuned for Intel Xeon processors to help achieve the latency and bandwidth required for 5G User Plane Function (UPF) workloads at the network edge. The modules abstract away the hardware’s complexity so that developers can build applications on top that take full advantage of the packet-processing capabilities of Intel’s CPUs, making it easier to improve 5G UPF performance.
The Intel® Xeon® D-2700 and Intel® Xeon® D-1700, two new additions to Intel’s networking and edge processor family, meet customers where they need computation the most: in space- and power-constrained ruggedised settings. They provide industrial-grade durability, several hardware-based security features, and up to 56 high-speed PCIe lanes to enable high-bandwidth networking up to 100Gb Ethernet. The Intel® Xeon® D-1700 scales from 4 to 10 cores, while the Intel® Xeon® D-2700 scales from 4 to 20 cores, allowing customers to tailor solutions to their specific compute and performance requirements.
The new Intel Xeon D processors feature:
- Up to four channels (4CH) of DDR4 at 3200 MT/s.
- Up to 100 GbE of integrated Ethernet throughput.
- Up to 32/64 PCIe 4.0 lanes.
Built on Sunny Cove Core architecture, the new Intel® Xeon® D processors offer:
- An improved front end, with higher capacity and an improved branch predictor.
- A wider and deeper machine, sustained by wider allocation and larger structures and execution resources.
- Enhancements in translation lookaside buffers (TLBs), single-thread execution and prefetching.
- New data centre-optimised capabilities, including a larger mid-level cache (L2) and higher vector throughput.
Since the launch of OpenVINO™ in 2018, hundreds of thousands of developers have drastically improved AI inferencing performance, starting at the edge and progressing to the enterprise and client. Today, the firm released a new version of the Intel® Distribution of OpenVINO™ Toolkit ahead of MWC Barcelona 2022. The new features are based on developer feedback over the last three and a half years. They include a more extensive selection of deep learning models, additional device portability options, and improved inferencing performance with fewer code modifications.
Edge AI is transforming every industry, enabling new and improved use cases in everything from manufacturing to health and life sciences to retail, safety, and security. According to Media Research, worldwide edge AI chipset revenue will reach $51.9 billion by 2025, driven by growing demand for AI inference at the edge. Edge inference lowers latency, increases bandwidth, and improves performance to meet the increasingly time-critical processing demands of IoT devices and applications.