
Intel targets AI hardware dominance by 2025

The chip giant's diverse range of CPUs, GPUs, and AI accelerators complements its commitment to an open AI ecosystem

3 April 2023

Intel has laid out a roadmap for establishing product leadership in the processor market by 2025, alongside a goal of democratising AI under a consolidated range of AI-optimised hardware and software.

Core to its proposition is a diverse range of products including central processing units (CPUs), graphics processing units (GPUs), and dedicated AI architecture alongside open source software improvements.

Businesses can expect to benefit from fourth-generation Sapphire Rapids Xeon CPUs immediately, with the fifth-generation Xeon codenamed Emerald Rapids set for a Q4 2023 release. This will be followed in 2024 by two processors known as Granite Rapids and Sierra Forest. 

 

Intel claims Sapphire Rapids can deliver up to ten times greater performance than previous generations. Internal test results also showed a 48-core, fourth-generation Xeon delivering four times the performance of a 48-core AMD EPYC across a range of AI imaging and language benchmarks.

With Granite Rapids and Sierra Forest, Intel will address current limitations for AI and high-performance computing workloads such as memory bandwidth, delivering 1.5TB/s of memory bandwidth and an 83% increase in peak bandwidth over current generations.

Separately, Intel is also focusing development on GPUs and FPGAs (field-programmable gate arrays) to meet demand for large language model training, largely through its Intel Max and Gaudi chips.

It stated that Gaudi 2 has demonstrated twice the deep learning inference and training performance of the most popular GPUs.

Training at this level is key for large language models (LLMs), and demand has surged with the meteoric rise of generative AI models such as ChatGPT.

Around 15 FPGA products are due out this calendar year, expanding Intel's compute product range for deep learning, artificial intelligence, and other high-performance computing needs.

Over time, Intel intends to draw together its GPU and Gaudi AI accelerator portfolios, allowing developers to run software across both architectures.

In addition to its achievements and plans for hardware, the firm said it aims to capture and democratise the AI market through software development and collaboration.

With 6.2 million active developers in its community, and 64% of AI developers using Intel tools, its ecosystem already has strong foundations for further AI development.

Intel cited its recent work with Hugging Face, which ran the 176-billion-parameter LLM BLOOMZ on its Gaudi2 architecture. BLOOMZ is a fine-tuned version of BLOOM, a text model covering 46 natural languages and 13 programming languages, and is also available as a lightweight seven-billion-parameter model.

“For the 176-billion-parameter checkpoint, Gaudi2 is 1.2 times faster than A100 80GB,” wrote Régis Pierrard, machine learning engineer at Hugging Face.

Future Publishing


TechCentral.ie