
World’s fastest AI chip features 900,000 AI cores

Cerebras Systems has launched the Wafer Scale Engine 3 (WSE-3), the world's fastest AI chip, featuring 4 trillion transistors and 900,000 AI cores. The WSE-3, built on a 5 nm process, powers the Cerebras CS-3 AI supercomputer, which is capable of 125 petaflops of peak AI performance. The new chip is designed to train large AI models efficiently, supporting models of up to 24 trillion parameters without the need for partitioning, which simplifies the training process.

At the heart of the CS-3, the third-generation 5 nm Wafer Scale Engine powers the industry's most scalable AI supercomputers, reaching up to 256 exaFLOPs across 2,048 nodes and handling models of up to 24 trillion parameters with ease.

“When we started this journey eight years ago, everyone said wafer-scale processors were a pipe dream. We could not be more proud to be introducing the third generation of our groundbreaking wafer-scale AI chip,” said Andrew Feldman, CEO and co-founder of Cerebras. “WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from mixture of experts to 24 trillion parameter models. We are thrilled to bring WSE-3 and CS-3 to market to help solve today's biggest AI challenges.”

The WSE-3 is built using an advanced 5 nm process, which has allowed 44 GB of SRAM to be integrated on the chip itself. It doesn't stop there: the chip's memory can be expanded externally to as much as 1.2 petabytes, so even workloads that require enormous amounts of data can be processed without a hitch. The design is also highly scalable, allowing up to 2,048 CS-3 systems to be connected in a cluster, which makes it versatile for everything from enterprise deployments to large-scale computing environments.

Cerebras unveils world’s fastest AI chip

Cerebras hasn’t focused solely on raw performance; it has also made sure the technology is user-friendly. The Cerebras Software Framework now supports PyTorch 2.0, which simplifies programming large language models (LLMs), so developers can do more with less code, cutting down on complexity and speeding up the time it takes to develop new applications. The WSE-3 also introduces hardware acceleration for dynamic and unstructured sparsity, which can potentially make training up to eight times faster.
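To give a rough sense of what “less code” means in practice, below is a minimal sketch of a standard PyTorch 2.0 training step of the kind the framework is described as accepting. It is generic PyTorch, not Cerebras-specific API; the tiny model, vocabulary size, and dummy batch are assumptions made purely for illustration.

```python
# Minimal sketch: a plain PyTorch 2.0 training step (generic PyTorch, not Cerebras API).
# Model size, vocabulary, and the dummy batch are illustrative assumptions.
import torch
import torch.nn as nn

# A tiny transformer standing in for a much larger LLM.
model = nn.Sequential(
    nn.Embedding(32_000, 512),  # token embeddings
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
        num_layers=4,
    ),
    nn.Linear(512, 32_000),     # project back to vocabulary logits
)

compiled = torch.compile(model)  # PyTorch 2.0 graph capture and compilation
optimizer = torch.optim.AdamW(compiled.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch of 8 sequences, 128 tokens each; the objective here is only
# to exercise a forward/backward pass, not to train anything meaningful.
tokens = torch.randint(0, 32_000, (8, 128))
logits = compiled(tokens)
loss = loss_fn(logits.reshape(-1, 32_000), tokens.reshape(-1))
loss.backward()
optimizer.step()
```

The article's point is that a standard model definition like this can scale on the CS-3 without the manual tensor- or pipeline-parallel partitioning code that training very large models across many GPUs normally requires.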

Specs

  • 4 trillion transistors
  • 900,000 AI cores
  • 125 petaflops of peak AI performance
  • 44 GB on-chip SRAM
  • 5 nm TSMC process
  • External memory: 1.5 TB, 12 TB, or 1.2 PB
  • Trains AI models of up to 24 trillion parameters
  • Cluster size of up to 2,048 CS-3 systems

In the world of computing, power efficiency is essential. Impressively, the WSE-3 doubles the performance of its predecessor while keeping power consumption the same, which means the boundaries of AI can keep being pushed without blowing through energy budgets.

The impact of the WSE-3 and the CS-3 AI supercomputer is already being felt across different industries. Cerebras reports a significant backlog of orders from enterprise, government, and international cloud customers, and the technology plays a key role in partnerships with major institutions such as Argonne National Laboratory and the Mayo Clinic, supporting AI research and improving patient care.

Looking ahead, Cerebras plans to collaborate with G42 to build some of the world's largest AI supercomputers. One project in the pipeline, Condor Galaxy 3, is set to deliver 8 exaFLOPs of AI compute, showcasing the potential of the WSE-3.

The Wafer Scale Engine 3 from Cerebras is a major step forward in AI hardware. With its computational power, scalability, and energy-efficient performance, backed by a capable software framework, it is a compelling tool for anyone looking to harness the full power of AI. As Cerebras continues to push the envelope, the future of AI development and deployment looks more promising than ever.


