The US manufactures the world's largest computer chips

Cerebras' Wafer Scale Engine 3 (WSE-3) chip contains 4 trillion transistors and will help run the future 8 exaFLOP supercomputer Condor Galaxy 3.

Wafer Scale Engine 3 computer chip from Cerebras. (Photo: Cerebras).

Scientists have built the world's largest computer chip, containing 4 trillion transistors (the tiny semiconductor switches that form a chip's logic), according to Live Science. The giant chip will eventually be used to operate extremely powerful artificial intelligence (AI) supercomputers. The Wafer Scale Engine 3 (WSE-3) is the third-generation platform from AI supercomputer company Cerebras, designed to run AI systems such as OpenAI's GPT-4 and Anthropic's Claude 3 Opus. The chip packs 900,000 AI cores onto a 21.5 x 21.5 cm silicon wafer, the same size as its 2021 predecessor, the WSE-2.

The new chip draws the same amount of power as the WSE-2 but is twice as powerful. The previous chip contained 2.6 trillion transistors and 850,000 AI cores, so the transistor count has roughly doubled between generations, in line with Moore's law. For comparison, one of the most powerful chips currently used to train AI models, Nvidia's H200 graphics processing unit (GPU), contains 80 billion transistors, 50 times fewer than the Cerebras chip.
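The transistor comparison above is simple arithmetic; a quick sketch checking the figures quoted in the article (the ratio works out to 50, not a larger figure sometimes cited):

```python
# Transistor counts as reported in the article.
wse3_transistors = 4_000_000_000_000   # WSE-3: 4 trillion
wse2_transistors = 2_600_000_000_000   # WSE-2: 2.6 trillion
h200_transistors = 80_000_000_000      # Nvidia H200 GPU: 80 billion

# WSE-3 vs. the H200: 4 trillion / 80 billion = 50x
ratio = wse3_transistors // h200_transistors
print(ratio)  # 50
```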

The WSE-3 will eventually power the Condor Galaxy 3 supercomputer in Dallas, Texas, a company representative said on March 13. Condor Galaxy 3, now under construction, will comprise 64 building blocks of the Cerebras CS-3 AI system, each powered by a WSE-3 chip. Once connected and switched on, the full system will deliver up to 8 exaFLOPs of computing power. Combined with the existing Condor Galaxy 1 and Condor Galaxy 2 systems, the entire network will reach a total of 16 exaFLOPs (an exaFLOP measures combined computing throughput: it equals 1,000 petaFLOPS, or one quintillion, 10^18, floating-point operations per second).

Currently, the world's most powerful supercomputer is Oak Ridge National Laboratory's Frontier, with a capacity of about 1 exaFLOP. Condor Galaxy 3 will be used to train future AI systems up to 10 times larger than OpenAI's GPT-4 or Google's Gemini.