IBM has launched a new Power chip – or, as the company puts it, the ‘next generation in accelerated computing’ has arrived.
The Power9 processor is built to cope with intensive AI and machine learning workloads, using Nvidia NVLink and PCIe Gen4 – which offers 5.6 times faster data throughput than PCIe Gen3 – along with OpenCAPI technology. IBM claims the chip delivers four times the bandwidth of its predecessor, Power8.
IBM has also built a server around the new chip, the IBM Power Systems AC922, which offers AI processing speeds of up to 300 petaflops. The company also notes that it’s capable of boosting deep learning framework performance by up to 3.8 times compared to x86 solutions.
As TechCrunch reports, Patrick Moorhead, principal analyst at Moor Insights & Strategy, commented on the chip: “IBM’s Power9 is literally the Swiss Army knife of ML [machine learning] acceleration as it supports an astronomical amount of IO and bandwidth, 10X of anything that’s out there today.”
Reaching the Summit
IBM’s new chips are also being used in new supercomputers at the Lawrence Livermore and Oak Ridge national laboratories in the US – known as ‘Sierra’ and ‘Summit’ respectively – which should be up and running early next year.
Summit will apparently deliver individual application performance five to 10 times that of Titan, Oak Ridge’s older supercomputer, while Sierra will provide a boost of four to six times over its predecessor, Sequoia.