In a recent announcement, Google unveiled information about its latest artificial intelligence supercomputer, which boasts superior speed and efficiency compared to rival Nvidia systems. The development is of particular significance as the demand for power-intensive machine learning models continues to dominate the tech industry.
While Nvidia dominates the market for training and deploying AI models, holding a share of over 90%, Google has been designing and deploying its own AI chips, called Tensor Processing Units (TPUs), since 2016.
On Tuesday, Google said that it had built a system with over 4,000 TPUs joined with custom components designed to run and train AI models. The system has been running since 2020 and was used to train Google's PaLM model, which competes with OpenAI's GPT models, over a period of 50 days.
Google’s TPU-based supercomputer, called TPU v4, is “1.2x–1.7x faster and uses 1.3x–1.9x less power than the Nvidia A100,” the Google researchers wrote.
“The performance, scalability, and availability make TPU v4 supercomputers the workhorses of large language models,” the researchers continued.
However, Google did not compare its TPU results against Nvidia’s latest AI chip, the H100, because the H100 is more recent and was made with more advanced manufacturing technology, the researchers said.
An AI supercomputer is a powerful computing system designed to perform complex AI tasks such as natural language processing, image and speech recognition, and deep learning.
It typically consists of multiple processors and high-performance computing components that work together to provide massive processing power and faster execution times for AI applications.
AI supercomputers are essential for solving some of the most challenging problems in fields such as healthcare, finance, and scientific research.