AI indirectly topped this week’s most interesting tech news by virtue of being the center of attention of NVIDIA’s earnings report. At a time when investors are running in circles to lock down the latest and greatest AI startups, and every company on the planet seems to be marketing its use of generative AI, it’s become quite challenging to cut through the hype and find hard data supporting the growth, implementation, and usage of this general-purpose technology. But not this week. NVIDIA’s year-over-year data center revenue grew by a jaw-dropping (and I mean jaw-to-the-floor dropping) 405%, largely driven by its graphics processing unit (GPU) computing platform. I say “indirectly” because short of reading (or hearing) the earnings call commentary from NVIDIA CFO Colette Kress, it wouldn’t have been intuitive that the GPU platform was the driving force behind this growth, nor exactly what that platform is or why it mattered.
NVIDIA’s GPU platform and chips have historically been designed for and used in gaming, rendering and managing parallel tasks in graphics-intensive applications. But the unique design of these chips, and their efficiency at processing large amounts of data in parallel (unlike the all-purpose central processing unit (CPU) most of us are accustomed to in our day-to-day computing), makes them extremely well suited for machine learning and other artificial intelligence algorithms, whose performance depends almost entirely on their ability to process immensely large amounts of cleaned, ordered, and properly formatted data.
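To make that contrast concrete, here’s a toy sketch in Python (using NumPy purely as a stand-in; the function names are my own illustrations, not anything from NVIDIA’s stack). The loop mimics the serial, one-element-at-a-time pattern a general-purpose CPU core follows, while the vectorized call mimics the “same operation applied to many data points at once” pattern that a GPU executes across thousands of cores simultaneously:

```python
import numpy as np

def relu_serial(x):
    """CPU-style pattern: visit each element one at a time, in sequence."""
    out = []
    for v in x:
        # The same simple operation, repeated once per element.
        out.append(v if v > 0 else 0.0)
    return np.array(out)

def relu_parallel(x):
    """Data-parallel pattern: one operation issued over the whole array,
    the shape of work a GPU spreads across thousands of cores at once."""
    return np.maximum(x, 0.0)

# Both produce identical results; the difference is how the work is laid out.
x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
assert np.allclose(relu_serial(x), relu_parallel(x))
```

The operation itself (here, the ReLU function common in neural networks) is trivial; the point is that ML workloads consist of enormous numbers of these identical, independent operations, which is precisely the shape of work GPUs were built to execute in parallel.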
As I’ve written before in the context of quantum computing (and as NVIDIA’s Kress aptly brought to the fore on the call), Moore’s Law is approaching its limits: the number of transistors that can be packed into a microchip is rapidly nearing what’s physically possible. And until quantum computing finds its legs, enterprise-level companies, especially cloud computing giants like Microsoft and Google, are scrambling to squeeze out greater computational efficiency to keep pace with the rapidly growing demand for AI-specific programs.
The answer to this immediate computational bottleneck has been GPU-accelerated computing, a scheme that harnesses multiple GPUs to handle the heavy lifting of the parallel processing AI requires. It’s not that GPUs are being repurposed; rather, their unique ability to process large volumes of data quickly and in parallel makes them extremely well suited for these types of systems.
GPUs and their attendant platforms have been at the core of NVIDIA’s business for years. But demand for the product and service has never been greater, thanks to the rush, especially among, again, enterprise-level cloud giants, for better machines to run AI applications. And that’s exactly what NVIDIA’s earnings report verified.
It’s interesting, too, that the best quantifiable evidence to date for the growth and utilization of AI isn’t coming from the software side; it’s coming from hardware sales, and from the capital corporations are allocating toward new data centers designed to house these GPU architectures.
These dollars invested in AI infrastructure prove, definitively, that the age of AI is here.