The Super Bowl of AI: Decoding Nvidia's GTC 2025 Keynote
- J L
- Mar 20
- 2 min read
Accelerated Computing and Key Concepts Explained

Thanks to the lightning speed of AI development, Nvidia's GTC is getting bigger each year: billed this time as the "Super Bowl of AI," it has outgrown last year's "Woodstock of AI." Having sifted through a massive amount of information, here are the key insights TechSoda drew from Nvidia CEO Jensen Huang's keynote speech.
If we had to sum up GTC 2025 in one sentence, here is what caught our eye: "As accelerated computing reaches critical mass and AI undergoes radical change, Nvidia is driving the future forward."
And "trillion" was the keyword of this keynote speech. Jensen said "trillion" at least 16 times: trillion tokens, trillion parameters, trillion transistors, robotics as the next trillion-dollar business, and trillions of floating-point operations per second! He had previously predicted that data center build-out would reach a trillion dollars by 2030, and this year he said, "I am fairly certain we're going to reach that very soon."
Since accelerated computing is a holistic approach, Nvidia combines hardware, software, and domain-specific optimizations to unlock AI's potential across industries, from data centers to edge networks. At GTC 2025, it was announced that Cisco, Nvidia, T-Mobile, Cerberus, and ODC will build a full stack for radio networks in the United States, putting AI at the edge. Why does this matter? Autonomous cars will be big business for Nvidia.
According to Jensen, Nvidia powers autonomous driving through three core pillars:
Data Center GPUs: Used by Tesla, Waymo, and Wayve to train AI models for self-driving systems.
In-Vehicle Compute: Nvidia’s DRIVE AGX platform (e.g., Orin chips) runs in cars from Toyota, Mercedes, and GM, handling real-time decision-making.
Full-Stack Solutions: Includes AI training (DGX), simulation (Omniverse), and safety-certified software (DriveOS).
And Jensen announced that GM has selected Nvidia as a partner to build its future self-driving car fleet.
GTC 2025 was no ordinary annual product launch of the kind Nvidia and other technology companies used to hold, even though the next-generation GPU architecture Feynman was announced. People may have picked up headlines such as "Blackwell is shipping in volume," "NVIDIA Dynamo is announced," or "everything is liquid-cooled." But it is worth noting that Jensen's keynote went to great lengths to explain the concepts behind accelerated computing and why it requires massive scale-up.
Dispelling doubts about R1 decreasing computing demand
Not surprisingly, Jensen took time to explain why DeepSeek's R1 reasoning model will increase the demand for computing power rather than decrease it. He compared a traditional LLM and a reasoning model solving the same wedding-seating problem.
A traditional LLM attempted to seat 300 wedding guests with constraints (traditions, feuds, photogenic layouts) in a single pass, producing a flawed answer in under 500 tokens.
In contrast, the reasoning model R1 broke down the problem, tested multiple scenarios, and self-verified its solution through iterative reasoning, consuming ~8,000 tokens. While the traditional LLM’s answer was fast but error-prone, R1’s thorough analysis ensured accuracy, though at a higher computational cost. This highlights reasoning models' ability to handle complex, multi-variable tasks through step-by-step logic and self-correction, even if slower and more resource-intensive.
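The trade-off above can be illustrated with a toy sketch: a single-pass solver emits one answer without checking it, while an iterative solver proposes, self-verifies against the constraints, and retries until the answer holds. The guest counts, feud pairs, and cost accounting below are invented for illustration and are not from the keynote demo.

```python
import random

# Toy seating problem: assign guests to tables so that no feuding
# pair shares a table. All names and constraints are hypothetical.
GUESTS = list(range(12))           # 12 guests
TABLES = 3                         # 3 tables
FEUDS = {(0, 1), (2, 3), (4, 5)}   # pairs that must be kept apart

def violations(plan):
    """Count feuding pairs seated at the same table."""
    return sum(1 for a, b in FEUDS if plan[a] == plan[b])

def single_pass(rng):
    """'Traditional LLM' style: emit one answer, no checking."""
    plan = [rng.randrange(TABLES) for _ in GUESTS]
    return plan, 1                 # one attempt, so unit cost

def iterative(rng, max_tries=100):
    """'Reasoning model' style: propose, self-verify, retry."""
    tries = 0
    plan = []
    while tries < max_tries:
        tries += 1
        plan = [rng.randrange(TABLES) for _ in GUESTS]
        if violations(plan) == 0:  # self-verification step
            break                  # only stop on a checked answer
    return plan, tries

rng = random.Random(0)
fast_plan, fast_cost = single_pass(rng)
good_plan, slow_cost = iterative(rng)
print("single-pass violations:", violations(fast_plan), "cost:", fast_cost)
print("iterative violations:", violations(good_plan), "cost:", slow_cost)
```

The iterative solver always returns a verified plan, but its cost grows with the number of proposals it must check, mirroring R1's ~8,000 tokens against the traditional model's ~500.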
To read the rest of the article, please visit: https://techsoda.substack.com/p/the-super-bowl-of-ai-decoding-nvidias