GTC 2026 Day 1: Jensen Huang Just Changed the Game
Three hours. 30,000 people in an arena. A trillion dollars in projected orders. An Olaf robot walking onto a stage in San Jose.
GTC 2026 Day 1 was not a product announcement. It was a statement of intent. Jensen Huang used every minute of that keynote to make one argument: the age of agentic AI is here, Nvidia built the infrastructure for it, and if you are not paying attention, you are already behind.
Here is everything that happened, and what it means.
The Number That Stopped the Room
Before any hardware, before any demos, Jensen Huang opened with a number: $1 trillion in projected orders through 2027 across Blackwell and Vera Rubin platforms combined.
That is not a forecast from an analyst. That is Nvidia telling the market what its own order book looks like. For context, Nvidia posted $130 billion in revenue in its last fiscal year. A trillion-dollar pipeline is a different category of business entirely.
The throughline of the entire keynote followed from that number. Computing demand, Jensen said, has increased by one million times over the last few years. Agentic AI is not a feature. It is a new computing paradigm. And every announcement that followed was Nvidia’s answer to what that paradigm requires.
Vera Rubin: The Full Stack, Finally
The Vera Rubin platform has been anticipated for months. On Monday, Nvidia showed the complete picture: 7 chips, 5 rack-scale systems, and one supercomputer, vertically integrated from silicon to software.
The new Vera CPU is purpose-built for agentic AI workloads. The BlueField-4 STX handles storage. The full system, assembled from 1.3 million components, claims 10x the performance per watt of Grace Blackwell.
Jensen’s framing was deliberate: “When we think Vera Rubin, we think the entire system, vertically integrated, complete with software, extended end to end, optimized as one giant system.”
That is the pitch. Not a GPU. A computing platform. Vera Rubin NVL72 ships to customers later in 2026.
The Groq Surprise
Everyone expected Vera Rubin. Nobody expected this.
Nvidia unveiled the Groq 3 LPU: the first chip product from its $6 billion Groq acquisition in December 2025, the largest deal in Nvidia's history. And it did not just unveil a chip. It unveiled a full rack system.
The Groq 3 LPX rack holds 256 LPUs and sits alongside the Vera Rubin rack-scale system. The design philosophy is elegant: one processor optimized for high throughput, one for low latency, unified into a single system. Nvidia claims 35x tokens-per-watt improvement when the two are paired.
Jensen on stage: “We united, unified two processors of extreme differences, one for high throughput, one for low latency.”
Groq 3 ships in Q3 2026. The inference market just got significantly more interesting.
NemoClaw and OpenShell: Enterprise Agents Get a Foundation
This is the announcement that matters most for the enterprise market, and it landed exactly as anticipated.
Nvidia is backing OpenClaw across its entire platform. Jensen called it “the most popular open source project in the history of humanity” and then went further: “Every single company in the world today has to have an OpenClaw strategy.”
That endorsement carries weight. But the product announcement is what enterprises actually needed.
NemoClaw is Nvidia’s enterprise security stack built on top of OpenClaw: policy enforcement, network guardrails, privacy routing. The companion OpenShell runtime handles secure agent deployment inside organizations. Jensen described the combined stack as “the policy engine of all the SaaS companies in the world.”
The pre-keynote prediction held: NemoClaw is exactly the enterprise control layer that organizations needed before they could take OpenClaw seriously. OpenClaw gave developers the sports car. NemoClaw gives the enterprise the fleet management system.
Feynman: What Comes After
Nvidia did not just announce what is shipping. It showed what comes next.
The Feynman architecture is the next major platform after Vera Rubin. Key details: a new CPU called Rosa (named for Rosalind Franklin), paired with LP40 LPU and BlueField-5, connected via NVIDIA Kyber, which combines copper and co-packaged optics. The Kyber rack fits 144 GPUs in vertical compute trays for higher density and lower latency.
Feynman debuts in Vera Rubin Ultra in 2027. The roadmap is intact and the pace is not slowing.
Space Data Centers
This one landed differently in the room.
Nvidia announced plans to bring AI data centers into orbit. The NVIDIA Space-1 Vera Rubin system extends accelerated computing beyond Earth. The DSX AI Factory reference design and Omniverse DSX Blueprint ship alongside it. DSX Air lets you simulate an AI factory in software before building it physically.
It sounds like science fiction. It is also entirely consistent with where satellite compute is heading. When bandwidth to orbit is no longer the bottleneck, running AI inference in space becomes a logistics question rather than an engineering question. Nvidia is not waiting to find out.
The Deals That Define the Moment
Two partnerships stood out above everything else announced on stage.
Thinking Machines Lab, the new frontier AI company founded by Mira Murati after she left OpenAI, signed a multi-year strategic partnership to deploy at least 1 gigawatt of Vera Rubin systems for frontier model training. One gigawatt. That is the largest known compute commitment by a frontier AI startup.
Disney ended the show by walking an Olaf robot onto the stage. The droid, named EMBO, is built on multiple Nvidia AI models for motion, speech, and personality, and is targeted for theme park deployment. The demonstration makes Nvidia's point: physical AI is not a future capability. It is a product shipping to one of the most demanding deployment environments imaginable.
DLSS 5 and the Rendering Shift
With DLSS 5, Nvidia announced 3D-guided neural rendering enabling real-time photoreal 4K performance on local hardware. The probabilistic rendering approach delivers significant visual fidelity improvements, particularly for human faces. Jensen: "The future is neural rendering."
This matters beyond games. The same rendering technology underpins simulation environments for robotics, autonomous vehicles, and enterprise digital twins.
The Nemotron Coalition
Nvidia expanded its open model ecosystem with the Nemotron Coalition: Thinking Machines Lab, Perplexity, Cursor, Mistral AI, and others. The focus is agentic, physical, and healthcare AI model families.
This is Nvidia building the model supply chain that runs on its infrastructure. Not by building models itself. By funding and partnering with the companies that do.
Market Reaction
NVDA closed up 2% on Monday. The number sounds modest given the scale of announcements. It is not.
Nvidia was trading below its 50-day moving average heading into GTC, under pressure from macro uncertainty and investor questions about Vera Rubin demand. A clean 2% gain on keynote day, with 93% of analysts at Buy and Morgan Stanley reinstating it as the top semiconductor pick, is a vote of confidence from a market that was looking for reasons to doubt.
The caution that remains is supply-side: investors want proof that Vera Rubin can ramp as fast as demand requires. That answer will come in the next two quarters.
What Day 2 Holds
The keynote was the headline. The real GTC happens in the technical sessions.
Watch for: deep dives on the Vera Rubin architecture, NemoClaw developer documentation, Groq 3 LPU benchmark data from hardware reviewers, and any Vera Rubin pricing or availability updates from OEM partners including Dell and HPE. The developer community reaction to OpenShell and NemoClaw will tell you more about real-world adoption potential than any press release.
Day 1 set the table. The food gets served all week.
Sources: NVIDIA Blog, CNBC, Reuters, Tom’s Hardware, Tom’s Guide, CNET, TechRadar, ServeTheHome, ZDNet, TipRanks, Yahoo Finance.