Colette Kress
Executive Vice President and Chief Financial Officer at NVIDIA
Thank you, Stuart. Q3 was another record quarter. We continue to deliver incredible growth. Revenue of $35.1 billion was up 17% sequentially and up 94% year-on-year and well above our outlook of $32.5 billion. All market platforms posted strong sequential and year-over-year growth, fueled by the adoption of NVIDIA accelerated computing and AI. Starting with Data Center. Data Center revenue of $30.8 billion was another record, up 17% sequentially and up 112% year-on-year.
NVIDIA Hopper demand is exceptional. And sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company's history. The H200 delivers up to two times faster inference performance and up to 50% improved TCO. Cloud service providers were approximately half of our data center sales with revenue increasing more than two times year-on-year. CSPs deployed NVIDIA H200 infrastructure and high-speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads.
NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave and Microsoft Azure with Google Cloud and OCI coming soon. Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped year-on-year as North America, India, and Asia Pacific regions ramped NVIDIA Cloud instances and sovereign cloud build-outs.
Consumer Internet revenue more than doubled year-on-year as companies scaled their NVIDIA Hopper infrastructure to support next-generation AI models training, multimodal, and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads. NVIDIA Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world. Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements.
Rapid advancements in NVIDIA software algorithms boosted Hopper inference throughput by an incredible five times in one year and cut time to first token by five times. Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4 times. Continuous performance optimizations are a hallmark of NVIDIA and drive increasingly economic returns for the entire NVIDIA installed base. Blackwell is in full production after a successfully executed mask change.
We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. Blackwell is a full-stack, full-infrastructure, AI data center-scale system with customizable configurations needed to address a diverse and growing AI market: from x86 to Arm, training to inferencing GPUs, InfiniBand to Ethernet switches, NVLink, and from liquid-cooled to air-cooled. Every customer is racing to be the first to market.
Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers. We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale. Oracle announced the world's first Zettascale AI cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models.
Yesterday, Microsoft announced they will be the first CSP to offer, in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand. Last week, Blackwell made its debut on the most recent round of MLPerf training results, sweeping the per-GPU benchmarks and delivering a 2.2x leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute. Only 64 Blackwell GPUs are required to run the GPT-3 benchmark, compared to 256 H100s, a four times reduction in cost.
NVIDIA Blackwell architecture with NVLink Switch enables up to 30 times faster inference performance and a new level of inference scaling throughput and response time that is excellent for running new reasoning inference applications like OpenAI's o1 model. With every new platform shift, a wave of start-ups is created. Hundreds of AI-native companies are already delivering AI services with great success. Though Google, Meta, Microsoft, and OpenAI are the headliners, Anthropic, Perplexity, Mistral, Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Codeium, Cursor, and Abridge are seeing great success, while thousands of AI-native start-ups are building new services. The next wave of AI is enterprise AI and industrial AI. Enterprise AI is in full throttle. NVIDIA AI Enterprise, which includes NVIDIA NeMo and NIM microservices, is an operating platform for agentic AI. Industry leaders are using NVIDIA AI to build copilots and agents.
Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP, and ServiceNow are racing to accelerate development of these applications with the potential for billions of agents to be deployed in the coming years. Consulting leaders like Accenture and Deloitte are taking NVIDIA AI to the world's enterprises. Accenture launched a new business group with 30,000 professionals trained on NVIDIA AI technology to help facilitate this global build-out.
Additionally, Accenture, with over 770,000 employees, is leveraging NVIDIA-powered agentic AI applications internally, including one case that cuts manual steps in marketing campaigns by 25% to 35%. Nearly 1,000 companies are using NVIDIA NIM, and the speed of its uptake is evident in NVIDIA AI Enterprise monetization. We expect NVIDIA AI Enterprise full-year revenue to increase over two times from last year, and our pipeline continues to build. Overall, our software, service, and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion.
Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world. Like NVIDIA NeMo for enterprise AI agents, we built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics. Some of the largest industrial manufacturers in the world are adopting NVIDIA Omniverse to accelerate their businesses, automate their workflows, and achieve new levels of operating efficiency.
Foxconn, the world's largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage. From a geographic perspective, our Data Center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries. As a percentage of total Data Center revenue, it remains well below levels prior to the onset of export controls. We expect the market in China to remain very competitive going forward.
We will continue to comply with export controls while serving our customers. Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India's leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year-end, they will have boosted NVIDIA GPU deployments in the country by nearly 10 times.
Infosys, TCS, and Wipro are adopting NVIDIA AI Enterprise and upskilling nearly half a million developers and consultants to help clients build and run AI agents on our platform. In Japan, SoftBank is building the nation's most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform the telecommunications network into a distributed AI network with NVIDIA AI Aerial and AI-RAN platform that can process both 5G RAN and AI on CUDA. We are launching the same in the US with T-Mobile.
Leaders across Japan, including Fujitsu, NEC, and NTT, are adopting NVIDIA AI Enterprise, and major consulting companies, including EY Strategy and Consulting, will help bring NVIDIA AI technology to Japan's industries. Networking revenue increased 20% year-on-year. Areas of sequential revenue growth include InfiniBand and Ethernet switches, SmartNICs, and BlueField DPUs. Though networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4.
CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters. NVIDIA Spectrum-X Ethernet for AI revenue increased over three times year-on-year, and our pipeline continues to build with multiple CSPs and consumer Internet companies planning large cluster deployments. Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute.
Utilizing Spectrum-X, xAI's Colossus 100,000-Hopper supercomputer experienced zero application latency degradation and maintained 95% data throughput versus 60% for traditional Ethernet. Now moving to Gaming and AI PCs. Gaming revenue of $3.3 billion increased 14% sequentially and 15% year-on-year. Q3 was a great quarter for Gaming, with notebook, console, and desktop revenue all growing sequentially and year-on-year.
RTX end demand was fueled by strong back-to-school sales as consumers continue to choose GeForce RTX GPUs and devices to power gaming, creative, and AI applications. Channel inventory remains healthy, and we are gearing up for the holiday season. We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from ASUS and MSI, with Microsoft's Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo and video editing, image generation, and coding.
This past quarter, we celebrated the 25th anniversary of the GeForce 256, the world's first GPU. From transforming computer graphics to igniting the AI revolution, NVIDIA's GPUs have been the driving force behind some of the most consequential technologies of our time. Moving to ProViz. Revenue of $486 million was up 7% sequentially and 17% year-on-year. NVIDIA RTX workstations continue to be the preferred choice to power professional graphics, design, and engineering-related workloads.
Additionally, AI is emerging as a powerful demand driver, including autonomous vehicle simulation, generative AI model prototyping for productivity-related use cases, and generative AI content creation in media and entertainment. Moving to Automotive. Revenue was a record $449 million, up 30% sequentially and up 72% year-on-year. Strong growth was driven by self-driving ramps of NVIDIA Orin and robust end-market demand for NEVs. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.
Okay, moving to the rest of the P&L. GAAP gross margin was 74.6% and non-GAAP gross margin was 75%, down sequentially, primarily driven by a mix shift from H100 systems to more complex and higher-cost systems within Data Center. Sequentially, GAAP operating expenses and non-GAAP operating expenses were up 9% due to higher compute, infrastructure, and engineering development costs for new product introductions. In Q3, we returned $11.2 billion to shareholders in the form of share repurchases and cash dividends.
Now let me turn to the outlook for the fourth quarter. Total revenue is expected to be $37.5 billion, plus or minus 2%, which incorporates continued demand for Hopper architecture and the initial ramp of our Blackwell products. While demand greatly exceeds supply, we are on track to exceed our previous Blackwell revenue estimate of several billion dollars as our visibility into supply continues to increase.
On Gaming, although sell-through was strong in Q3, we expect fourth quarter revenue to decline sequentially due to supply constraints. GAAP and non-GAAP gross margins are expected to be 73% and 73.5%, respectively, plus or minus 50 basis points. Blackwell is a customizable AI infrastructure with seven different types of NVIDIA-built chips, multiple networking options, and configurations for both air- and liquid-cooled data centers. Our current focus is on ramping to strong demand, increasing system availability, and providing the optimal mix of configurations to our customers.
As Blackwell ramps, we expect gross margins to moderate to the low 70s. When fully ramped, we expect Blackwell margins to be in the mid-70s. GAAP and non-GAAP operating expenses are expected to be approximately $4.8 billion and $3.4 billion, respectively. We are a data center-scale AI infrastructure company. Our investments include building data centers for development of our hardware and software stacks and to support new introductions. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $400 million, excluding gains and losses from nonaffiliated investments.
GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We will be attending the UBS Global Technology and AI Conference on December 3rd in Scottsdale. Please join us at CES in Las Vegas, where Jensen will deliver a keynote on January 6. And we will host a Q&A session for financial analysts the next day on January 7. Our earnings call to discuss results for the fourth quarter of fiscal 2025 is scheduled for February 26, 2025.
We will now open the call for questions. Operator, can you poll for questions, please?