Colette Kress
Executive Vice President and Chief Financial Officer at NVIDIA
Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year-on-year and well above our outlook of $28 billion.
Starting with Data Center. Data Center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year-on-year, driven by strong demand for NVIDIA Hopper GPU computing and our networking platforms. Compute revenue grew more than 2.5 times and networking revenue grew more than two times from last year. Cloud service providers represented roughly 45% of our Data Center revenue, and more than 50% stemmed from consumer Internet and enterprise companies. Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our Data Center growth include Generative AI model training and inferencing; video, image and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing.
Next-generation models will require 10 times to 20 times more compute to train with significantly more data, and this trend is expected to continue. Over the trailing four quarters, we estimate that inference drove more than 40% of our Data Center revenue. CSPs, consumer Internet companies and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer Internet services and tens of thousands of companies and start-ups building Generative AI applications for consumers, advertising, education, enterprise, healthcare and robotics. Developers value NVIDIA's rich ecosystem and its availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand.
The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100. Our Data Center revenue in China grew sequentially in Q2 and is a significant contributor to our Data Center revenue. As a percentage of total Data Center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward.
The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms winning gold medals on all tasks. At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners can quickly and cost-effectively build more than 100 Blackwell-based system designs. The NVIDIA Blackwell platform brings together multiple GPUs, CPUs, DPUs, NVLink, NVLink Switch and networking chips, systems and NVIDIA CUDA software to power the next generation of AI across use cases, industries and countries.
The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU and delivers up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real-time. Hopper demand is strong and Blackwell is widely sampling. We executed a change to the Blackwell GPU mask to improve production yields. The Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year.
Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially, with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6 times the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multi-billion dollar product line within a year.
Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year.
The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chat-bots, Generative AI co-pilots and agents to build new monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA Generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule co-pilots. Cohesity is using NVIDIA to build their Generative AI agent and lower Generative AI development costs. Snowflake, which serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build co-pilots. And lastly, Wistron is using NVIDIA AI Omniverse to reduce end-to-end cycle times for their factories by 50%.
Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billions of dollars in revenue across on-prem and cloud consumption and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multi-billion dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing and drug discovery. During the quarter, we announced a new NVIDIA AI foundry service to supercharge Generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI. For the first time, companies can leverage the capabilities of an open-source, frontier-level model to develop customized AI applications and encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models, both for its own use and to assist clients seeking to deploy Generative AI applications.
NVIDIA NIMs accelerate and simplify model deployment. Companies across healthcare, energy, financial services, retail, transportation and telecommunications are adopting NIMs, including Aramco, Lowe's and Uber. AT&T realized 70% cost savings and an eight-times latency reduction after moving to NIMs for Generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise Generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workflows for customer service, computer-aided drug discovery and enterprise retrieval augmented generation. Our system integrators, technology solution providers and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS and support revenue to approach a $2 billion annual run-rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth.
Moving to Gaming and AI PCs. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year-on-year. We saw sequential growth in console, notebook and desktop revenue; demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with Generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model, Minitron 4B, optimized for on-device inference. The NVIDIA Gaming ecosystem continues to grow, with recently added RTX and DLSS titles including Indiana Jones and the Great Circle, Dune Awakening and Dragon Age: The Veilguard. The GeForce NOW library continues to expand, with a total catalog size of over 2,000 titles, the most content of any cloud gaming service.
Moving to Pro Visualization. Revenue of $454 million was up 6% sequentially and 20% year-on-year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. And several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories.
We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate Generative AI co-pilots and agents into USD workflows, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its Generative AI-enabled content creation pipeline for customers, such as The Coca-Cola Company.
Moving to Automotive and Robotics. Revenue was $346 million, up 5% sequentially and up 37% year-on-year. Year-on-year growth was driven by new customer ramps in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids and mobile robots.
Now moving to the rest of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within Data Center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion. In Q2, we utilized cash of $7.4 billion towards shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our Board of Directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2.
Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect the Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points. As our Data Center mix continues to shift to new products, we expect this gross margin trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range.
GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid- to upper-40% range as we work on developing our next-generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly-held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.
We are now going to open the call for questions. Operator, would you please poll for questions?