Colette Kress
Executive Vice President and Chief Financial Officer at NVIDIA
Thanks, Simona. We had an exceptional quarter. Record Q2 revenue of $13.51 billion was up 88% sequentially and up 101% year-on-year, and above our outlook of $11 billion.
Let me first start with Data Center. Record revenue of $10.32 billion was up 141% sequentially and up 171% year-on-year. Data Center Compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer internet companies for our HGX platform, the engine of generative AI and large language models. Major companies including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying in volume HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs.
Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI. There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs.
Our Data Center supply chain including HGX with 35,000 parts and highly complex networking has been built up over the past decade. We have also developed and qualified additional capacity and suppliers for key steps in the manufacturing process, such as CoWoS packaging. We expect supply to increase each quarter through next year.
By geography, Data Center growth was strongest in the U.S. as customers direct their capital investments to AI and accelerated computing. China demand was within the historical range of 20% to 25% of our Data Center revenue, including Compute and Networking solutions. At this time, let me take a moment to address recent reports on the potential for increased regulations on our exports to China. We believe the current regulation is achieving the intended results.
Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our Data Center GPUs, if adopted, would have an immediate material impact to our financial results. However, over the long-term, restrictions prohibiting the sale of our Data Center GPUs to China, if implemented, will result in a permanent loss of an opportunity for the U.S. industry to compete and lead in one of the world's largest markets.
Our cloud service providers drove exceptionally strong demand for HGX systems in the quarter as they undertake a generational transition to upgrade their data center infrastructure for the new era of accelerated computing and AI. The NVIDIA HGX platform is the culmination of nearly two decades of full-stack innovation across silicon, systems, interconnects, networking, software and algorithms. Instances powered by the NVIDIA H100 Tensor Core GPU are now generally available at AWS, Microsoft Azure and several GPU cloud providers, with others on the way shortly.
Consumer internet companies also drove very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram.
Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud, as well as demand for on-premise infrastructure. Whether we serve customers in the cloud or on-prem, through partners or direct, their applications can run seamlessly on NVIDIA AI Enterprise Software with access to our acceleration libraries, pre-trained models and APIs.
We announced a partnership with Snowflake to provide enterprises with an accelerated path to create customized generative AI applications using their own proprietary data, all securely within the Snowflake Data Cloud. With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search and summarization, right from the Snowflake Data Cloud.
Virtually every industry can benefit from generative AI. For example, AI copilots, such as those just announced by Microsoft, can boost the productivity of over a billion office workers and tens of millions of software engineers. Millions of professionals in legal services, sales, customer support and education will be able to leverage AI assistants trained in their field. AI copilots and assistants are set to create new multi-hundred-billion-dollar market opportunities for our customers.
We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world's largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation. WPP designers can create images from text prompts while leveraging responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images, using NVIDIA Picasso, a foundry for custom generative AI models for visual design.
Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI. We partnered with ServiceNow and Accenture to launch the AI Lighthouse program, fast-tracking the development of enterprise AI capabilities. AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and Accenture consulting and deployment services.
We are also collaborating with Hugging Face to simplify the creation of new and custom AI models for enterprises. Hugging Face will offer a new service for enterprises to train and tune advanced AI models, powered by NVIDIA DGX Cloud.
And just yesterday, VMware and NVIDIA announced a major new enterprise offering called VMware Private AI Foundation with NVIDIA, a fully integrated platform featuring AI software and accelerated computing from NVIDIA with multi-cloud software for enterprises running VMware. VMware's hundreds of thousands of enterprise customers will have access to the infrastructure, AI and cloud management software needed to customize models and run generative AI applications such as intelligent chatbots, assistants, search and summarization.
We also announced new NVIDIA AI Enterprise-ready servers featuring the new NVIDIA L40S GPU, built for the industry-standard data center server ecosystem, and the BlueField-3 DPU data center infrastructure processor. L40S is not limited by CoWoS supply and is shipping to the world's leading server system makers. L40S is a universal data center processor designed for high-volume data centers to scale out and accelerate the most compute-intensive applications, including AI training and inference, 3D design, visualization, video processing and NVIDIA Omniverse industrial digitalization.
NVIDIA AI Enterprise-ready servers are fully optimized for VMware Cloud Foundation and VMware Private AI Foundation. Nearly 100 configurations of NVIDIA AI Enterprise-ready servers will soon be available from the world's leading enterprise IT computing companies, including Dell, HPE and Lenovo.
The GH200 Grace Hopper Superchip, which combines our Arm-based Grace CPU with the Hopper GPU, entered full production and will be available this quarter in OEM servers. It is also shipping to multiple supercomputing customers, including Los Alamos National Labs and the Swiss National Supercomputing Centre.
And NVIDIA and SoftBank are collaborating on a platform based on GH200 for generative AI and 5G/6G applications. The second-generation version of our Grace Hopper Superchip with the latest HBM3e memory will be available in Q2 of calendar 2024. We announced the DGX GH200, a new class of large-memory AI supercomputer for giant AI language models, recommender systems and data analytics. This is the first use of the new NVIDIA NVLink Switch System, enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just eight GPUs over NVLink. DGX GH200 systems are expected to be available by the end of the year, with Google Cloud, Meta and Microsoft among the first to gain access.
Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems. Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI. For multi-billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of millions and pays for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners.
For Ethernet-based cloud data centers that seek to optimize their AI performance, we announced NVIDIA Spectrum-X, an accelerated networking platform designed to optimize Ethernet for AI workloads. Spectrum-X couples the Spectrum-4 Ethernet switch with the BlueField-3 DPU, achieving 1.5 times better overall AI performance and power efficiency versus traditional Ethernet. BlueField-3 DPU is a major success. It is in qualification with major OEMs and ramping across multiple CSPs and consumer internet companies.
Now moving to gaming. Gaming revenue of $2.49 billion was up 11% sequentially and 22% year-on-year. Growth was fueled by GeForce RTX 40 Series GPUs for laptops and desktops. End customer demand was solid and consistent with seasonality. We believe global end-demand has returned to growth after last year's slowdown.
We have a large upgrade opportunity ahead of us. Just 47% of our installed base has upgraded to RTX, and about 20% have a GPU with an RTX 3060 or higher performance. Laptop GPUs posted strong growth in the key back-to-school season, led by RTX 4060 GPUs. NVIDIA's GPU-powered laptops have gained in popularity, and their shipments are now outpacing desktop GPUs in several regions around the world. This is likely to shift the seasonality of our overall gaming revenue a bit, with Q2 and Q3 as the stronger quarters of the year, reflecting the back-to-school and holiday build schedules for laptops.
In desktop, we launched the GeForce RTX 4060 and the GeForce RTX 4060 Ti GPUs, bringing the Ada Lovelace architecture down to price points as low as $299. The RTX and DLSS game ecosystem continues to expand, with five new games added to DLSS support, including blockbusters. There are now over 330 RTX-accelerated games and apps.
We are bringing generative AI to games. At COMPUTEX, we announced NVIDIA Avatar Cloud Engine or ACE for games, a custom AI model foundry service. Developers can use this service to bring intelligence to non-player characters. It harnesses a number of NVIDIA Omniverse and AI technologies, including NeMo, Riva and Audio2Face.
Now moving to Professional Visualization. Revenue of $375 million was up 28% sequentially and down 24% year-on-year. The Ada architecture ramp drove strong growth in Q2, rolling out initially in laptop workstations, with a refresh of desktop workstations coming in Q3. These will include powerful new RTX systems with up to four NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory. They can be configured with NVIDIA AI Enterprise or NVIDIA Omniverse inside. We also announced three new desktop workstation GPUs based on the Ada generation: the NVIDIA RTX 5000, 4500 and 4000, offering up to 2 times the RT core throughput and up to 2 times faster AI training performance compared to the previous generation.
In addition to traditional workloads such as 3D design and content creation, new workloads in generative AI, large language model development and data science are expanding the opportunity in pro visualization for our RTX technology. One of the key themes in Jensen's keynote at SIGGRAPH earlier this month was the convergence of graphics and AI. This is where NVIDIA Omniverse is positioned.
Omniverse is OpenUSD's native platform. OpenUSD is a universal interchange format that is quickly becoming the standard for the 3D world, much like HTML is the universal language for the 2D web. Together, Adobe, Apple, Autodesk, Pixar and NVIDIA formed the Alliance for OpenUSD. Our mission is to accelerate OpenUSD's development and adoption. We announced new and upcoming Omniverse Cloud APIs, including RunUSD and ChatUSD, to bring generative AI to OpenUSD workloads.
Moving to automotive. Revenue was $253 million, down 15% sequentially and up 15% year-on-year. Solid year-on-year growth was driven by the ramp of self-driving platforms based on the NVIDIA DRIVE Orin SoC with a number of new energy vehicle makers. The sequential decline reflects lower overall automotive demand, particularly in China. We announced a partnership with MediaTek to bring drivers and passengers new experiences inside the car. MediaTek will develop automotive SoCs and integrate a new product line of NVIDIA's GPU chiplets. The partnership covers a wide range of vehicle segments from luxury to entry level.
Moving to the rest of the P&L. GAAP gross margins expanded to 70.1% and non-GAAP gross margin to 71.2%, driven by higher Data Center sales. Our Data Center products include a significant amount of software and complexity, which is also helping drive our gross margin. Sequential GAAP operating expenses were up 6% and non-GAAP operating expenses were up 5%, primarily reflecting increased compensation and benefits.
We returned approximately $3.4 billion to shareholders in the form of share repurchases and cash dividends. Our Board of Directors has just approved an additional $25 billion in stock repurchases to add to our remaining $4 billion of authorization as of the end of Q2.
Let me turn to the outlook for the third quarter of fiscal 2024. Demand for our Data Center platform for AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity.
Additionally, the new L40S GPU will help address the growing demand for many types of workloads from cloud to enterprise. For Q3, total revenue is expected to be $16 billion, plus or minus 2%. We expect sequential growth to be driven largely by Data Center with gaming and ProViz also contributing.
GAAP and non-GAAP gross margins are expected to be 71.5% and 72.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.95 billion and $2 billion, respectively.
GAAP and non-GAAP other income and expenses are expected to be an income of approximately $100 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.
In closing, let me highlight some upcoming events for the financial community. We will attend the Jefferies Tech Summit on August 30th in Chicago, the Goldman Sachs Tech Conference on September 5th in San Francisco, the Evercore Semiconductor Conference on September 6th as well as the Citi Tech Conference on September 7th, both in New York, and the BofA Virtual AI Conference on September 11th. Our earnings call to discuss the results of our third quarter of fiscal 2024 is scheduled for Tuesday, November 21st.
Operator, we will now open the call for questions. Could you please poll for questions for us? Thank you.