Colette Kress
Executive Vice President and Chief Financial Officer at NVIDIA
Thanks, Stuart. Q4 was another record quarter. Revenue of $39.3 billion was up 12% sequentially, up 78% year-on-year, and above our outlook of $37.5 billion. For fiscal 2025, revenue was $130.5 billion, up 114% from the prior year. Let's start with data center. Data center revenue for fiscal 2025 was $115.2 billion, more than doubling from the prior year. In the fourth quarter, data center revenue of $35.6 billion was a record, up 16% sequentially and 93% year-on-year, as the Blackwell ramp commenced and Hopper 200 continued sequential growth.
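As a back-of-envelope illustration only (these implied figures are derived from the percentages above, not separately reported here), the quoted growth rates can be checked against the prior-period revenue bases they imply:

```python
# Illustrative back-of-envelope check, derived only from the growth rates quoted above.
# The implied prior-period figures are approximations, not separately reported numbers.

def implied_base(current_billion: float, growth_pct: float) -> float:
    """Return the prior-period revenue implied by a quoted growth percentage."""
    return current_billion / (1 + growth_pct / 100)

q4_total = 39.3   # Q4 total revenue, $B
q4_dc = 35.6      # Q4 data center revenue, $B

print(f"Implied prior-quarter total revenue:          ~${implied_base(q4_total, 12):.1f}B")
print(f"Implied year-ago-quarter total revenue:       ~${implied_base(q4_total, 78):.1f}B")
print(f"Implied prior-quarter data center revenue:    ~${implied_base(q4_dc, 16):.1f}B")
print(f"Implied year-ago-quarter data center revenue: ~${implied_base(q4_dc, 93):.1f}B")
```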
In Q4, Blackwell sales exceeded our expectations. We delivered $11 billion of Blackwell revenue to meet strong demand. This is the fastest product ramp in our company's history, unprecedented in its speed and scale. Blackwell production is in full gear across multiple configurations, and we are increasing supply quickly to meet expanding customer adoption. Our Q4 data center compute revenue jumped 18% sequentially and over 2x year-on-year. Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities.
With Blackwell, it will be common for these clusters to start with 100,000 GPUs or more. Shipments have already started for multiple infrastructures of this size. Post-training and model customization are fueling demand for NVIDIA infrastructure and software as developers and enterprises leverage techniques such as fine-tuning, reinforcement learning and distillation to tailor models for domain-specific use cases. Hugging Face alone hosts over 90,000 derivatives created from the Llama foundation model. The scale of post-training and model customization is massive and can collectively demand orders of magnitude more compute than pre-training.
Our inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI o3, DeepSeek-R1 and Grok 3. Long-thinking reasoning AI can require 100x more compute per task compared to one-shot inference. Blackwell was architected for reasoning AI inference. Blackwell supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus Hopper 100. It is revolutionary. Its Transformer Engine is built for LLM and mixture-of-experts inference, and its NVLink domain delivers 14 times the throughput of PCIe Gen 5, ensuring the response time, throughput and cost efficiency needed to tackle the growing complexity of inference at scale.
Companies across industries are tapping into NVIDIA's full-stack inference platform to boost performance and slash costs. One customer tripled inference throughput and cut costs by 66% using NVIDIA TensorRT for its screenshot feature. Another sees 435 million monthly queries and reduced its inference costs 3x with NVIDIA Triton Inference Server and TensorRT-LLM. Microsoft Bing achieved a 5x speedup and major TCO savings for visual search across billions of images. Blackwell has great demand for inference. Many of the early GB200 deployments are earmarked for inference, a first for a new architecture. Blackwell addresses the entire AI market, from pre-training and post-training to inference, across cloud, on-premise and enterprise.
CUDA's programmable architecture accelerates every AI model and over 4,400 applications, protecting large infrastructure investments against obsolescence. Our pace of innovation is unmatched: we have driven a 200x reduction in inference costs in just the last two years, delivering the highest ROI, and full-stack optimizations for NVIDIA and our large ecosystem, including 5.9 million developers, continuously improve our customers' economics. In Q4, large CSPs represented about half of our data center revenue, and these sales increased nearly 2x year-on-year.
Large CSPs were some of the first to stand up Blackwell, with Azure, GCP, AWS and OCI bringing GB200 systems to cloud regions around the world to meet surging customer demand for AI. Regional clouds hosting NVIDIA GPUs increased as a percentage of data center revenue, reflecting continued AI factory build-outs globally and rapidly rising demand for AI reasoning models and agents. We've launched a 100,000 GB200 cluster-based instance with NVLink Switch and Quantum-2 InfiniBand. Consumer internet revenue grew 3x year-on-year, driven by an expanding set of generative AI and deep learning use cases.
These include recommender systems, vision-language understanding, synthetic data generation, search and agentic AI. For example, xAI is adopting GB200 to train and inference its next generation of AI models. Meta's cutting-edge Andromeda advertising engine runs on NVIDIA's Grace Hopper Superchip, serving vast quantities of ads across Instagram, Facebook and other applications. Andromeda harnesses Grace Hopper's fast interconnect and large memory to boost inference throughput by 3x, enhance ad personalization and deliver meaningful jumps in monetization and ROI.
Enterprise revenue increased nearly 2x year-on-year on accelerating demand for model fine-tuning, RAG and agentic AI workflows, and GPU-accelerated data processing. We introduced the NVIDIA Llama Nemotron model family NIMs to help developers create and deploy AI agents across a range of applications, including customer support, fraud detection, and product supply-chain and inventory management. Leading AI agent platform providers, including SAP and ServiceNow, are among the first to use the new models. Healthcare leaders IQVIA, Illumina and Mayo Clinic, as well as Arc Institute, are using NVIDIA AI to speed drug discovery, enhance genomic research and pioneer advanced healthcare services with generative and agentic AI.
As AI expands beyond the digital world, NVIDIA infrastructure and software platforms are increasingly being adopted to power robotics and physical AI development. One of the early and largest robotics applications is autonomous vehicles, where virtually every AV company is developing on NVIDIA in the data center, in the car, or both. NVIDIA's automotive vertical revenue is expected to grow to approximately $5 billion this fiscal year. At CES, Hyundai Motor Group announced it is adopting NVIDIA technologies to accelerate AV and robotics development and smart factory initiatives.
Vision transformers, self-supervised learning, multimodal sensor fusion and high-fidelity simulation are driving breakthroughs in AV development. At CES, we announced the NVIDIA Cosmos World Foundation Model platform. Just as language foundation models have revolutionized language AI, Cosmos is a physical AI foundation model built to revolutionize robotics. Leading robotics and automotive companies, including ride-sharing giant Uber, are among the first to adopt the platform. From a geographic perspective, sequential growth in our data center revenue was strongest in the US, driven by the initial ramp of Blackwell. Countries across the globe are building their AI ecosystems, and demand for compute infrastructure is surging.
France's EUR200 billion AI investment and the EU's EUR200 billion InvestAI initiative offer a glimpse into the build-out set to redefine global AI infrastructure in the coming years. As a percentage of total data center revenue, data center sales in China remained well below levels seen at the onset of export controls. Absent any change in regulations, we believe that China shipments will remain roughly at the current percentage. The market in China for data center solutions remains very competitive. We will continue to comply with export controls while serving our customers. Networking revenue declined 3% sequentially.
Our networking attached to GPU compute systems is robust at over 75%. We are transitioning from small NVLink 8 systems with InfiniBand to large NVLink 72 systems with Spectrum-X. Spectrum-X and NVLink Switch revenue has increased and represents a major new growth vector. We expect networking to return to growth in Q1. AI requires a new class of networking. NVIDIA offers NVLink switch systems, InfiniBand for HPC supercomputers, and Spectrum-X for Ethernet environments. Spectrum-X enhances Ethernet for AI computing. OCI, CoreWeave and others are building large AI factories with Spectrum-X. Cisco announced it is integrating Spectrum-X into its networking portfolio to help enterprises build AI infrastructure.
With its large enterprise footprint and global reach, Cisco will bring NVIDIA Ethernet to every industry. Now moving to gaming and AI PCs. Gaming revenue of $2.5 billion decreased 22% sequentially and 11% year-on-year. Full-year revenue of $11.4 billion increased 9% year-on-year, and demand remained strong throughout the holiday. However, Q4 shipments were impacted by supply constraints; we expect strong sequential growth in Q1 as supply increases. The new GeForce RTX 50 Series desktop and laptop GPUs are here. Built for gamers, creators and developers, they fuse AI and graphics, redefining visual computing.
Powered by the Blackwell architecture, fifth-generation Tensor Cores and fourth-generation RT Cores, and featuring 3,400 AI TOPS, these GPUs deliver a 2x performance leap and new AI-driven rendering, including neural shaders, digital human technologies, geometry and lighting. The new DLSS 4 boosts frame rates up to 8x with AI-driven frame generation, turning one rendered frame into three. It also features the industry's first real-time application of transformer models, packing 2x more parameters and 4x the compute for unprecedented visual fidelity.
We also announced a wave of GeForce Blackwell laptop GPUs with new NVIDIA Max-Q technology that extends battery life by up to an incredible 40%. These laptops will be available starting in March from the world's top manufacturers. Moving to our professional visualization business. Revenue of $511 million was up 5% sequentially and 10% year-on-year. Full-year revenue of $1.9 billion increased 21% year-on-year. Key industry verticals driving demand include automotive and healthcare.
NVIDIA technologies and generative AI are reshaping design, engineering and simulation workloads. These technologies are increasingly leveraged in leading software platforms from Cadence and Siemens, fueling demand for NVIDIA RTX workstations. Now moving to automotive. Revenue was a record $570 million, up 27% sequentially and up 103% year-on-year. Full-year revenue of $1.7 billion increased 55% year-on-year. Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotaxis. At CES, we announced Toyota, the world's largest automaker, will build its next-generation vehicles on NVIDIA, running the safety-certified NVIDIA DriveOS. We announced Aurora and Continental will deploy driverless trucks at scale powered by NVIDIA DRIVE Thor.
Finally, our end-to-end autonomous vehicle platform, NVIDIA DRIVE Hyperion, has passed industry safety assessments by TÜV SÜD and TÜV Rheinland for automotive-grade safety and cybersecurity. NVIDIA is the first AV platform to receive a comprehensive set of third-party assessments. Okay, moving to the rest of the P&L. GAAP gross margin was 73% and non-GAAP gross margin was 73.5%, down sequentially as expected with our first deliveries of the Blackwell architecture. As discussed last quarter, Blackwell is a customizable AI infrastructure with several different types of chips, multiple networking options, and both air- and liquid-cooled data center configurations.
We exceeded our expectations in Q4 in ramping Blackwell, increasing system availability and providing several configurations to our customers. As Blackwell ramps, we expect gross margins to be in the low 70s. Initially, we are focused on expediting the manufacturing of Blackwell systems to meet strong customer demand as customers race to build out Blackwell infrastructure. Once Blackwell is fully ramped, we can improve the cost, and gross margin will improve and return to the mid-70s late this fiscal year. Sequentially, GAAP operating expenses were up 9% and non-GAAP operating expenses were up 11%, reflecting higher engineering development costs and higher compute and infrastructure costs for new product introductions. In Q4, we returned $8.1 billion to shareholders in the form of share repurchases and cash dividends.
Let me turn to the outlook for the first quarter. Total revenue is expected to be $43 billion, plus or minus 2%. With continuing strong demand, we expect a significant ramp of Blackwell in Q1. We expect sequential growth in both data center and gaming. Within data center, we expect sequential growth from both compute and networking. GAAP and non-GAAP gross margins are expected to be 70.6% and 71%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $5.2 billion and $3.6 billion, respectively.
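For readers who want the bands spelled out, here is a minimal sketch (illustrative only) of the ranges implied by the stated midpoints and plus-or-minus tolerances:

```python
# Illustrative only: ranges implied by the Q1 outlook midpoints and stated tolerances.

revenue_mid_b, revenue_band = 43.0, 0.02                 # $43B, plus or minus 2%
gaap_gm, non_gaap_gm, gm_band_pts = 70.6, 71.0, 0.5      # gross margin %, plus or minus 50 bps

print(f"Revenue range:               ${revenue_mid_b * (1 - revenue_band):.2f}B to ${revenue_mid_b * (1 + revenue_band):.2f}B")
print(f"GAAP gross margin range:     {gaap_gm - gm_band_pts:.1f}% to {gaap_gm + gm_band_pts:.1f}%")
print(f"Non-GAAP gross margin range: {non_gaap_gm - gm_band_pts:.1f}% to {non_gaap_gm + gm_band_pts:.1f}%")
```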
We expect full-year fiscal 2026 operating expense growth to be in the mid-30s on a percentage basis. GAAP and non-GAAP other income and expense are expected to be an income of approximately $400 million, excluding gains and losses from non-marketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website, including a new financial information AI agent.
In closing, let me highlight upcoming events for the financial community. We will be at the TD Cowen Healthcare Conference in Boston on March 3 and at the Morgan Stanley Technology, Media and Telecom Conference in San Francisco on March 5. Please join us for our annual GTC conference starting Monday, March 17, in San Jose, California. Jensen will deliver a news-packed keynote on March 18, and we will host a Q&A session for our financial analysts the next day, March 19. We look forward to seeing you at these events. Our earnings call to discuss the results for our first quarter of fiscal 2026 is scheduled for May 28, 2025.
Operator, we are now going to open up the call to questions. If you could start that, that would be great.