Dr. Lisa Su
Chair and Chief Executive Officer at Advanced Micro Devices
Thank you, Mitch, and good afternoon to all those listening today. We delivered strong top and bottom-line growth in the third quarter with revenue coming in above expectations, driven by record Instinct and EPYC product sales and robust demand for our Ryzen PC processors. Third quarter revenue increased 18% year-over-year to a record $6.8 billion as significantly higher data center and client processor sales more than offset declines in gaming and embedded product sales.
We expanded gross margin by 2.5 percentage points and increased earnings per share by 31% year-over-year as data center segment revenue more than doubled. Turning to the segments. Data center segment revenue increased 122% to a record $3.5 billion. We believe we gained server CPU share in the quarter as enterprise wins accelerated, cloud providers expanded their use of EPYC CPUs across their infrastructure, and we began the initial ramp of fifth-gen EPYC processors.
EPYC has become the CPU of choice for the modern data center and our multi-generation product portfolio delivers leadership performance and significant TCO advantages across virtually every enterprise and cloud workload. In cloud, EPYC CPUs are deployed at scale to power many of the most important services, including Office 365, Facebook, Teams, Salesforce, SAP, Zoom, Uber, Netflix, and many more. Meta alone has deployed more than 1.5 million EPYC CPUs across their global data center fleet to power their social media platforms.
Cloudflare selected Genoa-X processors with our industry-leading 3D chiplet stacking technology to power their next-generation servers that support twice as many requests per second and deliver 60% higher performance per watt versus their prior generation. Public cloud instances increased 20% year-over-year to more than 950 as Microsoft, AWS and others launched or expanded their EPYC processor-powered offerings in the quarter.
EPYC adoption with enterprise customers also grew in the quarter, highlighted by wins with Adobe, Boeing, Micron, Nestle, Slack, Synopsys, Tata and others. In the enterprise, sales grew by a strong double-digit percentage year-over-year for the fifth straight quarter as EPYC CPU adoption accelerated and sell-through momentum grew. Dell, HPE, Lenovo and others have expanded the number of fourth-gen EPYC platforms they offer by 50% in the last year.
There are now more than 200 different EPYC solutions available that are optimized for a broad range of enterprise and edge workloads. We are building strong momentum with large customers, highlighted in the third quarter by wins with large technology, energy, financial services and automotive companies, including Airbus, Daimler Truck, FedEx, HSBC, Siemens, Walgreens and others.
We launched our next-generation Turin family earlier this month that delivers absolute performance and TCO leadership across both enterprise scale up and cloud-native scale-out workloads. Turin has already set more than 130 performance records for virtualization, database, AI, business applications and energy efficiency with the full EPYC portfolio accounting for more than 500 performance world records. More than 130 fifth-gen EPYC enterprise platforms are in development from all the leading server OEMs and ODMs.
These new servers complement existing fourth-gen EPYC platforms providing a top to bottom stack of platforms optimized for a broad range of business applications. In Cloud, Google and OCI announced plans to launch fifth-gen EPYC instances early next year, and we expect broad adoption with our largest cloud customers based on the significant performance and efficiency advantages of Turin.
As an example, Oracle's Turin instances delivered 35% higher performance per core, 33% faster memory speeds and double the networking bandwidth, delivering a level of compute performance and capability that is only possible with EPYC CPUs. Looking ahead, we are very well-positioned for continued growth in share gains based on the strength of our broad EPYC portfolio and the momentum we have built with cloud and enterprise customers.
We also took a major step in the quarter to advance the x86 architecture, forming an ecosystem advisory group with Intel, several industry luminaries, and the largest cloud, PC and enterprise leaders to accelerate innovation by driving consistency and compatibility across both the x86 instruction set and architectural interfaces, ensuring we evolve x86 as a compute platform of choice for developers and customers. Turning to our Data Center AI business. Data Center GPU revenue ramped as MI300X adoption expanded with cloud, OEM and AI customers.
Microsoft and Meta expanded their use of MI300X accelerators to power their internal workloads in the quarter. Microsoft is now using MI300X broadly for multiple Copilot services powered by the family of GPT-4 models. Meta announced they have optimized and broadly deployed MI300X to power their inferencing infrastructure at scale, including using MI300X exclusively to serve all live traffic for their most demanding Llama 405B frontier model.
We are also working closely with Meta to expand their instinct deployments to other workloads where MI300X offers TCO advantages, including training. MI300X public cloud instance availability expanded in the quarter with Microsoft, Oracle Cloud, and multiple AI specialized cloud providers now offering Instinct instances with leadership performance and TCO for many of the most widely used models.
Instinct cloud instance adoption is strong with multiple start-ups and industry leaders adopting MI300 instances to power their models and services, including Essential AI, Fireworks AI, Luma AI, and Databricks. On the AI software front, since launching MI300 10 months ago, we have expanded functionality at every layer of the ROCm stack and increased the number of models that run out of the box on Instinct accelerators to more than one million, enabling customers to get up and running as fast as possible with maximum out-of-the-box performance.
With the release of ROCm 6.2 last quarter, MI300X inferencing performance has improved 2.4 times since launch, and training performance has increased 80%. We are working closely with a growing number of marquee cloud and enterprise customers to fine-tune their specific inferencing workloads for MI300, with many customers seeing 30% higher performance compared to competitive offerings, and we continue to expand our work with the open source community, broadening support for key frameworks like JAX, libraries like vLLM, and hardware-agnostic compilers like Triton.
At our Advancing AI event earlier this month, we were excited to be joined by the creators and leaders of some of the most important AI software technologies who have added foundational support for ROCm into Triton, the Llama Stack, SGLang, vLLM, and TensorFlow and are working to enable broader open source community work with Instinct platforms. With this growing support from the broader AI software ecosystem and the significant advances we have made in our software stack, ROCm now provides AI developers with a truly open software alternative that has been deployed and validated at scale.
To expand our AI systems capabilities, we announced a definitive agreement to acquire ZT Systems, one of the leading providers of AI infrastructure to the world's largest hyperscale computing companies. The ZT team complements our silicon and software capabilities with critical systems expertise needed to deliver rack and cluster level solutions. With ZT, we will be able to design and validate our next-gen AI silicon and systems in parallel, greatly accelerating time to deploy Instinct accelerators at data center scale.
Customer feedback has been very positive, as the ZT acquisition enables hyperscale customers to rapidly deploy AMD AI infrastructure at scale and provides OEMs and ODMs with optimized board and module designs for a wide range of differentiated enterprise solutions. On the regulatory front, we made good progress as we recently passed the HSR waiting period required for U.S. approval. We remain on track to close the acquisition in the first half of 2025.
As a reminder, we plan to divest ZT's industry-leading U.S.-based data center infrastructure manufacturing business at the close of the transaction and are pleased that we have received significant interest from a number of parties to-date. Looking ahead, we launched our next-gen MI325X GPU earlier this month that extends our memory capacity and bandwidth advantages and delivers up to 20% higher inferencing performance compared to H200 and competitive training performance.
Customer and partner interest for MI325X is high. Production shipments are planned to start this quarter with widespread system availability from Dell, HP, Lenovo, Supermicro and others starting in the first quarter of 2025. Longer term, we have successfully accelerated our product development pace to deliver an annual cadence of new Instinct products. Our next-gen MI350-series silicon is looking very good and is on track to launch in the second half of 2025 with the largest generational increase in AI performance we have ever delivered.
Development on our MI400 series based on the CDNA Next architecture is also progressing very well towards a 2026 launch. We have built significant momentum across our data center AI business with deployments increasing across an expanding set of cloud, enterprise and AI customers. As a result, we now expect data center GPU revenue to exceed $5 billion in 2024, up from $4.5 billion we guided in July and our expectation of $2 billion when we started the year.
Turning to our Client segment. Revenue was $1.9 billion, an increase of 29% year-over-year, driven by strong demand for our latest generation Zen 5 notebook and desktop processors. Desktop channel sales grew by a significant double-digit percentage led by the launch of our Ryzen 9000 series processors that deliver leadership productivity, gaming and content creation performance.
We are seeing strength across our Ryzen desktop portfolio and are on track to launch our next-gen Ryzen 9000 X3D processors in November with leadership gaming performance. In mobile, Ryzen AI 300 Series sales ramped significantly from the prior quarter as Acer, HP, Lenovo, Asus and others announced new consumer and commercial notebooks with leadership compute and AI performance.
We made good progress expanding our presence in the commercial PC market in the quarter, closing multiple large deals with AstraZeneca, Bayer, Mazda, Shell, Volkswagen and other enterprise customers. We also launched our Ryzen AI PRO 300 Series family, the first CPUs with enterprise-class security, manageability and AI capabilities for Copilot+ PCs.
HP and Lenovo are on track to more than triple the number of Ryzen AI PRO platforms they offer in 2024, and we expect to have more than 100 Ryzen AI PRO commercial platforms in market next year, positioning us well for share gains as businesses refresh the hundreds of millions of Windows 10 PCs that will no longer receive Microsoft technical support starting in 2025.
Now turning to our Gaming segment. Revenue declined 69% year-over-year to $462 million. Semi-custom sales declined as Microsoft and Sony reduced channel inventory. Sony announced the PS5 Pro with significant increases in graphics and ray tracing performance and AI-driven upscaling, featuring a new AMD semi-custom SoC that extends our multigenerational partnership.
In gaming graphics, revenue declined year-over-year as we prepare for a transition to our next-gen Radeon GPUs based on our RDNA 4 architecture. In addition to a strong increase in gaming performance, RDNA 4 delivers significantly higher ray tracing performance and adds new AI capabilities. We are on track to launch the first RDNA 4 GPUs in early 2025. Turning to our Embedded segment.
Third quarter revenue decreased 25% year-over-year to $927 million. Embedded demand continues recovering gradually, led by strength in test and emulation offset by ongoing softness in the industrial market. Momentum continues building for our differentiated Versal family of adaptive SoCs, led by strong demand for the Versal Premium VP1902, the world's largest adaptive SoC and FPGA, which is powering multiple platforms for all three of the largest EDA vendors.
Our Versal portfolio is also being adopted broadly across multiple aerospace customers. As one example, SpaceX recently launched their latest generation broadband satellites powered by Versal AI Core Adaptive SoCs. To build on this momentum, we taped out Telluride last quarter, the first product in our second-gen Versal family that delivers up to 10x more compute and enables AI application acceleration on a single chip.
Design win momentum is very strong across our portfolio, tracking to grow more than 20% year-over-year in 2024 and positioning us well to grow our embedded business faster than the overall market in the coming years. In summary, the business accelerated in the third quarter, and we expect strong demand for our Instinct, EPYC, and Ryzen processors to result in another quarter of significant year-over-year growth.
Taking a step back, this month marks my 10th anniversary as AMD's CEO. In the last 10 years, we have successfully completed multiple arcs. First, by turning the company around and setting the solid financial and operational foundation required for sustained growth. And then by transforming AMD into the high-performance and adaptive computing leader. While I'm incredibly proud of what we've accomplished, I'm even more excited about the unprecedented growth opportunities in front of us.
Looking out over the next several years, we see significant growth opportunities across our data center, client, and embedded businesses, driven by the nearly insatiable demand for more compute. Each of these opportunities is amplified exponentially by the rapid adoption of AI, which is enabling new experiences that will make high-performance computing an even more essential part of our daily lives.
In the data center alone, we expect the AI accelerator TAM will grow at more than 60% annually to $500 billion in 2028. To put that in context, this is roughly equivalent to annual sales for the entire semiconductor industry in 2023. Beyond the data center, we are adding leadership AI capabilities across our product portfolio and working deeply with a broad ecosystem of partners to deliver differentiated AI solutions at scale. This is an incredibly exciting time for AMD, as the breadth of our technology and product portfolios, combined with our deep customer relationships and the diversity of markets we address, provides us with a unique opportunity as we execute our next arc and make AMD the end-to-end AI leader.
Now, I'd like to turn the call over to Jean to provide some additional color on our third quarter results. Jean?