Lisa Su
Chair and Chief Executive Officer at Advanced Micro Devices
Thank you. Thank you, Matt, and good afternoon to all those listening today. 2024 was a transformative year for AMD. We successfully established our multi-billion dollar data center AI franchise, launched a broad set of leadership products and gained significant server and PC market share. As a result, we delivered record annual revenue, grew net income 26% for the year and more than doubled free cash flow from 2023. Importantly, the data center segment contributed roughly 50% of annual revenue as Instinct and EPYC processor adoption expanded significantly with cloud, enterprise and supercomputing customers.
Looking at our financial results, fourth-quarter revenue increased 24% year-over-year to a record $7.7 billion, led by record quarterly data center and client segment revenue, both of which grew by a significant double-digit percentage. On a full-year basis, annual revenue grew 14% to $25.8 billion as data center revenue nearly doubled and client segment revenue grew 52%, more than offsetting declines in our gaming and embedded segments. Turning to the segments. Data center segment revenue increased 69% year-over-year to a record $3.9 billion. 2024 marked another major inflection point for our server business as share gains accelerated, driven by the ramp of fifth-gen EPYC Turin and strong double-digit percentage year-over-year growth in fourth-gen EPYC sales.
In cloud, we exited 2024 with well over 50% share at the majority of our largest hyperscale customers. Hyperscaler demand for EPYC CPUs was very strong, driven by expanded deployments powering both their internal compute infrastructure and online services. Public-cloud demand was also very strong, with the number of EPYC instances increasing 27% in 2024 to more than 1,000. AWS, Alibaba, Google, Microsoft and Tencent launched more than 100 AMD general-purpose and AI instances in the fourth quarter alone. This includes new Azure instances powered by a custom-built EPYC processor with HBM memory that delivers leadership HPC performance based on 8x higher memory bandwidth compared to competitive offerings.
We also built significant momentum with Forbes 2000 global businesses using EPYC in the cloud as enterprise customers activated more than double the number of EPYC cloud instances from the prior quarter. This capped off a strong year of growth as enterprise consumption of EPYC in the cloud nearly tripled from 2023. Turning to enterprise on-prem adoption, EPYC CPU sales grew by a strong double-digit percentage year-over-year as sell-through increased and we closed high-volume deployments with Akamai, LG, ServiceNow, Verizon, Visa and others. We are seeing growing enterprise pull based on the expanding number of EPYC platforms available and our increased go-to-market investments. Exiting 2024, there are more than 450 EPYC platforms available from the leading server OEMs and ODMs, including more than 120 Turin platforms that went into production in the fourth quarter from Cisco, Dell, HPE, Lenovo, Supermicro and others.
Looking forward, Turin is clearly the best server processor in the world with more than 540 performance records across a broad range of industry-standard benchmarks. At the same time, we are seeing sustained demand for both fourth- and third-gen EPYC processors as our consistent roadmap execution has made AMD the dependable and safe choice. As a result, we see clear growth opportunities in 2025 across both cloud and enterprise based on our full portfolio of EPYC processors optimized for leadership performance across the entire range of data center workloads and system price points. Turning to our data center AI business, 2024 was an outstanding year as we accelerated our AI hardware roadmap to deliver an annual cadence of new Instinct accelerators, expanded our ROCm software suite with significant uplifts in inferencing and training performance, built strong customer relationships with key industry leaders and delivered greater than $5 billion of data center AI revenue for the year.
Looking at the fourth quarter, MI300X production deployments expanded with our largest cloud partners. Meta exclusively used MI300X to serve their Llama 405B frontier model on Meta.ai and added Instinct GPUs to its OCP-compliant Grand Teton platform designed for deep learning recommendation models and large-scale AI inferencing workloads. Microsoft is using MI300X to power multiple GPT-4-based Copilot services and launched flagship instances that scale up to thousands of GPUs for AI training, inference and HPC workloads. IBM, DigitalOcean, Vultr and several other AI-focused CSPs have begun deploying AMD Instinct accelerators for new instances.
IBM also announced plans to enable MI300X on their watsonx AI and data platform for training and deploying enterprise-ready generative AI applications. Instinct platforms are currently being deployed across more than a dozen CSPs globally, and we expect this number to grow in 2025. For enterprise customers, more than 25 MI300 series platforms are in production with the largest OEMs and ODMs. To simplify and accelerate enterprise adoption of AMD Instinct platforms, Dell began offering MI300X as part of their AI Factory solution suite and is providing multiple ready-to-deploy containers via the Dell Enterprise Hub on Hugging Face.
HPC adoption also grew in the quarter. AMD now powers five of the 10 fastest and 15 of the 25 most energy-efficient systems in the world on the latest Top500 supercomputer list. Notably, the El Capitan system at Lawrence Livermore National Laboratory debuted as the world's fastest supercomputer, using over 44,000 MI300A APUs to deliver more than 1.7 exaflops of compute performance. Earlier this month, the High-Performance Computing Center at the University of Stuttgart launched the Hunter supercomputer, which also uses MI300A. Like El Capitan, Hunter will be used for both foundational scientific research and advanced AI projects, including training LLMs in 24 different European languages.
On the AI software front, we made significant progress across all layers of the ROCm stack in 2024. Our strategy is to establish AMD ROCm as the industry's leading open software stack for AI, providing developers with greater choice and accelerating the pace of industry innovation. More than 1 million models on Hugging Face now run out of the box on AMD, and our platforms are supported in leading frameworks like PyTorch and JAX, serving solutions like vLLM and compilers like OpenAI Triton.
We have also successfully ramped large-scale production deployments with numerous customers using ROCm, including our lead hyperscale partners. We ended the year with the release of ROCm 6.3, which included multiple performance optimizations, including support for the latest flash attention algorithm that runs up to three times faster than prior versions and the SGLang runtime that enabled day-zero support for state-of-the-art models like DeepSeek V3. As a result of these latest enhancements, MI300X inferencing performance has increased 2.7 times since launch. Looking forward, we are continuing to accelerate our software investments to improve the out-of-the-box experience for the growing number of customers adopting Instinct to power their diverse AI workloads.
For example, in January we began delivering biweekly container releases that provide more frequent performance and feature updates in ready-to-deploy packages, and we continue adding resources dedicated to the open-source community that enable us to build, test and launch new software enhancements at a faster pace. On the product front, we began volume production of MI325X in the fourth quarter. The production ramp is progressing very well to support new customer wins.
MI325 is well-positioned in market, delivering significant performance and TCO advantages compared to competitive offerings. We have also made significant progress with the number of customers adopting AMD Instinct. For example, we recently closed several large wins with MI300 and MI325 at lighthouse AI customers that are deploying Instinct at scale across both their inferencing and training production environments for the first time.

Looking ahead, our next-generation MI350 series featuring our CDNA 4 architecture is looking very strong. CDNA 4 will deliver the biggest generational leap in AI performance in our history, with a 35x increase in AI compute performance compared to CDNA 3. The silicon has come up really well. We were running large-scale LLMs within 24 hours of receiving first silicon, and validation work is progressing ahead of schedule. Customer feedback on the MI350 series has been strong, driving deeper and broader customer engagements with both existing and net-new hyperscale customers in preparation for at-scale MI350 deployments. Based on early silicon progress and the strong customer interest in the MI350 series, we now plan to sample lead customers this quarter and are on track to accelerate production shipments to midyear.

As we look forward in our multi-year AMD Instinct roadmap, I'm excited to share that MI400 series development is also progressing very well. The CDNA Next architecture takes another major leap, enabling powerful rack-scale solutions that tightly integrate networking, CPU and GPU capabilities at the silicon level to support Instinct solutions at data center scale. We designed CDNA Next to deliver leadership AI and HPC flops while expanding our memory capacity and bandwidth advantages and supporting an open ecosystem of scale-up and scale-out networking products. We are seeing strong customer interest in the MI400 series for large-scale training and inference deployments and remain on track to launch in 2026.

Turning to our acquisition of ZT Systems, we passed key milestones in the quarter and received unconditional regulatory approvals in multiple jurisdictions, including Japan, Singapore and Taiwan. Cloud and OEM customer response to the acquisition has been very positive as ZT's systems expertise can accelerate time-to-market for future Instinct accelerator platforms. We have also received significant interest in ZT's manufacturing business. We expect to successfully divest ZT's industry-leading US-based data center infrastructure production capabilities shortly after we close the acquisition, which remains on track for the first half of the year.

Turning to our client segment, revenue increased 58% year-over-year to a record $2.3 billion. We gained client revenue share for the fourth straight quarter, driven by significantly higher demand for both Ryzen desktop and mobile processors. We had record desktop channel sell-out in the fourth quarter in multiple regions as Ryzen dominated the best-selling CPU lists at many retailers globally, exceeding 70% share at Amazon, Newegg, Mindfactory and numerous others over the holiday period. In mobile, we believe we had record OEM PC sell-through share in the fourth quarter as Ryzen AI 300 Series notebooks ramped. In addition to growing share with our existing PC partners, we were very excited to announce a new strategic collaboration with Dell that marks the first time they will offer a full portfolio of commercial PCs powered by Ryzen Pro processors.
The initial wave of Ryzen-powered Dell commercial notebooks is planned to launch this spring, with the full portfolio ramping in the second half of the year as we focus on growing commercial PC share. At CES, we expanded our portfolio with the launch of 22 new mobile processors that deliver leadership compute, graphics and AI capabilities. Our processor portfolio has never been stronger, with leadership compute performance across the stack. For AI PCs, we are the only provider that offers a complete portfolio of CPUs enabling Windows Copilot+ experiences on premium ultra-thin, commercial, gaming and mainstream notebooks. Looking into 2025, we are planning for the PC TAM to grow by a mid-single-digit percentage year-on-year. Based on the breadth of our leadership client CPU portfolio and strong design-win momentum, we believe we can grow client segment revenue well ahead of the market.

Now turning to our gaming segment. Revenue declined 59% year-over-year to $563 million. Semi-custom sales declined as expected as Microsoft and Sony focused on reducing channel inventory. Overall, this console generation has been very strong, highlighted by cumulative unit shipments surpassing 100 million units in the fourth quarter. Looking forward, we believe channel inventories have now normalized and semi-custom sales will return to more historical patterns in 2025. In gaming graphics, revenue declined year-over-year as we accelerated channel sell-out in preparation for the launch of our next-gen Radeon 9000 series GPUs. Our focus with this generation is to address the highest-volume portion of the enthusiast gaming market with our new RDNA 4 architecture. RDNA 4 delivers significantly better ray tracing performance and adds support for AI-powered upscaling technology that will bring high-quality 4K gaming to mainstream players when the first Radeon 9070 series GPUs go on sale in early March.

Now turning to our embedded segment. Fourth-quarter revenue decreased 13% year-over-year to $923 million. The demand environment remains mixed, with the overall market recovering slower than expected as strength in aerospace and defense and test and emulation is offset by softness in the industrial and communication markets. We continued expanding our adaptive computing portfolio in the quarter with differentiated solutions for key markets. We launched our Versal RF series with industry-leading compute performance for aerospace and defense markets, introduced our Versal Premium series Gen 2 as the industry's first adaptive compute devices supporting CXL 3.1 and PCIe Gen 6, and began shipping our next-gen Alveo card with leadership performance for ultra-low-latency trading. We believe we gained adaptive computing share in 2024 and are well-positioned for ongoing share gains based on our design-win momentum. We closed a record $14 billion of design wins in 2024, up more than 25% year-over-year, as customer adoption of our industry-leading adaptive computing platforms expands and we won large new embedded processor designs.

In summary, we ended 2024 with significant momentum, delivering record quarterly and full-year revenue. EPYC and Ryzen processor share gains grew throughout the year, and we are well-positioned to continue outgrowing the market based on having the strongest CPU portfolio in our history. We established our multi-billion dollar data center AI business and accelerated both our Instinct hardware and ROCm software roadmaps.
For 2025, we expect the demand environment to strengthen across all of our businesses, driving strong growth in our data center and client businesses and modest increases in our gaming and embedded businesses. Against this backdrop, we believe we can deliver strong double-digit percentage revenue and EPS growth year-over-year. Looking further ahead, the recent announcements of significant AI infrastructure investments like Stargate and the latest model breakthroughs from DeepSeek and the Allen Institute highlight the incredibly rapid pace of AI innovation across every layer of the stack, from silicon to algorithms to models, systems and applications. These are exactly the types of advances we want to see as the industry invests in increased compute capacity while pushing the envelope on software innovation to make AI more accessible and enable breakthrough generative AI experiences that can run on virtually every digital device. All of these initiatives require massive amounts of new compute and create unprecedented growth opportunities for AMD across our businesses. AMD is the only provider with the breadth of products and software expertise needed to power AI end-to-end across data center, edge and client devices. We have made outstanding progress building the foundational product, technology and customer relationships needed to capture a meaningful portion of this market. And we believe this places AMD on a steep long-term growth trajectory, led by the rapid scaling of our data center AI franchise from more than $5 billion of revenue in 2024 to tens of billions of dollars of annual revenue over the coming years. Now I'd like to turn the call over to Jean to provide some additional color on our fourth-quarter and full-year results. Jean?