Arista Networks Q3 2024 Earnings Call Transcript

There are 12 speakers on the call.

Operator

Welcome to the Q3 2024 Arista Networks Financial Results Earnings Conference Call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question-and-answer session. Instructions will be provided at that time. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section of the Arista website following this call.

Operator

Ms. Liz Stein, Arista's Director of Investor Relations, you may begin.

Speaker 1

Thank you, operator. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal Q3 ended September 30, 2024. If you would like a copy of this release, you can access it online at our website.

Speaker 1

During the course of this conference call, Arista Networks' management will make forward-looking statements, including those relating to our financial outlook for Q4 of the 2024 fiscal year; longer-term business model and financial outlooks for 2025 and beyond; our total addressable market and strategy for addressing these market opportunities, including AI; customer demand trends; supply chain constraints; component costs; manufacturing output; inventory management and inflationary pressures on our business; lead times; product innovation; working capital optimization; and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on this call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release.

Speaker 1

With that, I will turn the call over to Jayshree.

Speaker 2

Thank you, Liz, and thank you, everyone, for joining us this afternoon for our Q3 2024 earnings call. We delivered revenues of $1,810,000,000 for the quarter with a record non-GAAP earnings per share of $2.40. Services and software support renewals contributed strongly at approximately 17.6% of revenue. Our non-GAAP gross margin of 64.6% reflected pressure from cloud titan customer pricing, offset by favorable enterprise margins and supply chain hygiene. International contributions for the quarter registered at approximately 18%, with the Americas very strong at 82%. Clearly, Q3 2024 had a lot of bright spots, and we are encouraged by the strength and momentum of the company.

Speaker 2

At our recent 10th-anniversary celebration and vision event in June 2024, we covered a lot of ground that we would have otherwise presented at an Analyst Day. So today, I'd like to briefly expand on our Arista 2.0 plans for 2025. We believe that networks are emerging at the epicenter of mission-critical transactions, and our Arista 2.0 strategy is resonating well with customers. We are, we believe, the only pure-play network innovator for the next decade. Our modern networking platforms are foundational for the transformation from silos to centers of data.

Speaker 2

This can be a data center, a campus center, a WAN center or an AI center. At the heart of this is our state-oriented, publish-subscribe network data lake EOS software stack for multimodal datasets. One simply cannot learn without having access to all this data. So it is all about the data. We provide customers the state foundation for data for AI and machine learning, without which AI and ML would just be buzzwords.

Speaker 2

Arista is well positioned with the right network architecture for client-to-campus, data center, cloud and AI networking. Three principles guide us and differentiate us in bringing this data-driven networking. Number 1, best-in-class, highly available, proactive products with resilience and hitless upgrade built in at multiple levels. Number 2, zero-touch automation and telemetry with predictive client-to-cloud one-click operations, with granular visibility that relies less on human staff. Number 3, prescriptive insights for deeper AI for networking, delivering AIOps and AVA algorithms for security, observability and root cause analysis.

Speaker 2

Networking for AI is gaining a lot of traction as we move from trials in 2023 to more pilots in 2024, connecting to thousands of GPUs, and we expect more production in 2025 and 2026. In our vernacular, Arista AI centers are made up of both back-end clusters and front-end networks. AI traffic differs greatly from cloud workloads in terms of diversity, duration and size of flow. The fidelity of AI traffic flows is a crucial factor in networking: the slowest flow matters, and one slow flow can slow down the entire job completion time. Our AI centers connect seamlessly from the back end to the front end of compute, storage, WAN and classic cloud networks.

Speaker 2

Arista is emerging as a pioneer in scale-out Ethernet accelerated networking for large-scale training and AI workloads. Our new Etherlink portfolio, with wire-speed 800 gig throughput and non-blocking performance, scales from single-tier to efficient 2-tier networks for over 100,000 GPUs, potentially even 1,000,000 AI accelerators with multiple tiers. Our accelerated AI networking portfolio consists of 3 families with over 20 switching products, not just 1 point switch. At the recent OCP event in mid-October 2024, we officially launched a very unique platform, the Distributed Etherlink 7700, to build 2-tier networks for up to 10,000 GPU clusters. The 7700R4 DES platform was developed in close collaboration with Meta.

Speaker 2

And while it may physically look like and be cabled like a 2-tier leaf-spine network, DES provides single-stage forwarding with a highly efficient spine fabric, eliminating the need for tuning and ensuring fast failover for large AI accelerator-based clusters. It complements our flagship Arista 7800 AI spine for the ultimate scale, with a differentiated, fair and fully scheduled cell-spraying architecture and a virtual output queuing fabric, saving valuable AI processor resources and improving job completion time. I would now like to invite John McCool, our Chief Platform Officer, to describe our 2024 platform and supply chain innovations after a challenging couple of years. John, over to you.

Speaker 3

Thank you, Jayshree. I'm pleased to report that Arista's 7700R4 Distributed Etherlink Switch and 7800R4 spine, along with the 7060X6 AI leaf that we announced in June, have entered production, providing our customers the broadest set of 800 gigabit per second Ethernet products for their AI networks. Together with 800 gigabit per second parallel optics, our customers are able to connect two 400 gigabit per second GPUs to each port, increasing deployment density over current switching solutions. This broad range of Ethernet platforms allows our customers to optimize density and minimize tiers to best match the requirements of their AI workload. As our customers continue with AI deployments, they're also preparing their front-end networks.
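The port-density arithmetic described above reduces to simple integer division; a minimal illustrative sketch (the function name is hypothetical, not an Arista tool):

```python
def gpu_links_per_port(switch_port_gbps: int, gpu_link_gbps: int) -> int:
    """How many GPU links one switch port can serve via parallel-optics breakout."""
    return switch_port_gbps // gpu_link_gbps

# An 800G port with parallel optics serves two 400G GPU connections;
# a 400G port can likewise be broken out as 4x100G.
print(gpu_links_per_port(800, 400))  # 2
print(gpu_links_per_port(400, 100))  # 4
```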

Speaker 3

New AI clusters require new high-speed port connections into the existing backbone. These new clusters also increase bandwidth on the backbone to access training data, capture snapshots and deliver results generated by the cluster. This trend is driving increased demand for our 7800R3 400 gigabit solution. While the post-pandemic supply chain has returned to predictability, lead times for advanced semiconductors remain extended from pre-pandemic levels. To assure availability of high-performance switching silicon, we've increased our purchase commitments for these key components.

Speaker 3

In addition, we will increase our on hand inventory to respond to the rapid deployment of new AI networks and reduce overall lead times as we move into next year. Our supply chain team continues to work closely with planning to best align receipt of these purchases with expected customer delivery. Next generation data centers integrating AI will contend with significant increases in power consumption, while looking to double network performance. Our tightly coupled electrical and mechanical design flow allows us to make system level design trade offs across domains to optimize our solutions. Our experience in co design with the leading cloud companies provides insight into the variety of switch configurations required for these tightly coupled data center environments.

Speaker 3

Finally, our development operating software with SDK integration, device diagnostics and data analysis supports a fast time to design and production with a focus on first time results. These attributes give us confidence that we will continue to execute on our roadmap in this rapidly evolving AI networking segment. Back to you, Jayshree.

Speaker 2

Thank you, John, and congrats on a very high-performance year to you and your new executives Alex Rose, Mike Capas, Luke Calero and the entire team. You guys have really done a phenomenal job. Critical to the rapid adoption of AI networking is the Ultra Ethernet Consortium specification, expected imminently, with Arista's key contributions as a founding member. The UEC ecosystem for AI has evolved to over 97 members. In our view, Ethernet is the only long-term viable direction for open, standards-based AI networking.

Speaker 2

Arista is building holistic AI centers powered by the unparalleled superiority of EOS and the depth of automation and visibility software provided by CloudVision. Arista EOS delivers dynamic load balancing methods for congestion control, and smart system upgrades where AI traffic continues to flow in the midst of an upgrade. Arista continues to work with AI accelerators of all types, and we're agnostic to NICs, bringing advanced EOS visibility all the way down to the host. Shifting to 2025 goals. As we discussed at our New York Stock Exchange event in June, our TAM has expanded to $70,000,000,000 in 2028.

Speaker 2

And you know, we've experienced some pretty amazing growth years, with 33.8% growth in 2023, and 2024 appears to be heading to at least 18%, exceeding our prior predictions of 10% to 12%. This is quite a jump in 2024, influenced by faster AI pilots. We are now projecting annual growth of 15% to 17% next year, translating to approximately $8,000,000,000 in 2025 revenue with our healthy expectation of operating margin. Within that $8,000,000,000 revenue target, we are quite confident in achieving our campus and AI back-end networking targets of $750,000,000 each in 2025, which we set 1 or 2 years ago. It's important to recognize, though, that the back end of AI will influence the front-end AI network and its ratios.

Speaker 2

This ratio can be anywhere from 30% to 100%, and sometimes we've seen it as high as 200% of the back-end network, depending on the training requirements. Our comprehensive AI center networking number is therefore likely to be double our back-end target of $750,000,000, now aiming for approximately $1,500,000,000 in 2025. We will continue to aim for double-digit annual growth and a 3-year CAGR forecast in the mid-teens for the foreseeable future of 2024 to 2026. More details forthcoming from none other than our Chief Financial Officer. So over to you, Chantelle.
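The back-end/front-end ratio walked through above is simple arithmetic; a minimal sketch, where the $750,000,000 target and the 30% to 200% ratios are the figures cited on the call and the function name is hypothetical:

```python
BACK_END_TARGET = 750_000_000  # 2025 back-end AI networking target ($), per the call

def ai_center_revenue(front_end_ratio: float) -> float:
    """Total AI-center revenue if the front-end network is sized as a
    fraction of the back-end build-out (30%-100%, occasionally 200%)."""
    return BACK_END_TARGET * (1 + front_end_ratio)

print(ai_center_revenue(0.30))  # low end of the cited range
print(ai_center_revenue(1.00))  # 1500000000.0 -> the ~$1.5B figure
```

At a 100% ratio the total exactly doubles the back-end target, matching the approximately $1,500,000,000 figure on the call.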

Operator

Thank you, Jayshree. Turning now to more detail on the financials. This analysis of our Q3 results and our guidance for Q4 of fiscal year 2024 is on a non-GAAP basis and excludes all non-cash stock-based compensation impacts, intangible asset amortization and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release.

Operator

Total revenues reached $1,810,000,000, marking a 20% year-over-year increase. This strong performance exceeded our guidance range of $1,720,000,000 to $1,750,000,000. Services and subscription software contributed approximately 17.6% of revenues in the 3rd quarter. International revenues for the quarter came in at $330,900,000, or 18.3% of total revenue, down from 18.7% last quarter. This quarter-over-quarter decrease reflects an increased contribution from domestic shipments to our cloud and enterprise customers. Overall gross margin in Q3 was 64.6%, above the upper range of our guidance of approximately 64%, down from 65.4% last quarter and up from 63.1% in Q3 of the prior year.

Operator

This year-over-year improvement is driven by stronger enterprise margins and supply chain discipline in the current quarter. Operating expenses in the quarter were $279,900,000, or 15.5% of revenue, down from $319,800,000 last quarter. R and D spending came in at $177,500,000, or 9.8% of revenue, down from $216,700,000 last quarter. An item of note is that there were additional R and D related expenses originally expected in Q3 that are now expected to materialize in Q4. R and D headcount has increased a low double-digit percentage versus Q3 of the prior year. Sales and marketing expense was $83,400,000, or 4.6% of revenue, down slightly from last quarter.

Operator

Our G and A cost came in at $19,100,000, or 1.1% of revenue, similar to last quarter. Our operating income for the quarter was $890,100,000, or 49.1% of revenue. This was favorably impacted by the shift of R and D related expenses from Q3, now anticipated in Q4 of this year. Other income and expense for the quarter was a favorable $85,300,000, and our effective tax rate was 21.1%. This resulted in net income for the quarter of $769,100,000, or 42.5% of revenue.

Operator

Our diluted share number was 320,500,000 shares, resulting in diluted earnings per share for the quarter of $2.40, up 31.1% from the prior year. This too was favorably impacted by the shift in R and D related expenses from Q3 to Q4. Now turning to the balance sheet. Cash, cash equivalents and investments ended the quarter at approximately $7,400,000,000. In the quarter, we repurchased $65,200,000 of our common stock at an average price of $318.14 per share. Of the $1,200,000,000 repurchase program approved in May 2024, $1,000,000,000 remains available for repurchase in future quarters.
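As a back-of-envelope check, the reported EPS and net margin follow directly from the rounded figures above (illustrative arithmetic only, using the numbers as stated on the call):

```python
# Reported Q3 figures, rounded as stated on the call.
net_income = 769_100_000      # $769.1M
diluted_shares = 320_500_000  # 320.5M shares
revenue = 1_810_000_000       # $1.81B

eps = net_income / diluted_shares  # ~2.3997, reported as $2.40
net_margin = net_income / revenue  # ~0.4249, reported as 42.5%
print(round(eps, 2), round(net_margin * 100, 1))  # 2.4 42.5
```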

Operator

The actual timing and amount of future repurchases will be dependent upon market and business conditions, stock price and other factors. Turning to operating cash performance for Q3. We generated approximately $1,200,000,000 of cash from operations in the period, reflecting strong earnings performance combined with favorable working capital results. DSOs came in at 57 days, down from 66 days in Q2, reflecting a strong collections quarter combined with favorable billing linearity. Inventory turns were 1.3 times, up from 1.1 last quarter.
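To see what a 57-day DSO implies, one can invert the standard DSO formula; a rough sketch assuming a 91-day quarter and the rounded revenue figure above (the helper name is hypothetical):

```python
REVENUE = 1_810_000_000  # Q3 revenue ($), rounded as reported
DAYS_IN_QUARTER = 91

def implied_receivables(dso_days: float) -> float:
    """Invert DSO = receivables / revenue * days to estimate
    period-end accounts receivable."""
    return dso_days / DAYS_IN_QUARTER * REVENUE

# 57 days of sales outstanding implies roughly $1.13B of receivables:
print(round(implied_receivables(57) / 1e9, 2))  # 1.13
```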

Operator

Inventory decreased to $1,800,000,000 in the quarter, down from $1,900,000,000 in the prior period, reflecting a reduction in our raw materials inventory. Our purchase commitments and inventory at the end of the quarter totaled $4,100,000,000, up from $4,000,000,000 at the end of Q2. We expect this number to continue to have some variability in future quarters as a reflection of demand for our new product introductions. Our total deferred revenue balance was $2,500,000,000, up from $2,100,000,000 in Q2. The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis.

Operator

Our product deferred revenue increased approximately $320,000,000 versus last quarter. Fiscal 2024 continues to be a year of new product introductions, new customers and expanded use cases. These trends have resulted in increased customer trials and contracts with customer-specific acceptance clauses, and have increased, and will continue to increase, the variability and magnitude of our product deferred revenue balances. Accounts payable days were 42 days, down from 46 days in Q2, reflecting the timing of inventory receipt payments. Capital expenditures for the quarter were $7,000,000. In October, we began our initial construction work to build expanded facilities in Santa Clara, and we expect to incur approximately $15,000,000 during Q4 for this project.

Operator

Now turning to Q4. Our guidance for Q4, which is based on non-GAAP results and excludes any non-cash stock-based compensation impacts, intangible asset amortization and other non-recurring items, is as follows: revenues of approximately $1,850,000,000 to $1,900,000,000; gross margin of approximately 63% to 64%; and operating margin of approximately 44%. Our effective tax rate is expected to be approximately 21.5%, with diluted shares of approximately 321,000,000 shares on a pre-split basis. On the cash front, while we have experienced significant increases in operating cash over the last couple of quarters, we anticipate an increase in working capital requirements in Q4. This is primarily driven by increased inventory in order to respond to the rapid deployment of AI networks and to reduce overall lead times as we move into 2025, as mentioned in John's prepared remarks.

Operator

We will continue our investment in R and D, go-to-market activities and scaling the company. Additionally, in Q4, as part of our ongoing commitment to creating long-term value for our shareholders and enhancing the accessibility of our stock, we are pleased to announce that Arista's Board of Directors has approved a 4-for-1 stock split. This decision reflects our confidence in the continued growth and prospects of the company. It's important to note that while the stock split increases the number of shares outstanding, it does not change the intrinsic value of the company, nor does it impact our financial performance or strategy. The split is designed to make our stock more accessible and attractive to a wider range of investors, particularly retail investors, which we believe will ultimately support broader ownership and improved trading dynamics.
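The split mechanics described above can be illustrated numerically; a sketch using the pre-split share count and Q3 EPS cited on the call (not guidance, and the function is hypothetical):

```python
SPLIT_RATIO = 4  # the approved 4-for-1 split

def apply_split(shares: int, price: float, eps: float):
    """Shares multiply by the split ratio; price and per-share metrics
    divide by it, so market cap (shares * price) is unchanged."""
    return shares * SPLIT_RATIO, price / SPLIT_RATIO, eps / SPLIT_RATIO

shares, price, eps = apply_split(321_000_000, 318.14, 2.40)
assert shares * price == 321_000_000 * 318.14  # value is invariant
print(shares, eps)  # 1284000000 0.6
```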

Operator

Transitioning now to fiscal year 2025. As Jayshree mentioned, we are projecting revenue growth of 15% to 17%. The expected revenue mix is forecasted to have an increased weighting of cloud and AI customers, placing the gross margin outlook at 60% to 62% and operating margin at approximately 43% to 44%. Our commitment remains to continue to invest in R and D, go-to-market and the scaling of the company as we forecast reaching approximately $8,000,000,000 in revenue in 2025. To reiterate, we forecast double-digit growth for the foreseeable future and a 3-year revenue CAGR goal of mid-teens for fiscal years 2024 through 2026.

Operator

We are excited by the current and future opportunities to serve our customers as the pure play networking innovation company and to deliver strong returns to our shareholders. I will now turn the call back to Liz. Liz?

Speaker 1

Thank you, Chantal. We will now move to the Q and A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away.

Operator

We will now begin the Q and A portion of the Arista earnings call. Your first question comes from the line of Samik Chatterjee with JPMorgan. Please go ahead.

Speaker 4

Hi. Thanks for taking my question, and strong set of results. But if I can ask one on the guidance, if you don't mind. Jayshree, you're guiding here to the $750,000,000 AI target that you had issued previously, and you're also guiding to sort of meet your campus revenue target. So if I take those 2 into account, it does imply that the ex-AI and ex-campus business is only growing single digits next year.

Speaker 4

This is on the heels of coming through a double-digit year in 2024, where you comped backlog digestion in 2023. So maybe help us parse through why there's a significant deceleration in the non-AI and non-campus business implied in the numbers, and what's driving that outlook? Thank you.

Speaker 2

Thank you, Samik. As you know, our visibility only extends to roughly about 6 months, right? So we don't want to get ahead of ourselves on how much better we can do in 2025. That's how we started 2024 as well, and we were pleasantly surprised by the faster acceleration of AI pilots in 2024. So we definitely see that our large cloud customers are continuing to refresh on the cloud, but are pivoting very aggressively to AI.

Speaker 2

So it wouldn't surprise me if we grow faster in AI and faster in campus in the new center markets, and slower in our classic markets, call that data center and cloud. And this is the best we can see right now. It doesn't mean we couldn't do better or worse. But as far as our visibility goes, I think this represents a nice combination of all our different customer segments and all our different product sectors.

Speaker 4

Thanks. Thank you.

Speaker 2

Thanks.

Operator

Our next question comes from the line of Antoine Chkaiban with New Street Research. Please go ahead.

Speaker 3

Hi, thank you very much for taking my question. Can you maybe provide an update on the 4 major AI trials that you've discussed in the past? How are things progressing versus your expectations as of 90 days ago? When do you expect the move to production to happen, and what kind of scale are we talking about?

Speaker 2

Yes. No, thank you, Antoine. That's a good question. Arista now believes we are actually 5 out of 5, not 4 out of 5. We are progressing very well in 4 out of 5 clusters.

Speaker 2

3 of the customers are moving from trials to pilots this year, and we're expecting those 3 to become 50,000 to 100,000 GPU customers in 2025. We're also pleased with the new Ethernet trial in 2024 with our 5th customer. This customer was historically very, very InfiniBand-driven, and with that particular customer we are largely in a trial mode in 2024 and hope to go to pilots and production in 2025. So 3 are going well, and 1 is just starting.

Speaker 2

The 5th customer is moving slower than we expected. They may get back on their feet in 2025; they're awaiting new GPUs, and they've got some challenges on power, cooling, etcetera. So those 3, I would give an A. The 4th one, we're really glad we won and we're getting started.

Speaker 2

And the 5th one, I'd say, is steady state, not quite as great as we would have expected them to be.

Speaker 3

Thanks guys for the color.

Operator

Our next question comes from the line of Tal Liani with Bank of America. Please go ahead.

Speaker 5

Hi, guys.

Speaker 6

NVIDIA, in the last quarter, because of the launch of Spectrum-X, showed that in data center switching their market share went up from about 4% to 15%. Does it mean that you're seeing increased competition from NVIDIA? And is it competing with you in the same spots, or is it more competing with white boxes? And the second question is about white boxes: what is the outlook for white box participation in Gen AI? Is it going to be higher or lower than in front-end data centers?

Speaker 2

Okay. Hi, thanks Tal. Which question do you want me to answer?

Speaker 6

Let's go with NVIDIA.

Speaker 2

Okay. Alright. So give me the gift of

Speaker 2

Somebody else may ask that question anyway, so you'll get your answer.

Speaker 8

Okay.

Speaker 2

But just to answer your question on NVIDIA, first of all, we view NVIDIA as a good partner. If we didn't have the ability to connect to their GPUs, we wouldn't have all this AI networking demand. So thank you NVIDIA, thank you, Jensen for the partnership. Now as you know, NVIDIA sells the full stack and most of the time it's with InfiniBand. And with the Mellanox acquisition, they do have some Ethernet capability.

Speaker 2

We personally do not run into their Ethernet capability very much; we run into it maybe in 1 or 2 customers. Generally speaking, Arista is looked upon as the expert there. We have a full portfolio. We have full software.

Speaker 2

And whether it's the large scale-out Ethernet networking customers like the titans, or even the smaller enterprises, where we're seeing a lot of smaller GPU clusters, Arista is looked upon as the expert. But that's not to say we're going to win 100%. We certainly welcome NVIDIA as a partner on the GPU side and a fierce competitor, and we look to compete with them on Ethernet switching.

Speaker 9

Thank you.

Speaker 1

Thank you.

Operator

Our next question comes from the line of Simon Leopold with Raymond James. Please go ahead.

Speaker 5

Thanks. I'll tag-team with Tal, so we're partnered once again here. I do want to look at the competitive landscape broadly. What I'm trying to understand is how it may be changing with the advent of AI.

Speaker 5

So not just hearing from you about White Box, but also competitors like Cisco and Juniper and Nokia. So really an update on the competitive landscape would be helpful. Thank you.

Speaker 2

Yes. Thank you, Simon. That's a nice broad question. So since you asked me specifically about AI as opposed to cloud, let me parse this problem into 2 halves, the back end and the front end, right? At the back end, we're natively connecting to GPUs.

Speaker 2

And many times we just don't see it because somebody just bundles it with the GPU, in particular NVIDIA. And you may remember, a year ago I was saying we're on the outside looking in, because most of the bundling is happening with InfiniBand. I would expect that in the back end, any share Arista gets, including that $750,000,000, is incremental. It's brand new to us. We were never there before.

Speaker 2

So we'll take all we can get, but we are not claiming to be a market leader there. We're in fact claiming that there are many incumbents there with InfiniBand and smaller versions of Ethernet, and Arista is looking to gain more credibility and experience and become the gold standard for the back end. On the front end, in many ways, we are viewed as the gold standard. So competitively, it's a much more complex network. You have to build a leaf-spine architecture.

Speaker 2

John alluded to this. There's a tremendous amount of scale with L2, L3, VPNs, VXLAN, visibility, telemetry, automation, routing at scale, encryption at scale. And this, what I would call our accelerated networking portfolio, complements NVIDIA's accelerated compute portfolio. And compared to all the peers you mentioned, we have the absolute best portfolio of 20 switches and 3 families, and the capability and the competitive differentiation is bar none. In fact, I am specifically aware of a couple of situations where the AI applications aren't even running on some of the industry peers you talked about, and they want to swap theirs for ours.

Speaker 2

So we're feeling extremely bullish with the 7800 flagship product, the newly introduced 7700 that we worked on closely with Meta, and the 7060, a product line running today mostly at 400 gig because a lot of the NICs and the ecosystem aren't there yet for 800. But moving forward into 800, this is why John and the team are building the supply chain to get ready for it. So competitively, I would say we're doing extremely well in the front end, and it's incremental on the back end. Overall, I would classify our performance in AI, coming from being a no one 2 years ago to where we are today, an A.

Speaker 5

Thank you.

Speaker 2

Thank you, Simon.

Operator

Our next question comes from the line of Ben Reitzes with Melius Research. Please go ahead.

Speaker 9

Hey, Jayshree and team, thanks for the question. I wanted to ask a little bit more about the $750,000,000 in AI for next year. Has your visibility on that improved over the last few months? I wanted to reconcile your comment around the 5th customer now going slower than expected. It sounds like you're now in 5 of 5, but I'm wondering if that 5th customer going slower is limiting upside or limiting your visibility there, or has it actually improved and you've gotten more conservative over the last few months? Thanks a lot.

Speaker 2

Somebody had to bring up "conservative," Ben, but I think we're being realistic. So I think you said it right. On 3 out of the 5, we have good visibility, at least for the next 6 months, maybe even 12. John, what do you think? Yes?

Speaker 10

Yes.

Speaker 2

On the 4th one, we're in early trials with plenty still to do. So let's see, but we're not looking for 2025 to be the bang-up year on the 4th one; it's probably 2026. And on the 5th one, we're a little bit stalled, which may be why we're being careful about predicting how they'll do. They may step in nicely in the second half of 2025, in which case we'll let you know. But if they don't, we're still feeling good about our guide for 2025.

Speaker 2

Is that right, Chantelle?

Operator

I would totally agree. It's a good question, Ben. But I think out of the 5, the way Jayshree categorized them, I would completely agree.

Speaker 9

Okay. Thanks a lot guys.

Operator

Our next question comes from the line of Karl Ackerman with BNP Paribas. Please go ahead.

Speaker 5

Yes, thank you. Jayshree, could you discuss whether the AI programs you are engaged with at hyperscalers will be deploying your new Etherlink switches and AI spine products on 800 gig ports? In other words, have these pilots or trials been on 400 gig, and could production be on 800 gig? And if so, what's the right way to think about the hardware mix of sales on 800 gig in 2025?

Speaker 2

Yes, yes. Good question. I mean, just going back to history again, it was always hard to tell the 400 from the 100, because somebody can take that 400 and break it into breakouts of 4x100. So I would say today, if you ask John and me, the majority of the trials and pilots are on 400, because people are still waiting for the ecosystem at 800, including the NICs and the UEC and the packet-spraying capabilities, etcetera. So while we're in some early trials on 800, the majority of 2024 is 400 gig.

Speaker 2

I expect as we go into 2025, we will see a better split between 400 and 800.

Speaker 9

Thank you.

Speaker 2

Thank you, Karl.

Operator

Your next question comes from the line of Ryan Koontz with Needham and Company. Please go ahead.

Speaker 8

Great, thanks for the question. I was hoping we could touch base on your campus opportunities a bit and where you're seeing the most traction in terms of applications. Is this primarily from your strength in kind of moving big bits around the campus core, or are you seeing WiFi? Can you maybe just update us on the campus applications and which verticals you're seeing the most traction in?

Speaker 8

Thank you.

Speaker 2

Yes, Ryan, let me step back and tell you that our enterprise opportunity has never been stronger. As a pure-play innovator, we are getting invited more and more into enterprise deals, even though sometimes we don't have the sales coverage for it. And what I mean by that is, I think Arista is being sought out for a network design that doesn't have 5 operating systems, 7 different silos and a bunch of spaghetti code. And there's an awful lot of competitive fatigue.

Speaker 2

And add to that the fact that there's an awful lot of consolidation going on, and a lot of our peers in the industry are looking at other things, whether it's observability or bringing other products together. So our enterprise opportunity now, we don't just characterize as data center. There's data center, there's campus center, there's WAN center, and of course a little bit of AI in there too. So now let me address your campus question more specifically. Clearly, one of the first places everybody went on our campus is the universal spine.

Speaker 2

They go, oh, okay, I can have the same spine for my data center and campus. This is so cool. So that activity has already started, and a big part of our $750,000,000 projection comes from the confidence that they've already put in a platform and a foundation to get ready for more spine. Then if Kumar, our VP and GM for campus, were here, he'd say, but Jayshree, you need to measure the edge ports, which is the power of Ethernet, the wired and the WiFi.

Speaker 2

And this is super important. John McCool is smiling or laughing. Sounds like Kumar. Sounds like Kumar. Yeah. And so he would say, you got to get that right.

Speaker 2

And so number 1, we're in the spine. Number 2, we're making stronger progress on the wired. Our weakest piece, partly because we're data center folks and we're still learning how to sell radios, is the WiFi, and we plan to fix that. And this is where the extra coverage will come in. So I would say more of our strength is coming in the wired and spine.

Speaker 2

We are doing very well in pockets of WiFi, but we need to do better.

Speaker 8

Super helpful. Jayshree, thank you.

Speaker 2

Chantal, you want to add something?

Operator

Just to take the second part, I think you were asking about some of the verticals in your question, so I just wanted to add those. In both data center and campus, I would say we're seeing strength in financials, healthcare, media, retail, and Fed and SLED.

Speaker 2

Yes, Fed and SLED, that's a good one. This is historically an area we have not paid attention to. The federal market we're getting very serious about, including setting up its own subsidiary. So Chantal, you've been a huge part of pushing us there. So thank you for that.

Speaker 2

Thank you.

Speaker 8

Thanks so much.

Operator

Thank you, Ryan. Our next question will come from the line of Amit Daryanani with Evercore. Please go ahead.

Speaker 10

Good afternoon. Thanks for taking my question. I guess I'm hoping you could spend some time on the sizable acceleration we're seeing both in your total deferred number, but also the product deferred number, which is going up pretty dramatically. Jayshree, historically when product deferred goes up in such a dramatic manner, you actually end up with really good revenue acceleration in the out years, and yet you're guiding for revenue to decelerate in 2025. Maybe just help me connect what's the delta, why product deferred won't drive the acceleration the way it historically has?

Speaker 2

I'm going to let Chantal, the expert, answer the question, but I will say one line. Remember, in the case of those examples you're quoting, the trials were typically, I don't know, 6 to 12 months. This can be multiple years and can take a lot longer to manifest. It may not all happen in 2025. Over to you, Chantal.

Speaker 2

Yes.

Operator

So I think, yes, so thank you, Jayshree. So part of it is the type of use case, the type of customers, the mix of product that goes in there. They all have bespoke timeframes, and as Jayshree referred to, you're starting to see those lengthen. And the other thing too is that this is what we know now. As you move through every quarter, there are deferreds in and out.

Operator

So this is what we know at this time. And it's a mix of the variables that we told you before. And then as we move through 2025, we'll continue to update.

Speaker 1

Thanks Amit.

Operator

Our next question will come from the line of Meta Marshall with Morgan Stanley. Please go ahead.

Speaker 7

Great. Thanks. Jayshree, clearly you have these 4 main trials and have added a 5th. But just how are you thinking about adding other either Tier 2 opportunities or sovereigns or just some of these other customers that are investing heavily in AI? And how do you see those opportunities developing for Arista?

Speaker 2

Yes, Meta, this is a good question. So we're not saying these 5 are the be all end all, but these are the 5 we predict can go to 100,000 GPUs and more. That's the way to look at this. So there are a lot of just AI titans, if you will. And they can be in the cloud hyperscaler titan group.

Speaker 2

They could be in the Tier 2 as well by the way. Very rarely would they be in a classic enterprise. By the way, we do have at least 10 to 15 trials going on in the classic enterprise too, but they're much smaller GPU count, so we don't talk about it. So we're focusing on the big five to make a point that they really skew our numbers and they're very important to establish our beachhead, our innovation and our market share in AI. But there's definitely more going on.

Speaker 2

In terms of, specifically, your question on Tier 2 and will there be more, there will be more, but these are the 5 we see in that category, and they're spread across both the Tier 1 titan cloud as well as the Tier 2. Great. Thanks so much. Thank you, Meta.

Operator

Our next question comes from the line of Sebastien Naji with William Blair. Please go ahead.

Speaker 5

Yes. Good evening. Thanks for taking the question. Just specifically on the Ethernet or EtherLink portfolio, could you maybe rank order or comment on what you see as the opportunity across each of the 3 families, so single tier, leaf spine, and then the EtherLink switch, as we're going into 2025 and beyond?

Speaker 2

I'll take a crack at it, but John, help me out here because this is purely a guesstimate. And it's probably one where we should say no comment, but we'll try to give you color. On the EtherLink, I would say the fixed 7060 switches in terms of units are very popular because it's a single switch, it's one our customers are familiar with, and it's based on an intense partnership with Broadcom. So we've done Tomahawk 1, 2, 3, 4 and here we are on 5, right? So I would say volume wise, that's the big one.

Speaker 2

Going to the other extreme, the 7800 in volume may be smaller, but in dollars it is extremely strategic, and this is where we feel competitive. Again, working with our partners at Broadcom with the Jericho and Qumran family, this is just, what would you say, John, a real flagship, right?

Speaker 3

That's right.

Speaker 2

In dollars, that's the stellar one, if you will. And then the 7700 is kind of the best of both worlds that gives you all the capabilities of the 7800 in a mini configuration up to 10,000 GPUs. It's brand new, but competitively there's no peer for this. Nobody else does this but us, with a scheduled fabric in a single stage.

Speaker 2

We did this in very close collaboration, John, with Meta, right?

Speaker 3

That's right.

Speaker 2

So you guys have been working together for 18 months, 2 years, I would say. So I think we know less about how to qualify that, but it could be very promising and it could be a fast accelerator in the next couple of years.

Speaker 3

Yes, I can just add to that. People are interested in the 7700; the very large scale is attractive for the 7700. Between the 7060 and the 7800, we do see people that are optimizing a mix of both of those products in the same deployment, so they can get the minimum number of tiers but have the maximum number of GPUs that fit their use case. So we do see a lot of tailoring now around the size of the deployments based on how many GPUs they want to deploy in their data center.

Speaker 2

Yes, that's a really good point. And then suddenly they'll go, okay, I want to go from a 4 way radix to an 8 way, and then suddenly we have to add more line cards in your 7800 and come running to you for more supply chain.

Speaker 3

That's right. Great.

Speaker 5

Thank you both.

Operator

Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.

Speaker 8

Yes. Thanks for taking the question. I wanted to segue off the competitive landscape and ask you, when I look at your 2025 outlook as well as the midterm model that you provided, it looks like you're making some assumptions of some margin declines. I'm curious what's underlying those expectations of gross margin declines? Is it mix of customers?

Speaker 8

Do you expect multiple 10% plus customers in 2025? Just any help on what's factored into that those margin expectations? Thank you.

Operator

Yes. Thanks, Aaron, for your question. I would say, absolutely, in the outlook that you referred to, it is customer mix. We're expecting John to continue the great supply chain discipline that he's been doing with his team. So, customer mix only.

Operator

And as for the 10% customers, I would say the one dynamic, maybe it's a bit cheeky to say it, but as the denominator gets bigger, that gets a bit tougher. So we'll see as we go in the out years. But right now, we'll just keep to the ones that we currently talk about and we'll see how that goes in 2025 and 2026.

Speaker 2

It's going to get harder and harder to have 10% customers. So I believe M and M will still be that in 2025, but I don't anticipate there are any others at the moment.

Speaker 8

Thank you.

Speaker 1

Thanks, Aaron. Operator, we have time for one last question.

Operator

Our final question will come from the line of Atif Malik with Citigroup. Please go ahead.

Speaker 11

Hi, thank you for taking my question. Jayshree, at some of the recent conferences, you've talked about every dollar spent on the back end being at least 2x on the front end. What signs are you looking for to see the lift from AI on the front end or classic cloud from the pressure on the bandwidth?

Speaker 2

Yes. Listen, I think it all depends on, Atif, their approach to AI. If they just want to build a back end cluster and prove something out, then they just look for the highest job training completion and intense training models, and it's a very narrow use case. But what we're starting to see more and more, especially with the top 5, like I said, is for every dollar spent in the back end, you could spend 30% more, 100% more, and we've even seen a 200% more scenario, which is why we believe our $750,000,000 will carry over to another $750,000,000 next year on front end traffic that will include AI, but it will include other things as well.

Speaker 2

It won't be unique to AI. So I wouldn't be surprised if that number is anywhere between 30% and 100%. The average is 100%, which is 2x our back end number. So feeling pretty good about that. I don't know how to exactly count that as pure AI, which is why I qualify it by saying increasingly, if you start having inference, training, front end, storage, WAN, and classic cloud all come together, the pure AI number becomes difficult to track.

Speaker 1

Thanks so much, Atif. This concludes the Arista Networks Q3 2024 earnings call. We have posted a presentation which provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today and thank you for your interest in Arista.

Operator

Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.
