NYSE:ANET Arista Networks Q2 2024 Earnings Report $29.88 +0.46 (+1.56%) As of 04:00 PM Eastern
EPS Results: Actual EPS $0.53 | Consensus EPS $0.43 | Beat by +$0.10 | One Year Ago EPS $0.35
Revenue Results: Actual Revenue $1.69 billion | Expected Revenue $1.66 billion | Beat by +$34.88 million | YoY Revenue Growth +15.90%
Announcement Details: Quarter Q2 2024 | Date 7/30/2024 | Time After Market Closes | Conference Call Date Tuesday, July 30, 2024 | Conference Call Time 4:30PM ET
Arista Networks Q2 2024 Earnings Call Transcript, provided by Quartr, July 30, 2024. There are 21 speakers on the call.
Operator00:00:00As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section at the Arista website following this call.
Operator00:00:08Ms. Liz Stein, Arista's Director of Investor Relations, you may begin.
Speaker 100:00:13Thank you, operator. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal Q2 ending June 30, 2024. If you would like a copy of this release, you can access it online at our website.
Speaker 100:00:42During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the third quarter of the 2024 fiscal year, longer-term financial outlooks for 2024 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on the call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release.
Speaker 100:01:55With that, I will turn the call over to Jayshree.
Speaker 200:01:58Thank you, Liz, and thank you, everyone, for joining us this afternoon for our Q2 2024 earnings call. As a pure-play networking innovator with a greater than $70 billion TAM ahead of us, we are pleased with our superior execution this quarter. We delivered revenues of $1.69 billion for the quarter with a non-GAAP earnings per share of $2.10. Services and software support renewals contributed strongly at approximately 17.6% of revenue. Our non-GAAP gross margin of 65.4% was influenced by outstanding manufacturing discipline realizing cost reductions.
International contribution for the quarter registered at 19%, with the Americas strong at 81%.
Speaker 200:02:46As we celebrated our 10th anniversary at the New York Stock Exchange with our near and dear investors and customers, we are now supporting over 10,000 customers with a cumulative 100 million ports deployed worldwide. In June 2024, we launched Arista's EtherLink AI platforms that are Ultra Ethernet Consortium compatible, validating the migration from InfiniBand to Ethernet. This is a rich portfolio of 800-gig products, not just a point product, but in fact a complete portfolio that is both NIC and GPU agnostic. The AI portfolio consists of the 7060X6 AI Leaf Switch that supports 64 800-gig or 128 400-gig Ethernet ports with a capacity of 51.2 terabits per second. The 7800R4 AI Spine is our 4th generation of Arista's flagship 7800, offering 100% non-blocking throughput with a proven virtual output queuing architecture.
Speaker 200:03:48The 7800R4 supports up to 460 terabits in a single chassis, corresponding to 576 800-gigabit Ethernet ports or 1,152 400-gigabit port density. The 7700R4 AI Distributed EtherLink Switch is a unique product offering with a massively parallel distributed scheduling and congestion-free traffic spraying fabric. The 7700 represents the first in a new series of ultra-scalable intelligent distributed systems that can deliver the highest consistent throughput for very large AI clusters. Let's just say once again, Arista is making Ethernet great. First, we began this journey with low latency in the 2009 timeframe, and then there was cloud and routing in the 2015 era, followed by WAN and campus in the 2020 era, and now AI, in our 5th generation, in the 2025 era.
Speaker 200:04:50Our EtherLink portfolio is in the midst of trials and can support up to 100,000 XPUs in a 2-tier design built on our proven and differentiated extensible OS. We are quite pleased with our progress across cloud, AI, campus and enterprise customers.
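As a quick arithmetic check, the port counts and aggregate capacities quoted above are internally consistent; a minimal sketch (the helper function name is ours, not Arista's):

```python
# Sanity-check the EtherLink capacity figures quoted on the call
# (port counts and per-port speeds as stated).

def capacity_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate one-way capacity in terabits per second."""
    return ports * gbps_per_port / 1000

# 7060X6 leaf: 64 x 800G or 128 x 400G -> 51.2 Tbps
assert capacity_tbps(64, 800) == 51.2
assert capacity_tbps(128, 400) == 51.2

# 7800R4 spine chassis: 576 x 800G or 1,152 x 400G -> 460.8 Tbps
assert capacity_tbps(576, 800) == 460.8
assert capacity_tbps(1152, 400) == 460.8
```

Both port configurations of each platform resolve to the same aggregate bandwidth, which is why the call can quote "51.2 terabits" and "460 terabits" regardless of whether the ports run at 400G or 800G.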
I would like to invite Ashwin Kohli, our newly appointed Chief Customer Officer, to describe our diverse set of customer wins in 2024. Ashwin, over to you.
Speaker 300:05:20Many thanks, Jayshree. Thank you for inviting me to my first earnings call. Let me walk everybody through the 4 global customer wins. The first example is an AI enterprise win with a large Tier 2 cloud provider, which has been heavily investing in GPUs to increase their revenue and penetrate new markets. Their senior leadership wanted to be less reliant on traditional core services and work with Arista on new reliable and scalable Ethernet fabrics.
Speaker 300:05:52Their environment consisted of new NVIDIA H100s. However, it was being connected to their legacy networking vendor, which resulted in them having significant performance and scale issues with their AI applications. The goal of our customer engagement was to refresh the front-end network to alleviate these issues. Our technical partnership resulted in deploying a 2-step migration path to alleviate the current issues using 400 gig, eventually migrating them to an 800-gig AI EtherLink in the future. The second win highlights adjacencies in both campus and routing.
Speaker 300:06:35This customer is a large data center customer, which has deployed us for almost a decade. Their team was able to leverage that success to help them demonstrate our value for their global campus network, which spans hundreds of thousands of square feet globally. The customer had considerable dissatisfaction with their current vendor, which led them to a last-minute request to create a design for their new corporate headquarters. Given only a 3-month window, Arista leveraged the existing data center design and adapted it to the campus topology with a digital twin of the design in minimal time. CloudVision was used for visibility and lifecycle management.
Speaker 300:07:17The same customer once again was struggling with extreme complexity in their routing environment as well, with multiple parallel backbones and numerous technical complexities. Arista simplified their routing network by removing legacy routers, increasing bandwidth and moving to a simple fixed form factor platform router. The core spine leverages the same EOS software, streamlining their certification procedures and instilling confidence in the stability of the products. Once again, CloudVision came to the rescue. The third example is a win in the international arena with a large automotive manufacturer that, due to its size and scale, previously had more than 3 different vendors in the data center, which created a very high level of complexity, both from a technical and also from an operational perspective.
Speaker 300:08:15The customer's key priority was to achieve a high level of consistency across the infrastructure, which is now being delivered via a single EOS binary image and CloudVision solution from Arista. Their next top priority was to use automation, consistent end-to-end provisioning and visibility, which can be delivered by the CloudVision platform. This simplification has led the customer to adopt Arista beyond the data center and extend the Arista solution into the routing component of the infrastructure, which included our 7500R3 spine platforms. This once again shows a very clear example of the same Arista one EOS and one CloudVision solution delivering multiple use cases. And Jayshree, this last win demonstrates our strength in the service provider routing space.
Speaker 300:09:09We have been at the forefront of providing innovative solutions for service provider customers for many years. As we all know, we are in the midst of optical and packet integration, and as a result, our routers support industry-leading dense 400-gig ZR+ coherent pluggable optics.
In this service provider customer example, we provided a full turnkey solution including our popular 7280R3 routers and our newly announced AWE 7250 WAN router as a BGP route reflector, along with CloudVision and professional services. We showcased our strength in supporting a wide variety of these pluggable coherent optics along with our SR and EVPN solutions, which allowed this middle-mile service provider customer to build out a 400-gig statewide backbone at cloud-scale economics. Thanks, Jayshree, and back over to you.
Speaker 200:10:11Thank you, Ashwin, and congratulations. Hot off the press is our new and highest Net Promoter Score of 87, which translates to 95%. Hats off to your team for achieving that. It's so exciting to see the momentum of our enterprise sector. As a matter of fact, as we speak, we are powering the broadcasters of the Olympics, symbolic of our commitment to the media and entertainment vertical.
Speaker 200:10:35And so it's fair to say that so far in 2024, it's proving to be better than we expected because of our position in the marketplace and because of our best-of-breed platforms for mission-critical networking. I am reminded of the 1980s, when Sun was famous for declaring "the network is the computer." Well, 40 years later, we're seeing the same cycle come true again, with the collective nature of AI training models mandating a lossless, highly available network to seamlessly connect every AI accelerator in the cluster to one another for peak job completion times. Our AI networks also connect trained models to end users and other multi-tenant systems in the front-end data center, such as storage, enabling the AI system to become more than the sum of its parts. We believe data centers are evolving to holistic AI centers, where the network is the epicenter of AI management for acceleration of applications, compute, storage and the wide area network.
Speaker 200:11:35AI centers need a foundational data architecture to deal with the multimodal AI datasets that run on our differentiated EOS network data lake systems. Arista showcased a technology demonstration of our EOS-based AI agent that can run directly on the NIC itself or alternatively inside the host. By connecting into adjacent Arista switches to continuously keep up with the current state, send telemetry or receive configuration updates, we have demonstrated the network working holistically with network interface cards such as NVIDIA BlueField, and we expect to add more NICs in the future. Well, I think the Arista purpose and vision is clearly driving our customer traction. Our networking platforms are becoming the epicenter of all digital transactions, be they campus centers, data centers, WAN centers or AI centers.
Speaker 200:12:32And with that, I'd like to turn it over to Chantelle, our Chief Financial Officer, to review the financial specifics and tell us more. Over to you, Chantelle.
Speaker 400:12:40Thanks, Jayshree. It really was great to see everyone at the New York Stock Exchange IPO celebration event. Now turning to the numbers. This analysis of our Q2 results and our guidance for Q3 is based on non-GAAP and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release.
Speaker 400:13:06Total revenues in Q2 were $1.69 billion, up 15.9% year over year, significantly above the upper end of our guidance of $1.62 billion to $1.65 billion. Growth was delivered across all three sectors of cloud, enterprise and providers. Services and subscription software contributed approximately 17.6% of revenue in the quarter, up from 16.9% in Q1. International revenues for the quarter came in at $316 million, or 18.7% of total revenue, down from 20.1% in the prior quarter.
This quarter-over-quarter decrease was driven by a relatively weaker performance in our APJ region. The overall gross margin in Q2 was 65.4%, above our guidance of 64%, up from 64.2% last quarter and up from 61.3% in the prior year quarter.
Speaker 400:13:57The year-over-year gross margin improvement was primarily driven by a reduction in inventory-related reserves. Operating expenses for the quarter were $319.8 million, or 18.9% of revenue, up from $265.0 million last quarter. R&D spending came in at $216.7 million, or 12.8% of revenue, up from $164.6 million in the last quarter. This primarily reflected increased headcount and higher new product introduction costs in the period. Sales and marketing expense was $85.1 million, or 5% of revenue, compared to $83.7 million last quarter, with a double-digit percentage increase of headcount in the quarter versus the prior year. Our G&A costs came in at $18.0 million, or 1.1% of revenue, up from $16.7 million last quarter. Our operating income for the quarter was $785.6 million, or 46.5% of revenue.
Speaker 400:14:54Other income and expense for the quarter was a favorable $70.9 million, and our effective tax rate was 21.5%. This resulted in net income for the quarter of $672.6 million, or 39.8% of revenue. Our diluted share number was 319.9 million shares, resulting in a diluted earnings per share number for the quarter of $2.10, up 32.9% from the prior year. Turning to the balance sheet. Cash, cash equivalents and investments ended the quarter at $6.3 billion.
Speaker 400:15:29In the quarter, we repurchased $172 million of our common stock at an average price of $282.20 per share. Of the $172 million, $82 million was repurchased under our prior $1.0 billion authorization, which is now complete, and the remaining $90 million was purchased under the new $1.2 billion program approved in May 2024.
The actual timing and amount of future repurchases will be dependent upon market and business conditions, stock price and other factors. Now turning to operating cash performance for Q2. We generated $989 million of cash from operations in the period, reflecting strong earnings performance with a favorable contribution from working capital.
Speaker 400:16:14DSOs came in at 66 days, up from 62 days in Q1, impacted by large service renewals at the end of the quarter. Inventory turns were 1.1 times, up from 1.0 last quarter. Inventory decreased to $1.9 billion in the quarter, down from $2.0 billion in the prior period, reflecting a reduction in our raw materials inventory. Our purchase commitments and inventory at the end of the quarter totaled $4.0 billion, up from $3.5 billion at the end of Q1. We expect this number to stabilize as supplier lead times improve, but we'll continue to have some variability in future quarters as a reflection of demand for our new product introductions.
Speaker 400:16:55Our total deferred revenue balance was $2.1 billion, up from $1.7 billion in Q1. The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis. Our product deferred revenue increased approximately $253 million versus last quarter. As a reminder, we expect 2024 to be a year of new product introductions, new customers and expanded use cases. These trends may result in increased customer trials and contracts with customer-specific acceptance clauses, and increase the variability and magnitude of our product deferred revenue balances.
Speaker 400:17:36Accounts payable days were 46 days, up from 36 days in Q1, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $3.2 million. As we enter the second half of fiscal year 2024, we are encouraged by the momentum that we see in the market.
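The per-share and margin figures reported above can be cross-checked directly from the dollar amounts given on the call; a minimal sketch:

```python
# Cross-check the reported Q2 figures: EPS, operating margin and net
# margin all follow from revenue, income and share count as stated.

revenue_m = 1690.0        # total revenue, $ millions
op_income_m = 785.6       # operating income, $ millions
net_income_m = 672.6      # net income, $ millions
diluted_shares_m = 319.9  # diluted shares, millions

eps = net_income_m / diluted_shares_m
assert round(eps, 2) == 2.10                             # reported $2.10
assert round(100 * op_income_m / revenue_m, 1) == 46.5   # reported 46.5%
assert round(100 * net_income_m / revenue_m, 1) == 39.8  # reported 39.8%
```

All three reported metrics are consistent with the underlying dollar amounts to the stated precision.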
Our existing innovative product portfolio, along with our new product introductions, is well suited for our cloud, AI, enterprise and provider customers. We will continue to invest in our R&D and go-to-market through both people and processes. With all of this as a backdrop for fiscal year 2024, our revenue growth guidance is now at least 14%.
Speaker 400:18:17Gross margin outlook remains at 62% to 64%, and operating margin is now raised to approximately 44%. Our guidance for Q3, based on non-GAAP results and excluding any non-cash stock-based compensation impacts and other non-recurring items, is as follows: revenues of approximately $1.72 billion to $1.75 billion, gross margin of approximately 63% to 64% and operating margin at approximately 44%. Our effective tax rate is expected to be approximately 21.5%, with diluted shares of approximately 321 million shares. With that, I now turn the call back to Liz. Liz?
Speaker 100:18:58Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away.
Operator00:19:12We will now begin the Q&A portion of the Arista earnings call. Our first question comes from the line of Michael Ng with Goldman Sachs. Please go ahead.
Speaker 500:19:37Hi, good afternoon. Thank you for the question. As we head into next-generation GPUs with Blackwell and the NVL36/72, there's been some discussion about whether these systems may be less modular in terms of their components, particularly for back-end networking.
Speaker 600:19:57I was just wondering if
Speaker 500:19:57you could share your views and provide some clarity on the vendor modularity of Blackwell, particularly as it relates to networking and how that might affect Arista's positioning over the next couple of years, if at all? Thank you very much.
Speaker 200:20:14Sure.
Michael, I think as the GPUs get faster and faster, obviously the dependency on the network for higher throughput is clearly related. And therefore, our timely introduction of these 800-gig products will be required, especially more for Blackwell. In terms of its connection and modularity with NVLink and the 72 ports, there's always been a market for what I call scale-up, where you're connecting the GPUs internally in a server, and the density of those GPUs connecting in the past has been more PCIe, and now NVLink, and there's a new consortium now, called UAL, that's going to specify that. I believe eventually, by the way, even there, Ethernet will win. And so that density depends more on the AI accelerator and how they choose to connect.
Speaker 200:21:05As I've often said, it's more robust technology. So eventually, where Arista plays strongly, both on the front end and back end, is on the scale-out, not on the scale-up. So independent of the modularity, whether it's a rack-based design, a chassis or multiple RU, the ports have to come out Ethernet, and those Ethernet ports will connect into scale-out switches from Arista. Great.
Speaker 600:21:30Thank you, Jayshree.
Speaker 200:21:31Thank you, Michael.
Operator00:21:33Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Speaker 700:21:39Yes. Thank you for taking the question. I guess the metric that stands out to me the most is the deferred revenue balance, up, it looks like, 95% in total year on year. And based on what you disclosed, it looks like you're now at about $520 million of product deferred. On the product deferred line, can you help us appreciate how we should think about that number?
Speaker 700:22:02Is that related to these AI opportunities? Just the cadence of how we should expect the revenue recognition from that, again, being, it looks like, almost 60% above what was previously the peak level of product deferred?
Speaker 200:22:15Yes.
No, good question, Aaron. Let me start generically, and of course I'll hand it to my CFO, Chantelle, here. Product deferred sort of ebbs and flows. It goes in, it comes out, and it's particularly high when we have a lot of new products and new use cases.
Speaker 200:22:31But it's not extraordinary to see us running a high product deferred in 1 quarter or in 1 year and then dipping down. So the long term is consistent. The short term can ebb and flow. Do you want to say a
Speaker 400:22:44few words on that? Yes. Thank you, Jayshree. The only thing I would add to that is that the deferred balance is always a mix of customers and use cases. So I wouldn't rotate on any one particular intersection of those.
Speaker 400:22:56It really is a mix of those combined.
Speaker 800:22:59Okay. Thank you.
Speaker 200:23:02Thanks, Aaron.
Operator00:23:04Our next question comes from the line of Meta Marshall with Morgan Stanley. Please go ahead.
Speaker 900:23:10Great. Thanks. Jayshree, last quarter you had mentioned kind of 4 major AI trials that you guys were a part of. Obviously, Ashwin listed off kind of the 4 wins that you had during the quarter. Just trying to get a sense of maybe whether that Tier 2 win was a part of those AI trials, or just any update on where those 4 AI trials stand or what the current count of AI trials is currently?
Speaker 900:23:35Thank you.
Speaker 200:23:37Yes. No, I'm going to speak
Speaker 400:23:38to it and I want to turn it over to
Speaker 200:23:39Ashwin since he's here with us. First of all, all four trials are largely in what I'd call cloud and AI titans. A couple of them could be classified as specialty providers as well, depending on how they end up. But those 4 are going very well. They started out as largely trials.
Speaker 200:23:59They are now moving into pilots this year, most of them. And with any luck next year, maybe we won't be saying 4 out of 5 and we can say 5 out of 5. That's my hope anyway.
But in addition to that, we have tens of smaller customers who are starting to do AI pilots. And Ashwin, you've been smack in the middle of a lot of those.
Speaker 200:24:18Maybe you want to speak to that a little
Speaker 300:24:19bit. Yes, absolutely, Jayshree. Hi, Meta. So I just wanted to clarify that the example I shared with you was more around a Tier 2 cloud provider. And if I take a step back, the types of conversations my team is having with customers are either around general-purpose enterprise customers or around Tier 2 cloud providers, right, which are different from the ones Jayshree is referring to.
Speaker 200:24:42Yes. And they tend to be early adopters.
Speaker 300:24:45Absolutely.
Speaker 200:24:45They're about to build an AI cluster. It's a reasonably small size, not classified in the 1,000s or 10,000s, but you've got to start somewhere. So they start at about a few 100 GPUs, would you say?
Speaker 300:24:56Absolutely, yes.
Speaker 900:24:58Great. Thank you.
Speaker 200:24:59Thanks, Meta.
Operator00:25:01Our next question comes from the line of Atif Malik with Citi. Please go ahead.
Speaker 1000:25:07Hi, thank you for taking my question. Jayshree, you mentioned a $70 billion TAM number in your prepared remarks. Can you help us understand what is in that TAM and how it relates to the $750 million AI networking revenue number you have provided for next year?
Speaker 200:25:25Yes, I get asked that question a lot. First of all, the TAM is far greater than the $750 million we've signed up for, and remember, that's early years. But that TAM consists of our data center TAM and our AI TAM, which we count in a more narrow fashion as how much of InfiniBand will move to Ethernet on the back end. We don't count the AI TAM that's already in the front end, which is part and parcel of our data center. And then obviously there's a campus TAM, which is very, very big.
Speaker 200:25:55It's north of $10 billion. And then there's the wide area and routing. So these 4 are the building blocks that I call the campus center, data center, AI center and WAN center. And then layered upon that is some very nice software. If you saw, we had a nice bump in software and service renewals this quarter, which would be largely centered around CloudVision, observability and security. So I would say these are the 4 building blocks and then the 3 software components on top of it.
Speaker 200:26:22Not to forget, of course, the services and support that are part of these TAMs as well. Thank you, Atif.
Operator00:26:33Our next question comes from the line of Antoine Chkaiban with New Street Research. Please go ahead.
Speaker 300:26:39Hi. I'd actually like to ask about the non-AI component of your cloud and AI segment. What can you tell us about how investments in traditional infrastructure are trending? Because we heard from other vendors that the inventory digestion is now easing. So are you seeing that too?
Speaker 200:26:59We saw that last year. We saw that there was a lot of pivot going on from the classic cloud, as I like to call it, to AI in terms of spend. And we continue to see favorable preferences to AI spend in many of our large cloud customers. Having said that, at the same time, simultaneously, we are going through a refresh cycle where many of these customers are moving from 100 to 200 or 200 to 400 gig.
Speaker 200:27:27So while we think AI will grow faster than cloud, we're betting on classic cloud continuing to be an important aspect of our contributions.
Operator00:27:38Our next question will come from the line of Amit Daryanani with Evercore. Please go ahead.
Speaker 1100:27:45Good afternoon, and thanks for taking my question. I guess just a question related to the updated 2024 guide, and I realize it's at least 14% growth for the year.
But your compares actually get much easier in the back half of the year versus what you've had in the first half. So just on an H2 versus H1 basis, is it reasonable to think growth can actually accelerate in the back half of the year? And if it doesn't, why do you think it does not accelerate in the back half?
Speaker 1100:28:08Thank you.
Speaker 400:28:10Yes. I think that Jayshree and I
Speaker 200:28:13came to this guide of
Speaker 400:28:14at least 14% because we do see multiple scenarios as we go through the second half of the year. We do expect to continue to see some acceleration in growth, but I would say that, from the perspective of the forward scenarios, we were comfortable with at least 14%, and we'll come to guide for the rest of the year.
Speaker 200:28:32Amit, look at us. We're known to be traditionally conservative. We went from 10% to 12% to 14%, and now my CFO says at least 14%. So let's see how the second half goes. But I think at this point, you should think we are confident about the second half, and we're getting increasingly confident about 2025.
Speaker 1100:28:52Perfect. Thank you.
Speaker 200:28:53Thank you.
Operator00:28:54Our next question comes from the line of Tal Liani with Bank of America. Please go ahead.
Speaker 800:29:03Was this me, by the way? Good.
Speaker 200:29:06Tal, are you there?
Speaker 800:29:07Thank you. Yes. Can you hear me?
Speaker 400:29:11Yes.
Speaker 800:29:12Okay. My question is more in line with the previous question. I calculated the implied growth in Q4, and I'm getting a much lower growth in Q4 than what we've seen this quarter or next quarter. And I'm wondering, you said conservatism in the last answer. And the question is, is it just conservatism, or is there anything special with Q4 such that the implied growth is only 9% year over year?
Speaker 800:29:44And it goes across everything. The implied revenue growth is lower, the gross margin is lower.
So some of it is conservatism, but is there anything special with 4Q, timing of recognition or seasonality or anything, that drives a lower implied guidance?
Speaker 200:30:05So Tal, if you go back to our November Analyst Day, if you call our gross margin lower, I would disagree, because I think we're just blowing past our guide. Our guide was 63% to 64%, and we have now shown 2 quarters of amazing gross margin. Hats off to Mike Capas and John McCool, Alex and the entire team for really working on disciplined cost reductions. But if you look at mix and the costs in general, etcetera, I would say you should plan on our gross margins being as we projected. They're not lower.
Speaker 200:30:38I think we just did exceptionally well the last two quarters, so it's relatively lower. That's the first thing. Second, in terms of growth, I would say we always aim for double-digit growth. We came in with 10% to 12%. And again, Q2 was just an outstanding quarter.
Speaker 200:30:51I don't want you to use it as a benchmark for how Q3 and Q4 will be. But of course, we're operating off large numbers. We'll aim to do better, but we'll have more visibility as we go into Q3, and we'll be able to give you a good sense of the year.
Speaker 800:31:07Got it. Thank you.
Speaker 100:31:08Thanks, Tal.
Operator00:31:10Our next question comes from the line of George Notter with Jefferies. Please go ahead.
Speaker 1200:31:16Hi, guys. Thanks very much. I guess I was just curious about what your expectations were coming into the quarter for product deferred revenue. I'm curious how much you thought would be added to that product deferred category in Q2. And then also, do you have a view on product deferred revenue for Q3?
Speaker 1200:31:33Thanks.
Speaker 400:31:35Yes. Hi. Nothing's changed in our philosophy that we don't guide product deferred revenue. So there's nothing new there to report. I would say, in the sense of coming into this quarter, we don't guide the product deferred revenue.
Speaker 400:31:49We have an idea, in a sense, of where we're going to land as we go through the quarter. But I would say that it met expectations from our planning and forecast process.
Speaker 1200:31:58Got it. And I assume these are new products you've shipped to customers and you're waiting for customer acceptance. Any sense for when those customer acceptances might start to flow through? Is that a 2024 event? Is that a 2025 event?
Speaker 1200:32:15How do you think about it?
Speaker 400:32:17Yes. They all have different timings because they're unique to the customer, the use case, AI, classic cloud, etcetera. So they're all unique and bespoke that way. So there's no set trending on that. And so as we roll through the quarters, they'll come off as they get deployed, and then that's where it will land from a forecasting perspective.
Speaker 400:32:34And I
Speaker 200:32:34think it's fair to say if it's AI, it takes longer. If it's classic cloud, it's shorter.
Speaker 600:32:41Great. Okay. Thank you.
Speaker 400:32:43Yes, George. Thank you.
Operator00:32:45Our next question comes from the line of Samik Chatterjee with JPMorgan. Please go ahead.
Speaker 1300:32:50Hi, thanks for taking the question, and strong results here. But if I can just ask a question on the commentary that Ashwin had in the prepared remarks. Ashwin, you mentioned the Tier 2 customer where you're refreshing the front end, as I sort of interpreted it, to alleviate some of the bandwidth concerns from the back end. How do you think about that opportunity across your customer base? Particularly, how should we think about that as being attached to the $750 million target for back-end revenues that you have for next year?
Speaker 1300:33:23Just help us think about
Speaker 300:33:28Samik. It's hard to say, right? I mean, I don't want to attach the 750 back to this one customer, right.
The goal around this one customer was to demonstrate our wins in the enterprise and in the non cloud space. But outside of that, it would be very hard to translate that to what's happening within the $750,000,000. But I don't know, Jayshree, if you've got any comments around that at all? Speaker 200:33:54Yes. I was just going to add to that. There are 4 things Ashwin and the team are seeing in the enterprise and provider sector. I think the migration to the 100 gig data center is pretty solidly going on. If anybody is still on 10 and 40 gig, they're definitely not an early adopter of technology. Speaker 200:34:10And some of them are even moving to 400 gig, I would say, right? Speaker 300:34:14Absolutely, Jayshree. Speaker 400:34:15So that's Speaker 200:34:15on the data center. Operator00:34:16Campus, I Speaker 200:34:17know in general is a slow market, but for Arista, we are still seeing a lot of desire, and you heard Ashwin talk about a campus win where they're really frustrated and struggling with existing campus deployments. So we feel really good about our $750,000,000 target for next year. The routed WAN, again, we're in both Tier 2 and service providers and even in enterprises; a lot of activity going on there. And finally, the AI trials you talked about: they tend to be smaller, but they're a representation of the confidence the customer has. They may be using other GPUs, servers, etcetera, but when it comes to the mission critical network, they've recognized the importance of best of breed reliability, availability, performance and no loss, and the familiarity with the data centers naturally leads to pilots and trials on the AI side with us. Speaker 1100:35:08Thank you. Operator00:35:09Thanks, Samik. Our next question comes from the line of Karl Ackerman with BNP Paribas. Please go ahead. Speaker 1400:35:17Yes, thank you. So there are several data points across the supply chain that indicate enterprise networking and traditional server units are beginning to recover.
I was hoping you might discuss what you are hearing from your enterprise and service provider customers on their commitment to upgrade their servers and networking gear in the next couple of quarters? Speaker 1200:35:38And as you address that, perhaps Speaker 1400:35:40you could discuss the number of new customers being added in these verticals over the last couple of quarters, out of the 10,000 or so that you have today? Thank you. Speaker 200:35:50Yes. Let me take the second question. I think we are adding systematically; we celebrated the 10,000. And so in the past, we used to add large numbers of customers; now we're adding many small customers. Speaker 200:36:03So we're pleased with the systematic add of hundreds of customers every quarter, and that's going very, very well. And what was your other question? Speaker 1400:36:16How to think about the adoption or the growth of server and networking gear for campus environments and what you're seeing there? Speaker 200:36:27Okay. So perhaps it may come as a surprise to you, but servers aren't always related to campus. Devices and users are much more related to campus, right? Servers tend to be tied to data center upgrades. So in the campus, we're tending to see 2 things right now. Speaker 200:36:43Greenfield buildings that they're planning for 2025, 2026, where we're smack in the middle of those RFPs, or they're trying to create a little oasis in the desert and prove that a post pandemic campus is much better with the leaf spine topology, wired and wireless connecting to it, and then enabling things like zero touch automation, micro segmentation capabilities, analytics, etcetera. So the campus is really turning out well. Even in a somewhat sluggish overall market, we are finding that our customers are very interested in modernizing their campus. And again, it has a lot to do with their familiarity with us in the data center, and that's translating to more success in the campus.
Thanks, Karl. Operator00:37:27Our next question comes from the line of Ben Bollin with Cleveland Research. Please go ahead. Speaker 600:37:34Good evening, everyone. Thanks for taking the question. Jayshree, I'm interested, bigger picture, as you think about back end network architectures gradually capturing more of the traditional front end. What do you think that looks like over the next several years? How quickly could that become a more realistic opportunity to capture more of that true fabric of overall compute resources? Speaker 200:37:58Well, I think there are a lot of market studies that point to the fact that today it's still largely InfiniBand. You remember me, Ben, saying we were outside looking in just a year ago. So step 1 is we're feeling very gratified that the whole world, even the InfiniBand players, have acknowledged that we're making Ethernet great again. And so I expect more and more of that back end to be Ethernet. One thing I do expect, even though we're very signed up to the $750,000,000 number, at least $750,000,000 I should say, for next year, is that it's going to become difficult to distinguish the back end from the front end when they all move to Ethernet. Speaker 200:38:35So this AI center, as we call it, is going to be a conglomeration of both the front and the back. So if I were to fast forward 3, 4 years from now, I think the AI center is a super center of both the front end and the back end. We'll be able to track it as long as there are GPUs and strictly training use cases. But if I were to fast forward, I think there'll be many more edge use cases, many more inference use cases and many more small scale training use cases, which will make that distinction difficult to make. Speaker 400:39:06Thank you, Ben. Operator00:39:08Our next question will come from the line of Alex Henderson with Needham. Please go ahead. Speaker 1500:39:13Great. Thank you very much. I was hoping we could talk a little bit about the spending biases in the cloud titans.
Clearly, there's an enormous amount of spending going into the AI front end and back end networking elements as well as the GPUs. But there is a rebounding growth rate of applications that is ultimately driving the traditional business, what has historically been called the CPU side of the data center. Speaker 1500:39:50And I'm wondering whether they're underinvesting there, and whether there is a potential for a catch up in spending in that area at some juncture because of the overbias to AI, or whether that investment is ongoing at a reasonable rate consistent with a moderate acceleration in application growth? Speaker 200:40:14Alex, it's a very thought provoking question. I would say there's such a heavy bias in the cloud titans towards training and super training, and the bigger and better the GPUs, the billions of parameters, the OpenAI ChatGPT and Llama models, that you're absolutely right that at some level the classic cloud, what you call traditional, I'm still calling classic, was a little bit neglected last year and this year. Having said that, I think once the training models are established, I believe this will come back, and it will sort of be a virtuous cycle that feeds on itself. But at the moment, we're seeing more activity on the AI side and more moderate activity on the cloud. Speaker 1500:40:57Does the reacceleration of application growth, excluding AI, play into that? I mean, clearly that Speaker 200:41:05I think it does. I don't know how to measure it, but the more AI back end we put in, we expect that to put pressure on the front end of X percent. We're still trying to assess whether that's 10%, 20%, 30%. And so we do not count that in our $750,000,000 number; to be accurate, that counts only GPU native connections. Speaker 200:41:26But absolutely, as the 2 holistically come together to form this AI center, I believe the front end will see pressure. Speaker 1500:41:34Great.
Thank you so much, and thanks for the great quarter. Speaker 200:41:37Thank you, Alex. Appreciate your support. Operator00:41:41Our next question comes from the line of Ben Reitzes with Melius Research. Please go ahead. Speaker 1600:41:48Hi, Jayshree and Chantal. This is Jack Adair for Ben. Congrats on the good quarter. We were wondering if you could comment on the competitive environment, and if you're seeing Spectrum-X from NVIDIA, and if so, how you're doing against it? Speaker 200:42:02Yes. First, I just want to say, when you say competitive environment, it's complicated with NVIDIA, because we really consider them a friend on the GPUs as well as the NICs. So not quite a competitor, but absolutely we will compete with them on the Spectrum switch. We have not seen Spectrum-X except in one customer where it was bundled, but otherwise we feel pretty good about our win rate and our success for a number of reasons: great software, a proven portfolio of products and architecture, performance, visibility features, management capabilities, high availability. Speaker 200:42:37And so I think it's fair to say that if a customer were bundling with their GPUs, then we wouldn't see it. If a customer were looking for best of breed, we absolutely see it and win it. Speaker 100:42:49Thanks, Jack. Speaker 300:42:50Thank you for that. Operator00:42:52Our next question will come from the line of James Fish with Piper Sandler. Please go ahead. Speaker 1700:42:58Thanks for the question. Just wanted to circle back on the enterprise side of things. I guess, is there a way to think about how many replacements you're seeing relative to prior periods? And really, I'm trying to understand if we're starting to see that core enterprise data center network refresh actually pick up, versus the share gains that you guys have historically seen?
And is there an underlying change in enterprise customer behavior, whether it's for the data center or, Jayshree, I know you were talking about the campus earlier. Speaker 200:43:30Yes, James, let me talk, and Ashwin, I'm sure you have more to add since you're closer to the problem. I believe we have 3 classes of enterprise customers. The early adopters: I think in that category, Ashwin's team is seeing a lot of refresh going on; they already have 100 gig and they're potentially planning their 400 gig, right? Then the fast followers, and those guys, I think, are still looking at 100 gig migrations, right? And then the real risk averse, and we're still getting to know them because this is an untapped opportunity for us, right? And probably a 4th category, where some of them are disillusioned with the public cloud and want to repatriate some of their workloads back into the data center. Speaker 200:44:09So I would say there's activity in all four, no saturation, still a lot of opportunity for us, largely in the speed upgrades and also in the class of customers and what stage they are at. Ashwin, do you Operator00:44:23want to add anything to that? Speaker 300:44:24Yes, sure, Jayshree. Right. And so to answer your question, from what I'm seeing from customers, they're kind of fed up with what's being deployed in the data center specifically: something which is proprietary lock in, right, something which does not give them the flexibility to join multiple use cases such as data center, campus and routing. They want something that just works. They want something that is just simple. They want to make sure that when they wake up in the morning, the network is not down. Speaker 300:44:54And so Arista today actually has a brand, it has a value there, and we've actually been delivering this for the last 10 plus years. So James, I would say that message is echoing successfully in our existing customers.
We're taking that risk out not only in the single use case of the data center, but expanding across data center, campus, routing and WAN, and the team is eagerly working with a new set of Global 2000 and Fortune 500 customers to evangelize the message to them as well. Operator00:45:26Our next question will come from the line of Ittai Kidron with Oppenheimer. Please go ahead. Speaker 1800:45:32Thanks. Nice numbers, ladies. I wanted to go back to gross margin, Jayshree and Chantal, if you don't mind. In your prepared remarks, you talked about manufacturing efficiencies, cost reductions. I guess I'm wondering why is that not something that carries forward to the next quarter as well? Speaker 1800:45:49I understand you're trying to be conservative, but I'm sure these are not changes that have a very short life span; they can carry forward. So why not be a bit more optimistic on what the gross margin outlook should be? Speaker 400:46:03Yes. No, it's a great question. And so our goal is always to try to do better than our guide. But within that guide, what's mostly influencing the second half consideration is the expected mix of customers. As you can appreciate, we do have different mixes depending on the demographics of who we're selling to. Speaker 400:46:24So I think that's a big part of our second half, why we kept the guide as it is. We will keep looking for variable cost productivity and cost management as we go through and hope to deliver more. But at this point in time, what's mostly built into it is the mix assumption in the second half. Speaker 200:46:39And if you look at it, this isn't just this quarter; John and the team have done a fantastic job for the last year. So yes, I'm going to go back to them and ask for more, but I think they will say they've done so much that it might be squeezing blood out of a stone. So we'll see if they respond with more cost reductions. But I absolutely agree with Chantal.
It's largely driven by mix. Speaker 200:46:59And I think we've taken a lot of the cost reductions out in the last year. Speaker 1800:47:04Appreciate it. Operator00:47:06Thank you. Our next question will come from the line of Simon Leopold with Raymond James. Please go ahead. Speaker 600:47:12Thank you very much. I know that you typically hold off on the 10% customer disclosures until the end of the fiscal year. But what I'm hoping to gather is how customer concentration may be evolving in comparison to last year. Do you expect, with your past customers growing their spending so much, that it stays similar, or, with the diversity and the new opportunities, does the concentration you've had historically decline? Any kind of indication you could offer, I'd appreciate it. Speaker 200:47:46Simon, I'll try, but you're right to say that we don't know. It's only half the year. I expect both Microsoft and Meta to be greater than 10% customers for us. I don't expect any other 10% concentration. Now with Microsoft and Meta, how they will pivot to AI and how they will reduce the spend and all of those things, that movie will play out in the next 6 months. Speaker 200:48:07So we'll know better. But at this point, I think you can assume they won't be exactly the same. Some may go up, some may go down. But these are 2 extremely vital, strategic customers. We co develop with them. Speaker 200:48:21We partner with them very, very well. And we expect to do well with them both in cloud and AI, depending on their priorities, of course. Speaker 600:48:30And does the pipeline suggest you can have new 10% customers next year, or do you expect a similar concentration next year? Speaker 200:48:41We're not aware of any customer that will approach 10% this year or next year. Speaker 600:48:46Thank you very much. Speaker 200:48:48Sure, Simon. Operator00:48:50Our next question comes from the line of Tim Long with Barclays. Please go ahead. Speaker 1900:48:55Thank you.
Maybe we'll go over to software and services, since the AI stuff has been beaten up a little bit here. In a good way. You talked a lot about some of the software capabilities. It seems like Arista might be leaning in a little bit more to this revenue line. Speaker 1900:49:21It's been growing faster than the hardware product line the last multiple quarters. So could you talk about the sustainability of that strength and the focus you guys have on this services and software area, and what that would mean for the growth rate going forward? Thank you. Speaker 200:49:40Yes. Thank you. Thank you for the change in question. I appreciate that. I think we will be in the teens for some time, because there are 3 building blocks, and a lot of this, Ashwin, you know very well. Speaker 200:49:53There's the services building block: as product goes up, there's a lot of pressure on services as a percentage of revenue to be lower, right? So while it may historically have been in the teens, it can be lower. The second one is our perpetual software, which again is a strong function of use cases, particularly things like routing, etcetera, where we've done very, very well. The stronger the use case, the better we do there. An extension of that is CloudVision, which is more of a subscription service, with CloudVision as a service either as network as a service or on the premises. Speaker 200:50:29So that's the second building block for us, and it's going strong. The one I want to point to a little more, which could help us, is security and observability. You may know in May we introduced micro and macro segmentation. We also announced UNO, our unified network observability. And while this is new, Ashwin and I have great plans for it, and I think this could be a swing factor. Speaker 200:50:54As the services component may reduce over time, these new product components may increase. So 17 point whatever percent of revenue is a great number.
And if we can consistently stay there, I'd be very proud, particularly as our numbers get Speaker 400:51:10larger. Operator00:51:13Our next question comes from the line of Sebastien Naji with William Blair. Please go ahead. Speaker 1400:51:19Hey, thanks for taking the question, and sorry to bring you guys back to the AI conversation. But this one is a little bit more high level. I guess we keep hearing about bigger and bigger AI clusters that are being built. And as Arista is working on connecting these larger and larger clusters as they scale, I'm just wondering, does that impact your ability to capture more revenue? You've talked about 15% of CapEx. Speaker 1400:51:45Does that maybe change, or go up or go down, as these clusters that you're having to connect get bigger and bigger? Speaker 200:51:52Yes. So if you look at an AI network design, you can look at it through 2 lenses: just through the compute, in which case you look at scale up and strictly at how many processors there are. But when we look at an AI network design, it's the number of GPUs or XPUs per workload. The distribution and location of these GPUs are important. And whether the cluster has multiple tenants, and how it's divvied up between the host, the memory, the storage and the wide area, plays a role. Speaker 200:52:21Then there are the optimizations you make on the applications, the collective communication libraries for specific workloads, levels of resilience, how much redundancy you want to put in, active active, link based load balancing, types of visibility. So the metrics just keep getting more numerous. There are many more permutations and combinations. But it all starts with the number of GPUs, performance and billions of parameters. The training models are definitely centered around job completion time. Speaker 200:52:47But then there are multiple concentric circles of additional things we have to add to that network design.
All this to say, a network design centric approach has to be taken for these GPU clusters. Otherwise, you end up being very siloed. And that's really what we're working on. So it goes beyond scale and performance to some of these other metrics I mentioned. Speaker 100:53:10Thanks, Sebastien. Speaker 600:53:11Got it. Speaker 100:53:11Operator, we have time for one last question, please. Operator00:53:15Our final question comes from the line of David Vogt with UBS. Please go ahead. Great. Speaker 2000:53:21Thanks, guys, for squeezing me in. Maybe just to bring it back together, and maybe both for Jayshree and Chantal. I guess what I'm trying to think through, with your product introduction, is this: you have a pretty strong ramp of AI likely next year. But does the guide imply that we're going to start to see a much bigger contribution in Q4, driven by the comments around mix earlier and the gross margin discussion? Because I would imagine, early in the stage of their life cycle, plus the fact that they're hyperscalers, they're going to have a relatively more modest gross margin profile at the beginning of the glide path versus the end of the glide path. Speaker 2000:53:55So just any color there in terms of what Q4 might look like from an AI perspective, relative to your expectations from a glide path perspective? Thank you. Speaker 200:54:04Yes. Let me just remind you of how we are approaching 2024, including Q4, right? Last year, trials, so small that it was not material. This year, we're definitely going into pilots. Some of the GPU clusters, and you've seen this in public blogs published by some of our customers, have already gone into the tens of thousands, to 24,000, and are heading towards 50,000 GPUs. Speaker 200:54:29Next year, I think there'll be many of them heading into the tens of thousands, aiming for 100,000 GPUs. So I see next year as more promising.
Some of them might happen this year, but I think we're very much going from trials to pilots, trials being in the hundreds, and this year we're in the thousands. But I wouldn't focus on Q4. I'd focus on the entire year and say, yes, we've gone into the thousands. Speaker 200:54:54And I like Chantal's term for this, glide path, right? So we expect AI to be single digits, a small percentage of our total revenue, this year. But we are really, really expecting next year to be the $750,000,000 year or more. Yes. Yes, I think so. Speaker 400:55:09I completely agree, Jayshree. The only thing I would add to it is you have to think of the kind of matrix we're working within. So we have cloud and enterprise customers, and we have very, very different scopes of readiness at those customers. So Q3, Q4, Q1, Q2 next year, they're all eligible for timing, for types, etcetera. So we just want to make sure that the variability I spoke to in my prepared remarks is understood in context. Speaker 200:55:35Yes. And simple things like power and cooling are affecting the ability to deploy at massive scale. So there's nothing magic about Q4. There's plenty of magic about the year in its entirety and next year. Great. Speaker 2000:55:48Thanks, appreciate it. Thanks, Chantal. Thanks, Liz. Speaker 100:55:51Thanks, David. This concludes the Arista Networks Q2 2024 Earnings Call. We have posted a presentation, which provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today, and thank you for your interest in Arista. Operator00:56:07Thank you for joining, ladies and gentlemen. This concludes today's call.
You may now disconnect.
International contribution for the quarter registered at 19%, with the Americas strong at 81%. Speaker 200:02:46As we celebrated our 10th anniversary at the New York Stock Exchange with our near and dear investors and customers, we are now supporting over 10,000 customers with a cumulative 100,000,000 ports deployed worldwide. In June 2024, we launched Arista's EtherLink AI platforms, which are Ultra Ethernet Consortium compatible, validating the migration from InfiniBand to Ethernet. This is a rich portfolio of 800 gig products, not just a point product, but in fact a complete portfolio that is both NIC and GPU agnostic. The AI portfolio consists of the 7060X6 AI Leaf switch, which supports 64 800 gig or 128 400 gig Ethernet ports with a capacity of 51 terabits per second. The 7800R4 AI Spine is the 4th generation of Arista's flagship 7800, offering 100% non blocking throughput with a proven virtual output queuing architecture. Speaker 200:03:48The 7800R4 supports up to 460 terabits in a single chassis, corresponding to 576 800 gigabit Ethernet ports or 1,152 400 gigabit port density.
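As a sanity check on the figures quoted in these remarks (an editorial illustration, not Arista documentation), the port densities and terabit capacities can be verified with simple arithmetic, and the same math sizes a two-tier leaf-spine fabric at this radix. The helper names below are mine, the port counts are my reading of the transcript's figures, and the two-tier model assumes a non-blocking design with half of each leaf's ports facing hosts:

```python
# Illustrative arithmetic only: port counts are my reading of the
# transcript's quoted figures, not official specifications.

def capacity_tbps(ports: int, gbps_per_port: int) -> float:
    """One-way aggregate port capacity in terabits per second."""
    return ports * gbps_per_port / 1000.0

# 7060X6 AI leaf: 64 x 800G or 128 x 400G, quoted as ~51 Tbps.
assert capacity_tbps(64, 800) == capacity_tbps(128, 400) == 51.2

# 7800R4 AI spine: 576 x 800G or 1,152 x 400G, quoted as ~460 Tbps.
assert capacity_tbps(576, 800) == capacity_tbps(1152, 400) == 460.8

def two_tier_endpoints(leaf_ports: int, spine_ports: int) -> int:
    """Max endpoints of a non-blocking 2-tier leaf-spine fabric:
    each leaf splits its radix half down (hosts) and half up (uplinks),
    and each spine port connects one leaf, so the fabric tops out at
    spine_ports leaves x leaf_ports // 2 hosts per leaf."""
    return spine_ports * (leaf_ports // 2)

# With 576-port switches at both tiers, a 2-tier design supports
# well over 100,000 endpoints.
assert two_tier_endpoints(576, 576) == 165_888
```

If the reconstructed port counts are right, both quoted densities for each switch yield the same aggregate capacity, and a two-tier fabric of 576-port switches comfortably exceeds 100,000 endpoints.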
The 7700R4 AI Distributed EtherLink Switch is a unique product offering with a massively parallel distributed scheduling and congestion free traffic spraying fabric. The 7700 represents the first in a new series of ultra scalable intelligent distributed systems that can deliver the highest consistent throughput for very large AI clusters. Let's just say once again, Arista is making Ethernet great. First, we began this journey with low latency in the 2009 timeframe, and then there was cloud and routing in the 2015 era, followed by WAN and campus in the 2020 era, and now AI in our 5th generation in the 2025 era. Speaker 200:04:50Our EtherLink portfolio is in the midst of trials and can support up to 100,000 XPUs in a 2 tier design built on our proven and differentiated extensible OS. We are quite pleased with our progress across cloud, AI, campus and enterprise customers. I would like to invite Ashwin Kohli, our newly appointed Chief Customer Officer, to describe our diverse set of customer wins in 2024. Ashwin, over to you. Speaker 300:05:20Many thanks, Jayshree. Thank you for inviting me to my first earnings call. Let me walk everybody through the 4 global customer wins. The first example is an AI enterprise win with a large Tier 2 cloud provider, which has been heavily investing in GPUs to increase their revenue and penetrate new markets. Their senior leadership wanted to be less reliant on traditional core services and work with Arista on new reliable and scalable Ethernet fabrics. Speaker 300:05:52Their environment consisted of new NVIDIA H100s. However, it was connected to their legacy networking vendor's equipment, which resulted in them having significant performance and scale issues with their AI applications. The goal of our customer engagement was to refresh the front end network to alleviate these issues.
Our technical partnership resulted in deploying a 2 step migration path to alleviate the current issues using 400 gig, eventually migrating them to an 800 gig AI Ethernet link in the future. The second win highlights adjacencies in both campus and routing. Speaker 300:06:35This customer is a large data center customer, which has deployed us for almost a decade. Their team was able to leverage that success to help them demonstrate our value for their global campus network, which spans hundreds of thousands of square feet globally. The customer had considerable dissatisfaction with their current vendor, which led them to a last minute request to create a design for the new corporate headquarters. Given only a 3 month window, Arista leveraged the existing data center design and adapted it to the campus topology with a digital twin of the design in minimal time. CloudVision was used for visibility and lifecycle management. Speaker 300:07:17The same customer once again was struggling with extreme complexity in their routing environment as well, with multiple parallel backbones and numerous technical complexities. Arista simplified their routing network by removing legacy routers, increasing bandwidth and moving to a simple fixed form factor platform router. The core spine leverages the same EOS software, streamlining their certification procedures and instilling confidence in the stability of the products. Once again, CloudVision came to the rescue. The third example is a win in the international arena with a large automotive manufacturer that, due to its size and scale, previously had more than 3 different vendors in the data center, which created a very high level of complexity, both from a technical and also from an operational perspective. Speaker 300:08:15The customer's key priority was to achieve a high level of consistency across the infrastructure, which is now being delivered via a single EOS binary image and CloudVision solution from Arista.
Their next top priority was to use automation, consistent end to end provisioning and visibility, which can be delivered by the CloudVision platform. This simplification has led the customer to adopt Arista beyond the data center and extend the Arista solution into the routing component of the infrastructure, which included our 7500R3 Spine platforms. This once again shows a very clear example of the same Arista one EOS and one CloudVision solution delivering multiple use cases. And Jayshree, this last win demonstrates our strength in the service provider routing space. Speaker 300:09:09We have been at the forefront of providing innovative solutions for service provider customers for many years. As we all know, we are in the midst of optical and packet integration; as a result, our routers support industry leading dense 400 gig ZR+ coherent pluggable optics. In this service provider customer example, we provided a full turnkey solution including our popular 7280R3 routers and our newly announced AWE 7250 WAN router as a BGP route reflector, along with CloudVision and Professional Services. We showcased our strength in supporting a wide variety of these pluggable coherent optics along with our SR and EVPN solutions, which allowed this middle mile service provider customer to build out a 400 gig statewide backbone at cloud scale economics. Thanks, Jayshree, and back over to you. Speaker 200:10:11Thank you, Ashwin, and congratulations. Hot off the press is our new and highest Net Promoter Score of 87, which translates to 95%. Hats off to your team for achieving that. It's so exciting to see the momentum of our enterprise sector. As a matter of fact, as we speak, we are powering the broadcasters of the Olympics, symbolic of our commitment to the media and entertainment vertical.
Speaker 200:10:35And so it's fair to say that so far in 2024, it's proving to be better than we expected because of our position in the marketplace and because of our best of breed platforms for mission critical networking. I am reminded of the 1980s when Sun was famous for declaring the network is the computer. Well, 40 years later, we're seeing the same cycle come true again with the collective nature of AI training models mandating a lossless, highly available network to seamlessly connect every AI accelerator in the cluster to one another for peak job completion times. Our AI networks also connect trained models to end users and other multi tenant systems in the front end data center, such as storage, enabling the AI system to become more than the sum of its parts. We believe data centers are evolving to holistic AI centers, where the network is the epicenter of AI management for acceleration of applications, compute, storage and the wide area network. Speaker 200:11:35AI centers need a foundational data architecture to deal with the multimodal AI datasets that run on our differentiated EOS network data lake systems. Arista showcased a technology demonstration of our EOS based AI agent that can directly connect on the NIC itself or alternatively inside the host. By connecting into adjacent Arista switches to continuously keep up with the current state, send telemetry or receive configuration updates, we have demonstrated the network working holistically with network interface cards such as NVIDIA BlueField, and we expect to add more NICs in the future. Well, I think the Arista purpose and vision is clearly driving our customer traction. Our networking platforms are becoming the epicenter of all digital transactions, be they campus centers, data centers, WAN centers or AI centers. Speaker 200:12:32And with that, I'd like to turn it over to Chantelle, our Chief Financial Officer, to review the financial specifics and tell us more. Over to you, Chantelle.
Speaker 400:12:40Thanks, Jayshree. It really was great to see everyone at the New York Stock Exchange IPO celebration event. Now turning to the numbers. This analysis of our Q2 results and our guidance for Q3 is based on non GAAP and excludes all non cash stock based compensation impacts, certain acquisition related charges and other non recurring items. A full reconciliation of our selected GAAP to non GAAP results is provided in our earnings release. Speaker 400:13:06Total revenues in Q2 were $1,690,000,000, up 15.9% year over year, significantly above the upper end of our guidance of $1,620,000,000 to $1,650,000,000. Growth was delivered across all three sectors of cloud, enterprise and providers. Services and subscription software contributed approximately 17.6% of revenue in the quarter, up from 16.9% in Q1. International revenues for the quarter came in at $316,000,000 or 18.7 percent of total revenue, down from 20.1% in the prior quarter. This quarter over quarter decrease was driven by a relatively weaker performance in our APJ region. The overall gross margin in Q2 was 65.4 percent, above our guidance of 64%, up from 64.2% last quarter and up from 61.3% in the prior year quarter. Speaker 400:13:57The year over year gross margin improvement was primarily driven by a reduction in inventory related reserves. Operating expenses for the quarter were $319,800,000 or 18.9 percent of revenue, up from last quarter at $265,000,000. R and D spending came in at $216,700,000 or 12.8 percent of revenue, up from $164,600,000 in the last quarter. This primarily reflected increased headcount and higher new product introduction costs in the period. Sales and marketing expense was $85,100,000 or 5 percent of revenue compared to $83,700,000 last quarter, with a double digit percentage increase of headcount in the quarter versus the prior year.
Our G and A costs came in at $18,000,000 or 1.1 percent of revenue, up from last quarter at $16,700,000. Our operating income for the quarter was $785,600,000 or 46.5 percent of revenue. Speaker 400:14:54Other income and expense for the quarter was a favorable $70,900,000 and our effective tax rate was 21.5%. This resulted in net income for the quarter of $672,600,000 or 39.8 percent of revenue. Our diluted share number was 319,900,000 shares, resulting in a diluted earnings per share number for the quarter of $2.10, up 32.9% from the prior year. Turning to the balance sheet. Cash, cash equivalents and investments ended the quarter at $6,300,000,000. Speaker 400:15:29In the quarter, we repurchased $172,000,000 of our common stock at an average price of $282.20 per share. Of the $172,000,000, $82,000,000 was repurchased under our prior $1,000,000,000 authorization, which is now complete, and the remaining $90,000,000 was purchased under the new program of $1,200,000,000 approved in May 2024. The actual timing and amount of future repurchases will be dependent upon market and business conditions, stock price and other factors. Now turning to operating cash performance for the Q2. We generated $989,000,000 of cash from operations in the period, reflecting strong earnings performance with a favorable contribution from working capital. Speaker 400:16:14DSOs came in at 66 days, up from 62 days in Q1, impacted by large service renewals at the end of the quarter. Inventory turns were 1.1 times, up from 1 turn last quarter. Inventory decreased to $1,900,000,000 in the quarter, down from $2,000,000,000 in the prior period, reflecting a reduction in our raw materials inventory. Our purchase commitments and inventory at the end of the quarter totaled $4,000,000,000, up from $3,500,000,000 at the end of Q1.
We expect this number to stabilize as supplier lead times improve, but we'll continue to have some variability in future quarters as a reflection of demand for our new product introductions. Speaker 400:16:55Our total deferred revenue balance was $2,100,000,000, up from $1,700,000,000 in Q1. The majority of the deferred revenue balance is services related and directly linked to the timing and term of service contracts, which can vary on a quarter by quarter basis. Our product deferred revenue increased approximately $253,000,000 versus last quarter. As a reminder, we expect 2024 to be a year of new product introductions, new customers and expanded use cases. These trends may result in increased customer trials and contracts with customer specific acceptance clauses, and increase the variability and magnitude of our product deferred revenue balances. Speaker 400:17:36Accounts payable days was 46 days, up from 36 days in Q1, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $3,200,000. As we enter the second half of fiscal year 2024, we are encouraged by the momentum that we see in the market. Our existing innovative product portfolio, along with our new product introductions, is well suited for our cloud, AI, enterprise and provider customers. We will continue to invest in our R and D and go to market through both people and processes. With all of this as a backdrop for fiscal year 2024, our revenue growth guidance is now at least 14%. Speaker 400:18:17Gross margin outlook remains at 62% to 64%, and operating margin is now raised to approximately 44%. Our guidance for the Q3, based on non GAAP results and excluding any non cash stock based compensation impacts and other non recurring items, is as follows: revenues of approximately $1,720,000,000 to $1,750,000,000, gross margin of approximately 63% to 64%, and operating margin at approximately 44%.
Our effective tax rate is expected to be approximately 21.5% with diluted shares of approximately 321,000,000 shares. With that, I now turn the call back to Liz. Liz? Speaker 100:18:58Thank you, Chantelle. We will now move to the Q and A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away. Operator00:19:12We will now begin the Q and A portion of the Arista earnings call. Our first question comes from the line of Michael Ng with Goldman Sachs. Please go ahead. Speaker 500:19:37Hi, good afternoon. Thank you for the question. As we head into next generation GPUs with Blackwell and the NVL36/72, there's been some discussion about whether these systems may be less modular in terms of their components, particularly for back end networking. Speaker 500:19:57I was just wondering if you could share your views and provide some clarity on the vendor modularity of Blackwell, particularly as it relates to networking, and how that might affect Arista's positioning over the next couple of years, if at all? Thank you very much. Speaker 200:20:14Sure. Michael, I think as the GPUs get faster and faster, obviously the dependency on the network for higher throughput is clearly related. And therefore, our timely introduction of these 800 gig products will be required, especially more for Blackwell. In terms of its connection and modularity with NVLink and the 72 port design, there's always been a market for what I call scale up, where you're connecting the GPUs internally in a server, and the density of those GPUs connecting in the past has been more PCIe and SXM and now NVLink, and there's a new consortium now called UALink that's going to specify that. I believe eventually, by the way, even there Ethernet will win. And so that density depends more on the AI accelerator and how they choose to connect.
Speaker 200:21:05As I've often said, Ethernet is the more robust technology. So eventually where Arista plays strongly, both on the front end and back end, is on the scale out, not on the scale up. So independent of the modularity, whether it's a rack based design, a chassis or multiple RU, the ports have to come out Ethernet, and those Ethernet ports will connect into scale out switches from Arista. Great. Speaker 600:21:30Thank you, Jayshree. Speaker 200:21:31Thank you, Michael. Operator00:21:33Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead. Speaker 700:21:39Yes. Thank you for taking the question. I guess the metric that stands out to me the most is the deferred revenue balance, up it looks like 95% in total year on year. And based on what you disclosed, it looks like you're now at about $520,000,000 of product deferred. On the product deferred line, can you help us appreciate how do we think about that number? Speaker 700:22:02Is that related to these AI opportunities? Just the cadence of how we should expect the revenue recognition from that, again being it looks like almost 60% above what was previously the peak level of product deferred? Speaker 200:22:15Yes. No, good question, Aaron. Let me start generically and of course I'll hand to my CFO, Chantelle, here. Product deferred sort of ebbs and flows. It goes in, it comes out, and it's particularly high when we have a lot of new products and new use cases. Speaker 200:22:31But it's not extraordinary to see us running a high product deferred in 1 quarter or in 1 year and then dipping down. So the long term is consistent. The short term can ebb and flow. Do you want to say a few words on that? Speaker 400:22:44Yes. Thank you, Jayshree. The only thing I would add to that is that the deferred balance is always a mix of customers and use cases. So I wouldn't rotate on any one particular intersection of those. Speaker 400:22:56It really is a mix of those combined.
Speaker 800:22:59Okay. Thank you. Speaker 200:23:02Thanks, Aaron. Operator00:23:04Our next question comes from the line of Meta Marshall with Morgan Stanley. Please go ahead. Speaker 900:23:10Great. Thanks. Jayshree, last quarter you had mentioned kind of 4 major AI trials that you guys were a part of. Obviously, Ashwin listed off kind of the 4 wins that you had during the quarter. Just trying to get a sense of maybe if that Tier 2 win was a part of those AI trials, or just any update on where those 4 AI trials stand or what the current count of AI trials is currently? Speaker 900:23:35Thank you. Speaker 200:23:37Yes. No, I'm going to speak to it and I want to turn it over to Ashwin since he's here with us. First of all, all four trials are largely in what I'd call cloud and AI titans. A couple of them could be classified as specialty providers as well, depending on how they end up. But those 4 are going very well. They started out as largely trials. Speaker 200:23:59They are now moving into pilots this year, most of them. And with any luck next year, maybe we won't be saying 4 out of 5 and we can say 5 out of 5. That's my hope anyway. But in addition to that, we have tens of smaller customers who are starting to do AI pilots. And Ashwin, you've been stuck in the middle of a lot of those. Speaker 200:24:18Maybe you want to speak to that a little bit. Speaker 300:24:19Yes, absolutely, Jayshree. Hi, Meta. So just wanted to clarify, the example that I shared with you was more around a Tier 2 cloud provider. And if I take a step back, the types of conversations my team is having with customers is either around general purpose enterprise customers or it's around Tier 2 cloud providers, right, which are different from the ones Jayshree is referring to. Speaker 200:24:42Yes. And they tend to be early adopters. Speaker 300:24:45Absolutely. Speaker 200:24:45They're about to build an AI cluster.
It's a reasonably small size, not in the 1,000s or 10,000s, but you've got to start somewhere. So they started with about a few 100 GPUs, would you say? Speaker 300:24:56Absolutely, yes. Speaker 900:24:58Great. Thank you. Speaker 200:24:59Thanks, Meta. Operator00:25:01Our next question comes from the line of Atif Malik with Citi. Please go ahead. Speaker 1000:25:07Hi, thank you for taking my question. Jayshree, you mentioned a $70,000,000,000 TAM number in your prepared remarks. Can you help us understand what is in that TAM and how does that relate to the $750,000,000 AI networking revenue number you have provided for next year? Speaker 200:25:25Yes, I get asked that question a lot. First of all, the TAM is far greater than the $750,000,000 we've signed up for, and remember that's early years. But that TAM consists of our data center TAM and our AI TAM, which we count in a more narrow fashion as how much of InfiniBand will move to Ethernet on the back end. We don't count the AI TAM that's already in the front end, which is part and parcel of our data center. And then obviously there's a campus TAM, which is very, very big. Speaker 200:25:55It's north of $10,000,000,000. And then there's the wide area and routing. So these 4 are the building blocks that I call the campus center, data center, AI center and WAN center. And then layered upon that is some very nice software. If you saw, we had a nice bump in software and service renewals this quarter, which would be largely centered around CloudVision, observability and security. So I would say these are the 4 building blocks and then the 3 software components on top of it. Speaker 200:26:22Not to forget, of course, the services and support that are part of these TAMs as well. Thank you, Atif. Operator00:26:33Our next question comes from the line of Antoine Chkaiban with New Street Research. Please go ahead. Speaker 300:26:39Hi.
I'd like actually to ask about the non AI component of your cloud and AI segment. What can you tell us about how investments in traditional infrastructure are trending? Because we heard from other vendors that the inventory digestion is now easing. So are you seeing that too? Speaker 200:26:59We saw that last year. We saw that there was a lot of pivot going on from the classic cloud, as I like to call it, to the AI in terms of spend. And we continue to see favorable preferences to AI spend in many of our large cloud customers. Having said that, at the same time, we are going through a refresh cycle where many of these customers are moving from 100 to 200 or 200 to 400 gig. Speaker 200:27:27So while we think AI will grow faster than cloud, we're betting on classic cloud continuing to be an important aspect of our contributions. Operator00:27:38Our next question will come from the line of Amit Daryanani with Evercore. Please go ahead. Speaker 1100:27:45Good afternoon and thanks for taking my question. I guess just a question related to the updated 2024 guide, and I realize it's at least 14% growth for the year. But your compares actually get much easier in the back half of the year versus what you've had in the first half. So just on an H2 versus H1 basis, is it reasonable to think growth can actually accelerate in the back half of the year? And if it doesn't, why do you think it does not accelerate in the back half? Speaker 1100:28:08Thank you. Speaker 400:28:10Yes. I think that Jayshree and I came to this guide of at least 14% because we do see multiple scenarios as we go through the second half of the year. We do expect to continue to see some acceleration in growth, but I would say that from the perspective of the forward scenarios, we were comfortable with at least 14%, and we'll come back to guide for the rest of the year. Speaker 200:28:32Amit, look at us. We're known to be traditionally conservative.
We went from 10% to 12% to 14%, and now my CFO says at least 14%. So let's see how the second half goes. But I think at this point, you should think we are confident about the second half and we're getting increasingly confident about 2025. Speaker 1100:28:52Perfect. Thank you. Speaker 200:28:53Thank you. Operator00:28:54Our next question comes from the line of Tal Liani with Bank of America. Please go ahead. Speaker 800:29:03Was this me, by the way? Good. Speaker 200:29:06Tal, are you there? Speaker 800:29:07Thank you. Yes. Can you hear me? Speaker 400:29:11Yes. Speaker 800:29:12Okay. My question is more in line with kind of the previous question. I calculated the implied growth in the Q4 and I'm getting a much lower growth in the Q4 than what we've seen this quarter or next quarter. And I'm wondering if it's the conservatism you mentioned in the last answer. And the question is, is it just conservatism, or is there anything special with the Q4 that the implied growth is only 9% year over year? Speaker 800:29:44And it goes across everything. The implied revenue growth is lower, the gross margin is lower. So some of it is conservatism, but is there anything special with 4Q timing of recognition or seasonality or anything that drives a lower implied guidance? Speaker 200:30:05So Tal, if you go back to our November Analyst Day, to call our gross margin lower, I would disagree, because I think we're just blowing past our guide. Our guide was 63 to 64 and we have now shown 2 quarters of amazing gross margin. Hats off to Mike Capas and John McCool, Alex and the entire team for really working on disciplined cost reductions. But if you look at mix and the costs in general, etcetera, I would say you should plan on our gross margins being as we projected. They're not lower. Speaker 200:30:38I think we just did exceptionally well the last two quarters, so it's relatively lower. That's the first thing.
Second, in terms of growth, I would say we always aim for double digit growth. We came in with 10% to 12%. And again, Q2 is just an outstanding quarter. Speaker 200:30:51I don't want you to use it as a benchmark for how Q3, Q4 will be. But of course, we're operating off large numbers. We'll aim to do better, but we'll have more visibility as we go into Q3 and we'll be able to give you a good sense of the year. Speaker 800:31:07Got it. Thank you. Speaker 100:31:08Thanks, Tal. Operator00:31:10Our next question comes from the line of George Notter with Jefferies. Please go ahead. Speaker 1200:31:16Hi, guys. Thanks very much. I guess I was just curious about what your expectations were coming into the quarter for product deferred revenue. I guess, I'm curious how much you thought would be added to that product deferred category in Q2? And then also, do you have a view on product deferred revenue for Q3? Speaker 1200:31:33Thanks. Speaker 400:31:35Yes. Hi. Yes, nothing's changed in our philosophy that we don't guide product deferred revenue. So there's nothing new there to report. I would say, in the sense of coming into this quarter, we don't guide the product deferred revenue. Speaker 400:31:49We have an idea in the sense of where we're going to land as we go through the quarter. But I would say that it met expectations from what we had in our planning forecast process. Speaker 1200:31:58Got it. And I assume these are new products you've shipped to customers, and you're waiting for customer acceptance. Any sense for when those customer acceptances might start to flow through? Is that a 2024 event? Is that a 2025 event? Speaker 1200:32:15How do you think about it? Speaker 400:32:17Yes. They all have different timings because they're unique to the customer, the use case, AI, classic cloud, etcetera. So they're all unique and bespoke that way. So there's no set trending on that.
And so as we roll through the quarters, they'll come off as they get deployed, and that's where it will land from a forecasting perspective. Speaker 200:32:34And I think it's fair to say if it's AI, it takes longer. If it's classic cloud, it's shorter. Speaker 600:32:41Great. Okay. Thank you. Speaker 400:32:43Yes, George. Thank you. Operator00:32:45Our next question comes from the line of Samik Chatterjee with JPMorgan. Please go ahead. Speaker 1300:32:50Hi, thanks for taking the question, and strong results here. But if I can just ask a question on the commentary that Ashwin had in the prepared remarks. Ashwin, you mentioned the Tier 2 customer where you're refreshing the front end, as I sort of interpreted it, to alleviate some of the bandwidth sort of concerns from the back end. How do you think about that opportunity across your customer base? Particularly, how should we think about sort of that as being attached to the $750,000,000 target for back end revenues that you have for next year? Speaker 1300:33:23Just help us think about that. Speaker 300:33:28Samik, it's hard to say, right? I mean, I don't want to attach the 750 back to this one customer, right. The goal around this one customer was to demonstrate our wins in enterprise and in the non cloud space. But outside that, it would be very hard to go translate that to what's happening within the $750,000,000. But I don't know, Jayshree, if you've got any comments around that at all? Speaker 200:33:54Yes. I was just going to add that. There are 4 things Ashwin and the team are seeing in the enterprise and provider sector. I think the migration to the 100 gig data center is pretty solidly going on. If anybody is still on 10 and 40 gig, they're definitely not an early adopter of technology. Speaker 200:34:10And some of them are even moving to 400 gig, I would say, right? Speaker 300:34:14Absolutely, Jayshree. Speaker 200:34:15So that's on the data center.
Speaker 200:34:16Campus, I know in general is a slow market, but for Arista, we are still seeing a lot of desire, and you heard Ashwin talk about a campus win where they're really frustrated and struggling with existing campus deployments. So we feel really good about our $750,000,000 target for next year. The routed WAN, again, we're in both Tier 2 and service providers and even in enterprises; a lot of activity going on there. And finally, the AI trials you talked about, they tend to be smaller, but it's a representation of the confidence the customer has. They may be using other GPUs, servers, etcetera, but when it comes to the mission critical network, they've recognized the importance of best of breed reliability, availability, performance, no loss, and their familiarity with us in the data center is naturally leading to pilots and trials on the AI side with us. Speaker 1100:35:08Thank you. Operator00:35:09Thanks, Samik. Our next question comes from the line of Karl Ackerman with BNP Paribas. Please go ahead. Speaker 1400:35:17Yes, thank you. So there are several data points across the supply chain that indicate enterprise networking and traditional server units are beginning to recover. I was hoping you might discuss what you are hearing from your enterprise and service provider customers on their commitment to upgrade their servers and networking gear in the next couple of quarters? Speaker 1400:35:38And as you address that, perhaps you could discuss the number of new customers being added into these verticals over the last couple of quarters, out of the 10,000 or so that you have today? Thank you. Speaker 200:35:50Yes. Let me take the second question. I think we are adding systematically; we celebrated the 10,000 milestone. In the past, we used to add large numbers of customers. Now we're adding many small customers.
Speaker 200:36:03So we're pleased with the systematic add of hundreds of customers every quarter, and that's going very, very well. And what was your other question? Speaker 1400:36:16How to think about the adoption or growth of server and networking gear for campus environments and what you're seeing there? Speaker 200:36:27Okay. So perhaps it may come as a surprise to you, but servers aren't always related to campus. Devices and users are much more related to campus, right? Servers tend to be tied to more data center upgrades. So in the campus, we're tending to see 2 things right now. Speaker 200:36:43Greenfield buildings that they're planning for 2025, 2026, where we're smack in the middle of those RFPs, or they're trying to create a little oasis in the desert and prove that a post pandemic campus is much better with a leaf spine topology, wired and wireless connecting to it, and then enabling things like zero touch automation, macro segmentation capabilities, analytics, etcetera. So even in a somewhat sluggish overall market, the campus is really turning out well for us. We are finding that our customers are very interested in modernizing their campus. And again, it has a lot to do with their familiarity with us in the data center, and that's translating to more success in the campus. Thanks, Karl. Operator00:37:27Our next question comes from the line of Ben Bollin with Cleveland Research. Please go ahead. Speaker 600:37:34Good evening, everyone. Thanks for taking the question. Jayshree, I'm interested, bigger picture, as you think about back end network architectures gradually capturing more of the traditional front end. What do you think that looks like over the next several years? How quickly could that become a more realistic opportunity to capture more of that true fabric of overall compute resources? Speaker 200:37:58Well, I think there are a lot of market studies that point to the fact that today it's still largely InfiniBand.
You remember me, Ben, saying we were on the outside looking in just a year ago. So step 1 is we're feeling very gratified that the whole world, even the InfiniBand players, have acknowledged that we're making Ethernet great again. And so I expect more and more of that back end to be Ethernet. One thing I do expect, even though we're very signed up to the $750,000,000 number (at least $750,000,000, I should say) next year, is that it's going to become difficult to distinguish the back end from the front end when they all move to Ethernet. Speaker 200:38:35So this AI center, as we call it, is going to be a conglomeration of both the front and the back. So if I were to fast forward 3, 4 years from now, I think the AI center is a super center of both the front end and the back end. So we'll be able to track it as long as there are GPUs and strictly training use cases. But if I were to fast forward, I think there'll be many more edge use cases, many more inference use cases and many more small-scale training use cases, which will make that distinction difficult to make. Speaker 400:39:06Thank you, Ben. Operator00:39:08Our next question will come from the line of Alex Henderson with Needham. Please go ahead. Speaker 1500:39:13Great. Thank you very much. I was hoping we could talk a little bit about the spending biases in the cloud titans. Clearly, there's an enormous amount of spending going into the AI front-end and back-end networking elements as well as the GPUs. But there is a rebounding growth rate of applications that is ultimately driving the traditional business that has historically been called the CPU side of the data center. Speaker 1500:39:50And I'm wondering whether they're under-investing there, and whether there is a potential for a catch-up in spending in that area at some juncture because of the overbias to AI, or whether that investment is ongoing at a reasonable rate consistent with a moderate acceleration in application growth?
Speaker 200:40:14Alex, it's a very thought-provoking question. I would say there's such a heavy bias in the cloud titans towards training and super training, and the bigger and better the GPUs, the billions of parameters, the OpenAI, ChatGPT and Llamas, that you're absolutely right that at some level the classic cloud, what you call traditional (I'm still calling it classic), was a little bit neglected last year and this year. Having said that, I think once the training models are established, I believe this will come back, and it will sort of be a cycle that feeds on itself. But at the moment, we're seeing more activity on the AI and more moderate activity on the cloud. Speaker 1500:40:57Does the reacceleration of application growth, excluding AI, play into that? I mean, clearly that Speaker 200:41:05I think it does. I don't know how to measure it, but I think the more AI back end we put in, we expect that to put pressure of X percent on the front end. We're still trying to assess whether that's 10%, 20%, 30%. And so we do not count that in our $750,000,000 number; to be accurate, that number is only GPU-native connections. Speaker 200:41:26But absolutely, as the 2 holistically come together to form this AI center, I believe the front end will have pressure. Speaker 1500:41:34Great. Thank you so much, and thanks for the great quarter. Speaker 200:41:37Thank you, Alex. Appreciate your support. Operator00:41:41Our next question comes from the line of Ben Reitzes with Melius Research. Please go ahead. Speaker 1600:41:48Hi, Jayshree and Chantal. This is Jack Adair for Ben. Congrats on the good quarter. We were wondering if you could comment on the competitive environment, and if you're seeing Spectrum-X from NVIDIA, and if so, how you're doing against it? Speaker 200:42:02Yes. But first I just want to say, when you say competitive environment, it's complicated with NVIDIA, because we really consider them a friend on the GPUs as well as the NICs.
So not quite a competitor, but absolutely we will compete with them on the Spectrum switch. We have not seen Spectrum except in one customer where it was bundled, but otherwise we feel pretty good about our win rate and our success for a number of reasons: great software, a proven portfolio of products and architecture, performance, visibility features, management capabilities, high availability. Speaker 200:42:37And so I think it's fair to say that if a customer were bundling with their GPUs, then we wouldn't see it. If a customer were looking for best of breed, we absolutely see it and win it. Speaker 100:42:49Thanks, Jack. Speaker 300:42:50Thank you for that. Operator00:42:52Our next question will come from the line of James Fish with Piper Sandler. Please go ahead. Speaker 1700:42:58Thanks for the question. Just wanted to circle back on the enterprise side of things. I guess, is there a way to think about how many replacements you're seeing relative to prior periods? And really, I'm trying to understand if we're starting to see that core enterprise data center network refresh actually pick up versus kind of the share gains that you guys have historically seen? And is there an underlying change in enterprise customer behavior, whether it's for the data center or, Jayshree, I know you were talking about the campus earlier. Speaker 200:43:30Yes, James, let me talk, and Ashwin, I'm sure you have more to add since you're closer to the problem. I believe we have 3 classes of enterprise customers. The early adopters, and I think in that category Ashwin's team is seeing a lot of refresh going: they already have 100 gig and they're potentially planning their 400 gig, right? Then the fast followers, and those guys I think are still looking at 100 gig migrations, right? And then the real risk-averse, and we're still getting to know them because this is an untapped opportunity for us, right?
And probably a 4th category where some of them are disillusioned with the public cloud and want to repatriate some of their workloads back into the data center. Speaker 200:44:09So I would say there's activity in all four, no saturation, still a lot of opportunity for us, largely in the speed upgrades and also in the class of customers and what stage they are at. Ashwin, do you want to add anything to that? Speaker 300:44:24Yes, sure, Jayshree. Right. And so to answer your question, from what I'm seeing from customers, they're kind of fed up with what's deployed in the data center specifically: something which is proprietary lock-in, right, something which does not give them the flexibility to join multiple use cases such as data center, campus, routing, and they want something that just works. They want something that is just simple. They want to make sure that when they wake up in the morning, the network is not down. Speaker 300:44:54And so Arista today actually has a brand, it has a value there, and we've actually been delivering this for the last 10-plus years. So, James, I would say that message is echoing successfully in our existing customers. We're taking that risk out not only in the single use case of data center but expanding across data center, campus, routing, WAN, and then the team is eagerly working with a new set of Global 2,000 and Fortune 500 customers to go evangelize the message to them as well. Operator00:45:26Our next question will come from the line of Ittai Kidron with Oppenheimer. Please go ahead. Speaker 1800:45:32Thanks. Nice numbers, ladies. I wanted to go back to gross margin, Jayshree and Chantal, if you don't mind. In your prepared remarks, you talked about manufacturing efficiencies, cost reduction. I guess I'm wondering why is that not something that carries forward to the next quarter as well?
Speaker 1800:45:49I understand you're trying to be conservative, but I'm sure these are not changes that have a very short life span; they can carry forward. So why not be a bit more optimistic on what the gross margin outlook should be? Speaker 400:46:03Yes. No, it's a great question. And so our goal is always to try to do better than our guide. But within that guide, what's mostly influencing the second half consideration is the expected mix of customers. As you can appreciate, we do have different mixes depending on the demographics of who we're selling to. Speaker 400:46:24So I think that's a big part of our second half, why we kept the guide as it is. We will keep looking for variable cost productivity and cost management as we go through, and hope to deliver more. But at this point in time, what's mostly built into it is the mix assumption in the second half. Speaker 200:46:39And if you look at it, this isn't just this quarter; John and the team have done a fantastic job for the last year. So yes, I'm going to go back to them and ask for more, but I think they will say they've done so much that I might be squeezing blood out of a stone. So we'll see if they respond to more cost reductions. But I absolutely agree with Chantal. It's largely driven by mix. Speaker 200:46:59And I think we've taken a lot of the cost reductions out in the last year. Speaker 1800:47:04Appreciate it. Operator00:47:06Thank you. Our next question will come from the line of Simon Leopold with Raymond James. Please go ahead. Speaker 600:47:12Thank you very much. I know that you typically hold off on the 10% customer disclosures until the end of the fiscal year. But what I'm hoping to gather is how customer concentration may be evolving in comparison to last year. Do you expect, with your past customers growing their spending so much, that it stays similar, or, with the diversity and the new opportunities, that the concentration you've had historically declines?
Any kind of indication you could offer, I'd appreciate it. Speaker 200:47:46Simon, I'll try, but you're right to say that we don't know. It's only half the year. I expect both Microsoft and Meta to be greater than 10% customers for us. I don't expect any other 10% concentration. Now, in Microsoft and Meta, how they will pivot to AI and how they will reduce the spend and all of those things, that movie will play out in the next 6 months. Speaker 200:48:07So we'll know better. But at this point, I think you can assume they won't be exactly the same. Some may go up, some may go down. But these are 2 extremely vital customers, strategic customers. We co-develop with them. Speaker 200:48:21We partner with them very, very well. And we expect to do well with them both in cloud and AI, depending on their priorities, of course. Speaker 600:48:30And does the pipeline suggest you can have new 10% customers next year, or do you expect sort of a similar concentration next year? Speaker 200:48:41We're not aware of any customer that will approach 10% this year or next year. Speaker 600:48:46Thank you very much. Speaker 200:48:48Sure, Simon. Operator00:48:50Our next question comes from the line of Tim Long with Barclays. Please go ahead. Speaker 1900:48:55Thank you. Maybe we'll go over to the software and services, since the AI stuff has been beaten up a little bit here. In a good way. You talked a lot about some of the software capabilities. It seems like Arista might be leaning in a little bit more to this revenue line. Speaker 1900:49:21It's been growing faster than the hardware product the last multiple quarters. So could you talk about kind of the sustainability of that strength, the focus that you guys have on this services and software area, and what that would mean to the growth rate going forward? Thank you. Speaker 200:49:40Yes. Thank you. Thank you for the change in question. I appreciate that.
I think we will be in the teens for some time, because there are 3 building blocks, and a lot of this, Ashwin, you know very well. Speaker 200:49:53There's the services building block: as product goes up, there's a lot of pressure on services as a percentage of revenue to be lower, right? So while it may historically have been in the teens, it can be lower. The second one is our perpetual software, which again is a strong function of use cases, particularly things like routing, etcetera, where we've done very, very well. The stronger we do there, the better we do there. An extension of that is CloudVision, which is more a subscription service, with CloudVision as a service either on network as a service or on the premises. Speaker 200:50:29So that's the 2nd building block for us that's going strong. The one I want to point to a little more, which could help us, is the security and observability. You may know in May we introduced the micro and macro segmentation. We also announced UNO, our unified network observability. And while this is new, Ashwin and I have great plans for that, and I think this could be a swing factor. Speaker 200:50:54As the services components may reduce over time, these new product components may increase. So 17-point-whatever percent of revenue is a great number. And if we can consistently stay there, I'd be very proud, particularly as our numbers get larger. Operator00:51:13Our next question comes from the line of Sebastien Naji with William Blair. Please go ahead. Speaker 1400:51:19Hey, thanks for taking the question, and sorry to bring you guys back to the AI conversation. But this one is a little bit more high level. I guess we keep hearing about bigger and bigger AI clusters that are being built. And as Arista is working on connecting these larger and larger clusters as they scale, I'm just wondering, does that impact your ability to capture more revenue? You've talked about 15% of CapEx.
Speaker 1400:51:45Does that maybe change, or go up or go down, as these clusters that you're having to connect get bigger and bigger? Speaker 200:51:52Yes. So if you look at an AI network design, you can look at it through 2 lenses: just through the compute, in which case you look at scale-up and you look at it strictly through how many processors there are. But when we look at an AI network design, it's the number of GPUs or XPUs per workload. The distribution and location of these GPUs are important. And whether the cluster has multiple tenants, and how it's divvied up between the host, the memory, the storage and the wide area, plays a role. Speaker 200:52:21Then the optimizations you make on the applications, the collective communication libraries for specific workloads, levels of resilience, how much redundancy you want to put in, active-active, link-based load balancing, types of visibility. So the metrics are just getting more and more; there are many more permutations and combinations. But it all starts with the number of GPUs, performance, and billions of parameters. There are training models that are definitely centered around job completion time. Speaker 200:52:47But then there are multiple concentric circles of additional things we have to add to that network design. All this to say, a network-design-centric approach has to be taken for these GPU clusters. Otherwise, you end up being very siloed. And that's really what we're working on. So it goes beyond scale and performance to some of these other metrics I mentioned. Speaker 100:53:10Thanks, Sebastien. Speaker 600:53:11Got it. Speaker 100:53:11Operator, we have time for one last question, please. Operator00:53:15Our final question comes from the line of David Vogt with UBS. Please go ahead. Great. Speaker 2000:53:21Thanks, guys, for squeezing me in. Maybe just to bring it back together, and maybe both for Jayshree and Chantal. I guess what I'm trying to think through is this is your product introduction.
You have a pretty strong ramp of AI likely next year. But does the guide imply that we're going to start to see a much bigger contribution in Q4, driven by the comments around mix earlier and the gross margin discussion? Because I would imagine, early in the stage of their life cycle, plus the fact that they're hyperscalers, they're going to be a relatively more modest gross margin profile at the beginning of the glide path versus the end of the glide path. Speaker 2000:53:55So just any color there in terms of what Q4 might look like from an AI perspective relative to your expectations from a glide path perspective? Thank you. Speaker 200:54:04Yes. Let me just remind you of how we are approaching 2024, including Q4, right? Last year, trials; they were so small it was not material. This year we're definitely going into pilots. Some of the GPUs, and you've seen this in public blogs published by some of our customers, have already gone from tens of thousands to 24,000 and are heading towards 50,000 GPUs. Speaker 200:54:29Next year, I think there'll be many of them heading into tens of thousands, aiming for 100,000 GPUs. So I see next year as more promising. Some of them might happen this year, but I think we're very much in the going from trials to pilots, trials being in the hundreds, and this year we're in the thousands. But I wouldn't focus on Q4. I'd focus on the entire year and say, yes, we've gone into the thousands. Speaker 200:54:54And I like Chantal's term for this, glide path, right? So we expect AI to be single digits, small percentages of our total revenue this year. But we are really, really expecting next year to be the $750,000,000 a year or more. Yes. Yes, I think so. Speaker 400:55:09I completely agree, Jayshree. The only thing I would add to it is you have to think of the kind of matrix we're working within. So we have cloud and enterprise customers, and we have very, very different scopes of readiness at those customers.
So Q3, Q4, Q1, Q2 next year, they're all eligible for timing, for types, etcetera. So we just want to make sure that the variability I spoke to in my prepared remarks is understood in context. Speaker 200:55:35Yes. And simple things like power and cooling are affecting the ability to deploy at massive scale. So there's nothing magic about Q4. There's plenty of magic about the year in its entirety and next year. Great. Speaker 2000:55:48Thanks. Appreciate it. Thanks, Chantal. Thanks, Liz. Speaker 100:55:51Thanks, David. This concludes the Arista Networks Q2 2024 Earnings Call. We have posted a presentation, which provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today, and thank you for your interest in Arista. Operator00:56:07Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.