2023-11-28 19:50:21 ET
Arista Networks, Inc. (ANET)
UBS Annual Technology, Media and Telecom Conference
November 28, 2023 14:55 ET
Corporate Participants
Anshul Sadana - Chief Operating Officer
Conference Call Participants
David Vogt - UBS
Presentation
David Vogt
Good afternoon, everyone. Thanks for joining us here at the UBS Tech Conference. I'm David Vogt, the hardware and networking analyst. And we're excited to have with us, from Arista Networks, Anshul Sadana, Chief Operating Officer. Before we get started, let me just read a quick disclaimer from UBS.
For important disclosures related to UBS, or any company that we talk about today, please visit our website at www.ubs.com/disclosures. So if you have any problems, you can email me later. And with that out of the way, Anshul, thank you for joining us.
Anshul Sadana
Thank you, David.
David Vogt
I'm sure you don't need to read any disclosures. I think we're good.
Anshul Sadana
I think we're good.
David Vogt
We're good. Perfect. So besides raising guidance and taking targets up, we won't get into that. Okay. So, we had other companies here earlier; maybe let's just level-set where we are today with Arista. I know you just had an Analyst Day fairly recently, where you set out preliminary targets, or a framework, for fiscal '24 and a long-term guide. But I think there are some investors who are a little bit unclear on how we got here; I've talked to a couple of people over the last couple of weeks.
So I think the shift from Arista, you know, basically architecturally taking share in the hyperscalers over the last couple of years, took quite a lot of companies by surprise. Maybe we could start there and talk about what you do differently from a solution and software-based architecture standpoint. And then, how does that lead us to where we are today? We're going to talk about AI, but I want to level-set and set the table first.
Anshul Sadana
Absolutely, I didn't expect any easy questions anyway. But over the last 15 years at Arista, we've grown in datacenter networking especially, and I'll come to campus as well. But we started out with building what we believed was the best solution for the whole world for datacenter networks. We call it cloud networking. It included a change in design. We went from a classic three-tier access-aggregation-core, which was the de facto standard, to a leaf-spine design, which is more of a distributed scale-out architecture that lends itself really well to cloud computing. But no one in the industry wanted to do that. And to do that, you have to build very high-speed products. We were the first in the market with 10 gig, with 40 gig, with 100 gig, pushing the envelope not just as consumers of merchant silicon, but as drivers of merchant silicon. We work with our partners like Broadcom or Intel and drive their roadmap, and tell them what we need on behalf of our customers.
We coupled that with a beautiful system design that is, I would say, by far the most efficient in many ways, whether it's signal integrity, which is how we are getting to linear drive optics, or power efficiency, because lower power matters to everyone, or high quality. And then we run a software stack that is very unique and differentiated from all the legacy stacks out there, including the way we keep all of our state in a database inside our software, in memory. As a result, small bugs, whether it's a memory leak or a small crash of an agent, don't bring down your network; you just have a small process restart, and the system continues to forward packets as if nothing happened.
And initially, competition [indiscernible], like, hey, this is a new kid on the block, and this is not going to succeed. But the cloud Titans, as we call them, not only embraced it, they partnered with us. And we evolved that architecture over several generations, to a point today where we do a lot of core development with our biggest customers. It's a very unique situation; we don't have the typical vendor-customer relationship. We have an engineering partner-customer relationship. And quite often we are telling the customer what the roadmap should be, not getting some RFP and getting surprised by it, and so on. And we've out-executed our competition clearly in all of these areas and built on that. That was on the cloud side. We took the same approach to the enterprise. But the enterprise needs a little bit more help on the stack, especially with respect to deployment and automation. That's where we built our software suite, CloudVision. EOS is our operating system on the switches; CloudVision runs independently to manage and automate your entire network. And now CloudVision can run both on-prem or as a managed service in the cloud.
As a result, we can cater to many, many different types of solutions. That has allowed us to expand into different verticals and into different parts of the network, including now campus. That's really what the story has been for us for the last 15 years or so.
David Vogt
Great. So that's a great place to start. Maybe we start with the Titans. Obviously, the Titans have been a critical part of the business; I think in 2022 it was disclosed they were about 43% of revenue, and this year it's probably around 40% of revenue. So you've grown exceptionally strong with those partners. You mentioned co-engineering, sharing the roadmap, and helping them understand what they need going forward; how has that relationship evolved today? And since you mentioned AI, with regard to their AI roadmaps, how are you involved in what Microsoft is doing, and Meta and others within that vertical, in terms of thinking about the next couple of years, or even five years for that matter?
Anshul Sadana
We are in a very privileged position in partnering with these customers. I was in a meeting recently with one of our Titan customers along with Andy Bechtolsheim, our Founder and Chairman. And after the meeting, we were talking about it. And quite often, we like to talk about what the future could be like.
And we were in one of these meetings where we basically defined the future: that's what the world will be doing five years from now, that's how clusters will be built, that's how power will be delivered, that's how the fiber plant will be structured. We're talking about 2027 architecture. And we do that quite often. Now, after that meeting, the customer's view was that this was the best meeting they'd had in the last 12 months. And this is the networking team. They've been circling some really tough questions on what happens in the future as you get to [230s] [ph], as the cluster size increases: how do you change connectivity? What about the latency? What about different cables out there and the skewing of data between the cable ends, and so on, all the way to automation, monitoring, security, deep buffers versus shallow buffers, low latency, helping the application stack get there faster, and using the GPUs a lot more efficiently. We're able to do that with pretty much all of our customers, all of the hyperscalers and Titans. And as a result, we have this trust with the customer.
It's a very open relationship. We understand that they want to be multivendor. There's no purpose in trying to lock them in, because once you do that, they work really hard to unlock themselves and go somewhere else a few years later. And we are enjoying this growth with the Titans so far, and I think for many years to come.
David Vogt
Does that roadmap visibility, or that co-engineering visibility, change with AI versus the traditional legacy workloads, where, again, strong product vision, EOS, CloudVision, and merchant silicon helped drive the direction? Given the complexity, whether it's power consumption or structuring the nodes, has that visibility changed with AI? I mean not order visibility, but roadmap visibility. What I mean by that is: do you have a better sense today of what the next five years look like than, if we'd had this conversation five years ago, you would have had of the subsequent five years?
Anshul Sadana
I think, to some extent, what's happening is the focus on the future is a lot greater, given the investment and the criticality of these AI clusters to the business. The customers are engaging. In the past, it used to be roughly a three-year roadmap vision. Now it's becoming five years, not necessarily because we know the future that easily, but because the physical build-out of a 100-megawatt building with liquid cooling is far more complex to think about today than going from a 10-megawatt building to a 30-megawatt building was eight years ago. So just the nature of the problem and the complexity is making our customers think harder, and making us think harder as well.
And as I mentioned earlier, a lot of these discussions result in us shaping the roadmap for our suppliers as well, which is critical. And we've been in this position for many years. But now I feel that the pace of innovation has actually picked up. There's so much happening in AI, and it changes so quickly, that on one hand you're thinking about a five-year plan, and on the other hand you're not sure whether the next six months are going to work out as you thought or not.
David Vogt
Got it. So maybe just to clarify how you think about AI for Arista. We were having this conversation earlier, and Cisco has, I think, a slightly different view of their AI business. Their view is, if it's silicon, if it's optics, if they upgrade the DCI because there's more data traffic flowing because of an AI workload, that in their mind is sort of AI. But I think you and Jayshree and the rest of the team have a much more stringent definition. Can you walk through how you're defining it? Is it just the back-end part of the network that's AI today, and how does that expand for you over time?
Anshul Sadana
David, I believe this is very much in context of the $750 million goal we gave.
David Vogt
Correct. Correct. Within that goal.
Anshul Sadana
…for 2025. Now, look, we participate with every major cloud customer out there. So if there's a large AI build out going on somewhere in the United States, there's a good chance you're involved with that customer in one way or the other.
If you start counting everything as AI, there's nothing else left. So of course, 100% of our cloud revenue is AI, if you count it that way. But quite often, when we ship a product, whether it's a top-of-rack switch or a deep-buffer 7800 spine, it's not clear to us at the time whether it's going to get deployed in an AI cluster, or as a backbone, or a DCI network, or a tier two spine, or a WAN use case.
In some cases, we can find out by talking to the customer, but it's not easy to account for every system. So the $750 million goal, that's only back-end cluster networking for AI; it's our best way to calculate it and track it as well as we can. I think by 2025, we feel really good about that number and tracking it. Over the long term, is it going to be easy to track? I don't know, we'll find out; [observations] [ph] of product change. For the next two or three years, it seemed like the right thing to do. We also want to set the right expectation, because of where we are in the journey with AI and Ethernet, and where 800 gig especially is. We are right on the cusp of a product transition and a speed transition for our customers. And this time, the speed transition is not coming from DCI or compute and storage, it's coming from AI. And we know that part of the market really wants to switch to 800 gig [Technical Difficulty] as quickly as possible. That is a little bit easier to track as well. But our numbers are purely back-end networking, which is our switches, with no optics, nothing else added on top.
David Vogt
Right. And presumably, what you're shipping for AI right now is all training-related? Or is there a sense that there are inference use cases that may show up in revenue in late '25? How do we think about bifurcating the market in terms of training versus inference, and what your customers are using the equipment for?
Anshul Sadana
Today, most of our AI deployments are with the large cloud Titans. And the large cloud Titans haven't yet reached the point where they have discrete training clusters versus inference clusters. While some of them are just talking about it, or just starting to do a little bit of that, most of the large clusters today, based on the jobs they want to run, can be used for training or inference. So there are times where they take a very large cluster of 4,000, 8,000, 16,000 GPUs, and they'll run it for training on one model for three to four weeks. Then they can use the same cluster for inference, and the job scheduler will automatically just create mini-clusters of 256 GPUs, running training for a few hours, and so on. But these are not discrete build-outs so far. Does that happen in the future? There's a lot of talk about it, maybe in two or three years; I'm not sure how quickly that will happen, especially with the Titans.
David Vogt
Got it. So does that mean, economically, that's a different sort of business model for you, in the sense that maybe there's an opportunity to put more of your switches and equipment closer to the edges of the network, outside of the hyperscalers, as training becomes less of the total mix and inference becomes a bigger part of the overall mix? You could, for instance, have smaller clusters further away from the datacenter, closer to the edge of the network. Does that broaden the market opportunity for you from an “AI perspective”?
Anshul Sadana
Yes. Your question had a very strong assumption in there, and I want to call it out: that inference will happen at the edge. I think that question is still to be answered; I honestly don't know the answer. It could happen in the cloud; it could happen on the edge of the cloud; or it could happen on the edge of the enterprise as well. A lot of this also comes down to licensing of training models and who owns the data, and issues related to data privacy. There are certain industries, like healthcare and medical, where just because of laws it may be hard to put all the data in the cloud. But for many of the industries where it may be easy, I think the cloud will be more efficient than trying to do it on a discrete two-rack cluster on the enterprise edge.
Having said that, I think, number one, every non-Nvidia GPU that I'm aware of, including the accelerators some of our customers are building on their own, or what competition is about to present to the market, is pretty much all Ethernet. And many of them are talking about how, while Nvidia has been fine for training, all of these other processors will be good at inference. If that works out, that's pretty good for us too. Because wherever they are, they need Ethernet switches; inference also needs networking, and we have a really good shot at that.
David Vogt
So can I come back to that assumption that you just called out? A lot of companies are talking about bespoke models that are unique to their own datasets, where maybe they don't want to keep them in the public cloud for governance reasons, privacy reasons. And they want to have that inference closer to the end customer, or whatever the end use case is. So it doesn't sound like you're convinced that's a longer-term driver of AI, either use cases and/or spend. Do you think healthcare companies, or other companies that have privacy-focused datasets, are going to continue to work within the large Titan or hyperscaler community at this point?
Anshul Sadana
I'm not doubting at all that inference is a massive use case coming to us. It's going to happen; AI is going to turn every industry upside down. The question is, why would the cloud let go of inference? They can do bundling, they can do discrete build-outs. The cloud companies have done build-outs for different governments of the world, where it's a private build-out just for that one entity and no one else has access to it. So why can't they repeat some of these models for other use cases as well, or improve their edge, too? There was a battle between certain service providers and the cloud companies in a marketing pitch on edge computing a few years ago. Some SPs had come and said, come to us, because we can offer you one-millisecond round-trip time to any 5G base station. And one cloud company was at a conference, I won't name them, but they're very popular, and they said, come to us, we can give you 700 metro pops all around the world with one-millisecond round-trip time. Five years later, I think we know who won.
So I think a lot will change, which is why this whole model, that training will be done by a few companies, you license the model, go on-prem, run your inference engine there, is a static world view. The world will change faster; there will be more competition, there will be more services offered by the cloud companies, there will be more services offered by startups in the enterprise trying to succeed. And I don't see that future –
David Vogt
Because we hear often from enterprise customers that data storage and egress fees are a pretty considerable consideration. So being beholden, or trapped, for lack of a better phrase, within a hyperscaler, where getting your data out and putting it back to train it or run inference on it is pretty expensive. And obviously, the enterprise doesn't have the unlimited budget that the hyperscalers have. So that's why there is some thought that maybe you could be a little bit more cost-centric if you were focused on smaller clusters and more bespoke models at the edge of networks.
Anshul Sadana
I think it comes down to the enterprise stack being really savvy, the operators being really savvy. If they can truly take advantage of that, it will work. It's not that I'm convinced the cloud will win; I'm just not sure which direction it will go. Because if the issue is that data in and out is too expensive, the cloud will just reduce those costs, those prices, and then what? With the competition, this will just keep on evolving.
David Vogt
So when you think about the use cases for AI, how are you thinking about how it affects legacy workloads and demand for, I don't know if you want to define it as a legacy switch, whatever's not AI-centric? I know it's pretty difficult to draw that line in the sand, what's not AI and what is AI. But is there any way to think about what the workload spend on legacy applications looks like versus AI? Is this completely additive? Is there a portion of the spend that's somewhat cannibalistic in your mind? And how do we think about where the priorities are? Clearly it's AI-centric today. But do we get to an equilibrium where it's a little bit more balanced in terms of capital allocation priorities?
Anshul Sadana
Our Founder and Chairman, Andy, in one of our customer meetings just two years ago, told a customer, this is what people used to do with legacy 100 gig, but for 400 gig, this is what we're shipping. I told him, Andy, customers are still buying it, don't call it legacy. The same comment applies here. We call it classic compute. There's no reason to disrespect Intel and AMD; they are innovating as well on the x86 side. But the recent three or four quarters have totally changed the CapEx model. Customers are spending every penny they have on buying GPUs and connecting them and powering them. They don't have any CapEx dollars left for the rest. But can they maintain that status quo for the long term? I don't think so, for a couple of reasons. Number one, CPUs for classic workloads, for VMs and so on, are going to be far cheaper than buying expensive GPUs. GPUs are great for matrix calculations and mathematical functions, but not for everything else you run a standard application for. Enterprises will keep moving to the cloud. Cloud companies often build ahead, competing against each other, but at some point they run out of capacity if they are only spending on GPUs, and that demand will come back. They don't lose all the business either. Enterprises are also spending more on AI, with fewer dollars left to move to the cloud right now. I think over time that will smooth out, just a little bit, not as harsh as it's been.
But the classic cluster of compute and storage, with top-of-rack and spine, right now there is less investment going on there and a lot more in AI. Net-net, I think whichever side wins, Arista will do well. I don't think it changes any material outcome for us; maybe AI is actually more dollars, given the bandwidth intensity that's needed, and is good for us. But it would be fine even if customers came back to the classic build-out.
David Vogt
Yes. I mean, I think we look at companies like yourself, that have a much stronger foothold with the hyperscalers than some of the legacy networking companies that have kind of missed some of this.
Anshul Sadana
Calling them legacy is okay.
David Vogt
Sure, I will call them legacy. But obviously, there's a reinvigoration, effectively, right? And there's a lot of discussion that the largest broadly defined networking company has wins with three of the four hyperscalers. And I think you've said publicly at your Analyst Day that you guys welcome the competition and you'd expect to remain competitively successful. Do you think there are other entrants? Like, how does white box play into this AI strategy? Obviously, it was a big player in the prior cycle. Given the complexity, how does that play into what hyperscalers, or even enterprises, are doing within AI today?
Anshul Sadana
Yes. So we touched on this a little bit at the Analyst Day as well. The companies that everyone associates the most with white boxes also happen to be our largest customers. If they were just using white boxes, they wouldn't be customers; we partner with them very, very well. And over the last decade or so, the industry has largely been at a status quo. Now, Amazon and Google started building their own switches 15, 20 years ago, for various reasons; that's a long discussion, we can have it later.
But when Meta had to make that decision, around 2013 to 2015, they decided, let's do both: build, because they want the learning as well, but also buy from a good partner. And we partnered really well with them, and have done multiple generations of products that are co-developed with them to the same spec. I think they found a really good match there. The cadence of networking products has roughly been one new generation every three to four years, for the last 15 years.
Now, with AI, the world is moving faster, with [100 gig and 200 gig] [ph] coming soon. And from the chip, to the power, to signal integrity, to linear drive optics, to the software stack, to the tuning of load balancing and congestion control, RDMA, and UEC specs being added on top, things are actually getting far more complex very quickly. In the next 24 months, there will be more products introduced into the market than were introduced in the previous four years. And as you will very well know from all the layoff news, the cloud companies aren't increasing their headcount right now. They have limited resources, and it's an opportunity cost: do they invest in building more of their own, or do they partner with someone and invest those resources maybe in an AI application that would give them a lot more revenue, or in security for the public cloud, and so on?
So not only have we found a balance, but we're at a place where the cloud companies want to depend more on us, not less. At the same time, they do have some religion on this topic, so I don't expect white boxes to go away completely. I think the market will mostly maintain the status quo. If anything, it will tip things just a little bit in favor of companies like us that are good at developing with these companies, rather than the other way around. And I think we just stay there.
David Vogt
Got it. So can we maybe move down a step and touch on tier two cloud? We always talk about the hyperscalers, and there's been, in your definition, some re-segmentation of hyperscalers; I think Oracle OCI has been called out based on their server count. What are the tier two cloud players doing today? And what does the opportunity look like for you there with regard to their investment in AI? And is the landscape any different with competitors, whether it's the large networking companies or white box? We hear about Microsoft CapEx continuing to go up, Meta maybe not so much, but maybe help us understand how you would define what's happening within the tier two cloud ecosystem.
Anshul Sadana
So, Oracle used to be in our tier two cloud segment. But as you said, based on the number of servers and the size they're at now, it was right to upgrade them to the cloud Titan category. The other tier two clouds are mostly serving their own space; a typical one is a software-hosting company, and they cater to millions of enterprise customers that come to their cloud for their software services, or their software stack as SaaS. And we do really well with those as well. A lot of the tier two cloud is also evolving to offer AI services, especially because sometimes these days even the tier one clouds have no capacity to take on other customers, and it's not easy to come to the market and rent compute by the hour.
Today, not every cloud is letting you rent a GPU by the hour; their opportunity cost is just too high. You have to sign a multiyear contract if you want a GPU cluster, and use it for multiple years yourself. The tier two clouds are finding an opportunity in that ecosystem, saying, hey, you know what, there's some open space here, let me offer my services too. And on top of that, some of the AI startups that are offering their own cloud services are building on their own as well. And we are finding a very good match and opportunity there. But just to set expectations, that's a smaller segment than the Titans. The Titans are way bigger. But we do well in this space –
David Vogt
Do they have enough capacity or availability of GPUs to really meet that spillover demand, or that excess demand? If I think about what NVIDIA is shipping, I would imagine the top five or six companies account for 80%, 85%, 90% of GPU capacity today. So I'm just trying to get a sense for how you see that playing out.
Anshul Sadana
So some of these companies also have either their own processors or non-NVIDIA GPUs, and offer other services with those. I think that's actually doing okay for us as well. But just like my comments on tier two from a few years ago: tier two cloud is just like the cloud Titans, only smaller. Typically, the people in these companies, ex-Google, ex-Microsoft and so on, already know us. They like working with us, they like automation, they don't like a legacy stack. They do things exactly the way a bigger company does, just on a smaller scale. We do fairly well, and I think that will continue to stay strong as well.
David Vogt
With the time that we have left, I wanted to maybe just touch on enterprise. It's been a key driver of the business the last couple of years. You've taken your software and hardware stack and replicated the success you had in the hyperscaler community within enterprises, taking a lot of share. How do you define the opportunity today? I mean, you've been growing by 20%, 30% in the enterprise, and the market doesn't grow anywhere close to that. So we get pushback from a lot of investors saying, look, you picked the low-hanging fruit, where people know Arista, EOS, CloudVision, they know the hardware. How do we think about, maybe across a cycle, what the enterprise looks like for you, putting aside campus for a second?
Anshul Sadana
When we were just getting started, one of our competitors was Force10. Force10 didn't attack the big customers. They went to small HPC shops, they went to universities, they went to customers I'd never heard of, before they even approached the Fortune 500 customers. That is what I call low-hanging fruit. What we've done is the opposite: we've gone after the hardest, toughest customers first, and won them over from competition. These sales cycles have taken five to 10 years. Now, the next round is actually a bit easier, but those customers are not as big either. So it's a longer tail of enterprise. Customers come to us and say, thank you, Arista, we have not only heard good things about you, we're fed up with the legacy stack we have, it's causing outages, or we have subscription-related challenges, we just want to come over. We are winning over there. So I think enterprise will just continue growing and gaining share. We are nowhere near as penetrated there as we are in the Titans; we have a long way to go. But that's on the datacenter side.
We're also growing in enterprise campus. In enterprise campus, we're getting started from very small numbers, and CloudVision, EOS, our switches, and our Wi-Fi fit really well for what these customers need first. But these customers have a slow rollout, typically seven years to refresh, and so on. There will be a long tail, but it just keeps on growing. That's why we feel pretty good about the enterprise space. Remember, datacenter networking plus campus networking added together is a $50 billion TAM. This year we're doing just over $5.5 billion in revenue. We have a long way to go.
David Vogt
No, I get it. But I've looked at campus and what other companies have tried to do versus Cisco. And yes, Cisco has been a share donor over time. But getting more than 2%, 3%, 4% market share has proven to be very difficult for competitors over decades. Obviously, you've been very successful going from zero toward your $750 million target, which you reaffirmed a couple of weeks ago. Do you need to invest more in the channel? I know you're not going to be like Cisco, but where do you need to get to from a channel perspective to really have this business be a multibillion-dollar business?
Anshul Sadana
The Global 2000, Fortune 500, maybe Fortune 1000 customers, we can address with a direct sales force; the fulfillment is through the channel, but we address and sell through a direct sales force. For the rest of the market, the mid-market, we absolutely depend more on the channel. We're winning more with the channel internationally. And even in the U.S., I would say the smaller regional partners have become really good channel partners for us. The bigger channel partners are often dependent on the rebate dollars and so on from the bigger companies; we will have to generate enough pull from the market, from customers, before they will pivot. I think we're starting to get there. We feel good about our opportunity there too.
David Vogt
So, in the limited time that we have left, let me just ask you: is there anything we didn't cover that you think maybe is misunderstood by the market or the Street at this point? I think your story has been pretty well discussed the last couple of months; on AI, you're sort of the winner here, at least that's what the market is indicating. But I just want to give you an opportunity to touch on anything that maybe is not fully understood at this point.
Anshul Sadana
I think we've covered it all between the earnings call, the Analyst Day and in our discussion today.
David Vogt
Got it. Great. So I think we'll just end it there. Thank you, Anshul. Thank you, everyone, and have a great day.
Anshul Sadana
Thanks so much.