2023-11-14 18:14:11 ET
Arista Networks, Inc. (ANET)
17th Annual Needham Virtual Security, Networking, & Communications Conference
November 14, 2023 12:50 PM ET
Company Participants
John McCool – Chief Platform Officer
Martin Hull – Vice President-Product Management
Conference Call Participants
Alex Henderson – Needham & Co
Presentation
Alex Henderson
Great. Thanks so much, Zach. I appreciate the connection. My name is Alex Henderson. I'm the networking and security analyst at Needham. We have two awesome guys from Arista here, John McCool, Chief Platform Officer; and Martin Hull, VP of Product Management. We're going to do a fireside chat for about 35 minutes. If you want to ask a question, there is a dialogue box, you can type it in, and I will relay it as I see it. And you can also email me at ahenderson@needhamco.com and I'll be happy to use that as an alternative, if you prefer.
And with that, thanks, everybody, for dialing in, and welcome, guys.
John McCool
Thanks, Alex. Appreciate you having us today.
Alex Henderson
So you just had your Analyst Day, lots of fresh content to talk about. Clearly, people are very excited about the AI opportunity. They're very excited about new products, the front end, the back end, there's so much stuff here. So maybe you could just give us a start on the last three years or so, a period of very exceptional results. Can you talk about what's transitioned during that time frame, what's been driving the business and where we are as we roll into 2024?
John McCool
Yes. Maybe I ought to start way back before we entered the COVID period. I think we talked a lot about how we felt, coming into it, that some of those cloud-based networks were a little bit under-provisioned, right, and then we saw the phenomenal growth of traffic in the cloud and the constraints around the supply chain. Cloud providers early on got in line, or queue, if you will, to be able to upgrade those networks, with the enterprise joining a little bit later, but appreciating the extended lead times. And it's been really exciting. And then coming out of the tail end of this, the semiconductor crunch kept those lead times pretty high. But with ChatGPT and the emergence of that last year, we saw a lot of our customers kind of review where they were in their AI development and really a punctuation of what they were going to do moving forward. And we talked a lot at the Analyst Day about the AI opportunity and how we define it for Arista. And we went through some of the trial activity and our architectures that are being looked at by our customers, and I think that's a great opportunity for us as we look ahead.
Alex Henderson
Well, so – as I look at the – in the rearview mirror, you've had three years of exceptional growth, 36% to 56% product sales growth over the last three years. I think most people looking at that assume that all three of those years are driven by cloud AI. And I don't think that's the case. In fact, if I look at the growth in 2023, it actually looks like it's more driven by enterprise growth, didn't your enterprise outgrow your cloud in 2023?
John McCool
Yes. So yes, we talked about the enterprise growth accelerating in 2023 versus the cloud, so still growth obviously in the cloud. The cloud growth over the last couple of years was driven by general purpose networking, right, the 400-gig cycle, some of that front-end connection to AI networks, but predominantly just building out cloud networks. As I mentioned, the cloud guys got in line early on and realized the supply chain crunch, the enterprise folks probably six, nine months behind that, recognizing the challenge. And as we worked with our cloud providers on those deployments, we had an opportunity here really to kind of accelerate the enterprise folks who had been waiting in line, in effect, to build out their networks. In addition, we've continued to see our organic growth in enterprises in general. As we entered the campus, we had a broader toolkit for our enterprise sales team to go after, and not just campus; now we're into routing. We announced products on the routing edge. We have network visibility products with our Awake acquisition, NDR capability. So they have a broad portfolio to look for any opportunity in a Fortune 2000 account that comes their way. And they're doing a good job of inserting Arista, which we've seen time and again kind of leads to a land-and-expand opportunity for the next set of RFPs that come along.
Alex Henderson
So looking at the cloud customers, it seems pretty clear that over the last year, there has been a massive deceleration in growth in the AI – or in the broader cloud, from 30% to 50% kind of growth rates down to, in many cases, teens. The year of efficiency is cleaning up a lot of programming that was thrown out there and just left there not running anything of value, but rather just sloppy coding, DevOps people forgetting to turn things off. Now that that's all getting cleaned up, how do you see that dynamic turning? Do you expect that the year of efficiency will eventually turn into a year of reaccelerating application growth? If I were to think about this as a curve, and we're kind of doing a sine wave around that curve, are we at the trough of that sine wave?
John McCool
Yes. I mean, I think the way that we view it is less the year of efficiency and more this transition from a CPU-dominated architecture to: how do I accommodate this new growth of GPUs in my data centers? And what does that mean for my infrastructure, all the way from the GPU up to the network? The power density of this new model is extreme. So the physical aspects of deployment come into effect: the traditional leaf that was a top of rack, how do I deploy that physically when I have less density of GPUs? So there has been this shift over the last year, probably starting almost a year ago: what does this mean for me when I'm investing more in AI? A lot of projects that were in trial activity at these customers got a fresh look in terms of their deployment, how they were going to monetize that, and that's really changed a lot of the conversations that we're having with cloud customers.
Alex Henderson
But going back to the point I'm driving at here, will the number of applications being run start to reaccelerate growth in traditional CPU-based cloud? Before we talk about the AI side of it, let's separate the two into the logic...
John McCool
Yes. I think the answer on applications is very customer-dependent, right: what's mission-critical to them, what are they going to deploy. Each of our customers has different levels of applications that they're trying to drive, or ones that they're trying to optimize or combine. So I don't think we have a one-size-fits-all answer to that question. But each of these customers is thinking about the new applications on AI.
Alex Henderson
Okay. Well, so when you guys talked about AI on the Analyst Day, I think you indicated that your target long term is $750 million worth of AI sales. That's a very narrow description of AI, I think. You're only talking about board to board, GPU to GPU within the back end of the network. If I were to think about the front end of that, and I get it that upfront, we're putting clusters in, and for probably the first cluster, you don't need a lot of front end off of it. But over time, won't the front end increase as a percentage of the spend for the networking piece and ultimately end up outgrowing the back-end side?
John McCool
Yes. I mean, there's definitely a front-end element that we're participating in today, right? And it's very hard for us to distinguish when a 7800 is bought by one of these cloud companies whether it's being interconnected for data center interconnect, or it's being used for general purpose networking, or it's connecting to these new clusters. And to try to parse that and distinguish where they go would be a tremendous effort, right. What we see is a new opportunity for a new network that's not Ethernet today, and that's why we define it as the back-end network. And there are technologies that we're developing to optimize those back-end networks. So there's an R&D component to that; so, in communicating with the investor community, that's really the shiny new thing for us to go after. We feel really confident on where we are with front-end networks today, and we'll continue to service them, but that's why we framed it this way.
Alex Henderson
Yes. But what I was trying to get at is if I'm building a multi-cluster AI platform, I'm going to need a lot more front end than I would have bought had I been just in the CPU world, right? So what's the connect ratio, do you think, between this back-end investment upfront and the eventual need for CPU connectivity to that CPU-based infrastructure and GPU-to-GPU clusters? So isn't it as large or larger than what you're defining as the back end?
John McCool
I think it's pretty hard to define that. The ratios on compute became really super standardized: we got 50-gig NICs, and you have two, and they're redundant. And it became almost an industry-wide step-and-repeat type of operation. Folks are still trying to figure out how to optimize their designs for their particular AI application, and those ratios, right, as we speak. I can't overemphasize the amount of diversity we're seeing in this customer base around how they're going after AI from a physical deployment standpoint, and those ratios are definitely in a very early phase. And I'm sure as we look back five years from now, there'll start to be certain patterns that are replicable and more standardized across the industry, but there are still a lot of unknowns around some of these things.
Martin Hull
If I may, Alex.
Alex Henderson
Please.
Martin Hull
So there's been the front-end data center network for, let's say, a decade, right, 8 years, 10 years. You're trying to say there is a net incremental step function there. I think it's very difficult to count that. So yes, there will be growth in that Ethernet TAM, and that's already modeled in by most of the analysts, and certainly in our numbers, right? So the growth in that Ethernet TAM is already in there. I don't think there is going to be a step function in the front-end network. When we saw the analysis about how much traffic stays inside the data center versus how much leaves, a lot of traffic stays in those GPU clusters. Yes, some leaves. But it's not the same amount of traffic that has to go into and out of the GPU cluster. It's a fraction of that, a very small percentage. So yes, there will be a transition on that side. But that can be accommodated within the normal transition from 100-gig to 200-gig to 400-gig to 800-gig. And the ASP doesn't double every time you do that. So the ASPs start to smooth out. And so I think as you go from the lower speeds to the higher speeds, there may not be a significant step function in the revenue on that front-end network.
Alex Henderson
Well, so, the other obvious question relative to the back-end network is: today, AI is booming and NVIDIA is shipping the vast majority of it, most of the clusters that are being sold. And they don't sell single GPUs. They don't sell boards. They don't sell even racks. They sell the entire cluster, architected with NVLink for the short-reach stuff, kind of GPU to GPU, which is a very tight mesh, and then the longer stuff is InfiniBand. How does that impact your ability to win in the back end, and how do we anticipate the emergence of other vendors, such as AMD coming out with chips, altering the ratio of InfiniBand-dominated builds versus Ethernet-dominated builds?
John McCool
That is a great question. I think at the lowest end, right, of dozens, multi-dozens, that NVLink technology is going to be the dominant interconnect and may be appropriate for some enterprise use. At the other extreme, you have the large cloud providers that see that incremental size of their clusters can yield better results and don't understand where the knee of the curve is yet, right? So they're the ones at this side that are going to drive this move to Ethernet. They're also interested in multi-vendor capability. So if you're not NVIDIA and don't have access to InfiniBand technology, you're going to be very interested in partnering with a company that's doing Ethernet. The cloud customers want diversity in their networks to put multiple different kinds of endpoints on. So the Ethernet is going to be driven from the high end, probably down. And where that meets in the middle, I think we've got to sort out how that goes over time, but there is definitely a push from the cloud folks. And Martin, I don't know if you have anything else you want to add in.
Martin Hull
Yes, I mean, Alex made a good point, right? NVIDIA is shipping the majority of GPUs, but I don't think that's going to remain the status quo. So when you do get a second vendor or a third vendor or maybe a fourth vendor, they're not going to have an InfiniBand offering. It's going to be Ethernet, and then there is an open market. And then if you look at the NVIDIA Mellanox side of it, they continue to say they offer both Ethernet and InfiniBand, recognizing there is a need for an Ethernet solution here. Once it's Ethernet, it's an open market. And these large customers, as you said, John, they want multivendor, right. They don't want single vendor. But where they are in the cycle, they need a solution now, and then they're going back and reengineering, and they're benchmarking the different technologies, benchmarking the different network designs to make sure they get the biggest bang for the buck. And that almost goes back to that question you had on efficiency, right, benchmarking to make sure that I'm getting the best value out of my GPU cluster. We all know the cost of the GPU cluster is significantly higher than the cost of any networking interconnect. So to optimize that GPU, putting in the best Ethernet network is the way you're going to get the best value out of that GPU investment.
Alex Henderson
So just to level set everybody's understanding of what we're talking about here: NVLink connections are essentially active optical cables. They're VCSEL-based transceivers. They're very short reach in nature, a lot lower cost, whereas InfiniBand uses CW- or EML-based lasers that have longer reach and can go across a data center as opposed to shorter links. So in the CPU world, I think active optical cable was, what, 20% of the connections; in this GPU world, I think they're up to, what, 40% to 50% of the connections. Is that right?
John McCool
I don't have a good lens into the...
Martin Hull
Yes. We don't track the interconnects on that one. A lot of that gets sold with the GPU clusters.
Alex Henderson
Right. Well, I guess the question ends up – and the reason I ask that is to get everybody to understand what the distance – the difference is. But the question is, when we get from selling a single cluster to multi-cluster, and I would assume that that happens pretty quickly – you get the first cluster in there and you learn how to work it and then you go to the next – are cluster-to-cluster communications almost always going to go Ethernet?
John McCool
Yes, front end. If they're connected to different back-end networks, which is how we would define a cluster, they're connected with Ethernet on the front end. I think the way I might frame that is: you could think of NVLink maybe being optimal up to hundreds of GPUs; then there's this place around 1,000 where you sort of have maybe the InfiniBand versus Ethernet debate that will linger on; and then a set that's going to multiple thousands, pushing towards tens of thousands, which is kind of Ethernet land.
That's kind of a rough cut of how you might think about this. Now I think NVLink, certainly in that small cluster, being a GPU-to-GPU-to-memory type of technology, probably isn't a great application for InfiniBand or Ethernet. But the rest of that, I think, is kind of up for grabs.
Alex Henderson
Okay. So how do you see the world shifting to – what's the time line for when the world shifts to other vendors? And as that happens, how long do you think it will take for Ethernet to become a demanded technology within the clusters sold by NVIDIA?
John McCool
That's – let me reframe that a little bit based on what we see from our perspective, right. We talked about us being in trials today, and 2024 being proof of concept – so folks building a cluster, benchmarking, starting to bring it operational – and 2025 being more of a production type of environment for Ethernet-based clusters. Now, that's what we have line of sight into, our visibility.
I think you have a broader question of when does that influence the market and change the whole mix, right. I think what we've seen before with some of these technologies, it starts with the cloud – large cloud providers and then is adopted into other segments like enterprise or service provider that goes on. But we see the large Cloud Titan-type customers really leading this charge to Ethernet-based clusters.
Martin Hull
Yes. I think, Alex, you're going to see offerings of AI-as-a-Service, whether that's on-prem or cloud-based, and it's then going to depend on how big that cluster is. Is that cluster scalable using NVLink or a small InfiniBand network, or, when we say AI-as-a-Service, does it need to be extended across the whole data center floor? It's still AI-as-a-Service. So it's going to depend on how the market adoption of these products goes, effectively, because AI is a technology, and that alone doesn't answer the question. You need to have that solution, whether it's a vertically integrated product or a horizontally integrated product.
Alex Henderson
So the reason I hesitate on this definition, the way it's being phrased, is it strikes me that Ethernet won as a result of the fact that it's the most efficient technology for handling network traffic. InfiniBand was originally designed around large block transfer. It has really been kind of shoehorned into this architecture. And the only reason people are choosing to buy it is because the sole vendor, with very high prices, is able to force them to take the entire architecture if they want to get the shipment.
It strikes me that over time, as that becomes more democratized and disaggregated, Ethernet will win out again, and therefore, Ethernet should gain share versus InfiniBand. And one would think that at some point, NVLink, which is not an Ethernet-compatible protocol, would also shift. So why wouldn't we end up, in the fullness of time, with a fully Ethernet-architected network?
John McCool
I think it's that fullness of time piece. So I mean, we're definitely driving this charge to move to Ethernet. No doubt about it. There's a technological piece. I'd say, the other piece that's not appreciated by people is the adaptability of the community around Ethernet to drive new technologies and standards. The Ethernet we're shipping today is not the Ethernet we shipped in 2000 or 1995.
Interesting piece of history here, InfiniBand led the charge to 10 gigabit Ethernet as well as RDMA. Both of those, the Ethernet community looked over their shoulders and said, well, we can do that too, and adopted high-speed ports and took RDMA and adopted a standard called RoCE, RDMA over Converged Ethernet. So it basically leveraged a lot of these capabilities and integrated it into Ethernet. And what you see happening today is the UEC taking a fresh look at what it means to run GPU traffic and defining a set of capabilities that will be interoperable amongst multiple vendors to make Ethernet work better for GPU traffic than InfiniBand does today. So it's that multi-vendor coopetition, cooperation that's really driven Ethernet through multiple generations all the way going back to voice over Ethernet.
Alex Henderson
Well, so I guess, my point would be, if 2024 seems to be that lull between the very rapid rise of AI, but heavily centric to clusters delivered by the prime supplier to a world that is more Ethernet-driven, doesn't that imply in 2025, 2026, 2027 that that growth rate should accelerate your penetration of the marketplace?
John McCool
I think the puts and takes around that are more GPU vendors. There's a lot of in-house opportunities that people are developing GPUs that would naturally go to the Ethernet. And as that endpoint community broadens, as well as the successful deployments in these large cloud customers, I think that's how this thing starts to take shape and move.
Alex Henderson
You have changed the definition of the Cloud Titan group to add Oracle and remove Apple. That, I thought, was pretty interesting. Where are you in terms of broadening your customer base from the Microsoft/Meta dominance in your revenue streams to the second-tier, third-tier cloud customers and the Oracles of the world that add to that Cloud Titan group?
John McCool
Sure. Our original definition, and continued definition, of that is really people who have 1 million-plus servers. And according to analysts, Oracle moved into that category and another customer moved out of that category. So we just kind of adhere to that. I think what people need to appreciate is there's just a certain set of customers whose networks are enormous, and it takes multiple cloud specialty providers to add up to the same TAM, effectively, as those very large 1 million-plus customers, right.
We do really well with the Cloud Specialty group. They kind of have the same design principles, the same care about operational efficiency that the cloud folks do. There just need to be a lot more of them to equal that same scale. In terms of diversification, I think the enterprise piece has been an ongoing and continued push for us, both through acquisitions broadening the portfolio and also our sales and marketing coverage.
Alex Henderson
So Cisco claims that it's got orders for $500 million in AI, and I know you don't want to talk about Cisco, but I kind of can't resist. I don't see them in any of the reference designs out there. Yet I see Arista in every reference design I come up across. Can you talk about where you are versus where they are within the AI market?
John McCool
Not really. I mean, I don't know how they define that category. I think we're being very specific about what's AI to us. And you pushed on this call, Alex, the front-end piece, I think we didn't – you didn't pin us down on that, but it's hard to count, right? I mean it's what's a front-end port that's going in with GPU, what's going to a front end of a cluster. So we're just looking at that back end because that represents a new opportunity and technological differentiation.
Alex Henderson
So that $500 million probably includes everything from ports to front end, back end and I wouldn't be surprised if it's got optics and other stuff in there?
John McCool
I don't know.
Alex Henderson
But you don't see them in almost any reference designs. Am I right in that?
Martin Hull
Not that I know of. But I mean, the question of how they're defining their TAM or their segment is really a question for them. We define what we're doing. Also, they say they got orders; we're talking about our target for revenue, not our target for orders.
Alex Henderson
Let's shift to enterprise. So, clearly, enterprise grew faster than cloud in 2023. How does this relate to demand, and how much of that's pent-up demand that couldn't be shipped in CY 2022 as a result of biasing to the cloud and the cloud guys getting in line earlier?
John McCool
It's a mix. So we've been very direct about the fact that we've watched the cloud deployments and the shipments to them as they're kind of doing their build-out. They got in line first, and it was an opportunity for the enterprise customers and their shipments to move forward. At the same time, I think we're really pleased with the deployments that we've seen in the enterprise and subsequent wins.
I think somewhere in the last two or three years, we started to break into some new verticals. We were always strong in the financial area, media and entertainment. But with the campus offering and some of those designs, health care has become really interesting, as well as general purpose, industrial manufacturing areas. So, good wins there. And I would also say that we still feel, even in our large accounts, fairly underpenetrated. So in those original large accounts, we're known as a data center company, very strong there; we've branched out into routing, but there are still opportunities for share gains within the accounts we're in.
Alex Henderson
So when do the supply chain issues normalize? When do your lead times get back to normal? And I know you're redefining normal as something higher than it was before, but the business historically has been more turns-oriented in the enterprise than backlog-driven. So when does that normalize, when do you start to bring the inventories down, and what's the slope of that?
John McCool
Sure. There's definitely a new normal, and I'm not sure we completely understand what the new normal is. We are well past the can't-get-people-into-factories phase, the shortages, getting a call that things aren't coming in next week; things have become more predictable. We have seen a shift from hard-to-get components – these little small analog devices – to supply constraints around large-chip capacity, 7-nanometer and below, and substrates, and that's all driven by the demand on AI and, subsequently, down the supply chain for those process nodes and substrates.
So our lead times for those large chips remain stubbornly high. They've come down a bit, but they're still probably 2x from where they were pre-COVID. So that drives a continued extension of our lead times.
But we have cut the lead times in half from where we started the year, and I think we're comfortable with that. We look at inventories both as inventory on hand as well as purchase commitments. So if you add those two – I think Ita showed a nice chart at Analyst Day – we have driven that down from a peak of about $6 billion to a combined roughly $4 billion number between the two. But we're still...
Alex Henderson
That was mostly purchasing commitments that came down as opposed to inventory though?
John McCool
Inventories actually went up. So we're starting to see some of that inventory that we purchased come in. So the mix is about 50/50. So the team – we have a team that manages not only incoming inventory, but those purchase commitments, to make sure that we have the right mix that comes in as we look at future forecasts.
Alex Henderson
I'm running out of time here. I've got a couple more minutes left. Can't resist asking about ZTNA, but before I do, is inventory obsolescence becoming a problem? Are there a lot of inventory carrying costs or write-downs each quarter?
John McCool
So that's – I mean, when we went out and made those purchase commitments, the fortunate thing is we were early in the cycle with our 400-gig products. We manage that carefully, and we're looking at inventory mix and make sure we're optimizing what we're bringing in-house and where we are in terms of those commitments coming in.
Alex Henderson
With the last three minutes here, I really love the ZTNA partnership that you have announced with Zscaler, and probably integrating with CrowdStrike as well. And I know it's really early days, and I know Liz didn't want me to ask the question, but it strikes me that we are in a world that has been client-server for 35 to 40 years, perimeter-defense architected, firewalls, all of that kind of stuff.
And we're moving to what you guys early on called points in the cloud, where the user is taken off the enterprise network. And as a point in the cloud, the application is an API gateway that's a point in the cloud. And we're just simply connecting across the cloud to all of these points. It strikes me that you're extremely well positioned to participate in this new world. And that opens up significant avenues of revenue to you from legacy-architected products like firewalls that wouldn't be needed under this network, driven all by this single cloud-native, microservice-based kernel and operating environment that you built. Can you talk a little bit about that broader vision?
John McCool
Yes. I think the firewall was constructed with a vision that the enterprise was physically secure. Nothing bad happens inside the enterprise; all the bad guys are outside. So if I put the firewall between the Internet and my data center, everything is going to be good, right. What we found is a lot of bad things happened, with somebody bringing in a laptop that's infected or end users getting infected.
So what's happening is you need some level of security for east-west traffic in the data center. And the bandwidth is enormous, right, if you think about all the traffic going from server to server or GPU to server or end user. So there are two things that happen. Very security-centric data centers are just adding tons of firewalls. And then there's an associated policy that has to be managed and administered to deal with that east-west traffic. It's not the right solution.
And some people just don't do anything. So they don't really have any concept of east-west traffic protection, because it's too expensive and too hard. So the network has always been good for segmenting traffic. We've done VLANs, we've done other technology to keep things in different segments. And security products have always been good at asserting policy on how things traverse, or also detecting what are malicious websites, et cetera.
So if you can utilize the network for enforcement of traffic movement and combine it with some policy-type aspect with a best-of-breed provider like Zscaler, we think you can build very exciting architectures that solve this fundamental problem.
Alex Henderson
And with that, we've run out of time. I'm not great with managing the time when I've got such an interesting subject to cover. Martin, John, thanks so much for joining us. Operators in the background, Zach, Olivia, thank you so much. And for everybody who's zoomed in, I hope that was constructive for you, and thanks for joining. And with that, it's a wrap.
John McCool
Thank you.
Martin Hull
Thanks, Alex. Bye-bye.