Arista Networks, Inc. (ANET)
Bank of America 2023 Global Technology Conference Call
June 06, 2023, 13:40 ET
Company Participants
John McCool - Chief Platform Officer & SVP, Engineering Operations
Conference Call Participants
Presentation
Unidentified Analyst
Okay. So I have known John McCool for many, many years. He's now at Arista. He was at Cisco. We used to talk about the Catalyst 6500 back when my hair was less salt and pepper. And the good thing about John is we can go deep into technology and we can speak about basically what drives the market, et cetera.
So I'm going to -- like always, I have a list of questions. If you have -- we're going to make it interactive. If you have a question, please raise your hand, and we have a microphone and I'll just switch the microphone to you. Because I'm sure that you have as many questions as I have.
Question-and-Answer Session
Q - Unidentified Analyst
And I want to start from the end, John. First of all, welcome. Thank you very much.
John McCool
Thank you for having us.
Unidentified Analyst
I want to start from the end. NVIDIA drops a good bomb and says that they're growing so much in AI and generative AI. And the question is, as much as you can see, what's the role of Ethernet in generative AI and what's the role of InfiniBand? Meaning, is this an opportunity for the Ethernet switch guys like yourself and others, or is it just going to be kind of interconnected with InfiniBand and there is less of a need for Ethernet switches?
John McCool
Sure. Let me kind of talk about maybe that $11 billion or...
Unidentified Analyst
Yes.
John McCool
Customers are, first of all, in early phases of deploying the technology, investing in applications for AI and trying to figure out how they're monetizing it. So I think it's important, despite that number, to think about where we are in the cycle. It's very early, very exciting. The other aspect that we see is, because GPUs are so expensive, people really want to make sure that GPU is being utilized, that it's not stalling, and that's driving interest in the highest speed grade of Ethernet. The second piece of it is they're very interested in the nonblocking nature of our VoQ, Virtual Output Queuing, technology in our higher-end platforms like the 7800. So all that's good stuff.
But then you get into, okay, how many of those GPUs are connected together with NVLink, which is NVIDIA's proprietary technology -- there's an industry variant called CXL -- and you can build small clusters of GPUs and CPUs together connected to memory through that. You can build a bigger cluster with InfiniBand or Ethernet. And then at some point, you're going to have to connect that cluster to Ethernet and drive bandwidth in and out of that cluster. So this whole question about InfiniBand versus Ethernet is around, okay, if I'm moving from NVLink to a cluster, how big is that cluster? And we view that InfiniBand will take it a certain way. But at some point, as those clusters grow larger and you're interested in technology like multi-tenancy -- you want to share that NVIDIA cluster with multiple tenants, and that might even be multiple properties within your cloud -- the inherent segmentation technology in Ethernet becomes more important. So there's some breakpoint in there of cluster size, InfiniBand versus Ethernet, that we're still going to try to figure out.
Unidentified Analyst
Got it. So when we talk about generative AI, we speak about a training infrastructure and we speak about inferencing infrastructure. Is there a difference in the use of Ethernet for training versus inferencing? Meaning, right now everything is in the center of the cloud. It's relatively small. We are playing with it. We're writing songs to our loved ones. We are not using it yet for applications.
Once we pass that and we find applications to drive generative AI, first of all, from an architecture point of view, how do you view the inferencing portion? And then how do you view your participation, or Ethernet's participation, in the inferencing portion of generative AI? Is there a difference or not?
John McCool
I'm not sure that I've seen the difference personally. There could be at this point. But getting things in and out of that cluster is going to be extremely important.
Unidentified Analyst
Got it.
John McCool
And the speed at which you can do that and then the latency of the interaction become critical.
Unidentified Analyst
And is Ethernet giving something that InfiniBand cannot give, meaning is there any advantage of Ethernet switches that you can accomplish something that you cannot accomplish with InfiniBand?
John McCool
Absolutely. The scale, and what we hear from customers is also the familiarity with Ethernet and the consistency of operation between clusters for AI, general-purpose computing, et cetera. So that universal aspect, the commercial aspect, is very important. It's a competitive market in Ethernet; there are multiple suppliers. InfiniBand is a captive technology at this point, a single-source solution. So those things come into play.
Unidentified Analyst
Got it. I have a general question and then I'm going to go back to basically more Arista. And the general question is whenever we think about dependency on a single vendor, we -- the market over the last 25 years went from a Cisco-centric market into a lot of new players like yourself and Juniper and everyone is addressing different sides of the market. But the dependency moved to a single source for the semiconductor, how is that going to change or not?
John McCool
Right. I think semiconductors are hard and they've gotten harder as you've gone down technology nodes from 16-nanometer to 7 and now people talking about 5 and 3. The silicon processes are more expensive. Networking is still a very small piece of the spend at the silicon level, right? People are investing in GPUs and CPUs at a much higher scale. So you want to leverage those technologies from those other markets to build your networking silicon.
And that's where we've really benefited as a company going to a merchant silicon approach. Many of us built silicon in prior lives for networking, but it just got to a point where you had to amortize substrate costs, HBM memories that are on chip, multichip modules, and be able to leverage that. So it's just difficult. It's expensive. And if you don't find a home in a large cloud provider, it's very difficult to keep up with the expense.
Unidentified Analyst
Got it. Last year, Cloud Titans grew very, very fast for you, 128%. And we've seen a slowdown already. I mean we can talk about a slowdown of orders or a slowdown when companies are taking from backlog. At the end of the day, we've seen even public statements from cloud companies saying they're going to slow down some investments. This morning, Ciena was very explicit about what they're seeing in the optical layer. They said that Cloud Titans, or cloud providers, are slowing down deployments, they're pushing out deployments. Things that were supposed to be deployed now are being deployed in the future. What is your outlook for Cloud Titans at 2 levels? Number 1 is the actual deployment need. Forget how they order. They might order a little bit every quarter or they might give you a 3-year contract. So forget how they order.
In terms of deployment, what's your outlook for deployment? Are you seeing a slowdown or potential for slowdown in actually deploying products? And the second thing is the other side, the ordering pattern how will that work?
John McCool
Yes. Look, I mean, it was a phenomenal year for Cloud in 2022. We're still saying that Cloud is going to be a significant mix for us in 2023. When we got on the call, just looking at the customer engagements we have and the projects we're involved in, we reconfirmed the consensus revenue growth of 26%. And as you look out to 2024, as lead times have shrunk, that visibility has gotten shorter as well, commensurate with the lead time. So we'll see what happens. I think there is a lot of excitement around AI. We'll see how that picks up. We've been involved in a lot of those use cases so far, and we'll see how it goes.
Unidentified Analyst
Got it. Is there a risk that Cloud Titans stop, when I say stop, I take it to extreme just for the sake of asking your question, but it could be a slowdown, but stop investing in their network for a while because they just have enough capacity.
John McCool
I think that we've seen a long-term trend of consistent growth of those backbones and networks to keep up with demand. Are they cyclical? Is there some nature to that? We don't know. I mean we've seen kind of one cycle in our entire lifetime as a company. But there's probably some -- maybe some aspects of that.
Unidentified Analyst
Got it. Okay. We spoke about AI at the beginning. Whenever we talk to the semiconductor guys, it's one picture. But when we talk to the switching guys, they kind of temper our expectations and say it's not a big contributor this year, maybe in the future. So how does this evolve over the next 3 years? Without a timetable, what do you think are going to be the steps of AI's contribution? How will Arista participate over the years? And what needs to happen for you to participate big time in AI?
John McCool
Sure. I think people are moving from kind of trials to these early deployments. We've certainly, from a software stack perspective, made some enhancements, if you will, to make AI work better, like we've done with other workloads coming over to Ethernet. I think the sort of tipping point is when you start seeing announcements from end customers on how they're monetizing AI and how that's impacting their business. They'll be looking at wider deployments and maybe the efficiency and scale of those deployments relative to their ROI. And that's when I think you'll start to see this kind of Ethernet momentum around AI pick up.
Unidentified Analyst
Got it. What kind of applications are going to drive AI? And again, today, I'm using it for every birthday of my loved ones.
John McCool
I wish I was smart enough to answer that question. That's really the great question: how it's going to be deployed and what it's going to disrupt, right? The anecdotal things we've heard so far are just kind of amazing, but I don't know.
Unidentified Analyst
Got it. Okay. On the conference call, you spoke about reduced visibility from Cloud Titans. You touched on it a little bit, but what are the components of reduced visibility? What does reduced visibility mean?
John McCool
Sure. I think if I go back pre-COVID, we talked about visibility from kind of a procurement standpoint of 6 months, maybe longer, though, from an architectural and product investment time frame. As lead times started to extend and particularly with the cloud folks, they were buying chips and processors, and they saw this early that lead times were extending, and they really focused their plans way longer than they used to because of the issues with supply chain.
Enterprise took a little bit longer to realize that was happening, or maybe even admit that they were going from very short lead times to these extended lead times. So as lead times increase, there's a build commensurate with the lead time increases. And now, as supply chain has gotten more predictable -- [indiscernible] I want to say normal, but more predictable -- and those lead times start to condense, visibility goes down with it.
Unidentified Analyst
Got it. Now, you gave guidance for the year. First half '23 versus first half '22, you're going to grow 41% per guidance. And what's left for the second half is about 14%. So we're going from 41% to 14%. A lot of it is tough comps at the beginning. What is the risk -- and I'm not looking for a number, I'm looking for a qualitative discussion -- what is the risk that you go 41% first half, 14% second half, negative first half '24? Meaning, could there be a scenario where we see a reduction of deployments, not orders, as the backlog is drawn down, and 2024 could actually be a negative year instead of a growth year?
John McCool
Right. So we haven't talked about 2024. I'm glad you're not looking at the number, but I think that we still see a lot of interest from cloud customers. And also we have an enterprise business. We haven't talked a lot about...
Unidentified Analyst
I know, we're going to talk about that next...
John McCool
Where we've seen growth, and we're optimistic on our growth projections as a company.
Unidentified Analyst
Got it. Okay. So I'm not going to get an answer for it.
John McCool
Yes, absolutely not going to get the answer for it. I've known you for a long time.
Unidentified Analyst
Okay. Your title is Chief Platform Officer, what does it mean?
John McCool
I have 2 things. So I have hardware development at Arista, as well as manufacturing and supply chain.
Unidentified Analyst
Got it.
John McCool
And I spent a lot of time in the last couple of years on that latter part of my...
Unidentified Analyst
You are the one to blame for the supply constraints. I think we spoke already about the Cloud Titans, right? There's no need to go back to it. But I want to talk about service providers, I want to talk about enterprise, and I want to talk about second-tier cloud.
John McCool
Okay.
Unidentified Analyst
So let's start with service providers. How do you participate in the service provider market? A high-level question.
John McCool
Sure, sure. So we participate through both switching and routing, conventional products. We've done really well, I think, in the data center portion of those service providers with routing. We saw in service provider some very early wins that, looking back, we categorize as greenfield opportunities. So not necessarily needing legacy features and detailed MPLS functionality -- kind of looking forward.
In fact, I think early on people weren't moving to those architectures as quickly as possible. So we've seen more change in that market since, which is good. We've also really built out our routing stack with more detail and functionality. So we're highly engaged in that market. But still, it's moving slower for us than we would like or hoped, and we continue to focus on it.
Unidentified Analyst
Right. And early on, the functionality of your router was more limited, meaning you went after certain opportunities. You didn't go after the entire routing market. How do you envision yourself 5 years down the road?
John McCool
I think that the architectures in those service provider stacks have to change to more modern cloud-like architectures. So rather than focus on being the 19th vendor, if you will, to build out that legacy stack, we're kind of focused on that next-generation architecture.
Unidentified Analyst
Got it. Just keep on routing, going back. Where is your routing today being deployed? Meaning outside of service providers, is there a demand for the router also in the other markets?
John McCool
Absolutely. The reason we went into routing was really based on cloud providers. They got to a point where they were building their data centers, and they wanted to connect one logical data center but couldn't actually fit it in a physical plant. So they added another tier, we call it the Universal Spine, and interconnected it effectively with multiple routers. The legacy approach was kind of 2 high-powered routers, very expensive, interconnecting data centers to be redundant, to an N-way backbone that was routing between sites. That was our entry. Then we extended it to the enterprise.
The enterprise TAM for routing, for backbone, is much smaller than top-of-rack and distribution switches, but highly strategic. You have site-to-site recovery, disaster recovery. Maybe I'm a company that has multiple assets, multiple companies, and I'm just controlling the backbone; they want to interconnect them but segment them. So that's been really important. Service provider is the third, which has been more of the legacy protocols and not moving as quickly as we'd like.
Unidentified Analyst
Switching to the enterprise market. First, what are the trends in the enterprise market? Did they go through the same capacity constraints throughout 2021 and 2022, and now are we seeing the same kind of trends we're seeing in the cloud? Or is it more normalized growth without the big ups and downs?
John McCool
It's kind of like the housing market, right? You have all these different enterprises, and they all make their decisions to buy or sell at different times, right? So they don't all aggregate on a technology cycle as pronounced as cloud. But just general trends: they had kind of 40 gig in the data center and 10 gig to interconnect their wiring closets; the modern technology is kind of 100 gig and 25 gig. Some of their access points were running at 1 gig; now, with WiFi 6 and WiFi 7, you want to upgrade those to 5-gig ports.
The other thing that's happening is more IoT infrastructure. If you're a hospital, you're connecting mission-critical equipment that isn't in the data center, but it's out in the hospital itself. That needs to be secure. Security is a top of mind issue in the campus today. So that's all happening.
Unidentified Analyst
Got it. Okay. I'm looking at the time. We're good. Before I go back into technology, I want to talk about go-to-market. The go-to-market for service providers is different, cloud is different and enterprise is different. Do you feel that you have all the components needed to address all these opportunities? And if not, where do you put your focus in go-to-market? The question is at a higher level: I want to understand if the company's focus is entirely technology, or if go-to-market is also a focus for the company.
John McCool
Absolutely, go-to-market is a focus. I think sometimes when people look at campus, they think of that as low end or mid-market. Our focus is on the top Fortune 2000 or higher companies, and then what networking needs they have holistically. We started in that market just as a data center point player, then we added routing, and then we added campus. But that's the focus, right? So as the portfolio has gotten broader, those companies view us as a credible alternative to the incumbent as an enterprise networking company. And before, our sales team would have to wait for the data center refresh, and if they had just started their job and missed that opportunity, it's 3 years before the next refresh. Now they have a security opportunity, a campus opportunity, a routing opportunity.
So that's where we're focused. So it's one holistic focus on that market and what can I sell into that.
Unidentified Analyst
I understand very clearly your value proposition to cloud. I understand very clearly your value proposition to service providers. For the enterprise, I ask you to articulate it at both levels, both the campus environment and the data center environment. What value do you bring to someone like Bank of America, a big financial institution? And what value do you bring to smaller customers?
John McCool
Yes. I know kind of in the cloud, we think the value proposition is high-performance networking. But in fact, our origins were in the cloud, where they had very few people to operate a network with millions of users. So there was an operational efficiency argument for Arista entering the cloud. It's the exact same thing in the enterprise. How can I deploy and be more operationally efficient? How can I do upgrades? What's the cost of quality in my software? If I have to get a security alert and upgrade, can I upgrade quickly? All those aspects of ease of use and operations are what we sell into the enterprise.
Unidentified Analyst
Got it. Campus, what's in...
John McCool
It's exactly the same. And in fact, I can operate a campus network in the same manner as my data center and my routing infrastructure, using CloudVision and automation. The other thing that we've done, maybe a little bit different in the enterprise, is we see the architectures that are happening in the cloud, how they manage their infrastructure and many of the tools they developed on their own, and we're helping enterprises on that road to automation. Most of them aren't ready to go to full automation like a cloud provider, but there are a discrete number of steps they can take to get from the command line interfaces that are pervasive in our industry to a fully automated stack.
Unidentified Analyst
What is your focus on campus? I want to understand it. Who are the customers you're going after? Are these your existing data center customers, where you give them streamlined operations with campus? Or could it be greenfield, where you're not selling to the data center, but you're trying to sell campus?
John McCool
The plan was that we thought our data center customers would be the first customers of our campus products. And some of that was true, but we were surprised that there were a lot of prospects -- some we had been calling on for a data center opportunity, but they weren't ready -- that actually took us first in campus. And as you deal with companies maybe outside the financials, there are just more campus opportunities in a lot of these enterprises because they've outsourced a lot of their workloads to the cloud already.
Unidentified Analyst
Got it. I'm going to stop here for a second before I continue. Is there any question from the audience? No, good. You give me more time.
John McCool
Okay.
Unidentified Analyst
Okay. 400 gig as a driver. First of all, explain: what's the target market, who is deploying 400 gig? And what kind of a driver is it?
John McCool
Sure. I think the initial 400-gig deployments we saw were as data center interconnect -- the top tier of the network, around aggregating all the bandwidth and interconnecting sites. That was a driver to 400 gig. Now we see some AI: kind of, how do I connect these clusters and get performance into the clusters? We've also seen use cases outside of the cloud: financial verticals, media and entertainment.
Unidentified Analyst
Got it. And is it a big revenue driver? I mean, on one hand, 400 gig is a driver because in absolute terms, it's more dollars. On the other hand, if it replaces 100 gig, then for every 4 ports it replaces, you sell 1 for the price of 2.5 ports. So it could go both ways. So is it a revenue driver, or is it more of a technology driver? That's what I'm trying to understand.
John McCool
Market analysts measure us all in ports and port share, right? We tend to think about the platform generation. So the silicon is going to 25 terabits, and you can use that silicon to connect a lot of 100-gig ports, or fewer 200-gig ports or 400-gig ports. And all our products have those variants today. So we're pretty agnostic on the port selection. We just would like you to buy a 7800.
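[Editor's note] The trade-off John describes -- one generation of merchant silicon carved into different port speeds -- can be sketched with simple arithmetic. The 25.6 Tbps figure below is an assumption extrapolated from the "25 terabits" he mentions, not a stated product spec; the point is only that the same ASIC yields 4x as many 100-gig ports as 400-gig ports, which is why port counts and platform revenue can tell different stories.

```python
# Illustrative sketch: how many ports of each speed a single switch ASIC
# generation can expose, assuming ~25.6 Tbps of aggregate capacity.
ASIC_TBPS = 25.6  # assumed aggregate switching capacity, terabits/sec

def port_count(port_speed_gbps: float) -> int:
    """Number of front-panel ports of a given speed the silicon supports."""
    return int(ASIC_TBPS * 1000 // port_speed_gbps)

for speed in (100, 200, 400):
    print(f"{speed}G ports per ASIC: {port_count(speed)}")
# 100G -> 256 ports, 200G -> 128 ports, 400G -> 64 ports
```

Measured in ports, a shift from 100 gig to 400 gig shrinks unit counts 4-to-1 even when the underlying platform sold is identical.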
Unidentified Analyst
Got it. That's kind of how we think...
John McCool
And there's been more disparity in this cycle around the port types. When people moved from 40 to 100 gig, there was one big click because it wasn't as elegant technically to split the ports. But now we see customers make different choices based on legacy optics, what they have in their fiber plant, et cetera.
Unidentified Analyst
Got it. There's something I don't understand in the market, which is market share. Cisco is the legacy provider in switches. They've been losing share for many years, but they still have a respectable market share. When I look at the 400-gig market share, you have 40%, they have 6%. What is driving 40% market share for you and only 6% for Cisco? Maybe the market share data is not correct -- we have seen before that market share data is not precise -- but the gap is so significant that there's something in it. And I'm trying to understand it.
John McCool
Arista has done well with high-performance networking kind of since inception. And I think the 100-gig market really was an inflection point for us. We built on that momentum with merchant silicon; we think that approach was the right one, along with the quality of the software stack that goes with it and the ease of operational use. And we're just running that playbook and trying to execute on it as well as we can. And we're very happy with the share results we've gotten.
Unidentified Analyst
Got it, great. As the Chief Platform Officer, what's your challenge that you see in the next 3 to 5 years? What do you focus on? Where do you put your money on in terms of R&D, in terms of product evolution? And 3 years from now, when we look back and discuss it, what do you think will be the change in the market basically?
John McCool
If I look up the hill in terms of performance and capability, it seems as steep as it was 4 years ago. I mean there's just more, faster -- it's unbelievable. And that's going to bring new technical challenges. We fought challenges around signal integrity. Now it's about thermal design and how we can pack these things together and get performance. I also think during COVID, we learned a lot about supply chain -- how much we knew and how much we didn't know. And as the world has gotten more complicated, I think that's an area where we have to continue to be prepared for both risks as well as new capabilities and challenges.
Unidentified Analyst
Got it. One topic that we hardly discuss anymore, and I think it's still important, is white box. Some cloud companies are doing more white boxing, some less. What's your view on white box switching and white box routing as a threat?
John McCool
Yes. It's been pretty static. I mean, there are 2 large cloud customers that drive a significant portion of that spend. Our 2 large customers are all multi-vendor, one of them having other OEMs and one doing white box. It hasn't changed much. I think the threat of white box going into more of an enterprise or even SP has diminished, as the software stack availability is limited, right, and the options there. The large cloud customers that do that have their own software stacks, they have their own supply chain teams. So there's quite a bit of additional investment required to make white box happen.
Unidentified Analyst
And in the case of a white box customer -- we're not going to mention names, Google. But forget Google, forget the name of the customer. In the case of a white box customer, how do you participate? I'm keeping it at a high level. I want to understand: if the customer is choosing white box for the leaf or for the low end, does it mean that Arista is not in the network at all? Or can you still work in an environment where there are parts that are white box?
John McCool
Absolutely. I mean it's interoperable and there's some areas where maybe the software stack is different or they need different capabilities. And we have some opportunity in those white box environments for sure.
Unidentified Analyst
Same question about AI, and that's going to be my last question because of time. I'll mention names, but it doesn't matter; I'm not referring to the names. Google is using its own GPU. Microsoft is using NVIDIA's GPU. Different GPUs. Do you have different opportunities when the GPU is different?
John McCool
I think that the network is sufficiently abstracted from that GPU, that it won't make much difference. Similar to what we saw with different CPUs over time. You got to connect the cluster somehow -- that's my viewpoint today.
Unidentified Analyst
Good. With that, I want to thank you.
John McCool
Thank you very much. Appreciate it.
Unidentified Analyst
Excellent.
John McCool
Thank you.