How Dell, Lenovo And Supermicro Are Adapting To Nvidia’s Fast AI Chip Transitions
"The entire industry has to learn a new muscle of going from what is a two-year server cycle to a one-year GPU cycle with very accelerated demand," says Dell Technologies executive Arun Narayanan of Nvidia's annual AI chip releases that are making OEMs move faster than ever.
To serve the tech industry's computation needs, which Nvidia founder, President and CEO Jensen Huang said are expected to grow substantially due to the rise of AI reasoning models, OEMs like Lenovo, Dell Technologies, Hewlett Packard Enterprise and Supermicro must move faster than ever to release competitive products based on Nvidia's fast-growing portfolio of AI computing platforms.
And with multiple generations of Nvidia platforms for customers to choose from, OEMs must also do their best to order the correct mix of platforms, a mix that now includes Hopper products from the past two years, the recently launched Blackwell products and Blackwell Ultra-based offerings coming later this year, to ensure they can fulfill demand without building up too much excess inventory.
"When you look at a traditional enterprise type of engagement, Lenovo has always been known for the quality of our servers, and in the past, we've always had to do nearly a year of testing before we introduce a new product," Vlad Rozanovich, senior vice president of Lenovo's Infrastructure Solutions Group, told CRN at Nvidia's GTC 2025 event last month.
"And with the cycle times that Nvidia is putting out there, it puts a lot of pressure on us. We always have to keep thinking 12 months ahead," he added.
The challenge of managing Nvidia's fast AI chip transitions was underlined less than two weeks before Nvidia unveiled Blackwell Ultra, along with the successor AI computing platforms due in 2026 and 2027, at GTC: HPE said it suffered lower server margins in part because of higher-than-normal AI server inventory caused by the rapid transition from Nvidia's Hopper generation to Blackwell.
"What that means is when you look at the segmentation of AI, you have service providers and model builders that lead with time to market with the latest technologies [such as Blackwell]," HPE President and CEO Antonio Neri told CRN in early March. "In [the first quarter], we booked $1.6 billion of new AI orders, which was up double digits year over year, and we doubled the order bookings sequentially. But 17 percent of that was Blackwell."
Neil Anderson, vice president and CTO of cloud, infrastructure and AI solutions at solution provider powerhouse World Wide Technology, told CRN that Nvidia's fast AI chip transitions represent a big change in the way data center buying cycles have traditionally worked.
"The typical life cycle in the IT industry, if you look at [general-purpose] compute servers [and] networks, it's been seven, eight [years]. Some customers have 10-year-old technology in their data center, and it's running just fine," said Anderson, whose St. Louis-based company ranked No. 7 on CRN's 2024 Solution Provider 500 list.
"So our customers are used to that life cycle of a pretty lengthy buying pattern, but they're very quickly realizing this is very different than that," he said.
The new normal created by Nvidia is resulting in what Anderson calls a "conveyor belt of technology" that customers "are still trying to understand."
"We tell customers, 'Look, that conveyor belt, or the annual cadence, is not going to change. So you can't sit on the sidelines and go, "Well, I'm going to wait for the next one; I'm going to wait for it," because it's always changing. You've got to get started on something.' And so that's a difficult conversation sometimes with customers," he said.
Nvidia CEO Jokes About Rapid Obsolescence
When Nvidia announced a shift in its release schedule for new GPUs and their associated hardware platforms in October 2023, the company said it would move to a "one-year rhythm," a shift from its previous strategy of releasing AI chips roughly every two years.
The Santa Clara, Calif.-based company said it would speed up AI data center product releases roughly a year after ChatGPT kicked off a massive wave of spending on AI development, which raised Nvidia's profile as a critical supplier for such workloads.
At the time, Nvidia had been shipping its Hopper-based H100 GPU and associated platforms for several months, roughly two years after the company first made its Ampere-based A100 GPU products available.
The accelerated road map meant that the company planned to launch the Hopper-based H200, which increased the high-bandwidth memory capacity, in the first half of 2024, roughly a year after the H100's debut. Nvidia's next-generation platform, based around its Blackwell GPU, began shipping through partners this past January.
In late January, Nvidia made clear how well the fast product transitions were paying off when it announced that it generated $11 billion in revenue from Blackwell in the fourth quarter of its 2025 fiscal year, making it the company's "fastest product ramp" yet.
During Huang's keynote at GTC last month, he illustrated this another way by showing that the top four U.S. cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure) have bought 3.6 million Blackwell GPUs so far in 2025, in contrast to the 1.3 million Hopper GPUs they bought last year.
The reason customers are rushing to buy Blackwell, Huang said later in his keynote, is the significant increase in performance they can get in the same power envelope as a Hopper-based platform; he claimed that "in a reasoning model, Blackwell is 40 times the performance of Hopper, straight up."
This led Huang to suggest that Blackwell's major improvements would make Nvidia's Hopper-based GPUs obsolete.
"I said before that when Blackwell starts shipping in volume, you couldn't give Hoppers away, and this is what I mean," said Huang, who jokingly called himself the "chief revenue destroyer" because of his move to publicly shun Hopper-based GPUs.
"There are circumstances where Hopper is fine. Not many," he added.
Lenovo Has Become "Very Conscious" About Purchasing
Despite Huang's suggestion that Blackwell's significant performance advances will make older GPUs less appealing, Rozanovich said Lenovo still sees demand for less-powerful products like the H200 and even the L40S, which has less AI horsepower than the Hopper GPUs but, unlike those products, features graphics and media acceleration capabilities for things like rendering and simulation workloads.
"From a Lenovo perspective, we're probably selling more H200 and even H100, and even things like L40S. What we have seen is that L40S is really the perfect solution for OVX and Omniverse," he said, referring to Nvidia's OVX server platform that is optimized to run its Omniverse software for a variety of commercial 3-D applications.
At the same time, Lenovo has a "lot of customers" asking about its new products based on Nvidia's Blackwell-based B200 and GB200 chips as well as the Blackwell Ultra-based GB300 that is set to debut later this year, according to Rozanovich.
These and future platforms have significantly higher power requirements than the Hopper-based systems that came before them: today's GB200 NVL72 platform consumes 120 kilowatts per rack, and the Rubin Ultra-based racks slated for 2027 will crank things up to 600 kilowatts.
"People are now trying to comprehend the power increases, and they're also trying to comprehend the cost increases," Rozanovich said.
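A rough back-of-the-envelope sketch makes that comprehension challenge concrete: at those rack densities, a fixed facility power budget supports far fewer racks. The facility budget and PUE figures below are illustrative assumptions, not vendor numbers.
```python
# Back-of-the-envelope: how rack power density shrinks the number of racks
# a fixed facility power budget can host. All inputs are illustrative.

FACILITY_BUDGET_KW = 6_000   # hypothetical 6 MW data hall
PUE = 1.3                    # assumed power usage effectiveness (cooling overhead)

def racks_supported(rack_kw: float) -> int:
    """Racks the facility can power after PUE overhead is subtracted."""
    usable_it_kw = FACILITY_BUDGET_KW / PUE
    return int(usable_it_kw // rack_kw)

for name, kw in [("GB200 NVL72", 120), ("Rubin Ultra rack", 600)]:
    print(f"{name}: {racks_supported(kw)} racks at {kw} kW each")
# GB200 NVL72: 38 racks; Rubin Ultra rack: 7 racks from the same 6 MW budget
```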
The Lenovo executive said Nvidia GPU supply is not nearly as constrained as it was a year ago, when the company was seeing lead times greater than 50 weeks for such chips and Lenovo, like others, "had to make bets to make sure we had inventory levels."
But the consequence of the improving availability of Nvidia's products, according to Rozanovich, is that Lenovo has become "very conscious" about how much product it orders from the AI computing giant "because this inventory is expensive."
"How do we make sure we have the right pipeline that's robust enough to consume that inventory? How do we make sure we don't overbuy, but how do we make sure we don't underbuy either?" he said.
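The overbuy-versus-underbuy tension Rozanovich describes is, in essence, the classic newsvendor trade-off. The toy sketch below, with entirely hypothetical margins and holding costs, shows why expensive, fast-depreciating inventory pushes an OEM toward conservative ordering; it is an illustration, not Lenovo's planning model.
```python
# Toy newsvendor model of the overbuy/underbuy trade-off.
# All dollar figures are hypothetical illustrations.

MARGIN_PER_UNIT = 5_000    # profit earned on each GPU system sold
HOLDING_COST = 8_000       # loss on each unsold unit as it ages out

# Classic newsvendor result: order up to the demand quantile equal to the
# "critical ratio" of the cost of underbuying to the total cost of error.
critical_ratio = MARGIN_PER_UNIT / (MARGIN_PER_UNIT + HOLDING_COST)
print(f"Order up to the {critical_ratio:.0%} quantile of forecast demand")
# ~38%: when an unsold unit costs more than a lost sale, the rational
# move is to order below median demand, i.e., to deliberately underbuy.
```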
What helps Lenovo in this situation is how distributed the company's business is across the world, "which gives us multiple outlets and opportunities," according to Rozanovich.
As for how the Chinese tech giant helps its channel partners figure out where to focus, Rozanovich said it comes down to the use case, the technology and how long the hardware will be used before the customer plans to upgrade.
"What's great about our channel partner relationships is we're pretty transparent with them to say, 'Here's where we think the market is, here's where we think it's going to be by the time a customer is going to make their decision' and [then] making sure that we're aligning [on a] joint go-to-market," he said.
Demand Signals For AI Servers Are "Much Better" Than Regular Servers
Dell executive Arun Narayanan said that while there are "peaks and bubbles of inventory," his company has "done a pretty nice job" managing Nvidia's fast AI chip transitions. But he admitted that these transitions do represent a "new muscle" for OEMs and the like.
"The entire industry has to learn a new muscle of going from what is a two-year server cycle to a one-year GPU cycle with very accelerated demand," said Narayanan, senior vice president of compute and networking portfolio management in Dell's Infrastructure Solutions Group, in an interview with CRN at last month's GTC event.
Up until Nvidia's GTC event last month, the Dell executive said, the company was still seeing "strong demand" for last year's H200 GPUs, particularly among smaller customers. But he acknowledged that the situation could change after Huang revealed a slate of new AI computing platforms coming out over the next three years.
"After today's Jensen announcement, I don't know, but as of yesterday morning, it was pretty strong," Narayanan said.
At the same time, the Round Rock, Texas-based company has benefited from the unprecedented ramp of Nvidia's newly released Blackwell platform, which is selling much faster for Dell than Hopper-based platforms, according to the executive.
"It's incredibly fast. We've seen some mega, mega deals, and a lot of customer interest is ramping really, really fast," he said. "I can tell you that even from a Dell perspective, the Hopper generation ramp was in the three- to six-month time frame. Here, it's in the 30- to 60-day time frame. It's massive. It's very, very quick."
As for the Blackwell Ultra platform coming later this year, Narayanan said he believes demand will be similar to that of Blackwell. He expects the ratio of Blackwell to Blackwell Ultra sales to be 4-to-1 in this year's third quarter, then 3-to-2 in the fourth quarter.
"And then it flips the equation as you get into next year," he said.
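Spelled out as shipment shares, Narayanan's forecast ratios look like the sketch below; the "next year" ratio is our illustrative reading of the mix "flipping," not a figure he gave.
```python
# Convert Narayanan's forecast sales ratios into a percentage mix per period.
# The "2026 (assumed)" ratio illustrates the mix flipping; it is not his number.

def mix(blackwell: int, ultra: int) -> tuple[float, float]:
    """Return (Blackwell %, Blackwell Ultra %) for a given sales ratio."""
    total = blackwell + ultra
    return 100 * blackwell / total, 100 * ultra / total

for period, ratio in [("Q3 2025", (4, 1)), ("Q4 2025", (3, 2)), ("2026 (assumed)", (2, 3))]:
    b, u = mix(*ratio)
    print(f"{period}: {b:.0f}% Blackwell / {u:.0f}% Blackwell Ultra")
# Q3 2025: 80%/20%, Q4 2025: 60%/40%, then Blackwell Ultra takes the majority
```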
In the face of these quick platform transitions, Dell is "very careful about how much purchasing we do," according to Narayanan.
While this means the company must ask its customers to "tell us what the demand is well ahead of time," he said, the bright side is that many of these customers are planning AI data centers far in advance, which helps Dell with forecasting.
"The demand signal in the market is much better than regular servers because customers have to plan for this. It's not just the server. It's the entire ecosystem. So we are beginning to see that a lot, and that helps us understand what the demand profile is going to be and place our bets with the right silicon vendors," Narayanan said.
In terms of how Dell talks to channel partners about where to place their bets, the executive said Hopper-based platforms are their best bet for now when it comes to getting new data centers up and running as soon as possible. But it's ultimately about timing demand to when the latest and greatest platform is available, he added.
"You need to think about timing [for when] your demand is and position it. That's what our communication to our partners is. Our internal communication to sales teams is the same thing," Narayanan said.
For Inference, The Platform Of Choice Varies
Supermicro executive Vik Malyala is on the same page when it comes to lining up customer demand with the latest platforms Nvidia has made available. But he said the right choice also depends on the customer's workload, assuming the customer knows what it wants to do.
"From a customer's point of view, the more they have an understanding of what they want to do with it, it's going to help them and us," said Malyala, who is senior vice president of technology and AI as well as president and managing director of the Europe, Middle East and Africa territories for the San Jose, Calif.-based server vendor.
For training models, "it absolutely makes no sense for people to look beyond what the [Blackwell-based] GB200 and the B200 [are] able to offer because [they're] providing the highest-performing platform today," he said at last month's GTC event.
That conversation then shifts to the Blackwell Ultra-based GB300 and B300 platforms for customers that are looking at training needs in the second half of this year, he added.
For inference, however, the situation is more nuanced, according to Malyala.
If a customer's workload relies on 16-bit floating-point (FP16) precision, an older GPU like the L40S, H100 or H200 NVL would suffice, he said. But if they can take advantage of the smaller 4-bit floating-point (FP4) format, which is only supported by Blackwell and future platforms, the B200 or GB200 would work better, provided their data centers can support the power requirements.
"So now we are actually getting into conversations with customers on what workloads they are running," Malyala said.
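A quick way to see why the precision format drives the platform choice is to look at weight memory alone, which scales with bytes per parameter. The model size in the sketch below is an illustrative assumption, not a figure from Malyala.
```python
# Estimate GPU memory needed just to hold model weights at each precision.
# The 70B-parameter model size is an illustrative assumption.

PARAMS = 70e9                                    # hypothetical 70B-parameter model
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

for fmt, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30
    print(f"{fmt}: ~{gib:,.0f} GiB of weights")
# FP16: ~130 GiB, FP8: ~65 GiB, FP4: ~33 GiB. A model that needs multiple
# FP16 GPUs can fit in a quarter of the memory at FP4, which is why native
# FP4 support on Blackwell changes the inference sizing conversation.
```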
The Supermicro executive said it's very helpful for Nvidia to provide a detailed road map for the AI computing platforms it plans to release over the next few years because it helps everyone from vendors to customers plan their investments accordingly.
"These things do take a lot more time to build, so I'm glad that was presented," he said.
Nvidia Wants Partners To Ramp "As We Continue To Innovate"
Dion Harris, senior director of high-performance computing and AI factory solutions go-to-market at Nvidia, said the company has become "more forthcoming and transparent" with not just its product road map but also the demand signals it's seeing for each product.
"I've been in some meetings where we're working with [data center partners] like Schneider Electric, Vertiv and others where we're literally having co-engineering design meetings to say, 'When we build this [next-generation, Rubin Ultra-based] Kyber [platform], what do your products and systems need to look like?'" he said at last month's GTC event.
"But we're taking a step further, saying this is the demand we are seeing for these products that are hitting in this time frame," he added.
Huang, Nvidia's CEO, "usually encourages people, 'Don't buy all that you can now. Buy some this year because we're going to have a new architecture that comes out the following year, and it's going to be even better,'" according to Harris.
"Allow yourself to ramp and grow as we continue to innovate," he added.
Ian Buck, vice president of hyperscale and high-performance computing at Nvidia, said the announcements Huang made at GTC about Blackwell Ultra and future platforms showed that Nvidia is "becoming an infrastructure company."
This means the AI computing giant has a lot more responsibility for supporting and preparing its partners, which now include AI model developers.
"For all of those foundational model builders, for the future AI that's coming, they need to know what is coming on our road map to know what to go build, how many billions and trillions of parameters [will be supported], and what is going to be the art of the possible when they get there, so that they are informed to create the demand that's going to meet the supply [Nvidia is creating]," he said.