AI is evolving at a speed the industry has never seen. AI cloud providers—built solely to deliver the fastest, most efficient GPU clusters—are pushing the boundaries of compute, networking, storage, power, and cooling. Their innovations are influencing hyperscalers, sovereign AI initiatives, and the entire data center ecosystem.
Leaders from CoreWeave, Dell, NVIDIA, and VAST Data joined Solidigm in conversation with TechArena at the 2025 Open Compute Project (OCP) Global Summit to push for relentless efficiency and redefine the AI innovation curve.
Watch the replay on YouTube and continue reading for key considerations and predictions impacting the next few years of AI infrastructure.
Open standards are an innovation multiplier. Industry leaders including CoreWeave’s Jacob Yundt, Dell Technologies’ Peter Corbett, NVIDIA’s CJ Newburn, Solidigm’s Alan Bumgarner, and VAST Data’s Glenn Lockwood examined how the ecosystem must evolve to deliver real value to the players setting the pace for AI infrastructure demands.
During the packed early-morning discussion, Dell’s Peter Corbett stated that standards are essential for innovation across hardware and protocol interfaces—enabling multiple vendors to innovate behind them and lowering barriers to entry for newcomers.
Standards and OCP unlock velocity. Velocity is important because it's speed and direction—it doesn't help us if the industry is moving fast, but we're all in different directions. To figure out AI factories and warehouse-scale computing, we need that velocity.
Jacob Yundt, Senior Director of Compute, CoreWeave
On the topic of rushing standards to meet the pace of innovation, NVIDIA’s CJ Newburn offered a nuanced view, reinforcing that the industry “shouldn’t rush to standardize designs that limit concurrency or add unnecessary complexity.” He instead suggested that the industry should first run experiments and share data, then bring the minimum viable interfaces to standards bodies to minimize time to useful production without locking in untested ideas.
If you listen to folks who run warehouse-scale computers, the problems they face transcend individual components. The more the industry comes together and realizes these are the things we need to NOT happen while running warehouse-style computers, I think that's extremely helpful.
Alan Bumgarner, Director, AI Technologist, Solidigm
One size never fits all, and in the AI era, it fits even less. Smart disaggregation isn't about separation; it's about optimization. Panelists examined why different workloads demand different storage profiles and concluded that the future belongs to architectures flexible enough to deliver exactly what each application needs.
Highlighting the benefits of disaggregating storage, NVIDIA’s Newburn shared that, over time, operators can vary the kinds of storage they deploy while keeping them relatively close to ever-denser GPU compute.
Decoupling may actually be a key step forward in giving us freedom to tune it well.
CJ Newburn, Distinguished Engineer, NVIDIA
Glenn Lockwood added that in addition to more network connectivity, VAST Data is calling for application-specific disaggregation. He also pointed to checkpointing patterns and the pairing of global shared storage with node-local SSDs to maximize performance and economics.
I'm really excited about seeing disaggregated computing coming back, but in a thoughtful application-specific way because AI is such a specific use case. Rack-scale interconnect allows us to disaggregate inference, which drives up efficiency. Whereas in times past, you would throw a big data center network together and call it disaggregated, but you wouldn't really get the level of efficiency.
Glenn Lockwood, Principal Technical Strategist, VAST Data
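The pairing Lockwood describes—global shared storage alongside node-local SSDs—is a common checkpointing pattern: land the checkpoint on fast local flash so GPUs resume quickly, then drain it to the shared tier in the background. Below is a minimal sketch of that flow; the paths and the drain helper are hypothetical illustrations, not drawn from VAST’s actual implementation.

```python
import shutil
import threading
from pathlib import Path

LOCAL_SSD = Path("/local-nvme/checkpoints")    # hypothetical node-local SSD mount
SHARED_FS = Path("/global-store/checkpoints")  # hypothetical global shared storage

def save_checkpoint(step: int, blob: bytes) -> None:
    """Write to node-local flash first; copy to shared storage off the critical path."""
    LOCAL_SSD.mkdir(parents=True, exist_ok=True)
    local_path = LOCAL_SSD / f"step-{step}.ckpt"
    local_path.write_bytes(blob)  # fast local write; GPUs can resume after this returns

    # Background drain keeps the slower shared tier out of the GPU stall window.
    threading.Thread(target=drain_to_shared, args=(local_path,), daemon=True).start()

def drain_to_shared(local_path: Path) -> None:
    SHARED_FS.mkdir(parents=True, exist_ok=True)
    shutil.copy2(local_path, SHARED_FS / local_path.name)
```

The design point is economics as much as performance: the expensive, GPU-adjacent flash only has to absorb the burst, while capacity and durability live in the shared tier.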
The stakes have never been higher: Idle GPUs are expensive GPUs. Compute utilization is a defining factor for AI data center business outcomes.
Corbett outlined that Dell’s focus is ensuring high utilization: keeping accelerators fed and avoiding recomputation of results that already exist. He added that operators need to build the surrounding storage and network capabilities so intermediate results and checkpoints can be reused at speed.
Newburn added that by NVIDIA’s math, you would need about 30 drives to keep up with one GPU at 512-byte granularity access. If those drives are 25 watts each, that’s nearly a kilowatt in a box per GPU.
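Newburn’s figures reduce to simple arithmetic; a quick sketch of the math (the 30-drive count and 25-watt per-drive figure are his panel estimates, not measured specs):

```python
# Back-of-envelope math from Newburn's example: ~30 drives to keep one GPU
# fed at 512-byte granularity, at an assumed 25 W per drive.
DRIVES_PER_GPU = 30   # NVIDIA's rough estimate from the panel
WATTS_PER_DRIVE = 25  # assumed per-drive power draw

storage_watts_per_gpu = DRIVES_PER_GPU * WATTS_PER_DRIVE
print(f"Storage power per GPU: {storage_watts_per_gpu} W")  # 750 W, nearly a kilowatt
```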
Bumgarner noted that every customer with GPUs is doing this math right now, and that AI-optimized storage architectures with high random-read IOPS and low latency, like the Solidigm D5-P5336, are essential to keep AI workloads fed and power efficient. He reiterated that storage investments should more accurately be positioned as “GPU ROI protection.”
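The “GPU ROI protection” framing is easy to quantify. The sketch below uses purely illustrative numbers—the GPU hourly cost and utilization figures are assumptions for demonstration, not data from the panel:

```python
# Illustrative only: what a storage-driven utilization gain is worth per GPU per year.
GPU_COST_PER_HOUR = 3.00   # assumed blended cost of a GPU hour, busy or idle
HOURS_PER_YEAR = 24 * 365
UTILIZATION_BEFORE = 0.60  # assumed: GPUs frequently stall waiting on storage
UTILIZATION_AFTER = 0.85   # assumed: faster storage keeps GPUs fed

wasted_before = GPU_COST_PER_HOUR * HOURS_PER_YEAR * (1 - UTILIZATION_BEFORE)
wasted_after = GPU_COST_PER_HOUR * HOURS_PER_YEAR * (1 - UTILIZATION_AFTER)
print(f"Idle spend recovered per GPU per year: ${wasted_before - wasted_after:,.0f}")
# About $6,570 with these assumptions: the budget a storage upgrade competes against.
```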
Panelists agreed that the future will feature concentrated power, intelligent design, and sustainable innovation.
The amount of power being consumed is so huge that if it's done primarily from low-carbon renewable sources, it will create economies of scale which could accelerate the transition of the power grid entirely. There's hope there.
Peter Corbett, Fellow, Dell Technologies
VAST’s Lockwood added that the AI factory of the future will be moving and evolving as efficiency requirements change. He also believes that empty space will get filled with the ancillary infrastructure required for end-to-end efficiency, from raw input data to queries per second.
By 2030, you'll have football fields of infrastructure for power and cooling. Then you walk into the data center and there will be one mega rack consuming 50 megawatts.
Jacob Yundt, Senior Director of Compute, CoreWeave
Meeting AI buildout demands pushes design thinking beyond isolated, individual drive specs. A critical shift is underway from box-level thinking to rack-level system design.
Rack scale is now the wrapper for compute, network, and storage. You can almost think of it as the rebirth of mainframe-style computing where everything is in that wrapper at the same time. Within that, it's highly modular and there's room for multiple vendors to participate.
Peter Corbett, Fellow, Dell Technologies
Corbett continued that the industry is completely rethinking networking within and between racks, alongside the evolution of how storage is structured, delivered, and integrated with compute for AI deployments.
Yundt also explained CoreWeave’s move from thinking of compute as individual units to racks, rows, and entire data centers, where “everything just needs to work together flawlessly—whether it’s power delivery, liquid cooling, networking, or storage.”
Solidigm closed the discussion with confidence that new standards are paving the way for a future where every component of an AI server, including storage, is liquid-cooled. The company recently introduced the world’s first Cold-Plate-Cooled eSSD, working with NVIDIA to address liquid-cooling challenges for fanless server environments.
As we move into the future, there's a dramatic change coming—beyond how we do things at a data center, but how we do things at a board, and inside of an SSD all the way down to the chip level.
Alan Bumgarner, Director, AI Technologist, Solidigm
About Solidigm
Solidigm, a pioneer in enterprise data storage, leverages decades of product leadership and technical innovation, collaborating with customers to transform their business and propel them into the data-centric future. Our legacy of industry leadership is helping enable AI and more with our robust end-to-end product portfolio, from core data centers to the edge. Headquartered in Rancho Cordova, California, Solidigm operates globally as a standalone subsidiary of SK hynix Inc. Discover how we're advancing the industry at solidigm.com.
SOLIDIGM and the Solidigm “S” logo are trademarks of SK hynix NAND Product Solutions Corp. (d/b/a Solidigm), registered in the United States, People’s Republic of China, Japan, Singapore, the European Union, the United Kingdom, Mexico, and other countries.