

There’s no question that AI use is exploding across the enterprise – tripling over just the last two years, according to Gartner. By 2025, as the market continues to mature, AI will be the top driver of infrastructure decisions, the research firm reports.

Combined with demands driven by growing technologies such as edge computing and hybrid cloud environments, compute requirements will increase 10-fold, said Ben Bolles, executive director for project management at Liqid.

Organizations running legacy infrastructure, he noted, will be left behind.

Leave the legacy behind

This is where composable data center infrastructure can prove a valuable asset. With this approach, high-performance workloads and applications are decoupled from the underlying hardware, allowing data center resources to be pooled and allocated wherever they will run most effectively, moment to moment.

The result, according to Bolles: increased performance, efficiency, agility and scalability.

Liqid is demonstrating the potential for composable memory at Dell Technologies World 2022, taking place this week. The software company has partnered with Samsung and Tanzanite Silicon Solutions to model real-world composable memory scenarios via the Compute Express Link (CXL) 2.0 protocol, using Liqid Matrix composable disaggregated infrastructure (CDI) software.

“With the breakthrough performance provided by CXL, the industry will be better-positioned to support and make sense of the massive wave of AI innovation predicted over just the next few years,” said Bolles. “By decoupling dynamic random-access memory (DRAM) from the CPU, CXL enables us to achieve milestone results in performance, infrastructure flexibility and more sustainable resource efficiency, preparing organizations to rise to the architectural challenges that industries face as AI evolves at the speed of data.”

According to Reportlinker, the composable infrastructure market will grow at a compound annual growth rate of nearly 25% between 2022 and 2027. This, according to the market intelligence platform, is being driven by rising business analytics workloads, increased customer expectations, implementation of methodologies such as DevOps, the rise of automation and standardization tools, and increasing adoption of hybrid cloud.

The Liqid lab configuration at Dell World explores the technology’s capabilities by leveraging Samsung and Tanzanite silicon and memory technologies. This demonstrates clustered/tiered memory allocated across two hosts, as orchestrated by Liqid Matrix CDI software. All told, Bolles said, it showcases the efficiency and flexibility necessary to meet changing and growing infrastructure demands.

Bolles said that Liqid Matrix pools and composes memory in tandem with GPUs and high-performance NVMe storage, FPGAs, persistent memory and other accelerator devices. The software provides native support for CXL, the open, industry-supported cache-coherent interconnect standard for processors, memory expansion and accelerators.

This process makes it fast and flexible to compose precise amounts of resources into host servers, and to move underutilized resources to other servers as workload needs change.
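The allocate-and-release pattern described above can be illustrated with a minimal sketch. Note that this is not Liqid's actual API – the `ResourcePool` class and its methods are invented here purely to show the idea of software-defined device allocation from a shared pool:

```python
# Hypothetical illustration of composable infrastructure: a shared pool of
# devices (GPUs, NVMe drives, etc.) attached to and released from hosts via
# software rather than fixed at build time. Names are invented for
# illustration only; this is not Liqid Matrix's interface.

class ResourcePool:
    def __init__(self, devices):
        # devices: dict mapping device type -> count available in the pool
        self.free = dict(devices)
        self.allocations = {}  # host name -> {device type: count held}

    def compose(self, host, device, count):
        """Attach `count` devices of type `device` from the pool to `host`."""
        if self.free.get(device, 0) < count:
            raise ValueError(f"only {self.free.get(device, 0)} {device}(s) free")
        self.free[device] -= count
        host_alloc = self.allocations.setdefault(host, {})
        host_alloc[device] = host_alloc.get(device, 0) + count

    def release(self, host, device, count):
        """Return `count` devices from `host` to the pool for reuse elsewhere."""
        held = self.allocations.get(host, {}).get(device, 0)
        if held < count:
            raise ValueError(f"{host} holds only {held} {device}(s)")
        self.allocations[host][device] -= count
        self.free[device] += count

pool = ResourcePool({"gpu": 8, "nvme": 16})
pool.compose("host-a", "gpu", 6)   # training job claims most of the GPUs
pool.compose("host-b", "gpu", 2)
pool.release("host-a", "gpu", 4)   # job finishes; GPUs return to the pool
pool.compose("host-b", "gpu", 4)   # host-b scales up with no physical re-cabling
```

The point of the sketch is the shift it models: instead of each server owning a fixed complement of devices, hosts draw from and return to a common pool under software control, which is what lets underutilized hardware be redirected to wherever the current workload needs it.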

GPUs aplenty 

Liqid also recently announced the general availability of its new Liqid ThinkTank AI system. This uses software to assign as many GPUs to a server as needed – whether or not they would physically fit inside it – allowing accelerated time-to-results, quick deployment and GPU scaling, Bolles said. This can support the most challenging workloads in AI workflows, from data preparation and analytics to training and inference.

Bolles underscored that traditional static servers are ubiquitous but inefficient to deploy and scale. They constrain performance, utilize resources poorly, and are difficult to balance against NVMe storage and other next-generation accelerators such as FPGAs and storage-class memory.

But composable data center infrastructure enables users to manage so-called bare-metal hardware resources via software, thus democratizing AI. Adopting CXL technology allows organizations to extract maximum value from hardware investments, Bolles said, and enables exponentially higher performance, reduced software stack complexity, lower overall system costs and other efficiency and sustainability gains such as reduced physical and carbon footprints.

This way, users don’t have to focus on maintaining hardware; instead, they can dedicate themselves to accelerating time to results for target workloads.

Composed memory that stretches across CXL fabrics

Bolles added that Liqid’s differentiation lies in its software tooling and its ability to compose memory across CXL fabrics. What would normally be a complex, time-consuming process can now be completed in a matter of minutes.

The Colorado-based company has gained significant traction with its software, having raised a $100 million series C round in December 2021 co-led by Lightrock and DH Capital. Liqid Matrix software is also being used to create a $5 million supercomputer for the National Science Foundation, as well as for three Department of Defense supercomputers worth $52 million. Bolles expects that growth to continue.
