Memory, the overlooked bottleneck of AI: VERTICAL COMPUTE raises €57 million to tackle the “memory wall”

With the rise of generative models, artificial intelligence is often portrayed as a race driven primarily by computing power. Announcements of massive infrastructures, GPU clusters and new accelerators dominate the technological debate. Yet a more discreet constraint is gradually emerging as one of the main limits to AI: memory.

In classical computer architectures, processors perform computations while data is stored in separate memory modules. This model, inherited from several decades of computer engineering, requires constant movement of data between compute units and memory. As the volumes processed by AI models explode, this movement becomes increasingly costly, both in terms of latency and energy consumption.

Specialists now refer to the “memory wall”, a structural limit that appears when processor performance advances faster than memory technologies. “Memory technologies face limitations in terms of density and performance, while processor performance continues to increase,” explains Sébastien Couet, CTO of Vertical Compute. According to him, the data-access requirements imposed by AI workloads make it “imperative to overcome the memory wall to enable the next wave of innovation.”
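The memory wall can be made concrete with a roofline-style back-of-envelope check: a chip can only sustain its peak compute rate if the workload performs enough arithmetic per byte fetched from memory. The sketch below uses assumed, illustrative accelerator figures (not Vertical Compute's or any vendor's specifications) to show why large-model inference, which does very little arithmetic per parameter read, is limited by memory bandwidth rather than compute.

```python
def machine_balance(peak_flops: float, mem_bandwidth: float) -> float:
    """FLOPs the machine could perform in the time it takes to move one byte."""
    return peak_flops / mem_bandwidth

def is_memory_bound(arithmetic_intensity: float, balance: float) -> bool:
    """A workload is bandwidth-limited if it does fewer FLOPs per byte
    than the machine's balance point."""
    return arithmetic_intensity < balance

# Illustrative (assumed) accelerator figures:
PEAK_FLOPS = 100e12   # 100 TFLOP/s of compute
BANDWIDTH  = 2e12     # 2 TB/s of memory bandwidth
balance = machine_balance(PEAK_FLOPS, BANDWIDTH)   # 50 FLOPs per byte

# A large matrix-vector product (typical of generating one token with a
# large model) reads each weight once and performs roughly 2 FLOPs per
# 2-byte parameter, i.e. about 1 FLOP per byte moved:
ai_matvec = 2 / 2
print(is_memory_bound(ai_matvec, balance))   # True: bandwidth, not compute, is the limit
```

Under these assumed numbers, the chip sits idle waiting on memory unless the workload delivers roughly fifty operations per byte; raising memory bandwidth (or shortening the path to memory) moves that balance point, which is exactly the lever architectures like Vertical Compute's target.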

This structural constraint is particularly visible in modern AI infrastructures. Large models require continuous access to massive volumes of parameters and intermediate data. In current architectures, this information is generally stored in external memory, notably high-bandwidth memory (HBM), before being transferred to processors responsible for computation. This operating model involves constant exchanges between components, generating latency, energy consumption and infrastructure costs.

As models grow larger, this data movement becomes one of the dominant factors in overall system performance. In some cases, moving information consumes more resources than the computation itself.
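The claim that moving data can cost more than computing on it can be illustrated with order-of-magnitude energy figures often cited in the computer-architecture literature. The per-operation energies below are assumptions chosen for illustration (actual values depend heavily on the process node and memory technology), not measurements of any specific system.

```python
# Illustrative (assumed) per-operation energies, in picojoules:
E_FLOP_PJ      = 1.0     # one floating-point operation on-chip
E_DRAM_BYTE_PJ = 100.0   # fetching one byte from off-chip DRAM

def energy_pj(flops: float, bytes_moved: float) -> tuple[float, float]:
    """Return (compute energy, data-movement energy) in picojoules."""
    return flops * E_FLOP_PJ, bytes_moved * E_DRAM_BYTE_PJ

# Streaming 1 GB of weights from DRAM to perform 1 GFLOP of work
# (about 1 FLOP per byte, as in large-model inference):
compute, movement = energy_pj(1e9, 1e9)
print(movement / compute)   # 100.0: movement dominates by two orders of magnitude
```

With these assumed figures, shortening the physical path data travels, as a vertically stacked memory aims to do, attacks the dominant term in the energy budget rather than the computation itself.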

Toward a vertical memory architecture

This is precisely the problem Vertical Compute aims to address. The startup is developing an architecture that integrates memory directly above compute units within a vertically stacked structure embedded in the chip.

The objective is to drastically reduce the distance data must travel. In traditional architectures, data can move across several millimeters—or even centimeters—between components. In a vertical architecture, these exchanges occur at the nanometer scale.

This proximity could significantly reduce inefficiencies linked to data transport, increase memory bandwidth and lower energy consumption. The technology’s designers also point to the possibility of approaching the performance of fast memory while increasing storage density.

The approach relies on a modular chiplet architecture, combining stacked memory structures and compute units within a single system. This design could enable more efficient integration of memory into computing systems, particularly for embedded AI applications or edge-computing environments.

According to the founders, such an evolution could also reduce dependence on centralized infrastructures. Current AI systems largely rely on data centers, partly because of the cost and complexity of the memory architectures they require. More compact integration could make it easier to run models directly on devices or embedded systems.

A discreet but strategic transformation of AI infrastructure

If validated at industrial scale, this approach could reshape how artificial intelligence systems are designed. For several years, most investment has focused on specialized processors—GPUs, AI accelerators and dedicated architectures.

Memory, despite being essential to overall system performance, is often treated as a secondary component. Yet the rapid growth of AI models—both in size and data requirements—makes this dimension increasingly critical.

In this context, innovations that bring memory and compute closer together could play a structuring role in the next generation of computing architectures. The objective is not necessarily to replace existing processors, but to improve their efficiency by reducing one of the key bottlenecks of modern AI.

A European deeptech company born from nanoelectronics research

Founded in 2024, Vertical Compute is a spin-off from the European research center imec, one of the world’s leading institutes in nanoelectronics and semiconductor technologies. The company is developing a technology designed to integrate memory and compute into a vertical architecture aimed at artificial intelligence systems.

The company was founded by Sylvain Dubois, a former semiconductor project lead at Google, and Sébastien Couet, a researcher who spent several years leading memory-technology research programs at imec.

Vertical Compute announced that it has secured an additional €37 million, complementing a previous €20 million raise, bringing its seed round to a total of €57 million. The round was led by the investment fund Quantonation, with participation from Flanders Future Techfund (managed by PMV), Wallonie Entreprendre, Sambrinvest, Noshaq, InvestBW, Drysdale Ventures and Kima Ventures. Existing investors Eurazeo, XAnge, Vector Gestion, imec.xpand and imec also joined the financing.

With a team of around twenty-five employees based in Belgium and France, the startup says it has recently completed a first test chip integrating its vertical memory architecture. This milestone is intended to validate the industrial feasibility of the technology ahead of a broader industrialization phase.
