Reinforcement learning vs LLM: INEFFABLE INTELLIGENCE raises €937 million
It is not yet clear whether the emergence of Ineffable Intelligence on the European technology scene constitutes a break with the traditional sequences of venture capital, or whether it signals the rise of a new investment standard, of which recent transactions such as AMI Labs may be the first milestones.
With a €937 million seed round valuing the company at €4.3 billion, the London-based startup is not merely entering the market; it is challenging the dominance of language models by betting on an intelligence built through experience.
Since the rise of large language models, the industry has progressively aligned around a now-dominant paradigm. Systems developed by OpenAI, Anthropic, and Google DeepMind rely on the large-scale aggregation of data derived from human-generated content. Their effectiveness stems from the ability to model statistical regularities at scale, supported by a continuous expansion of computational resources. This approach has enabled the rapid deployment of use cases—from software copilots to conversational agents—and has established LLMs as a standard infrastructure.
However stable this framework may appear, it reveals structural limitations. Dependence on existing data raises legal, economic, and epistemological questions. These systems, however sophisticated, remain fundamentally constrained by the corpora on which they are trained. Their ability to generate genuinely new knowledge—beyond recombination or extrapolation—remains uncertain. Alternative approaches are therefore increasingly being explored.
It is precisely one of these that underpins the project led by David Silver. Former head of reinforcement learning at Google DeepMind, he is known for work that marked a turning point in the field. Systems such as AlphaGo and AlphaZero demonstrated that an agent could reach unprecedented performance levels through interaction-based learning, without relying on annotated datasets. By replacing imitation with a logic of exploration and optimization, these approaches opened an alternative path, long confined to closed environments.
Ineffable Intelligence aims to design a system capable of acquiring skills and generating knowledge from its own experience. Where language models learn by observing human traces, this approach assumes direct interaction with an environment—real or simulated—in which the agent tests and refines its behavior.
The distinction between these two paradigms reflects two conceptions of artificial intelligence. On one side, systems that reproduce and recombine existing structures based on accumulated data. On the other, agents that progressively construct their understanding through action. While promising, this second path still faces significant constraints. Modeling sufficiently rich environments, managing the computational costs associated with exploration, and ensuring the stability of large-scale learning processes all remain unresolved challenges.
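The experience-driven paradigm described above can be made concrete with a toy sketch. The following is a minimal, hypothetical tabular Q-learning agent (it illustrates the general technique only; Ineffable Intelligence's actual architecture is undisclosed) that learns to reach a goal in a five-cell corridor purely from its own trial-and-error interactions, with no human-generated dataset involved:

```python
import random

GOAL = 4           # rightmost cell of a 5-cell corridor
ACTIONS = [-1, 1]  # step left or step right

def step(state, action):
    """Environment dynamics: move, clip to bounds, reward only at the goal."""
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected return for each (state, action) pair, learned online
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy exploration: the agent tries actions itself
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Q-learning update from the observed transition
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # Greedy policy recovered from experience alone: move right at every cell
    policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
    print(policy)
```

Even this trivial agent exhibits the two unresolved challenges mentioned above in miniature: exploration is wasteful before any reward has been observed, and the approach only works because the environment is small enough to enumerate.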
The composition of Ineffable Intelligence’s funding round reflects the nature of this bet. Alongside Sequoia Capital and Lightspeed Venture Partners are major technology players, including NVIDIA and Google, as well as UK public institutions. This convergence—unusual at such an early stage—suggests that the project is seen less as an immediate commercial opportunity than as a strategic option on the future trajectory of AI.
The involvement of the British Business Bank and the Sovereign AI Fund introduces a political dimension, reflecting an ambition to position the United Kingdom in a still-emerging segment of the market by betting on a differentiated approach rather than attempting to catch up with existing models. In a context of intensifying competition between major economic blocs, this strategy signals a view in which control over learning architectures becomes a matter of sovereignty.
From a competitive standpoint, Ineffable Intelligence aligns more closely with advanced research laboratories such as DeepMind or OpenAI—where fundamental research is coupled with industrial deployment—than with the traditional startup ecosystem.
For investors, the deal also departs from conventional venture capital logic. The €4.3 billion valuation rests less on execution metrics than on the credibility of a scientific thesis and the team’s ability to demonstrate its viability. David Silver’s profile lends particular legitimacy to the project. Yet the bet remains uncertain and unfolds over a long time horizon, where economic returns are neither immediate nor guaranteed.
Implicitly, Ineffable Intelligence's initiative does not challenge the relevance of language models, but questions where they ultimately lead. As their limitations become more visible, the prospect of learning that is less dependent on human data gains traction. Whether this path can be made operational at scale remains to be seen. Between the consolidation of an established paradigm and the exploration of alternative trajectories, artificial intelligence now appears to be entering a phase of bifurcation whose outcome remains open.