
‘Why AlphaGo, Not ChatGPT, Will Shape Progress’
by Pawel Skrzypek
Luxembourg, August 2025 – The world is currently entranced by the linguistic brilliance of large language models. ChatGPT and its cousins have captured our imagination, filling timelines and boardrooms with promises of conversational AI, autonomous customer service, and auto generation of business strategies. But while LLMs dazzle us with coherence and creativity, they mask a crucial truth: they are ultimately trend followers, not trailblazers. To glimpse where true innovation lies, one must trace a quieter lineage – back to an event in 2016 that went far beyond entertainment: AlphaGo’s historic victory over Go champion Lee Sedol.
AlphaGo wasn’t trained on books or documents, but it did rely initially on human expertise. Its first learning phase used tens of thousands of recorded professional Go games to imitate strong human play through supervised learning. Only then did it improve through self-play, combining reinforcement learning with simulation and strategic planning to refine and ultimately surpass human strategies. This hybrid method marked a departure from interpolation-based LLMs and demonstrated that AI could evolve beyond what it had been shown – experimenting, reflecting, adapting, and ultimately overcoming even the best human intuition. It wasn’t just mimicry; it was a gateway to invention.
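The self-play idea described above can be illustrated in miniature. The sketch below is nothing like AlphaGo’s neural networks and Monte Carlo tree search; it is a toy tabular agent that learns the game of Nim (take 1–3 stones, taking the last stone wins) purely by playing against itself, with no human examples at all. The game, the hyperparameters, and the Monte Carlo update rule are all illustrative assumptions chosen for brevity.

```python
import random

PILE_START = 10     # starting pile size; whoever takes the last stone wins
ACTIONS = (1, 2, 3)  # stones a player may remove per turn

def train(episodes=30000, alpha=0.5, epsilon=0.2, seed=0):
    """Learn a value table purely through self-play, no human game data."""
    rng = random.Random(seed)
    Q = {}  # Q[(pile, action)] -> estimated return for the player to move
    for _ in range(episodes):
        pile = PILE_START
        history = []  # (pile, action) pairs, players alternating each move
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < epsilon:           # explore: try something new
                a = rng.choice(legal)
            else:                                # exploit: best known move
                a = max(legal, key=lambda x: Q.get((pile, x), 0.0))
            history.append((pile, a))
            pile -= a
        # The player who made the final move wins (+1). Walk the game
        # backwards, flipping the sign each move (zero-sum game).
        reward = 1.0
        for (s, a) in reversed(history):
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def greedy(Q, pile):
    """Best learned move from a given pile size."""
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda x: Q.get((pile, x), 0.0))
```

After training, the agent discovers on its own that leaving the opponent a multiple of four stones is a winning strategy – knowledge it was never shown, only a reward signal and the chance to experiment.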
That invention echoed across disciplines. DeepMind’s AlphaFold, born of the AlphaGo paradigm, solved protein folding and led to a 2024 Nobel Prize in Chemistry – the first time a deep learning system was credited with a Nobel-level scientific breakthrough. The message was clear: AlphaGo’s architecture was not niche – it was foundational. It transformed how we approach molecular biology by yielding a model that learns underlying physical principles rather than regurgitating sequences it has seen before.
This evolution continues. In July 2025, at the prestigious Lindau Nobel Laureate Meeting, John Jumper, co-developer of AlphaFold and recipient of the Nobel Prize, joined fellow laureates and AI researchers to reflect on how AlphaGo’s legacy is reshaping the sciences. While panelists praised large language models for their expressive capacity, it was the AlphaFold lineage that emerged as the most impactful force in chemistry, biology, and materials science. Jumper emphasized that systems like AlphaFold are not merely products of big data and computing power, but the result of painstakingly designed, learning-centric architectures – descendants of the AlphaGo framework. The Lindau panel also acknowledged the launch of genome-scale AI models capable of decoding the vast regulatory regions of DNA, marking yet another milestone in this family of agents that learn by doing, not just by reading.
It is worth noting that DeepMind’s “Alpha” systems have continued to evolve. AlphaZero generalized the original AlphaGo approach and learned to master chess, shogi, and Go entirely through self-play, without any reliance on human game data. Meanwhile, AlphaStar brought similar ideas to the real-time, multi-agent world of StarCraft II – and it now plays at a level far beyond anything I could hope to achieve myself. These advances show that Alpha-like architectures can scale from turn-based to real-time strategy, from perfect to imperfect information, and from games to real-world applications.
Of course, predictions are not facts. This was a key point made at the Lindau discussion – especially by Nobel laureate Joachim Frank, who cautioned against interpreting AlphaFold’s structural predictions as substitutes for biological truth. As he rightly put it, no matter how advanced the model, empirical validation through laboratory and clinical testing remains indispensable. Yet John Jumper and others clarified that AlphaFold was never designed to replace experimentation – it was built to accelerate it, narrowing down the vast space of biological possibilities and allowing researchers to pursue more promising leads, faster and at lower cost.
Meanwhile, autonomous systems inspired by AlphaGo are quietly revolutionizing global finance. Funds like Omphalos Fund are deploying multi-agent trading platforms that simulate markets, learn, and evolve without human intervention – a capability first wielded by Go-playing agents, now directing capital flows. These systems do not parse paragraphs; they anticipate dynamics and act with precision.
The same distinction drawn for biology applies in financial markets. The predictive capacity of a system like the Omphalos Fund is not a crystal ball – it is a statistical edge, embedded in a complex, risk-adjusted strategy that knows its own limits. Not every signal leads to the intended outcome. But the majority do: with over 60 percent of trades consistently aligning with actual market behaviour, the system generates a superior Sharpe ratio and stable returns over time. In both cases – protein structures and portfolio positions – the value lies not in perfect prediction, but in directionally accurate guidance, repeated at scale, and embedded in a workflow that allows for correction, calibration, and growth.
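A back-of-the-envelope simulation shows why a modest directional edge, repeated at scale, compounds into a respectable Sharpe ratio. The 60 percent hit rate echoes the figure above; everything else – symmetric one-percent wins and losses, 252 trades per year, a zero risk-free rate – is an illustrative assumption, not a description of any fund’s actual methodology.

```python
import random
import statistics

def simulate_edge(n_trades=252, p_win=0.60, win=0.01, loss=0.01, seed=42):
    """Simulate one year of trades with a fixed probabilistic edge.

    Returns the realized hit rate and the annualized Sharpe ratio
    (mean return over volatility, risk-free rate assumed zero).
    """
    rng = random.Random(seed)
    returns = [win if rng.random() < p_win else -loss
               for _ in range(n_trades)]
    hit_rate = sum(r > 0 for r in returns) / n_trades
    mean = statistics.mean(returns)
    vol = statistics.pstdev(returns)
    sharpe = (mean / vol) * (252 ** 0.5)  # annualize daily figures
    return hit_rate, sharpe
```

Even though four trades in ten lose money, the positive expectancy per trade accumulates into a strongly positive risk-adjusted return – the statistical edge described above, not a crystal ball.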
Large language models may entertain, inform, and even draft convincing prose, but they neither conceive nor execute novel strategies. They are useful aids that offer shortcuts in data generation, but they are not breakthroughs. In contrast, AlphaGo’s legacy lies in its capacity to build new frameworks for discovery – from atoms to assets, genomes to financial markets. When scientists harness its successors to unravel the complexities of biology, when finance embraces its emergent autonomy, when agents begin to operate, adapt, and discover – without being told what to think – the world shifts.
The distinction between imitation and innovation sits at the heart of this debate. Language models like ChatGPT are marvels of statistical representation. They draw from the past – sometimes artfully – but they are fundamentally tools of replication. They lack an understanding of causality. They cannot evaluate outcomes or set goals. They generate with no memory of consequence. They are, in short, extremely talented improvisers. AlphaGo-style systems, on the other hand, are strategic learners. They operate within the boundaries of cause and effect, accumulating experience through interaction with the world – be it a game board, a molecular structure, or a financial market.
As we consider the next frontier in artificial intelligence, it is worth asking what kind of systems we truly need. Do we want machines that can mimic our communication styles, or do we want systems that can solve problems we have failed to master ourselves? If we aim to discover new medicines, engineer climate solutions, stabilize financial systems, and design resilient infrastructure, we cannot rely on predictive text. We must build agents that learn by doing, that adapt without bias, and that can operate effectively in the unknown.
In that regard, AlphaGo should be remembered not simply as the machine that beat a world champion, but as the one that quietly showed us how to build intelligent systems that discover – not repeat. It was the first clear sign that artificial intelligence could surpass human intuition in domains that require reasoning, foresight, and experimentation. And while ChatGPT may be the symbol of our present fascination, AlphaGo remains the blueprint for our future breakthroughs.
In the race to define the next era of intelligence, the winner won’t be the model that talks the most. It will be the one that learns the deepest.
Copyright (C) 2024 by Omphalos Fund