Created on 3/6/2026, 9:21
Updated on 3/6/2026, 9:47

The Move That Changed the World: How DeepMind Solved Go and Predicted the Future of AGI

Ten years after Lee Sedol’s historic battle with AlphaGo, we look back at the "alien" logic that launched a trillion-dollar industry and changed science forever.

Preface: Co-written with Gemini. 

Reading Time: 8 minutes

Categories: Artificial Intelligence, Technology History, Future of Science


In March 2016, the Four Seasons Hotel in Seoul, South Korea, became the center of the world. Outside, the streets were filled with a tension usually reserved for a World Cup final. Inside a hushed, wood-paneled room, a man sat opposite a machine. The man was Lee Sedol, a legendary 9-dan professional and arguably the greatest Go player of the modern era. He was the personification of human intuition: a master of a 2,500-year-old game so complex that it has more possible board positions than there are atoms in the observable universe. Go had long been considered the "holy grail" of artificial intelligence, a game that required a "human touch" no silicon chip seemed able to replicate.

The machine was AlphaGo, a program developed by a small London-based lab called Google DeepMind. Before the first stone was placed, the consensus among experts was clear: AI was still a decade away from competing with a grandmaster of Sedol’s caliber. Sedol himself confidently predicted a 5–0 or 4–1 victory in favor of humanity. He wasn’t being arrogant; he was simply stating the technical reality of 2016. Computers were good at math, but they were famously terrible at "feeling" the flow of a game. What happened over the next seven days would do more than just shatter that consensus; it would fundamentally alter our understanding of what machines are capable of. As hundreds of millions of people watched the livestream globally, the world didn't just see a computer play a game. It saw the birth of a new kind of intelligence.

To understand why AlphaGo's victory was a shock to the scientific community, one must first understand the terrifying mathematics of the Go board. Since IBM's Deep Blue defeated Garry Kasparov at chess in 1997, the world had become accustomed to the idea that computers were superior at strategy. But chess and Go are fundamentally different animals. Chess yields to "brute force": a machine that searches deep enough, evaluating millions of positions per second with hand-tuned scoring rules, can out-calculate any human. Go, however, resists this kind of mechanical bullying.

The complexity of Go is born from its simplicity. Played on a 19x19 grid, players take turns placing black or white stones, seeking to encircle territory. This leads to a "branching factor" that is mathematically staggering. In an average chess position, a player has about 35 legal moves. In Go, that number is roughly 250. After just one move by each player, there are already 62,500 possible sequences, and the total number of legal board positions exceeds 10^170: a figure so vast that if every single atom in the observable universe were itself an entire universe, the total number of atoms in that "multiverse" would still be smaller than the number of legal positions on a Go board. For a traditional computer, Go was an infinite maze with no exit. There was simply not enough computing power in existence to "calculate" the right move. Before 2016, the best programs leaned heavily on hand-coded heuristics: rules written by humans. But these rules were brittle. They lacked the "intuition" and "global awareness" that defined human masters. Lee Sedol didn't just follow rules; he felt the "energy" of the board. To beat him, DeepMind had to prove that a machine could finally master the "ineffable" qualities of human thought: judgment, strategy, and instinct.
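The arithmetic behind that comparison is easy to check for yourself. A quick sketch (the 35 and 250 figures are the approximate averages quoted above; the depths are purely illustrative):

```python
# Approximate average number of legal moves per turn, as quoted above.
chess_branch, go_branch = 35, 250

# Sequences reachable after one move by each player:
print(go_branch ** 2)  # 62500

# How fast the two search spaces diverge with lookahead depth d:
for d in (2, 10, 20):
    print(d, chess_branch ** d, go_branch ** d)
```

Even at a modest 20-ply lookahead, Go's tree is tens of orders of magnitude larger than chess's, which is why deeper hardware alone could never close the gap.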

To solve a game with more moves than atoms in the universe, DeepMind had to abandon the traditional "if-this-then-that" logic of 20th-century computing. Instead, they built a system modeled after human cognition. They gave AlphaGo two distinct neural networks—essentially a set of "eyes" to see the board and a "gut" to judge the winner. The first challenge in Go is the "Breadth Problem." At any given moment, there are roughly 250 legal moves. A human player doesn't look at all 250; they immediately narrow their focus to the two or three that "look" right. DeepMind replicated this using the Policy Network. This was a deep convolutional neural network trained on 30 million moves from games played by human experts. Its goal was simple: look at a board position and predict what a master would do. By using this network, AlphaGo could ignore the 247 "garbage" moves on the board and focus its computing power entirely on the top 3 possibilities. It effectively gave the machine the "eyes" of a professional.
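To make that interface concrete, here is a toy sketch in NumPy. A single random linear layer stands in for the real deep convolutional network (this is an illustration of the input/output shape only, not AlphaGo's architecture): a board encoding goes in, a probability distribution over all 361 intersections comes out, and the search keeps only the top few candidates.

```python
import numpy as np

np.random.seed(0)
BOARD = 19 * 19  # 361 intersections

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy stand-in for the policy network: one random linear layer.
# The real policy net was a deep convolutional network trained on
# millions of expert moves; this only demonstrates the interface.
W = np.random.randn(BOARD, BOARD) * 0.01

def policy(board_vec):
    """Map a flattened board encoding to a probability over all 361 moves."""
    return softmax(W @ board_vec)

board = np.random.randn(BOARD)       # a stand-in position encoding
probs = policy(board)
top3 = np.argsort(probs)[-3:][::-1]  # narrow the search to 3 candidates
print(top3, round(float(probs[top3].sum()), 4))
```

The pruning step is the whole point: instead of expanding 250 branches per ply, the search expands only the handful the network "likes," which is what makes deep lookahead affordable at all.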

Even if you know where to look, you still have to know if you are winning. This is the "Depth Problem." In Chess, you can count the pieces. In Go, the winner isn't clear until the very last stone is placed. DeepMind solved this with the Value Network. This network didn't look for the next move; it looked at the current board and estimated the probability of a win. Instead of needing to play out every single variation to the very end of the game—which would take trillions of years—AlphaGo could look at a mid-game scramble and say, "I have an 82% chance of winning from this position." If the neural networks were the "intuition," the Monte Carlo Tree Search (MCTS) was the logic that tied them together. Think of MCTS as a high-speed "imagination." Before making a move, AlphaGo would run thousands of "simulations" in its head. It used the Policy Network to pick the most promising moves, and then used the Value Network to judge the outcomes of those imagined futures. It was a perfect marriage of human-like instinct and machine-like speed.
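The interplay of the three pieces can be sketched in miniature. Below is a PUCT-style tree search on a deliberately trivial one-player game (a running total where landing on 10 "wins"); the `policy_prior` and `value_estimate` functions are crude hypothetical stand-ins for the two networks, not AlphaGo's models, but the selection/expansion/backup loop is the same shape MCTS takes.

```python
import math

MOVES = (1, 2, 3)  # each move adds 1, 2, or 3 to the running total

def policy_prior(state):
    # "Eyes": a flat prior over candidate moves (a real net would be sharper).
    return {m: 1.0 / len(MOVES) for m in MOVES}

def value_estimate(state):
    # "Gut": estimated win probability, peaking at exactly 10.
    return max(0.0, 1.0 - abs(10 - state) / 10.0)

class Node:
    def __init__(self, state):
        self.state, self.children = state, {}
        self.N, self.W = 0, 0.0  # visit count, accumulated value

def puct(parent, child, prior, c=1.5):
    # Exploitation (mean value so far) plus an exploration bonus.
    q = child.W / child.N if child.N else 0.0
    u = c * prior * math.sqrt(parent.N + 1) / (1 + child.N)
    return q + u

def search(root, simulations=200):
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: descend by PUCT score until a leaf or terminal state.
        while node.children and node.state < 10:
            priors = policy_prior(node.state)
            move = max(node.children,
                       key=lambda m: puct(node, node.children[m], priors[m]))
            node = node.children[move]
            path.append(node)
        # Expansion: grow the leaf, then back up the value *estimate*
        # instead of playing every line out to the end of the game.
        if node.state < 10 and not node.children:
            for m in MOVES:
                node.children[m] = Node(node.state + m)
        v = value_estimate(node.state)
        for n in path:
            n.N += 1
            n.W += v
    # Commit to the most-visited move, as AlphaGo did.
    return max(root.children, key=lambda m: root.children[m].N)

root = Node(0)
root.children = {m: Node(m) for m in MOVES}
best = search(root)
print(best)
```

The key design choice is in the backup step: the value network's estimate replaces full rollouts to the end of the game, which is exactly how the "Depth Problem" gets sidestepped.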

While AlphaGo started as a student of human history, DeepMind knew that simply mimicking humans would never be enough to beat a world champion. To surpass the ceiling of human talent, AlphaGo had to go where no human had ever gone: it had to play against itself. DeepMind initiated a process known as Reinforcement Learning (RL). They took the initial version of AlphaGo and created thousands of clones of the program. These clones were then set against each other in a relentless, 24/7 "digital dojo." When a version of AlphaGo won, its moves were "reinforced"; when it lost, it learned to avoid those patterns. Unlike a human, who might play a few thousand games in a lifetime, AlphaGo played millions of games against itself in a matter of weeks. In this accelerated environment, the AI began to "evolve." It didn't just refine human strategies; it began to prune them, realizing that some traditional human openings were actually suboptimal. It had effectively compressed millennia of human Go wisdom into a few weeks of silicon practice.
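The shape of that loop — play yourself, reinforce the winner's moves, weaken the loser's — fits in a few lines. The sketch below uses a tabular policy on the toy game "race to 10" (players alternately add 1, 2, or 3; whoever reaches 10 first wins). It mirrors the structure of self-play reinforcement, not AlphaGo's actual policy-gradient training.

```python
import random

random.seed(0)

MOVES = (1, 2, 3)
# One tabular policy plays both sides: a weight per (total, move) pair.
weights = {(s, m): 1.0 for s in range(10) for m in MOVES}

def pick(state):
    return random.choices(MOVES,
                          weights=[weights[(state, m)] for m in MOVES])[0]

def play_game():
    state, player, history = 0, 0, {0: [], 1: []}
    while state < 10:
        m = pick(state)
        history[player].append((state, m))
        state += m
        player ^= 1
    return player ^ 1, history  # the side that just crossed 10 wins

for _ in range(5000):
    winner, history = play_game()
    for s, m in history[winner]:
        weights[(s, m)] += 0.1                             # reinforce winners
    for s, m in history[winner ^ 1]:
        weights[(s, m)] = max(0.1, weights[(s, m)] - 0.05)  # discourage losers

# From a total of 7 the winning move is 3 (straight to 10); after
# thousands of self-play games its weight should dominate.
print({m: round(weights[(7, m)], 1) for m in MOVES})
```

No game knowledge was coded in beyond the rules; whatever the final weights prefer was discovered purely by the clones grinding against each other, which is the essence of the "digital dojo."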

In the second game of the five-match series, the world witnessed something that moved beyond mere computation. The match was still in its early stages when AlphaGo reached for a "shoulder hit" on the fifth line—Move 37. The room went silent. In Go history, playing that high so early was considered a fundamental error. "I thought it was a mistake," remarked the English commentator, Michael Redmond. Lee Sedol was visibly shaken. He stood up and walked out of the room to compose himself, taking nearly fifteen minutes to return. DeepMind later revealed that AlphaGo’s data showed a human professional would have made that move only one out of every 10,000 times. AlphaGo knew it was a "non-human" move. However, its Value Network insisted that this specific stone would be the key to victory a hundred turns later. Move 37 was the "Rubicon" moment for AI. It proved that an AI could be creative, not just a "stochastic parrot." It wasn't just calculating; it was innovating.

Humanity had one final stand. In Game 4, Lee Sedol played Move 78—the "God's Touch"—a brilliant wedge in the center that AlphaGo hadn't predicted. The AI "glitched," failing to realize its win probability had plummeted until it was too late. It was the only game humanity would win, but it proved that AI still had "blind spots." However, the evolution didn't stop. In 2017, DeepMind unveiled AlphaGo Zero. Unlike its predecessor, it was given zero human data. It started by placing stones randomly. Through pure self-play, it surpassed the "Lee Sedol version" in just three days. This was the "Tabula Rasa" moment. It proved that human knowledge might actually be a limitation; by ignoring our traditions, the AI was free to discover a "pure" version of Go.

As we stand in 2026, marking the 10th anniversary of that match, the legacy of AlphaGo has moved far beyond the board. The victory in Seoul was a proof of concept for the most powerful tool ever devised. The most direct descendant is AlphaFold. Using the same spatial reasoning, DeepMind tackled the "protein folding problem." Recognized with a Nobel Prize in 2024, AlphaFold has, by early 2026, predicted the structures of nearly all 200 million proteins known to science. This "digital microscope" is currently being used to engineer plastic-eating enzymes and develop vaccines at light speed.

Today, studying with AI is standard practice for professional Go players. They have adopted "alien" openings and abandoned centuries of rigid dogma. Human play has actually become more creative, because the AI gave us permission to break the rules. As Lee Sedol reflected in 2026, AI is not a rival, but a "force multiplier" for the human mind. AlphaGo didn't end the game; it simply showed us how much more of the board there was to play on. ☀️



sunny.xiaoxin.sun@doubletakefilmllc.com

Sunny Xiaoxin Sun's IMDb


©2025 Double Take Film, All rights reserved

I’m an independent creator based in California. My writing started from an urgent need to express. Back in school, I often felt overwhelmed by the chaos and complexity of the world—by the emotions and stories left unsaid. Writing became my way of organizing my thoughts, finding clarity, and gradually, connecting with the outside world. Right now, I’m focused on writing and filmmaking. My blog is a “real writing experiment,” where I try to update daily, documenting my thoughts, emotional shifts, observations on relationships, and my creative process. It’s also a record of my journey to becoming a director. I’m currently revising my first script. It’s not grand in scale, but it’s deeply personal—centered on memory, my father, and the city. I want to make films that belong to me, and to our generation: grounded yet profound, sensitive but resolute. I believe film is not only a form of artistic expression—it’s a way to intervene in reality.


