A deceptively simple math problem
“Which number is bigger, 9.11 or 9.8?”
Until recently, most AI systems answered this question incorrectly. The internet mocked the failure as proof that AI is nowhere near ready to dominate humanity. And even in 2025, Google's search engine still gives the wrong answer to this simple question.
But why does this happen? Why would such an advanced technology fail at what seems like elementary school math? Is it because computers operate fundamentally differently from human brains?
Ironically, it's the opposite.
Today's AI is built on artificial neural networks, which are designed to mimic how human neurons work. It is precisely this similarity to humans that creates the problem.
AI treats numbers like words, not quantities
AI doesn't "understand" numbers as mathematical entities; it sees them as a form of textual data. When an AI sees "7428", it processes it the same way it would process the word "cat": as one or more tokens in a vast sea of patterns.
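You can see this tokenization concretely with OpenAI's open-source tiktoken library. This is a minimal sketch assuming you have tiktoken installed; the exact splits vary from model to model and encoding to encoding:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one widely used encoding; other models split text differently.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["9.11", "9.8", "cat"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    # Numbers often break into several tokens, while a common word
    # like "cat" usually stays whole.
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```

To the model, "9.11" is not a quantity on a number line; it is a short sequence of text fragments.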
So, when asked to compare 9.11 and 9.8, AI doesn't execute a numerical comparison algorithm. Instead, it recalls learned patterns from its training data — like a student trying to memorize all possible number comparisons instead of learning basic math principles.
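One plausible reading of the failure (an assumption about the mechanism, not something the model reports) is that "9.11 vs 9.8" pattern-matches onto software version numbers or dates, where 9.11 really does come after 9.8. A few lines of Python make the two interpretations explicit:

```python
# Numeric reading: 9.11 and 9.8 are decimal quantities.
print(9.11 > 9.8)  # False: 9.8 equals 9.80, which is larger

# Version-number reading: dot-separated integer components, the pattern
# a model may have absorbed from countless changelogs and release notes.
def as_version(s: str) -> tuple[int, ...]:
    return tuple(int(part) for part in s.split("."))

print(as_version("9.11") > as_version("9.8"))  # True: (9, 11) > (9, 8)
```

Both readings appear in the training data, and nothing tells the model which one a math question calls for.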
AI, fundamentally, is trained to recognize patterns, not to follow rules.
During training, we don't teach AI explicit rules: not for games, not for values, and not for comparing numbers. We simply expose it to massive datasets and call its ability to perform unexpected tasks "emergence": the way it can, for example, write coherent text without ever being taught grammar.
The alignment problem
This lack of explicit rules causes unexpected and sometimes illogical errors, often referred to as hallucinations. AI doesn’t know what's right or wrong — it was never taught.
This brings us to a core challenge in AI research: the alignment problem. AI does not inherently follow human-defined rules or values. While AI can perform reading, writing, listening, and speaking tasks remarkably well, it doesn’t understand the formal rules behind them.
But isn’t that just like humans?
Newborns don’t have any rules wired into their brains, but they have incredible learning capacity. Over time, humans learn rules created by society — to function, survive, and contribute.
Rules are not innate — they must be learned. And modern AI resembles this process.
Why AI is still like a child
AI trained via traditional neural networks is like a newborn packed with the entire internet’s worth of patterns, but without an understanding of when or how to apply any rule.
So when asked a question, AI responds by remixing familiar patterns into the most plausible answer it can produce from prior data, an answer that can be partially correct and partially wrong at the same time.
This is why teaching AI rules has become a key part of solving the alignment problem. Just like teaching a child the differences between integers, decimals, fractions, and irrational numbers takes time and effort, so does training AI.
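For the decimal comparison above, the rule a child eventually internalizes can be written down in a few lines. This is a toy sketch for illustration, not how any production system implements it:

```python
def compare_decimals(a: str, b: str) -> int:
    """Compare two non-negative decimal strings by rule.
    Returns -1 if a < b, 0 if equal, 1 if a > b."""
    a_int, _, a_frac = a.partition(".")
    b_int, _, b_frac = b.partition(".")
    # Rule 1: the larger integer part wins.
    if int(a_int) != int(b_int):
        return -1 if int(a_int) < int(b_int) else 1
    # Rule 2: pad fractional parts to equal length, then compare
    # digit by digit (9.8 becomes 9.80, so .80 beats .11).
    width = max(len(a_frac), len(b_frac))
    a_frac, b_frac = a_frac.ljust(width, "0"), b_frac.ljust(width, "0")
    return (a_frac > b_frac) - (a_frac < b_frac)

print(compare_decimals("9.11", "9.8"))  # -1: 9.11 is smaller
```

Two short rules settle every case. A pattern-matcher, by contrast, would need to have memorized each comparison it is ever asked to make.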
Solving alignment not only helps prevent AI from going off the rails — it also gives AI the capacity to reason and make decisions in new, unseen situations. It brings us closer to the holy grail of AI research: Artificial General Intelligence (AGI).
And recently, scientists have made a critical breakthrough: AI has started to learn what algorithms are and, more importantly, how to follow them step by step, even on problems it has never seen before. This means AI is no longer just guessing based on patterns; it is beginning to apply structured logic, approaching the kind of rule-based reasoning that has so far been the exclusive domain of humans. That shift marks a foundational change in how AI can adapt, solve, and operate beyond its training data, and a significant step on the path toward truly general intelligence.
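To make "following an algorithm step by step" concrete, here is the kind of explicit, rule-governed trace that distinguishes execution from recall. This toy is purely illustrative and is not the researchers' actual method:

```python
def add_by_algorithm(a: str, b: str) -> str:
    """Add two non-negative integers digit by digit with carries,
    printing each step: rule-following, as opposed to recalling
    '58 + 67 = 125' as a memorized pattern."""
    a, b = a.zfill(len(b)), b.zfill(len(a))  # align digit columns
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
        print(f"{da} + {db} + carry -> write {total % 10}, carry {carry}")
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_algorithm("9758", "694"))  # 10452
```

The same few rules handle any pair of numbers, including pairs that never appeared in training. That generalization is exactly what pattern recall cannot guarantee.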
Keywords: AI, alignment problem, emergence, neural networks, rules, AGI, hallucinations, algorithmic learning
Summary
Even in today's era of rapid AI development, the seemingly simple math question "Which is bigger, 9.11 or 9.8?" still stumps many AI systems, and even Google Search was still getting it wrong in 2025. This is not due to a fundamental difference between computers and the human brain, but a consequence of AI being designed to mimic human neurons. AI processes numbers as textual data rather than as mathematical entities, so it lacks genuine mathematical logic. Unable to execute a concrete comparison algorithm, it can only guess based on patterns learned from its training data, like a student who never studied math answering from memory.
This mode of operation also makes AI prone to errors and even hallucinations, because it has no concept of right and wrong. This is one of the core challenges in the field: the alignment problem. Although AI displays powerful language abilities, it does not understand the formal rules behind them, much like a newborn child who learns quickly but must acquire society's rules in order to survive.
AI trained with traditional neural networks holds an enormous stock of data patterns yet does not know how to apply them or what they mean. Teaching AI the right rules, so that it can make accurate judgments in unfamiliar situations, has therefore become a key technology on the road to artificial general intelligence (AGI).
Fortunately, scientists have recently made a key breakthrough in this area. They have gotten AI to begin understanding what an algorithm is and to follow its steps strictly, one by one, even on problems it has never seen before. This means AI is no longer merely guessing answers from memorized patterns but is starting to acquire structured logical reasoning. This leap improves AI's accuracy and adaptability in unfamiliar settings and marks a key step toward truly general intelligence.