ChatGPT was released over two years ago [1]. In the early months of its popularity, public opinion was sharply divided. Many universities rushed to ban it, fearing it would erode academic integrity and make students intellectually passive. Ironically, the present trend has reversed: people now cite ChatGPT as an authoritative source, often beginning their arguments with, “ChatGPT said…” — as if its outputs carry indisputable truth.
This shift reflects a misunderstanding. On the ChatGPT interface itself, a disclaimer reads: “ChatGPT can make mistakes. Check important info.” Yet despite this, people tend to treat generative AI tools as fact-tellers rather than probabilistic language models. It is essential to recognize that large language models are not designed to always be correct. Nor are they capable of understanding the meaning of the words they generate.
The Nature of AI: Prediction, Not Understanding
At their core, all machine learning models — including generative AI systems — operate on prediction. Whether it’s image classification or language generation, the fundamental goal is to predict future data points based on prior information. This is true not only for artificial systems but for humans as well. We survive and adapt by learning from patterns, by forecasting outcomes, and by passing knowledge through generations [2].
However, a common misconception persists: that computer programs are infallible because they are “precise.” People often assume that if a program receives a given input, it must always produce one correct output. This may be true for deterministic software, but not for probabilistic systems like large language models. These models are trained on vast amounts of text to predict the next token given the context of the prompt. They do not generate truth; they generate what is likely.
Moreover, even their classifications are subject to thresholds [3]. For instance, if a prediction score exceeds 0.5, it might be categorized as class A; if below, class B. This uncertainty is built into the way the model makes every decision. When an LLM produces text, it is not stating facts. It is selecting a statistically probable sequence of tokens based on your prompt and its internal weights [4].
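The following sketch, in Python, makes this concrete. The token set and probabilities are invented for illustration (no real model uses a four-word vocabulary), but the mechanism is the same: the output is a weighted draw from a distribution, not a lookup of a fact.

```python
import random

# Toy next-token distribution for the prompt "The capital of France is".
# The tokens and probabilities below are invented for illustration only.
next_token_probs = {
    "Paris": 0.86,
    "Lyon": 0.07,
    "France": 0.04,
    "banana": 0.03,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token at random, weighted by the model's probabilities."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Most draws return "Paris", but nothing in the mechanism guarantees it:
# the model outputs a likely continuation, not a verified fact.
print(sample_next_token(next_token_probs))

def classify(score: float, threshold: float = 0.5) -> str:
    """Threshold-based labeling: a score near 0.5 flips the label."""
    return "class A" if score >= threshold else "class B"

print(classify(0.51), classify(0.49))  # class A class B
```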
Poem Cloud and the Illusion of Intelligence
This disconnect between production and understanding is powerfully illustrated in Liu Cixin’s short story Poem Cloud [5][6]. In the story, a superintelligent system is built to generate every possible poem — every conceivable combination of characters in poetic form. Somewhere within this astronomical collection lie poems of great emotional power and literary value. However, the system itself has no comprehension of the words it produces. It does not understand metaphor, rhythm, or emotion. It simply generates.
This is a useful metaphor for how language models operate. They are not conscious. They do not comprehend what they write. Their power lies in output, not insight. As with the proverbial monkey at a typewriter that happens to produce a Shakespearean sonnet, the achievement lies in statistical inevitability, not understanding [7].
Thus, when AI systems provide answers, even beautiful or convincing ones, we must remember: they do not know what they are saying. They are not thinkers. They are predictors.
Socratic AI, Not Confucian AI
This leads to a more important question: How should we use AI?
There are two contrasting educational philosophies worth considering. The first is associated with Confucius, who emphasized memorization, repetition, and obedience to tradition [8]. In this model, students are repositories of inherited knowledge, trained to recite rather than question. The second is the Socratic method, which centers on critical inquiry. Socrates believed in questioning assumptions and engaging in dialogue to uncover understanding [9].
Currently, AI is often used in a Confucian manner. People copy answers from ChatGPT without analysis, as if repeating scripture. This is not only intellectually lazy — it is also dangerous. Generative AI can and does produce errors, fabrications, and hallucinations. Treating it as a perfect authority will only lead to misinformation and a decline in critical thinking.
Instead, AI should be used Socratically — not to deliver truth, but to provoke reflection. A good use of ChatGPT is not to take its output as a final answer, but to engage with it: ask it questions, challenge its reasoning, and let it challenge yours. Dialogue, not obedience, is the productive path.
The Role of the User
It is tempting to imagine that AI can “think.” It can simulate conversation. It can summarize complex ideas. It can even mimic emotional tone. But it does not and cannot think as humans do. Thought is rooted in consciousness — and while we do not yet fully understand consciousness, we know that machine learning models do not possess it. They do not reflect. They do not create meaning. They replicate patterns.
That said, they are not useless. Quite the opposite — AI can be extremely helpful. For example, when it comes to widely accepted facts — such as Newton’s laws of motion or the boiling point of liquid nitrogen — language models are highly reliable. These facts are well documented and appear consistently across their training data.
But outside of these stable domains, problems arise. When discussing complex philosophical arguments, subtle historical events, or niche knowledge areas, AI is more likely to hallucinate [10]. Without a specific “guiding path,” such as a well-engineered prompt or a retrieval-augmented generation (RAG) system that supplies factual context [11], the model may produce misleading or simply incorrect answers.
Thus, users must provide that guidance. Users must build the pathway through which the AI can operate effectively. Otherwise, the AI wanders blindly.
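As a concrete illustration of such a guiding path, here is a minimal retrieval-augmented sketch in Python. The documents, the word-overlap retriever, and the prompt template are all placeholders chosen for readability; a production RAG system would typically use embeddings and a vector index, and would send the assembled prompt to an actual model API.

```python
# Minimal retrieval-augmented prompting sketch.
# The documents and the retriever are illustrative placeholders only.

DOCUMENTS = [
    "Liquid nitrogen boils at about -196 degrees Celsius at atmospheric pressure.",
    "Newton's first law states that an object stays in motion unless acted on by a force.",
    "Socrates taught by asking questions rather than delivering answers.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Place retrieved passages in the prompt so the model's next-token
    predictions are anchored to supplied context rather than free association."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question, DOCUMENTS))
    return (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The assembled prompt would then be sent to whichever model you use.
print(build_prompt("At what temperature does liquid nitrogen boil?"))
```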
Implications for Education and Society
In education, this means that teachers should not be replaced by AI. But teachers can work with AI. If a student asks a question beyond the teacher’s immediate knowledge, an AI model can provide a starting point — not a final answer, but a way to open discussion. Even if the AI is partially wrong, the process of questioning its output can itself become a learning experience.
Importantly, educators should not focus solely on teaching prompt engineering. Instead, the emphasis should remain on fact-checking, critical reasoning, and the ability to distinguish between authority and probability [12].
Relying solely on AI outputs without prior knowledge is dangerous. Without a personal knowledge base, people cannot recognize when the AI is confidently wrong. Blind trust leads to passive thinking. The responsible path is informed skepticism.
Conclusion
Artificial Intelligence is not a teacher in the Confucian sense. It should not be obeyed. Nor is it a philosopher or a scientist. It cannot generate insight in the way Newton did when he looked at a falling apple and deduced the laws of gravity.
What it can do — and do well — is engage us in thought. When used correctly, it becomes a Socratic tool: not one that recites truth, but one that helps us uncover it ourselves. In dialogue, in questioning, and in reflection, AI becomes valuable.
So, do not treat ChatGPT as a truth engine. Treat it as a mirror — one that reflects your ideas back at you, sharpens your thinking, and helps you uncover what you did not yet realize you already knew.
Read More
(Chinese) 为什么说AI正在迅速拉开人的差距?怎样利用ChatGPT高效学习?AI不能替代人类,但它胜过99%的人际关系|心理学|哲学|ChatGPT|叙事疗法| (“Why is AI rapidly widening the gap between people? How can ChatGPT be used to learn efficiently? AI cannot replace humans, but it outperforms 99% of interpersonal relationships | Psychology | Philosophy | ChatGPT | Narrative Therapy”)