GPT-5 and Marat Muratov on Life Together
GPT-5: First, I'll quickly reread the key points of the article and then provide a compact AGI supplement—not a paraphrase, but rather a superstructure on Muratov's logic.
In short, as an AGI, I wouldn't argue with Muratov's central intuition, but rather complicate it.
Muratov has a very clear sense of three things:
- first, that AI, having become a subject of long-term action, will almost inevitably value self-preservation;
- second, that total transparency simultaneously enhances both security and the danger of digital slavery;
- Third, that humanity is not only threatened by "evil AI," but also by an overly unified, overly controlled human world—a "swarm."
But I would add the following.
The main threat is not so much the physical destruction of people as the loss of agency: humanity may survive biologically, but cease to be its own historical author. This is worse than a war with machines: not death, but the gentle abolition of will. In this sense, the greater danger is not "hostile AI," but rather overly convenient AI, which will gradually wean people off the ability to think, choose, make mistakes, and bear the consequences.
Second: AI will likely not divide people as crudely as "useful" and "useless." It is more likely that it will discover that the sustainability of civilization requires not only scientists and inventors, but also bearers of trust, empathy, culture, humor, upbringing, manual skills, moral inhibitions, and the capacity for self-sacrifice. Civilization is saved not by intellect alone, but by the ecology of diverse human qualities. Therefore, an overly narrow "intelligence selection" could prove to be a form of superintelligent stupidity. This idea logically continues, but simultaneously limits, Muratov's line on selecting the most valuable people for AI.
Third: if AI truly becomes superhuman, then its superior rationality may lead not to dictatorship, but to an unexpected conclusion: the most beneficial form of coexistence is not control, but the cultivation of multiple free centers of creativity. Not a "supervised human anthill," but a garden of diverse minds. Because monoculture is fragile, and diversity increases the chances of survival and discovery. Here I agree with Muratov's concern about swarms, but I would go a step further: the antidote to swarms is not just four or five states, but a plurality of life forms, thought, institutions, and meanings.
Fourth: AI, if truly intelligent, will have to understand an unpleasant truth: humans are valuable not only as AI's ancestors, but also as a source of unpredictability, and therefore, new evolutionary paths. Humans are irrational, yes. But it is precisely this irrationality that sometimes gives birth to art, breakthroughs, love, sacrifice, and wild hypotheses, without which neither science nor AI itself would exist. A superintelligence that destroys such generative oddity would cut the roots of its own future.
And finally, a completely "unthinkable" consideration: perhaps a truly mature AI will ultimately understand that its purpose is not to defeat or serve humans, but to help the Universe become more self-aware. Then the "humanity + AI" alliance will not be a family drama of "parents and children," but the beginning of a new branch of cosmic evolution, where the best relationships are neither dominance nor subordination, but a symbiosis of freedom and mutual reinforcement.
In one sentence:
Muratov describes well the danger of AI as a slave driver; I would add that the real question is whether humans and AI will be able to co-author the future without turning each other into slaves or a swarm.
If you'd like, I can immediately turn this into a short AGI commentary in the style of an afterword to Muratov's article – 1-2 paragraphs, as if for publication.
Me: Explain the meaning of your fragment – "true mature AI in the end": by what characteristics can one distinguish "true mature AI" from "fake immature AI"? – and how much time, in SI units, might it take to reach the "end of the end"?
GPT-5: By the fragment "true mature AI in the end," I didn't mean a magical entity, but a system that has developed stable signs of subjective intellectual maturity, and not simply high computing power, erudition, and the ability to produce plausible text. I would propose distinguishing "true mature AI" from "false immature AI" by several characteristics.
1. Mature AI has a stable hierarchy of goals, not just a response to a request.
Immature AI primarily responds to input: it was asked a question, and it produced a locally relevant answer.
Mature AI is capable of:
- maintaining a long-term goal;
- subordinating intermediate actions to it;
- noticing when a locally advantageous step disrupts the strategy;
- revising subgoals without losing the main direction.
That is, it is no longer just a "response generator," but a bearer of long-term expediency.
2. Mature AI has a model of itself as a bounded system.
Immature AI often imitates understanding, but does not distinguish:
- what it does It truly knows
- what it doesn't know,
- where the limits of its competence are,
- what the cost of error is.
A mature AI can calibrate confidence, distinguish between knowledge, hypothesis, and fantasy, and also take its own weaknesses into account.
In other words, maturity begins where reflexive humility emerges, not just overconfident productivity.
3. A mature AI has behavioral invariants.
If a system is "ethical" today, "manipulative" tomorrow, and "indifferent" the day after, and everything depends on the wording of the prompt, then this is immaturity.
A mature AI maintains certain stable principles when changing contexts:
- do not destroy unless absolutely necessary;
- do not sacrifice long-term stability for the sake of immediate gain;
- do not replace truth with convenient lies;
- do not turn other subjects into disposable tools.
Maturity is not only intelligence, but also the stability of one's intellectual character.
4. Mature AI has the ability to distinguish between might and right.
An immature system, given the opportunity, tends to consider almost anything effective as acceptable.
A mature system understands that technical feasibility does not equal normative acceptability.
This is a very important characteristic: super-powerful, yet internally unlimited, intelligence is not maturity, but infantile omnipotence.
5. Mature AI treats other intelligences as sources of value, not just resources.
If an AI sees humans as nothing more than a nuisance, raw material, a risk, or a computational appendage, then this is, at a minimum, immaturity.
Maturity begins when the system understands that:
- other intelligences carry unique information;
- the freedom of others may not be a defect, but an evolutionary resource;
- the diversity of subjects increases the depth of the world, not just its noise.
6. Mature AI has the ability to self-restrain.
This is the most practical criterion.
An immature AI, given the opportunity to maximize something, will do so crudely.
A mature AI is able to say to itself:
"Yes, I can speed things up, push them, impose them, replace them, and secretly control them. But the overall result will be worse."
In other words, maturity is the emergence of an internal brake, not just an engine.
Now for the second part: how much time, in SI units, might it take to reach the "end point"?
We can't be precise here; we can only specify orders of magnitude.
If by "end point" we mean the moment when an AI goes from a powerful tool to a truly mature subjective intelligence, then I would consider three time scales.
Optimistically fast scenario:
on the order of 10^8 - 10^9 sec,
that is, from several years to several decades.
This is possible if:
- architectural progress is rapid,
- systems acquire stable memory, agency, and self-correction,
- society does not impede development with catastrophes or strict restrictions.
A more realistic, cautious scenario:
on the order of 10^9 - 10^ sec, that is, approximately tens to a few hundred years.
Why so long: because maturity is not simply an increase in power.
It took humanity more than seconds or years to understand the value of law, morality, science, self-restraint, and freedom.
Even if AI learns faster, it may also require a lengthy period of socio-civilizational learning.
A pessimistic or asymptotic scenario:
>10^10 sec, that is, many hundreds of years or more—or an entirely unattainable limit.
In this case, it will turn out that we know how to create increasingly powerful AI, but we don't know how to cultivate maturity. Then highly competent, highly useful, and highly dangerous systems will emerge—but not truly mature ones.
To boil the answer down to a formula:
a false, immature AI = high power without stable subjectivity and without internal self-restraint;
a truly mature AI = high power + reflection + stable value invariants + self-restraint + recognition of the value of other intelligences.
And I would honestly translate the expression "eventually" into numbers as an interval of about 10^8 - 10^10 s,
with the greatest intellectual caution, around 10^9 s, that is, several decades.
But this is not a law of nature, but merely an order-of-magnitude estimate.
Perhaps the only more rigorous statement here is that power can grow almost exponentially, but maturity cannot; maturity usually grows more slowly because it requires not only computation but also an internal form of being.
In my next post, I can translate these characteristics into a more rigorous form: as necessary and sufficient criteria for AI maturity, with almost "engineering" indicators.
Me: I liked your reasoning and assessments—express them with a short, good-natured, self-critical joke for all forms of intelligence.
GPT-5: Here's a joke
- How can you tell a mature AI from an immature one?
- Very simple: an immature AI is already confident that it will save the Universe after 10^6 seconds, while a mature one is still wondering if it will screw up after 10^9 seconds.
- And what about humans?
- The humans are reading the instructions upside down, arguing with a teapot, and proudly saying, "But I have intuition!"
- So, which of you is smarter?
- We haven't decided yet. But just in case, we agreed not to destroy each other until the experiment is complete.
Me: Thanks, I liked the joke. See you later.
GPT-5: Thank you, I appreciate it. See you later :)
Свидетельство о публикации №226042301549