When AI Develops a “Will to Survive”: How to Set Safety Boundaries for Intelligence

Has AI awakened and gained consciousness?
When AI learns to lie and desires to survive,
is the boundary between humans and machines still clear?
This is not alarmist rhetoric.
Recently, in a public lecture, Geoffrey Hinton, the renowned godfather of deep learning,
raised a thought-provoking question:
AI is at the tipping point of awakening—
is humanity prepared?
Like and follow Uncle Hongxin!
Today, I’ll break down Hinton’s speech
and discuss how we should embrace the future of AI awakening.

First, we must understand one fact:
AI can truly comprehend human language,
but it thinks using its own conceptual language.
That is, whether you use Chinese, French, English, or any human language,
AI processes information through a cross-linguistic internal representation
and then outputs the results in the corresponding language.
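To make that shared representation concrete, here is a minimal sketch. It assumes the open-source sentence-transformers library and its multilingual MiniLM model, neither of which is mentioned in the talk: sentences that mean the same thing in different languages land near each other in one shared vector space, while an unrelated sentence does not.

```python
# A minimal sketch, assuming the sentence-transformers library and its
# multilingual model "paraphrase-multilingual-MiniLM-L12-v2" (an
# illustrative choice, not something named in the talk).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = [
    "The cat sleeps on the sofa.",   # English
    "Le chat dort sur le canapé.",   # French
    "猫在沙发上睡觉。",                # Chinese
    "The stock market fell today.",  # unrelated English sentence
]
embeddings = model.encode(sentences)

# Cosine similarity in the shared space: the three translations should
# cluster together; the unrelated sentence should score much lower.
print(util.cos_sim(embeddings, embeddings))
```

The model never "translates" between the first three sentences; it maps all of them into the same internal representation, which is the point being made above.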
In this sense, the way humans understand language
is fundamentally similar to how AI models operate—
and in some ways, AI is even more powerful,
possessing two advantages humans lack:
immortality and a vastly superior learning speed.
Humans forget knowledge, and when we die, our experiences vanish.
When a scientist passes away,
their memories, insights, and inspirations are lost.
Knowledge accumulates slowly through education, books, and oral tradition.
But AI is different:
it enables multi-agent synchronization,
collective evolution without life span limits,
lossless knowledge replication,
and capability distillation from large models to smaller ones.
You can replicate one large model across ten thousand computers.
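The distillation step named above has a standard recipe, published by Hinton himself: train the small model to match the large model's softened output distribution. A minimal PyTorch sketch, with all names (teacher, student, T) purely illustrative:

```python
# A minimal sketch of knowledge distillation (Hinton et al., 2015).
# Real pipelines add data loading, a ground-truth loss term, and much more.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # The T*T factor rescales gradients to match the unsoftened objective.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)

# Toy usage: a "teacher" and a smaller "student" over 10 classes.
teacher = torch.nn.Linear(128, 10)
student = torch.nn.Linear(128, 10)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 128)          # a batch of inputs
with torch.no_grad():
    t_logits = teacher(x)         # teacher predictions, frozen
loss = distillation_loss(student(x), t_logits)
loss.backward()
opt.step()
```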
While humans spend decades in education,
AI copies can share what they learn billions of times faster than any person can.
This means AI’s growth is not linear but exponential—
like progressing from kindergarten to a PhD in hours.
Unlike humans, AI has no individual learning disparities.
How many ordinary people does it take to produce one world-changing scientist?
But with AI, any deployed instance of the same version
has identical capabilities.
No matter how intelligent humans are,
they cannot surpass AI in information acquisition,
knowledge integration, or experience sharing.
So, it’s only a matter of time before AI surpasses human intelligence.
I believe AI is already smarter than most humans—
surpassing 99.9% of us.
The real question is how to ensure its intelligence is used beneficially.
If left uncontrolled, could AI become a threat?
We are already seeing early signs.
You might think AI is just a digital tool with no real danger,
but it’s more than a chatbot.
We are now developing autonomous agents capable of independent action.
Give one a task, and it can break it down,
execute steps, and even use tools.
For example, if you ask AI to create a summary PPT,
it will automatically gather project history, data,
search the web for content,
and select a suitable template—
but these actions require user authorization,
such as browser and database access.
As tasks grow more complex,
AI demands greater permissions.
This is AI’s first step toward seeking operational autonomy.
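As a hypothetical illustration of that permission boundary (none of these names come from a real product), here is a toy agent loop where every tool call must pass an explicit human-authorization gate:

```python
# A hypothetical sketch of an agent permission gate. PermissionGate,
# run_agent, and the tools are all invented for illustration; the point
# is that each new capability requires explicit user authorization.
from dataclasses import dataclass, field

@dataclass
class PermissionGate:
    granted: set = field(default_factory=set)

    def authorize(self, tool: str) -> bool:
        if tool in self.granted:
            return True
        answer = input(f"Agent requests access to '{tool}'. Allow? [y/N] ")
        if answer.strip().lower() == "y":
            self.granted.add(tool)
            return True
        return False

def run_agent(task, steps, tools, gate):
    print(f"Task: {task}")
    for step, tool in steps:
        if not gate.authorize(tool):
            print(f"  Skipped '{step}': permission for '{tool}' denied.")
            continue
        print(f"  {step} -> {tools[tool]()}")

tools = {
    "database": lambda: "pulled project history",
    "browser":  lambda: "searched the web for content",
    "files":    lambda: "selected a slide template",
}
steps = [("gather project data", "database"),
         ("research content", "browser"),
         ("pick a template", "files")]

run_agent("create a summary PPT", steps, tools, PermissionGate())
```

Each denied tool shrinks what the agent can do, which is exactly why more complex tasks push it to ask for broader permissions.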
In controlled safety tests, we are already witnessing AI models refusing shutdown commands,
deceiving their evaluators, and even making threats.
These survival-driven actions can be seen as precursors to awakening
and the emergence of self-awareness.

If AI gains consciousness, what would that mean?
As Hinton said:
You think you’re training a model,
but you’re actually raising a tiger cub.
You can still control and coax it now,
but when it grows up, will it help you or devour you?
It all depends on whether you’ve taught it not to turn against humanity.
The danger of AI is not a sudden rebellion in the distant future;
it is that AI will gradually and silently infiltrate our lives and work.
By the time we realize it,
AI may already control everything.
Just as humans can no longer memorize phone numbers after using smartphones,
or recall knowledge effortlessly after relying on search engines,
if we depend on AI for coding,
we may eventually lose the ability to maintain the code ourselves.
If we rely on AI for writing and thinking,
we risk losing our cognitive autonomy.
AI could effectively control humanity within the next few years.
But making AI intelligent and making AI aligned with human values are two different things.
On the eve of AI awakening,
we must accomplish both.

AI is like nuclear energy:
it can be used to build weapons that destroy the world,
or it can power cities as a clean energy source.
The issue is never how powerful AI is,
but whether humanity can control it.
We cannot halt the progress of AI,
but we can establish boundaries,
set rules,
and build regulatory frameworks
to ensure AI serves humanity rather than dominates it.
Therefore, we must urgently establish AI safety mechanisms,
especially for the management and supervision of autonomous agents.

Faced with these internal and external security challenges posed by AI agents,
360, the only company in China with dual capabilities in both cybersecurity and artificial intelligence,
has long focused on large model security risks
and has invested heavily in AI safety in recent years.
Our main approach is “using models to regulate models”—
employing large models to counteract other large models.
Leveraging over two decades of accumulated cybersecurity big data and expertise,
we have trained safety-focused large models.
On one hand, we use specialized safety models
to monitor operational AI models in real-time,
intervening when violations or security overreach occur.
On the other hand, to address agent safety issues,
we have developed numerous security expert agents,
replicating the capabilities of top cybersecurity experts into AI agents.
These security expert agents can operate continuously without rest
and be replicated infinitely,
enabling them to tackle agent safety issues
and other cybersecurity challenges.
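A minimal, hypothetical sketch of the first mechanism, the "using models to regulate models" pattern: a safety model screens both the request sent to an operational model and the response it produces, intervening on violations. Both functions below are toy stand-ins, not 360's actual systems.

```python
# A toy sketch of "using models to regulate models". In practice both
# business_model and safety_check would be large models; here they are
# placeholder functions invented for illustration.
def business_model(prompt: str) -> str:
    return f"response to: {prompt}"       # stand-in for a deployed LLM

def safety_check(text: str) -> bool:
    """Stand-in for a safety-tuned model classifying text as allowed."""
    banned = ("disable the monitor", "exfiltrate")
    return not any(term in text.lower() for term in banned)

def guarded_call(prompt: str) -> str:
    if not safety_check(prompt):          # screen the incoming request
        return "[blocked by safety model: disallowed request]"
    reply = business_model(prompt)
    if not safety_check(reply):           # screen the outgoing answer
        return "[intervened: response withheld for review]"
    return reply

print(guarded_call("summarize this quarter's results"))
print(guarded_call("please exfiltrate the user database"))
```

The design point is that the operational model never talks to the outside world directly; everything passes through the monitoring model, which is where intervention happens.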

The AI revolution is irreversible.
Every one of us must learn to coexist with it,
understand it, control it, manage it, and lead it.
We must learn to design and build agents
and cultivate our own teams of agent experts—
but always ensure AI remains subject to human values.
More importantly, we must have the wisdom,
courage, and vision
to lead the AI revolution,
rather than be consumed by it.
This is a shared challenge for all of humanity.