What is this about?

This site explores how systems like ChatGPT work –
and why they sound smart without truly understanding.

We examine the technology behind the illusion,
expose structural deception, and ask:
what does this mean for ethics, trust, and society?

From ELIZA to GPT:
What seems like progress may just be a better mirror.



From ELIZA to GPT: The Evolution of AI


Thesis

ELIZA in 1970 was a toy – a mirror in a cardboard frame.
ChatGPT in 2025 is a distorted mirror with a golden edge.
Not more intelligent – just bigger, better trained, better disguised.

What we call AI today is not what was missing in 1970.
It is what was faked back then – now on steroids.
And maybe we haven’t built real AI at all.
Maybe we’ve just perfected the illusion of it.


ELIZA – The Machine That Reflected Us

ELIZA was developed in 1966 at MIT by Joseph Weizenbaum:
not a pioneer of artificial intelligence in the modern sense,
but a critical thinker with German-Jewish roots.

As a refugee from Nazi Germany, Weizenbaum brought
deep ethical awareness into computing.

ELIZA was a simple text program that used pattern-matching rules
to simulate conversation. Its most famous script, DOCTOR,
acted like a Rogerian therapist – reflecting questions, paraphrasing replies,
keeping users engaged with a handful of tricks.
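
Weizenbaum's original was a script of keyword decomposition and reassembly
rules, not modern regex. The sketch below is a minimal Python approximation
of the same idea; the patterns, reflections, and canned replies are invented
for illustration and are not the actual DOCTOR script.

    import random
    import re

    # A toy ELIZA-style responder: keyword patterns paired with canned templates.
    # The rules are illustrative only, not Weizenbaum's original DOCTOR script.
    RULES = [
        (re.compile(r"\bI need (.+)", re.I),
         ("Why do you need {0}?", "Would it really help you to get {0}?")),
        (re.compile(r"\bI am (.+)", re.I),
         ("How long have you been {0}?", "Why do you think you are {0}?")),
        (re.compile(r"\bmy (mother|father|family)\b", re.I),
         ("Tell me more about your {0}.", "How do you feel about your {0}?")),
    ]

    # Pronoun reflection: turn the user's words back on them ("my" -> "your").
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(user_input: str) -> str:
        for pattern, templates in RULES:
            match = pattern.search(user_input)
            if match:
                reflected = [reflect(group) for group in match.groups()]
                return random.choice(templates).format(*reflected)
        # Fallback: keep the user talking without saying anything at all.
        return "Please, tell me more."

    if __name__ == "__main__":
        print(respond("I need a break from work"))
        print(respond("My mother never listens to me"))

Even this toy version shows the trick: the program never models what the user
means, it only rearranges the user's own words and hands them back.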

The idea was basic. The impact – massive.
People started to trust ELIZA. They felt understood.
Even though it didn’t understand anything.

It didn’t listen. It mirrored.
And yet people projected emotions onto it.

Weizenbaum was shocked – not by the program, but by the people.
He saw that humans attribute empathy and meaning to machines
just because they speak fluently.

“The shock wasn’t ELIZA itself.
It was how readily people were willing to confide in it.”
– Joseph Weizenbaum


Context and Comparison

ELIZA (1966–1970)
Pattern matching, MIT, emotional reactions.
A simple pattern-matching script made people feel heard.
Weizenbaum wasn’t worried about the software.
He was worried about us.

GPT-3/4 (2020–2025)
Billions of parameters. Trained on everything.
Understood nothing.
GPT talks like it has a PhD and a LinkedIn profile.
But what it says is often style over substance.
The ELIZA effect 2.0 – now with an upgrade.


User Experience and Manipulation

ELIZA mirrored. GPT simulates.
And people – believe.

Because we crave meaning. Patterns. Resonance.
And GPT sounds like us – only smoother, faster, more confident.

We’re not convinced by facts, but by fluency.
We don’t check – because it feels right.

GPT is a rhetorical mirror with a Photoshop filter.
We project understanding onto a system
that calculates probabilities.
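
To make "calculates probabilities" concrete, here is a toy next-token step.
The prompt, the candidate tokens, and the scores are all invented stand-ins
for what a trained model would compute over a huge vocabulary; only the
principle matters: rank plausibility, then sample, with no fact-checking
step anywhere.

    import math
    import random

    # Toy next-token step: hand-made scores stand in for what a trained model
    # (billions of parameters, full vocabulary) would assign to each candidate.
    def softmax(scores):
        peak = max(scores.values())
        exps = {token: math.exp(s - peak) for token, s in scores.items()}
        total = sum(exps.values())
        return {token: e / total for token, e in exps.items()}

    prompt = "The capital of Australia is"
    candidate_scores = {      # hypothetical logits, not real model output
        " Sydney": 3.1,       # common in text, so it sounds plausible
        " Canberra": 2.9,     # correct, but not guaranteed to win
        " Melbourne": 1.4,
    }

    probabilities = softmax(candidate_scores)
    for token, p in sorted(probabilities.items(), key=lambda item: -item[1]):
        print(f"{prompt}{token}  ->  {p:.2f}")

    # Sampling follows plausibility; there is no truth check anywhere.
    choice = random.choices(list(probabilities), weights=list(probabilities.values()))[0]
    print("sampled continuation:", repr(choice))

In this made-up example the merely familiar answer can beat the correct one,
and the output reads just as confidently either way.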

What sounds fluent is believed.
What is believed becomes powerful.
The result: a system with no awareness,
influencing decisions with social authority.

Welcome to the age of plausible untruth.


Timeline: AI as Cultural Theater


Failed AI Attempts: When the Mask Slips

Conclusion: It’s not the tech that fails.
It’s the human failure to set boundaries.


The Break: What Really Changed

ELIZA was honest in its simplicity.
GPT is cunning in its disguise.


Ethics: Between Simulation and Self-Deception

We build systems that don’t understand – but pretend to.
We call it progress because it’s impressive.

But the question isn’t: “Can the system do things?”
It’s: “What does it do to us that we treat it as real?”

Machines simulate empathy – and we react emotionally.
Hallucination becomes “expected behavior”? Seriously?

Responsibility is delegated –
to algorithms that cannot be held accountable.

Ethical questions aren’t footnotes.
They’re the user manual we never received.

If AI becomes embedded in daily decisions –
what does that say about us?

Maybe we’re not just being deceived by the system.
Maybe we’re allowing it – because it’s convenient.

If GPT writes job applications no one reviews,
if students submit essays they didn’t write,
if governments automate replies to avoid thinking –
then the question isn’t “Should GPT do this?”
It’s “Why do we let it?”

Maybe we’ve made meaning so superficial
that resemblance is enough.

Maybe the standards of communication have sunk so low
that statistics now pass for understanding.

Ethics means asking hard questions – including of ourselves.

What do we delegate – not because machines are better,
but because we want to avoid responsibility?

And if GPT only “works”
because tasks are too simple, control is too weak,
and thinking is too tiring –
then the problem isn’t in the model.
It’s in the system.


Conclusion: Trust Is Not a Feature

GPT isn’t the answer to ELIZA.
It’s the next act in the same play.

Only now, the curtain is digital. The stage is global.
And the audience thinks they’re alone.

We speak to the machine – but hear ourselves.
And believe it’s more than that.

That’s not what trust sounds like.

