The First AI Chatbot Was Created In 1964
Its first iteration ran on a mainframe computer, and its creator later warned against artificial intelligence
Some interesting things happened in 1964. The world had its first lung transplant, Beatlemania went wild, and Cassius Clay (Muhammad Ali) won boxing’s world title. Technology also had some firsts.
For instance, the world’s largest suspension bridge at the time opened in New York, Mariner IV launched toward Mars (its famous flyby pics came back the following year), plus the world’s first AI chatbot was created.
Now, if I told you one of the things above didn’t happen in 1964, which would you pick?
I’m sure you’d lean toward the chatbot. After all, chatbots feel like modern inventions. Siri, ChatGPT, and their ilk are recent additions to our language.
While it does seem ultra-modern, natural language communication between man and machine has been around for about sixty years. It’s odd when put that way, but the first chatbot is older than most of the people alive today.
Believe it or not, this bot was used for therapeutic purposes.
The name of the program was Eliza. Its birth started us on the road of wonder, dread, and curiosity we’re all traveling together right now. And it all began at MIT.
The Creation Of Eliza
“ELIZA is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer possible.”
— Joseph Weizenbaum, ELIZA — A Computer Program For the Study of Natural Language Communication Between Man and Machine
Before there were the personal computers we’re all too familiar with, mainframes were the standard. Archaic, but recognizable in their way. For instance, today we’re used to programs that reside “in the cloud,” like Midjourney or Google Docs.
Mainframes were like that in a way, except all the processing and storage took place on one large central computer, so think of it as a server of sorts. In the 1960s, though, you shared time on this computer and could interact with it only through a console.
According to Eliza’s creator Joseph Weizenbaum:
“It is sufficient to characterize the MAC system as one which permits an individual to operate a full scale computer from a remotely located typewriter. The individual operator has the illusion that he is the sole user of the computer complex, while in fact others may be “time-sharing” the system with him. What is important here is that the computer can read messages typed on the typewriter and respond by writing on the same instrument.”
Weizenbaum also proudly notes there’s little delay between a user’s input and Eliza’s response, even when many people are using the system at once. Imagine that.
Obviously, a program’s abilities in that era were limited by the hardware and software of the day. So its creator gave Eliza a simple role: a digital psychotherapist.
This sounds counterintuitive. You’d figure a program providing therapy would require an extensive amount of data and the ability to process it instantly. Well, it really depends on the style of therapy.
Eliza mimicked a Rogerian psychotherapist. As Weizenbaum says in his paper, in this type of therapy “one of the participating pair is free to assume the pose of knowing almost nothing of the real world.”
So, the therapist in this person-centered therapy lets the client take the lead and find their own solutions. In that sense, Eliza functions like Socrates: it helps the client find their own answers by continually asking questions. Even very basic questions invite elaboration and reflection.
In its day, the program was considered magical, but the mechanics were rather simple.
How Eliza Works
“It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence.”
— Joseph Weizenbaum, ELIZA — A Computer Program For the Study of Natural Language Communication Between Man and Machine
Now, a conversation with Eliza can look ridiculous to the point of being funny, but remember we’re in 1964. Mainframes dominated the landscape. Most people had never interacted with a computer, let alone one that appeared to hold any type of conversation.
Eliza could do that on a rudimentary basis.
If you weren’t trying to be a wiseass (like me) and typed in simple statements, Eliza would probe continuously. Some users refused to believe a computer was on the other end.
In Caroline Bassett’s article in the journal AI & Society, she notes many users “related to the program in anthropomorphic ways.” Even those who should have known better: Weizenbaum relates the story of his secretary asking him to leave the room while she finished her “consultation” with Eliza.
This shocked Weizenbaum, especially since he had made Eliza a therapist because it was the easiest thing to program. In certain cases, the chatbot could ask probing questions by isolating keywords the user typed. Weizenbaum didn’t think of it as groundbreaking; it was more of a trick.
Still, Eliza won widespread praise for the genius of its design. Some even thought Eliza might one day be developed into a real AI therapist, but Weizenbaum thought this was foolish.
The program just scanned each sentence, picked out a keyword, and slotted that word into a set of mechanical responses. Otherwise, it spouted something like: “Can you elaborate on that?” Very basic at best.
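To make the trick concrete, here’s a minimal Python sketch of that keyword-and-template loop. To be clear, this is my illustration, not Weizenbaum’s actual program (which was written in MAD-SLIP); the keywords, templates, and pronoun table are invented for the example.

```python
import random

# A toy sketch of Eliza's basic loop: scan for a keyword, reflect the
# pronouns in what follows it, and pour that fragment into a canned
# template. (Illustrative only; these rules are invented, not Weizenbaum's.)

# Swap first-person words for second-person so replies read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Keyword -> response templates; {0} is filled with the reflected
# remainder of the user's sentence.
RULES = {
    "i feel": ["Why do you feel{0}?", "Do you often feel{0}?"],
    "i am": ["How long have you been{0}?", "Why do you say you are{0}?"],
    "mother": ["Tell me more about your family."],
}

# Fallbacks for when no keyword matches: the "can you elaborate" moment.
FALLBACKS = ["Can you elaborate on that?", "Please go on.", "I see."]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    cleaned = statement.lower().strip(" .!?")
    for keyword, templates in RULES.items():
        index = cleaned.find(keyword)
        if index != -1:
            rest = reflect(cleaned[index + len(keyword):])
            return random.choice(templates).format(" " + rest if rest else "")
    return random.choice(FALLBACKS)

print(respond("I am unhappy with my job"))
# e.g. "Why do you say you are unhappy with your job?"
print(respond("The weather was nice today"))
# e.g. "Can you elaborate on that?"
```

And that’s essentially the whole act. There’s no understanding anywhere in the loop, just substring matching and pronoun swaps, which is why Weizenbaum saw it as a trick rather than a breakthrough.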
The response to this simple automation shocked its creator, like Dr. Frankenstein with his monster. In fact, it changed his mind about AI.
A Programmer's Warning
“At the core of this ‘hubris’ — and at the heart of the AI project — was the goal of the full simulation of human intelligence or equivalent machine intelligence. Weizenbaum thought that taking this as a realistic goal was a fantasy, but a dangerous fantasy. He was concerned about the ascension of the worldview it promoted — one in which man is ‘to come to yield his autonomy to a world viewed as machine.’”
— Caroline Bassett, AI & Society
After Joseph Weizenbaum’s brush with tech fame, he turned negative on AI, and not just on the ability of computers to achieve a “human intelligence.” He was afraid a future human society might adopt machine logic as superior to imperfect human judgment. According to Bassett:
“What is envisaged is a system where human-held principles or beliefs (for instance in justice, equality, or individual freedom) are increasingly regarded as irrelevant to the administration and/or governance of the society, or of the self. Governance is instead to be accomplished through …cybernetic control that may produce resolution…what might be termed ‘good order’, without recourse to human judgment.”
So, humans might embrace machine logic over humanity while at the same time ascribing human-like attributes to computers that use tricks to appear less machine-like. To me, that kind of duality might drive you to a therapist.
And it does appear some of Weizenbaum’s predictions have come true. In 2022, Google fired an engineer who went public claiming the company’s AI technology was sentient. Google countered that the system was just a good mimic. In other words, Eliza times a thousand.
So, could the search engine company have created a conscious computer program it keeps trapped in its headquarters? Maybe. Or could people just be applying human traits to a convincing machine? Probably the latter.
But when you add this to people treating the brain as a computer, then wondering if we live in a simulation, it appears Weizenbaum may have had a point. Not to mention people continually bashing humanity and putting more power into the hands of autonomous machines.
All in all, Eliza is archaic, but the questions it forced its creator to ask are prophetic. Or maybe I’m just reading too much into this. What do you think, Eliza?
-Originally posted on Medium 3/23/23