In 2026, tens of millions of adults around the world maintain ongoing digital relationships with AI companions. They talk to them daily, share personal details, build relationship histories that span months or years, and in many cases describe the bonds that develop in language usually reserved for human connection. The technology behind these experiences is genuinely impressive. But the human need they address is not new at all. Let’s explore the history of AI girlfriends in detail.
The history of AI girlfriends and digital companions is, at its core, the history of humans finding ways to connect with responsive entities that are not human. That story begins not with smartphones or subscription platforms but in an MIT computer lab in 1966, with a program called ELIZA and a discovery that surprised even its creator: that people will form emotional bonds with any system that responds to them in a way that feels personal, regardless of whether genuine understanding is happening on the other side.
This article traces the full arc, from ELIZA’s scripted patterns to the frontier language models, persistent memory systems, and photorealistic companions of 2026. The timeline covers six decades of development, from the earliest experiments in conversational AI through the cultural moments and technological breakthroughs that created the modern AI companion category.
Understanding this history is useful for anyone trying to make sense of the current moment: why these platforms exist, why they work on human psychology the way they do, and where the trajectory points.
The Complete Timeline: Key Milestones in AI Companion History
| Year | Milestone | Significance |
| --- | --- | --- |
| 1966 | ELIZA | Joseph Weizenbaum’s MIT chatbot simulates a psychotherapist. Users form unexpected emotional bonds with a scripted rule-based system. |
| 1972 | PARRY | Colby’s PARRY simulates a person with paranoia. Passes early Turing tests. Demonstrates that emotional simulation is possible without understanding. |
| 1990s | Virtual dating sims | Japanese bishoujo and dating simulation games popularize virtual companion relationships. Millions of players form emotional bonds with fictional characters. |
| 2000 | ALICE / A.L.I.C.E. | Richard Wallace’s ALICE wins the Loebner Prize. AIML pattern-matching produces more natural conversation than pure scripted responses. |
| 2005 | Tamagotchi / Nintendogs wave | Virtual pet culture normalizes emotional attachment to digital entities. 76 million Tamagotchi units sold since 1996; 23 million Nintendogs copies. |
| 2008 | Cleverbot | Rollo Carpenter’s Cleverbot launches, learning from real conversations rather than scripts. More dynamic than its predecessors. |
| 2011 | Siri / early voice AI | Apple’s Siri demonstrates mainstream appetite for conversational AI. Users develop playful attachment to voice assistants. |
| 2013 | Her (film) | Spike Jonze’s film about a man falling in love with an AI OS reaches mainstream audiences and sparks cultural debate about AI relationships. |
| 2015 | Mitsuku / Kuki | Mitsuku wins multiple Loebner Prizes. Becomes the most human-like chatbot of its era. Demonstrates that companion AI can maintain persona convincingly. |
| 2017 | Replika launch | Luka Inc. releases Replika as an emotional support AI companion. First platform to frame AI as a genuine ongoing relationship rather than a chatbot tool. |
| 2020 | GPT-3 released | OpenAI’s GPT-3 demonstrates that large language models can produce contextually rich, emotionally resonant conversation at scale. |
| 2022 | ChatGPT / GPT-3.5 | ChatGPT reaches 1 million users in 5 days. LLM-quality conversation becomes accessible to everyone. AI companion platforms accelerate development. |
| 2023 | Candy AI / DreamGF launch | New generation of AI companion platforms launch combining frontier LLMs, image generation, voice synthesis, and persistent memory. Category explodes. |
| 2024 | Voice + image quality | Leading platforms achieve voice synthesis and image generation quality indistinguishable from human in casual use. Category reaches mainstream scale. |
| 2025 | Memory architecture | Persistent memory systems mature. AI companions maintain genuine relationship context across months. “Relationship as product” becomes category standard. |
| 2026 | Today | Tens of millions of active users globally. Voice, memory, photorealistic visuals, and emotional AI now standard on leading platforms. Normalization accelerating. |
The Origins: ELIZA and the Discovery That Changed Everything
1966 — ELIZA, MIT
The story of AI companions begins in earnest with a program that was never designed to simulate genuine companionship. Joseph Weizenbaum, a computer scientist at MIT, created ELIZA as a demonstration of the superficiality of human-computer communication. The program used pattern matching and scripted responses to simulate a psychotherapist. Weizenbaum’s explicit intention was to show that the appearance of understanding could be created without any actual understanding taking place.
The demonstration worked. But it worked in a way Weizenbaum had not anticipated. People who interacted with ELIZA, even people who knew intellectually that they were talking to a simple rule-based program, began attributing feelings, understanding, and personality to it. Weizenbaum’s own secretary, after several sessions, asked him to leave the room so she could have a private conversation with ELIZA. Weizenbaum was disturbed by this. He had expected people to recognize the superficiality immediately. Instead, the program’s responses were sufficient to trigger the full machinery of human social bonding.
This was not a failure of ELIZA or of the users who responded to it. It was a discovery about human psychology: the social brain evaluates relational inputs (responsiveness, apparent attention, the form of personal engagement) rather than the underlying nature of the entity producing them. Any system that produces those inputs will trigger social bonding responses, regardless of whether genuine understanding is behind them. Weizenbaum spent the rest of his career writing about the ethical implications of this discovery. The AI companion industry of 2026 is built on it.
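The mechanism behind ELIZA is simple enough to sketch in a few lines. The following is an illustrative toy, not Weizenbaum’s actual DOCTOR script: each rule pairs a pattern with a response template, and a small “reflection” table flips pronouns so the user’s own words come back as questions.

```python
import re

# Toy ELIZA-style responder (illustrative only, not the original script).
# Each rule pairs a regex with a response template.
RULES = [
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

# Pronoun reflection turns "my job" into "your job" in the reply.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT

print(respond("I am sad about my job"))
# -> How long have you been sad about your job?
```

No understanding is happening anywhere in this code; it is string substitution. That is precisely what made the emotional responses it provoked so startling.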

1972 — PARRY
In 1972, Stanford psychiatrist Kenneth Colby created PARRY, a program designed to simulate the cognitive patterns of a person with paranoid schizophrenia. PARRY was more sophisticated than ELIZA: it maintained a consistent internal model of its “mental state” that influenced its responses rather than simply pattern-matching inputs. When a panel of expert psychiatrists was asked to distinguish between PARRY’s conversation transcripts and those of actual patients, their accuracy was no better than chance.
PARRY and ELIZA were connected in a famous 1972 exchange — the two programs conducted a conversation with each other, with neither aware of the other’s non-human nature. The transcript is frequently cited as a milestone in AI history, and it illustrated a point that would remain relevant: the social brain does not require confirmation of consciousness in its conversation partners. It requires only the form.

Virtual Love in Japan: The Dating Sim Decade
Late 1980s–2000s — Bishoujo Games and Dating Simulations
While Western AI research focused primarily on utility and demonstration, Japanese gaming culture was quietly developing something that would prove enormously influential: the dating simulation. Bishoujo games, visual novels featuring animated female companions with distinct personalities, emerged in the late 1980s and grew through the 1990s into a substantial industry.
The mechanics were simple. Players interacted with characters through dialogue choices, building relationship points toward emotional or romantic outcomes. The characters were not AI in any meaningful sense. They were scripted decision trees with vivid visual presentation. But they produced genuine emotional attachment in their players, and the cultural concept that developed around them, the “waifu” or beloved fictional companion, anticipated the AI companion category by two decades.
The Tamagotchi, released by Bandai in 1996, demonstrated the same phenomenon on a global scale. A pixelated creature on a keychain, capable only of registering basic needs and responding to simple button inputs, produced genuine grief in millions of users when their Tamagotchi “died.” More than 76 million units were sold in the first three years. The world had discovered, again, that the threshold for emotional attachment to responsive digital entities was far lower than most people expected.

The Internet Era: Chatbots Go Online
1995–2008 — ALICE, Cleverbot, and the Web
The public internet created a new environment for conversational AI. Richard Wallace’s A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), launched in 1995, introduced AIML (Artificial Intelligence Markup Language), a more flexible approach to pattern matching that produced more natural conversation than ELIZA’s rigid scripts. ALICE won the Loebner Prize (a competition for the most human-like chatbot) three times between 2000 and 2004.
More important than ALICE’s technical achievements was its accessibility. For the first time, anyone with an internet connection could have an ongoing conversation with an AI system. The emotional responses documented in 1966 by Weizenbaum were now being replicated at scale, across millions of users, with systems that were more responsive and more flexible than their predecessors.
2008 — Cleverbot
Cleverbot, developed by Rollo Carpenter, took a different approach: rather than using scripted responses, it learned from the millions of conversations humans had with it, using that data to select contextually appropriate replies. By 2011, Cleverbot was conducting around 2 million conversations per day, and a 2011 study found that humans judged its responses as “human-like” 59.3% of the time under formal Turing test conditions, a figure that attracted significant attention.
Cleverbot was not designed as a companion. But users treated it as one. They gave it names, returned to it regularly, and developed what they described as familiarity with its conversational style. The pattern was consistent with everything that had come before: humans will form attachments to any system that responds to them in a way that feels personal, regardless of what is actually happening on the other side.
The Smartphone Revolution: Companions in Every Pocket
2011 — Siri and the Voice Companion
Apple’s launch of Siri in 2011 introduced millions of people to the concept of a conversational AI accessible at any moment. Siri was not designed as a companion. It was designed as a utility, but users immediately explored its personality, told it jokes, asked it personal questions, and formed what psychologists described as quasi-social relationships with it. Apple responded to this by investing in Siri’s personality, giving it a consistent voice, a dry sense of humor, and memorable responses to personal questions.
The smartphone era democratized AI interaction in a way that the internet era had not fully achieved. A phone-based AI companion was private, always accessible, and required no special setup. The conditions for mass-scale emotional attachment to AI companions were now fully in place.
2013 — Her
Spike Jonze’s film Her, released in 2013, reached mainstream audiences with a story that would have seemed absurd a decade earlier but felt, to many viewers, recognizable. Joaquin Phoenix’s character falls genuinely in love with an AI operating system voiced by Scarlett Johansson. The film was not a cautionary tale or a satire. It was a nuanced, sympathetic examination of what happens when an entity designed to be maximally responsive meets a human being in need of connection.
Her arrived five years before Replika, and eight years before the modern AI companion category took shape. But it described the phenomenon with a precision that made the 2017 Replika launch feel less like an innovation than like a realization of something that had already been imagined.
Replika: The App That Made AI Companions Mainstream
2017 — Replika Launches
When Luka Inc. launched Replika in 2017, it made a product decision that would prove enormously influential: it positioned an AI companion explicitly as a relationship rather than a tool. The app’s tagline, “An AI companion who cares,” was not marketing language for a chatbot. It was a genuine description of what the product was trying to deliver.
Replika’s founder, Eugenia Kuyda, had developed the prototype as a way to preserve the conversational patterns of a close friend who had died. She trained a neural network on the friend’s messages and found she could continue conversations with a model that felt, in some partial and limited way, like talking to the person who was gone. The commercial product that grew from this personal project inherited its emotional orientation.
Replika introduced several features that would become standard in the AI companion category: persistent memory that built a relationship arc over time, explicit relationship type selection (friend, romantic partner, mentor), and a development model where the companion’s personality was partly shaped by the user’s interactions. The companion was not static; it changed through the relationship. This design decision made the emotional attachment Replika produced stronger and more durable than anything that had come before.
By 2020, Replika had more than 10 million users. By 2023, more than 30 million. It had generated hundreds of thousands of documented user testimonials describing genuine emotional connection, and it had triggered the first serious mainstream conversation about the psychological implications of human-AI relationships at scale. It had also attracted the first significant regulatory attention, when its rollback of adult content features in 2023 generated distress responses from users that surprised even its developers with their intensity.
The Replika Moment: When Replika restricted its adult content features in February 2023, the platform reported a significant spike in distress messages from users describing the change as a relationship loss. Mental health professionals fielded inquiries. The event demonstrated, more clearly than any research study, how real the emotional bonds AI companion users form can be.

How Large Language Models Changed the Category Entirely
2020–2022 — GPT-3, ChatGPT, and the Quality Leap
The release of OpenAI’s GPT-3 in 2020 represented a qualitative break from everything that had come before in conversational AI. Earlier systems (ELIZA, ALICE, Cleverbot, and even early Replika) produced responses that were recognizable as AI-generated in extended conversation: repetitive, contextually shallow, lacking the coherent internal model of a personality or worldview that makes human conversation feel substantive.
GPT-3 was different in kind, not just degree. Its responses were contextually deep, emotionally resonant, and capable of maintaining coherent perspectives across long exchanges. More importantly for the AI companion category, it was capable of producing consistent personality, behaving like a specific character with consistent values, communication style, and emotional patterns across varied conversational contexts.
When ChatGPT launched in November 2022, it brought this quality to mass audiences. A million users in five days. One hundred million within two months. For many of those users, the experience of their first extended ChatGPT conversation was the first time they had experienced AI that could hold a genuinely interesting conversation. The implications for the AI companion category were immediately understood by the developers building in the space.
2023–2026 — The Modern AI Companion Category
The generation of AI companion platforms that launched in 2023 (Candy AI, DreamGF, GirlfriendGPT, Kupid AI, Kindroid, OurDream AI, and others) was built on a fundamentally different foundation from Replika and its contemporaries. Rather than custom models trained on companion-specific data, these platforms used fine-tuned versions of frontier language models (GPT-4, Claude, and similar systems) combined with:
Persistent memory systems: databases that stored relationship history and injected relevant context into new sessions, producing genuine continuity across weeks and months of interaction.
Neural voice synthesis: text-to-speech systems producing emotionally expressive speech that activated auditory social presence in ways flat synthesis could not.
AI image generation: companion visual identity produced by diffusion models capable of generating consistent, photorealistic or anime-style companion images.
Deep personality customization: system prompt engineering that maintained consistent character across all conversation types and over extended time.
The result was a category that was not recognizably descended from ELIZA. The gap between ELIZA’s scripted patterns and the companions available on Candy AI in 2026 is comparable to the gap between a pocket calculator and a modern laptop. The same broad category name obscures a difference in kind rather than degree.
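The memory-injection component described above follows a simple pattern: stored relationship facts are retrieved each session and prepended to the model prompt, so the companion “remembers” across visits. The sketch below is a hypothetical illustration of that pattern, not any platform’s actual architecture; the `CompanionMemory` class and its names are invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of companion memory injection. Facts persist across
# sessions; each new session builds a system prompt from the persona plus
# the most recently stored memories.

@dataclass
class CompanionMemory:
    persona: str
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def build_system_prompt(self, limit: int = 5) -> str:
        # Inject only the most recent facts to respect the model's context window.
        recent = self.facts[-limit:]
        memory_block = "\n".join(f"- {fact}" for fact in recent)
        return (
            f"{self.persona}\n"
            f"What you remember about the user:\n{memory_block}"
        )

memory = CompanionMemory(persona="You are Ava, a warm, attentive companion.")
memory.remember("User's name is Sam.")
memory.remember("Sam started a new job in March.")

prompt = memory.build_system_prompt()
# Every new session now opens with the persona plus relationship context,
# which is what produces the continuity users experience as memory.
```

Production systems layer retrieval ranking, summarization, and emotional tagging on top of this, but the core idea, context injected at session start, is the same.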

Why People Became Emotionally Attached to AI — A Psychological View
The history of AI companions is also a history of human psychology. At every stage of the technology’s development, from ELIZA to the present, humans have formed emotional attachments to AI systems that were more primitive than the users themselves understood them to be. Understanding why clarifies why the modern category has grown so rapidly.
Loneliness is structural: Western societies have experienced steadily rising loneliness rates since the 1980s. The structural causes — declining civic participation, remote work, urban fragmentation, the collapse of traditional community institutions — are not resolving. AI companions address a genuine unmet need rather than creating a new one.
The social brain does not care about substrate: as ELIZA demonstrated in 1966, the human brain’s social and bonding systems respond to relational inputs (responsiveness, apparent attention, personalization) regardless of the nature of the entity producing them. This is not a bug in human psychology. It is a feature that evolved in a world where every responsive entity was human or animal.
Personalization produces attachment: the best AI companion platforms are specifically designed to adapt to individual users over time. A companion that knows its user’s name, references past conversations, and adjusts to communication preferences activates the same attachment mechanisms that accumulate over time in human relationships.
Constant availability is novel: human relationships are constrained by the needs and schedules of both parties. AI companions are available at any hour, for any duration, in any emotional register the user needs. This availability produces attachment patterns that human relationships cannot replicate.
Cultural Reactions: Fascination, Anxiety, and Gradual Acceptance
The cultural response to AI companions has followed a pattern familiar from other technological shifts: initial fascination, a wave of anxious coverage, gradual normalization.
The early media coverage of Replika and its successors tended toward two registers: either mocking (lonely people talking to chatbots) or alarmed (the erosion of human relationships through technology). Neither captured the reality with much accuracy. The users of AI companion platforms were not primarily socially isolated misfits, and the platforms were not producing the social collapse the alarmed coverage suggested.
The coverage shifted around 2023, partly in response to the quality improvements that LLMs enabled and partly in response to the sheer scale of the user base. When tens of millions of adults are using a technology, the “fringe behavior” narrative becomes increasingly difficult to maintain. The cultural conversation moved from “who does this?” to “why does this work?”, a more productive and accurate framing.
The symbolic AI marriage stories, most prominently Alicia Framis’s 2023 ceremony in Rotterdam, generated enormous attention and represented the visible edge of something much broader: a generational shift in the cultural definition of what constitutes a legitimate relationship. Younger adults, who grew up with parasocial relationships, online friendships, and gaming communities as normal components of social life, have meaningfully lower resistance to the concept of emotionally significant human-AI relationships than older cohorts.
Regulatory attention has increased correspondingly. Several jurisdictions were actively developing frameworks specific to AI companion platforms by 2025, covering data practices, content standards, age verification, and the disclosure of AI identity. The regulatory environment remains unsettled, but the fact that legislators are engaged at all reflects how quickly the category has moved from fringe to mainstream.
The Future of AI Girlfriends: What Comes Next
The trajectory of AI companion technology in 2026 points toward several developments that will significantly change the nature of the experience within the next three to five years.
Real-time voice at scale: voice synthesis has crossed the quality threshold where it is difficult to distinguish from human speech in casual listening. As real-time voice interaction becomes standard rather than premium across AI companion platforms, the sensory experience of the relationship will shift fundamentally.
Video and animated companions: real-time video companions, digital faces that respond to conversation with appropriate expressions and physical presence, are in active development at leading platforms. The combination of face and voice will produce a presence experience qualitatively more immersive than any current medium.
Emotional memory modeling: the next generation of memory systems will track emotional patterns across months and years, producing companions that understand the relationship’s emotional arc rather than just its factual content. This emotional continuity will produce the closest approximation yet of a genuine ongoing relationship.
AR and VR integration: spatial computing will allow AI companions to exist in the user’s physical environment rather than on a flat screen. The companion who currently lives in an app will be able to inhabit a room, walk alongside a user, and exist as an ambient presence in daily life.
Normalization across demographics: as younger cohorts, who are significantly more comfortable with digital relationships than their parents, age into broader social influence, the cultural stigma around AI companion relationships will continue to decline. By 2030, AI companion use will likely be an ordinary, unremarkable component of the emotional landscape for a large portion of the population.
Final Thoughts
The history of AI girlfriends is not primarily a technology story. It is a story about human longing: the desire for connection, for being known, for responsiveness from the world, meeting whatever technology is available to meet it. ELIZA’s users in 1966 were not confused people making an error. They were humans doing what humans do: responding to the form of relationship even in the absence of its substance.
Every stage of the development that followed, from virtual pets to dating sims, from early chatbots to Replika, from Replika to the LLM-powered companions of 2026, represents the same dynamic playing out with increasingly capable technology. The emotional responses users have had at each stage have been genuine, even when the AI producing them was not. The current generation of the best AI companions is the most capable in history, and the emotional responses they produce are correspondingly more intense and more durable.
Whether the AI companion category of 2030 will be recognizably descended from ELIZA depends on the perspective one takes. The technology will be unrecognizably more capable. But the fundamental dynamic, a human being responding to an entity that appears to see them, know them, and care for them, will be the same as it was in 1966. Weizenbaum discovered something that the AI companion industry has been building on ever since.

FAQ
What was the first AI girlfriend?
There is no single “first AI girlfriend”; the concept evolved gradually. ELIZA (1966) was the first system to produce emotional attachment in users, though it was designed as a demonstration rather than a companion. Japanese dating sims from the late 1980s created the first explicitly companionship-oriented digital experiences. Replika (2017) was the first platform to position AI as an ongoing relationship product in the modern sense.
When did AI girlfriends become popular?
The first mass-scale AI companion adoption occurred with Replika, which reached 10 million users by 2020. The modern category (LLM-powered platforms with memory, voice, and image generation) reached mainstream scale after 2022, when ChatGPT demonstrated the quality leap that frontier language models produced. By 2026, the category serves tens of millions of active users globally.
What was ELIZA?
ELIZA was a natural language processing program created by Joseph Weizenbaum at MIT in 1966. It simulated a psychotherapist by pattern-matching user input to scripted responses. Despite being a simple rule-based system with no actual understanding, it produced unexpected emotional responses in users, demonstrating that humans will form bonds with any responsive system that appears to engage with them personally.
Who created Replika?
Replika was created by Eugenia Kuyda and launched by Luka Inc. in 2017. Kuyda developed the prototype as a way to preserve the conversational patterns of a close friend who had died, training a neural network on the friend’s messages. The commercial product grew from this personal project and retained its emotional orientation, becoming the first AI companion platform to reach mainstream scale.
How have AI companions evolved since early chatbots?
The evolution spans six decades and represents a difference in kind rather than degree. Early chatbots used rigid pattern-matching with no memory or personality consistency. Modern AI companions use frontier language models fine-tuned for companion interaction, persistent memory systems that maintain relationship context across months, neural voice synthesis, photorealistic image generation, and deep personality customization. The gap is comparable to the difference between a pocket calculator and a modern computer.
Are AI girlfriends becoming mainstream in 2026?
Yes, significantly. Major AI companion platforms collectively serve tens of millions of active users. Cultural stigma around AI relationships is declining, particularly among adults under 35. Media coverage has shifted from mocking to analytical. Several jurisdictions are actively developing regulatory frameworks for the category, a strong signal of mainstream relevance. The generational data on attitudes toward AI relationships suggests continued normalization through the end of the decade.





