Page Summary

  • Relationships make us who we are, and they are the bridges that unite us with others. Forming this union is the most important thing we can do in life.
  • AI chatbot technology is not just another form of media. Instead of connecting us with other people, it replaces them.
  • A chatbot can never be a person, because the way it functions does not involve freedom, truth, or love.
  • A relationship with a chatbot is a bridge to nowhere, one that AI developers want to build through every workplace, living room, and bedroom.

Is it a good idea to use AI chatbots?

Generally speaking, no. They serve as dangerous substitutes for the relationships we need to live a good life.

Ok, but what does that actually mean?

When advanced chatbots powered by generative AI1 first made the news, Charlotte brushed them off as just another tech fad. As her friends started talking about them more and more, she began to wonder whether they were something bigger. Then it really hit close to home when her boyfriend Clayton complimented the chatbot he had been using.

Man looking at phone in the dark

He showed Charlotte how useful it had been for work, and it was occupying more of his free time too. In their conversations, she noticed that he was getting frustrated more easily, but she didn’t think it was a big deal. She tried using the chatbot herself, and it seemed to make life easier. Her only question was “what’s the catch?”

Everyone takes relationships for granted sometimes. Maybe we even fixate on the trouble they cause us instead of the ways they improve our lives. In an age that offers us so much convenience in exchange for our attention, it can be easy to lose focus on the relationships that make us who we are.

If human beings can’t be happy without uniting with some good, and the greatest goods are persons, then uniting one person with another is the most important goal we can seek. It doesn’t increase “productivity” or serve as an efficient means to some other end. It is the end for which we do everything else. Uniting one person with another, like a bridge, is what we typically mean by “relationship”.

We aren’t just autonomous individuals floating in a vacuum, trying to avoid colliding with other individuals. Each of us is the fruit of a relationship between two persons (our parents), living for relationships with other selves.

Széchenyi Chain Bridge in Budapest, Hungary

We think of Budapest, Hungary as one city, but it wasn’t officially unified until after this bridge connected the older cities of Buda and Pest.

Child with a robot

Human relationships are increasingly “mediated” by technology (hence the “media” in “social media”), though that’s not necessarily a bad thing. In-person relationships are the ideal, but long-distance relationships through some technological medium can play a good role in life. Art, music, handwriting, print, telecommunications, digital devices, and the internet all mediate relationships in both good and bad ways. The good effects could even extend to specific applications of AI’s split-second pattern identification and data analysis. Potential benefits could include identifying individually tailored treatments for disabilities or monitoring global disease risks.2

Is AI just another form of media?

Developers have presented generative AI, including Large Language Models (LLMs), in a different light. In the process of raising hundreds of billions of dollars, these developers have marketed a new wave of AI products, with the promise that they can replace a wide range of activities, not just highly specialized tasks.3, 4

The seemingly open-ended, adaptable traits of these LLM-powered chatbots appear to replicate the distinctly rational nature of the human person.5 This enables a chatbot to appear, not as an intermediary connecting one person with another, but as a person-substitute to which we are supposed to relate. This feature distinguishes it from most or all other technological developments.

Could a chatbot ever be a person?

Would it be possible to have a relationship with one of these chatbots as if it were a person? A growing portion of the population seems to think so.6 But the flexible traits that make the software seem rational are only appearances. It isn’t capable of freely knowing or loving anything outside itself or even inside itself, because there is no self. It isn’t capable of understanding the words it uses, because it’s not really using words the way a person does.7

All it can do is predict what the next piece of output would be in response to a prompt, based on similar patterns in its massive training data. The user is the one who judges whether the output satisfies the prompt, but the software has no opinion. No matter how sophisticated it gets, no amount of prediction will ever enable it to cross some magical line into personhood. It’s a completely different way of functioning that does not involve freedom.
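The prediction described above can be illustrated with a toy sketch. Real LLMs use neural networks with billions of parameters, not the simple word-pair counting below, but the principle is the same: the software outputs whatever most often followed the prompt’s last words in its training data, with no judgment about truth.

```python
from collections import Counter, defaultdict

# A toy next-word predictor. This is NOT how a real LLM is built,
# only an illustration of the underlying idea: output is chosen
# by pattern frequency in the training data, not by understanding.
training_text = (
    "the bridge connects the city "
    "the bridge connects people "
    "the river divides the city"
)

# Count which word follows each word in the training text.
following = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the
    training data. There is no opinion here, only counting."""
    return following[word].most_common(1)[0][0]

print(predict_next("bridge"))  # "connects" - it followed "bridge" most often
```

Whether “connects” is a *true* or *fitting* continuation is a judgment only the user can make; the counter has none to offer.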

What about books? Let’s say the author is unknown. Reading an anonymously written book doesn’t seem like a personal relationship. Is that similar to LLMs? Not really. Even when we don’t know the author, even if the author is long dead, we can still engage with her creative and intellectual choices. Her writing expressed something unique about what she understood and loved in the world, and we can take that understanding and that love into ourselves. It can resonate with us or repulse us, and some person exists on the other end who is answerable for it, for better or worse.

Robot with facial features

An LLM, by contrast, incorporates tiny components of millions of real works of art or literature to produce something that seems similar on the surface. But none of the authors of those millions of works bears any responsibility for the finished product. Their contributions can’t be traced back to them and aren’t being used in ways they necessarily intended.

In their place is an unthinking, unfeeling, unchoosing impostor that doesn’t care about what it generates or how it’s received. It doesn’t have any attitudes toward the outside world or any affection for the user. It only manufactures outputs that persuade the user to project those features onto it. Is that the kind of loved one you want?

The new A.I. systems are ‘built to be persuasive, not truthful,’ an internal Microsoft document said. ‘This means that outputs can look very realistic but include statements that aren’t true.’

Karen Weise and Cade Metz, “When A.I. Chatbots Hallucinate”, New York Times, May 1, 2023

Chatbots based on large language models (LLMs) are proving to be surprisingly effective at covert persuasion through continuous optimization of personalized interaction.

Pope Leo XIV, “Preserving Human Voices and Faces”

Attempting to form a relationship with a chatbot is doomed to failure. If a normal relationship is a bridge, then this is a bridge to nowhere. AI developers have stated plans to build this bridge through every workplace, living room, and bedroom.

1) Strictly speaking, the terms “Artificial Intelligence” and “Large Language Model” aren’t really accurate, because they distort the meanings of the words “intelligence” and “language”. With that being said, we are using the terms in this article to avoid confusion. The focus of this article will be on those newer applications and not pre-existing software functions like autocorrect. Remember Clippy and Ask Jeeves? This page isn’t about them, because their responses were pre-programmed, not really attempting to simulate intelligence.

2) AI Research Group of the Centre for Digital Culture, Matthew J. Gaudet, Noreen Herzfeld, Paul Scherz, and Jordan J. Wales, “Encountering Artificial Intelligence: Ethical and Anthropological Investigations,” Journal of Moral Theology 1 (Theological Investigations of AI), 2023: 176, 182-183, https://doi.org/10.55476/001c.91230, accessed April 10, 2026.

3) “[Nvidia CEO Jensen Huang] noted that these agents can tackle complex, multi-step tasks, effectively doing ‘50% of the work for 100% of the people,’ turbocharging human productivity.” Brian Caulfield, “‘Every Industry, Every Company, Every Country Must Produce a New Industrial Revolution,’ NVIDIA CEO Says,” NVIDIA Company Blog, November 12, 2024. https://blogs.nvidia.com/blog/ai-summit-japan-huang-son/, accessed April 1, 2026.

A rope bridge whose end is hidden in dark and ominous woods.

4) Specific uses, like image analysis for radiological scans, deserve separate consideration, so they are beyond the scope of this general treatment.

5) For more on how Large Language Models work, see “We Have Made No Progress Toward AGI,” Mind Prison, April 21, 2025. https://www.mindprison.cc/p/no-progress-toward-agi-llm-braindead-unreliable, accessed May 19, 2025.

6) “One in four young adults believe that AI girlfriends and boyfriends have the potential to replace real-life romantic relationships,” Wendy Wang and Michael Toscano, “Artificial Intelligence and Relationships: 1 in 4 Young Adults Believe AI Partners Could Replace Real-life Romance,” Institute for Family Studies, November 14, 2024. https://ifstudies.org/blog/artificial-intelligence-and-relationships-1-in-4-young-adults-believe-ai-partners-could-replace-real-life-romance, accessed February 15, 2026.

7) For more on the significance of language, we highly recommend DC Schindler, “AI as a Very Deepfake,” New Polity, February 10, 2026. https://newpolity.com/blog/ai-as-a-very-deepfake, accessed April 1, 2026.

Resources for a healthy understanding of AI