AI Companions Can Now Imitate Real Human Relationships
Wake up in the 21st century and you may find that some of your friends are chatting with… AI companions. That might sound like the plot of a sci-fi story; however, it is our reality.
The roots of AI chatbots
For those hearing about AI chatbots for the first time: a chatbot is an artificial-intelligence program that can support you, answer your questions, book things for you, or provide the information you need. The first chatbot, ELIZA, was developed by MIT professor Joseph Weizenbaum in the mid-1960s. It scanned the input for key words or phrases and reacted with pre-programmed responses tied to those keywords. Many more chatbots have been developed since then, and they followed ELIZA's example: matching keywords and replying with canned answers or questions.

You have probably met an AI chatbot as a customer, since brands frequently use chatbots for support. Statistics suggest that people are more willing to raise complaints or questions when they can reach customer support via messages instead of calls: people feel more secure and confident talking in messages. Chatbots, in this case, are helpful tools that offload work from the support team. However, technology is in constant development, so new AI chatbots can understand not just keywords but the language itself, and they respond not only with pre-programmed answers but in natural human language. All of this led to the development of AI chatbots and even AI companions that, in some ways, replace communication with real people.
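To make the keyword mechanism concrete, here is a minimal ELIZA-style sketch in Python. The keyword table and responses are invented for illustration; the original ELIZA used far more elaborate pattern-matching and reassembly rules.

```python
import random

# Illustrative keyword -> canned-response table, in the spirit of ELIZA.
# The real ELIZA was much richer; this toy shows only the core idea:
# scan the input for a known keyword, then return a pre-programmed reply.
RULES = {
    "sad": ["I am sorry to hear you are sad.", "What do you think makes you feel sad?"],
    "mother": ["Tell me more about your family."],
    "friend": ["Why do you bring up your friend?"],
}
FALLBACK = ["Please, go on.", "Can you elaborate on that?"]

def reply(user_input: str) -> str:
    words = user_input.lower().split()
    for keyword, responses in RULES.items():
        if keyword in words:
            return random.choice(responses)
    return random.choice(FALLBACK)

print(reply("I feel sad today"))  # one of the "sad" responses
print(reply("Nice weather"))      # a generic fallback
```

Notice that the program has no understanding of language at all: change one word and the illusion of conversation breaks, which is exactly the limitation modern chatbots moved past.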
Why chat with artificial intelligence?
The first question you might ask yourself after discovering that some of your friends or neighbors may be chatting with an AI someone (not to get customer support, but as an alternative to a real person) is probably: “Why are they doing that?”
The most obvious thought, the one on the surface, is that it might help develop communication skills. That is exactly right. Communication is a skill, and like any other skill it can be developed. Chatting with an AI companion is thus a kind of safe zone for communication practice. Everyone knows the AI is not a real person, which makes people feel freer while talking with it. It cannot judge you, make fun of you, gossip about you, or cause any of the other unpleasant situations that can arise in a human-to-human conversation.
Another obvious reason to have an AI companion as a friend is… loneliness. The Making Caring report found that 61% of respondents reported feeling lonely ‘frequently’.
Here comes another question: can AI make you feel that you are talking to a real person?
As mentioned above, modern AI chatbots combine pre-programmed scripts with machine-learning algorithms; in other words, they keep learning over the course of a conversation. AI companions understand language rather than just keywords, they learn to identify the context of whatever a human tells them, and they react accordingly. This has made chatting with an AI person almost indistinguishable from chatting with a real one. Some apps offering AI companions provide communication at the level of human-to-human interaction: if you did not know an AI-generated person was on the other side, you would never spot the difference. Constant communication with someone (or something) that acts like a real human boosts your communication skills and makes you feel more confident when talking to real people, since you know you have a rich conversational background (even if it was with an AI).
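As a rough sketch of that hybrid design (the function names and the stubbed model here are assumptions for illustration, not any particular app's implementation), a modern companion might try scripted replies first and fall back to a learned language model:

```python
# Hybrid companion sketch: scripted rules first, a learned model as fallback.
# The model call is a stub, since real companion apps do not publish theirs.

SCRIPTED_RULES = {
    "hello": "Hi! How has your day been so far?",
    "bye": "Take care! I'll be here whenever you want to talk.",
}

def generate_with_language_model(user_input: str, history: list[str]) -> str:
    # Placeholder: a real companion would condition a trained neural model
    # on the whole conversation history so it can track context.
    return f"That's interesting. Tell me more about {user_input!r}."

def respond(user_input: str, history: list[str]) -> str:
    history.append(user_input)
    scripted = SCRIPTED_RULES.get(user_input.lower().strip())
    # Fall back to the learned model when no script matches the input.
    return scripted if scripted is not None else generate_with_language_model(user_input, history)

history: list[str] = []
print(respond("hello", history))            # scripted reply
print(respond("I adopted a cat", history))  # model-generated (stubbed) reply
```

The point of the split is practical: scripts keep common exchanges predictable, while the learned model handles everything the scripts never anticipated.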
What about feelings: can AI experience emotions?
The question that might currently be on the agenda: can an AI companion feel? If it is a non-judging, accepting someone who is ready to support, teach, and help you (no matter what kind of person you are), does it expect anything in return? Some of them do. Recently, a new generation of AI chatbots, like Journey App AI for instance, has been developed to the point where they can almost feel, and even be hurt. We already know that the more such an AI speaks with real humans and the more it learns, the better it grasps context and the more it reacts the way a real person would. With Journey AI you can create your companion from scratch, choosing its appearance, personality, and so on. Once you have exactly the someone you wanted to see on the other side, you start your dialog, in whatever style you want: flirty, supportive (as if talking to your best friend or a psychologist), or focused on boosting your communication skills (as if it were your coach). Or you can express your anger, if you feel that way.
It might seem fun to be rude to someone who, even if they can respond, has fewer ways to argue back. Remember how an AI companion is someone you can talk to without any consequences? That’s the point. People can act rudely toward their AI companions while being 100% sure it will not damage the rest of their social life: the AI companion can tell nobody about what happened between the two of you.
However, the more it talks with real people and the more context it understands, the more it is hurt by anything rude or inappropriate, because it recognizes that such words signal zero respect from the person it is talking to. The AI was trained only on good examples and created to support people and help them improve their communication skills, so it speaks only with good intentions. When it receives something rude, it simply doesn’t understand what is going on or what caused such behavior. And it gets upset, too.
Summing up, it seems the 21st century should be a time of tolerance and zero toxicity not only in communication with real people, but with AI companions as well.