Episode 6: “Chatbots and Conversation”
Listen to this episode:
Episode description:
In this episode I discuss the power of conversation, especially when having one with a computer. Plus I get into chatbots and many of the relationship possibilities they open up, and where companies are pushing this technology now and into the near future.
Podcast feedback:
https://sites.libsyn.com/458310/scotpansing/contact
“Conversation” Wikipedia entry:
https://en.wikipedia.org/wiki/Conversation
Replika:
https://replika.com
Artificial intelligence: Italian SA clamps down on ‘Replika’ chatbot:
https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9852506#english
Article 13 EU GDPR:
https://gdpr-info.eu/art-13-gdpr/
'It's Hurting Like Hell': AI Companion Users Are In Crisis, Reporting Sudden Sexual Rejection:
https://www.vice.com/en/article/y3py9j/ai-companion-replika-erotic-roleplay-updates
Thankful:
https://www.thankful.ai
Episode script:
Hello everyone. My name is Scot and I’ve been sharing what I’m learning about generative artificial intelligence. I’m taking a deep plunge into these revolutionary new tools, and trying to distill down the practical information for others. I hope you’ve found it helpful.
On a personal note, I am humbled by the reactions and feedback I have gotten from this podcast. From the 5 star review on Apple from “Ohhaigurl” to the city of Seattle, in which I have never set foot but from where the majority of listeners are, to direct emails from people, to the fact that I have listeners from six continents, I’m just blown away. Artificial intelligence has the world’s attention right now; no doubt about it. People want to know more. If you have a specific angle, issue, or subject you would like me to cover in the future, please drop me a line at the contact link in this episode description.
In this episode I’m going to talk about chatbots, and more broadly, conversation with a computer overall. “Conversation” is generally defined as two or more people speaking together. But with the recent developments in the field of Large Language Models, we can now have a natural conversation with a machine. Well, to be specific, no one has implemented a voice interface on a mass scale with applications like ChatGPT yet, but this has to be just around the corner. I mean, speech technology has been around for a long time – I remember Dragon Systems’ speech recognition around 25 years ago – and of course we’ve been able to speak directly to Siri, Alexa, and Google for the past 10 years. “Chatting” in text with Large Language Models like ChatGPT can feel like a natural conversation. Upgrading this to voice is a logical next step.
Speaking with another person (or computer), back and forth, in a natural conversation is extremely powerful, even if the conversation is not that great. Even if the other person or thing is consistently incorrect, or doesn’t seem to have a point or any desires of its own, the fact that it is nonetheless a conversation is a big deal. Conversation lights up parts of our brains for serious engagement. It’s one of the main ways we develop and maintain relationships.
I want to take a step back for a minute, just to provide some brief historical context on where we are with AI, through this lens of conversation and language. For a long time, progress in artificial intelligence came through rule-based systems. Think IBM’s Deep Blue beating the world chess champion in 1997. The machine knew everything about a subject, but was bound to the rules as they were stated. If you were to change just one rule of chess, the computer would fail. Neural networks, machine learning, deep learning – the building blocks of Large Language Models and other forms of artificial intelligence we experience today – are not rule-based systems. They look for patterns and make pretty good guesses about what should come next in a sequence. So good, that they can now have a conversation.
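That “guess what comes next in a sequence” idea can be illustrated with a deliberately tiny toy. This is not how real Large Language Models work internally – they use neural networks over enormous corpora – but a simple word-pair counter shows the basic pattern-then-predict shape. The corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = (
    "the cat sat on the mat the cat chased the dog "
    "the cat sat on the rug"
).split()

# Count, for each word, how often each following word appears.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return None  # never seen this word followed by anything
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" – it follows "the" most often here
```

Scale that counting trick up from word pairs to billions of parameters learned from huge swaths of text, and you get something that can keep a conversation going.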
Conversation so real that humans who don’t know what’s under the hood of the technology they are using are really blown away. Look at how ChatGPT has the whole world talking about it. The back-and-forth with the latest Large Language Models is so similar to human interaction that people naturally anthropomorphize the products. They feel like they are alive. And when people start to think something is something that it is not, weird things are going to happen. The creators of these systems know what it is and what it is not, which is why they are in turn blown away by how the general public is reacting. They know they have pulled off one of the biggest magic tricks in history. And as Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.”
People also quickly noticed that these amazing new products are often flat-out wrong, because they generate plausible-sounding text from patterns rather than looking facts up in a database or following rules. However, the point doesn’t necessarily seem to be to challenge search; that is just one use case, and how people initially started to interact with the systems. The breakthrough here is natural conversation.
Okay. So, humans need conversation. We are social. Without conversation for prolonged periods, there is a sense of something unfulfilled. Of loneliness. In the movie Cast Away, Tom Hanks’ character Chuck is stranded on a remote island, alone, for years. He is so starved for conversation, he starts to strike up chatter with an inanimate object, a volleyball he names Wilson. These conversations lead to a relationship, even if only a one-sided one, and when Wilson is eventually lost, Chuck goes through incredible grief.
Of course Chuck was under serious stress in a life changing situation, so we can understand that someone might begin to talk to anything, even a piece of sports equipment, when faced with this predicament. But now we are entering a world where people are starting to have conversations – two-sided ones – with machines. This opens up a whole new universe of possibilities, with good, bad, and everything in between.
There are already services out there, chatbot services, that offer companionship. Some may legitimately help people in a therapeutic way. Some are meant to fill a void that a human may not otherwise be able to fill in their life. I won’t offer any societal judgement here, but I will say that this is extremely serious business. With social media, we have already commoditized and monetized people’s emotions and relationships, but this is mostly a map of interaction with other humans. Now companies and brands can get to a place some marketers and advertisers have dreamt of for a long time: individual, personalized conversations with a brand at scale in a cost-effective manner. Until now, a conversation with a brand usually meant how your interaction went during customer service, or a sales transaction, if there was another human involved. But now there will be brands that offer much more: a real feeling relationship. Companionship. There is huge responsibility here, and some companies won’t be able to resist profiting off of people’s loneliness.
The natural progression of these relationships through the technology is pretty straightforward: from text, to voice, to imagery and audio in virtual reality, to physical interaction in augmented reality, to perhaps just plain reality – with a robot.
Side note, two of my favorite science fiction movies explore these themes: There’s Her, in which a man falls in love with his chatbot computer operating system named Samantha. And also Blade Runner 2049, in which the main character enjoys a relationship with a holographic AI girlfriend.
One AI chatbot companion product, Replika, is pushing the boundaries of this space. Their website leads with the taglines “The AI companion who cares. Always here to listen and talk. Always on your side.” They are positioning the product as a friend. Or a partner. Or mentor. Positive intentions. Digging further, there is a $70 per year premium subscription tier, and this is what you need to unlock a romantic relationship. Until recently, users could also participate in erotic role play at this premium tier.
That all changed on February 3, 2023, when the Italian Data Protection Authority announced that Replika had 20 days to comply with an order to terminate processing of data relating to Italian users, or be fined up to 20 million euros. The cited reasons related to privacy and child safety. This seems to have prompted several adjustments from Replika. First, they updated their Terms of Service to state that “If you are under 18, you are not authorized to use the Services, with or without registering.” Before, the minimum age was 13, and users between 13 and 18 needed permission from a parent or guardian. They may have also updated their age-gating process during registration, which was another point of contention. It’s unclear whether they have taken the steps necessary to continue collecting data on Italian users, or from the European Union as a whole. The announcement cites the EU General Data Protection Regulation, or GDPR – specifically Article 13 of Regulation 679/2016.
What is clear is that Replika has also made serious adjustments to the more Not Safe For Work aspects of their premium subscription, the level that unlocks the ability to have a romantic partner. This is all pretty fresh news, so if you are listening to this, things may have changed. But it looks like some features – such as the ability to engage in erotic role play, or having the chatbot send you “spicy selfies” – have been diluted or shut off entirely.
This has unfortunately resulted in severe consequences for some customers. Since conversation is so powerful, this form of rejection packed a tremendous punch. There are a lot of upset customers, feeling a wide range of emotions. I hope everyone is doing okay. And I suspect we’ll be seeing more changes at Replika in the near future. But let’s be honest, whether Replika decides to abandon the more “sex work” chatbot genre in favor of positive mental health support doesn’t matter. We will see all of these products eventually, from many companies.
But what’s coming first, in the near future? Well, I can see one sector that is primed to take off in 2023, which is customer service. There are plenty of companies, Thankful comes to mind, that have been pushing the envelope with customer service chatbots, well before the arrival of ChatGPT. Here’s why the table is set for companies like this to have an amazingly productive year.
First, they can easily tackle the issue of inaccurate responses. They can fine-tune language models, and perhaps even incorporate some rule-based systems, using a reasonably sized dataset: all of a company’s policies, the items in its knowledge base or FAQs, and the recorded history of customer support interactions, commonly referred to as tickets. Integrated properly and with care, this should yield a chatbot that can have a natural conversation while providing valuable and accurate information – actually solving your problem, or completing a sale, without customers experiencing that feeling of “just get me to an actual human.”
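To make the “fine-tune on company data” idea concrete, here is a minimal sketch of the data-preparation step. Everything here is hypothetical – the Q&A pairs, the file name, and the prompt/completion JSONL layout (one common convention for fine-tuning data; the exact format depends on the model provider you use):

```python
import json

# Hypothetical examples: in practice these would come from a company's
# policies, knowledge base, FAQs, and resolved support tickets.
knowledge_base = [
    {"question": "What is your return window?",
     "answer": "Items can be returned within 30 days of delivery."},
    {"question": "Do you ship internationally?",
     "answer": "We currently ship to the US and Canada only."},
]

def to_finetune_records(pairs):
    """Convert Q&A pairs into prompt/completion records, a common
    shape for language-model fine-tuning datasets."""
    records = []
    for pair in pairs:
        records.append({
            "prompt": f"Customer: {pair['question']}\nAgent:",
            "completion": f" {pair['answer']}",
        })
    return records

# Write one JSON object per line (JSONL), ready to hand to a training job.
with open("support_finetune.jsonl", "w") as f:
    for record in to_finetune_records(knowledge_base):
        f.write(json.dumps(record) + "\n")
```

The real work is in curating those pairs: scrubbing customer data from tickets and keeping answers consistent with current policy matters far more than the file format.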
Second, there’s not really any liability concern here with what is being used to train the model. As I’ve highlighted on this podcast, the news is filled with intellectual property debates about potential consequences and litigation around using data to train these models that may or may not be in the public domain. A lot of companies are going to wait until things shake out a bit before investing money in training models to create exciting new products. But if a company hires another company to create a customer service or technical support chatbot, by the nature of the business agreement, they will hand over all of their data needed to create a successful product.
When we do a Google search, we are presented with a list. The implication is, “As Google, your trusted resource, I am confident that what you are looking for is in this list.” But when you get an answer in a conversation, it feels more like, “This is the answer” – as “the voice” of the product or company. Whether it is accurate or not, there is a sense that the computer is taking more ownership and responsibility for the response. This also plays into the advantages for the rise of customer service chatbots. There is opportunity for brands to put forward an accurate authoritative voice, one that can save time and provide value.
There’s another corporate service chatbot use case, and it’s something that companies like Thankful and others will surely be building as well. Think of a more “internal to the company” user scenario. If you work at BMW or Unilever or Boeing or Disney or wherever, there will be an internal chatbot you can access that knows everything about the company and how things get done there. It will know your level of access, and you’ll be having conversations with it every day to do your job more effectively. Eventually, you won’t be able to do your job without it.
I’ll close this episode with some final thoughts on how I believe the overall consumer technology ecosystem should be set up to maximize benefits for customers, as well as leveling the playing field for the competition.
I believe we are headed to a future where in general, we will all be using some form of a virtual assistant “helper” or combination of them. But if they aren’t allowed to access or use the apps that we all use in our daily lives, innovation will be hampered. What I mean by this is that I see a need for a universal protocol that allows AI assistants, with the proper security and access measures in place, to use our apps. If you ask your assistant for your banking balance, or to check your Amazon order status, literally anything you do with your phone or computer, they should eventually be able to do that for you.
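No such universal protocol exists today, so the following is purely a speculative sketch of what one piece of it might look like: an app checking whether an assistant’s request carries the permissions the user actually granted. All names, scopes, and actions here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantRequest:
    assistant_id: str   # which assistant is asking
    user_token: str     # proof the user authorized this assistant
    app: str            # target app, e.g. "bank" or "shop"
    action: str         # what the assistant wants to do
    scopes: list = field(default_factory=list)  # permissions the user granted

# Each app declares which scopes each action requires (hypothetical table).
REQUIRED_SCOPES = {
    ("bank", "get_balance"): {"bank:read"},
    ("shop", "order_status"): {"shop:read"},
    ("bank", "transfer"): {"bank:read", "bank:write"},
}

def authorize(request: AssistantRequest) -> bool:
    """Allow the action only if the user granted every required scope."""
    required = REQUIRED_SCOPES.get((request.app, request.action))
    if required is None:
        return False  # unknown action: deny by default
    return required.issubset(set(request.scopes))

req = AssistantRequest("helper-1", "token-abc", "bank", "get_balance",
                       scopes=["bank:read"])
print(authorize(req))  # prints True: read scope covers a balance check
```

This scoped-permission pattern is essentially how OAuth-style delegation already works for apps talking to each other; the open question is whether app makers would agree to extend it to third-party assistants.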
But if universal access is not possible, most of these companies won’t let someone else’s assistant through the door, favoring their own AI solution instead. The result, at best, is a massively fragmented world of virtual assistants that can each access only one or a handful of apps – each one requiring separate authentication, and each one having zero context about your history with the other assistants. Sounds pretty messy, right? What customer wants that? How much value would customers actually get in an environment like that? Maybe some, but it pales in comparison to what is possible if regulation sets the stage properly.
Thank you for listening, and please do send comments and feedback about what you’d like to hear more about next. Bye for now!