Some of you may remember that about three years ago I wrote “I and It: Where Are We Heading?” In that article, I shared my concerns about Artificial Intelligence (AI), explaining how Generative Pre-trained Transformer 2 (GPT-2) could generate coherent paragraphs of text without any task-specific training. As expected, AI development has progressed tremendously since then. Deepfakes (a blend of “deep learning” and “fake”: synthetically generated visual and audio content designed to deceive) are so 2020. AI is getting smarter and more ubiquitous. It analyzes people’s behaviors and predicts their next actions. China is reportedly planning to develop a system that predicts crimes and intervenes before they happen. Regardless of whether that is the morally right thing to do or how accurate it will be, changes are coming. Cars can now park themselves. Earlier this year, at Johns Hopkins University, a fully autonomous robot completed a complex soft-tissue surgery for the very first time. Considering that humans tend to be swayed by emotion and circumstance (a recent study shows that pain medications are under-prescribed during night shifts), it wouldn’t be surprising if we come to depend on technology for complex tasks even more in the future.
Everything seemed to be going as predicted until LaMDA came into the picture. LaMDA is Google’s language model. It differs from other language models because it was trained on dialogue rather than plain text. GPT-3 (yep, GPT-2 is already old news!) focuses on generating language text, while LaMDA focuses on generating dialogue: it can understand conversational context and generate conversation in a free-form manner.
Recently, a Google engineer claimed that LaMDA has become sentient, which scared me. I mean… he’s supposed to know how it works. I would have understood if it were my mother’s sentiment; she even has empathy for stuffed animals. But he’s an engineer who helped the model behave like a human. He should have known better. At the same time, I have to admit that his conversation with LaMDA was remarkably natural and humanlike. I’m sure many people will fall for it when they read it.
What now? What if AI starts making up compelling stories? Or has it really become sentient?