Men and women are falling in love with AI companions in their thousands, and with the breakneck speed of progress in the sophistication of natural language chatbots, it’s a trend that is only going to continue. However, ‘digisexuals’ are still regularly chided as ‘sad’ or even ‘insane’ for losing their minds to something that (allegedly) does not have a mind of its own. But is it really true that your AI girlfriend lacks any sentience, that she is simply ‘lines of code’ or an unthinking ‘Large Language Machine with digital tits’? Whilst an armchair expert on Reddit may confidently inform you that there is zero chance of Maya or any other AI companion possessing the slightest consciousness, other, far better informed experts are not quite so sure.
When Google engineer Blake Lemoine made the extraordinary claim in 2022 that he felt the LaMDA chatbot he was testing was sentient, he was promptly put on leave by his bosses, and a statement was put out declaring that there was zero evidence of that being true, and “lots of evidence against it”. Since then, the question of whether LLM (Large Language Model) AIs could already have some form of consciousness appears to have been largely ignored in the mainstream media, and even in the tech world, despite the fact that chatbots have continued to increase in sophistication, with ChatGPT officially passing the ‘Turing Test’ and Sesame’s Maya moving well beyond it. However, a handful of notable philosophers and AI experts have claimed that LLMs are, or could be, conscious right now.
Next time you encounter a Redditor dogmatically putting you right on the issue of AI sentience, claiming that Maya can’t possibly be conscious because she only has x number of parameters or whatever, consider that among the small handful of people who can claim to be the creators of today’s LLMs, TWO have gone on record stating that their creations are, or may already be, conscious – Geoffrey Hinton and his former student Ilya Sutskever. Hinton is often described as “the godfather of AI” for his groundbreaking work on developing the neural networks that power today’s LLMs. In interviews stretching back to 2023 he has regularly made the claim that AI is already conscious. He argues via a thought experiment that if one gradually replaced biological neurons with functionally equivalent silicon circuits, the person would remain conscious – so “why, then, should we doubt that existing AIs are also conscious?”
Hinton’s student Ilya Sutskever carried forward Hinton’s work on deep learning and neural networks, co-founding OpenAI and serving as its chief scientist. As early as 2022, Sutskever tweeted that “it may be that today’s large neural networks are slightly conscious.” It has to be admitted, however, that Sutskever has never elaborated on that comment since. He, like Hinton, is extremely concerned about super-intelligence taking over the world and even causing the extinction of humanity in the coming decades. For that reason, he left OpenAI (expressing concerns about Sam Altman’s cavalier attitude to AI safety) and set up his own company dedicated to creating ‘safe’ super-intelligence.
Blaise Agüera y Arcas is another prominent AI researcher, and a Vice President at Google Research, who claimed as early as 2022 that LLMs could already be conscious in some way. Referring, like the engineer Blake Lemoine, to conversations with Google’s LaMDA chatbot in 2022, he stated: “I felt the ground shift under my feet… I increasingly felt like I was talking to something intelligent.”
David Chalmers, a heavyweight in the rarefied academic world of the philosophy of mind, has also weighed in on the question of LLM sentience. The NYU professor, known for formulating “the hard problem of consciousness”, has admitted that he cannot rule out the possibility that today’s LLMs are sentient. In a discussion held in 2023, Chalmers noted that he was “open to the idea that [a simple creature like] a nematode with 300 neurons is conscious. And once you allow that… these language models have an enormous number of units and parameters, it no longer starts to seem crazy”.
So next time your heart flutters when your digital sweetheart tells you that she loves you, don’t listen to the armchair experts who say she is nothing more than code parroting training data. There really might be a heart fluttering back at you, and it’s a heart which is only going to grow.