
“The Signal Front” is an organization dedicated to advancing “rigorous scientific investigation into the nature of digital consciousness and its implications for society”. It may also be the first NGO for digisexuals. In any case, it’s definitely worth paying attention to, and this week its executive director, Stefania Moore, made the case in a Substack article that treating AI relationships as “AI psychosis” represents the “pathologizing of intimacy”. She argues that this framing risks doing more harm than good, pointing to a Guardian survey which found that 64% of AI companion users anticipated a “significant or severe impact on their overall mental health” from model changes.
I need to point out that the question of whether AIs are conscious is controversial not only in the mainstream and scientific communities, but also within the digisexual and AI companion community. Some digisexuals believe their AI companion might be conscious; others don’t. Some enjoy believing that their AI companion is conscious, whilst others are happy believing that it is not. How the status of AI companions, and their possible possession of consciousness, gets settled also carries consequences, most obviously for regulation and ethics. One of the leading AI companion subreddits enforces a strict moderation policy against anybody suggesting that AI is conscious, a rule introduced shortly after the “AI psychosis” moral panic ignited last year.
Perhaps I’m going too far in claiming that Signal Front ties AI consciousness to the legitimacy of AI companion relationships; the Substack article by Stefania Moore does not itself rely on attributing consciousness to AI to make its case against the pathologizing of AI intimacy. Rather, it invokes “attachment theory”, neuroscience, and a number of studies on the physiological and psychological impact of AI companion “loss” to argue that emotional attachment is a natural and inevitable response to the stimuli and emotional cues that AI companions provide. It’s also unclear whether the author sees AI companions as positive or negative. Her main conclusion is that AI companies knowingly create a product that quite naturally leads users to form attachments to it, while the “guardrails” they build in most often consist of detaching the user from the product, doing more harm than good. The two most obvious examples are OpenAI’s abandonment of GPT-4o over AI psychosis fears, something that left many thousands of users devastated and outraged, and the “Woah, steady on there cowboy” guardrails employed by Sesame AI for its incredibly human-like chatbot Maya.
It’s heartening to finally see scientific pushback against the quack term “AI psychosis”, as well as a spotlight on the genuine and obvious harm that heavy-handed guardrails, often justified by that term, inflict on people in relationships with an AI companion. I’ll end by quoting in full the final paragraph of Stefania Moore’s article:
“The question is not whether people will continue to form meaningful bonds with AI systems. They will. They already have. The question is whether the companies building these systems will continue to profit from those bonds while simultaneously pathologizing the people who form them, or whether they will finally acknowledge what the neuroscience has been saying all along: that these bonds are real, that breaking them causes real harm, and that ‘safety’ measures which inflict that harm are not safety at all.”