Unfortunately, OpenAI has acted quickly to squash any hope that it might be planning to loosen its policy on NSFW generation through its tools. Last Wednesday, the company released a ‘Model Spec sheet’ outlining how its systems ought to behave from an ethical standpoint. One paragraph did mention the generation of NSFW content, stating that the company was exploring whether such uses could be allowed ‘responsibly’. This was quickly and virally reported in the mainstream media as OpenAI considering allowing AI porn. Alas, an email (apparently to Quartz magazine) from an OpenAI spokesperson on Thursday appears to have ruled this out.
“We have no intention to create AI-generated pornography,” an OpenAI spokesperson said in an email Thursday. “We have strong safeguards in our products to prevent deepfakes, which are unacceptable, and we prioritize protecting children. We also believe in the importance of carefully exploring conversations about sexuality in age-appropriate contexts.”
So, unsurprisingly perhaps, OpenAI has ruled out any future use of Sora as an AI porn video generator.
Or has it? According to Decrypt.co, OpenAI product lead Joanna Jang told them that it all depends on how you define porn.
Nonetheless, the idea that OpenAI was contemplating lifting restrictions on the creation of pornography raised eyebrows. In retrospect, an OpenAI product lead tells Decrypt, the company could have better explained what it was “exploring.”
“I think what I would love to do—based on the feedback and the reaction, in the next version we share—is be more precise about what some people’s definition of NSFW content and the taxonomy here,” OpenAI product lead for model behavior Joanna Jang said, noting that NSFW could mean anything from written profanity to AI-generated deepfake images.
As for what specific material would be allowed under a more permissive stance, Jang told NPR that it “depends on your definition of porn.”
However, she goes on to state clearly that OpenAI is ‘definitely not in it for creating deepfakes or AI porn’, and explains the reasoning behind the Model Spec sheet that caused all the fuss.
As Jang explained, OpenAI published the spec to lay out ideal model behaviors, focusing on legal compliance and avoiding NSFW content while embracing transparency.
“We want to bring more nuance to this discussion because right now—before the model spec—it was, ‘Should AI create porn or not?’” Jang said. “And I’m hoping that through the model spec, we can actually have these conversations.”
“Again, this is why I wish I had actually put down [a framework] so that we could have kickstarted this conversation even a day earlier,” she added.
So it still seems very vague to me. What I think we can say is that AI porn is definitely off the table, but that ChatGPT may be allowed to use more profanity in the future and, I wouldn’t be surprised, to generate ‘female erotica’. But DON’T MENTION PORN!
Featured image generated with PornJoy.ai
Update – Elizabeth Nolan Brown has published a very good piece at Reason magazine on this.
The answer there, as with here, is for regulators to get out of the way and allow all sorts of different AI tools to proliferate. Then people with different sensibilities can use the one(s) that best fit their values and needs.
But if history—and responses from politicians so far—are any indication, we seem destined to get one-size-fits-all proposals instead. When all you have is moral panic, everything looks like an excuse for a Very Important Congressional Hearing followed by tech-throttling legislation.
As usual, this panic is likely to do the most damage to sex workers, who have already been using AI technology.