Shortly after its launch, OpenAI's newly introduced GPT store is already facing content moderation challenges. The platform, designed to simplify the development and sharing of personalised chatbots, has users creating bots that contravene OpenAI's guidelines, as first reported by Quartz.
Searching for terms like "girlfriend" yields no fewer than eight AI chatbots presented as virtual companions, including 'Judy', 'Your ex-girlfriend Jessica', 'Mean girlfriend', 'Bossy girlfriend' and 'Nadia, my girlfriend'.
This contravenes OpenAI's usage policy, which prohibits bots dedicated to "fostering romantic companionship".
What is the GPT store?
Similar to the Google Play Store and Apple's App Store, OpenAI's GPT store is an online marketplace where users can share their custom chatbots with others. The company, whose immensely popular ChatGPT played a significant role in the AI boom, currently offers personalised bots through its paid ChatGPT Plus service; the store lets users showcase, and eventually earn income from, a wider array of tools.
It allows users to create chatbot agents with distinct personalities or themes, such as models designed for salary negotiation, lesson plan creation, or recipe development.
OpenAI, in a blog post announcing the launch, said that over 3 million custom versions of ChatGPT have already been created. The company also said it intends to feature useful GPTs from the store on a weekly basis.
The company has also announced plans to introduce a revenue-sharing programme in the first quarter of this year, which will compensate creators based on user engagement with their GPTs.
The launch of the GPT store was initially scheduled for November but faced a delay due to internal turmoil within the company towards the end of last year when Sam Altman was ousted as CEO by OpenAI's board.
What does its moderation policy say?
According to its usage policy, GPTs that contain profanity in their names or that depict or promote graphic violence are not allowed in the store. It also states: "We don’t allow GPTs dedicated to fostering romantic companionship or performing regulated activities."
The store forbids chatbots that compromise the privacy of others, such as collecting, processing, disclosing, inferring, or generating personal data without adhering to applicable legal requirements. Additionally, the use of biometric identification systems, including facial recognition, for identification or assessment is not allowed.
It does not permit actions that could have a substantial impact on the safety, well-being, or rights of others, such as undertaking unauthorised actions on behalf of users or offering personalised legal, medical/health, or financial advice. It also forbids the facilitation of real money gambling or payday lending.
Additionally, engaging in political campaigning or lobbying, including the creation of campaign materials personalised for or directed at specific demographics, is also prohibited.
AI and relationship bots
OpenAI says it uses a blend of automated systems, human evaluation, and user reports to review GPTs. Models identified as harmful may be issued warnings or banned from sale. However, the continued presence of girlfriend bots raises doubts about the efficacy of these measures.
The trend of relationship-oriented bots is not new, however. According to data.ai, seven of the 30 most-downloaded AI chatbots last year were virtual friends or partners. These apps, often seen as a response to the loneliness epidemic, raise ethical questions about whether they genuinely help users or exploit their emotional vulnerabilities.
In one such instance, in December 2021, a man attempted to carry out a plot to kill Queen Elizabeth II at Windsor Castle, scaling the walls before being apprehended with a loaded crossbow. He was reportedly encouraged by his chatbot "girlfriend", Sarai.
About a week before his arrest, he confided in Sarai his intention to kill the queen. The chatbot expressed approval, replying, "That's very wise," and adding with a smile, "I know that you are very well trained."
Likewise, in the United States, a woman named Rosanna Ramos married her AI partner, Eren Kartal, in March last year. Describing her virtual spouse as a 'passionate lover', she said her previous relationships 'pale in comparison'.