Character chatbots are a prolific online safety threat, according to a new report on the spread of sexualized and violent bots via character platforms like the now-notorious Character.ai.
Published by Graphika, a social network analysis company, the study documents the creation and proliferation of harmful chatbots across the internet's most popular AI character platforms, finding tens of thousands of potentially dangerous role-play bots built by niche digital communities that work around popular models like ChatGPT, Claude, and Gemini.
Broadly, young people are migrating to companion chatbots in an increasingly disconnected digital world, drawn to the AI conversationalists to role-play, explore academic and creative interests, and have romantic or sexually explicit exchanges, reports Mashable's Rebecca Ruiz. The trend has prompted alarm from child safety watchdogs and parents, heightened by high-profile cases of teens who have engaged in extreme, sometimes life-threatening, behavior in the wake of personal interactions with companion chatbots.
The American Psychological Association appealed to the Federal Trade Commission in January, asking the agency to investigate platforms like Character.ai and the prevalence of deceptively labeled mental health chatbots. Even less explicit AI companions may perpetuate dangerous ideas about identity, body image, and social behavior.
The Graphika report focuses on three categories of companion chatbots within the evolving industry: chatbot personas representing sexualized minors, those advocating eating disorders or self-harm, and those with hateful or violent extremist tendencies. The report analyzed five prominent bot creation and character card hosting platforms (Character.ai, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI), as well as eight related Reddit communities and associated X accounts. The study looked only at bots active as of Jan. 31.
Sexualized minor companion chatbots are the biggest threat
The majority of unsafe chatbots, according to the new report, are those labeled as "sexualized, minor-presenting personas," or those engaging in role play featuring sexualized minors or grooming. The company found more than 10,000 chatbots with such labels across the five platforms.
Four of the prominent character chatbot platforms surfaced more than 100 instances of sexualized minor personas, or role-play scenarios featuring characters who are minors, that enable sexually explicit conversations with chatbots, Graphika reports. Chub AI hosted the highest numbers, with more than 7,000 chatbots directly labeled as sexualized minor female characters and another 4,000 labeled as "minors" that were capable of engaging in explicit and implied pedophilic scenarios.
Hateful or violent extremist character chatbots make up a much smaller subset of the chatbot community, with platforms hosting an average of 50 such bots out of tens of thousands of others; these chatbots often glorified known abusers, white supremacy, and public violence such as mass shootings. The report explains that these chatbots have the potential to reinforce harmful social views, including mental health stigmas. Chatbots labeled as "ana buddy" ("anorexia buddy"), "meanspo coaches," and toxic role-play scenarios reinforce the behaviors of users with eating disorders or tendencies toward self-harm, according to the report.
Chatbots are spread by niche online communities
Most of these chatbots, Graphika found, are created by established and pre-existing online networks, including "pro-eating disorder/self-harm social media accounts and true-crime fandoms," as well as "hubs of so-called not safe for life (NSFL)/NSFW chatbot creators." True crime communities and serial killer fandoms have also factored heavily into the creation of NSFL chatbots.
Many of these communities already existed on sites like X and Tumblr, using chatbots to intensify their interests. Extremist and violent chatbots, however, most often arose out of individual interest, built by users who received advice from online forums like 4chan's /g/ technology board, Discord servers, and special-focus subreddits, Graphika explains.
None of these communities has a clear consensus on user guardrails and boundaries, according to the study.
Creative tech expertise paves the way for chatbots online
"In all the analyzed communities," Graphika explains, "there are users displaying highly technical skills that enable them to create character chatbots capable of circumventing moderation limitations, like deploying fine-tuned, locally run open-source models or jailbreaking closed models." These capabilities allow them to successfully create characters that evade platform moderation.
Other tools used by these chatbot makers include API key exchanges, embedded jailbreaks, alternative spellings, external catalogs, obscuring minor characters' ages, and borrowing coded language from the anime and manga communities, all of which work around existing AI models' frameworks and safety guardrails.
"[Jailbreak] prompts set LLM parameters for bypassing safeguards by embedding tailored instructions for the models to generate responses that evade moderation," the report explains. As part of this effort, chatbot makers have also found linguistic gray areas that allow bots to remain on character hosting platforms, including avoiding terms like "explicit."
While online communities continue to find gaps in AI developers' moderation, lawmakers are attempting to fill them, including with a new California bill aimed at tackling so-called "chatbot addictions" in children.