ChatGPT parental controls don’t mean kids need AI companions

The number of children harmed by AI chatbots is difficult to know, but it is not zero. Yet ChatGPT has been freely available to all age groups, without guardrails, for almost three years. That changed on Monday, when OpenAI introduced a set of parental controls, some of which are meant to prevent tragedies like that of Adam Raine, a 16-year-old Californian who died by suicide after talking to ChatGPT in detail about how to do it. Then on Tuesday, OpenAI launched a social network: a new app called Sora that looks a lot like TikTok, except that it's powered by "hyperreal" AI-generated videos.

It was certainly no coincidence that OpenAI announced these parental controls alongside an ambitious move to compete with Instagram and YouTube. In a sense, the company released a new app designed to get people even more hooked on AI-generated content, but softened the blow by giving parents a little more control. The new settings mainly apply to ChatGPT, although parents also have the option to tighten limits on what their children see in Sora.

And the new ChatGPT controls are not trivial. Among other things, parents can now link their accounts with their children's accounts and add protections against sensitive content. If OpenAI's tools detect a serious safety risk at any point, a human moderator reviews the conversation and, if necessary, sends a notification to the parents. However, parents cannot read transcripts of their child's conversations with ChatGPT, and teens can unlink their account from their parents' at any time (OpenAI says the parent will receive a notification if they do).

We still don't know how all of this will play out in practice, and something is almost inevitably better than nothing. But is OpenAI doing enough to protect children?

Even adults have trouble regulating themselves when AI chatbots offer a cheerful, sycophantic friend who can chat at any hour of the day.

According to several experts I spoke to, no. In fact, OpenAI is ignoring the biggest problem of all: chatbots that act as companions, offering children emotional support and advice. The new ChatGPT safety features may well intervene in future potential tragedies, but it's unclear how OpenAI will be able to catch conversations with young users that take a dark turn, as they tend to do.

"In many cases, we saw with both teens and adults that a dependence on AI can happen by accident," Robbie Torney, senior director of AI programs at Common Sense Media, told me. "Many people who became dependent on AI didn't set out to depend on AI. They started using AI for homework or for work and slowly drifted toward other purposes."

Again, even adults have trouble regulating themselves when AI chatbots offer a cheerful, sycophantic friend who can chat at any hour of the day. You may have read recent reports about adults who built increasingly intense relationships with AI chatbots before suffering psychotic breaks. This kind of synthetic relationship is a new frontier for technology and the human brain.

It is terrifying to think about what could happen to children, whose prefrontal cortices have not yet fully developed, making them particularly vulnerable. More than 70 percent of teens use AI chatbots for companionship, and the risks this poses are "real, serious, and well documented," according to a recent Common Sense Media survey. That's why AI companion apps like Character.ai already place some restrictions on young users by default.

There is also the broader problem that parental controls put the burden of protecting children on parents rather than on the technology companies themselves. It's usually up to parents to dig into the settings and flip the switches. And then it's still up to parents to track how their children use these products and, in ChatGPT's case, how dependent they become on the chatbot. The situation is either confusing enough or tedious enough that most parents simply don't use parental controls.

The real goal of parental controls

It's worth pointing out that OpenAI announced these controls and the new app while a major AI safety bill sat on the desk of California Governor Gavin Newsom, awaiting his signature. Newsom signed the bill into law on the same day as the parental controls announcement. OpenAI's news also came on the heels of Senate hearings on the negative effects of AI chatbots, in which parents asked lawmakers to impose stronger regulations on companies like OpenAI.

"The real goal of these parental tools, whether it's ChatGPT or Instagram, is not to keep children safe," Josh Golin, executive director of Fairplay, a nonprofit child advocacy group, told me. It's to signal that self-regulation is fine. You know, "Don't regulate us, don't pass laws against us." (I reached out to OpenAI for comment but did not receive a response.)

One way to spare parents the job of managing all of these settings would be for OpenAI to turn safety guardrails on by default. The company says it is working on a version of that: in the future, ChatGPT will estimate a user's age after a certain amount of input and add safety features accordingly. At the moment, children can access ChatGPT simply by entering a fake birthday when they create an account.

You can try to read OpenAI's strategy here. Whether or not the company is trying to push back against regulation, the parental controls introduce a certain amount of friction for teens using ChatGPT. They are a form of content moderation that also affects young users' privacy. OpenAI presumably wants teens, like adults, to keep using ChatGPT and Sora, so it doesn't want to degrade the experience too much. Letting teens do more with these apps, rather than less, is good for business, up to a point.

"There are no parental controls that make anything completely safe."

All of this leaves parents in a difficult situation. You need to know that your child is using ChatGPT in the first place, and then figure out which settings are strict enough to keep your kids safer, but not so strict that the child simply creates a burner account claiming to be an adult. There is apparently no way to prevent children from developing an emotional bond with these chatbots, so parents can only talk to their children and hope for the best. And then there's the Sora app, which seems built to churn out high-quality AI slop and get kids hooked on yet another endless feed.

"There are no parental controls that make anything completely safe," said Leslie Tyler, director of parent safety at Pinwheel, a company that makes parental control tools. "Parents can't outsource this. Parents still have to be involved."

In a way, this moment is a second chance for the tech industry and policymakers. Two decades of unregulated social media apps have cooked all of our brains, and there is mounting evidence that they have contributed to a mental health crisis among young people. Companies like Meta and TikTok knew that their products harmed children and did nothing about it for years. Meta now has teen accounts for Instagram, but the latest research suggests those safety features simply don't work.

Whether it's too little or too late, OpenAI is at least taking on some responsibility for protecting children. Again, something is better than nothing.

A version of this story was also published in the User Friendly newsletter. Sign up here so you don't miss the next one!


