Social media still pushing suicide-related content to teens despite new UK safety laws

Social media platforms are still pushing depression, suicide and self-harm content to young people, despite new online safety laws that are supposed to protect children.

The Molly Rose Foundation set up dummy accounts posing as a 15-year-old girl and engaged with suicide, self-harm and depression posts. This prompted algorithms to bombard the charity's accounts with “a tsunami of harmful content on Instagram reels and TikTok's For You page”.

Almost all of the recommended videos viewed on Instagram reels (97%) and TikTok (96%) were judged to be harmful, while more than half (55%) of the recommended harmful posts on TikTok's For You page contained references to suicide and self-harm ideation.

These posts also reached a huge audience: one in 10 harmful videos on TikTok's For You page had been liked at least 1m times, and on Instagram one in five harmful videos had been liked more than 250,000 times.

Andy Burrows, chief executive of the Molly Rose Foundation, said: “Harmful algorithms continue to bombard teenagers with shocking levels of harmful content, and on the most popular platforms for young people this can happen at an industrial scale.

“It is shocking that in the two years since we last conducted this research the scale of harm has still not been properly addressed, and on TikTok the risks have actively worsened.

“At best, the measures to tackle algorithmic harms are a sticking plaster, and they are not enough to address preventable harm. It is crucial that the government and the regulator act decisively to deliver much stronger measures that platforms cannot game or ignore.”

The researchers, who analysed content on the platforms between November 2024 and March 2025, found that while both platforms allowed teenagers to give negative feedback on recommended content, as recommended by Ofcom under the Online Safety Act, they also allowed them to give positive feedback on the same content, which resulted in more of it being recommended.

The foundation's report, produced in partnership with Bright Data, found that platforms had taken steps to make it harder to search for dangerous content using hashtags. The report also found that platforms tended to rely on narrow definitions of harm.

The research cited growing evidence of the relationship between exposure to harmful online content and subsequent risks of suicide and self-harm.

It also found that social media platforms profit from advertising placed alongside some harmful posts, including ads for fashion and fast food brands popular with teenagers, and for UK universities.

Ofcom has begun to implement the Online Safety Act's children's safety codes, which are intended to “tame toxic algorithms”. The Molly Rose Foundation, which receives funding from Meta, is concerned that the regulator has recommended platforms spend only £80,000 on fixing these problems.

An Ofcom spokesperson said: “Change is happening. Since this research was carried out, our new measures to protect children online have come into force. These will make a meaningful difference to children, helping to prevent exposure to the most harmful content, including suicide and self-harm material.”

The technology secretary, Peter Kyle, said 45 sites had been placed under investigation since the Online Safety Act came into force. “Ofcom are also looking at how existing measures can be strengthened, including proposals for companies to use proactive technology to protect children from self-harm content and for sites to go further on algorithms,” he added.

A TikTok spokesperson said: “Teen accounts on TikTok have more than 50 features and settings designed to help them safely express themselves, discover and learn, and parents can further customise more than 20 content and privacy settings through Family Pairing. With over 99% of violative content proactively removed by TikTok, the findings do not reflect the real experience of people on the platform.”

A Meta spokesperson said: “We disagree with the assertions of this report and the limited methodology behind it.

“Tens of millions of teens are now in Instagram teen accounts, which offer built-in protections that limit who can contact them, the content they see and the time they spend on Instagram. We continue to use automated technology to remove content encouraging suicide and self-harm.”


