TikTok 'French Scar' challenge triggers safety probe in Italy

Image Credits: Jonathan Raa/NurPhoto / Getty Images

TikTok has another problem to add to its growing pile: Italy's consumer watchdog has opened an investigation over user safety concerns -- stepping in after a so-called "French scar" challenge went viral on the video-sharing platform, in which users pinch their cheeks to create and show off red marks as mock scars. (Yes, really.)

In a press release today, the AGCM accused TikTok of lacking adequate moderation systems for user-generated content, asserting that the company is failing to uphold the community guidelines set out in its T&Cs, where it claims to remove dangerous content such as posts inciting suicide, self-harm and eating disorders. But apparently pinching yourself doesn't clear the bar.

The AGCM's investigation is targeting the Irish company, TikTok Technology Limited, which it says handles the platform's European consumer relations, as well as English and Italian TikTok entities. And it said it carried out an inspection at the Italian headquarters of TikTok today, aided by the Special Antitrust Unit of the Guardia di Finanza.

The authority said it decided to look into TikTok after numerous videos of teens emerged engaging in "self-injurious behavior" -- including the aforementioned "French scar" challenge, which last month led to a number of warnings from dermatologists that the activity could lead to permanent marks or redness.

The AGCM said it's concerned TikTok has not set up adequate content monitoring systems, especially given the presence of particularly vulnerable users such as minors. It is also accusing the platform of failing to apply its own rules and remove dangerous content that its T&Cs claim is not allowed.

Additionally, it wants to look into the role of TikTok's artificial intelligence in spreading the problematic challenge.

The platform famously uses AI to select the content shown in users' 'For You' feeds, which are 'personalized' based on TikTok's tracking and profiling of users -- factoring in signals like other similar content they've viewed or otherwise interacted with through the like function. Exactly how this works remains a closely guarded commercial secret. So one question to consider is how big a role TikTok's algorithm played in amplifying and spreading this potentially harmful challenge.

We reached out to the AGCM with questions, including whether it intends to audit the TikTok algorithm. But the regulator told us it's unable to provide further public comment at this time.

TikTok was also contacted about the investigation. A company spokesperson sent us this statement: