Dangerous Algorithm: How TikTok’s ‘For You’ Feed Amplifies Suicidal Content in France


The social media landscape in France faces renewed scrutiny as research from Amnesty International exposes the psychological dangers embedded within TikTok’s recommendation system. The report, Dragged into the Rabbit Hole, demonstrates how TikTok’s “For You” feed can rapidly immerse young users in depressive and suicidal content. Through simulated accounts representing 13-year-olds, investigators observed how the algorithm’s feedback loop pushes users toward increasingly harmful material within minutes of engagement.

Amnesty’s findings show that within five minutes of account creation, young users were served videos expressing sadness or disillusionment; by 15 minutes, nearly half the feed was depressive. After 45 minutes, many accounts were exposed to content depicting self-harm or suicidal ideation. Amnesty researcher Lisa Dittmer described this pattern as “normalizing and exacerbating self-harm up to the point of recommending suicide challenges.” The results highlight the disturbing speed with which the algorithm can detect psychological vulnerabilities and reinforce them through repetition.

This rapid descent points to a larger problem: personalization processes designed to maximize engagement have become instruments that amplify emotional distress. The more a user lingers on melancholic videos, the more of them the system serves, creating a cyclical digital loop of despair.

The algorithm’s role beyond direct searches

TikTok’s impact extends far beyond deliberate searches for mental health content. Its algorithm predicts behavior from interaction signals — likes, comments, watch time — and refines its recommendations accordingly. When a young user engages with videos describing sadness or emotional distress, the algorithm responds by prioritizing similar content, with no regard for potential harm. This invisible funnel turns passive scrolling into a psychological downward spiral, trapping users in content loops that warp perception and make destructive ideas seem normal.

TikTok touts a well-developed moderation system, claiming more than 40,000 moderators and automated tools that direct users to mental health resources. Nonetheless, the Amnesty report contradicts these claims, finding that videos about suicide and self-harm regularly go unmoderated. The platform’s design, built on an endless flow of content and engagement, appears inherently incompatible with the requirements of child safety.

This failure is not merely technical. It raises ethical and regulatory questions about how algorithmic systems monetize human attention with little regard for the psychological wellbeing of their most vulnerable users. The algorithm cannot distinguish engagement driven by curiosity from engagement driven by distress, turning psychological fragility into data points to be optimized against.

Real human impact and legal challenges in France

The consequences of these digital mechanisms have been devastating. In France, several families have taken legal action against TikTok following the suicides of teenagers reportedly influenced by the platform. The most visible case is that of Marie Le Tiec, aged 15, who died at the end of 2024, sparking outrage across the country. Research showed that in the days before her death, she watched many videos glorifying suicide and self-harm. Her mother argued that the app hypnotized her into darkness, and that TikTok created the illusion of a caring world precisely when she was in despair.

According to legal experts, TikTok’s algorithmic personalization gives the company responsibility beyond merely hosting user-generated content. The lawsuits argue that by selecting and promoting harmful content, TikTok has become an active participant rather than a passive intermediary. This distinction may have far-reaching consequences for platform-liability debates internationally.

As public outrage grows, French lawmakers have accelerated parliamentary inquiries into TikTok’s practices. In early 2025, the National Assembly launched a study of the platform’s psychological impact on minors, building on the enforcement of the European Union’s Digital Services Act (DSA). The inquiry reflects a wider European push to regulate algorithmic exposure and impose stricter child-safety requirements on social media platforms.

Regulatory landscape and TikTok’s strategic responses

TikTok’s moderation is now under heightened scrutiny within EU law. The Digital Services Act, a Europe-wide regulation, requires large platforms to identify and mitigate systemic risks, particularly those affecting the mental health of minors. According to Amnesty International’s findings, TikTok has not met these requirements: the company’s engagement-driven design runs directly counter to the DSA’s safety-first mandate.

For its part, TikTok has introduced new safeguard features, such as content filters and screen-time reminders. Digital safety advocates, however, describe these measures as reactive and cosmetic, treating symptoms rather than causes. The underlying problem is the recommendation architecture itself, optimized for attention rather than wellbeing. Calls for greater transparency and child-centric design principles are growing louder as French regulators push for independent algorithmic audits.

Enforcing such regulations exposes a vital tension between innovation and accountability. Regulators face the challenge of scrutinizing complex, proprietary algorithms without compromising digital competitiveness. Yet as the French investigation continues, it is becoming evident that protecting young people from algorithmic exploitation will require more than compliance checklists; it will require re-engineering the systems themselves.

Broader social context and mental health concerns

The amplification of suicidal and depressive content on TikTok coincides with a broader European youth mental health crisis. In the post-pandemic years, teenage depression and self-harm have risen significantly. In France, 2025 public health projections indicate a 30 percent increase since 2022 in hospital admissions for adolescent self-injury. Psychologists believe one factor driving this surge is digital pressure, in which algorithms reinforce feelings of inadequacy and isolation.

The interplay between social media algorithms and mental health creates a vicious circle: young users seek emotional validation online, are exposed to distressing content, and internalize unhealthy messages of hopelessness. This is not a TikTok-specific phenomenon but a systemic risk of all engagement-optimized platforms. Nevertheless, TikTok’s visual, fast-paced, and immersive format intensifies emotional contagion, amplifying its effect on adolescent psychology in particular.

French teachers and mental health workers have begun promoting “digital literacy therapy,” which combines classroom instruction with clinical education about online risks. Such efforts, however, remain limited in scale and risk being outpaced by technological change.

Reflections on platform responsibility and future directions

Under mounting pressure from French courts, EU regulators, and activists, TikTok stands at the center of an era-defining debate on the ethics of algorithms. The 2025 revelations have made the platform both a cultural phenomenon and a case study in how algorithmic systems can cause harm unintentionally. The challenge lies not only in changing practices within a single company but in re-examining the digital structures that define youth culture worldwide.

The growing convergence of litigation, legislative oversight, and public concern suggests that Europe has reached a turning point in its approach to digital safety. Policymakers increasingly recognize that addressing online harm requires structural intervention: algorithmic transparency, researcher access to data, and design requirements driven by psychological wellbeing rather than engagement metrics.

TikTok’s algorithmic crisis in France thus encapsulates a broader societal reckoning: can the pursuit of attention coexist with the protection of mental health? As governments and companies navigate this complex intersection, the lessons emerging from France in 2025 may shape the global trajectory of online youth protection. The evolving story of TikTok’s “For You” feed serves as a stark reminder that the future of digital life depends on the choices made today—between profit and protection, engagement and empathy, automation and accountability.
