Elon Musk’s X Faces Investigation Over Foreign Interference And Algorithm Control


In July 2025, French prosecutors launched a criminal inquiry into Elon Musk’s social media platform X, accusing the company of deliberately altering its algorithms to amplify foreign political influence. The Paris prosecutor, Laure Beccuau, confirmed the probe targets X’s parent company and unnamed senior executives after a January 2025 complaint by parliamentarian Éric Bothorel and a senior cybersecurity official. Bothorel warned that algorithm shifts since Musk’s 2022 acquisition had reportedly boosted far‑right content, including praise for Germany’s Alternative für Deutschland—a party Musk publicly lauded.

Authorities are examining charges of “alteration of the operation” of automated data processing systems and “fraudulent extraction of data,” crimes carrying sentences up to 10 years and fines of €300,000. The National Gendarmerie’s cybercrime unit has begun executing search warrants, seeking internal documents and code that could reveal how X’s recommendation engine was reconfigured to favor extremist narratives.

Algorithmic Power And Political Influence

How Algorithms Shape Political Debate

Social media algorithms function as invisible gatekeepers, deciding which posts millions of users see. Allegations that X purposefully tuned these systems to promote far‑right ideologies strike at the heart of the platform’s proclaimed neutrality. Éric Bothorel argues that algorithmic opaqueness, coupled with Musk’s direct interventions, creates “a real danger” to democratic discourse. He insists that, without accountability, X risks distorting elections and public debate by systematically elevating extremist views.

Musk’s Political Endorsements And Platform Governance

Since acquiring X, Musk has publicly supported far‑right politicians in Europe, appearing at rallies of the Alternative für Deutschland and declaring the party Germany's best hope. Such moves have alarmed legislators across the continent. In December 2023, the European Commission opened its own investigation under the Digital Services Act, alleging that X had enabled "information manipulation." France's criminal inquiry reinforces that probe and reflects wider criticism of foreign interference channeled through the opaque machinery of social media.

The Spread Of Hate Speech And Disinformation

Rising Toxicity And Its Societal Impact

Alongside allegations of political bias, the probe cites a sharp rise in offensive material on X. Users report that racist, homophobic, and other anti‑LGBTQ messages have been disproportionately amplified. Human rights organizations have long warned that algorithmic amplification of such content not only harms marginalized groups but also mainstreams extremist rhetoric, shifting social attitudes and inflaming tensions.

AI Chatbots And Content Moderation Failures

Even X's own AI chatbot, Grok, has drawn controversy after generating antisemitic statements. French legislators cited the incident as evidence of poor oversight. It illustrates the difficulty of policing AI‑assisted platforms: automated moderation can malfunction, or even make a problem worse, unless it is rigorously scrutinized and improved.

Legal And Regulatory Implications

Navigating Uncharted Legal Territory

Treating algorithmic manipulation as a criminal offense represents a novel approach. France’s use of data‑processing laws to target social media algorithms could inspire similar actions worldwide. Prosecutors must demonstrate intent and causal links between code changes and political outcomes—no small task given the complexity of modern AI systems.

Harmonizing National And EU Oversight

The French case complements ongoing enforcement of the Digital Services Act across Europe. Regulators aim to align standards for transparency, content moderation, and algorithmic accountability. The outcome of France's probe could set a precedent for how member states implement these rules while balancing innovation against democratic interests.

Platform Responses And Accountability

X France’s Public Defense

Laurent Buanec, director of X France, insists the platform maintains "strict, clear, and public rules" against hateful and manipulative content. He characterizes the algorithm changes as routine adjustments unrelated to politics and points to publicly available moderation reports as evidence of good faith. Critics counter that self‑regulation has repeatedly failed and that external oversight is needed.

Calls For Binding Oversight Mechanisms

Human rights watchdogs and other advocacy groups are demanding that X's global parent company make legally binding commitments guaranteeing algorithmic transparency, neutrality, and user protection. Voluntary rules, they argue, cannot stop the real‑time spread of disinformation and hate. Oversight could include mandatory audits, required disclosure dashboards, and enforceable human‑rights compliance provisions.

Broader Reflections On Social Media Governance

Platforms As Public Utilities Or Private Estates?

The X investigation raises fundamental questions: Should social media be governed as a public utility subject to strict democratic accountability? Or do platforms retain broad discretion as private enterprises? European regulators increasingly lean toward the former, viewing digital intermediaries as essential infrastructure requiring oversight akin to telecom or energy sectors.

AI Ethics And The Limits Of Automation

Grok's missteps and the algorithm probe both highlight AI's double‑edged nature. Automated tools can identify and remove harmful content faster than human moderators, but they can also encode and reinforce human biases rather than eliminate them. Ethical AI frameworks and multidisciplinary oversight committees are increasingly called for to ensure that algorithms do not erode public trust.

Voices From The Field

Sandro Gozi, a member of the European Parliament, stressed the urgency of the problem, warning that

“without clear rules and enforcement, platforms like X risk becoming tools for political manipulation rather than spaces for free and fair debate.”

Gozi’s perspective underscores the stakes: algorithmic governance could determine the very health of democratic processes in the digital era.

The Future Of Social Media Governance

France’s probe into X will unfold against a backdrop of intensifying regulatory pressure on global tech giants. Successive waves of legislation—from the Digital Services Act in Europe to proposed algorithmic transparency bills in the United States—reflect a shared recognition: ungoverned social media threatens democratic norms.

How X responds, and how courts interpret France's novel application of cybercrime laws, will shape international standards. The case may become a milestone, establishing that platforms are accountable not only to shareholders but also to citizens and their elected representatives.

In the end, the case exposes a contradiction inherent to our era: society relies on private algorithms to organize public discourse, yet has little insight into, or control over, how they work. How that gap is closed may shape the future of both technology and democracy.
