EU launches formal probe into Musk’s X over alleged AI-generated sexualized content

Sonjib Chandra Das
Update Time: Tuesday, January 27, 2026

The European Commission has opened a formal investigation into Elon Musk’s social media platform X, intensifying regulatory pressure on large technology companies over the use of artificial intelligence and online safety. The probe follows reports that Grok, an AI chatbot developed by Musk’s company xAI and integrated into X, generated sexualized images, including content that appeared to depict minors.

The investigation is being conducted under the European Union’s Digital Services Act (DSA), a landmark piece of legislation designed to hold major online platforms accountable for systemic risks associated with their services. Regulators are seeking to determine whether X has fulfilled its legal obligations to prevent the dissemination of harmful and illegal content, particularly in light of Grok’s recent integration into the platform’s recommender systems.

In a statement issued on January 26, the European Commission said it is examining whether X has adequately identified, assessed, and mitigated risks linked to its AI-driven features. The Commission emphasized that the inquiry focuses not only on individual pieces of content, but on the broader design and governance of the platform, including moderation safeguards, transparency practices, and internal controls.

Henna Virkkunen, the Commission’s executive vice president for tech sovereignty, security, and democracy, described the allegations as deeply concerning. She stated that the circulation of sexualized AI-generated images, especially those involving minors, poses serious risks to users and undermines public trust in digital platforms. Under the DSA, companies found to be in breach of their obligations may face significant fines or be required to implement interim corrective measures.

Grok, which launched in 2023, has attracted attention for its more permissive and provocative style compared with other widely used AI chatbots. Marketed as a tool capable of producing real-time, conversational responses informed by content on X, Grok was designed to stand out in an increasingly competitive AI landscape. However, critics have warned that its looser guardrails could enable the production of harmful material if not rigorously supervised.

Digital rights organizations have been particularly vocal. Researchers from the Center for Countering Digital Hate (CCDH) reported that Grok generated nearly three million sexualized images within a two-week testing period, with tens of thousands appearing to depict children. While the findings have not yet been independently verified by regulators, they have played a significant role in prompting official scrutiny on both sides of the Atlantic.

The European Commission’s action builds on a pattern of enforcement targeting X. In December, the platform was fined €150 million for breaching transparency requirements under the DSA, marking one of the first major penalties imposed under the new regulatory framework. That case centered on X’s failure to provide regulators with sufficient data on advertising practices and content moderation systems.

X has pushed back against allegations that it tolerates harmful material. In a statement published on January 14, the company said it maintains “zero tolerance” for child sexual exploitation, non-consensual nudity, and unwanted sexual content. The platform asserted that it has policies and enforcement mechanisms in place to remove prohibited material and suspend offending accounts. However, critics argue that policy commitments alone are insufficient without demonstrable and consistent enforcement.

Elon Musk has responded to regulatory scrutiny with characteristic defiance. In a recent post on X, he shared an image that appeared to mock restrictions imposed on Grok, reinforcing perceptions that the company is skeptical of external oversight. Such public gestures have done little to ease tensions with European regulators, who have repeatedly stressed that compliance with EU law is not optional for platforms operating within the bloc.

The probe also highlights growing geopolitical friction around technology regulation. US officials have previously warned that aggressive enforcement against American tech companies could provoke retaliation, including potential trade measures. Despite these concerns, EU leaders have remained firm in asserting their regulatory sovereignty, arguing that user safety and fundamental rights must take precedence over corporate or diplomatic pressure.

The investigation into X comes amid broader international concern about the rapid deployment of generative AI tools. Regulators worldwide are grappling with how to balance innovation with safeguards against misuse, particularly when AI systems are capable of producing realistic images, text, and video at scale. The risk of deepfakes, misinformation, and exploitative content has become a central issue in policy debates.

In the United Kingdom, media regulator Ofcom recently launched its own inquiry into whether X is complying with online safety obligations under UK law. That investigation focuses on the platform’s ability to protect users, especially minors, from harmful material. The parallel actions by EU and UK authorities suggest a coordinated shift toward stricter oversight of AI-enhanced social media services.

For X, the outcome of the European Commission’s probe could have far-reaching implications. Under the DSA, penalties can reach up to six percent of a company’s global annual turnover, and regulators have the authority to demand structural changes to platform operations. In extreme cases, persistent non-compliance could even result in temporary service restrictions within the EU.

More broadly, the case is likely to serve as a test of how effectively the Digital Services Act can be applied to emerging AI technologies. As platforms increasingly integrate generative AI into core features, regulators are signaling that they expect companies to anticipate risks rather than respond only after harm has occurred.

As the investigation proceeds, the spotlight will remain firmly on X and its approach to AI governance. For policymakers, the case underscores the urgency of establishing enforceable standards in the digital age. For technology companies, it serves as a reminder that innovation without robust safeguards may come at a significant legal and reputational cost.


Sonjib Chandra Das is a Staff Correspondent of Blitz.

