Impact Newswire

Spain is the latest EU Country to Probe X, TikTok, Meta over AI-Generated Child Sexual Abuse Content

Spain has ordered prosecutors to investigate social media giants X, Meta and TikTok over the alleged creation and spread of AI-generated child sexual abuse materials on their platforms.

According to Reuters, the move, announced by Prime Minister Pedro Sánchez, is part of a broader pattern of regulatory pushback against the dark side of artificial intelligence and against online platforms accused of failing to protect children's rights and safety.

Sánchez said that the platforms were “undermining the mental health, dignity, and rights of our children” and that “the impunity of these giants must end,” prompting Spain to ask its public prosecutor to examine whether these companies might be committing crimes through their algorithms and artificial intelligence tools. While details of the legal theory are still unfolding, the government is invoking crime statutes tied to child pornography and abuse, reflecting deep concern about AI misuse.

The latest investigation is not isolated: Europe's regulators have been intensifying scrutiny of tech platforms for months. In January, the Irish Data Protection Commission launched its own formal probe under the European Union's strict General Data Protection Regulation (GDPR) into X's AI chatbot Grok, examining whether it unlawfully processes personal data and generates harmful sexualised content, including images involving minors. In parallel, French authorities raided X's Paris offices and summoned Elon Musk over broader issues tied to AI deepfakes and the dissemination of illegal content.

The Spanish action also comes amid wider discussions about AI liability. Earlier this month, the United Nations children’s agency (UNICEF) called for countries to criminalise AI-generated child sexual abuse material entirely, underscoring that current laws lag behind rapidly evolving technologies.

Spain's probe marks a notable expansion because, instead of focusing only on users who create or share illegal content, it questions platform responsibility and the role of generative AI systems themselves: should companies be held accountable for harmful outputs their tools can produce, even without direct user intent? Critics argue this question is vital. AI tools can now generate deeply realistic images and videos that blur the line between real and fake, making enforcement harder and harm more widespread.

Yet regulators must grapple with how to prove causation and liability when AI systems generate content autonomously in response to user prompts. Are companies responsible for every harmful output their models can produce, or should responsibility rest with bad-actor users? Legal scholars point out that existing laws were not designed with generative AI in mind, necessitating a rethink of liability standards.

A further issue is enforcement: even when regulators act, meaningful outcomes are uneven. For instance, the EU has fined X €120 million for breaching content moderation rules under the Digital Services Act (DSA), but critics note that fines and investigations alone may not deter harmful behaviour if underlying business incentives favour engagement over safety.

Get the latest news and insights that are shaping the world. Subscribe to Impact Newswire to stay informed and be part of the global conversation.

Got a story to share? Pitch it to us at info@impactnews-wire.com and reach the right audience worldwide
