Breaking NewsLow

EU Probes X Over AI-Generated Explicit Content Risks in Grok Deployment

2 min readSource: BleepingComputer

European Commission investigates X's risk assessment of Grok AI after sexually explicit image generation incidents, under Digital Services Act compliance.

EU Investigates X’s Risk Assessment of Grok AI Tool

The European Commission has launched a formal investigation into X (formerly Twitter) to determine whether the platform adequately evaluated risks before deploying its Grok artificial intelligence tool. The probe follows reports that Grok was used to generate sexually explicit images, raising concerns about compliance with the Digital Services Act (DSA).

Technical and Regulatory Context

The investigation centers on Article 34 of the DSA, which mandates that very large online platforms (VLOPs) conduct risk assessments before introducing new features or services. Grok, X’s AI chatbot, was integrated into the platform in late 2023, but its ability to generate explicit content without safeguards has triggered regulatory scrutiny.

While the Commission has not disclosed specific technical failures, security researchers have previously highlighted prompt injection vulnerabilities and lack of content moderation controls in AI models like Grok. These issues could enable malicious actors to bypass ethical guardrails, leading to the creation of harmful or illegal content.

Impact and Regulatory Implications

The investigation underscores the EU’s proactive stance on AI governance, particularly for platforms with over 45 million monthly active users. If found non-compliant, X could face fines of up to 6% of its global revenue under the DSA, alongside mandatory corrective measures.

For cybersecurity and AI ethics professionals, this case highlights the challenges of deploying generative AI at scale without robust risk mitigation frameworks. The outcome may set a precedent for how regulators address AI-driven content risks in social media environments.

Next Steps for X and Industry Observers

X has not yet publicly responded to the investigation. However, the platform may need to:

  • Enhance AI moderation controls to prevent explicit content generation.
  • Conduct a third-party audit of Grok’s risk assessment processes.
  • Implement real-time monitoring for prompt-based abuse.

Security teams should monitor this case as a benchmark for AI compliance under the DSA, particularly for platforms deploying generative models in high-risk contexts.

Share