According to Sky News, Ofcom is investigating X after reports that its Grok tool was used to generate sexualised images of children and undressed images of people. The report frames the immediate issue as exposure to illegal content and child safety risk, with platform controls now under scrutiny.

According to Reuters, Malaysia has restricted access to Grok as backlash widens over non-consensual sexualised images, following similar action in Indonesia. The report links the restriction to regulators' concern that safeguards and reliance on user reporting are not adequate to prevent harm.

According to Reuters, Meta will comply with an Italian order requiring WhatsApp to remain open to a rival's chatbot, with the company saying it will not apply its planned restriction in Italy. The story highlights how AI assistant distribution inside major platforms is becoming a live competition and platform governance issue in Europe.

Regulation

Ofcom has opened a formal investigation into X under the Online Safety Act, focused on whether duties to protect UK users from illegal content are being met in relation to Grok-generated sexualised imagery. This is a clear signal that AI-enabled content generation features are being treated as part of a platform's compliance perimeter, not as a separate product problem.

Cases

Ofcom has published the case file for its newly opened investigation into X Internet Unlimited Company and its service X, setting out the scope of the case and the compliance duties under examination. For AIJurium purposes, this is a trackable enforcement case that may become a practical reference point for how risk assessment and child safety duties apply where generative tools are integrated into a platform.

Academia

arXiv. A new paper argues that EU AI Act “regulatory learning” needs a clearer technical basis for scalable evidence flow, positioning AI technical sandboxes as a micro-level engine for evidence generation. It is a useful lens for explaining why governance regimes are investing in structured testing and documentation pathways, not only in rules on paper.

Events

Wharton AI and Analytics Initiative. The Accountable AI Research Conference is scheduled for 6 February 2026 and is explicitly framed around accountable AI research and governance themes. It is a strong horizon-scanning point for responsible AI governance signals that may later influence policy and compliance expectations.

CREATe. The AI Regulation ECR Conference is scheduled for 31 March to 1 April 2026 and is centred on AI regulation research, making it directly relevant for AI law and governance tracking. It is also a good anchor for academic and policy developments in Scotland and the wider UK in 2026.

ERA’s Annual Conference on Artificial Intelligence Systems and Fundamental Rights is scheduled for 15 to 17 April 2026 and is structured around legal compliance and rights impacts. It is a practical marker for EU-facing governance and fundamental rights framing that corporate teams often need to translate into internal controls.

Takeaway

The key development is Ofcom’s formal investigation into X over Grok-related sexualised imagery, which shows that UK online safety enforcement is now directly testing how generative features are governed in practice. The related signals from Malaysia and Indonesia point to fast-moving cross-border restriction risk for the same harm category, while the Italy WhatsApp story shows that distribution and access conditions for AI assistants are also becoming a live regulatory battleground.

Sources: Ofcom, Sky News, Reuters, arXiv, Wharton AI and Analytics Initiative, CREATe, ERA