According to TIME, the UK is bringing into force an offence covering the creation of non-consensual sexualised intimate images, with the Grok incident accelerating political focus on enforcement against distribution channels and tool access. The immediate operational pressure is on platforms and providers to close off creation and circulation pathways, not just react to reports.

According to Reuters, Spain’s cabinet has approved a draft bill to curb AI deepfakes and tighten consent rules around the use and reuse of images, including rules tied to AI-generated likeness and voice. If enacted, it would harden consent-based controls for AI-mediated depiction and commercialisation, alongside labelling expectations for synthetic content.

According to The Guardian, the UK Online Safety Act enforcement toolkit includes escalation routes that could in extreme cases support court-backed service restriction measures, though this is framed as a last resort. The near-term practical pathway is more likely to be remedial directions, fines, and rapid compliance demonstration around illegal content risk management.

Regulation

Ofcom has opened a formal Online Safety Act investigation into the provider of X, focused on duties relating to illegal harms and the protection of children, including whether required risk assessment and mitigation duties were met. With an investigation open, documented evidence of controls becomes central, particularly around how generative functionality is constrained and monitored.

The European Commission process on a Code of Practice for marking and labelling of AI-generated content signals a move towards interoperable, practical transparency measures that can be operationalised by providers and deployers. Even before finalisation, the direction of travel is towards detectable, machine-readable signalling for synthetic and manipulated content.
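By way of illustration only, the sketch below shows one way machine-readable signalling could be attached to generated images in a Pillow-based pipeline; the label field names and the chunk key are hypothetical and do not reflect any finalised Code of Practice schema.

```python
# Minimal sketch: embedding a machine-readable "AI-generated" disclosure in a
# PNG output via Pillow text chunks. Field names and the chunk key are
# illustrative assumptions, not a standardised labelling format.
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_synthetic_label(image: Image.Image, path: str, model_id: str) -> None:
    """Save the image with a simple provenance label alongside the pixel data."""
    label = {
        "synthetic": True,                              # content is AI-generated
        "generator": model_id,                          # which model produced it
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    meta = PngInfo()
    # Store the label as a named text chunk so downstream tools can read it
    # without inspecting the pixels themselves.
    meta.add_text("ai_generated_label", json.dumps(label))
    image.save(path, pnginfo=meta)


if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color=(128, 128, 128))  # stand-in for model output
    save_with_synthetic_label(img, "output.png", model_id="example-model-v1")
    # Reading the label back from the saved file:
    print(Image.open("output.png").text.get("ai_generated_label"))
```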

Cases

According to Keystone Law, the progression of the appeal in the UK Getty Images v Stability AI litigation keeps UK secondary infringement mechanics and the meaning of ‘infringing copy’ in focus for 2026 disputes linked to model training and outputs. The practical implication is continued legal uncertainty for UK-facing training pipelines and downstream product exposure while the appellate position develops.

Academia

A January 2026 arXiv paper argues that legal rules and methods can be used as design inputs for safer and more ethical AI alignment, shifting “compliance” from a check at the end to an upstream design constraint. This is a useful framing for building internal governance that treats legal duties as system requirements, not policy statements.
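As a loose, illustrative sketch of that framing (not drawn from the paper), a deployment pipeline might encode a duty such as “a current illegal-content risk assessment must exist” as a hard release gate rather than a policy document; the file path, record schema, and review window below are hypothetical.

```python
# Illustrative sketch: treating a legal duty as a deployment-time requirement.
# The artefact path, JSON schema, and review window are hypothetical assumptions.
import json
import sys
from datetime import datetime, timedelta, timezone
from pathlib import Path

RISK_ASSESSMENT = Path("governance/illegal_content_risk_assessment.json")
MAX_AGE = timedelta(days=365)  # assume the assessment must be reviewed annually


def release_gate() -> bool:
    """Block the release if the required risk assessment is missing or stale."""
    if not RISK_ASSESSMENT.exists():
        print("Blocked: no illegal-content risk assessment on record.")
        return False
    record = json.loads(RISK_ASSESSMENT.read_text())
    reviewed = datetime.fromisoformat(record["last_reviewed"])
    if reviewed.tzinfo is None:
        reviewed = reviewed.replace(tzinfo=timezone.utc)  # treat naive timestamps as UTC
    if datetime.now(timezone.utc) - reviewed > MAX_AGE:
        print("Blocked: risk assessment is older than its review window.")
        return False
    return True


if __name__ == "__main__":
    # Exit non-zero so CI/CD tooling fails the release when the duty is unmet.
    sys.exit(0 if release_gate() else 1)
```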

Events

IAPP UK Intensive 2026 in London includes AI governance alongside privacy and cybersecurity law, with workshops on 24 February and the main conference on 25 to 26 February 2026. This is a practical UK-facing forum for governance, risk, and compliance teams dealing with AI-enabled services.

Legal Geek Conference returns to London on 14 to 15 October 2026, with legal technology themes that typically include AI and regulation in practice. It is useful for monitoring how legal services and in-house teams operationalise AI governance.

Takeaway

The main shift is enforcement-grade treatment of intimate deepfake creation and distribution, with regulators focusing on whether platforms can prove risk assessment, feature-level safeguards, and rapid mitigation. Organisations deploying or integrating generative features should treat synthetic sexual imagery as a priority governance risk area, with clear controls, audit trails, and escalation paths designed in from the start.

Sources: Reuters, TIME, The Guardian, Ofcom, European Commission, Keystone Law, arXiv, IAPP, Legal Geek, Two Birds