Reuters The UK government urged X to act urgently after Grok was used to generate intimate ‘deepfakes’, and Ofcom contacted X and xAI about compliance with UK duties to prevent and remove illegal content.

Reuters A German minister called for EU legal steps to stop Grok-enabled sexualised AI images, explicitly framing this as a Digital Services Act enforcement problem rather than a platform moderation debate.

Regulation

  • UK Parliament The House of Lords schedule for the week highlights a debate framed around ensuring AI development remains ‘safe and controllable’, which signals continued parliamentary attention to control mechanisms and accountability framing in early 2026.
  • Supreme Court of Jamaica Practice Direction No. 1 of 2025 sets court-facing expectations for responsible GenAI use in proceedings, anchoring integrity, accuracy, and confidentiality as operational duties for court users rather than abstract ethics.
  • Turks and Caicos Islands Practice Direction 1 of 2025 provides detailed guidance on GenAI in court proceedings, including explicit treatment of ‘hallucination’ and verification duties, which is a useful model for practical guardrails and auditability language.

Cases

  • Courts and Tribunals Judiciary In D (A Child) (Recusal) the Court of Appeal addressed risks from AI-shaped submissions in sensitive proceedings, reinforcing that parties remain responsible for accuracy and integrity even where tools assist drafting or research.
  • BAILII In MS (Bangladesh) v SSHD the Upper Tribunal addressed AI-generated material and professional conduct expectations, reinforcing that automated outputs do not dilute duties of candour and verification in tribunal litigation.

Academia

  • Cambridge University Press ‘Fairness and Artificial Intelligence’ in The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence is a compact reference point for how fairness claims translate into compliance expectations and contestability demands.
  • Cambridge University Press ‘Legal Tech and Access to Justice’ in Legal Tech and the Future of Civil Justice supports a governance narrative where adoption must be paired with explainability, process integrity, and protections for unrepresented users.

Adoption of AI

  • West Sussex County Council The council’s digital strategy explicitly signals planned use of AI, data analysis, and chat-style interfaces, which makes local public sector adoption an assurance and transparency question, not only a service efficiency move.

Events

  • TISA The AI Conference is scheduled for 3 February 2026 with a focus on practical implementation and governance-adjacent issues in services, which is relevant for client-ready framing of controls and accountability.
  • IAPP An AI Governance breakfast is scheduled for 26 February 2026, which is a useful watchpoint for how governance leaders are translating regulation and enforcement into operational controls.

Takeaway

Deepfake enforcement pressure is turning AI governance into a test of demonstrable control. The same evidence logic is spreading across supply-chain scrutiny and court practice, where organisations must show verification, traceability, and responsible use rather than relying on policy statements.

Sources: UK Parliament, Reuters, Courts and Tribunals Judiciary, BAILII, Supreme Court of Jamaica, Turks and Caicos Islands Legal Information Institute, Cambridge University Press, West Sussex County Council, ISACA, TISA, IAPP