AI provenance and control become practical compliance tests

Lenovo announced a deeper NVIDIA partnership around the Lenovo AI Cloud Gigafactory and set out a consumer- and device-layer push via Qira at CES. Reuters reported that Meta faces China regulatory review hurdles around a proposed purchase of Manus, highlighting that AI deals now carry multi-layer governance risk, including national-security framing.

Deepfake enforcement and supply chain scrutiny

Reuters. The UK government urged X to act urgently after Grok was used to generate intimate ‘deepfakes’, and Ofcom contacted X and xAI about compliance with UK duties to prevent and remove illegal content. Reuters. A German minister called for EU legal steps to stop Grok-enabled sexualised AI images, explicitly framing this as a Digital Services Act enforcement problem rather than a platform-moderation debate.

Courts tighten the rules on training data

Reuters. E-discovery is becoming an AI governance problem, with legal teams pushed to show defensible preservation, retention, and ‘legal hold’ discipline for new data sources, including GenAI outputs and deepfake-style evidence risks. GOV.UK. The UK Anti-Corruption Strategy includes a policy signal that enforcement bodies intend to pilot artificial intelligence to speed up complex investigations, framing AI as a tool that must be controlled and audited in sensitive state functions.

Holiday lull in official AI governance steps

SSRN. A new paper surfaced with immediate governance relevance for “emotional” and “empathetic” AI systems, framing “frame amplification” and feedback-loop risks as a safety and accountability failure mode that regulators could treat as a consumer-vulnerability and manipulation risk in deployment contexts.

Procurement-led AI governance and court filing controls

GOV.UK. DSIT, acting through the Commercial Innovation Hub for MHCLG, published a tender for “Augmented Planning Decisions”, seeking an AI-augmented tool to assist planning officers with policy research, citation and report generation, material-considerations analysis, and reasoned recommendations, explicitly framed around verifiability and integration into existing planning systems.

Cyber resilience moves and tighter supplier governance around public-sector AI

GOV.UK. DSIT published research on cyber security vulnerabilities in operational technologies, relevant to AI governance where AI-enabled monitoring and control systems sit inside critical and high-risk industrial environments. GOV.UK. The Department for Transport published a detailed evaluation of its AI Consultation Analysis Tool (CAT), setting out accuracy testing against human benchmarks, a human-in-the-loop pilot design, and bias checking using protected-characteristic proxies, with an explicit aim of building trust in government AI use through transparent evidence.
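The DfT evaluation's methodology, accuracy testing against human benchmarks plus bias checking across protected-characteristic proxy groups, can be sketched in miniature. This is an illustrative example only: the labels, proxy groups, and helper functions below are hypothetical and are not drawn from the CAT evaluation itself; they just show the general shape of an agreement-rate and subgroup-disparity check.

```python
# Hypothetical sketch of accuracy-vs-human-benchmark testing and a
# proxy-group bias check, in the spirit of the DfT CAT evaluation.
# All data, names, and thresholds here are invented for illustration.

def agreement_rate(model_labels, human_labels):
    """Fraction of items where the tool's label matches the human benchmark."""
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(human_labels)

def groupwise_agreement(model_labels, human_labels, groups):
    """Agreement rate per proxy group, to surface subgroup disparities."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = agreement_rate([model_labels[i] for i in idx],
                                  [human_labels[i] for i in idx])
    return rates

# Hypothetical consultation-response classifications
model = ["support", "oppose", "support", "oppose", "support", "oppose"]
human = ["support", "oppose", "support", "support", "support", "oppose"]
proxy = ["A", "A", "A", "B", "B", "B"]  # hypothetical protected-characteristic proxy

overall = agreement_rate(model, human)
by_group = groupwise_agreement(model, human, proxy)
print(overall, by_group)
```

A real evaluation of this kind would also define acceptance thresholds for overall accuracy and for the maximum allowed gap between proxy groups, with human-in-the-loop review of disagreements.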

UK algorithmic transparency plus EU AI Act readiness signals

The Law Society. In evidence to the Joint Committee on Human Rights on AI and human rights, the Law Society pointed to rule-of-law and accountability issues as UK policy work on AI continues. European Data Protection Supervisor. The EDPS published Newsletter 117, highlighting EU-level work on preparing public administration for AI risk and AI Act implementation, including an inventory-style mapping of AI systems within EU institutions and broader horizon-scanning on AI trends.

Healthcare AI inquiry and frontier risk evidence

GOV.UK. The Medicines and Healthcare products Regulatory Agency has launched a high-profile Regulation of AI in Healthcare call for evidence, presented as a pivotal moment for the UK framework and inviting views on safety checks, liability allocation and post-deployment monitoring for AI medical devices. GOV.UK (+AISI). The UK AI Security Institute has released its first Frontier AI Trends Report with an accompanying government factsheet, publishing aggregated test results on more than thirty frontier systems and showing that capabilities in areas such as code generation and biology are advancing rapidly while serious vulnerabilities and misuse risks remain.

Algorithms at work, human rights scrutiny and AI transparency

GOV.UK. The Ministry of Justice (MoJ) published an Engineering AI Governance Framework for its engineering teams, setting practical rules for using tools like GitHub Copilot and for building bespoke AI systems across the development lifecycle. European Parliament. The Parliament adopted a resolution urging the Commission to propose new rules on algorithmic management at work, including transparency, worker consultation and restrictions on intrusive monitoring.

Infrastructures, creators and courts in the AI spotlight

Council of Europe. The Chair of the Committee on Artificial Intelligence used a parliamentary conference at the UK Parliament to urge MPs to treat the new Framework Convention on Artificial Intelligence as a central tool for protecting democracy, human rights and the rule of law, and to focus on ratification and implementation rather than abstract debates on technology.