The Department for Science, Innovation and Technology has set out an expansion of free AI skills training, aiming to upskill 10 million workers by 2030 and to make newly benchmarked courses available to all adults. The announcement also signals how the government intends to frame “responsible adoption” as an economic and labour-market policy tool.
The House of Lords Library notes a scheduled debate on an international moratorium on the development of “superintelligent AI”. Even if non-binding, it is a useful barometer for where UK political attention is moving on frontier-risk narratives.
Regulation
- The Competition and Markets Authority has opened a consultation on proposed conduct requirements for Google’s general search services under the UK digital markets regime. The consultation is explicitly framed around fairness, transparency and publisher treatment, including AI-driven search features.
- The Department for Science, Innovation and Technology, the National Cyber Security Centre and the AI Security Institute have issued a call for information on “secure AI infrastructure”, focused on practical constraints and capabilities for protecting model weights and sensitive assets without undermining availability. This is a concrete governance hook for anyone running or procuring high-value models in the UK.
- The European Commission has opened specification proceedings to assist Google’s compliance with Digital Markets Act obligations on interoperability and online search data sharing. The announcement directly connects competition compliance architecture with access to Android-based AI features and search datasets.
Cases
- The First-tier Tribunal (Tax Chamber) decision in Elden v Revenue and Customs ([2026] UKFTT 41 (TC), 8 January 2026) is being tracked for its discussion of inaccurate case summaries linked to unverified AI use in the litigation workflow. The decision is a practical reminder that “AI-assisted” does not reduce professional responsibility for accuracy and procedural compliance.
- The Competition Appeal Tribunal’s disclosure ruling in Gormsen v Meta Platforms, Inc. and others (16 December 2025) provides detailed, court-facing parameters for how AI can be used in a disclosure exercise. It remains one of the clearest UK competition-litigation touchpoints for operationalising AI controls in e-disclosure.
Academia
- SSRN and arXiv have both posted “AI Deployment Authorisation: A Global Standard for Machine-Readable Governance of High-Risk Artificial Intelligence” (preprint), proposing a regulator-oriented, machine-readable authorisation approach spanning multiple governance dimensions. Even if aspirational, it is a useful reference for translating governance requirements into auditable artefacts, and for compliance teams trying to align model evaluation, documentation and assurance into one pipeline.
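To make the idea of a “machine-readable authorisation artefact” concrete, here is a minimal Python sketch. The field names (`model_id`, `risk_tier`, `evaluations` and so on) are illustrative assumptions for this briefing, not the schema proposed in the paper; the point is simply that an authorisation record can be validated and serialised deterministically so it is diff- and audit-friendly.

```python
import json

# Hypothetical minimal schema for a deployment authorisation record.
# These field names are assumptions, not taken from the paper.
REQUIRED_FIELDS = {
    "model_id", "risk_tier", "jurisdiction",
    "evaluations", "approved_by", "valid_until",
}

def validate_authorisation(record: dict) -> dict:
    """Reject records missing any required governance field."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return record

def to_json(record: dict) -> str:
    """Stable key order and fixed indentation keep the artefact
    reproducible, so two systems emitting the same record agree byte-for-byte."""
    return json.dumps(validate_authorisation(record), sort_keys=True, indent=2)

example = {
    "model_id": "acme-llm-v3",          # hypothetical model identifier
    "risk_tier": "high",                # under a risk-based regime
    "jurisdiction": "UK",
    "evaluations": ["eval-2026-001"],   # evaluation/report identifiers
    "approved_by": "Chief AI Risk Officer",
    "valid_until": "2026-12-31",        # ISO date for re-authorisation
}
print(to_json(example))
```

Nothing here is standard-specific; it shows the general shape of turning a governance requirement into an artefact a regulator or auditor could parse automatically.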
Events
- ETSI AI and Data Conference 2026, 9–11 February 2026 (Sophia Antipolis).
- Webinar: Lawyering the Enterprise AI Stack in 2026, 11 February 2026 (online).
- AI x Dispute Resolution Webinar Series 2026, first session 24 February 2026 (online).
- British Legal Technology Forum 2026, 10 March 2026 (London).
Takeaway
Today’s signals point to a UK posture of “deploy and secure” (skills, services, compute assurance) while regulators harden the market rules around AI-mediated access, content use and data sharing. Governance teams should treat access controls, provenance/accuracy checks, and disclosure-grade audit trails as baseline operational requirements, not optional enhancements.
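One common way to make an audit trail “disclosure-grade” is to hash-chain its entries, so that any after-the-fact alteration is detectable. The sketch below is a minimal illustration of that general technique (the entry structure is an assumption for this briefing, not a prescribed standard from any of the rulings above).

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash before the first entry

def append_entry(log: list, event: dict) -> list:
    """Append an event, chaining each entry to the hash of the previous
    one so that editing or deleting an earlier entry breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else GENESIS_HASH
    payload = json.dumps({"event": event, "prev_hash": prev_hash},
                         sort_keys=True)
    log.append({
        "event": event,
        "prev_hash": prev_hash,
        "entry_hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash from the genesis value; returns False if any
    entry was altered after it was written."""
    prev_hash = GENESIS_HASH
    for entry in log:
        payload = json.dumps({"event": entry["event"],
                              "prev_hash": prev_hash}, sort_keys=True)
        if (entry["prev_hash"] != prev_hash or
                entry["entry_hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["entry_hash"]
    return True
```

In an e-disclosure context, each event might record who ran an AI-assisted review step, over which documents, with which model version; the chain then gives opposing parties and the tribunal a tamper-evident record without any central trust assumption.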
Sources: Department for Science, Innovation and Technology, UK Parliament (House of Lords Library), Competition and Markets Authority, European Commission, First-tier Tribunal (Tax Chamber), Competition Appeal Tribunal, arXiv, SSRN