Online safety investigations and public sector AI transparency tools

This report tracks concrete shifts in UK AI governance that change what organisations may need to do in practice. The strongest signals this fortnight were online safety enforcement moving into active investigations, and central government tightening the practical foundations for public sector AI use through data readiness and transparency tooling.

Scotland AI Governance Map

Scotland’s AI Strategy is framed as a collectively developed governance approach, built through public and stakeholder engagement, and delivered through ‘Collective Leadership’ rather than a fixed, top-down legal framework. It relies on a ‘co-production’ model and a living-playbook style of implementation, so principles and practices evolve with participation as the ecosystem learns what works.

Online Safety Enforcement, Public-Sector AI Governance Tools, and Cyber Resilience

This period shows UK AI governance becoming more operational: Ofcom is now issuing repeat Online Safety Act penalties and activating the fees regime; government teams are rolling out practical ethics tooling for AI use in public services; and cyber resilience legislation is moving through Parliament with the ICO setting out what expanded oversight could mean for digital and managed service providers. EU sandbox implementation work and wider transatlantic friction around online regulation remain important for UK organisations with cross-border exposure.

Enforcement Signals and Strategic AI Partnerships

This fortnight is defined by (1) new UK international AI and science partnership announcements, (2) a clear shift from guidance to enforcement under the Online Safety Act, and (3) EU implementation work on regulatory sandboxes alongside the Digital Omnibus simplification track, which UK providers with EU-facing operations must monitor.

Summary of Policy Communication on Supercomputing, Quantum and AI

This summary provides a public overview of recent correspondence on supercomputing, quantum technologies and artificial intelligence in a Scottish policy context. The exchange began with a briefing note on a possible Scottish Supercomputing, Quantum and AI Innovation Strategy (Briefing Note), which was submitted to Keith Brown MSP as the constituency representative.

Investment, Science Strategy and Online Safety

Introduction

This fortnight’s UK AI landscape is shaped by three strands: central government pushing AI as an engine of economic growth and scientific discovery; regulators sharpening expectations around online safety and data protection enforcement; and the EU adjusting the implementation of its AI rulebook in ways that will affect UK organisations with EU-facing systems. Together, these developments tighten the link between AI investment, infrastructure and concrete governance duties.

Snapshot

AI infrastructure, copyright litigation and public-sector adoption in the UK

In the last fortnight the UK has pursued three intersecting tracks for AI governance. First, there is a strong focus on infrastructure and regional industrial policy through the creation of AI Growth Zones and associated data-centre commitments. Second, the High Court has handed down a landmark judgment in Getty Images v Stability AI, which clarifies the limits of UK copyright law in relation to model training and recognises a narrower field of trade mark liability. Third, the state continues to expand operational AI use in justice and planning systems while regulators refine their strategic approach to AI and biometrics. Together these developments emphasise territoriality, infrastructure and institutional practice rather than a single AI statute.

Territorial Reach and Sectoral Oversight: The UK Sharpens AI Accountability through Data and Safety Frameworks

The UK continues to advance AI oversight through existing statutory regimes and targeted consultations. Recent activity concentrates on online safety duties, data access frameworks and evidence gathering to shape workforce and productivity policy. Enforcement and tribunal outcomes sharpen territorial scope for data protection and signal higher compliance expectations for organisations that deploy or supply AI systems.