Introduction
In the last fortnight the UK has pursued three intersecting tracks for AI governance. First, there is a strong focus on infrastructure and regional industrial policy through the creation of AI Growth Zones and associated data-centre commitments. Second, the High Court has handed down a landmark judgment in Getty Images v Stability AI which clarifies the limits of UK copyright law in relation to model training and recognises a narrower field of trade mark liability. Third, the state continues to expand operational AI use in justice and planning systems while regulators refine their strategic approach to AI and biometrics. Together these developments emphasise territoriality, infrastructure, and institutional practice rather than a single AI statute.
Legislative and regulatory framework
- AI Growth Zones and data-centre policy. The Department for Science, Innovation and Technology published the policy paper Delivering AI Growth Zones on 13 November 2025. It sets out plans for geographically focused AI Growth Zones intended to attract data-centre investment and advanced compute. The paper describes a mix of planning flexibilities, support for grid connections and incentives to cluster research, skills and high-value employment around designated sites. It frames data-centre capacity as a strategic enabler for foundation models and public-sector AI use rather than a purely commercial matter.
- AI Growth Zones as regional economic intervention. A parallel government news release on AI Growth Zones highlights projected investment and job creation in locations such as North Wales and the North East of England. It emphasises that zones will support both frontier AI firms and smaller local businesses that rely on cloud and compute resources. The announcement links the policy directly to the UK’s wider growth and productivity agenda and to efforts to secure energy-efficient infrastructure for data-intensive AI workloads.
- Ministry of Defence AI Model Arena. The Ministry of Defence’s Defence AI Centre has launched the AI Model Arena, a controlled environment to test AI models against defence-specific tasks. According to the guidance, the Arena enables evaluation of models for capabilities, vulnerabilities and alignment with defence requirements and ethical principles. It aims to support procurement decisions for both open and closed-source systems and to build a more systematic evidence base for defence use of AI. This represents an important sector-specific governance layer that operates through testing and assurance rather than formal statute.
- ICO strategy for AI and biometrics. The Information Commissioner’s Office has set out a programme of work on AI and biometrics, including a planned update to its guidance on automated decision-making and profiling and the development of a statutory code of practice on biometric data. In its plan of action, the ICO signals priority themes such as biometric surveillance, facial recognition, and the fairness of algorithmic profiling, and it highlights the link between AI oversight and its core data protection enforcement role. This positions the ICO as a central actor in AI governance through existing UK GDPR and Data Protection Act powers rather than through a bespoke AI statute.
Case law
- Getty Images v Stability AI High Court judgment. On 4 November 2025 the High Court delivered its judgment in Getty Images v Stability AI. The approved judgment explains that the claim began with allegations of primary copyright infringement, secondary copyright infringement, database right infringement, trade mark infringement and passing off in relation to the Stable Diffusion image-generation model. The court records that Getty later abandoned the claims about where training occurred and about specific allegedly infringing outputs, which significantly narrowed the issues. The remaining claims focused on secondary copyright infringement, trade marks and passing off.
- Key holdings from the judgment include:
- Model weights and secondary copyright infringement. The judgment holds that the model weights of Stable Diffusion are not a copy of Getty’s images in the sense required by the Copyright, Designs and Patents Act 1988. The court rejects the argument that an AI model becomes an “infringing copy” merely because its creation would have involved infringement if undertaken in the United Kingdom. It stresses the absence of stored or recognisable reproductions of the photographs within the model parameters and concludes that importation or distribution of the model in the UK does not amount to secondary copyright infringement.
- Trade mark infringement and watermarks. The court takes a more receptive view of the trade mark claims. It accepts that earlier versions of Stable Diffusion could, under realistic prompts, generate synthetic images that contained Getty or iStock watermarks. The analysis treats those outputs as falling within the relevant goods and services specifications for the Getty and iStock marks. However, the judgment characterises infringement as “limited” and historically tied to earlier iterations, noting that subsequent filtering measures reduce the risk of watermark replication and that the evidence did not establish detriment to distinctive character or reputation.
- Territoriality and forum strategy. A key legal consequence is the emphasis on territoriality. Because Getty accepted that training took place outside the United Kingdom, the court did not rule directly on whether training on copyrighted images constitutes infringement as a matter of principle. Commentary on the judgment therefore encourages rights holders to monitor where training and hosting occur and to consider multi-jurisdictional strategies. For UK doctrine the case clarifies the limits of extending secondary infringement provisions to AI models while leaving broader questions about training to future litigation.
Regulatory enforcement and oversight
- ICO focus on AI and biometrics in enforcement planning. The ICO’s AI and biometrics programme states that it will prioritise areas such as facial recognition, biometrics in policing and commercial deployments, and high-risk automated decision-making. It indicates that guidance and possible future codes will be directed at sectors already using biometric systems at scale, including law enforcement and border control, where proportionality and equality law concerns are acute. This approach signals a regulatory preference for targeted enforcement and sectoral guidance rather than generic AI rules.
- Data Controller Study and preparedness for AI compliance. The ICO’s Year 2 Data Controller Study finds that use of AI and automated decision-making remains uneven across controllers and that understanding of AI-specific risk is limited. Many organisations report experimenting with generative tools but lack systematic risk assessments or clear lines of accountability. The study flags gaps in training, governance structures and transparency measures, which has direct implications for future enforcement when AI tools are embedded in customer-facing services.
Industry and adoption
- Public sector adoption and data residency arrangements. An IT Pro report describes an agreement under which OpenAI will offer data residency in the United Kingdom for business and public-sector customers. ChatGPT Enterprise and ChatGPT Edu content, including text, images and files, will be stored in UK locations. The article notes that the Ministry of Justice plans to roll out these services to thousands of staff after a pilot that showed time savings in tasks such as drafting, compliance work and document analysis. OpenAI’s statement refers to UK data residency as part of laying a foundation for “trusted and secure AI adoption”. This arrangement raises questions about how UK data protection and procurement frameworks will apply to foundation model services embedded in justice administration.
- Government AI tools in the planning system. The same reporting highlights the government’s own deployment of AI tools such as Extract and Consult, which are designed to speed up analysis of planning documents and consultation responses. Officials anticipate an “arms race” in which authorities use AI to cope with rising volumes of responses, while residents use AI to generate more sophisticated objections. This dynamic illustrates how AI adoption can create new strains on administrative law principles such as rationality, procedural fairness and participation, especially where AI outputs include fabricated legal references.
Research and academic insight
- AI at every stage of the criminal process and guidance for legal practitioners on generative AI. The Oxford Institute of Technology and Justice has published a detailed country study on AI in the UK criminal process. The report maps the use of tools such as OASys and the Offender Group Re-Conviction Scale for risk assessment, predictive analytics programmes like the National Data Analytics Solution, and multiple forms of facial recognition. It notes that live facial recognition has expanded significantly, with millions of faces scanned in a single year, and that a Strategic Facial Matcher platform is being developed to link multiple databases. The study stresses that, despite extensive operational deployment, there are still no express statutory rules governing AI use in criminal or civil proceedings, so authorities rely on existing legislation such as the Police and Criminal Evidence Act, the Equality Act and UK data protection law, supplemented by professional guidelines.
The same report synthesises guidance from the Courts and Tribunals Judiciary, the Bar Council, the Law Society and the National Police Chiefs’ Council. Judicial guidance updated in April 2025 describes generative AI as a “useful secondary tool” for tasks like summarising documents, but warns that public chatbots are not authoritative sources and that anything entered into them should be treated as if it were published to the world. Bar Council guidance emphasises that failing to verify AI-generated citations and analysis could be regarded as grossly negligent and may lead to disciplinary action. These materials collectively reinforce a governance model that relies on professional responsibility, transparency and human oversight rather than automation of core legal reasoning.
Conclusion
The fortnight illustrates a pattern of AI governance that works through infrastructure, sectoral practice and litigation rather than a single comprehensive statute. AI Growth Zones embed AI policy within regional economic planning and energy-intensive data-centre strategy. The Getty Images v Stability AI judgment narrows the reach of secondary copyright infringement for AI models while confirming that trade mark law can still bite when outputs reproduce marks and watermarks. The ICO’s programme on AI and biometrics and its Data Controller Study show regulators preparing to scrutinise real deployments, particularly in high-risk contexts. At the same time, public-sector deals with OpenAI, AI-enabled planning tools and extensive AI use across the criminal process reveal a rapid normalisation of AI in public administration, with governance carried by existing legal frameworks and professional standards. The central question remains how to align these dispersed initiatives into a coherent accountability architecture before path dependency makes later correction difficult.
Sources: gov.uk, ICO, judiciary.uk, Mayer Brown, IT Pro, Guardian, Oxford Institute of Technology and Justice