  • GOV.UK (DESNZ/DSIT). According to the UK government, the latest meeting of the AI Energy Council in London focused on speeding up grid connections and building infrastructure for new AI data centres and ‘AI Growth Zones’. Ministers and regulators discussed reforms to accelerate grid access, discounted tariffs for data centres that can use excess capacity, and the broader goal of ensuring that AI’s growing energy demand is matched by sustainable, well-governed energy infrastructure across the UK.

  • International Electrotechnical Commission (IEC), International Organization for Standardization (ISO) and International Telecommunication Union (ITU). According to IEC, ISO and ITU, the International AI Standards Summit in Seoul issued the ‘Seoul Statement on Artificial Intelligence’, setting out a joint vision for international standards that support ‘trustworthy’ AI while upholding fundamental rights and bridging the digital divide. The statement frames standards as tools to operationalise principles of safety, inclusiveness and accountability, and to align national AI regulations around interoperable technical requirements.

  • Digital Health (reporting on Scottish Government strategy). According to Digital Health, the Scottish Government has published a refreshed Life Sciences Strategy that explicitly aims to harness new technologies ‘from genome editing to AI’, backed by dedicated investment and a sector-wide delivery plan. The strategy positions AI as part of a ten-year plan to make Scotland a leading location for developing, testing and commercialising life sciences innovation, linking AI to infrastructure, skills and NHS innovation hubs. 

Regulation

  • European Commission (AI Office). According to the European Commission, a public consultation has opened on a draft implementing act for AI regulatory sandboxes under the EU AI Act. The draft text defines how national authorities should set up and supervise sandboxes, the conditions under which start-ups and other organisations can test AI systems in a controlled environment, and how to balance innovation support with respect for fundamental rights and existing sectoral rules. This consultation is an early test of how the AI Act’s innovation tools will interact with data protection, consumer and financial regulation. 

  • Australian Department of Industry, Science and Resources. According to the Australian Government, the new National AI Plan released today sets out a framework to ‘capture the opportunity, spread the benefits and keep Australians safe’ in the AI age. The plan links investment, skills and research with commitments on safety, inclusion and regional equity, and signals further regulatory work on transparency, accountability and sector-specific guardrails as AI systems are deployed across public and private services. 

Academia

  • Taylor Wessing – ‘Copyright in 2026: clarification, review and reform’. According to Taylor Wessing, 2025 has seen AI and copyright disputes move into higher courts in both the EU and UK, including litigation over training data, text and data mining exceptions, and memorisation of works by generative models. The article predicts that 2026 will bring CJEU rulings on AI-related copyright questions and further UK reform debates, highlighting that general-purpose AI (GPAI) providers under the AI Act must implement copyright compliance policies and publish training-data summaries, while rights holders test the limits of existing exceptions and opt-outs.

  • Education International – ‘Using collective bargaining to regulate the use of technology and artificial intelligence in higher education’. According to Education International, new guidance in the ‘Education Voices’ series stresses that collective bargaining is becoming a key tool for regulating AI in universities, including safeguards on data use, algorithmic monitoring of staff and students, and the substitution of AI systems for human teaching. The piece argues that union-negotiated clauses can require transparency, human oversight and proper impact assessment before institutions introduce AI-driven platforms into core educational functions.

Business

  • Adams & Adams / AI Impact – SMARTAI IP Portal and AI IP Readiness Test. According to AI Impact and Adams & Adams, the launch of the SMARTAI IP Portal and associated AI IP Readiness Test is intended to help organisations understand and manage AI-related intellectual property and trade secret risks. The portal’s November newsletter, flagged in the launch announcement, covers recent UK and US AI/IP decisions (including Getty Images v Stability AI), board-level responsibilities for AI under South Africa’s King V governance code, and practical steps for output governance and provenance. The launch reflects a growing market for specialist AI and IP compliance tools aimed at boards and in-house teams.

Adoption of AI

  • Scottish Government (via Life Sciences Scotland strategy coverage). According to the Scottish Government’s refreshed Life Sciences Strategy, as reported by Digital Health, AI is now framed as a core enabler for Scotland’s ambition to grow life sciences turnover to £25 billion by 2035. The strategy emphasises using AI in diagnostics, precision medicine and health data infrastructure, alongside investment in NHS regional innovation hubs, and highlights the need for sustained public-sector support to ensure that AI-enabled innovation is both commercially successful and aligned with health and equality goals (Digital Health).

  • United Nations Development Programme (UNDP). According to the Observer Online Report, a new United Nations development report warns that AI could deepen global inequality if access to skills, infrastructure and governance capacity remains uneven. The report stresses that governments must couple AI adoption with investment in public digital infrastructure, safety standards and inclusive education, and that international cooperation on AI governance is crucial to prevent the concentration of benefits in a small group of highly digitalised economies.

Takeaway

Today’s developments show AI governance becoming more structured and more sector-specific. In Europe, the Commission’s work on regulatory sandboxes and the Seoul Statement on AI standards point to a future in which the AI Act is supported by detailed technical and procedural frameworks. In parallel, the UK is explicitly tying AI growth to energy infrastructure, Scotland is embedding AI into its life sciences strategy, and Australia’s National AI Plan frames adoption around safety and inclusion. Meanwhile, IP specialists and education unions are beginning to operationalise AI risk and accountability in practice.

Sources: Department for Energy Security and Net Zero (DESNZ) and Department for Science, Innovation and Technology (DSIT), GOV.UK; European Commission; Australian Department of Industry, Science and Resources; International Electrotechnical Commission (IEC), International Organization for Standardization (ISO) and International Telecommunication Union (ITU); Scottish Government; Digital Health; Taylor Wessing; Education International; Adams & Adams / AI Impact; Observer Online Report.