  • According to the Bank of England, the December 2025 Financial Stability Report warns that elevated equity valuations for AI-focused technology companies, combined with debt-financed AI infrastructure spending and leveraged positions in private credit and gilt markets, now pose heightened risks to UK and global financial stability. Core UK banks, however, remain resilient under stress tests.

  • According to Raconteur, a new briefing on approaching EU AI Act deadlines stresses that many UK businesses underestimate their exposure: any provider, deployer, importer or distributor placing AI systems on the EU market can fall under the Act, and firms in financial services, health, transport, critical infrastructure and recommender systems are singled out as particularly likely to be caught once the relevant provisions apply.

  • According to Reuters, the European Commission has opened an EU antitrust investigation into Meta over plans to use data from its WhatsApp messaging service to train AI models, with concerns that Meta’s approach could distort competition in emerging AI markets. The report notes that this comes against the backdrop of the EU AI Act, which establishes guardrails for high-risk and other AI applications.

Regulation

  • According to the UK Department for Science, Innovation and Technology (DSIT), the new report “The Fairness Innovation Challenge: key findings” summarises four cross-sector projects (higher education, financial services, healthcare and recruitment) that developed tools and methods to address bias and discrimination in AI. The report explicitly situates fairness work within existing UK law (Equality Act 2010, UK GDPR, Data Protection Act 2018) and highlights socio-technical design, bias testing, documentation and stakeholder engagement as practical techniques for compliant AI deployment.

  • According to the UK Government’s speech “Developing the automated vehicles regulatory framework”, the Automated Vehicles Act is presented as one of the most comprehensive legal frameworks of its kind, with safety at its core, clear legal responsibilities for authorised self-driving entities, and regulatory powers to govern the use of automated driving systems (including those relying on AI) on UK roads. The speech emphasises an evidence-based safety framework and protection of vulnerable road users as central legislative aims. 

  • According to the European Commission and White & Case analysis, the Digital Omnibus package – including the “AI Omnibus” proposal – would amend the EU AI Act to delay the application of core high-risk obligations until the end of 2027, refine definitions of high-risk systems, and streamline documentation and transparency requirements, particularly to reduce burdens on SMEs. These remain proposals, but they already shape compliance planning and risk assessments for EU-facing AI systems. 

  • According to the Australian Department of Industry, Science and Resources, Australia’s newly released National AI Plan sets out a roadmap to “capture opportunities, share benefits and keep Australians safe” by investing in AI skills, data centres and public-sector capability, while relying on existing laws and sectoral regulators rather than introducing a standalone AI Act. The plan is accompanied by a strengthened Policy for the Responsible Use of AI in Government, which now requires agencies to adopt strategic AI governance, assign clear accountability, and conduct risk-based assessments for AI use cases. 

  • According to UNESCO, the newly published Guidelines for the Use of AI Systems in Courts and Tribunals provide principles and safeguards for judiciaries worldwide, aiming to ensure that AI supports, rather than undermines, human-led justice. The Guidelines, launched in London alongside the Athens Roundtable on AI and the Rule of Law, respond to survey findings that only a small minority of judicial actors have received AI training despite rapidly increasing use of AI tools in legal work. 

Cases

  • According to Reuters, the European Commission’s antitrust probe into Meta over plans to use WhatsApp data for AI training marks a significant enforcement step at the intersection of competition law, data protection and AI governance. While not yet a final decision, the investigation will test how EU competition authorities treat data aggregation and AI training as potential abuses of dominance or unfair leveraging in digital markets. 

Academia

  • According to ING Think, an economic note on “Europe lags and regulation shifts: 3 calls for AI” argues that Europe remains behind the US and China in AI model development and deployment, and that regulatory choices – including the AI Act and Digital Omnibus – will need to be carefully calibrated to avoid further widening the innovation gap while still providing credible guardrails. The piece calls for accelerated investment, deeper capital-market integration and more innovation-friendly implementation of AI rules.

Business

  • According to AI Journal, Aristek Systems has announced an initiative to develop ethical, secure and efficient AI solutions for the legal sector, including tools to assist law firms and in-house teams with document analysis, workflow automation and compliance. The expansion is positioned as a response to growing demand for AI-enabled legal services that remain compatible with confidentiality, data-protection and professional-ethics requirements. 

  • According to the Trowers & Hamlins briefing on the Digital Omnibus, providers of AI-enabled products in life sciences, medtech and financial services are being advised to use the proposed delay in high-risk AI obligations to strengthen internal AI governance, update technical documentation and align product-safety, data-protection and AI-risk processes, rather than pausing compliance work. 

Adoption of AI

  • According to UNDP, the flagship report “The Next Great Divergence: Why AI May Widen Inequality Between Countries” warns that unmanaged AI could reverse decades of convergence in development outcomes by widening gaps in economic performance, skills and governance capacity between rich and poorer states. The report stresses that AI policy is now a development-and-governance issue, calling for investment in infrastructure, human capital and regulatory capacity to avoid AI-driven divergence. 

Takeaway

Today’s picture shows AI law and governance moving simultaneously at domestic, regional and global levels. In the UK and EU, regulators are recalibrating risk – the Bank of England is integrating AI-driven market risks into financial-stability work, while the Digital Omnibus proposals would delay the AI Act’s high-risk obligations without reducing their eventual depth. Australia’s National AI Plan confirms a strong preference for governing AI through existing laws and sectoral regulators, mirroring the UK’s “no single AI Act” approach. Globally, UNESCO’s judicial guidelines and AI-governance training, together with UNDP’s inequality warning, underline that institutional capacity, fairness and access to governance tools are becoming as important as the substantive AI rules themselves.

Sources: Bank of England; Department for Science, Innovation and Technology (UK); UK Department for Transport; White & Case; Australian Department of Industry, Science and Resources; UNESCO; United Nations Development Programme; Reuters; Raconteur; ING Think; Trowers & Hamlins; AI Journal.