United Nations – UN rights chief warns of “Frankenstein’s monster” risk. Reporting from Geneva describes UN High Commissioner for Human Rights Volker Türk warning that generative AI could become “a modern-day Frankenstein’s monster”, with human rights “the first casualty” if powerful firms deploy systems without safeguards, transparency and accountability.
Yahoo Finance – AI delay may threaten Europe’s economic future. Coverage of a speech by ECB President Christine Lagarde notes her warning that Europe is “missing the boat” on AI and risks jeopardising its future competitiveness. She calls for faster deployment, interoperable standards, diversified infrastructure and more uniform regulation to avoid fragmentation.
Reuters – AI and UK productivity: “from two weeks to two hours.” A feature on UK services firms (for example, Moore Kingston Smith) shows AI tools cutting certain tasks from two weeks to two hours, with AI-enabled teams reporting higher margins. Economists suggest the UK’s comparatively lighter regulatory and labour frameworks may accelerate adoption, though professional bodies still flag uncertainty over acceptable AI use.
NL Times – Dutch experts demand ambitious national AI strategy. More than fifty Dutch experts have called for a comprehensive national AI plan to prevent the Netherlands falling behind in the global AI race, urging stronger investment, coordination and ethical safeguards; their proposals implicitly respond to wider EU-level debates around the AI Act and Digital Omnibus.
Regulation
UK government (DSIT) – Sovereign AI Open Call: Autonomous Labs. DSIT has launched the “Sovereign AI Open Call: Autonomous Labs”, a preliminary market-engagement exercise to map UK autonomous lab capabilities, including AI-driven experimentation and closed-loop optimisation, and to inform future government interventions. It sits squarely in the “sovereign AI infrastructure” agenda and will shape expectations for security, data governance and collaboration between academia, industry and the public sector.
Bank of England / PRA – AI/ML within SS1/23 model-risk expectations. The PRA’s roundtable summary confirms that AI and ML models used by banks are fully in scope of Supervisory Statement SS1/23 on model risk management. Supervisors emphasise comprehensive model inventories, explainability, independent validation, strong governance and board-level challenge, signalling more granular supervisory dialogue on AI model risk.
European Commission / Cooley – Digital Omnibus on AI: streamlining the AI Act. A new Cooley update explains how the Commission’s “Digital Omnibus on AI”, adopted on 19 November and discussed publicly today, would amend the AI Act to streamline implementation, ease compliance burdens and adjust timelines ahead of full application in August 2026. The focus is on simplifying obligations for high-risk AI systems while retaining core safeguards, raising questions about whether simplification might also weaken protections.
Telefónica / Keystone / BISI – broader Digital Omnibus package and GDPR / cookie overhaul. Practitioner and policy blogs unpack the wider Digital Omnibus package, including a parallel set of proposals to revise GDPR, cookie rules and related digital legislation to support AI development. Commentators highlight a competitiveness-driven pivot, with more reliance on legitimate-interest processing for AI and streamlined consent mechanisms, but warn that over-correction could dilute privacy and consumer protection.
Academia
BISI – “EU’s AI regulatory pivot: Digital Omnibus and simplification under pressure.” A new report analyses the Digital Omnibus as a potential inflection point in EU AI regulation, arguing that the classic “Brussels effect” is less certain for AI than for GDPR. It frames the Omnibus as both a response to competitiveness concerns and a test of whether the EU can maintain high rights-protection standards while easing compliance burdens for AI developers and deployers.
Business
RICS – AI use by expert witnesses under scrutiny. A RICS Modus article warns that increasing use of AI by expert witnesses carries serious risks if not tightly controlled. Survey data show only 20% of expert witnesses have used AI (double the 2024 figure, but still far below general workplace use), and the piece stresses that experts must not outsource analysis or professional judgement to AI tools, ahead of global RICS professional standards on AI expected in 2026.
Legal Futures – Qanooni AI and Nexian partnership for UK law firms. Legal Futures reports a strategic partnership between Qanooni AI (a legal-intelligence platform embedded in Microsoft 365) and Nexian (an IT and managed-services provider). The collaboration aims to give UK law firms a “complete, secure and scalable” AI stack integrated with existing document-management and practice-management systems, reflecting rapid commercialisation of AI in legal practice and the need for robust security and governance in deployments.
Adoption of AI
Reuters – AI deployment in UK professional services. The UK case study of Moore Kingston Smith illustrates concrete adoption: AI tools embedded in workflows are reducing time-intensive tasks from weeks to hours, with measurable margin impact. This underscores that, in practice, AI adoption in professional services is already ahead of much of the regulatory detail, reinforcing the importance of interim professional standards and sectoral guidance.
Events
Crowell & Moring – Webinar on AI in the workplace (EU). A Crowell & Moring session taking place today focuses on “AI in the Workplace: EU Rules for When Humans and Bots Team Up”, offering practical guidance on the AI Act’s impact on employment law, employer-provided AI tools and employee-initiated AI use. It signals growing practitioner attention to the intersection of AI regulation, labour law and workplace governance.
Takeaway
Today’s developments show AI governance moving on several fronts at once: states are building sovereign AI and infrastructure strategies, regulators and professional bodies are sharpening supervisory and professional expectations, and the EU is experimenting with “simplification” of its dense digital rulebook. At the same time, the UN’s human-rights warning and rapidly evolving professional standards underscore that rights, ethics and accountability cannot be treated as optional extras in the rush to scale AI systems. For AIJurium, this underlines the need to track both high-level regulatory pivots (like the Digital Omnibus) and granular practice standards in sectors such as finance and legal services.
Sources: GOV.UK (DSIT, Sovereign AI Open Call), Bank of England / Prudential Regulation Authority, European Commission, Cooley, Telefónica, BISI, Lexology, RICS, Legal Futures, Reuters, Yahoo Finance, Arab News / Asharq Al-Awsat, NL Times, Crowell & Moring