• UK: Lords question on AI legislation timetable. In the House of Lords, today’s oral questions include a specific item on “artificial intelligence legislation”, with peers scheduled to press ministers on when the government will bring forward concrete proposals beyond the current pro-innovation framework. The issue is flagged in the Hansard Society’s weekly preview of parliamentary business for 17–21 November, which notes that AI legislation is on this afternoon’s Lords oral-questions agenda alongside support for children with dyscalculia and other tax and education issues. Hansard Society

  • EU: Commission evaluates the Digital Services Act’s interaction with other EU laws. The European Commission has released an update on its ongoing evaluation of the Digital Services Act (DSA), focusing on how the DSA interacts with existing EU legislation and how to refine the criteria for designating very large online platforms and search engines. This work, though not AI-specific, directly affects AI-intensive recommender systems and content-moderation tools used by large platforms, and will influence how the DSA and the AI Act operate together in practice. Digital Strategy

  • Global: South-led debate on inclusive AI governance. In Geneva, the South Centre is hosting a pre-summit event for the 2026 AI Impact Summit under the theme “Advancing Innovation for Equitable AI Access”. The session focuses on how countries can co-operate to ensure AI supports inclusive and sustainable development and to strengthen national and regional capacities in global AI governance processes. southcentre.int

Regulation

  • UK: Powers to test AI models for deep-fake child sexual abuse material. Legal commentary today highlights new UK government proposals, introduced via amendments to the Crime and Policing Bill, to allow designated bodies to access and test AI models where there is a risk they are being used to generate deep-fake child sexual abuse images. The provisions would enable regulators to require developers and deployers to co-operate with investigations, potentially including access to models and training data, marking a significant expansion of statutory powers over high-risk AI systems in the child-protection context. Today's Family Lawyer

  • EU: Reviewing DSA designation thresholds for large, AI-enabled platforms. The Commission’s DSA evaluation update notes that it is reviewing the thresholds and methodology for designating very large online platforms and search engines, aiming to ensure that obligations keep pace with systemic risks arising from algorithmic and AI-driven services. This review will inform how future guidance and delegated acts address AI-related transparency, recommender-system governance and risk assessments under the DSA, in parallel with the EU AI Act’s forthcoming implementation. Digital Strategy

  • Global: Kazakhstan adopts Central Asia’s first AI law. Kazakhstan has adopted what is reported as Central Asia’s first dedicated AI law, signed by President Kassym-Jomart Tokayev and aimed at regulating AI development and use while supporting innovation. According to domestic reporting, the law sets out basic principles for AI deployment, including requirements around data protection, non-discrimination and transparency, and provides a framework for classifying AI applications and supporting a national AI ecosystem. This positions Kazakhstan as an early mover on AI regulation in the region and adds a new comparative reference point for global AI governance debates. The Astana Times

Cases

  • Germany/EU: GEMA v OpenAI: Munich court on memorisation and output liability. A detailed report from Bar & Bench today summarises a judgment of the Munich Regional Court I (GEMA v OpenAI) holding that ChatGPT infringed copyright by memorising and reproducing song lyrics. The court found that (1) storing protected lyrics within the model parameters as numerical values constitutes a reproduction because the works are “reproducibly contained,” and (2) generating lyrics in response to simple prompts is an unauthorised act of making works available to the public. It rejected reliance on text-and-data mining exceptions and held that liability rests with the AI provider rather than the end user, awarding damages and ordering OpenAI to cease and desist the infringing activities. Bar and Bench - Indian Legal news

Academia

  • Health: Systematic review on AI in emergency care. The International Journal of Emergency Medicine has published an open-access review, “Revolutionizing emergency care: an overview of the transformative role of artificial intelligence in diagnosis, triage, and patient management”, analysing how AI tools support imaging, triage, and resource allocation in emergency departments. The authors conclude that AI is driving a “paradigm shift” in emergency medicine, with deep-learning systems improving image quality and workflow efficiency, while also raising questions about validation, safety and governance in high-stakes clinical environments. BioMed Central

  • IP/trade marks: Commentary on Getty and Cohere AI cases. Technology and copyright lawyer Barry Sookman has published a blog post, “Trademark Infringement and AI: the Getty and Cohere cases”, examining the High Court of England and Wales decision in Getty Images (US) Inc v Stability AI Limited and related Canadian litigation. The post emphasises the court’s findings on potential trade mark infringement where Stability’s outputs reproduced Getty watermarks, and discusses how AI-generated outputs can create trade mark and passing-off risks even where secondary copyright liability is rejected. Barry Sookman

Adoption of AI

  • Scotland/UK: Fife Council’s AI-enabled public-sector transformation. A new case study on techUK describes how Fife Council and CGI are using AI and automation to redesign workflows across the council, modelling the impact of 33 emerging technologies on more than 19,000 roles. The programme uses tools such as Faethm and Soroco’s Scout to map automation opportunities and is framed around “evidence over intuition”, with strong emphasis on governance, ethics, and long-term workforce planning rather than short-term cost-cutting. The case study highlights the need for robust AI governance, transparency and data management frameworks in local authorities adopting AI at scale. TechUK

  • OECD: Tax Administration 2025 and AI in public-sector decision-making. OECD materials released today around the Tax Administration 2025 report reflect on ten years of digitalisation in tax systems, pointing to increasing use of AI and advanced analytics for risk assessment, compliance management and service delivery. The analysis frames AI not just as a technical tool but as a governance challenge requiring clear accountability, auditability and safeguards against bias in automated decision-making. OECD

Events

  • UK: AI regulation and legal practice events. Today techUK is running an online “AI Regulation Drop In Session” and hosting “The Future of Law: AI-Powered Opportunities in Legal Tech”, a panel discussion focused on how AI tools are reshaping legal services and the regulatory environment around them. Both events sit within techUK’s wider AI Hub and “Seizing the AI Opportunity” campaign, which actively engages with government on AI regulation and assurance frameworks. 

  • Global: AI Impact Summit 2026 pre-summit in Geneva. The South Centre’s pre-summit event for the AI Impact Summit 2026, “Advancing Innovation for Equitable AI Access”, is taking place this afternoon at the Palais des Nations. It brings together developing-country voices, IT for Change and the Centre of Policy Research and Governance, with support from India’s mission in Geneva, to discuss inclusive AI governance and equitable participation in international rule-making processes. southcentre.int

Takeaway

Today’s picture reinforces three central trends: first, UK institutions are still in the “pre-legislative” phase, with parliamentary scrutiny (Lords questions, child-safety amendments, regulatory drop-ins) outpacing a single, consolidated AI bill. Second, the EU continues to refine the broader digital-platform framework (DSA) in ways that will shape how the AI Act is applied to systemic, platform-based AI services. Third, global AI governance is diversifying, with Kazakhstan’s new AI law and South-led initiatives in Geneva providing important comparative reference points alongside evolving case law such as GEMA v OpenAI.

Sources: Hansard Society, European Commission, South Centre, Today’s Family Lawyer, Astana Times, Bar and Bench, International Journal of Emergency Medicine, Barry Sookman, techUK, OECD