• UK: AI and data tools for children with SEND. The UK government announced a new research programme to develop “data tools” that help schools and local authorities identify and support children with special educational needs and disabilities earlier, as part of a cross-government “Missions Accelerator”; AI and advanced analytics are clearly implied in the tools’ design. The initiative raises governance questions about children’s data, algorithmic decision support in education and the transparency of any AI models embedded in local authority systems. GOV.UK

  • Scotland: Life sciences strategy foregrounds AI and diagnostics. The Life Sciences Strategy for Scotland: 2025–2035 was launched with an explicit focus on using advanced data, digital technologies and artificial intelligence to improve diagnostics, personalised medicine and clinical innovation, supported by an initial £1 million Scottish Government investment. This positions AI as a core enabler of health innovation, but also implies greater regulatory attention to data sharing, medical-device rules and algorithmic safety within Scottish life sciences. healthandcare.scot

  • Thailand: sectoral AI governance in insurance. Thailand’s insurance regulator issued AI governance guidelines for insurers, setting expectations on accountability, risk management and oversight when deploying AI for underwriting, claims and customer interaction. This is another example of sector-specific AI governance emerging outside Europe and the UK, relevant for comparative analysis of financial-services and insurance regulation. Asia Insurance Review

Regulation

  • EU Digital Omnibus: simplification or deregulatory turn for AI Act and GDPR. The European Commission’s Digital Package introduced a Digital Omnibus Regulation proposal that would make targeted technical amendments across the GDPR, the Data Act, the DSA, the DMA and the AI Act, framed as “simplifying” rules on AI, cybersecurity and data, and accompanied by a Data Union Strategy and plans for European Business Wallets. The AI-relevant elements include streamlining overlapping obligations, restructuring data-governance legislation and consolidating reporting channels for incidents and breaches. European Commission

  • High-risk AI obligations delayed and linked to standards. Analysis of the Digital Omnibus shows that some high-risk AI Act obligations (notably under Annex III) would only apply six months after the Commission confirms that harmonised standards and support tools are in place, effectively pushing parts of the regime back to December 2027. Commentators argue this “breaks” the earlier clear timetable of 2026–2027 and could allow AI models to use more intrusive datasets in areas such as financial services and access to essential services before tighter controls bite. euronews

  • Digital rights backlash against the Omnibus. Civil-society organisations responded sharply: EDRi described the Digital Omnibus and Digital Omnibus on AI as a “major rollback of EU digital protections”, warning that safeguards on profiling, tracking and AI accountability risk being dismantled; Amnesty Tech similarly argued that “simplification” could weaken protections against unlawful surveillance and discriminatory automated decision-making under the guise of competitiveness and innovation. This frames the Omnibus as a critical new front in EU AI and digital-rights governance debates. European Digital Rights (EDRi)

  • India: AI Governance Guidelines and emerging critique. A new analysis from IndiaLaw dissects India’s 2025 AI Governance Guidelines, emphasising that they reshape privacy, copyright and liability by aligning AI oversight with the Digital Personal Data Protection framework while maintaining a largely voluntary, principles-based model. The piece underlines that India is consciously positioning itself between the prescriptive EU AI Act and the fragmented US model, with a view to later hardening these guidelines into enforceable norms. IndiaLaw LLP

Cases

  • Generative-AI copyright suits: intensified motion and discovery practice. Recent docket activity in Authors Guild v OpenAI and The New York Times v Microsoft in the Southern District of New York includes multiple new entries on motions to compel, sealing disputes, redacted filings and the filing of hearing transcripts, indicating that the litigation is moving deeper into discovery around training data, logging and internal model-development records. These cases remain the central US forum for testing how copyright, fair use and licensing apply to large-scale model training. CourtListener

Academia

  • Cambridge report: generative AI and the future of the novel. A new report from the University of Cambridge’s Minderoo Centre for Technology and Democracy, launched today, finds that 51% of UK novelists believe AI is likely to replace their work entirely, with many already reporting lost income and widespread unconsented use of their books in training datasets. The study, together with accompanying commentary from the Institute for the Future of Work, calls for stronger copyright protections, consent-based data-mining rules and transparency obligations on AI developers to protect literary labour and cultural value. MCTD

  • India’s AI Governance Guidelines and information rights. A contemporaneous article from IndiaLaw analyses how India’s AI Governance Guidelines reconfigure privacy, copyright and liability by tying AI practice to data-protection norms and sectoral regulators, while remaining non-binding for now. It highlights a potential trajectory where voluntary guidelines become reference points for courts and regulators, much as soft-law and codes of practice have influenced AI governance in the EU and UK. IndiaLaw

Adoption of AI

  • Education and children’s services in the UK
    The UK SEND “data tools” mission signals growing adoption of AI-enabled analytics in schools and local authorities to flag support needs earlier, with central funding encouraging experimentation. This offers opportunities for more targeted intervention but also reinforces the need for robust safeguards on profiling, explainability and child-rights impact assessment in educational AI deployments. GOV.UK

  • Health and life sciences in Scotland
    Scotland’s new life sciences strategy explicitly envisages broader use of AI for drug discovery, diagnostics and medical innovation, linking this to investment and industrial policy. This is likely to increase the importance of compliance with medical-device rules, health-data governance and alignment with both UK and EU standards on AI in health. healthandcare.scot

  • Insurance and financial services in Asia
    Thailand’s insurance-sector AI governance guidelines illustrate how regulators outside Europe and North America are starting to specify expectations for explainability, human oversight and risk management when insurers deploy AI in pricing, underwriting and fraud detection, reinforcing the trend towards sector-specific governance rather than a single horizontal AI act. Asia Insurance Review

Events

  • Report launch: “The Impact of Generative AI on the Novel” (Cambridge / IFOW). Today’s launch event in Cambridge presents the findings of the Impact of Generative AI on the Novel project, bringing together researchers, authors and policy experts to discuss copyright, working conditions and the future of the literary ecosystem in light of generative AI. MCTD

  • Local training: Introduction to creative AI. Regional business-support initiatives, such as “Introduction to Creative Artificial Intelligence” sessions for small firms at The Bloxwich Launchpad in Walsall, continue to familiarise SMEs with AI tools and their risks, indirectly shaping how non-tech businesses approach compliance with data-protection and IP rules when adopting generative tools. Local Government

Takeaway

The Digital Omnibus is rapidly becoming a central fault line in EU AI governance, promising simplification but, according to rights groups, risking a dilution of hard-won protections and a delay to high-risk AI obligations. At the same time, the UK is quietly embedding AI and advanced analytics into education and health strategies, while India and Thailand refine principles-based and sectoral approaches to governance. Across courts and academia, the tension between innovation, labour and rights is increasingly visible, particularly in the creative industries, underlining that AI law is now as much about power and distributional effects as about technical compliance.


Sources: European Commission (Digital Strategy), GOV.UK, Scottish Government / Healthandcare.scot, EDRi, Amnesty International, IndiaLaw / AZB Partners, University of Cambridge / IFOW, Cybernews, Asia Insurance Review, CourtListener, Computing