The Financial Times reports that five major UK media groups have formed a coalition to develop shared standards and licensing frameworks aimed at controlling and monetising AI use of publisher content.
Reuters reports that Norway’s sovereign wealth fund is using AI systems to screen for ESG risk signals, positioning model-driven monitoring as a core compliance capability rather than an innovation pilot.
Nature reports a new UN scientific advisory structure will scrutinise AI impacts, signalling a shift towards institutional review mechanisms that can translate technical risk into policy agendas.
Regulation
New Zealand’s digital.govt.nz publishes updated Responsible AI Guidance for the Public Service covering GenAI, setting an operational baseline for safe, transparent, and responsible adoption across agencies.
UNESCO reports its advocacy for an ethical AI and data governance framework at the Pakistan Governance Forum, reinforcing the direction of travel toward values-based governance paired with data governance controls.
Cases
SCOTUSblog reports the US Supreme Court is tracking Thaler v. Perlmutter, a dispute over copyright claims for AI-generated art and the human-authorship requirement, keeping the authorship threshold live as a practical constraint on monetising AI output.
Academia
arXiv publishes an empirical study of 998 bug reports from LLM agent frameworks, building a taxonomy of root causes and symptoms that can be used as evidence for engineering controls, audit trails, and safe deployment obligations in agentic systems.
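A taxonomy of this kind can feed audit trails as structured incident records. A minimal sketch follows; the category names and framework labels are illustrative assumptions, not the paper's actual taxonomy:

```python
from dataclasses import dataclass
from enum import Enum
from collections import Counter

# Hypothetical root-cause and symptom categories for illustration only;
# the study's real labels are not reproduced here.
class RootCause(Enum):
    TOOL_MISUSE = "tool misuse"
    PROMPT_ERROR = "prompt error"
    STATE_HANDLING = "state handling"

class Symptom(Enum):
    CRASH = "crash"
    WRONG_OUTPUT = "wrong output"
    HANG = "hang"

@dataclass(frozen=True)
class BugReport:
    report_id: str       # identifier of the upstream bug report
    framework: str       # hypothetical agent-framework name
    root_cause: RootCause
    symptom: Symptom

def summarise(reports):
    """Count (root cause, symptom) pairs, e.g. as evidence in an audit trail."""
    return Counter((r.root_cause, r.symptom) for r in reports)

reports = [
    BugReport("BR-1", "agent-fw-a", RootCause.TOOL_MISUSE, Symptom.CRASH),
    BugReport("BR-2", "agent-fw-b", RootCause.TOOL_MISUSE, Symptom.CRASH),
    BugReport("BR-3", "agent-fw-a", RootCause.PROMPT_ERROR, Symptom.WRONG_OUTPUT),
]
summary = summarise(reports)
print(summary[(RootCause.TOOL_MISUSE, Symptom.CRASH)])  # 2
```

Tagging each incident with taxonomy labels in this way is one route to the documentation and accountability obligations the item anticipates.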
Events
The American Association of Colleges of Nursing lists an AI Seminar Series session titled AI at the Bedside on 26 February 2026, reflecting health sector uptake and governance attention in clinical settings.
The Inter-American Development Bank lists a webinar titled Interview GPT on 26 February 2026 focused on open source AI for qualitative interviews, relevant to public sector and research governance for sensitive data collection.
Takeaway
The day’s pattern is converging control points: content owners are moving from complaints to enforceable standards, public services are codifying GenAI operating rules, and courts continue to define which legacy doctrines bind AI outputs. Together these push AI governance toward clearer permissioning, documentation, and accountability.
Sources: Financial Times; Reuters; Nature; digital.govt.nz; UNESCO; SCOTUSblog; arXiv; AACN Nursing; Inter-American Development Bank