Reuters reports that Anthropic has launched “Project Glasswing”, a cybersecurity initiative with partners including Amazon, Microsoft, Apple, Google, Nvidia, CrowdStrike and Palo Alto Networks. Frontier-model deployment is being framed less as a general productivity story and more as controlled use in defensive security, with credits and donations attached to encourage testing around critical software infrastructure.

Reuters also reports that Grab is leaning on AI to manage rising fuel costs, which matters less for the company-specific angle than for what it shows about ordinary commercial adoption. AI is continuing to move into margin-management and operational decision support rather than remaining confined to headline model launches.

Bloomberg Law reports that Patlytics has raised $40 million for AI-powered patent workflow tools. The point is not just fundraising, but continued investment in AI products aimed at legal and IP work rather than general-purpose model branding.

Regulation

  • The MHRA announced on 8 April that it is expanding its AI Airlock programme with £3.6 million over three years. That is a meaningful regulatory-development signal because the UK medicines regulator is putting multi-year support behind a live testing environment for AI-enabled medical technologies, suggesting a continued preference for supervised deployment pathways rather than abstract guidance alone.

  • NIST, meanwhile, has opened work on an AI RMF Profile on Trustworthy AI in Critical Infrastructure. The concept note says the profile is intended to help critical-infrastructure sectors deploy AI agents and tools with greater confidence, which makes this an implementation-oriented standards move rather than a broad policy statement.

  • GOV.UK has published the joint statement of the U.S.–UK Financial Regulatory Working Group. The statement records that both sides exchanged views on current and future uses of AI in financial services and discussed cooperation on both AI benefits and AI risks, showing that financial-sector AI oversight is now part of active bilateral regulatory dialogue.

Academia

  • A PubMed-listed study reports that adoption of an AI scribe was associated with changes in clinician time spent in the electronic health record and in visit volume. The value of this item is not that it settles the case for clinical AI, but that it adds real-world evidence on workflow effects in care delivery, which is more useful for policy and procurement discussions than another purely conceptual paper about AI’s future promise.

  • arXiv has posted Reciprocal Trust and Distrust in Artificial Intelligence Systems. The paper argues that debates about trustworthy AI should also confront distrust and institutional reciprocity, which makes it useful for governance discussions that go beyond technical performance alone.

Events

  • The Sedona Conference Working Group 13 Annual Meeting runs 9–10 April 2026 in Austin. It is relevant because WG13 sits squarely at the junction of AI and law, so the meeting is a plausible near-term source of practical discussion on governance, discovery and legal-use standards.

  • Georgetown Law’s Tech Institute is hosting a Tech Law Scholars information webinar. It is not an AI event in isolation, but it is relevant to the education pipeline around technology-law practice and signals continued institutional positioning around tech-law training.

Takeaway

AI governance is advancing where institutions are willing to test, finance and supervise real use. That is visible in legal-tech investment, regulated medical sandboxes, cross-border financial-regulatory coordination and event programmes built around deployment rather than hype.

Sources: Reuters; Bloomberg Law; MHRA; NIST; GOV.UK; PubMed; arXiv; The Sedona Conference; Georgetown Law.