ABC News reports Australia has scrapped a planned permanent AI Advisory Body after months of work, prompting concern that delays and shifts away from mandatory guardrails may narrow the window to put durable safety oversight in place. 

Reuters reports US Federal Reserve Governor Lisa Cook said AI is triggering major economic change and could raise unemployment in the short term as labour markets adjust. 

Regulation

  • The Australian Therapeutic Goods Administration publishes updated guidance explaining how software-based medical devices are regulated, including how “intended purpose” drives classification and obligations under the Therapeutic Goods Act and Medical Devices Regulations. For AI/ML medical software teams, the compliance burden is anchored in device classification, evidence, and ongoing monitoring rather than marketing claims about “AI” per se.

  • The UK government publishes a statement on the designation of Tier 1 video-on-demand services under the Media Act 2024, bringing designated services under enhanced Ofcom regulation comparable to traditional broadcasters. In practice, this widens the regulated surface area for harmful content controls (including synthetic or AI-assisted content in catalogues, promos, and trailers) and increases the likelihood of complaints-led enforcement and standards setting.

Cases

  • The Indian Express reports the Gujarat High Court issued notice on a public interest litigation (PIL) seeking a crackdown on, and regulatory directions for, AI deepfakes targeting constitutional authorities. The immediate legal effect is procedural, but the governance effect is that courts are being asked to convert “public trust” deepfake risk into duties on state agencies and potentially platforms—an enforcement pathway that can move faster than legislation.

  • Bar & Bench reports the Delhi High Court granted interim protection to singer Jubin Nautiyal’s personality/publicity rights, restraining unauthorised use of his name, voice/vocal style, image and likeness across AI platforms and intermediaries, alongside blocking/removal directions. The governance consequence is that voice cloning, AI covers, and avatar endorsements are increasingly being treated as fast-takedown civil injunction problems with John Doe-style relief, not merely reputation management.

Academia

  • arXiv publishes “The Digital Gorilla: Rebalancing Power in the Age of AI,” proposing a governance framing that treats advanced AI systems as a distinct societal actor alongside people, the state, and enterprises. 

  • preprints.org publishes an early-stage empirical study on where AI governance roles sit in organisations, using survey data to map roles such as chief AI officers (CAIOs), responsible AI leads, and algorithmic auditors across sectors and geographies.

Events

  • RE•WORK lists the Chief AI Officer Summit UK in London on 25 February 2026, focused on enterprise AI leadership and operating models. This is a useful forum for tracking how “CAIO” roles are being operationalised under real regulatory pressure rather than in theory.

  • Quarles lists an Artificial Intelligence Webinar Series session on 26 February 2026 focused on bringing AI-enabled products and services to market. As a governance signal, the framing “to market” typically clusters around risk allocation, documentation, and go/no‑go decision rights.

Takeaway

The enforcement centre of gravity is shifting from “AI principles” to assignable responsibility, named governance owners inside organisations, regulator-defined compliance perimeters, and court-ready takedown remedies for AI-enabled impersonation.

Sources: ABC News; Reuters; Therapeutic Goods Administration (Australia); UK Government (gov.uk); Indian Express; Bar & Bench; arXiv; preprints.org; RE•WORK; Quarles