California launches AI oversight unit as public and private governance hardens

Reuters reports that California Attorney General Rob Bonta has launched an AI oversight and accountability programme while continuing an investigation into xAI over non-consensual sexualised imagery allegedly generated by Grok. This matters for AI governance because it pairs institution building with an active enforcement posture, raising the bar for auditable safeguards, monitoring, and rapid incident response.

DSA researcher data access and platform enforcement

TechPolicy Press unpacks the newly surfaced enforcement detail around X and what it implies for researcher data access under the Digital Services Act, which matters for accountability work that depends on verifiable platform data.

AI-generated misinformation and ad incentives

The Independent reports that researchers are urging Ofcom to examine how AI-generated “news” and ad-monetised misinformation can spike after major incidents, with recommendations focused on crisis response and clearer limits on chatbots. The practical governance signal is a shift from content moderation debates towards ad-network incentives and regulator-led enforcement questions.

Safer Internet Day and regulator capacity

GOV.UK announces a new government campaign to help parents talk to children about harmful online content, and it explicitly ties this year’s Safer Internet Day theme to the safe and responsible use of AI.

WhatsApp access fight and sovereign AI build

Reuters reports the European Commission has issued antitrust charges against Meta over a policy that blocks rival AI services from using the WhatsApp Business API, and it is weighing interim measures to prevent “serious and irreparable” harm to competition while the case proceeds.

Deepfake detection and AI claims enforcement

Reuters reports that the UK will work with Microsoft, academics, and other experts to build a deepfake detection system and an evaluation framework intended to set consistent expectations for how detection tools are assessed.

Board pressure on AI governance disclosure

Reuters reports arguments that, with AI accountability stalling, boards should press major technology companies for clearer disclosure and governance evidence, including transparency on human rights impact assessment practice and ethical AI commitments.