The Independent reports that researchers are urging Ofcom to examine how AI-generated “news” and ad-monetised misinformation can spike after major incidents, with recommendations aimed at crisis response and clearer disclosure of chatbot limitations. The practical governance signal is a shift from content moderation debates to ad-network incentives and regulator-led enforcement questions.
Safer Internet Day and regulator capacity
GOV.UK announces a new government campaign to help parents talk to children about harmful online content, and it explicitly ties this year’s Safer Internet Day theme to the safe and responsible use of AI.
WhatsApp access fight and sovereign AI build
Reuters reports the European Commission has issued antitrust charges against Meta over a policy that blocks rival AI services from using the WhatsApp Business API, and it is weighing interim measures to prevent “serious and irreparable” harm to competition while the case proceeds.
Deepfake detection and AI claims enforcement
Reuters reports that the UK will work with Microsoft, academics and other experts to build a deepfake detection system and an evaluation framework intended to set consistent expectations for how detection tools are assessed.
Board pressure and public sector AI build
Reuters reports that, with progress on AI accountability stalling, boards are being urged to press major technology companies for clearer disclosure and governance evidence, including transparency on human rights impact assessment practice and ethical AI commitments.
AI safety baseline meets deployment friction
Reuters reports that Netflix is facing a boycott by German voice actors over concerns linked to AI training, underlining how rights, consent, and compensation remain live governance issues in creative supply chains.
Agentic AI security and legal guardrails
According to Reuters, Snowflake has announced a partnership with OpenAI reported as a $200 million deal, signalling continued large-scale spend on model access and integration in enterprise data stacks. For governance, the key question becomes auditability of model use across data environments and who holds operational responsibility for outputs.
UK public sector AI build-out meets tougher platform and market controls
The Department for Science, Innovation and Technology has set out an expansion of free AI skills training, aiming to upskill 10 million workers by 2030 and making newly benchmarked courses available to all adults. The announcement also signals how government intends to frame “responsible adoption” as an economic and labour-market policy tool.
CMA conduct requirements for Google search and AI Overviews
Ofcom’s consultation on combating mobile messaging scams closes on 28 January 2026, which matters for AI governance because scam campaigns increasingly scale through automated content generation and rapid targeting. Any new rules that raise detection and disruption duties for networks can indirectly shape how AI-enabled fraud is handled in telecoms ecosystems.
DMA specification and the Mills Review
According to Reuters, the Commission opened two formal specification proceedings under the Digital Markets Act to shape how Google must provide access for rivals to certain services and data connected to AI and search. Google is reported as warning about risks to privacy and innovation, while the Commission frames the process as a structured compliance dialogue with a six-month endpoint.