Ofcom’s consultation on combating mobile messaging scams closes on 28 January 2026. This matters for AI governance because scam campaigns increasingly scale through automated content generation and rapid targeting, and any new rules that raise detection and disruption duties for networks can indirectly shape how AI-enabled fraud is handled in telecoms ecosystems.

According to Reuters, the UK competition regulator’s proposals include allowing publishers to opt out of their content being used for AI Overviews or to train standalone AI models, alongside changes on ranking transparency and user choice. If implemented, this would treat opt-out tooling and attribution design as regulated conduct rather than voluntary product policy.

Regulation

  • GOV.UK. The Competition and Markets Authority has launched a consultation on proposed conduct requirements for Google’s general search services, including measures aimed at how publisher content is used in AI features such as AI Overviews. The consultation package frames publisher choice, attribution, and transparency as competition issues; the consultation window is open until 5pm on 25 February 2026.

  • The Information Commissioner’s Office has published a Data Protection Day blog focused on ‘debunking data protection myths about AI’, positioning the regulator as enabling responsible AI use while reinforcing that rights and safeguards still apply. The piece is useful as a short, quotable reference point for organisational messaging on the lawful adoption of AI that uses personal data.

  • The European Data Protection Board has published a Data Protection Day 2026 note on keeping children’s personal data safe online. For AI governance work, it is a reminder that child-data risk framing remains a priority lens for online services and AI-enabled personalisation.

Cases

  • According to Reuters, Google will pay $135 million to settle a lawsuit over Android data transfers. Although this is not an AI-specific dispute, it is a practical signal for AI governance: mobile data flows underpin on-device assistants, ad targeting, and personalisation models, and litigation risk often crystallises around collection and transfer practices rather than model design.

Academia

  • A new SSRN paper titled ‘AI Tools, Not Gods: Why Artificial Intelligence Hype Threatens Global Governance and How to Fix It’ was posted on 28 January 2026. It is long-form, but it can be mined for governance framing language around institutional design and accountability claims.

  • The Solicitors Journal has a recent practice-facing piece titled ‘Governing agentic AI in legal practice’. It is a useful bridge text for explaining why “agentic” tools raise supervision and liability issues that go beyond document drafting.

Takeaway

The clearest governance trend is regulators treating AI-mediated visibility and attribution as a controllable market interface, with opt-out and transparency becoming enforceable design constraints. This pushes AI governance towards evidence-backed controls over data inputs, content use, and ranking behaviour, rather than broad ethics statements.

Sources: Competition and Markets Authority, Reuters, Ofcom, Information Commissioner’s Office, European Data Protection Board, SSRN, Solicitors Journal