Reuters reports that Netflix is facing a boycott by German voice actors over concerns linked to AI training, underlining how rights, consent, and compensation remain live governance issues in creative supply chains.
Agentic AI security and legal guardrails
According to Reuters, Snowflake has announced a partnership with OpenAI, reported to be worth $200 million, signalling continued large-scale spend on model access and integration in enterprise data stacks. For governance, the key questions are the auditability of model use across data environments and who holds operational responsibility for outputs.
UK public sector AI build-out meets tougher platform and market controls
The Department for Science, Innovation and Technology has set out an expansion of free AI skills training, aiming to upskill 10 million workers by 2030 and to make newly benchmarked courses available to all adults. The announcement also signals how the government intends to frame “responsible adoption” as an economic and labour-market policy tool.
CMA conduct requirements for Google search and AI Overviews
Ofcom’s consultation on combatting mobile messaging scams closes on 28 January 2026, which matters for AI governance because scam campaigns increasingly scale through automated content generation and rapid targeting. Any new rules that raise detection and disruption duties for networks can indirectly shape how AI-enabled fraud is handled in telecoms ecosystems.
DMA specification and the Mills Review
According to Reuters, the European Commission opened two formal specification proceedings under the Digital Markets Act to shape how Google must provide access for rivals to certain services and data connected to AI and search. Google is reported to have warned of risks to privacy and innovation, while the Commission frames the process as a structured compliance dialogue with a six-month endpoint.
Grok deepfake enforcement and the UK data library push
According to Reuters, the EU opened a new formal line of scrutiny around Grok after non-consensual sexualised deepfakes circulated on X, with potential DSA exposure framed around systemic risk management rather than one-off removals. The development matters because it treats generative tools as part of a platform's risk architecture rather than as a separate product bolt-on.
South Korea’s AI Basic Act takes effect with labelling and oversight duties
According to the Financial Times, the House of Lords has backed an amendment to ban social media for under-16s, intensifying the UK policy debate on age checks and safety-by-design obligations that interact with automated content and recommendation systems.
Digital Omnibus AI opinion published
According to MLex, India is shifting away from a standalone AI Act and will instead rely on existing laws to regulate artificial intelligence risks, forgoing a comprehensive new statute for now.
AI stress tests urged for UK finance
According to Reuters, the Treasury Committee has urged UK regulators to run AI-specific stress tests for financial services and to publish clearer guidance on how existing rules apply to AI use.
UK updates generative AI safety standards amid global developments
According to the Times of India, Harvey announced on 19 January 2026 a partnership with SCC Online to integrate comprehensive Indian legal content into AI-assisted legal workflows, expanding access to AI tools in legal research.