According to Reuters, the EU has opened a new formal line of scrutiny of Grok after non-consensual sexualised deepfakes circulated on X, with potential DSA exposure framed around systemic risk management rather than one-off removals. The story matters because it treats generative tools as part of a platform's risk architecture rather than as a separate product bolt-on.
South Korea’s AI Basic Act takes effect with labelling and oversight duties
According to the Financial Times, the House of Lords has backed an amendment to ban social media for under-16s, intensifying the UK policy debate on age checks and safety-by-design obligations that interact with automated content and recommendation systems.
Digital Omnibus AI opinion published
According to MLex, India is moving away from a standalone AI Act and will instead rely on existing laws to regulate artificial intelligence risks, foregoing a comprehensive new statute for now.
AI stress tests urged for UK finance
According to Reuters, the Treasury Committee has urged UK regulators to run AI-specific stress tests for financial services and to publish clearer guidance on how existing rules apply to AI use.
UK updates generative AI safety standards amid global developments
Harvey partners with SCC Online for AI legal research tools (Times of India). On 19 Jan 2026, Harvey announced a partnership with SCC Online to integrate comprehensive Indian legal content into its AI-assisted legal workflows, expanding access to AI in legal research.
UK and EU Press Forward on AI Regulation Amid Deepfake Safety Backlash
UK intensifies scrutiny of Grok deepfake harms and compliance with Online Safety Act (The Guardian).
Deepfake enforcement and AI transparency pressure
According to Reuters, Keir Starmer said X is moving to comply with UK law following Ofcom's probe into Grok-generated sexual deepfakes, while ministers reiterated that the new offence criminalising the creation of sexual deepfakes will come into force within the week.
UK deepfake crackdown and platform enforcement
According to TIME, the UK is bringing into force an offence covering the creation of non-consensual sexualised intimate images, with the Grok incident accelerating political focus on enforcement against distribution channels and tool access. The immediate operational pressure is on platforms and providers to close creation and circulation pathways rather than merely react to reports.
AI deepfake abuse triggers cross border platform action
According to Sky News, Ofcom is investigating X after reports that its Grok tool was used to generate sexualised images of children and undressed images of people. The report frames the immediate issue as illegal content exposure and child safety risk, with platform controls now under scrutiny.