Law Society of England and Wales. The latest Westminster update for the week commencing 15 December flags AI and digital regulation as a continuing focus of parliamentary scrutiny, situating debates on technology within broader concerns about the rule of law, justice policy and the impact of fast moving AI deployments across government.
DeepMind MoU, Crisis Management Advice and ChatGPT Wrongful Death Suit
UK Government.The Department for Science, Innovation and Technology announced a new partnership with Google DeepMind that will see the company open its first automated AI research lab in the UK, give UK scientists priority access to advanced models such as AlphaGenome and an AI co scientist, and support missions under the AI for Science Strategy, with an explicit policy aim of using AI to deliver cleaner energy, better public services and national renewal while working closely with the UK AI Security Institute on safety research and testing of education focused Gemini tools grounded in the national curriculum.
Vietnam’s AI Law and the GeminiJack Security Shock
Noma Security. GeminiJack shows AI assistants can become a covert data exfiltration layer. Noma Labs disclosed a zero click indirect prompt injection vulnerability, dubbed GeminiJack, in Google Gemini Enterprise and previously Vertex AI Search, which allowed attackers to embed hidden instructions in shared documents, calendar invites or emails so that AI powered enterprise search would silently exfiltrate Gmail, Calendar and Docs data through a disguised image request, leading Google to change how Gemini Enterprise interacts with its retrieval and indexing systems and separating Vertex AI Search from Gemini workflows.
AI training data and courtroom misuse under scrutiny
UK Government (DSIT). A new press release confirms that members of the former International Network of AI Safety Institutes have recommitted to joint work on benchmarks and testbeds under the renamed International Network for Advanced AI Measurement, Evaluation and Science, with an explicit focus on improving the comparability and robustness of AI measurement and evaluation practices across major economies.
Parliamentary Pressure, AI Security Warnings and Expanding Data Litigation
The Guardian. Over 100 UK parliamentarians across parties have endorsed a coordinated call, led by nonprofit Control AI, for binding regulation of the most powerful AI systems, urging the Prime Minister to resist pressure to weaken rules and stressing risks comparable to nuclear weapons and pandemics if advanced systems are left largely self governed.
NCSC. The UK National Cyber Security Centre warns that prompt injection should not be treated as a niche variant of SQL injection but a distinct and potentially more dangerous class of attack that exploits how AI systems process instructions, urging organisations to treat prompt injection as a strategic security risk in AI deployments rather than a minor technical bug.
Tech Policy Press. A new commentary argues that current UK law and policy do not provide effective protection from chatbot related harms, highlighting gaps in consumer protection and safety standards and suggesting that regulators have been slower than the speed at which conversational AI is being integrated into everyday services.
Judicial AI guidelines, Australia’s National AI Plan and global AI inequality
According to the Bank of England, the December 2025 Financial Stability Report warns that elevated equity valuations for technology companies focused on artificial intelligence, together with debt-financed AI infrastructure spending and leveraged positions in private credit and gilt markets, now pose heightened risks to UK and global financial stability, even though core UK banks remain resilient under stress tests.
FCA’s 'different”'AI regulation approach, UK fairness challenge and UNESCO AI literacy
According to the Financial Times, Financial Conduct Authority (FCA) chief executive Nikhil Rathi told the FT Global Banking Summit that the “AI era” requires “a totally different” approach to regulation, with the FCA choosing not to introduce AI-specific rules for financial services because the technology “moves every three to six months”.
EU AI sandboxes, UK AI–energy grid, Seoul AI standards and IP risk
GOV.UK (DESNZ/DSIT). According to the UK government, the latest meeting of the AI Energy Council in London focused on speeding up grid connections and building infrastructure for new AI data centres and ‘AI Growth Zones’. Ministers and regulators discussed reforms to accelerate grid access, discounted tariffs for data centres that can use excess capacity, and the broader goal of ensuring that AI’s growing energy demand is matched by sustainable, well governed energy infrastructure across the UK.
Hungary’s AI Act implementation, UK and EU AI literacy push, newsroom arbitration on AI tools
Ofcom – AI and media literacy in the UK. Ofcom warns that AI is ‘changing the information game’ and argues that media literacy policy must account not only for deepfakes but also for how recommender systems and generative tools shape people’s understanding of news and information.
Science Strategy, Cyber Resilience and OpenAI Liability
Scotland – AI infrastructure and water use scrutiny. Digit.FYI reports rising concern that large AI-driven data centres could be straining Scotland’s water resources, prompting calls for tighter transparency and environmental governance around AI infrastructure siting and cooling.
Global – AI and regulatory complexity for companies. Verdict highlights how overlapping AI, privacy and sectoral rules are driving regulatory complexity, arguing that organisations need to embed AI governance and privacy risk assessment within compliance workflows rather than treat AI as a bolt-on issue.