Pressure builds around EU AI timelines
Signals from Brussels and London point to growing scrutiny of AI rules and market impacts. Reports suggest the Commission may delay selected parts of the AI Act, the EDPB has advanced Brazil’s adequacy path, and the Bank of England has flagged AI-related market risks. Governance teams should track timing, data transfers, and disclosure duties.
Provenance, telecoms security and data flows tighten oversight
UK and partners deepen telecoms-security cooperation. Ofcom and peer regulators from the US, Canada, Australia and New Zealand agreed to enhance information-sharing and joint work on sector threats, including those linked to emerging technologies.
EU starts AI-content labelling code
The European Commission launched expert work toward guidelines and a voluntary code of practice for marking and labelling AI-generated content, supporting transparency obligations for synthetic or manipulated media and signalling practical guidance on provenance and media authenticity under the AI Act. European Commission press release.
UK HM Treasury asks the Financial Services Skills Commission to identify AI and wider tech skills gaps across financial services, with findings to inform policy and regulation. GOV.UK letter.
UK professional guidance and EU oversight converge on verifiable AI controls
UK legal practice: caution on AI use in social media. The Law Society Gazette reports updated guidance for solicitors on AI-driven content and engagement, highlighting confidentiality, accuracy, and client-care risks. Law firms should tighten internal AI policies and approval workflows.
UK transparency and skills push meets EU oversight signals
UK: Government confirms multi-year R&D allocations. DSIT set out long-term funding for UK research bodies, signalling continued priority for AI-related programmes and audit-ready public research.
UK: New algorithmic transparency record. The Standards and Testing Agency published a record covering the use of Colossyan AI voiceovers in helpline and training videos, expanding the public register and reinforcing disclosure practice. Algorithmic Transparency Records.
U.S. authors’ case and public compute reshape liability and governance contours
U.S. authors’ case advances. A New York federal judge declined OpenAI’s bid to dismiss claims that ChatGPT-generated text infringes authors’ copyrights, keeping key allegations alive.
DOE and AMD unveil public AI supercomputers “Lux” and “Discovery.” The US Department of Energy introduced two new systems at Oak Ridge National Laboratory to advance AI-driven science. Governance implications: public-sector compute is likely to require embedded compliance, auditing, and traceability protocols.
Compute policy, evidence pipelines, and liability signals
U.S. $1bn AI-compute partnership (DOE and AMD). The U.S. Department of Energy launched a $1 billion public-private partnership to build next-generation supercomputing capacity for AI research, raising governance questions on access, export-control compliance, and provenance of training data.
UK “AI Growth Lab”: call for evidence (open). The government is soliciting submissions to shape a pro-innovation regulatory sandbox for AI adoption; responses are due 2 January 2026. This sets expectations for measurable benefits and audit-grade documentation from participants.