UK Parliament – scrutiny of AI Growth Zone policy. A written question in the House of Lords asks what assessment has been made of the proposed “AI Growth Zone” in south-east Wales, seeking clarification on UK Government support, expected investment and governance structures. This continues the trend of using geographically targeted zones to attract AI-related firms, and raises questions about local accountability, infrastructure, and safeguards around data use and experimentation within such zones.
HRReview – AI job-loss forecast raises regulatory and policy concerns. HRReview reports on a new “future of work” analysis suggesting AI could threaten up to half of existing jobs, particularly in knowledge-intensive services. The piece links the scale of expected disruption to the urgency of labour-law and social-policy responses, including up-skilling, worker consultation on AI deployment, and potential reforms of redundancy and consultation rules if AI adoption accelerates as predicted.
Thailand – data protection enforcement against Worldcoin iris-scan project (via DL News). Thailand’s data protection authority ordered the Worldcoin project (associated with Sam Altman’s Worldcoin Foundation) to cease operations in Thailand and delete more than 1.2 million collected iris scans. The regulator concluded that exchanging highly sensitive biometric data for crypto tokens breached Thai data-protection law, including requirements on explicit consent, purpose limitation and proportionality. The order underlines how biometric AI projects face strict scrutiny when business models depend on large-scale, cross-border processing of immutable identifiers.
Regulation
UK Government/DSIT – research on AI-related cyber security in procurement. The Department for Science, Innovation and Technology published research on cyber security practices in supplier management and procurement, explicitly referencing tools such as the Global Standard on AI Cyber Security and the Software Security Code of Practice. The study identifies low awareness and uneven implementation of such standards in public-sector supply chains, and recommends clearer guidance, better risk assessment of AI-enabled components, and procurement conditions that embed security-by-design in AI-rich systems.
Bar Council (England & Wales) – updated guidance on generative AI for the Bar. The Bar Council released an updated note on the use of generative AI in practice, reinforcing that barristers remain personally responsible for the content of all work, must not disclose confidential or privileged information to public AI systems, and should not cite AI-generated authorities that have not been independently checked. The guidance frames generative AI as a tool that may assist drafting and research but stresses duties of competence, confidentiality, and integrity, and warns of disciplinary consequences where reliance on AI leads to misleading the court.
USPTO (United States) – revised inventorship guidance for AI-assisted inventions. The US Patent and Trademark Office issued revised inventorship guidance for AI-assisted inventions, expressly rescinding its February 2024 guidance. The new approach confirms that only natural persons can be inventors and that standard human inventorship criteria apply, regardless of AI involvement. It rejects earlier attempts to adapt joint-inventorship tests to AI, emphasising instead that AI is treated as a tool; the focus is on identifying which human, if any, made a “significant contribution” to conception under long-standing patent law. This shift narrows the room for expansive interpretations of AI-related inventorship and may influence other jurisdictions’ debates.
Cases
USA – prosecutorial use of generative AI leads to “hallucinated” citations. The ABA Journal reports that a US district attorney acknowledged one prosecution brief contained AI-generated “hallucinations”, although other defects in filings were attributed to human error. The story illustrates the practical risks of ungoverned AI use in criminal proceedings: fabricated case citations, opaque drafting processes, and difficulties in attributing responsibility between junior lawyers and tools. It reinforces emerging court-practice trends requiring certification that AI-generated material has been checked and that parties remain accountable for accuracy.
Academia
ActiveMind.legal – fundamental rights impact assessments under the EU AI Act. ActiveMind.legal published a new explainer on the “fundamental rights impact assessment” (FRIA) that Article 27 of the AI Act requires certain deployers of high-risk AI systems to carry out. The piece dissects how the FRIA interacts with GDPR data-protection impact assessments, clarifying that a DPIA cannot fully replace a FRIA but can cover the overlapping data-protection risks. It highlights the additional FRIA elements, such as broader non-discrimination, access-to-services and rule-of-law impacts, and offers practical guidance for deployers preparing combined assessments.
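For readers mapping this onto internal documentation, the elements listed in Article 27(1) AI Act can be captured alongside a reference to the overlapping DPIA in a simple structured record. The Python sketch below is purely illustrative: the class and field names are this newsletter’s own assumptions, not terminology from the AI Act or the ActiveMind.legal explainer, and the example values are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class FRIARecord:
    """Illustrative record loosely following Article 27(1) AI Act; field names are this sketch's own."""
    deployer_process: str                      # (a) process in which the high-risk AI system is used
    period_and_frequency: str                  # (b) intended period and frequency of use
    affected_groups: List[str]                 # (c) categories of persons and groups likely affected
    risks_of_harm: List[str]                   # (d) specific risks of harm to those categories
    human_oversight: str                       # (e) human oversight measures
    measures_if_risks_materialise: List[str]   # (f) governance and complaint arrangements
    dpia_reference: str = ""                   # pointer to the GDPR DPIA covering overlapping risks
    overlapping_data_risks: List[str] = field(default_factory=list)

# Hypothetical example of a combined FRIA/DPIA record for a public-sector deployer
assessment = FRIARecord(
    deployer_process="Eligibility triage in a public benefits service",
    period_and_frequency="Continuous use, reviewed quarterly",
    affected_groups=["benefit applicants", "caseworkers"],
    risks_of_harm=["indirect discrimination", "wrongful denial of access to services"],
    human_oversight="Caseworker review of every adverse recommendation",
    measures_if_risks_materialise=["suspend automated triage", "internal complaint route"],
    dpia_reference="DPIA-2025-014",
    overlapping_data_risks=["excessive data collection", "repurposing of application data"],
)

The structure mirrors the explainer’s point: the DPIA reference covers the overlapping data-protection risks, while the FRIA-specific fields record the broader rights impacts a DPIA alone would not.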
npj Digital Medicine (Nature) – consent platforms for health data in AI-enabled apps. A new article in npj Digital Medicine proposes a user-driven consent platform for health data sharing across digital health applications. Although primarily technical, it directly engages legal-governance themes: granular consent management, transparency of secondary uses (including AI training), and mechanisms for revocation and data-minimisation in AI-enabled clinical research environments. The work may inform regulators and health-data controllers designing compliant consent and governance frameworks for AI in healthcare.
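The paper itself is the authority on its design, but the governance mechanics it engages (purpose-specific consent, revocation, and purpose limitation for secondary uses such as AI training) can be illustrated with a minimal consent-ledger sketch. Everything below is hypothetical: the class names, methods and purposes are this newsletter’s own illustration, not the platform proposed in the article.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class ConsentRecord:
    """One purpose-specific consent decision by a data subject (illustrative only)."""
    purpose: str                        # e.g. "clinical care", "AI model training"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None

class ConsentLedger:
    """Minimal user-driven consent store: grant, check and revoke per purpose."""
    def __init__(self) -> None:
        self._records: Dict[str, ConsentRecord] = {}

    def grant(self, purpose: str) -> None:
        self._records[purpose] = ConsentRecord(purpose, datetime.now(timezone.utc))

    def is_permitted(self, purpose: str) -> bool:
        # Purpose limitation: processing is allowed only where consent for that purpose is active
        record = self._records.get(purpose)
        return record is not None and record.active

    def revoke(self, purpose: str) -> None:
        record = self._records.get(purpose)
        if record is not None and record.active:
            record.revoked_at = datetime.now(timezone.utc)

# Hypothetical usage: secondary use for AI training needs its own, still-active consent
ledger = ConsentLedger()
ledger.grant("clinical care")
assert not ledger.is_permitted("AI model training")   # no blanket consent
ledger.grant("AI model training")
ledger.revoke("AI model training")
assert not ledger.is_permitted("AI model training")   # revocation takes effect

A real platform would add audit trails, identity verification and downstream enforcement, which is where the legal-governance questions raised in the article bite.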
Business
Bureau of Investigative Journalism – AI-generated legal scams on Fiverr. An investigation by the Bureau of Investigative Journalism uncovers dozens of Fiverr profiles offering cheap “legal advice” under the stolen identities of genuine lawyers, often with AI-generated profile pictures and marketing text. The report documents misleading claims about qualifications and jurisdictions, highlighting regulatory gaps in platform oversight, cross-border enforcement and consumer protection when AI lowers the cost of sophisticated impersonation. It raises questions for bar regulators, and for online-platform rules on due diligence and know-your-customer checks for legal services.
Eversheds Sutherland – “Commercially Connected” update on AI, labour and contracts. Eversheds Sutherland’s Commercially Connected bulletin for 26 November 2025 flags AI as a central theme in commercial contracting and labour relations. The note points to increasing client demand for clauses addressing AI Act and Data Act compliance, allocation of IP in AI-assisted deliverables, and staff-consultation duties where automation affects working conditions. It illustrates how AI-related risk allocation is now routine in mainstream commercial and employment contracting, rather than a niche issue.
Adoption of AI
India (Maharashtra) – development of an AI-enabled policing model. Reporting from Indian Masterminds describes a planned “AI policing model” in Maharashtra that will combine predictive analytics, crime-pattern analysis and resource-allocation tools. Officials emphasise efficiency and proactive policing, but civil-society voices and legal commentators are likely to focus on transparency, due-process safeguards and the risk of reinforcing bias in criminal-justice systems – themes closely aligned with work on AI and justice under the EU AI Act and by the Council of Europe and the OECD.
UK / techUK – public sector AI “moment” and governance framing. A techUK commentary released today argues that the UK public sector is at an “AI moment”, urging departments to prioritise responsibility over pure efficiency gains. The piece highlights the need for clear accountability structures, procurement criteria that embed human-rights and fairness considerations, and alignment with emerging UK and EU AI governance frameworks when deploying AI and automation across services.
Events
European Parliament AFCO – workshop on institutional aspects of AI (3 December, Brussels / hybrid). The European Parliament’s Committee on Constitutional Affairs (AFCO) announced a workshop on “Institutional aspects of Artificial Intelligence in the context of European integration”, scheduled for 3 December 2025. Organised by the Policy Department for Justice, Civil Liberties and Institutional Affairs, it will examine how AI affects EU institutional design, democratic oversight and the separation of powers, and should be of direct interest to AI governance researchers.
techUK – upcoming AI webinars (late November). techUK’s AI Hub lists several imminent webinars, including a session on the Department for Transport’s “Transport AI Action Plan” (27 November) and events on AI in children’s social care and “frugal AI”. These events focus on sector-specific AI deployment, governance responsibilities of public bodies and suppliers, and practical lessons for AI implementation within UK regulatory parameters.
Global AI Confex – AI in data privacy and cyber security (26 November, online). The “AI in Data Privacy & Cyber Security” Global AI Confex runs today as a virtual event, bringing together DPOs, in-house counsel and cyber-security leads to discuss how AI is reshaping data-governance, privacy compliance and security risk. Even where live participation is no longer possible, materials and recordings are likely to offer practical case-studies for AI governance in high-risk data environments.
Takeaway
Today’s picture is one of tightening soft law and guidance rather than headline legislation. Professional bodies (Bar Council), regulators (USPTO, Thai DPA) and advisory practices are converging on the message that AI is a powerful tool for humans, not a legal actor in its own right, and that responsibility for accuracy, consent and rights protection remains firmly with human actors. For AIJurium, the central threads to track are: (1) evolving professional-ethics baselines for lawyers and patent practice; (2) strong enforcement signals around biometric and sensitive-data use; and (3) growing expectations that AI deployments be accompanied by structured impact assessments (FRIA, DPIA) and by explicit contractual and procurement safeguards.
Sources: GOV.UK, DSIT, UK Parliament, Bar Council, Legal Futures, USPTO, Reuters, DL News, ABA Journal, Bureau of Investigative Journalism, ActiveMind.legal, npj Digital Medicine, techUK, European Parliament, Edict Events, HRReview