• U.S. authors’ case advances. A New York federal judge declined OpenAI’s bid to toss claims that ChatGPT-generated text infringes authors’ copyrights, keeping key allegations alive. Reuters

  • DOE and AMD unveil public AI supercomputers “Lux” and “Discovery.” The U.S. Department of Energy introduced two new systems at Oak Ridge National Laboratory to advance AI-driven science. Governance implications: public-sector compute will embed compliance, auditing, and traceability protocols. Energy.gov.

  • EU launches “Scale-Up Europe Fund.” The European Commission announced a €3.75 billion investment initiative with the European Investment Bank to support late-stage tech companies, including AI startups, and to strengthen Europe’s autonomy in compute and data infrastructures. The fund complements AI Act implementation by linking financing access to risk-management and transparency readiness. European Commission – Digital Strategy.


Regulation

  • EDPS issues updated Generative AI guidance. The European Data Protection Supervisor published comprehensive guidance for EU institutions deploying generative AI, focusing on transparency duties, lawful-basis assessment, and accountability documentation. The checklist format operationalises governance and data-protection obligations in line with Regulation 2018/1725 and the EU AI Act’s emerging requirements. EDPS Guidance.


Cases

  • Authors v OpenAI. Court allows copyright claims tied to AI-generated summaries to proceed past the pleadings stage. Reuters

  • noyb v Clearview. Privacy group files a criminal complaint alleging unlawful biometric scraping, testing GDPR-aligned criminal liability exposure for AI data practices. Reuters


Events

  • WIPO Conversation on IP and Frontier Technologies (Geneva/Online). The World Intellectual Property Organization’s ongoing session explores synthetic media, digital-replica rights, and transparency frameworks for generative systems. WIPO.

  • Cambridge Forum on AI Law and Governance – Call for Papers. The new call, “Human Rights and AI-Powered Content Moderation”, invites submissions on regulatory design and rights-preserving enforcement in online-platform AI governance. Cambridge University Press & Assessment.


Academia

  • Herbosch, ‘Beyond the False Dichotomy: Regulating AI Safety, Ethics and Fundamental Rights’. Frames ex-ante design duties versus liability levers under the EU AI Act. SSRN

  • Werner, ‘The Impossible Act: Structural Incompatibilities Within EU AI Regulation’. Critiques the AI Act’s attempt to fuse product-safety logic with fundamental-rights protection and the compliance tensions this creates. SSRN


Business

  • Oracle signals surging enterprise AI demand. Market read-through: compliance-by-design features and auditable pipelines are increasingly commercial differentiators.

  • U.S. public supercomputing for AI (DOE/ORNL). Public-sector investment in compute reinforces expectations for data governance, reproducibility, and safety evaluations at scale.


Adoption of AI

  • Public bodies tightening playbooks. EDPS’s gen-AI guidance nudges EU institutions toward explicit documentation, model access controls, and data-subject-facing transparency. 

  • Litigation pressure points. Active filings over outputs (authors) and inputs (biometric scraping) show courts testing accountability across the AI lifecycle. 


Takeaway

Governance moves are converging on one message: if you build or deploy AI, expect to evidence what you did, why you did it, and how you protect rights. At the same time, courts are beginning to sketch liability contours for both training data and model outputs, raising the compliance stakes for providers and deployers alike.


Sources: EDPS; Energy.gov; ORNL; European Commission – Digital Strategy; Reuters; Courthouse News; RobbyStarbuck.com (complaint PDF); WIPO; SSRN