EU AI Act Faces Major Overhaul: High-Risk Rules Delayed to 2027 as Europe Tightens Ban on Deepfake Nudity
This omnibus simplification package, launched by the European Commission's November 2025 digital omnibus, is racing toward a plenary vote on March 26. If approved, trilogues with the Council, whose position dropped March 13, could reshape compliance before the crunch. Providers get a breather on watermarking AI-generated audio, images, video, or text, with MEPs eyeing November 2, 2026, earlier than the Commission's February 2027 pitch. No more mandatory AI literacy training for staff; instead, the Commission and member states will foster it. And the EU AI Office? It's gaining exclusive authority over systems built on general-purpose AI models, sidelining some national watchdogs except in critical areas like infrastructure or law enforcement.
Think about it, listeners: energy giants from exploration to grid operations, per Baker Botts analysis, face fines of €15 million or 3% of global turnover if high-risk tools falter come deadline. Legal Nodes urges audits now: map every AI system, from in-house models to third-party chatbots, and classify each by risk tier: unacceptable, like social scoring (banned since February 2025); high-risk, demanding risk management and human oversight; limited-risk, needing transparency labels; or minimal, like spam filters. Extraterritorial reach snags non-EU firms serving Europe; appoint EU representatives or bust.
As Oliver Patel notes on his Substack, today's Act stands firm until amendments land; August 2, 2026, looms for the high-risk rollout. Europe's risk-based fortress contrasts with Trump's March 20 White House AI framework, raising the question: will phased enforcement stifle innovation or safeguard rights? Control Risks highlights regulatory sandboxes for testing, easing data friction. In Brussels' corridors, this isn't just bureaucracy; it's wiring our future, where AI amplifies humanity or erodes it.
Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.
Some great Deals https://amzn.to/49SJ3Qs
For more check out http://www.quietplease.ai
This content was created in partnership with, and with the help of, artificial intelligence (AI).