The European Union has finalized a critical update to its landmark artificial intelligence rulebook, banning non-consensual sexual deepfakes outright and pushing back compliance deadlines for high-risk AI systems by roughly one year. The trilogue agreement between the Council of the EU and the European Parliament reshapes the enforcement timeline of the AI Act while drawing a bright red line around intimate synthetic content, a move with direct implications for platform operators, AI developers, and residents across Italy.
Why This Matters
• Explicit ban on sexual deepfakes: Generating non-consensual intimate images or child sexual abuse material using AI is now a prohibited practice under Article 5 of the AI Act.
• Delayed compliance deadlines: High-risk AI systems will become subject to binding rules from December 2, 2027 (standalone systems) and August 2, 2028 (AI embedded in regulated products).
• Watermarking obligation arrives sooner: AI-generated content must be labeled in a machine-readable format by December 2, 2026.
What Triggered the Accelerated Ban
The decision to explicitly prohibit AI tools that create sexually explicit or intimate material without consent, including so-called "nudifier" applications, follows a series of high-profile controversies involving chatbots and image generators. European lawmakers flagged the proliferation of apps that digitally undress individuals in photographs, a practice that poses severe risks to personal dignity and psychological safety and fuels gender-based violence.
By codifying this ban in the AI Act's prohibited practices list, the EU is creating a clear legal basis that member states, including Italy, can enforce through national courts. Violations carry steep penalties: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious infractions. This aligns the AI Act with the Digital Services Act, which already requires platforms to remove illegal content within strict timeframes.
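To make the "whichever is higher" cap concrete, here is a minimal arithmetic sketch in Python; the function name and the turnover figures are illustrative, not drawn from the regulation.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious infractions: EUR 35 million or
    7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A firm with EUR 100M turnover hits the EUR 35M floor (7% is only EUR 7M);
# one with EUR 2B turnover faces up to EUR 140M (7% exceeds the floor).
print(f"{max_fine_eur(100_000_000):,.0f}")    # 35,000,000
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```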
The New Compliance Calendar for High-Risk Systems
The revised enforcement roadmap grants companies and public agencies additional breathing room to prepare for the AI Act's stringent obligations on high-risk systems—those used in critical infrastructure, employment, education, law enforcement, and border control.
| System Type | Original Deadline | New Deadline |
|-------------|-------------------|--------------|
| Standalone high-risk AI | August 2, 2026 | December 2, 2027 |
| AI embedded in regulated products (e.g., medical devices) | August 2, 2027 | August 2, 2028 |
| Watermarking for generative AI content | Mid-2025 | December 2, 2026 |
European regulators justified the delay by citing the need to finalize harmonized technical standards and ensure uniform application across all 27 member states. For Italy-based businesses deploying AI in recruitment, credit scoring, or predictive policing, this extension provides critical time to conduct conformity assessments, establish risk management protocols, and prepare technical documentation.
The deepfake ban and restrictions on biometric surveillance are now part of the AI Act's core prohibited practices framework, with enforcement ramping up as regulators establish implementation mechanisms.
Impact on Residents and Digital Rights in Italy
For individuals living in Italy, the deepfake prohibition offers tangible legal recourse against a growing form of digital abuse. Victims of non-consensual intimate imagery generated by AI can now invoke an explicit EU-level prohibition in addition to existing national laws on privacy violations and image-based sexual abuse.
The watermarking requirement, set to take effect in late 2026, will make it easier for users to distinguish authentic media from AI-generated content. Whether reading news articles, viewing social media posts, or encountering political campaign materials, Italians will benefit from mandatory disclosure tags that flag synthetic media. This transparency measure aims to curb misinformation, especially ahead of national and European elections.
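The Act requires machine-readable marking but does not, on its own, prescribe a single technical format. As one illustration, the sketch below attaches a disclosure tag to a PNG's metadata using the Pillow library; the `ai-disclosure` key and its schema are hypothetical, and production systems would more likely adopt an industry standard such as C2PA content credentials.

```python
# Illustrative PNG-only sketch; the tag name and schema are hypothetical.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Attach a machine-readable 'AI-generated' disclosure tag."""
    meta = PngInfo()
    meta.add_text("ai-disclosure", json.dumps({
        "ai_generated": True,    # disclosure flag
        "generator": generator,  # model or tool identifier
    }))
    Image.open(src_path).save(dst_path, pnginfo=meta)  # dst_path must be .png

def is_labeled_ai_generated(path: str) -> bool:
    """Check for the disclosure tag when reading a PNG."""
    return "ai-disclosure" in Image.open(path).text
```

Metadata tags like this are trivial to strip, so compliant deployments would likely pair them with pixel-level watermarks that survive re-encoding.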
For companies operating in Italy—from tech startups in Milan to regional healthcare providers deploying diagnostic AI—the extended deadlines reduce short-term compliance pressure but do not eliminate the need for immediate preparation. Firms must begin mapping which of their systems qualify as high-risk under Annex III of the AI Act, assessing data quality, implementing human oversight mechanisms, and registering systems in the forthcoming EU AI database.
How Italy's Enforcement Landscape Will Shift
Italy's Garante per la Protezione dei Dati Personali (Data Protection Authority) is expected to play a central role in enforcing the AI Act domestically, working alongside newly designated AI supervisory bodies. The prohibition on sexual deepfakes will likely intersect with Italy's existing criminal code provisions on defamation, revenge porn, and child exploitation, creating overlapping enforcement pathways.
Legal experts in Rome have noted that the explicit inclusion of "nudifier" tools addresses a loophole that previously allowed some developers to claim their software had legitimate use cases. Now, any application designed or marketed for creating non-consensual intimate imagery is categorically illegal, regardless of claimed intent.
What Businesses and Developers Must Do Now
Even with the extended deadlines, Italian enterprises should not delay preparation. Key action items include:
• Conduct AI system inventories to identify which applications qualify as high-risk under the Act's classification scheme (a minimal triage sketch follows this list).
• Establish governance frameworks for human oversight, data quality assurance, and post-market monitoring.
• Review contracts with third-party AI vendors to ensure compliance responsibilities are clearly allocated.
• Implement watermarking capabilities for any generative AI tools that produce text, images, audio, or video, targeting readiness by late 2026.
• Train legal and compliance teams on the prohibited practices list, especially if operating in sectors like recruitment, education, or content moderation.
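For the inventory step referenced above, the following is a minimal triage sketch in Python. The area set contains only the Annex III domains named in this article, and string matching is a rough first screen: the legal test turns on a system's intended purpose, so anything flagged here still needs legal review.

```python
from dataclasses import dataclass

# Annex III areas mentioned in this article; the real annex is longer.
HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "employment",        # covers recruitment and worker management
    "education",
    "law_enforcement",
    "border_control",
    "credit_scoring",
}

@dataclass
class AISystem:
    name: str
    vendor: str            # third-party systems also need contract review
    deployment_area: str   # where the system is actually used
    human_oversight: bool  # is a human reviewer in the loop?

def triage(inventory: list[AISystem]) -> list[AISystem]:
    """Flag systems that are candidates for a full conformity assessment."""
    return [s for s in inventory if s.deployment_area in HIGH_RISK_AREAS]

inventory = [
    AISystem("cv-screener", "VendorCo", "employment", human_oversight=False),
    AISystem("faq-chatbot", "in-house", "customer_service", human_oversight=True),
]
for s in triage(inventory):
    print(f"high-risk candidate: {s.name} (area: {s.deployment_area})")
```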
For AI startups in Italy's growing innovation hubs—particularly those developing generative models or computer vision applications—the deepfake ban imposes a clear design constraint. Any feature that could enable non-consensual intimate content generation must be robustly prevented at the architecture level, not merely discouraged through terms of service.
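In practice, "prevented at the architecture level" means a refusal gate that runs server-side before any inference, not a clause buried in the terms of service. The sketch below is deliberately simplified: real deployments use trained safety classifiers rather than keyword lists, and `run_model` is a hypothetical stand-in for the generation backend.

```python
# Toy denylist standing in for a trained intent/NSFW classifier.
BLOCKED_INTENTS = ("undress", "nudify", "remove clothing")

class ProhibitedRequestError(Exception):
    """Raised before any model inference is attempted."""

def run_model(prompt: str, photo: bytes | None) -> bytes:
    """Hypothetical stand-in for the actual generation backend."""
    return b"<image bytes>"

def generate_image(prompt: str, reference_photo: bytes | None = None) -> bytes:
    """Server-side entry point: clients cannot route around this guard."""
    if any(term in prompt.lower() for term in BLOCKED_INTENTS):
        raise ProhibitedRequestError("request matches a prohibited use")
    # Editing photos of real people is the highest-risk path; production
    # systems would run identity and NSFW classifiers on the image here.
    return run_model(prompt, reference_photo)

try:
    generate_image("nudify this photo", reference_photo=b"\x89PNG...")
except ProhibitedRequestError as err:
    print("blocked:", err)
```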
The Road Ahead
The AI Act's phased rollout reflects the EU's attempt to balance innovation with rights protection. The one-year delay for high-risk system compliance buys time for industry and regulators alike, but it does not signal leniency. Brussels has already opened investigations into platforms over alleged violations, and national authorities are ramping up enforcement capacity.
For residents of Italy, the deepfake prohibition is both a symbolic and practical victory—codifying digital dignity into law and providing enforceable remedies. For businesses, the clock is ticking: compliance infrastructure cannot be built overnight, and the penalties for failure are designed to sting.
As the AI Act's enforcement machinery gears up, Italy will become a testing ground for how Europe's ambitious regulatory vision translates into everyday reality—from courtroom prosecutions of deepfake abusers to boardroom decisions about which AI systems to deploy, and how.