LA ERA · México
May 7, 2026 · Updated 09:26 AM UTC
Technology

EU AI Act forces tech giants to master machine unlearning

European regulators will require AI companies to delete specific user data from trained models starting in 2026 or face fines reaching 7% of their global revenue.

Matías Olivares

2 min read

AI data center

Starting in 2026, tech companies operating in the European Union must ensure their artificial intelligence models can effectively 'forget' private data. The EU AI Act mandates that companies provide users with the right to be forgotten, forcing developers to implement complex machine unlearning techniques to comply with strict privacy standards.

Failure to scrub personal information from an AI’s neural network could result in severe financial penalties. Under the new regulations, companies face fines of up to 7% of their global annual turnover if they cannot prove that specific user data no longer influences a model’s output.

Technical hurdles for AI transparency

Removing information from a Large Language Model (LLM) presents a significant engineering challenge. Unlike traditional databases, where a record can simply be deleted, AI models absorb training data into their internal weights, so targeted removal is impossible without risking damage to the model's overall capabilities.

To meet the 2026 deadline, developers are turning to selective forgetting methods. One approach, known as 'sharding and slicing,' involves breaking training data into isolated segments. If a user requests data removal, the company re-trains only the specific slice containing that information, preserving the rest of the model.
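The sharding approach described above can be illustrated with a short sketch. All names here are hypothetical, and the "training" step is a stand-in for fitting a real sub-model; the point is only the routing logic, under which a deletion request triggers retraining of a single shard rather than the whole system.

```python
# Sketch of a sharded training setup for selective forgetting.
# Hypothetical names; a production system would train a neural
# sub-model per shard and aggregate their predictions.
from dataclasses import dataclass


@dataclass
class Shard:
    records: dict          # user_id -> training record
    model: object = None   # sub-model trained on this shard only

    def train(self):
        # Stand-in for real training: the "model" is just a
        # deterministic summary of the shard's current records.
        self.model = sorted(self.records)


class ShardedEnsemble:
    def __init__(self, data, n_shards=4):
        buckets = [dict() for _ in range(n_shards)]
        for uid, rec in data.items():
            # Route each user's data to exactly one shard.
            buckets[hash(uid) % n_shards][uid] = rec
        self.shards = [Shard(b) for b in buckets]
        for shard in self.shards:
            shard.train()

    def forget(self, uid):
        # Only the shard holding this user is retrained;
        # every other shard (and its model) stays untouched.
        shard = self.shards[hash(uid) % len(self.shards)]
        if uid in shard.records:
            del shard.records[uid]
            shard.train()


ens = ShardedEnsemble({"alice": 1, "bob": 2, "carol": 3}, n_shards=2)
ens.forget("alice")
```

After `forget("alice")`, no shard's records or retrained model reflect Alice's data, while shards that never contained her are never touched, which is what keeps the cost of a deletion request far below full retraining.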

Other companies are testing direct weight editing. This process functions like neurosurgery, where developers locate the exact neural connections where a specific concept is stored and deactivate them. Additionally, the law requires high-risk AI systems to maintain strict traceability records, forcing companies to account for every piece of data used in their training sets.
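The "neurosurgery" analogy for weight editing can be made concrete with a toy linear model. This is only a schematic: real weight-editing research operates on transformer layers and identifies which parameters encode a concept before rewriting them, whereas here the "concept" is simply one input feature whose connections we sever.

```python
# Toy illustration of direct weight editing: zeroing the parameters
# through which one input feature influences a tiny linear model.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # weights: 5 input features -> 3 outputs


def deactivate_feature(weights, feature_idx):
    """Return a copy of the weights with every connection from
    one input feature set to zero, removing its influence."""
    edited = weights.copy()
    edited[:, feature_idx] = 0.0  # sever the feature's connections
    return edited


W_edited = deactivate_feature(W, 2)
x = np.ones(5)
# Feature 2 no longer contributes: the output equals the sum of
# the remaining columns alone.
assert np.allclose(W_edited @ x, np.delete(W_edited, 2, axis=1).sum(axis=1))
```

The hard part in practice is not the zeroing itself but the localization step, namely proving that the removed parameters were the ones storing the targeted information and nothing else, which is also what regulators would need companies to demonstrate.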

Latin American governments are closely monitoring these European standards to inform their own legislative agendas. Chile is currently advancing a bill in the Senate that mirrors the EU's risk-based classification system. Similarly, the Brazilian Senate has approved Bill 2338/2023, which aligns with international standards on algorithmic transparency.

Global tech giants like Google, OpenAI, and Meta often standardize their processes to match the strictest regional requirements. As a result, the 'right to be forgotten' button is expected to roll out across these companies' applications in markets like Chile and Mexico alongside their European deployments.

Full enforcement of the EU AI Act begins in August 2026, with mandatory watermarking for AI-generated content following in November. By 2027, the regulations will expand to cover specialized AI in the healthcare and justice sectors.
