Why Argentina's Pioneering Privacy Law Is Now Playing Defense Against AI
Argentina broke ground in 2000 with Law 25,326, setting a strict privacy baseline before most of the region had a plan. That early move earned an EU adequacy decision in 2003 and created a culture of rights around consent, purpose limitation, and data quality.
Two decades on, AI has exposed the gaps. Large-scale training, opaque models, and automated decisions strain rules written for a different era. The state is now moving to close the distance with targeted transparency measures, institutional capacity building, and a broader legislative refresh.
The legal baseline that still matters
Law 25,326 grants access, correction, and deletion rights via habeas data, requires database registration, and restricts international transfers. The Agency for Access to Public Information (AAIP) serves as the supervisory authority and has leaned heavily on guidance and education.
In 2019, AAIP Resolution 4/2019 recognized a right to request an explanation of the logic behind decisions made solely by automated processing when they significantly and adversely affect a person. That principle mirrors trends in the EU's GDPR and the Council of Europe's Convention 108+.
Where generative AI breaks assumptions
Generative AI repurposes massive datasets, often beyond the consent originally given. That collides with purpose limitation and transparency duties.
Bias, error rates, and lack of explainability create legal exposure in profiling, credit, benefits, hiring, and public safety. Even when principles apply on paper, operationalizing access, correction, and deletion in model pipelines is hard without clear guidance and auditable workflows.
Courts are signaling the floor
Buenos Aires' facial recognition system (SRFP) was halted in 2023 after the Court of Administrative, Tax and Consumer Relations found insufficient safeguards, risk of misidentification, and rights violations. The message: deploy high-risk AI without guardrails and expect injunctions.
For counsel, that case reads like a checklist: prove necessity and proportionality, document accuracy, enable redress, and ensure supervision. Otherwise, constitutional litigation becomes the enforcement mechanism.
The modernization push: transparency first, regulation next
The AAIP launched the Program for Transparency and Personal Data Protection in the Use of AI (Resolution 161/2023). It sets up an AI Observatory, issues non-binding guidelines for the full AI lifecycle, convenes a multidisciplinary advisory council, and drives capacity building across agencies.
Public bodies are expected to document automated systems and publish transparency criteria on the National Transparency Portal. In parallel, Administrative Decision 750/2023 created an interministerial AI roundtable to align policy and technical standards across the state.
What legal teams should do now
- Map AI use cases and data flows. Document lawful bases and refresh consent where model training or secondary use changes purpose.
- Update privacy notices with a clear automated decision section. Implement processes to honor explanation, access, correction, and deletion requests across source data, embeddings, and outputs (a deletion-propagation sketch follows this list).
- Run AI/data protection impact assessments for high-risk use (biometrics, profiling, eligibility decisions). Record tradeoffs, mitigations, and approvals.
- Segment and minimize data. Prefer anonymization or strong pseudonymization where feasible (a keyed-hash sketch follows this list); track re-identification risks.
- Treat biometric and children's data as high sensitivity. Apply stricter consent, storage limits, and audit trails.
- Vendor due diligence: data provenance, model documentation, fine-tuning data rights, bias testing, security, and subprocessor chains. Bake in audit and termination rights.
- Set up algorithm audits: measure error rates, disparate impact, and drift (a disparate impact sketch follows this list). Establish retraining and rollback criteria with change logs.
- Cross-border transfers: validate adequacy or put contracts and safeguards in place. Keep transfer impact assessments on file.
- For public entities: register databases where required and publish automated decision disclosures. Maintain citizen-facing appeal channels.
- Litigation readiness: standardize logs for automated decisions (a logging sketch follows this list), retain model versions, and preserve evidence for habeas data and injunctive relief actions.
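To make the rights-handling item concrete, here is a minimal Python sketch of a deletion request that propagates from a source record store to the embeddings derived from it. Everything here is hypothetical (the in-memory stores, `delete_subject`, `DeletionReceipt`); a real pipeline would add requester identity verification, tamper-evident audit logging, and a policy for models already trained on the data.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical in-memory stand-ins for a source database and a vector store.
SOURCE_DB: dict[str, dict] = {"subj-001": {"name": "Ana", "dni": "12345678"}}
EMBEDDING_INDEX: dict[str, list[str]] = {"subj-001": ["emb-1", "emb-2"]}
VECTOR_STORE: dict[str, list[float]] = {"emb-1": [0.1, 0.2], "emb-2": [0.3, 0.4]}

@dataclass
class DeletionReceipt:
    subject_id: str
    deleted_records: int
    deleted_embeddings: int
    completed_at: str

def delete_subject(subject_id: str) -> DeletionReceipt:
    """Honor a deletion request across source data and derived embeddings."""
    deleted_records = 1 if SOURCE_DB.pop(subject_id, None) is not None else 0
    embedding_ids = EMBEDDING_INDEX.pop(subject_id, [])
    for eid in embedding_ids:
        VECTOR_STORE.pop(eid, None)
    return DeletionReceipt(subject_id, deleted_records, len(embedding_ids),
                           datetime.now(timezone.utc).isoformat())

print(delete_subject("subj-001"))
```

The returned receipt doubles as evidence for habeas data proceedings: it records what was deleted and when.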
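For the minimization item, keyed pseudonymization is one common technique: a deterministic HMAC lets records still be joined while the raw identifier stays out of the training pipeline. A minimal sketch, assuming the key lives in a key management service separate from the data; note this is pseudonymization, not anonymization, so re-identification risk still has to be tracked.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, key: bytes) -> str:
    """Deterministic keyed pseudonym: same identifier + key -> same token."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustration: replace a national ID before data enters a training pipeline.
secret_key = b"fetch-from-a-key-management-service"  # never hard-code in production
record = {"dni": "12345678", "credit_score": 612}
record["dni"] = pseudonymize(record["dni"], secret_key)
print(record["dni"][:16])  # stable token, not reversible without the key
```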
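For the audit item, a simple starting metric is the disparate impact ratio: the positive-outcome rate of a protected group divided by that of a reference group. A sketch follows; the 0.8 threshold is borrowed from US employment enforcement practice as an illustrative assumption, not an Argentine legal standard.

```python
def disparate_impact_ratio(outcomes: list[tuple[str, bool]],
                           protected: str, reference: str) -> float:
    """Positive-outcome rate of `protected` divided by that of `reference`."""
    def positive_rate(group: str) -> float:
        flags = [ok for g, ok in outcomes if g == group]
        return sum(flags) / len(flags) if flags else 0.0
    ref_rate = positive_rate(reference)
    return positive_rate(protected) / ref_rate if ref_rate else 0.0

# Toy data: (group label, approved?) per credit decision.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
if ratio < 0.8:  # four-fifths rule of thumb; set thresholds with counsel
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```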
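For the litigation readiness item, an append-only, structured log per automated decision is the artifact courts and the AAIP will ask to see. A minimal sketch with illustrative field names; a production system would write to tamper-evident storage and pseudonymize the subject identifier as shown above.

```python
import json
from datetime import datetime, timezone

def log_automated_decision(path: str, *, subject_token: str, model_version: str,
                           inputs: dict, output: str, human_review: bool) -> None:
    """Append one JSON line per decision so it can be replayed and explained later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_token": subject_token,   # pseudonymized, as in the sketch above
        "model_version": model_version,   # retain the matching model artifact
        "inputs": inputs,
        "output": output,
        "human_review": human_review,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_automated_decision("decisions.jsonl",
                       subject_token="9f2c41d0", model_version="credit-v3.1",
                       inputs={"income_band": "B", "tenure_months": 14},
                       output="denied", human_review=False)
```

Pairing each entry with the retained model version makes a contested decision reproducible and explainable later.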
Bills to watch
Congress is considering multiple proposals to amend or replace Law 25,326. Themes include algorithmic transparency, stricter consent for children and adolescents, anonymization standards, biometric safeguards, and explicit rules for automated decision-making and profiling.
Expect closer alignment with international norms, clearer duties for explainability, and stronger remedies. Even before passage, these proposals are a preview of future compliance expectations.
Practical outlook
Argentina is tightening transparency and oversight while it updates the core statute. For now, soft-law measures plus court scrutiny set the bar: be able to explain, justify, and correct automated decisions that affect people.
Bottom line for counsel: Build explainability and rights handling into your AI stack before regulators force it. Document everything. Treat facial recognition and similar high-risk tools as exceptional, not routine, and be prepared to prove necessity and safeguards on demand.