Understanding the Legal Struggles of AI Tools with GDPR: The Case of ChatGPT

Since its debut in November 2022, ChatGPT has been at the forefront of AI technology, captivating users worldwide with its conversational capabilities. However, this innovation brings with it a significant challenge: compliance with the European Union's stringent General Data Protection Regulation (GDPR). EU data protection law has required accuracy in personal data since the 1995 Data Protection Directive, and the GDPR, which has applied since May 2018, carries that requirement forward: personal data must be accurate, processed transparently, and correctable at the data subject's request.

AI systems like ChatGPT generate responses by predicting likely word sequences, an approach that inherently risks producing incorrect information, commonly referred to as “hallucinations” in AI parlance. OpenAI, the organization behind ChatGPT, acknowledges that the tool can generate false information, and reports suggest that chatbots invent facts in anywhere from 3 to 27 percent of responses. This becomes a critical issue when such output concerns personal details, which are protected under the GDPR.
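To make the mechanism concrete, here is a minimal sketch of next-token prediction in Python. The probability table and the example prompt are invented for illustration and have nothing to do with OpenAI's actual model; the point is simply that generation selects statistically plausible continuations, with no step that verifies them against facts about a real person.

```python
import random

# A toy next-token predictor (invented probabilities, not OpenAI's model).
# Each continuation is chosen by statistical likelihood alone; nothing here
# checks the output against facts about a real person.
NEXT_TOKEN_PROBS = {
    ("born", "in"): [("1964", 0.40), ("1971", 0.35), ("Vienna", 0.25)],
}

def sample_next(context):
    """Sample the next token in proportion to its (toy) probability."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS[context])
    return random.choices(tokens, weights=weights, k=1)[0]

# Asked for a personal detail, the model completes the sentence with a
# plausible-looking value, whether or not it is true of the person in question.
print("He was born in", sample_next(("born", "in")))
```

Because each token is chosen for plausibility rather than verified accuracy, a fabricated personal detail such as a date of birth is, from the model's perspective, a perfectly acceptable answer, and that is exactly the kind of data the GDPR's accuracy principle covers.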

EU law is particularly stringent about the accuracy of personal data. Article 5(1)(d) of the GDPR requires that personal data be accurate, and Article 16 gives individuals the right to have inaccurate data rectified. Furthermore, Article 15 empowers individuals to request access to the personal data held about them, a requirement that seems to clash with the capabilities of current AI technologies.

The potential legal implications are profound. Maartje de Graaf, a noted data protection lawyer with noyb, emphasizes that producing or maintaining incorrect personal information can have severe consequences under EU law. The challenges are amplified when AI systems such as ChatGPT fail to meet these legal standards, suggesting a fundamental conflict between technological capabilities and regulatory requirements.

Recent actions by European privacy watchdogs underscore the urgency of these issues. The Italian Data Protection Authority (the Garante), for instance, imposed a temporary restriction on ChatGPT's processing of personal data in March 2023, citing among other concerns the inaccuracy of the data the system produces. Similarly, the European Data Protection Board (EDPB) has established a task force specifically to address ChatGPT, indicating a concerted effort to tackle these challenges.

The ongoing case brought by noyb against OpenAI in Austria further illustrates the tension between AI development and GDPR compliance. Noyb has filed a complaint with the Austrian Data Protection Authority (DSB), asking it to investigate OpenAI's data processing and enforce GDPR compliance, and stressing the need for transparency in AI data processing and the right of individuals to access and rectify the data held about them.

This situation raises a crucial question for the future of AI development in strict regulatory environments like the EU: can AI tools adapt to meet the rigorous demands of laws like the GDPR, or will legal standards need to evolve to accommodate new technological realities? As AI continues to advance, striking a balance that respects both individual rights and technological innovation remains a pivotal challenge for developers, regulators, and legal experts alike.
