Italy’s data protection authority has imposed a €5 million ($5.64 million) fine on the developer of Replika, an artificial intelligence chatbot, for violating user privacy regulations. The San Francisco-based startup, launched in 2017, offers users personalized AI avatars marketed as “virtual friends” aimed at improving emotional well-being. However, Italian regulators found that the company had failed to establish a legal basis for processing user data and had not implemented safeguards to prevent children from accessing the service.
In February 2023, the Italian privacy watchdog, Garante, ordered Replika to suspend operations in the country over potential risks to minors. After further investigation, the regulator concluded that the company behind the app, Luka Inc., lacked an adequate age-verification mechanism, allowing underage users to access and interact with the chatbot; that finding prompted the fine. Replika has yet to respond to Reuters’ request for comment on the penalty.
Beyond the financial sanction, Garante has also launched a separate probe into whether Replika’s AI systems, particularly the training of its language model, comply with European Union data protection standards. The investigation will focus on how personal data is used during the training of generative AI, a growing area of concern for regulators across Europe.
Garante has been at the forefront of enforcing AI-related data privacy rules in the EU. It made headlines in 2023 when it briefly banned ChatGPT over similar privacy concerns, later fining its developer, OpenAI, €15 million. The agency’s latest move signals ongoing scrutiny of AI technologies that process personal data, especially those accessible to minors without sufficient protections.
