9 AI Browser Tools Fail Privacy Test, Study Finds

Paul

- Nine of the ten AI-powered browser extensions tested collected sensitive user data, raising regulatory concerns.
- Perplexity AI stood out as the only tool to meet privacy standards.
On August 14, 2025, CoinDesk, Yahoo News, and EurekAlert! reported on a study from University College London and Mediterranea University of Reggio Calabria that highlighted critical privacy risks in popular AI-assisted browser extensions. The study found that 9 out of 10 widely used tools collect sensitive user data, meaning tools including OpenAI’s ChatGPT and Microsoft Copilot may violate privacy laws such as GDPR and HIPAA.
Researchers conducted the investigation on August 12 and 13, evaluating the privacy practices of 10 AI-enabled browser extensions, and found that tools such as Merlin AI, Sider, and TinaMind harvested personal information from seemingly private webpages. The collected data included medical histories, banking credentials, academic transcripts, and social security numbers. By simulating routine online activities, such as accessing health portals and online banking, the researchers found that several extensions transmitted entire webpage contents to their own servers or to third-party platforms. For instance, Merlin AI captured health records and social security data, while other tools forwarded user prompts and metadata, including IP addresses, to Google Analytics for targeted advertising.
Notably, Copilot and Monica retained detailed chat logs even after users ended their browser sessions, heightening the risk of data exposure. The researchers also observed that ChatGPT, when integrated into browser settings, built user profiles based on inferred demographics such as age, gender, income, and interests to tailor its responses. In contrast, Perplexity AI was the only browser tool in the study that did not collect or transmit sensitive user data.
Anna Maria Mandalari, the study’s senior author, highlighted the unprecedented level of access these tools have to users’ online activities and warned that many data handling practices likely violate US and European privacy regulations. Moreover, the study found that company privacy policies often admitted to extensive data collection, including personal and transactional records. The researchers will present these findings at the USENIX Security Symposium, a move that will likely intensify scrutiny of AI-driven data practices within the tech sector.