At 2 a.m., a lonely university student confides in a chatbot about depression. A young founder shares confidential business plans before a pitch meeting. A father asks for advice about a family crisis he cannot discuss with friends. Across the world, millions of people now speak to artificial intelligence with a level of honesty once reserved for therapists, spouses and diaries. A new lawsuit against OpenAI is exposing the unsettling fear behind that growing intimacy: that some of those deeply personal exchanges may not have been as private as users believed.

Millions of people type their deepest anxieties into chatbots on the assumption that the conversation ends there. A new lawsuit against OpenAI argues otherwise.
The maker of ChatGPT was sued this week in federal court in California over claims that it shares user conversations and personal data with Meta and Google through common website tracking tools, reviving a growing legal fight over what privacy means in the age of artificial intelligence.
The proposed class action, filed Wednesday in the U.S. District Court for the Southern District of California, accuses OpenAI of transmitting chatbot queries, email addresses and user identification data through technologies such as Facebook Pixel and Google Analytics embedded on the ChatGPT website.
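To see why embedded trackers matter, it helps to understand the mechanism the complaint describes. A tracking pixel is a small script a site embeds that reports page context back to a third party, typically including the full page URL. The sketch below is illustrative only, not OpenAI's actual code or the complaint's evidence; the endpoint, parameter names, and identifiers are hypothetical. The point it demonstrates is general: if a site encodes anything sensitive in a page URL, a third-party tracker loaded on that page can receive it.

```javascript
// Illustrative sketch of how a third-party tracking pixel reports
// page context. All names here are hypothetical, not drawn from
// the lawsuit or from any real tracker's API.

// A pixel script typically assembles a request to the tracker's
// collection endpoint, embedding the current page URL and any
// site-assigned user identifier as query parameters.
function buildPixelUrl(trackerEndpoint, pageUrl, userId) {
  const params = new URLSearchParams({
    dl: pageUrl, // "document location" — the page the user is viewing
    uid: userId, // identifier the site associates with this visitor
  });
  return `${trackerEndpoint}?${params.toString()}`;
}

// If the page URL itself carries user input, the tracker sees it too:
const url = buildPixelUrl(
  "https://tracker.example.com/collect",
  "https://chat.example.com/?q=private+question",
  "user-123"
);
// The resulting request transmits the full page URL — and anything
// embedded in it — to the third party's servers.
```

In practice the browser fires this request automatically when the page loads, which is why users never see the transmission happen; whether a given site's configuration actually forwards chat content is exactly the factual question a case like this would turn on.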
For users, the allegation cuts at the heart of an increasingly intimate relationship with artificial intelligence. People now routinely ask chatbots for financial advice, discuss medical concerns, upload sensitive documents and confess personal struggles to systems they regard as private confidants.
“The same is true of individuals, who increasingly rely on ChatGPT to gather information and advice on their most personal issues,” the complaint states.
“As such, personal privacy on ChatGPT is an issue with broad implications for individuals’ control of their privacy and personal information.”
The lawsuit echoes a separate complaint filed in San Francisco federal court earlier this year against Perplexity AI, which similarly accused the company of using hidden trackers that transmitted user interactions to Meta and Google.
Yet legal experts and cybersecurity researchers say the case against OpenAI may face steep hurdles.
“Using Google Analytics and Facebook’s tracking pixels is very common across most websites, no matter the industry. These are industry standard services, even though they’re definitely not very privacy-friendly,” said Aras Nazarovas, an information security researcher at Cybernews.
“Seems like a pretty weak case to me. OpenAI’s privacy policy does disclose that it shares your information with a ton of third parties, including advertisement partners,” Nazarovas said.
At the center of the dispute is a difficult question that courts are only beginning to confront: whether consumers truly understand the bargain they make when using free AI products.
Most users click through privacy policies without reading them, often agreeing to terms that permit some level of data collection for analytics, advertising or model improvement. Critics of the lawsuit argue that users who accept those terms cannot later claim complete surprise when tracking technologies are involved.
Privacy advocates counter that chatbot interactions differ fundamentally from ordinary web browsing because users increasingly treat AI systems like therapists, advisers and research assistants. The emotional intimacy of those exchanges, they say, raises the stakes.
The legal battleground is also familiar territory in California, where businesses have complained about a surge of lawsuits brought under the California Invasion of Privacy Act, or CIPA, a law passed in 1967 to combat wiretapping and eavesdropping on telephone calls.
In recent years, plaintiffs’ lawyers have increasingly used the statute to challenge modern website technologies such as cookies, pixels and session replay software.
“Plaintiffs’ attorneys are dusting it off in the modern era to challenge common website tracking tools,” the Fresno Chamber of Commerce said in a recent statement supporting efforts to reform the law.
Business groups and some lawmakers argue that CIPA was never designed for the internet era and that its broad language has enabled waves of lawsuits targeting routine digital advertising practices.
Anna Caballero, a California state senator, is now backing legislation that would create exemptions for ordinary commercial technologies such as chat functions, analytics tools and session replay software when used for legitimate business purposes.
Still, the lawsuits arrive at a moment of mounting public unease about how artificial intelligence companies handle data. As AI systems become more embedded in daily life, consumers are increasingly forced to confront an uncomfortable reality: convenience and privacy may no longer coexist as easily as they once imagined.

Faustine Ngila is the AI Editor at Impact Newswire, based in Nairobi, Kenya. He is an award-winning journalist specializing in artificial intelligence, blockchain, and emerging technologies.
He previously worked as a global technology reporter at Quartz in New York and Digital Frontier in London, where he covered innovation, startups, and the global digital economy.
With years of experience reporting on cutting-edge technologies, Faustine focuses on AI developments, industry trends, and the impact of technology on society.