AI chatbots are sharing people's phone numbers with strangers - and there's little victims can do
People are receiving unsolicited calls from strangers who say they found their phone number through Google's AI. The callers seek help with everything from legal advice to locksmith services. Each one reports the same source: the AI gave them the number.
This phenomenon, called "AI doxxing," occurs when chatbots such as Gemini or ChatGPT surface private information without consent. In documented cases, AI tools have served victims' phone numbers as contact details when users asked for company phone numbers or service providers.
One victim described the experience on Reddit: "Strangers are calling me constantly looking for a lawyer, a product designer, a locksmith - you name it. Every single one of them tells me: 'I got your number from Google's AI'. This is a massive privacy violation and data leak."
Other reported instances include Elon Musk's Grok chatbot exposing home addresses, Meta's WhatsApp AI assistant sharing private numbers, and ChatGPT generating false incriminating information about individuals.
How personal data ends up in AI systems
Large language models generate responses from material scraped across the internet, including outdated records, forum posts, and databases. This process can surface incorrect or private information as if it were fact.
Data removal service ClearNym traced the problem to a decade of data brokerage practices. "Many organisations have been discreetly harvesting personal phone numbers, addresses, and other details from public databases," a ClearNym spokesperson said. "This information was sold, traded, and thrown into machine learning training sets. It now returns as accurate copies or even fabrications."
Newer, more powerful AI models trained on even larger datasets will likely worsen the problem, ClearNym researchers warn.
Criminals are weaponizing the vulnerability
A Virgin Media O2 report found that millions of people in Britain have been served fake customer service numbers through AI tools. Criminals are now exploiting this by injecting their own phone numbers into AI systems to pose as trusted brands.
They do this by "seeding poisoned content" across the web - placing fake phone numbers in Yelp reviews, YouTube comments, and other platforms with keywords like "official British Airways reservations number." AI web crawlers pick up these fake numbers during training.
"When you ask an assistant how to call your airline, it does exactly what it was designed to do, but with a customer support number that leads straight to a scammer instead of the real company," said Qi Deng, lead security researcher at AI security firm Aurascape.
Victims have almost no recourse
Unlike search engines, which can be compelled to delist information under "right to be forgotten" legislation, AI models cannot simply unlearn data after training. Once information is embedded in a model's weights, it persists.
The person whose number appears in Gemini's responses submitted a formal legal removal request asking Google to block their number from AI outputs. They received no response, and the harassment continues daily.
Google said it has "safeguards in place to prevent personal content from surfacing on Search AI features, along with dedicated tools to request its removal." The company reviews requests and takes action when it can verify violations of its policies.
Security experts recommend using only phone numbers listed on official company websites. But for those whose numbers already appear in chatbot responses, prevention is nearly impossible.
The lack of regulatory oversight means victims cannot order AI systems to forget information, pursue the data brokers feeding the algorithms, or compel platforms to act. As generative AI takes on more customer-facing roles, the problem will likely spread, hitting hardest in support functions where accuracy and customer trust matter most.