Privacy Advocates Warn of Risks in Meta’s New AI-Based Ad Targeting Policy

Privacy and digital rights advocates are raising serious concerns over a new Meta policy that allows the company to personalize advertisements based on users’ interactions with its artificial intelligence tools, warning that the move could further erode online privacy protections.

Meta announced on October 1 that it would begin using conversations with its generative AI features to tailor content and advertising across its platforms. The policy, which rolled out this week, applies to users of Meta AI, integrated into Facebook, Instagram, WhatsApp, and Messenger. Users will not have the option to opt out of this data use, a decision critics say undermines meaningful consent.

The announcement comes as engagement with AI chatbots is rapidly increasing and as regulators and researchers examine how such technologies may contribute to mental health problems, self-harm, and even violence.

In a blog post, Meta sought to reassure users, stating that conversations involving sensitive topics such as religion, sexual orientation, political views, health conditions, or racial and ethnic identity would not be used to show ads. However, privacy experts say the company’s assurances are vague and leave room for exploitation.

Arielle Garcia, chief operating officer of digital advertising watchdog Check My Ads, warned that Meta has previously found ways around broadly worded privacy promises. She questioned whether sensitive chat data could still be used indirectly, such as to train AI models or refine ad creative through “proxy signals.” For example, a user discussing World Diabetes Day might receive health-related ads even if their medical condition is not explicitly shared.

Meta declined to address critics’ concerns directly, referring instead to its public blog post.

Sensitive Conversations at Risk

Although AI chatbots are still a relatively new technology, many users share deeply personal information with them, including mental health struggles, relationship problems, financial stress, and physical health concerns.

OpenAI CEO Sam Altman previously warned that users face legal and privacy risks when treating AI systems like doctors or lawyers. “We don’t yet have legal privilege for AI systems,” Altman said earlier this year, noting that society will need to establish new frameworks to address this gap.

Nathalie Maréchal, co-director of privacy and data at the Center for Democracy and Technology, said Meta’s policy is especially risky because many users mistakenly believe chatbot conversations are private.

“People think they’re interacting in a secure environment, which is false,” Maréchal said. “They’re engaging with a system designed to predict words, not to protect their best interests.”

Auditing and filtering millions of chatbot conversations for sensitive content would also be extremely difficult, experts say. Hayden Davis, a legal fellow at the Electronic Privacy Information Center, questioned how Meta could reliably prevent sensitive information from influencing its advertising systems.

Financial Incentives and Child Safety Concerns

Critics are particularly troubled by the lack of an opt-out option. Davis argued that Meta’s automatic opt-in approach reflects a belief that users would not consent if they fully understood how their data is being used.

Privacy advocates also warn that turning chatbot interactions into a revenue source could incentivize Meta to make its AI products more addictive, encouraging users to spend more time engaging with them and to share increasingly personal details.

Recent lawsuits and tragic cases have intensified scrutiny of AI’s psychological impact. In one case, the estate of a Connecticut woman is suing OpenAI and Microsoft, alleging that her son’s extensive interactions with ChatGPT contributed to delusions that led to her death. In another, the parents of a 16-year-old boy who died by suicide said AI chatbot interactions played a role in his mental health decline.

The risks may be even greater for children and teenagers. A recent survey by Common Sense Media found that more than half of teens use AI companions several times a month.

“If chatbot engagement becomes a profit center, companies have a direct financial incentive to push users toward excessive use and deeper self-disclosure,” Davis said.

A Troubling Track Record

Critics also point to Meta’s history of privacy and advertising violations. In 2019, the company was fined $5 billion by the U.S. Federal Trade Commission for privacy breaches and has since faced accusations of profiting from scam advertisements.

“When a company with this track record promises even more precise ad targeting, it’s deeply concerning,” Garcia said. “It raises the risk of more scam ads reaching more vulnerable users.”

As Meta moves forward with its AI-driven advertising strategy, privacy advocates and regulators are expected to closely monitor how user data is collected, interpreted, and monetized in an increasingly AI-powered digital ecosystem.
