AI-Washing Pollution: How Industry Uses Artificial Intelligence to Rewrite Environmental Science
Top polluters use AI to promote pollution by censoring environmental research. This tactic risks eroding public trust and spreading industry-backed misinformation.

Pollution Industry Using AI to Make the Case for More Pollution
In 2025, artificial intelligence is finding new uses across industries, and few are more concerning than its adoption by the world's top polluters. These companies are leveraging AI to craft corporate messaging that defends, and even promotes, continued pollution.
Louis Anthony "Tony" Cox Jr., a longtime petrochemical industry advocate and former political advisor, is at the forefront of this effort. He is developing a large language model (LLM) designed to censor environmental health research that challenges industry narratives. Cox has a history of downplaying the link between air pollution and respiratory issues and collaborating with major industry players such as Philip Morris USA, the American Chemistry Council, and the Truck and Engine Manufacturers Association.
His ties to the American Petroleum Institute run so deep that he has allowed the group to "edit" his research. Cox began building his AI model in 2023 after extensive conversations with ChatGPT, where he attempted to convince the AI that harmful particulates like PM2.5 don’t cause lung cancer—a claim he has long promoted. Frustrated with the time it took to sway ChatGPT, he turned to industry allies to create an app pre-trained to avoid inconvenient truths. This app is pitched as delivering "critical thinking at scale," but in reality, it filters information to fit the petrochemical agenda.
Cox’s past work includes industry-sponsored studies defending toxic chemicals such as PFOA. Experts warn that this form of "AI-washing" is a deceptive tactic to rebrand corporate propaganda under the guise of AI neutrality, a tactic similar to greenwashing but arguably more insidious. As Itai Vardi of the Energy and Policy Institute explains, outsourcing denial work from individual scientists to industry-funded AI tools risks further eroding public trust in vital scientific research.
This push for AI-driven misinformation is all the more ironic given AI's environmental footprint. Data centers powering AI rely heavily on fossil-fuel generation in some regions, contributing to premature deaths from cancer and asthma. When Cox’s AI tool eventually launches, it likely won’t announce itself loudly. Instead, it will quietly join the growing swarm of AI applications, shaping public perception in ways that serve corporate interests.
For professionals in PR and communications, this development highlights the importance of scrutinizing the sources behind AI tools and messaging. Understanding who funds and influences AI-driven content is crucial to maintaining integrity and transparency in public discourse.