How AI Is Being Used to Cast Doubt on Pollution Science
Risk analyst Louis Anthony Cox Jr. is developing an AI tool to challenge links between pollutants and health risks, with funding from a major chemical industry lobby group. Critics warn it may deepen public doubt and delay regulations.

Louis Anthony “Tony” Cox Jr., a Denver-based risk analyst with ties to the chemical industry, is developing an AI tool intended to challenge established links between pollutants and health risks. Cox, known for questioning the health impacts of pollutants like PM2.5, has received funding from the American Chemistry Council (ACC), a major chemical industry lobby group.
This AI application aims to scan epidemiological research to detect what Cox describes as the false conflation of correlation with causation. He presents it as a way to weed out “propaganda” and apply “critical thinking at scale” to scientific studies related to chemical exposures.
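Cox has not published the tool's implementation. Purely as an illustration of the general pattern he describes, an LLM-based reviewer of this kind could be a thin wrapper around a chat model with a methodological prompt. The sketch below is our assumption, not his system; it uses the OpenAI Python client with a placeholder model name.

```python
# Hypothetical sketch of an LLM "causal-claims reviewer". This is NOT Cox's
# actual tool (its implementation is unpublished); it only illustrates the
# general pattern of prompting a model to critique a study's causal reasoning.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REVIEW_PROMPT = """You are a methodological reviewer. For the study abstract
below, list: (1) each causal claim made, (2) whether the study design can
support causal inference, and (3) plausible confounders left unaddressed."""

def review_abstract(abstract: str) -> str:
    """Ask the model to critique the causal reasoning in one abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_abstract(
        "PM2.5 exposure was associated with a 12% increase in cardiovascular "
        "mortality (95% CI 8-16%) in a cohort of 1.2 million adults."
    ))
```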
Industry Connections and Influence
Cox’s history includes consulting for companies and trade groups in polluting industries, such as Philip Morris USA and the American Petroleum Institute, and he has allowed industry stakeholders to review and edit his work. This raises concerns about the impartiality of his research and, by extension, the AI tool he is developing. Experts warn that the ACC’s sponsorship could skew the project toward industry interests that aim to minimize regulatory burdens.
Kelly Montes de Oca, a spokesperson for the ACC, defends the project as supporting scientific transparency and improving chemical safety. Critics counter that the ACC’s involvement presents a conflict of interest in research on chemical exposure and public health.
AI Conversations Reveal Bias Attempts
In early 2023, Cox engaged in detailed conversations with ChatGPT, probing its responses about the health risks of PM2.5, fine particulate matter linked to respiratory and cardiovascular diseases. Cox challenged the AI on whether PM2.5 causes lung cancer and pushed the chatbot to acknowledge uncertainties and potential confounding factors.
Despite ChatGPT’s initial affirmation that strong scientific evidence links PM2.5 to disease, Cox highlighted what he called the AI’s “starting bias” toward accepting association as causation. He envisions AI tools that perform “critical thinking at scale,” identifying weaknesses in scientific reasoning that might otherwise be overlooked.
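To make the dispute concrete: the standard worry about confounding is that a third factor can drive both an exposure and an outcome, producing a correlation with no causal link. The toy simulation below (our illustration, not drawn from Cox’s materials) shows the effect, and shows that adjusting for the confounder removes it, which is exactly the kind of adjustment well-designed epidemiological studies attempt.

```python
# Toy simulation (not from Cox's work): a confounder C drives both an
# "exposure" X and an "outcome" Y, so X and Y correlate with no causal link.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

c = rng.normal(size=n)              # confounder, e.g. socioeconomic status
x = 0.8 * c + rng.normal(size=n)    # exposure: influenced by C, not by Y
y = 0.8 * c + rng.normal(size=n)    # outcome: influenced by C, not by X

# The raw correlation looks substantial even though X never affects Y.
print(f"corr(X, Y)     = {np.corrcoef(x, y)[0, 1]:.2f}")  # about 0.39

# Adjusting for the confounder (residualizing X and Y on C) removes it.
x_res = x - np.polyval(np.polyfit(c, x, 1), c)
y_res = y - np.polyval(np.polyfit(c, y, 1), c)
print(f"corr(X, Y | C) = {np.corrcoef(x_res, y_res)[0, 1]:.2f}")  # about 0.00
```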
Promoting Uncertainty through AI
Emails obtained through public record requests reveal Cox sharing his AI tool’s findings with industry scientists, pointing out alleged flaws in studies linking gas stove exposure and PM2.5 to health issues. Cox envisions the tool assisting authors, reviewers, journalists, and policymakers by evaluating the trustworthiness of research methods and conclusions.
He has submitted proposals to further develop and test the AI reviewer, including pilot studies at academic journals, though some planned collaborations fell through for lack of author participation. Even so, around 400 researchers have tried the tool, which Cox claims already outperforms many human peer reviews at identifying flaws in causal reasoning.
Concerns from Health Experts and Advocates
Critics argue that this AI tool, funded by industry groups, could deepen public confusion about pollutant risks and delay necessary regulations. The tobacco and oil industries have historically used uncertainty to stall health protections, a tactic some see echoed in Cox’s work.
Chris Frey, former chair of the EPA’s Clean Air Scientific Advisory Committee, emphasizes that the ACC’s agenda is to reduce regulatory burdens, which he argues undercuts its claim to be promoting objective science. Terms like “sound science,” frequently used by Cox, have been linked to past industry strategies of setting unrealistically high standards of proof to obstruct policy action.
Balancing Scientific Rigor and Public Health
Cox argues that his goal is to apply “sound technical methods” to pursue scientific truth without bias. He challenges the practice of treating repeated associations as evidence of causation in epidemiology, calling it outdated and harmful to effective health protection.
However, many scientists and regulators operate on precautionary principles, where absolute proof of causation is not always required for policy decisions. The Clean Air Act, for example, mandates standards that include adequate safety margins to protect public health, even amid scientific uncertainty.
Experts warn that demanding excessive proof of causality before regulating pollutants could result in prolonged exposure and avoidable health consequences. Automating such stringent causal scrutiny through AI might serve industry interests more than public health.
Implications for Research and Regulation
- The AI tool could shift scientific peer review towards heightened skepticism of epidemiological studies linking pollutants to health risks.
- Industry funding raises questions about the neutrality of AI-assisted scientific critique.
- There is a risk that AI tools designed this way may erode public trust in established science, complicating regulatory efforts.
While Cox maintains that his AI system is neutral and that final judgments rest with human reviewers, the potential for such technology to influence scientific discourse and policy is significant. This case highlights the need for transparency around AI development and funding sources in research evaluation tools.