Penn Engineers Develop Method to Solve Inverse Partial Differential Equations
Researchers at the University of Pennsylvania School of Engineering and Applied Science have created a new framework for using AI to solve inverse partial differential equations (PDEs), a class of mathematical problems that help scientists infer hidden dynamics from observable patterns.
The method, called "Mollifier Layers," addresses a fundamental challenge across multiple scientific fields. Instead of working forward from known rules to predict system behavior, inverse PDEs work backward - from what researchers observe to the hidden forces that produced it.
"Solving an inverse problem is like looking at ripples in a pond and working backward to figure out where the pebble fell," said Vivek Shenoy, a professor of materials science and engineering who led the work. "You can see the effects clearly, but the real challenge is inferring the hidden cause."
Why the Math Matters
Differential equations describe how systems change over time. Partial differential equations handle more complex scenarios by accounting for change across both space and time - modeling weather systems, heat transfer, and in the case of Shenoy's lab, how DNA organizes itself inside cells.
Inverse PDEs flip the problem. Instead of predicting what happens next, they ask: what underlying dynamics or parameters produced what we observe?
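The forward/inverse distinction can be made concrete with a toy example. The sketch below (an illustration, not the paper's code) uses the 1-D heat equation: the forward problem steps a field ahead in time using a known diffusivity, and the inverse problem recovers that hidden diffusivity from the observed evolution.

```python
import numpy as np

# Toy 1-D heat equation u_t = D * u_xx on a periodic grid.
# Forward problem: known diffusivity D -> predict how the field evolves.
# Inverse problem: observed evolution -> infer the hidden D.

def step(u, D, dx, dt):
    """One explicit Euler time step of the heat equation."""
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * D * u_xx

dx, dt, D_true = 0.1, 0.001, 0.7
x = np.arange(0, 2 * np.pi, dx)
u0 = np.sin(x)
u1 = step(u0, D_true, dx, dt)          # plays the role of "observed" data

# Inverse problem: least-squares fit of D from (u1 - u0)/dt ≈ D * u_xx
u_xx = (np.roll(u0, -1) - 2 * u0 + np.roll(u0, 1)) / dx**2
D_est = np.dot(u_xx, (u1 - u0) / dt) / np.dot(u_xx, u_xx)
print(round(D_est, 3))  # recovers 0.7
```

With noise-free data the hidden parameter comes back exactly; the difficulties described below appear once real, noisy observations must be differentiated.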
The Shenoy Lab studies chromatin, the bundled form DNA takes inside the nucleus. Researchers could observe its structure but struggled to infer the chemical processes driving its organization. "We could see the structures and model their formation, but we could not reliably infer the epigenetic processes driving this system," Shenoy said.
The Core Problem: Noisy Derivatives
AI systems solving inverse PDEs typically compute derivatives - measurements of how quantities change - using a method called recursive automatic differentiation, which obtains higher-order derivatives by differentiating a neural network's output again and again, nesting each derivative computation inside the last.
For complex systems with noisy data, the method becomes unstable and demands enormous computing power. Each additional calculation step can magnify noise in the data, making results less reliable.
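This noise amplification is easy to demonstrate with finite differences (a simplified stand-in for the automatic-differentiation setting the article describes): every differentiation step magnifies whatever noise the data carries, so second derivatives are far less reliable than first ones.

```python
import numpy as np

# Illustration: numerically differentiating noisy data amplifies the noise,
# and each additional derivative makes the amplification worse.

rng = np.random.default_rng(0)
dx = 0.01
x = np.arange(0, 2 * np.pi, dx)
noisy = np.sin(x) + 0.01 * rng.standard_normal(x.size)   # 1% noise

d1 = np.gradient(noisy, dx)   # first derivative estimate
d2 = np.gradient(d1, dx)      # second derivative estimate

# Mean error against the analytic derivatives cos(x) and -sin(x)
err1 = np.abs(d1 - np.cos(x)).mean()
err2 = np.abs(d2 + np.sin(x)).mean()
print(err1 < err2)  # True: error grows with each derivative taken
```

The same compounding affects nested automatic differentiation, which is why higher-order inverse problems with noisy data become unstable.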
The team realized the issue wasn't the neural network's design but the differentiation method itself. They needed a way to smooth out the signal before measuring change.
Mollifiers: A 1940s Solution to a Modern Problem
Mathematician Kurt Otto Friedrichs developed "mollifiers" in the 1940s - mathematical tools that smooth jagged or noisy functions by reducing their sharpest features.
The Penn team adapted this technique, inserting a "mollifier layer" into their AI framework. The layer smooths the signal before the system measures how it changes.
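A minimal sketch of the underlying idea, using Friedrichs' classical bump function (an assumed setup for illustration, not the paper's actual layer): convolving a noisy signal with a smooth, compactly supported mollifier before differentiating yields far more stable derivatives.

```python
import numpy as np

# Mollification sketch: smooth a noisy signal with Friedrichs' bump
# function, then differentiate the smoothed result.

def mollifier(eps, dx):
    """Discrete Friedrichs mollifier with support [-eps, eps]."""
    t = np.arange(-eps, eps + dx, dx)
    phi = np.where(np.abs(t) < eps,
                   np.exp(-1.0 / np.clip(1 - (t / eps)**2, 1e-12, None)),
                   0.0)
    return phi / (phi.sum() * dx)      # normalize to unit integral

dx = 0.01
x = np.arange(0, 2 * np.pi, dx)
rng = np.random.default_rng(0)
noisy = np.sin(x) + 0.05 * rng.standard_normal(x.size)

kernel = mollifier(eps=0.3, dx=dx)
smooth = np.convolve(noisy, kernel * dx, mode="same")

# Differentiating after mollifying is much closer to the true cos(x)
err_raw = np.abs(np.gradient(noisy, dx) - np.cos(x)).mean()
err_mol = np.abs(np.gradient(smooth, dx) - np.cos(x)).mean()
print(err_mol < err_raw)  # True
```

Because the mollifier is infinitely differentiable, the smoothed signal can be differentiated repeatedly without the noise blow-up shown earlier; the paper's contribution is embedding this operation as a layer inside the learning framework.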
"We initially assumed the issue had to do with neural network architecture," said Ananyae Kumar Bhartari, a co-author on the study. "But after carefully adjusting the network, we eventually realized the bottleneck was recursive automatic differentiation itself."
The mollifier layer approach reduced both noise and computational burden: equations became solvable more reliably, without the steep growth in computing power that recursive differentiation demands.
Applications in Biology and Beyond
For chromatin research, the method offers immediate value. Tiny domains of chromatin - just 100 nanometers in size - regulate which genes are accessible and active. These domains influence cell identity, function, aging, and disease.
By inferring the reaction rates driving epigenetic changes, researchers can move beyond observing chromatin structure to modeling how it changes over time and how those changes affect gene expression.
"If we can track how these reaction rates evolve during aging, cancer or development," said Vinayak Vinayak, a doctoral candidate and co-author, "this creates the potential for new therapies: If reaction rates control chromatin organization and cell fate, then altering those rates could redirect cells to desired states."
The framework extends beyond biology. Problems in materials science, fluid mechanics, and other scientific machine learning fields involve higher-order equations and noisy data. Mollifier layers could offer a more stable, computationally efficient approach across these areas.
The study was published in Transactions on Machine Learning Research and will be presented at the Conference on Neural Information Processing Systems (NeurIPS 2026).