University of Pennsylvania researchers solve inverse math problem with new AI method
Engineers at the University of Pennsylvania have developed a way to use artificial intelligence to solve inverse partial differential equations, one of mathematics' most difficult challenges. The method, called "Mollifier Layers," improves how AI handles these problems by refining the underlying mathematics rather than simply adding computing power.
The work will be presented at the Conference on Neural Information Processing Systems (NeurIPS 2026) and published in Transactions on Machine Learning Research.
Why this matters for science
Partial differential equations describe how systems change across space and time. Scientists use them to model weather patterns, heat flow, chemical reactions, and biological processes. Inverse PDEs work backward, starting with observed data to uncover the hidden forces driving those observations.
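To make the forward/inverse distinction concrete, here is a toy example (not from the paper; all names and parameters are illustrative): the 1-D heat equation u_t = D * u_xx with initial condition sin(pi x) has a known analytic solution, so we can generate "observations" from a hidden diffusion coefficient D and then recover it from the data alone.

```python
import numpy as np

def forward(D, x, t):
    """Forward problem: given the coefficient D, predict the observable field.

    For u(x, 0) = sin(pi x), the heat equation u_t = D * u_xx has the exact
    solution u(x, t) = exp(-D * pi**2 * t) * sin(pi * x).
    """
    return np.exp(-D * np.pi**2 * t) * np.sin(np.pi * x)

def infer_D(u_obs, x, t):
    """Inverse problem: given observed u, recover the hidden coefficient D.

    Project the observation onto the known spatial profile sin(pi x) to get
    its amplitude, then invert the exponential decay law.
    """
    profile = np.sin(np.pi * x)
    amp = (u_obs @ profile) / (profile @ profile)   # least-squares amplitude
    return -np.log(amp) / (np.pi**2 * t)

x = np.linspace(0.05, 0.95, 50)
t = 0.1
D_true = 0.5
u_obs = forward(D_true, x, t)   # "observations" produced by the true system
D_est = infer_D(u_obs, x, t)    # working backward to the hidden cause
```

Real inverse PDE problems are far harder: the unknown is usually a spatially varying field rather than a single number, and the data are noisy, which is exactly the regime the new method targets.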
The researchers faced this problem directly while studying chromatin, the folded structure of DNA inside cells. They could observe how chromatin organized itself, but couldn't reliably infer the chemical processes controlling which genes were active.
"You can see the effects clearly, but the real challenge is inferring the hidden cause," said Vivek Shenoy, senior author and professor of materials science and engineering.
The computational bottleneck
Traditional AI systems calculate mathematical derivatives using recursive automatic differentiation. This method works by repeatedly computing rates of change as data moves through a neural network. With complex systems and noisy data, the process becomes unstable and demands enormous computing resources.
The team compared it to repeatedly zooming in on a rough, jagged line. Each step amplifies imperfections, making the final result less reliable.
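The amplification effect is easy to reproduce numerically. The sketch below uses finite differences as a simple stand-in for repeated differentiation (it is an illustration of the instability, not the paper's code): each successive derivative of a noisy signal scales the noise by roughly the inverse of the grid spacing.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 2001)
dx = x[1] - x[0]
u = np.sin(x) + 0.001 * rng.standard_normal(x.size)  # smooth signal + tiny noise

d1 = np.gradient(u, dx)   # first derivative: noise amplified by ~1/dx
d2 = np.gradient(d1, dx)  # second derivative: noise amplified by ~1/dx**2

# Maximum error against the exact derivatives of sin(x) grows at each step,
# even though the original data were barely perturbed.
err0 = np.abs(u - np.sin(x)).max()
err1 = np.abs(d1 - np.cos(x)).max()
err2 = np.abs(d2 + np.sin(x)).max()
```

This is the "zooming in on a jagged line" problem in miniature: each differentiation sharpens the jaggedness rather than the signal.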
The solution came from a mathematical concept introduced in the 1940s: "mollifiers," tools designed to smooth irregular functions. The researchers adapted this idea by creating a mollifier layer within AI models that smooths input data before calculating changes. This avoids the instability caused by traditional methods.
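The classical operation the layers build on can be sketched in a few lines. This is only an illustration of mollification itself, assuming the standard bump function exp(-1/(1 - s**2)) as the smoothing kernel; the paper's Mollifier Layers embed this idea inside a neural network rather than as a fixed preprocessing step.

```python
import numpy as np

def mollifier(half_width):
    """Discrete bump kernel exp(-1/(1 - s**2)) on (-1, 1), normalized to sum 1."""
    s = np.arange(-half_width, half_width + 1) / (half_width + 1)
    k = np.exp(-1.0 / (1.0 - s**2))
    return k / k.sum()

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 2001)
dx = x[1] - x[0]
u = np.sin(x) + 0.05 * rng.standard_normal(x.size)   # noisy observations

k = mollifier(half_width=50)
u_smooth = np.convolve(u, k, mode="same")            # mollify, then differentiate

d_noisy = np.gradient(u, dx)
d_smooth = np.gradient(u_smooth, dx)

# Compare against the exact derivative cos(x), away from convolution edges.
interior = slice(100, -100)
err_noisy = np.abs(d_noisy[interior] - np.cos(x[interior])).max()
err_smooth = np.abs(d_smooth[interior] - np.cos(x[interior])).max()
```

Because the bump kernel is infinitely smooth and compactly supported, differentiating the mollified signal no longer amplifies the raw noise, which is the stability property the method exploits.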
The results were significant. The new method suppressed the effect of noise and lowered the computational cost of solving these equations without sacrificing reliability.
Applications beyond biology
The immediate application is understanding chromatin domains, structures just 100 nanometers in size that control gene expression. By estimating the rates of epigenetic reactions, the AI method could help scientists predict how chromatin changes during aging, disease, or development.
"If reaction rates control chromatin organization and cell fate, then altering those rates could redirect cells to desired states," said Vinayak Vinayak, a doctoral candidate and co-author.
The framework extends beyond genetics. Materials research, fluid dynamics, and other fields involving complex equations and noisy data could benefit from this more stable and efficient approach.
For researchers applying AI to science, this represents a shift in how computational challenges are solved. Rather than scaling up hardware, the focus moves to improving the mathematics itself.