Hybrid climate modeling has emerged as an effective way to reduce the computational costs associated with cloud-resolving models while retaining their accuracy. The approach retains physics-based models to simulate large-scale atmospheric dynamics, while harnessing deep learning to emulate cloud and convection processes that are too small to be resolved directly. In practice, however, many hybrid AI-physics models are unreliable. When simulations extend over months or years, small errors can accumulate and cause the model to become unstable.
In a new study published in npj Climate and Atmospheric Science, a team led by Assistant Professor Gianmarco Mengaldo from the Department of Mechanical Engineering, College of Design and Engineering, National University of Singapore, developed an AI-powered correction that addresses a key source of instability in hybrid AI-physics models, enabling climate simulations to run reliably and remain physically consistent over long periods.
The advance makes it more practical to run long-term climate simulations and large ensembles that would otherwise be prohibitively expensive with conventional cloud-resolving models.
“This work constitutes an important milestone in the context of hybrid AI-physics modeling, as it achieves decade-long simulation stability while retaining physical consistency, a key and long-standing issue in hybrid climate simulations. This work could constitute the basis for next-generation climate models, where traditional parametrization schemes for physics are replaced by AI surrogate models and integrated within general circulation models,” Asst Prof Mengaldo said.
In their test runs, the researchers found that when hybrid climate simulations broke down, the unstable runs displayed a clear warning sign: the total atmospheric energy rose steadily before the model crashed.
Looking more closely, they discovered that this energy surge was accompanied by an abnormal build-up of atmospheric moisture, particularly at higher altitudes.
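This kind of drift can be pictured with a simple monitoring routine. The sketch below is not from the study; it assumes a hypothetical time series of a global diagnostic (such as total atmospheric energy or column water vapor) saved at each step, and flags a run whose totals grow steadily.

```python
import numpy as np

def flags_runaway_drift(series, window=100, rel_threshold=0.01):
    """Flag a steadily growing global diagnostic (e.g. total atmospheric
    energy or column water vapor) by comparing the mean of the most recent
    window of steps against the mean of the first window."""
    series = np.asarray(series, dtype=float)
    if series.size < 2 * window:
        return False
    early = series[:window].mean()
    late = series[-window:].mean()
    # Relative growth beyond the threshold suggests the run is drifting
    # toward the kind of blow-up seen in the unstable hybrid simulations.
    return (late - early) / abs(early) > rel_threshold

# Synthetic illustration: a bounded run versus a steadily drifting run.
steps = np.arange(2000)
stable_energy = 1.0 + 0.001 * np.sin(steps / 50.0)   # bounded oscillation
drifting_energy = 1.0 + 5e-5 * steps                 # steady upward drift
print(flags_runaway_drift(stable_energy))    # False
print(flags_runaway_drift(drifting_energy))  # True
```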
Further analysis pointed to a specific cause: water vapor oversaturation during condensation. The deep-learning component of the model sometimes allowed air to hold more water vapor than is physically possible. And because water vapor plays a central role in Earth’s water and energy cycles, even small but persistent errors of this kind can accumulate over time, eventually nudging long-term simulations into instability.
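The physical limit in question is the saturation specific humidity, which depends on temperature and pressure. The snippet below is only an illustration of that constraint, using the standard Magnus/Tetens approximation for saturation vapor pressure rather than anything from the study’s code.

```python
import numpy as np

def saturation_specific_humidity(T, p):
    """Approximate saturation specific humidity (kg/kg) over liquid water.

    T : air temperature in kelvin
    p : air pressure in Pa
    Uses the Magnus/Tetens formula for saturation vapor pressure.
    """
    T_c = T - 273.15
    e_s = 610.94 * np.exp(17.625 * T_c / (T_c + 243.04))  # Pa
    return 0.622 * e_s / (p - 0.378 * e_s)

def oversaturated(q, T, p):
    """True where predicted specific humidity q exceeds its physical limit."""
    return q > saturation_specific_humidity(T, p)

# A grid cell at 290 K and 1000 hPa can hold roughly 12 g/kg of water vapor;
# a prediction of 15 g/kg would be flagged as unphysical.
print(oversaturated(q=0.015, T=290.0, p=1.0e5))  # True
print(oversaturated(q=0.010, T=290.0, p=1.0e5))  # False
```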
To address this, the team developed CondensNet, a neural-network architecture designed specifically to handle condensation in a physically consistent way. It combines BasicNet, which predicts how water vapor and energy should change within each atmospheric column, with ConCorrNet, a condensation correction module that activates only when the model is at risk of oversaturation, that is, when humidity would exceed its physical limit.
Instead of applying a blanket rule after predictions are made, ConCorrNet learns from cloud-resolving reference simulations how moisture and energy should be corrected when condensation occurs. When the model approaches oversaturation, the correction is applied locally through a masking mechanism, ensuring that adjustments are limited to the regions where physical limits would otherwise be exceeded.
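To make the masking idea concrete, here is a minimal sketch, not the authors’ implementation: a base predictor proposes humidity and temperature tendencies for a column, a mask marks the levels where the provisional update would exceed saturation, and a learned correction is applied only at those levels. The functions basic_net and con_corr_net are hypothetical stand-ins for the trained networks, and the sketch reuses the saturation_specific_humidity helper defined above.

```python
import numpy as np

def condensnet_step(q, T, p, dt, basic_net, con_corr_net):
    """One CondensNet-style update for a single atmospheric column.

    q, T, p : specific humidity (kg/kg), temperature (K), pressure (Pa) per level
    basic_net(q, T, p)     -> (dq_dt, dT_dt) unconstrained tendencies
    con_corr_net(q, T, p)  -> (dq_corr, dT_corr) learned condensation corrections
    """
    dq_dt, dT_dt = basic_net(q, T, p)
    q_new = q + dt * dq_dt
    T_new = T + dt * dT_dt

    # Correct only the levels where the provisional state exceeds saturation.
    mask = q_new > saturation_specific_humidity(T_new, p)
    if mask.any():
        dq_corr, dT_corr = con_corr_net(q_new, T_new, p)
        q_new = np.where(mask, q_new + dq_corr, q_new)
        T_new = np.where(mask, T_new + dT_corr, T_new)
    return q_new, T_new
```

Restricting the correction to the masked levels leaves the base network’s predictions untouched wherever the column remains physically plausible.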
“It became clear to us that condensation needed its own targeted correction when we observed oversaturation repeating across the failing cases,” added Dr. Xin Wang, a research fellow in Mengaldo’s group at NUS and the first author of the study.
“We designed the constraint to intervene only when the model enters unphysical territory, and to let the correction reflect what the reference physics would actually do.”
The researchers integrated CondensNet into the Community Atmosphere Model (CAM5.2), a widely used global climate model that simulates atmospheric circulation, clouds and energy exchange as part of Earth system modeling. This produced a hybrid framework they call the Physics-Constrained Neural Network GCM (PCNN-GCM). In tests, PCNN-GCM stabilized six neural-network configurations that previously failed, without the need for parameter tuning. The corrected model was able to run long simulations under realistic land and ocean conditions while closely tracking the behavior of the cloud-resolving reference model. (Phys Org)
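How such a surrogate slots into a general circulation model can be pictured with the schematic loop below. It is a conceptual sketch under assumed interfaces, not CAM5.2’s actual API: dynamics_step stands for the physics-based dynamical core and nn_parameterization for a learned subgrid scheme such as the condensnet_step sketch above.

```python
def run_hybrid_gcm(state, n_steps, dt, dynamics_step, nn_parameterization):
    """Schematic time loop of a hybrid AI-physics GCM.

    state               : dict of prognostic fields (e.g. q, T, p, winds)
    dynamics_step       : physics-based core advancing large-scale dynamics
    nn_parameterization : learned surrogate for subgrid cloud and convection
                          physics, including the condensation constraint
    """
    for _ in range(n_steps):
        state = dynamics_step(state, dt)        # resolved, large-scale physics
        state = nn_parameterization(state, dt)  # learned subgrid tendencies
    return state
```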
