What Alex answers in this article
- What is physics-informed artificial intelligence (PI-AI) and why is it important?
- For industries rich in knowledge but sometimes limited in data, how does PI-AI offer a different approach compared to traditional data-hungry AI methods?
- How can PI-AI contribute to the critical shift from reactive responses to proactive and predictive capabilities when it comes to safety?
- Beyond incremental improvements, how can PI-AI drive more fundamental advancements in industrial sustainability and emissions reduction?
- What are some of the most exciting breakthroughs or emerging techniques in PI-AI that you believe will have a significant impact on industrial applications in the near future?
- What about ethical considerations? Which ones must be taken into account to ensure deployments are not only effective but also responsible?
- Can you share some examples of PI-AI making a significant impact on energy operations?
- What advice would you give to other companies looking to integrate PI-AI into their processes and workflows?
Given its importance for an engineering-based industry such as oil and gas, physics-informed artificial intelligence (PI-AI) has caught the world’s attention—and for good reason. The impact it can have on production optimization, performance assurance, emissions reduction, and even carbon capture and storage makes it an incredibly promising innovation in energy tech.
But don't take my word for it. I spoke with Alex Gorodetsky, a globally recognized innovator in developing cutting-edge algorithms for decision making under uncertainty, to better understand why and how PI-AI can deliver such beneficial outcomes.
Alex's pioneering work spans uncertainty quantification, statistical inference, numerical analysis, and stochastic control, driving advancements in next-generation autonomy that balances the latest developments in high-fidelity simulation with those in machine learning (ML) and AI. He holds a PhD from MIT, a faculty position in the Aerospace Engineering Department at the University of Michigan, and the role of Chief AI Scientist at Geminus, a company specializing in digital twins, generative AI, and computational autonomy for the industrial sector.
Here’s what Alex had to say about one of the most exciting spaces in the digital transformation journey of hard-to-abate industries, including oil and gas.
Question: What is physics-informed artificial intelligence (PI-AI) and why is it important?
Alex Gorodetsky: PI-AI represents a transformative approach in artificial intelligence that fuses established physical laws with machine learning. By integrating physical laws—such as conservation laws and differential equations—within data-driven models, PI-AI ensures that predictions remain consistent with known scientific principles. This methodology enhances model accuracy, reduces the need for extensive datasets, and improves generalizability, particularly in complex systems where data may be sparse or noisy.
"By uniting data-driven techniques with fundamental physics, PI-AI not only advances the capabilities of AI but also fosters deeper insights into the underlying mechanisms of the natural world."
In the energy sector, PI-AI is revolutionizing operations by increasing production while enhancing efficiency, safety, and sustainability. For example, PI-AI is being used to optimize complex gas networks in real time, leading to increased gas production while eliminating flaring.
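To make the idea concrete, here is a minimal sketch of a physics-informed loss in PyTorch. The toy decay equation, the network size, and the constants are all invented for illustration; they stand in for the conservation laws and sparse sensor data of a real deployment.

```python
import torch

# Toy physics: exponential decay du/dt = -k*u with u(0) = 1 (illustrative only).
k = 2.0
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

# A few sparse, noisy "sensor" observations of the true solution exp(-k*t).
t_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-k * t_data) + 0.01 * torch.randn_like(t_data)

# Collocation points where the physics residual is enforced; no labels needed.
t_phys = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    # Data loss: keep predictions close to the sparse observations.
    loss_data = ((net(t_data) - u_data) ** 2).mean()
    # Physics loss: penalize violations of du/dt + k*u = 0 at collocation points.
    u = net(t_phys)
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    loss_phys = ((du_dt + k * u) ** 2).mean()
    (loss_data + loss_phys).backward()
    opt.step()
```

The two loss terms also preview a point Alex makes below: the data term anchors the model to reality, while the physics term keeps it from overfitting a handful of noisy measurements.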
Q: For industries rich in knowledge but sometimes limited in data, how does PI-AI offer a different approach compared to traditional data-hungry AI methods?
AG: Industrial processes and data have some unique characteristics compared to the data found in classic application areas, such as image recognition or natural language processing.
- Most data collected in industrial processes are indirect measurements of the real world, because the physical world is far more complex, with significantly more multiscale structure, than the language and images of typical AI applications.
- The real world and our measurements of it drift and change. Measurements taken of a system today may be different than those taken of the same system tomorrow because the system itself has changed.
- Real-world data is noisy. The values that a sensor provides may be corrupted by complex instrumentation noise, and the nature of this corruption may have a complicated structure. This type of complexity doesn’t exist in traditional AI problems.
- Real-world data is sparse or, more importantly, indirect—we might only be able to measure temperature but really want to predict pressure. This type of setting very rarely exists in the classic AI world. The only way to extrapolate beyond what is directly observed is to fill in the gaps with rich physical domain knowledge.
These unique characteristics require tailored solutions, and this is where physics-informed AI comes in. PI-AI has some fundamentally different requirements, capabilities, and approaches compared to data-centric AI. Most fundamentally, we are often less interested in questions like “Can I fully model this huge dataset and make slight extrapolations from it?” and more interested in questions like “What data do I need to enable predictions of these specific quantities that are needed to make decisions most accurately?”
"Physics and data serve to regularize each other. Data is used to ensure that the physics model’s predictions don’t drift far from reality, while the physics model ensures that predictions don’t overfit the data by maintaining consistency with how the world works."
More deeply, however, PI-AI requires new algorithms for physics-data fusion that
- Are scalable to expensive computational models
- Are robust to noise and drift
- Account for and represent the inherent uncertainty of predictions given the factors above
- Focus more intelligently and efficiently on adaptive data collection, because the enormous amounts of high-quality data required by traditional AI methods are rarely available
A simple example of how PI-AI works is found in the development of a surrogate model for a high-fidelity physics simulation for use in subsequent tasks. A simulator is built to predict everything about a system; for this reason, high-fidelity simulators are often extremely compute-intensive and time-consuming to run (what we would categorize as “expensive data”). However, you often only need a limited set of quantities to make decisions: What is the pressure at this location of the pipe? How much yield are we obtaining from this process?
Predicting fewer quantities of interest should naturally be easier than predicting everything, so AI approaches are used to construct fast and accurate models that compute only those. These algorithms carefully and adaptively query a high-fidelity physics model over a wide range of potential conditions to learn how a few important quantities of interest behave. The algorithm then uses this knowledge to refine a machine learning (ML) model, possibly looping back to query new data based on what it has learned. At the end of this process, you obtain an ML model that can be used for subsequent calibration, optimization, and control decision making.
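As a rough sketch of that adaptive loop (with expensive_sim as a hypothetical stand-in for a high-fidelity solver, and a one-dimensional operating condition for brevity), consider:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_sim(x):
    """Hypothetical stand-in for a slow high-fidelity solver, returning one
    quantity of interest (e.g., pressure at a pipe location)."""
    return np.sin(3 * x) + 0.5 * x

# Seed the surrogate with a handful of expensive runs over the operating range.
X = np.linspace(0.0, 2.0, 4).reshape(-1, 1)
y = expensive_sim(X).ravel()

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
for _ in range(10):  # each iteration buys exactly one more expensive simulation
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[[np.argmax(std)]]  # query where the surrogate is least sure
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_sim(x_new).ravel())

# gp is now a fast surrogate for calibration, optimization, and control studies.
```

Real deployments replace the Gaussian process with whatever model scales to the problem, but the loop structure, query, refine, repeat, is the same.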
A more complex example occurs when the physics model is not good enough to represent the relevant processes. In this situation, the AI model is embedded within the simulation model so that the predictions and AI training occur simultaneously, thereby ensuring that predictions respect physics.
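A compressed sketch of that second pattern might look like the following: the known dynamics are integrated by a simple Euler solver, a small network is embedded inside each time step, and training happens through the solver, so predictions stay consistent with the known physics while the network learns only the missing piece. The decay-plus-unknown-forcing setup is invented for illustration.

```python
import torch

# Known physics: du/dt = -k*u; an unknown missing term is learned by a small net.
k, dt, n_steps = 1.0, 0.05, 40
correction = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

def rollout(u0):
    """Euler integration with the neural term embedded in the dynamics."""
    u, traj = u0, [u0]
    for _ in range(n_steps):
        u = u + dt * (-k * u + correction(u))
        traj.append(u)
    return torch.cat(traj, dim=1)

# Synthetic "measured" trajectory whose true dynamics include a term
# (here 0.5*sin(u)) that the physics model alone is missing.
with torch.no_grad():
    u = torch.ones(1, 1)
    target = [u]
    for _ in range(n_steps):
        u = u + dt * (-k * u + 0.5 * torch.sin(u))
        target.append(u)
    target = torch.cat(target, dim=1)

opt = torch.optim.Adam(correction.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    # Training happens through the solver: gradients flow across all time steps.
    loss = ((rollout(torch.ones(1, 1)) - target) ** 2).mean()
    loss.backward()
    opt.step()
```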
Q: How can PI-AI contribute to the critical shift from reactive responses to proactive and predictive capabilities when it comes to safety?
AG: There are two core contributions, both stemming from the fundamental fact that PI-AI often delivers predictions far faster than traditional high-fidelity models. First, it allows you to rapidly assess thousands of possible outcomes for a wide variety of decisions. This, in turn, enables you to evaluate and select more optimal actions.
Second, a shorter time to prediction enables you to quantify the effect of uncertainties due to imprecise knowledge about the situation and to craft responses that remain robust under those uncertainties.
To summarize, PI-AI allows you to make decisions that are both more optimal and more robust.
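A toy sketch of both contributions, with surrogate_risk as a hypothetical fast PI-AI model and a uniformly distributed leak rate standing in for imprecise knowledge:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_risk(setpoint, leak_rate):
    """Hypothetical fast PI-AI surrogate mapping an operating setpoint and
    an uncertain leak rate to a safety risk score (lower is better)."""
    return (setpoint - 0.6) ** 2 + leak_rate * np.exp(setpoint)

setpoints = np.linspace(0.0, 1.0, 101)          # thousands would be just as cheap
leak_samples = rng.uniform(0.0, 0.2, size=500)  # Monte Carlo over the unknown

# Evaluate every action against every sampled scenario in one vectorized pass.
risk = surrogate_risk(setpoints[:, None], leak_samples[None, :])

best_mean = setpoints[np.argmin(risk.mean(axis=1))]   # optimal on average
best_robust = setpoints[np.argmin(risk.max(axis=1))]  # robust to the worst case
print(f"mean-optimal setpoint: {best_mean:.2f}, robust setpoint: {best_robust:.2f}")
```

Because the surrogate is cheap, the full action-by-scenario sweep costs milliseconds, which is what makes the proactive, uncertainty-aware posture feasible.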
Q: Beyond incremental improvements, how can PI-AI drive more fundamental advancements in industrial sustainability and emissions reduction?
AG: By embedding physical laws into ML models, PI-AI enables real-time optimization of complex systems, leading to significant reductions in energy consumption and emissions. For example, in fluid transport networks, PI-AI can dynamically adjust pump and valve settings to minimize energy use while maintaining desired flow rates. Given that pumps account for over 5% of global electricity consumption, optimizing their operation presents a substantial opportunity for energy savings.
"In pilot projects, control systems driven by PI-AI have demonstrated reductions in energy usage of up to 40% in drinking water networks."
Beyond operational efficiency, PI-AI accelerates the design and deployment of carbon-free infrastructure. In carbon capture and storage (CCS), PI-AI models can rapidly evaluate millions of design configurations, optimizing well placements and storage strategies to enhance CO₂ sequestration while reducing costs.
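Returning to the pump example, here is a hedged sketch of how such an optimization might be wired up, with both surrogate functions as invented placeholders for trained PI-AI models and scipy handling the constrained minimization:

```python
import numpy as np
from scipy.optimize import minimize

def energy_kw(speeds):
    """Hypothetical PI-AI surrogate: pump speeds (0-1) -> total power draw."""
    return np.sum(5.0 * speeds**3 + 1.0 * speeds)

def flow_m3h(speeds):
    """Hypothetical PI-AI surrogate: pump speeds -> delivered flow rate."""
    return np.sum(80.0 * speeds)

demand = 120.0  # required flow, m^3/h
result = minimize(
    energy_kw,
    x0=np.array([0.8, 0.8, 0.8]),  # three pumps, initially near full speed
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "ineq", "fun": lambda s: flow_m3h(s) - demand}],
)
print("optimal speeds:", np.round(result.x, 2), "power:", round(energy_kw(result.x), 1))
```

The cubic term mimics the pump affinity laws, under which power grows roughly with the cube of speed, so spreading load across several pumps often beats running one flat out.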
Q: What are some of the most exciting breakthroughs or emerging techniques in PI-AI that you believe will have a significant impact on industrial applications in the near future?
AG: In the near term, interpreting existing information sources (e.g., simulators) as data and training deep learning or other ML models on top of, or integrated with, them will continue to deliver massive benefits to industry. It has virtually become a standard paradigm:
- There’s an expensive bottleneck to doing something
- A few data points can be acquired for it, but only through very expensive simulation
- That expensive simulation is run many times to build a dataset
- A model of this dataset is built and deployed to new conditions
These same steps have been followed for everything from biology and materials modeling to multiphysics simulations. The faster industrial applications identify this route for their difficult problems, the faster those problems will be addressed.
More exciting, however, is the emergence and prevalence of probabilistic modeling techniques that find and exploit structure through compression-like approaches. For example, diffusion and latent diffusion models are emerging as promising ways of providing probabilistic and generative representations of large-scale data.
Diffusion models are just one approach (perhaps the most popular one currently) that provides probabilistic representations of information. Probabilistic representations are nice because they can be used to solve inverse problems and perform rapid data assimilation. However, these types of methods are often data hungry. Enabling them to impact real applications requires coupling them with other methods that discover and exploit low-dimensional structure.
There is a wide range of approaches for finding low-dimensional structure, from classic linear reductions to emerging variational autoencoder architectures. The most exciting advancements are those that design the architectures necessary for finding and exploiting problem structure to discover these low-dimensional variables.
To summarize, emerging techniques that (1) discover low-dimensional structure in a target application problem and then (2) use it to accelerate the construction of probabilistic models have tremendous opportunity to bring new capabilities for scalable inverse problems, data assimilation, and robust optimization to industrial applications.
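To illustrate the two-step recipe with the simplest possible stand-ins (PCA for the structure-finding architecture, a Gaussian for the probabilistic latent model, and synthetic snapshots for real field data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic "high-dimensional" snapshots (e.g., sensor fields) that secretly
# live near a 2D manifold: two random modes plus a little noise.
modes = rng.normal(size=(2, 200))
coeffs = rng.normal(size=(1000, 2))
snapshots = coeffs @ modes + 0.01 * rng.normal(size=(1000, 200))

# Step 1: discover the low-dimensional structure.
pca = PCA(n_components=2).fit(snapshots)
latents = pca.transform(snapshots)

# Step 2: fit a simple probabilistic model in latent space (a Gaussian here;
# a diffusion model would play this role for richer distributions).
mu, cov = latents.mean(axis=0), np.cov(latents.T)

# Generate new plausible snapshots by sampling the latent model and decoding.
z = rng.multivariate_normal(mu, cov, size=5)
new_snapshots = pca.inverse_transform(z)
```

Swapping in an autoencoder for step 1 and a diffusion model for step 2 gives the emerging techniques Alex describes, but the division of labor is identical: compress first, then model probabilistically in the latent space.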
Q: What about ethical considerations? Which ones must be taken into account to ensure deployments are not only effective but also responsible?
AG: Ethical deployment of PI-AI hinges on safety, transparency, and accountability. Models must be explainable, especially in high-stakes environments, so operators can understand and trust their outputs.
Robustness is critical—models must remain accurate under edge conditions and avoid drifting from physical reality over time. Embedding physical laws helps, but ongoing monitoring and validation are essential.
Finally, environmental responsibility matters. PI-AI can reduce emissions and improve efficiency, but models must be aligned with sustainability goals to avoid unintended tradeoffs.
"Responsible deployment means building systems that are not just effective, but resilient and trustworthy in the real world."
Q: Can you share some examples of PI-AI making a significant impact on energy operations?
AG: PI-AI is driving measurable results in both production and efficiency across a wide range of energy operations. Applications include real-time optimization of production facilities, refineries, well network management, gas lift optimization, and water management. Production increases of a few percent to over 10% are often achieved. For example, a recent deployment focusing on minimizing water cut through the optimization of electric submersible pumps delivered a 10% increase in production.
The midstream sector of the oil and gas industry is increasingly leveraging the power of PI-AI to enhance complex optimization tasks. One notable application is in the recovery of natural gas liquids (NGLs). Intelligent advisors, driven by PI-AI, are proving their worth by optimizing NGL recovery processes, ensuring efficiency and maximizing output. For one operator in Asia, the lack of predictive insight from traditional simulation software—along with the inconvenience of relying on discontinuous simulation modeling—is driving the training of PI-AI models that can evaluate 20,000 complex scenarios in a fraction of a second. These models take only days to create, and they incorporate real-time inputs and market prices, thereby enabling operators to better assess how potential changes in their processes will impact their plant’s profitability and carbon footprint.
Q: What advice would you give to other companies looking to integrate PI-AI into their processes and workflows?
AG: While they may appear exotic and daunting, PI-AI models can be deployed far faster than traditional data-centric models and, with the right model management infrastructure, can be maintained more easily. Deployments usually deliver ROI in months, but when an organization is new to PI-AI, picking the right application for the first deployment is important. Here are a few things to consider when starting out:
- Select a process that will have meaningful financial impact on your organization when optimized, has clear controls that can be adjusted, and allows for change in output to be observed and quantified in a reasonable amount of time.
- Look for processes where simulation models exist or where internal teams have modeling experience.
- Unless your organization has significant experience with closed-loop, model-based control, opt for a process involving a human operator for your first deployment.
- Ensure the operator is open to new ways of working and AI-driven advice, and that they have experience with and trust simulations related to their process.
- Make sure all internal stakeholders who will collaborate with the team deploying the PI-AI application have both the time and the willingness to support its creation and deployment.
- Expect a pilot deployment to require anywhere between 3 weeks and 3 months, depending on scope and availability of internal data and support.