For decades, numerical analysis has formed the backbone of supercomputing, applying mathematical models of first-principles physics to simulate the behavior of systems from the subatomic to the galactic scale. Recently, scientists have begun experimenting with a newer approach to understanding complex systems: machine learning (ML) predictive models, primarily deep neural networks (DNNs), trained on the virtually unlimited data sets produced by traditional analysis and direct observation. Early results indicate that these “synthesis models,” which combine ML and traditional simulation, can improve accuracy, accelerate time to solution, and significantly reduce costs.
The full paper is available for download on the NVIDIA website.
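
To make the idea of a “synthesis model” concrete, here is a minimal sketch, not taken from the paper, of the workflow it describes: a conventional numerical solver generates training data, and a small neural network learns to approximate the solver so predictions can be made far faster. The toy damped-oscillator simulation, the network architecture, and all function names and parameters below are illustrative assumptions, not the methods used in the cases the paper covers.

```python
# Illustrative sketch only: train a small neural-network surrogate on data
# produced by a traditional numerical simulation. The oscillator model and all
# parameters are assumptions chosen to keep the example self-contained.
import numpy as np
import torch
import torch.nn as nn
from scipy.integrate import solve_ivp


def simulate(damping: float, stiffness: float) -> float:
    """Traditional numerical-analysis step: integrate x'' + c*x' + k*x = 0
    from x(0)=1, x'(0)=0 and return the displacement at t = 5 s."""
    def rhs(t, y):
        x, v = y
        return [v, -damping * v - stiffness * x]
    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], rtol=1e-8)
    return sol.y[0, -1]


# Generate a training set from the simulator (the "virtually unlimited" data source).
rng = np.random.default_rng(0)
params = rng.uniform([0.1, 1.0], [1.0, 10.0], size=(2000, 2))
targets = np.array([simulate(c, k) for c, k in params])

X = torch.tensor(params, dtype=torch.float32)
y = torch.tensor(targets, dtype=torch.float32).unsqueeze(1)

# A small fully connected network serves as the learned surrogate.
surrogate = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(X), y)
    loss.backward()
    optimizer.step()

# Once trained, the network predicts in microseconds what the ODE solver
# computes by time stepping -- the basic speed-up behind synthesis modeling.
test = torch.tensor([[0.5, 4.0]], dtype=torch.float32)
print("surrogate:", surrogate(test).item(), "simulation:", simulate(0.5, 4.0))
```

In practice the expensive physics code plays the role of `simulate`, and the surrogate is used where full simulations would be too slow or too costly to run exhaustively.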

Table Of Contents
- Introduction
- Applying Machine Learning In HPC
- Three Approaches To Applying Machine Learning In HPC
- Machine Learning Use Cases In HPC
- Application Of Model Modulation At ITER
- Conclusions
- Figure 1: The Synthesis Of Numerical Analysis And Machine Learning Can Create New Predictive Simulation Models
- Figure 2: Machine Learning Can Improve Neutrino Detection By Combining Simulation Results From Different Models To Produce A Superior Model
- Figure 3: Bose-Einstein Condensate Achieved Convergence After Only 10-12 Experiments Using Machine Learning, Compared To 140 Experiments Using the Traditional Approach
- Figure 4: Example Use Cases For Synthesis Modeling
- Figure 5: Spinning Black Holes Create Gravitational Waves, Ripples In The Fabric Of Space And Time. Machine Learning Is Now Enhancing Our Understanding Of These Phenomena
Companies Cited
- Caltech
- Fermilab
- Laser Interferometer Gravitational Wave Observatory (LIGO)
- NVIDIA
- National Center For Supercomputing Applications (NCSA)
- University Of Florida
- University Of North Carolina
- University Of New South Wales