Breaking through a physics bottleneck

Drawing on decades of measurements, physicists can simulate particle collisions in large experiments, like those at the Large Hadron Collider (LHC), with remarkable precision. However, traditional simulation pipelines are so time-consuming that they will be nearly impossible to scale to the growing data demands of the coming years. Now, research by Prof. Eilam Gross and his team in the Department of Particle Physics and Astrophysics has introduced a new, artificial intelligence (AI)-driven approach that enables efficient simulation of data from any detector through a process accessible to both experimentalists and theoreticians.

The LHC is the most ambitious scientific infrastructure project ever built. By accelerating two counter-rotating proton beams, the LHC generates more than a billion collisions per second, two to three orders of magnitude more than previous accelerator projects and at nearly seven times the collision energy. These collisions are captured by two large general-purpose detectors, ATLAS and CMS. The Weizmann Institute has long played a leading role in the ATLAS experiment. For example, Prof. Gross led the ATLAS effort toward the discovery of a long-predicted particle, the Higgs boson, in 2012.

The complexity of particle physics experiments strains available computational resources. To address this challenge, the physics community has developed fast simulation techniques that replace the microscopic details of full detector models with simpler approximations. The simulated detector signals are then processed with reconstruction algorithms to infer the types, energies, and directions of the particles produced in each collision.

In recent years, applying AI approaches in the form of deep learning has greatly improved the accuracy of fast simulation techniques. Deep learning has also led to innovative ways to expand the scope of fast simulation. Nevertheless, fast simulation tools are typically tailored to one of two communities with distinct needs: experimentalists, who use fast simulations for statistical hypothesis testing, and theoreticians, who use them to explore new models. The experimentalist style is too complex and task-specific to be of use to theorists, while the theorist style lacks sufficient precision for experimentalists.

Now, enabled by cutting-edge advances in deep learning, Prof. Gross and collaborators have devised a new fast simulation approach that meets the needs of both communities. In a paper published in Physical Review Letters, graduate students Dmitrii Kobylianskii and Nathalie Soybelman, together with postdoctoral fellow Dr. Etienne Dreyer and Prof. Gross, introduced a new way to address this challenge, leveraging a machine-learning technique known as conditional flow matching. The team's new method, developed in collaboration with Dr. Benjamin Nachman from the Berkeley Institute for Data Science, is called Particle-flow Neural Assisted Simulations (PARNASSUS).
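
To give a sense of how conditional flow matching works in this setting, the sketch below is a minimal, generic PyTorch illustration rather than the team's actual PARNASSUS code; the network architecture, feature choices, and training details are assumptions made for brevity. A small neural network learns a velocity field that transports Gaussian noise into detector-level particle features, conditioned on the truth-level particles of a simulated collision; generating synthetic data then amounts to integrating that learned field.

```python
# A minimal, generic sketch of conditional flow matching (illustrative only;
# not the PARNASSUS implementation). A neural "velocity field" is trained so
# that, starting from Gaussian noise, it flows toward detector-level particle
# features while being conditioned on the truth-level (generator) particles.
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Small MLP predicting the flow velocity v(x_t, t, cond)."""
    def __init__(self, x_dim: int, cond_dim: int, hidden: int = 128):
        super().__init__()
        self.x_dim = x_dim
        self.net = nn.Sequential(
            nn.Linear(x_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, t, cond], dim=-1))

def flow_matching_loss(model, x1, cond):
    """Training objective: match the velocity of a straight-line path from noise to data.

    x1   -- target features to simulate (e.g., reconstructed-particle kinematics)
    cond -- conditioning features (e.g., truth-level particle kinematics)
    """
    x0 = torch.randn_like(x1)                        # noise sample at t = 0
    t = torch.rand(x1.size(0), 1, device=x1.device)  # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1                      # point on the interpolation path
    target_velocity = x1 - x0                        # constant velocity of that path
    return ((model(x_t, t, cond) - target_velocity) ** 2).mean()

@torch.no_grad()
def sample(model, cond, steps: int = 50):
    """Generate synthetic events by integrating the learned flow with Euler steps."""
    x = torch.randn(cond.size(0), model.x_dim, device=cond.device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((cond.size(0), 1), i * dt, device=cond.device)
        x = x + dt * model(x, t, cond)
    return x
```

In practice, the conditioning and target features would be the per-particle quantities an analysis needs, such as momenta and particle type, and the simple network here stands in for whatever architecture the authors actually use; the sketch is meant only to convey the noise-to-data flow idea behind the method.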

PARNASSUS utilizes deep learning to accurately perform detector simulation and reconstruction in one step. This approach establishes a new paradigm for fast, accurate, analysis-ready synthetic particle physics data that can serve experimental and theoretical physicists alike. Although the researchers demonstrated their method using publicly available data from the CMS detector, their approach can be readily extended to a range of existing and future detector experiments. By making it feasible to simulate data for future LHC runs and by serving as a versatile tool for model exploration, PARNASSUS can accelerate the quest for new insights that will extend the Standard Model of particle physics.

Members of the Gross lab (from left): Dr. Etienne Dreyer, PhD students Dmitrii Kobylianskii, Nathalie Soybelman, and Nilotpal Kakati, and Prof. Eilam Gross and his dog, Richard Feynman the 2nd.

Prof. Eilam Gross is Head of the Nella and Leon Benoziyo Center for High Energy Physics.