Petaflops is a measure of computational performance commonly used in supercomputing and high-performance computing (HPC). One petaflop equals one quadrillion (10^15) floating-point operations per second. This metric reflects the speed at which a system performs mathematical calculations, particularly those involving floating-point arithmetic. The term gained prominence with IBM's Roadrunner, the first supercomputer to break the petaflop barrier in 2008. Achieving one petaflop marked a significant milestone in computational science, enabling more complex simulations and data analysis. https://en.wikipedia.org/wiki/FLOPS
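The unit conversion above is straightforward arithmetic; a minimal Python sketch (the function name and the sample operation count are illustrative, not from any official benchmark) makes the scale concrete:

```python
# One petaflop = 1e15 floating-point operations per second (SI peta prefix).
PETA = 10**15

def to_petaflops(ops_per_second: float) -> float:
    """Convert a raw rate of floating-point operations per second to petaflops."""
    return ops_per_second / PETA

# Roadrunner's 2008 milestone was just over one petaflop; 1.026e15 ops/s
# is used here as an illustrative figure.
print(to_petaflops(1.026e15))  # → 1.026
```

Any sustained rate at or above 10^15 operations per second thus maps to a petaflop value of 1.0 or more.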
Modern HPC systems and GPUs regularly achieve multi-petaflop performance, enabling breakthroughs in fields like climate modeling, astrophysics, and molecular dynamics. For example, the Summit supercomputer at Oak Ridge National Laboratory, built by IBM and operational since 2018, delivers a peak performance of 200 petaflops. This level of computational power allows researchers to simulate physical phenomena at unprecedented scales and resolutions, significantly advancing scientific discovery and technological innovation. https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/
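To give a feel for what a 200-petaflop peak means in practice, the sketch below estimates wall-clock time for a fixed operation count at a given sustained rate. The 10^21-operation workload is a hypothetical example for illustration, not a real benchmark, and real applications rarely sustain a machine's peak rate:

```python
PETA = 10**15  # one petaflop, in operations per second

def runtime_seconds(total_ops: float, petaflops: float) -> float:
    """Time to complete `total_ops` operations at a sustained `petaflops` rate."""
    return total_ops / (petaflops * PETA)

# Hypothetical 10^21-operation simulation at Summit's 200-petaflop peak:
print(runtime_seconds(1e21, 200))  # → 5000.0 seconds, i.e. under 1.5 hours
```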
Petaflop performance is not limited to supercomputers; it is also becoming accessible in more compact and specialized systems. AI-focused platforms like the NVIDIA DGX A100, introduced in 2020, achieve petaflop-level performance through optimized GPU architectures. These systems are tailored for machine learning, real-time analytics, and large-scale data processing. As technology progresses toward exascale computing, petaflops will remain a critical benchmark for measuring computational capability in both scientific and commercial applications. https://www.nvidia.com/en-us/data-center/dgx-a100/
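The exascale transition mentioned above is a fixed factor of the SI prefixes; a one-line check (assuming only the standard peta and exa definitions):

```python
# Standard SI prefixes: peta = 10^15, exa = 10^18.
PETAFLOP = 10**15  # operations per second
EXAFLOP = 10**18

# An exaflop machine is one thousand times faster than a one-petaflop machine.
print(EXAFLOP / PETAFLOP)  # → 1000.0
```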