The world’s fastest supercomputers are changing fast

Modern computing workloads – including scientific simulations, visualization, data analysis and machine learning – are forcing supercomputing centers, cloud providers and businesses to rethink their computing architecture.

Processor, network or software optimizations alone cannot meet the latest needs of researchers, engineers and computer scientists. Instead, the data center is the new unit of computing, and organizations need to look at the entire technology stack.

The latest rankings of the world’s most powerful systems continue to show momentum for this full-stack approach in the latest generation of supercomputers.

NVIDIA technologies are accelerating over 70 percent, or 355, of the systems on the TOP500 list released at the SC21 high-performance computing conference this week, including over 90 percent of all new systems. That’s an increase from 342 systems, or 68 percent, of the machines on the TOP500 list released in June.

NVIDIA also continues to have a strong presence on the Green500 list of the most energy-efficient systems, powering 23 of the top 25 systems on the list, unchanged from June. On average, NVIDIA GPU-powered systems deliver 3.5x higher power efficiency than non-GPU systems on the list.

Microsoft’s GPU-accelerated Azure supercomputer highlighted the emergence of a new generation of cloud-native systems, placing 10th on the list – the first top-10 finish for a cloud-based system.

AI is revolutionizing scientific computing. The number of research papers combining HPC and machine learning has grown dramatically in recent years, from around 600 ML + HPC papers submitted in 2018 to almost 5,000 in 2020.

The ongoing convergence of HPC and AI workloads is also underlined by new benchmarks such as HPL-AI and MLPerf HPC.

HPL-AI is an emerging benchmark for converged HPC and AI workloads that use mixed-precision math – the foundation of deep learning and many scientific and commercial jobs – while still delivering the full accuracy of double-precision math, the standard measure for traditional HPC benchmarks.
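
The core idea behind this is iterative refinement: factor and solve the system in fast, low-precision arithmetic, then use full-precision residuals to polish the answer back to double-precision accuracy. Below is a minimal NumPy sketch of that idea, using FP32 as a stand-in for the FP16/TF32 math GPUs actually run; it illustrates the technique, not the HPL-AI code itself.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=10, tol=1e-12):
    """Solve Ax = b with a low-precision solve refined to FP64 accuracy."""
    A_lo = A.astype(np.float32)   # low-precision copy (stand-in for FP16/TF32)
    x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x             # residual computed in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Real implementations reuse one LU factorization; re-solving keeps the sketch short.
        d = np.linalg.solve(A_lo, r.astype(np.float32)).astype(np.float64)
        x += d                    # low-precision correction, high-precision update
    return x

n = 500
A = np.random.rand(n, n) + n * np.eye(n)   # well-conditioned test matrix
b = np.random.rand(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # ~1e-13: FP64-level accuracy
```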

And MLPerf HPC addresses a computing style that uses AI to accelerate and enhance simulations on supercomputers, benchmarking performance on three key workloads for HPC centers: astrophysics (CosmoFlow), weather (DeepCAM) and molecular dynamics (OpenCatalyst).

NVIDIA addresses the entire stack with GPU-accelerated processing, smart networking, GPU-optimized applications, and libraries that support the convergence of AI and HPC. This approach has accelerated workloads and enabled scientific breakthroughs.

Let’s take a closer look at how NVIDIA supercharges supercomputers.

Accelerated computing

The combined power of the GPU’s parallel processing capabilities and over 2,500 GPU-optimized applications allows users to speed up their HPC jobs, in many cases from weeks to hours.

We are constantly optimizing the CUDA-X libraries and GPU-accelerated applications, so it’s not uncommon for users to see multi-fold performance gains on the same GPU architecture.

As a result, the performance of the most widespread scientific applications – which we call “the golden suite” – has improved 16x over the last six years, with more progress on the way.

Full-stack innovation has delivered 16x performance gains on top HPC, AI and ML applications.**

And to help users quickly take advantage of higher performance, we offer the latest versions of AI and HPC software through containers from the NGC catalog. Users simply pull and run the application on their supercomputer, in the data center or in the cloud.

Convergence of HPC and AI

The infusion of AI into HPC helps researchers speed up their simulations while achieving the accuracy they would get with the traditional simulation approach.

This is why an increasing number of scientists are utilizing artificial intelligence to speed up their discoveries.

Among them are four of the finalists for this year’s Gordon Bell Prize, the most prestigious award in supercomputing. Organizations are racing to build exascale AI computers to support this new model, which combines HPC and AI.

This strength is underscored by the results on the relatively new HPL-AI and MLPerf HPC benchmarks described above.

To boost this trend, NVIDIA last week announced a wide range of advanced new libraries and software development kits for HPC.

Graphs – a key data structure in modern computer science – can now be projected into deep neural network frameworks with the Deep Graph Library, or DGL, a Python package.
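
As a rough illustration of what that looks like in practice, the snippet below builds a tiny graph in DGL and runs one graph-convolution layer over it with the PyTorch backend; the graph shape and feature sizes are arbitrary stand-ins.

```python
import dgl
import torch
from dgl.nn import GraphConv

# A tiny directed 4-node ring given as (source, destination) edge lists
src = torch.tensor([0, 1, 2, 3])
dst = torch.tensor([1, 2, 3, 0])
g = dgl.add_self_loop(dgl.graph((src, dst), num_nodes=4))  # GraphConv expects self-loops

feats = torch.randn(4, 8)                  # one 8-dimensional feature vector per node
layer = GraphConv(in_feats=8, out_feats=2)
out = layer(g, feats)                      # message passing -> (4, 2) node embeddings
print(out.shape)
```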

NVIDIA Modulus builds and trains physics-informed machine learning models that can learn and obey the laws of physics.
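
The underlying idea is to add the governing equations to the training loss, so the network is penalized whenever it violates the physics. The sketch below shows that pattern in plain PyTorch for the toy ODE u'(t) = -u(t) with u(0) = 1; it illustrates the physics-informed approach in general, not Modulus’s actual API.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)        # random collocation points in [0, 1)
    u = net(t)
    du_dt, = torch.autograd.grad(u.sum(), t, create_graph=True)
    pde_loss = ((du_dt + u) ** 2).mean()              # residual of the equation u' + u = 0
    ic_loss = ((net(torch.zeros(1, 1)) - 1.0) ** 2).mean()  # initial condition u(0) = 1
    loss = pde_loss + ic_loss
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[1.0]])).item())              # approaches exp(-1) ≈ 0.368
```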

And NVIDIA introduced three new libraries:

  • ReOpt – to increase the operational efficiency of the $10 trillion logistics industry.
  • cuQuantum – to accelerate quantum computing research.
  • cuNumeric – to bring GPU acceleration to NumPy for scientists, data analysts and machine learning researchers in the Python community (a minimal sketch follows this list).
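
For cuNumeric, the intended migration path is a one-line import change: existing NumPy code keeps working while arrays are distributed and accelerated underneath. The sketch below assumes a machine with cuNumeric and its Legate runtime installed; the array sizes are arbitrary.

```python
# Drop-in replacement: change the import, keep the NumPy code unchanged.
import cunumeric as np   # instead of: import numpy as np

x = np.random.rand(1_000_000)
y = np.random.rand(1_000_000)
print(np.dot(x, y))      # runs GPU-accelerated under the Legate runtime
```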

NVIDIA Omniverse – the company’s virtual world simulation and collaboration platform for 3D workflows – weaves it all together.

Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, of the 5G edge, robots, self-driving cars and even avatars.

Last week, NVIDIA announced that it will use Omniverse to build Earth-2, a supercomputer dedicated to predicting climate change by creating a digital twin of the planet.

Cloud-native supercomputing

As supercomputers take on more workloads across data analysis, AI, simulation and visualization, CPUs are being stretched to support an increasing number of communication tasks needed to operate large and complex systems.

Data processing units relieve this stress by offloading some of these tasks.

As a fully integrated data-center-on-a-chip platform, NVIDIA BlueField DPUs can offload and manage data center infrastructure tasks rather than burdening the host processor, enabling stronger security and more efficient orchestration of the supercomputer.

Combined with the NVIDIA Quantum InfiniBand platform, this architecture delivers optimal bare-metal performance while natively supporting multi-node tenant isolation.

NVIDIA’s Quantum InfiniBand platform provides predictive, bare-metal performance isolation.

Thanks to a zero-trust approach, these new systems are also more secure.

BlueField DPUs isolate applications from infrastructure. NVIDIA DOCA 1.2 – the latest BlueField software platform – enables next-generation distributed firewalls and wider use of line-rate data encryption. And NVIDIA Morpheus uses deep learning-powered data science to detect intruder activity in real time, assuming an attacker is already inside the data center.

And all the trends outlined above will be accelerated by new networking technology.

NVIDIA Quantum-2, also announced last week, is a 400 Gbps InfiniBand platform consisting of the Quantum-2 switch, the ConnectX-7 NIC and the BlueField-3 DPU, along with new software for the new network architecture.

NVIDIA Quantum-2 offers the benefits of bare-metal high performance and secure multi-tenancy, enabling the next generation of supercomputers to be secure, cloud-native and better utilized.

** Benchmark applications: Amber, Chroma, GROMACS, MILC, NAMD, PyTorch, Quantum Espresso, Random Forest FP32, TensorFlow, VASP | GPU node: dual-socket CPUs with 4x P100, V100 or A100 GPUs.
