ComnetCo Blog

What to Expect When You’re Expecting a Supercomputer
In May 2022, the US Department of Energy's newest supercomputer, Frontier, became fully operational at Oak Ridge National Laboratory (ORNL) in Tennessee. Built by Hewlett Packard Enterprise (HPE), it replaced Fugaku, the result of a collaboration between Fujitsu and Japan's RIKEN Center for Computational Science (R-CCS), as the fastest machine on the Top500 list. At the time, the list's authors called it the "only true exascale* machine on the list." Since then, another HPE machine, Aurora, has become the second exascale system in the world, sitting just below Frontier at number two on the Top500 list.

The HPE Cray Frontier supercomputer — the fastest machine on the planet.
Only a decade ago, many experts doubted we could ever reach exascale computing, i.e., 10¹⁸ floating-point operations per second (flops). For one, they believed an exascale computer would require 100 megawatts of electrical power to operate, making it impractical. Nonetheless, today a fully operational exascale computer is busy at the Tennessee lab, as is a second machine at the US DoE's Argonne National Laboratory, helping researchers tackle problems of national importance that could not be addressed by existing supercomputing platforms. Some of these scientific challenges include enhancing nuclear reactor efficiency and safety, uncovering the underlying genetics of diseases, and further integrating artificial intelligence (AI) with data analytics, modeling, and simulation.

The HPE Cray Aurora supercomputer — the fastest AI machine on the planet.
Aurora, the latest verified exascale-class supercomputer, is an HPE Cray system with additional compute and accelerator infrastructure provided by Intel. It sits just below Frontier on the Top500 list of the world's fastest supercomputers.
HPC systems come right-sized for a wide range of needs
Not everyone needs a $600 million exascale computer that fills a room larger than two professional basketball courts. However, a growing number of organizations are looking to take advantage of high-performance computing (HPC). Some plan to upgrade existing HPC systems, while others will deploy a supercomputer for the first time.
HPC systems range from clusters of high-speed computer servers to purpose-built supercomputers that employ millions of processors or processor cores. When people think of these massive machines, they typically think of CPUs and GPUs. Linked together with cutting-edge networking fabrics, those processors become capable of crunching massive amounts of data to solve the most complex computational problems. They enable the organizations running them to achieve everything from advancing human knowledge of the universe to creating significant competitive advantages.
Some of the forces driving increased adoption of HPC include better productivity and faster results with greater accuracy. Specifically, supercomputers and HPC systems offer unique capabilities such as rapidly modeling and simulating the world around us, both theoretically and physically, including simulations that rely on solving partial differential equations (a toy example appears after the list below). HPC can also power applications like graph database analytics, which offers the potential to solve problems previously thought to be unsolvable, including such tasks as deanonymizing the Bitcoin blockchain to uncover perpetrators of cyberextortion, cryptocurrency exchange hacks, and terrorist and WMD financing. And in the exploding area of artificial intelligence, machines tailored to the needs of AI can, for example, digest the massive data sets used to train the large language models (LLMs) that enable generative AI.
Some of these challenging, data-intensive problems include:
- Modeling and simulation
- Electromagnetic simulation
- Computational fluid dynamics (CFD)
- Finite element method (FEM)
- Computational chemistry
- Complex graph database analytics
- Oil and gas exploration
- Molecular modeling and drug discovery
- Nuclear fusion research
- Cryptoanalysis
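To ground the modeling-and-simulation category mentioned above, here is a minimal, illustrative Python sketch of the kind of partial-differential-equation solve HPC systems accelerate: one explicit finite-difference loop for the 1D heat equation. The grid size, time step, and diffusivity are arbitrary stand-ins; real HPC codes distribute far larger grids across thousands of nodes.

```python
# Toy 1D heat-equation solver (explicit finite differences).
# Illustrative only: all parameters are made up for this sketch.
import numpy as np

nx, nt = 100, 500                    # grid points and time steps (hypothetical)
alpha, dx, dt = 0.01, 0.01, 0.0005   # diffusivity, spacing, step (chosen for stability)

u = np.zeros(nx)
u[nx // 2] = 1.0                     # initial heat spike in the middle of the rod

for _ in range(nt):
    # Discretized form of du/dt = alpha * d2u/dx2
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak temperature after {nt} steps: {u.max():.4f}")
```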
AI further advances these HPC applications through machine learning and deep learning. These workloads are driving innovation in:
Healthcare and life sciences
Applications include everything from genomics to molecular modeling to image analysis for faster, more accurate cancer diagnosis, along with personalized medicine for more targeted treatment.
Energy
Government agencies, green energy researchers, and traditional oil & gas companies apply the massive processing power of HPC in applications such as seismic data processing, reservoir simulation and modeling, and wind energy simulation. HPC simulations help these users predict, for example, where they can find oil reserves or whether a reservoir may extend into one on a neighboring property.
Government and defense
HPC systems provide ideal platforms for processing vast amounts of data, such as that required in weather forecasting and climate change modeling. This capability also proves indispensable in many government applications, including AI-based large-scale satellite image analysis, defense research, and intelligence work.
Financial services
Fraud detection, risk analysis, and Monte Carlo simulation represent just some of the more common applications.
Who is the typical HPC buyer?
With the availability of high-powered cloud computing coupled with AI, organizations are getting a taste of the possibilities offered by HPC. Consequently, whereas not long ago the typical HPC buyer was in the ivory tower, the universe of users is rapidly expanding. From manufacturing to aerospace to pharmaceuticals, commercial buyers typically apply the machine to a single task. Universities and government research centers, the other main buyers of supercomputers, almost always have multiple users accessing the machine's processing power, not only for interdisciplinary research but also for novel use cases.
What are the various HPC architectures?
Not all HPC systems are created equal. They vary greatly in their components and in how those components are packaged together. Those components typically include a CPU and an accelerator, such as an FPGA or GPU, along with memory, storage, and networking. HPC nodes, or servers, can be organized in a variety of architectures that work in unison: clustered nodes that break a problem into pieces, and parallel computing that combines enough processing power to handle complex computational tasks holistically. Parallel architectures allow HPC clusters to execute large workloads by splitting them into separate computational tasks that are carried out at the same time (a minimal sketch follows below). A supercomputer, by contrast, is a single machine (even if spread across multiple racks), essentially a mainframe on steroids, in which the processors and storage are designed to work as one extremely powerful computer. The two architectures are often difficult to distinguish, so the industry sometimes defines supercomputers simply as HPC systems above a certain price point.
NOTE: Interdisciplinary research can lead to faster time to insight by bringing together diverse schools of science. Supercomputing offers a huge advantage here by facilitating collaboration and cross-pollination of schools of thought.
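To make that parallel model concrete, below is a minimal sketch using Python's standard multiprocessing pool as a stand-in for the message-passing frameworks (such as MPI) that real clusters rely on. The problem size and worker count are hypothetical; the point is only that one large job is split into chunks that execute simultaneously and are then combined into a single result.

```python
# Minimal sketch of splitting a workload into chunks that run in parallel,
# then combining ("reducing") the partial results into one answer.
import math
from multiprocessing import Pool

def partial_sum(bounds):
    """Worker task: sum one slice of a large series."""
    start, stop = bounds
    return sum(1.0 / (k * k) for k in range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 8          # hypothetical problem size and worker count
    step = n // workers
    chunks = [(i * step + 1, (i + 1) * step + 1) for i in range(workers)]

    with Pool(workers) as pool:         # each chunk runs as a separate process
        total = sum(pool.map(partial_sum, chunks))

    # The combined result approximates pi via the Basel series, sum(1/k^2) = pi^2/6.
    print(math.sqrt(6 * total))
```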
What is the right environment for a supercomputer?
When you work with Hewlett Packard Enterprise (HPE) and ComnetCo, the engagement does not stop at choosing the right compute architecture; it includes looking at how to combine all resources in the right environment.
The design phase even includes looking at things like "Can the local facility support the power requirements?" A typical supercomputer consumes anywhere from 1 to 10 megawatts of power, or enough electricity to power almost 10,000 homes. That figure includes the electricity needed not only to run the machine but also to cool it. Furthermore, that electric power needs to be stable. You can condition the power and run it through a UPS to ensure it stays clean, but what happens when a hundred-year flood forces you to switch over to a generator? Finally, a comprehensive design assessment should even include analyzing the cost of various fuels, e.g., natural gas vs. diesel.
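As a rough sanity check on the "almost 10,000 homes" figure, here is a back-of-the-envelope calculation. The average household draw is an assumed value (on the order of 1.2 kW of continuous demand), not a number from the original text.

```python
# Back-of-the-envelope check of the power comparison above.
system_draw_mw = 10            # upper end of the 1-10 MW range quoted in the text
household_draw_kw = 1.2        # assumed average continuous household demand

homes_powered = system_draw_mw * 1000 / household_draw_kw
print(f"~{homes_powered:,.0f} homes")   # roughly 8,300, i.e. "almost 10,000"
```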
Getting Started
The first step in buying and deploying an HPC system involves partnering with a leading manufacturer and trusted experts who can help you navigate the entire deployment process, from pre-sales scoping studies to purchase to deployment to after-deployment support. When you engage with HPE and ComnetCo, we sit down with you and consider your unique needs.
A good place to start is asking yourselves what you are trying to achieve. The answer to that question will help determine how much compute power you need. Once we have a handle on the capacity and speed your researchers require, we collaborate with you to decide what type of architecture you will need: scale out or scale up. A scale-out architecture essentially allows you to combine multiple machines into a single system with a larger memory pool, while scale-up increases the performance of your existing machine and in many cases extends its lifecycle. We also help you decide whether an HPC cluster or a full-blown supercomputer best fits your goals. These are not easy questions to answer because, just as no two snowflakes are alike, virtually every HPC system comes down to a custom build based on unique needs, even at the level of individual nodes.
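As a rough illustration of how the scale-out versus scale-up decision plays out in sizing, here is a minimal sketch. The memory requirement and per-node capacity are hypothetical numbers chosen for illustration, not vendor specifications.

```python
# Hypothetical sizing sketch contrasting scale-out and scale-up.
import math

required_memory_tb = 24        # total memory the workload needs (assumed)

# Scale-out: pool many identical nodes until the aggregate memory suffices.
memory_per_node_tb = 1.5       # assumed per-node memory
nodes_needed = math.ceil(required_memory_tb / memory_per_node_tb)
print(f"scale-out: {nodes_needed} nodes of {memory_per_node_tb} TB each")

# Scale-up: configure a single system large enough to hold the whole
# working set in one memory space.
print(f"scale-up: one system configured with >= {required_memory_tb} TB")
```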
What will the engagement look like?
Once we have assessed the basic needs, such as processing power, storage, and electric power and cooling requirements, we look for potential pitfalls and how to avoid them. This ability to see hurdles early so you can avoid them constitutes one of the key advantages of working with a leading manufacturer together with ComnetCo experts. Using HPE Cray Superdome and Superdome Flex servers, ComnetCo has deployed everything from highly efficient HPC clusters to some of the world’s fastest supercomputers, such as three Top100 machines at Idaho National Laboratory.
One of the first questions buyers ask is, "How long will it take to deploy our system?" The answer varies depending on whether you plan to purchase an HPC cluster or a factory-built supercomputer. The charts below show the typical phases and timelines, start to finish, involved in both types of deployments.
Buying a Supercomputer Timeline

Case in Point: Sawtooth
Let us look at the example of a very fast supercomputer. Named after a central Idaho mountain range, the Sawtooth supercomputer at Idaho National Laboratory (INL) went online in 2019. At a cost of $19.2 million, the system ranked #37 on the 2019 Top500 list of the fastest supercomputers in the world. That is the highest ranking ever reached by an INL supercomputer.
As you can see, deploying an HPC system or supercomputer requires careful planning in concert with guidance from experienced experts on everything from choosing a right-sized system down to critical advice on the right interconnects. In addition to working with the world's leading manufacturer of HPC systems, Hewlett Packard Enterprise, ComnetCo also offers more than 25 years of experience in deploying systems of all sizes. Plus, once your system comes online, you can count on the backing of these two leading companies for reliable, responsive support across the system's complete lifecycle.
* As of May 2023