High Performance Computing (HPC)

Density-Optimized Solutions for High-Performance Computing & Artificial Intelligence

From CryoEM to Computer-Aided Engineering, ComnetCo custom-designs server solutions optimized for your HPC applications. Vast experience and access to talent within HPE and our partner ecosystem ensure your system will solve the toughest science and engineering problems—rather than simply consuming electricity.

Overview of HPC Solutions

The typical workloads running on ComnetCo-designed high-performance computing clusters include:

  • Molecular dynamics
  • Chemical engineering
  • Computer-aided engineering
  • Electronic design automation
  • Digital twins
  • Mechanical design
  • Precision medicine
  • Genome sequencing
  • Drug discovery
  • Cryogenic electron microscopy
  • Fraud and anomaly detection
  • Quantitative pre-trade analytics
  • Algorithmic trading
  • Cybersecurity
  • Crash test simulations
  • Wind tunnel simulations
  • Automated driving systems
  • Seismic exploration
  • Reservoir simulation
  • Training of machine learning algorithms

All of these workloads have one thing in common: they require a high-speed, low-latency interconnect to move the massive amounts of data processed in parallel by the compute resources. ComnetCo chooses the most appropriate technology—100, 200, or 400 Gbps networks such as Ethernet, InfiniBand, or HPE Slingshot—to connect not only all compute nodes with each other but also with shared high-speed HPC storage.
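
To make this concrete, the sketch below (Python with mpi4py, assumed here purely for illustration) shows the kind of collective operation that dominates these workloads: every node contributes a partial result, and the fabric's bandwidth and latency determine how fast the exchange completes.

```python
# A minimal sketch of the communication pattern these workloads share.
# On a real cluster, MPI runs over the InfiniBand, Slingshot, or
# high-speed Ethernet fabric described above; the workload is a toy.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank computes a partial result on its slice of the problem...
local_sum = np.array([float(rank + 1)])

# ...then all ranks combine results. This collective is latency- and
# bandwidth-sensitive, which is why the interconnect choice matters.
global_sum = np.zeros(1)
comm.Allreduce(local_sum, global_sum, op=MPI.SUM)

if rank == 0:
    print(f"Global sum across {comm.Get_size()} ranks: {global_sum[0]}")
```

Launched with, for example, `mpirun -np 4 python script.py`, every rank blocks in the Allreduce until the slowest exchange finishes, so the whole job runs at the speed of the network.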

Higher Education & Research

Academic institutions have long stood at the epicenter of discovery, helping to solve some of the world’s toughest and most complex problems. That fact remains unchanged, especially as recent advancements in high-performance computing (HPC) and artificial intelligence (AI) further empower academic researchers to ask the bigger, more difficult questions.

HPC can help scientists and researchers accurately analyze their data and improve outcomes for patients—from discovering new drugs to finding the best-tailored treatment options.

These needs hold true whether it is discovering drugs to tame an urgent pandemic, seeing how a cancer patient will respond to treatment, predicting the performance of a molecule, or using AI and predictive analytics to proactively spot signs of a disaster. The urgent need to find answers faster drives the requirement for ever more powerful and sophisticated computing resources.

Engineering & Manufacturing

HPC Solutions That Enhance Product Development & Time-To-Market

As manufacturers of all sizes struggle with cost and competitive pressures—and as products become smarter, more complex, and highly customized—the use of computer-aided engineering (CAE) is growing.

Using CAE, engineers take advantage of HPC to design and test ideas for new products without having to physically build many expensive prototypes. This helps manufacturers lower costs, enhance productivity, improve quality, and reduce time-to-market by focusing on designs that have the best potential for market success. CAE also helps drive innovation and enhance collaboration throughout the supply chain, while mitigating the risks and costs associated with potential product failure.

CAE applications such as computational fluid dynamics (CFD) and finite element analysis (FEA) have different computing requirements. CFD solvers are often memory-bound and benefit from servers with large amounts of memory, multiple memory channels, and a large L3 cache per core.
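
As a rough illustration of that memory-bound behavior, the toy stencil sweep below (NumPy assumed; the grid size and iteration count are arbitrary choices) performs only a handful of arithmetic operations per value moved, so its speed is set by memory bandwidth rather than raw compute.

```python
# A toy Jacobi-style stencil sweep, the access pattern behind many CFD
# solvers: each update reads four neighbors and writes one value, so
# throughput is governed by memory bandwidth, not arithmetic.
import numpy as np

n = 2048
grid = np.zeros((n, n))
grid[0, :] = 100.0  # fixed-temperature boundary row

for _ in range(50):
    # Four neighbor reads per cell, one write: very few flops per byte.
    grid[1:-1, 1:-1] = 0.25 * (
        grid[:-2, 1:-1] + grid[2:, 1:-1] +
        grid[1:-1, :-2] + grid[1:-1, 2:]
    )

print(f"Interior mean after 50 sweeps: {grid[1:-1, 1:-1].mean():.3f}")
```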

Implicit FEA involves computationally expensive sparse matrix inversion, which is typically limited by memory size and bandwidth. Explicit FEA problems, such as crash and transient non-linear analysis, need high processor performance. These workloads benefit from higher core-counts and high-frequency processors with large amounts of cache.
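
The sketch below illustrates the implicit-FEA pattern with SciPy's sparse direct solver. The 1-D tridiagonal "stiffness" matrix is a stand-in for a real model, and SciPy is assumed only for illustration; the point is that the solve streams far more data than it computes on, which is why memory size and bandwidth dominate.

```python
# Toy illustration (not production FEA code) of an implicit solve:
# assemble a large sparse system K u = f and invert it directly.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 100_000  # one unknown per node in a 1-D toy mesh

# Tridiagonal "stiffness" matrix, analogous to a 1-D bar discretization.
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
f = np.ones(n)  # uniform load vector

# Direct sparse solve. Memory capacity and bandwidth dominate the cost,
# which is why large caches and many memory channels help.
u = spsolve(K, f)
print(u[:5])
```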

Depending on the type of CAE problem, the right mix of high core-count processors, high clock frequencies, generous cache per core, high memory bandwidth, and massive I/O proves essential.

Among the CAE disciplines, the most demanding is Multiphysics, because it combines several CAE applications such as structural analysis, fluid mechanics, mechanical dynamics, and electromagnetics. These comprehensive, high-fidelity simulations can help to accurately predict how complex products behave in real-world environments. Iterative design exploration studies are also used extensively to simulate, design, and optimize complex systems. Multiphysics design optimization studies require very detailed geometric models and large meshes evaluated over thousands of operating scenarios. This challenge puts enormous stress on the HPC infrastructure and often leads companies and research institutions to explore supercomputing for their biggest challenges.
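
The sketch below gives a schematic of such a design exploration study. The objective function and parameter ranges are invented stand-ins, and a real study would dispatch full Multiphysics solves through a cluster scheduler rather than a local process pool:

```python
# Schematic design exploration: evaluate a (stand-in) simulation over
# thousands of operating scenarios in parallel and pick the worst case.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(params):
    """Stand-in for one high-fidelity simulation run."""
    load, temperature = params
    # Toy response surface; a real study would launch a solver here.
    stress = 0.8 * load + 0.05 * temperature ** 1.5
    return params, stress

if __name__ == "__main__":
    # Thousands of operating scenarios, as in the studies described above.
    scenarios = list(product(range(100), range(50)))  # 5,000 combinations
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, scenarios, chunksize=100))
    worst = max(results, key=lambda r: r[1])
    print(f"Worst-case scenario {worst[0]} -> stress {worst[1]:.1f}")
```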

Digital Twins

A Digital Twin is a virtual copy of a physical asset that incorporates real-time data captured via sensors. By integrating Multiphysics simulations, IoT, and machine learning, a Digital Twin can enhance a product from design to manufacturing and servicing.

A key goal of CAE and associated disciplines is also simulating how a product or object will behave in a given physical environment, and how that behavior may change based on myriad variables. Traditionally, this meant building one physical prototype after another and then putting them through a barrage of tests. Often costly and heavily manual undertakings, these tests consume time and resources. The Digital Twin has helped revolutionize this process. Creating a digital version of a physical product and testing it against those variables through modeling and simulation (mod/sim) enables manufacturers to reduce costs, improve product quality, and even provide a path toward predictive maintenance.

HPC can help evolve the ways in which a product’s Digital Twin can add value throughout the product lifecycle. Digital prototypes can be created and tested whenever new data is available. For example, rather than simply testing prototypes during the initial design phase, engineers can use service or warranty defect data from the field to explore and validate enhancements and updates for existing products. They may also pull in more granular sensor or telemetry data from sources such as IoT devices on the factory floor to help create more detailed and accurate simulations—even during production. This approach can help engineers determine whether the current designs require any adjustments, while they still have time to make them.

Real-time sensor data can also help engineers use the Digital Twin to more accurately predict product anomalies or failures. This predictive capability can help ensure production equipment continues to operate optimally, minimizing unplanned downtime while also reducing maintenance costs. While a Digital Twin model can add huge potential value, it also requires correspondingly large volumes of complex data to make it work. This challenge makes Digital Twins an ideal use case for HPC, which can deliver results quickly despite complex simulations and high volumes of data.
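
As a schematic of that predictive capability, the sketch below flags sensor readings that drift outside the band recent history would predict. The window size and threshold are illustrative assumptions, and a production Digital Twin would compare telemetry against its physics model rather than a simple rolling statistic:

```python
# Minimal sketch: flag sensor readings far outside the trailing window's
# statistics (a simple rolling z-score check). Parameters are illustrative.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=50, threshold=3.0):
    """Yield (index, value) for readings beyond `threshold` standard
    deviations of the trailing window."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# Example: a stable vibration signal with one injected spike.
signal = [1.0 + 0.01 * (i % 7) for i in range(200)]
signal[150] = 5.0  # simulated bearing fault
for idx, val in detect_anomalies(signal):
    print(f"Anomaly at sample {idx}: {val}")
```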

HPE GreenLake for HPC

Leverage the hybrid cloud with HPC-as-a-Service