AI Solutions for Mixed Precision Workloads

Data can transform scientific research, development, and business. Unlock that potential through AI, HPC, and advanced analytics, and realize the value of artificial intelligence faster with proven, practical approaches that create new application experiences and achieve breakthrough innovations. As the world replaces programming with training, let us guide you on your AI journey with the best HPE and NVIDIA products, matched with our best-in-class partner and AI ecosystem.

Cryo-EM is a specialized and rapidly evolving research technique in which scientists flash-freeze a sample down to cryogenic temperatures, place that tiny piece of ice under a cryogenic electron microscope, and use GPUs to compute a 3D reconstruction of the biomolecules in the sample. Great work is being done with Cryo-EM and structural biology today, especially around the novel coronavirus and the search for a cure. Researchers take thousands of 2D digital images of the frozen sample to build a 3D reconstruction, and this is how we know what COVID-19 looks like all the way down to the molecular level.
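To give a sense of why thousands of images are needed, the short Python sketch below (purely synthetic data; not part of any ComnetCo or HPE deliverable) shows how averaging many noisy 2D particle images lifts a faint signal out of the background noise. Real reconstruction pipelines exploit the same principle at vastly larger scale, which is why GPU compute and fast storage matter so much.

```python
# Illustrative only: individual Cryo-EM particle images are extremely noisy,
# and averaging N aligned copies improves signal-to-noise roughly with sqrt(N).
# Everything here is synthetic; this is not a Cryo-EM reconstruction pipeline.
import numpy as np

rng = np.random.default_rng(0)

# A toy 64x64 "particle": a faint disc standing in for a biomolecule.
y, x = np.mgrid[:64, :64]
signal = ((x - 32) ** 2 + (y - 32) ** 2 < 100).astype(float)

def noisy_image(snr: float = 0.1) -> np.ndarray:
    """One simulated micrograph patch: weak signal buried in Gaussian noise."""
    return snr * signal + rng.normal(0.0, 1.0, signal.shape)

for n in (1, 10, 100, 1000):
    avg = np.mean([noisy_image() for _ in range(n)], axis=0)
    # Contrast between the particle and the background, relative to the noise.
    contrast = (avg[signal > 0].mean() - avg[signal == 0].mean()) / avg[signal == 0].std()
    print(f"{n:>5} images averaged -> particle contrast ~ {contrast:.2f}")
```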

HPC for Cryogenic Electron Microscopy (Cryo-EM)

We know that solving molecular structures requires intensive computational and storage resources. Common IT challenges in the field have shown that the underlying IT infrastructure for Cryo-EM is as mission-critical as the microscopes themselves. ComnetCo, with HPE and NVIDIA hardware, can help you manage massive amounts of data and provide timely access to mission-critical data for faster time to discovery. For the end-to-end workflow, we have designed a balanced HPC cluster that can be configured with GPU capacity for the tasks that require it, while improving price/performance by keeping other tasks on the CPU. The extreme requirements of Cryo-EM stress conventional storage, and the highly scalable Cray ClusterStor E1000 parallel filesystem speeds up the entire workflow, from data collection to rendering the final structure.

Get the Most Out of Limited Access to Cryo-EM Resources

Scarcity is the first challenge most structural biologists face when trying to get time scheduled on a Cryo-EM microscope. There are a limited number of scopes and “scope hours” available to individual researchers. HPE’s Cryo-EM Pre-Processing System Blueprint provides the computing power and high-performance storage needed to give researchers pre-processing results quickly, while experiments are still running. This helps scientists identify issues and make adjustments during an experiment so that they can get the most out of limited Cryo-EM microscope time.
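As a rough illustration of the on-the-fly idea behind the blueprint (the paths, polling interval, and preprocess() stub below are hypothetical, and a production system would call dedicated Cryo-EM tools instead), a pre-processing loop can watch the microscope's output directory and handle each new movie as soon as it lands, rather than waiting for the session to end:

```python
# Minimal sketch of on-the-fly pre-processing during a microscope session.
# Hypothetical paths and stub function; real pipelines would hand each movie
# to motion-correction / CTF-estimation tools running on the GPU cluster.
import time
from pathlib import Path

WATCH_DIR = Path("/data/cryoem/session_001")   # hypothetical acquisition directory
POLL_SECONDS = 30

def preprocess(movie: Path) -> None:
    """Placeholder for motion correction and CTF estimation on one movie stack."""
    print(f"pre-processing {movie.name} ...")

def watch_and_preprocess() -> None:
    seen: set[Path] = set()
    while True:
        for movie in sorted(WATCH_DIR.glob("*.tiff")):
            if movie not in seen:
                seen.add(movie)
                preprocess(movie)     # in practice, submit a job to the cluster
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch_and_preprocess()
```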

Reach Scientific Insights Faster

The second challenge investigators face is the challenge of discovery. Scientific insight is an iterative process, and insufficient resources may limit the iterations and approaches a researcher can take in the search for insight. HPE’s Cryo-EM Image Analysis System Blueprint couples the right combination of computing infrastructure with high-performance storage to allow researchers to more quickly reconstruct molecular structures from Cryo-EM images, accelerating the overall scientific research process.

AI Journey

A Powerful Inferencing Solution for Artificial Intelligence

AI Inferencing Use Cases

HPE Apollo 2000 Gen10 system (front, with bezel)

HPE DL380 Gen10 with T4 GPUs

A Better Solution Built for AI Inferencing

AI is constantly challenged to keep up with exploding volumes of data while still delivering fast responses. Not only has AI taken over traditional methods of computing, it has also changed the way industries operate: from healthcare and finance to research and manufacturing, everything has changed in the blink of an eye. AI demands a new breed of performance-accelerated machines that can solve highly complex problems quickly, while simplifying IT management and reducing time to insight.

HPE ProLiant AI Inferencing Bundle with NVIDIA GPUs

The HPE ProLiant DL380 server with NVIDIA T4 GPUs is an ideal platform for AI inference, providing exceptional performance, scalability, and energy efficiency. The T4 GPU, powered by NVIDIA Turing™ Tensor Cores, delivers markedly improved inference performance over prior-generation GPUs and over CPUs. Designed with size and energy efficiency in mind, the T4 is well suited to the scale-out flexibility of the HPE ProLiant DL380, which supports up to seven T4 GPUs per server at low power consumption.
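As a minimal sketch of what this looks like in practice (assuming PyTorch and torchvision are available and using ResNet-50 as a stand-in model, not a specific customer workload), FP16 inference through automatic mixed precision is what maps the heavy matrix math onto the T4's Tensor Cores:

```python
# Minimal sketch: FP16 inference on a Tensor Core GPU such as the T4,
# using PyTorch automatic mixed precision. Model and batch size are stand-ins.
import torch
from torchvision.models import resnet50

device = torch.device("cuda")                 # e.g. one of the T4s in the DL380
model = resnet50(weights=None).eval().to(device)
batch = torch.randn(32, 3, 224, 224, device=device)

with torch.inference_mode():
    # autocast runs matmuls/convolutions in FP16 so they map onto Tensor Cores,
    # while keeping numerically sensitive ops in FP32.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        logits = model(batch)

print(logits.shape)                           # torch.Size([32, 1000])
```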

Designed to Scale

Built to scale, the chassis with up to seven GPUs helps you meet the demanding data read/write requirements on the storage and data-management components of AI environments. To accommodate your unique requirements, the bundles also provide the flexibility to configure CPU, storage, and memory capacity as needed. As your needs grow, you can easily scale these solutions with additional HPE ProLiant server nodes.

An Ideal Training Solution for Machine Learning

Apollo 6500 Gen10+ Chassis

HPE Apollo AI Training Bundle with NVIDIA GPUs

  • Accelerate time to value from your data with AI
  • Specifically engineered for Machine Learning
  • Up to 8 Tesla V100 GPUs with NVLink interconnect

Designed to Scale

  • GPUDirect RDMA helps manage the demanding read/write requirements associated with AI training
  • Bundles provide the flexibility to configure the CPU, storage, and memory capacity
  • NVIDIA GPU Cloud (NGC)-ready server: download GPU-optimized software on day one (see the training sketch below)
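Below is a minimal sketch of the mixed-precision training loop this bundle is built for, using PyTorch's AMP utilities of the kind shipped in NGC containers. The model, data, and hyperparameters are toy stand-ins; on the full system you would typically wrap the model in DistributedDataParallel to spread the work across all eight NVLink-connected V100s.

```python
# Minimal sketch: mixed-precision training with PyTorch AMP, the kind of
# workload the V100's Tensor Cores accelerate. Toy model and synthetic data.
import torch
import torch.nn as nn

device = torch.device("cuda")
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()          # scales the loss to avoid FP16 underflow

for step in range(100):
    inputs = torch.randn(256, 1024, device=device)         # synthetic batch
    targets = torch.randint(0, 10, (256,), device=device)

    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(inputs), targets)              # forward in mixed precision
    scaler.scale(loss).backward()                           # backward on the scaled loss
    scaler.step(optimizer)
    scaler.update()
```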