Intelligent Data Management


What is Intelligent Data Management?

It’s more than just storage: it’s an end-to-end strategy that helps customers manage the full lifecycle of their data, from creation and collection through processing and analysis to ongoing retention and compliance.

It’s about helping customers get the most from their information assets by automating the management of their data in the most cost-effective and scalable way possible.

HPE Intelligent Data Management Solutions

High-Performance Storage Tier Virtualization: DMF is the core policy-based engine that virtualizes multiple storage technologies into an extended, high-performance data management fabric. Proven in some of the world’s most demanding technical computing environments, DMF enables IT managers to break down storage silos, consolidating both high-performance and long-term data into a single fabric that is easier and more cost-effective to manage.

How does Intelligent Data Management benefit customers?

  • Simplifying data management for customers:
    • Visualization: visualizing data assets and access patterns
    • Policy: setting policies that optimize data placement, and automating the execution of those policies
    • Performance: visualizing machine performance so administrators can balance performance against capacity
    • Access: enabling users and administrators to move data based on authorized access
  • Automatically optimizing where data is stored – to reduce infrastructure costs and management overhead, minimize silos and thus maximize utilization.
  • Providing IT administrators the tools to pro-actively manage, secure and utilize data across multiple storage types and use cases.
  • Cost-effectively managing performance requirements by leveraging a seamless data management fabric.
  • Optimizing storage infrastructure and making informed choices about expansion.

Why do HPE’s Intelligent Data Management solutions lead the industry?

Better performance and scale, greater openness, and a longer track record in data management than any other HPC vendor. Other vendors focus on building archives, which often leads to separate silos and data fragmentation. HPE takes a far more holistic approach across the full data lifecycle – not just archival, but data management from creation to compliance. It’s about optimizing data across multiple storage types, to support the world’s most intense compute environments as well as long-term data protection and compliance.

Why HPE?

HPE and its customers have been at the forefront of solving the world’s most difficult data management challenges for over 20 years.

Data Management Framework – Software Defined Tiered Data Management Platform

Proven for more than two decades in some of the world’s most demanding data environments.
Formerly known as SGI’s Data Migration Facility, the HPE Data Management Framework (HPE DMF) optimizes data accessibility and storage resource utilization by enabling a hierarchical, tiered storage management architecture. Data is allocated to tiers based on service-level requirements defined by the administrator. For example, frequently accessed data can be placed on a high-performance flash tier, less frequently accessed data on hard drives in a capacity tier, and archive data can be sent off to tape storage.
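The tier-assignment idea above can be sketched in Python. The tier names and age thresholds below are illustrative assumptions, not DMF defaults; in practice, DMF policies are defined by the administrator:

```python
from datetime import datetime, timedelta

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Map a file's last-access time to a storage tier.

    Hypothetical thresholds: data touched within 7 days stays on
    flash, within 90 days on a disk capacity tier, and anything
    older becomes a candidate for the tape archive tier.
    """
    age = now - last_access
    if age <= timedelta(days=7):
        return "flash"
    if age <= timedelta(days=90):
        return "disk"
    return "tape"
```

A policy like this would typically be evaluated per file against metadata the storage system already tracks, rather than requiring any change to applications.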

HPE DMF is a powerful storage tier virtualization solution for creating and managing a virtualized storage fabric, which can include disk, tape, object, and cloud storage in a massively scalable, completely protected environment at a fraction of the cost of conventional solutions.

Relied upon by some of the world’s largest online data repositories for over 25 years, DMF enables all data to appear simply as online data, regardless of which tier or storage device it may be on at the moment. This is made possible with a rich policy engine that gives IT administrators the ability to optimize placement of data on any or all storage devices and locations based upon what works best for the access requirements and data protection policies.

Based on user-defined criteria, DMF continuously monitors and automatically migrates data between storage tiers with different cost and performance characteristics. Only the most critical or timely data resides on higher-performance, more expensive storage media, while less critical or timely data is automatically migrated to less expensive, lower-performance storage media. Data always appears to be online to end users and applications regardless of its actual storage location.
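The continuous monitor-and-migrate loop described above can be sketched as follows. This is a simplified illustration, not DMF's actual implementation: the tier names, thresholds, and catalog structure are assumptions for the sake of the example, and DMF performs this work transparently and at far larger scale:

```python
from datetime import datetime, timedelta

def target_tier(last_access: datetime, now: datetime) -> str:
    """Illustrative placement criteria; real policies are admin-defined."""
    age = now - last_access
    if age <= timedelta(days=7):
        return "flash"
    if age <= timedelta(days=90):
        return "disk"
    return "tape"

def plan_migrations(catalog: dict, now: datetime) -> list:
    """Decide which files should move between tiers.

    `catalog` maps path -> (current_tier, last_access_time).
    Returns (path, from_tier, to_tier) tuples for every file whose
    access pattern no longer matches its current tier.
    """
    moves = []
    for path, (tier, last_access) in catalog.items():
        target = target_tier(last_access, now)
        if target != tier:
            moves.append((path, tier, target))
    return moves
```

In a real system this pass would run continuously, recalling recently touched files to fast media and demoting cold data, while the namespace presented to users never changes.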

DMF has been protecting data in some of the industry’s largest virtualized environments around the world, enabling customers to maintain uninterrupted online access to data for more than 20 years. Some customers have installations with over 100PB of online data capacity and billions of files, which they are able to manage at a fraction of the cost of conventional online architectures.

DMF delivers significant storage management benefits including:

  • Significantly reduce storage and storage management costs today
  • Proactively manage storage costs as data grows
  • Lower risks and improve service levels through reduced operator intervention
  • Lower risk of data loss through DMF’s Active Backup feature


[Figure: DMF virtualizes separate storage silos]

Clustered Extents File System

CXFS is a no-compromise shared filesystem for Storage Area Networks (SANs) that removes LAN bottlenecks from data sharing, backup, and administration in any data-intensive workflow.

CXFS reduces the Total Cost of Ownership (TCO) of a SAN by eliminating file duplication and by removing the need to move large files between application servers in a customer’s workflow.
Because it uses the available SAN infrastructure, CXFS can deliver much greater I/O performance and bandwidth than network data-sharing mechanisms such as NFS or CIFS. Based on the industry-leading XFS filesystem and the XVM volume manager, CXFS benefits from field-proven, feature-rich capabilities such as:

  • Real-world 64-bit scalability, supporting file sizes up to 9 million TB and filesystems up to 18 million TB
  • Proven time-tested technology
  • Highly optimized distributed buffering techniques allowing the industry’s fastest performance
  • High availability with automatic failure detection and recovery
  • Centralized intuitive Java-based management tools
  • Full POSIX compliance, requiring no application changes
  • Industry’s best Quality of Service (QoS) management tool (GRIO V2)