Production-Ready High-Speed Data Access for AI & Deep Learning Workloads

MapR + NVIDIA Reference Architecture

For quick decision-making and business agility, organizations now more than ever require access to as much high-quality data as possible, as quickly as possible. This has become the most pressing challenge standing in the way of infusing the power of artificial intelligence (AI) across organizations. Imagine processing millions of financial transactions in real time to present in-app offers to consumers instantly, or triggering real-time repair alerts before a complex but underperforming piece of machinery breaks down.

The MapR® Data Platform provides industry-leading data access performance with a broad, open approach that enables the AI and deep learning (DL) workloads of tomorrow. The NVIDIA® DGX™ is an AI supercomputer purpose-built for the unique demands of deep learning. The combined reference architecture (RA) described in this paper delivers a unique, versatile, and scalable option for organizations deploying applications with demanding DL workloads.

Key Highlights of the RA

Customers seeking to deploy use cases that require DL technologies benefit from the combination of MapR and NVIDIA in the following ways:

  • Speed. With read speeds of 18 GB/s and write speeds of 16 GB/s, this RA delivers 10x faster performance than traditional GPU-based DL model training environments.
  • Future-Proof Architecture. Customers can leverage this RA for additional DL workloads as GPU technologies evolve.
  • Flexibility. Customers can choose from a variety of multi-tenant data access combinations, either directly in the cluster or through containerized applications (a minimal data-access sketch follows this list).
 
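To make the flexibility highlight concrete, the sketch below shows one possible way a DL training job on the DGX could read data directly from the cluster through a POSIX mount of the MapR file system, using standard PyTorch data-loading APIs. The mount point, file layout, and loader settings are illustrative assumptions, not details taken from the RA itself.

    # Minimal sketch (assumptions labeled): feed a GPU training loop
    # directly from a POSIX mount of the cluster file system.
    from pathlib import Path

    import numpy as np
    import torch
    from torch.utils.data import Dataset, DataLoader

    # Hypothetical mount path for illustration only.
    DATA_DIR = Path("/mapr/my.cluster/datasets/train")

    class MountedArrayDataset(Dataset):
        """Reads pre-processed .npy samples straight from the mounted file system."""

        def __init__(self, root: Path):
            self.files = sorted(root.glob("*.npy"))

        def __len__(self) -> int:
            return len(self.files)

        def __getitem__(self, idx: int) -> torch.Tensor:
            # Each worker issues ordinary POSIX reads against the mount, so
            # high sequential read throughput translates into faster batch
            # delivery to the GPUs.
            return torch.from_numpy(np.load(self.files[idx]))

    if __name__ == "__main__":
        loader = DataLoader(
            MountedArrayDataset(DATA_DIR),
            batch_size=64,
            num_workers=8,    # parallel readers to keep the GPUs fed
            pin_memory=True,  # faster host-to-GPU transfers
        )
        for batch in loader:
            batch = batch.cuda(non_blocking=True)  # hand the batch to a DGX GPU
            # ... training step would go here ...

A containerized deployment would follow the same pattern, with the mount point exposed to the container as a volume.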

Complete the form to receive your copy today!