
Strategic Expansion in AWS-NVIDIA Collaboration Fuels Generative AI Advancements


AWS and NVIDIA Augment Strategic Alliance to Propel Generative AI Innovations

Amazon Web Services, Inc. (AWS), a subsidiary of Amazon.com, Inc. (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) have announced an extensive expansion of their strategic collaboration, aimed at providing the most advanced infrastructure, software, and services for customers’ generative artificial intelligence (AI) innovations. The collaboration combines NVIDIA’s latest multi-node systems featuring next-generation GPUs and CPUs with AWS technologies, including Nitro System virtualization, the Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability, creating an environment well suited to training foundation models and building generative AI applications.

Deepening Collaboration to Fuel the Generative AI Era

The expansion of this collaboration builds upon a long-standing relationship that has been pivotal in ushering in the era of generative AI. It has provided early machine learning (ML) innovators with the computational performance necessary to advance these cutting-edge technologies.

Comprehensive Collaboration for Industry-Wide Generative AI Acceleration

As part of this enhanced collaboration, AWS and NVIDIA are implementing multiple initiatives to supercharge generative AI across various industries. These include:

  • Introducing NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology to the cloud, exclusively on AWS. This platform connects 32 Grace Hopper Superchips with NVIDIA NVLink and NVSwitch technologies.
  • Hosting NVIDIA DGX Cloud, NVIDIA’s AI-training-as-a-service, on AWS, marking the first DGX Cloud deployment to feature the GH200 NVL32.
  • Working together on Project Ceiba to develop the world’s fastest GPU-powered AI supercomputer.
  • Launching three additional Amazon EC2 instance types: P5e, G6, and G6e, powered by NVIDIA GPUs for a range of AI and high-performance computing (HPC) workloads.

Innovative Amazon EC2 Instances: A Synergy of NVIDIA and AWS Technologies

AWS will be the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with multi-node NVLink technology. These instances will use AWS’s third-generation EFA interconnect for low-latency, high-bandwidth networking, allowing customers to scale to thousands of GH200 Superchips in EC2 UltraClusters, which is crucial for large-scale AI/ML workloads.
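
To illustrate the general pattern behind such deployments, the sketch below launches GPU instances into an EFA-enabled cluster placement group with boto3. The AMI, subnet, security group, and instance type are placeholders (the GH200-based instance names were not public at announcement time), so treat this as an illustration of the mechanism rather than a recipe for these specific instances.

```python
# Minimal sketch of provisioning GPU instances into an EFA-enabled cluster
# placement group with boto3. All identifiers below (AMI, subnet, security
# group, instance type) are placeholders, not the GH200-based instances
# described above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances close together on the network,
# the usual starting point for tightly coupled EFA workloads.
ec2.create_placement_group(GroupName="gpu-ultracluster-demo", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-00000000000000000",          # placeholder Deep Learning AMI
    InstanceType="p5.48xlarge",               # placeholder GPU instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "gpu-ultracluster-demo"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",                    # attach an Elastic Fabric Adapter
        "SubnetId": "subnet-00000000000000000",    # placeholder subnet
        "Groups": ["sg-00000000000000000"],        # placeholder security group
    }],
)
print([i["InstanceId"] for i in response["Instances"]])
```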

Revolutionizing AI/ML Workloads with Enhanced Memory and Cooling Solutions

The NVIDIA GH200-powered EC2 instances will feature 4.5 TB of HBM3e memory, significantly enhancing training performance and allowing customers to run larger models. These instances will also be the first AI infrastructure on AWS to incorporate liquid cooling, ensuring that densely packed server racks operate efficiently at peak performance.
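
To give a sense of scale, the back-of-envelope calculation below estimates what 4.5 TB of accelerator memory could hold, using common rule-of-thumb byte counts per parameter; these figures are illustrative assumptions, not AWS or NVIDIA specifications.

```python
# Rough estimate: how large a model could 4.5 TB of accelerator memory hold?
# Per-parameter byte counts are common rules of thumb, not vendor figures.
TOTAL_MEMORY_TB = 4.5
BYTES_PER_TB = 1e12

inference_bytes_per_param = 2    # bf16/fp16 weights only
training_bytes_per_param = 16    # mixed-precision weights + gradients + Adam states

memory_bytes = TOTAL_MEMORY_TB * BYTES_PER_TB
print(f"Weights-only (bf16): ~{memory_bytes / inference_bytes_per_param / 1e12:.2f}T parameters")
print(f"Full training state: ~{memory_bytes / training_bytes_per_param / 1e9:.0f}B parameters")
# Activations, KV caches, and parallelism overheads reduce these numbers in practice.
```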

Enhanced Performance and Security with AWS Nitro System

EC2 instances featuring GH200 NVL32 will be built on the AWS Nitro System, the underlying platform for AWS’s latest EC2 instances. The Nitro System offloads I/O and management functions from the host to dedicated hardware, delivering more consistent performance along with enhanced security that protects customer code and data during processing.
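
As a small illustration of how the Nitro System surfaces to users, the sketch below queries which hypervisor given instance types run on via the standard EC2 API; the instance type names are examples only, since the GH200 NVL32 types were unnamed at announcement.

```python
# Sketch: checking which hypervisor an EC2 instance type runs on.
# Nitro-based instance types report "nitro" here. Instance type names
# below are examples, not the GH200 NVL32 instances described above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_types(InstanceTypes=["p5.48xlarge", "g5.xlarge"])
for info in resp["InstanceTypes"]:
    print(f'{info["InstanceType"]}: hypervisor={info.get("Hypervisor", "n/a")}')
```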

First-ever NVIDIA DGX Cloud on AWS

AWS and NVIDIA will collaborate to host NVIDIA DGX Cloud powered by Grace Hopper technology. This service will give enterprises rapid access to multi-node supercomputing for training complex large language models (LLMs) and other generative AI models.

Project Ceiba: Pioneering AI Supercomputing

The ambitious Project Ceiba supercomputer is a joint effort by AWS and NVIDIA. It will integrate with AWS services like Amazon VPC and Amazon Elastic Block Store, providing NVIDIA with a comprehensive set of AWS capabilities for diverse AI advancements.

Diverse Applications Across Generative AI, HPC, and Simulation

The collaboration will introduce new Amazon EC2 instances to cater to a wide range of AI, HPC, design, and simulation needs. These instances are designed to power the development and deployment of the largest LLMs, offering enhanced GPU memory and networking capabilities.

NVIDIA Software on AWS: Catalyzing Generative AI Development

NVIDIA also announced software on AWS to accelerate generative AI development, including NVIDIA NeMo Retriever, which helps developers build retrieval-augmented generation (RAG) applications such as accurate chatbots and summarization tools, and NVIDIA BioNeMo, which speeds up drug discovery workflows.
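
For readers unfamiliar with the RAG pattern that NeMo Retriever targets, the toy sketch below shows the retrieval step in plain Python using bag-of-words vectors. It deliberately does not use the NeMo Retriever API; it only stands in for the general pattern that such GPU-accelerated services optimize with neural embeddings and vector search.

```python
# Toy retrieval step of retrieval-augmented generation (RAG).
# NOT the NeMo Retriever API -- an illustration of the pattern only.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': term-frequency counts (real systems use neural embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "GH200 superchips combine Grace CPUs and Hopper GPUs over NVLink",
    "EC2 UltraClusters scale to thousands of accelerators with EFA networking",
    "BioNeMo accelerates drug discovery with generative models for chemistry",
]

query = "how do UltraClusters scale GPU training"
query_vec = embed(query)

# Rank documents by similarity; the top hits would be passed to an LLM as context.
ranked = sorted(documents, key=lambda d: cosine(query_vec, embed(d)), reverse=True)
print(ranked[0])
```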

Michal Pukala
Electronics and telecommunications engineer with a Master’s degree in electro-energetics. Experienced lightning design engineer. Currently working in the IT industry.
