Supermicro Extends 8-GPU, 4-GPU, and MGX Product Lines with Support for the NVIDIA HGX H200 and Grace Hopper Superchip for LLM Applications with Faster and Larger HBM3e Memory
– New Innovative Supermicro Liquid-Cooled 4U Server with NVIDIA HGX 8-GPUs Doubles the Computing Density Per Rack with Up to 80kW/Rack, Reducing TCO
SAN JOSE, Calif., and
DENVER, Nov. 13, 2023 /PRNewswire/ -- Supercomputing
Conference (SC23) -- Supermicro, Inc. (NASDAQ: SMCI), a Total
IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is
expanding its AI reach with the upcoming support for the new NVIDIA
HGX H200 built with H200 Tensor Core GPUs. Supermicro's industry-leading
AI platforms, including 8U and 4U Universal GPU Systems,
are drop-in ready for the HGX H200 in 8-GPU and 4-GPU configurations,
featuring HBM3e memory with nearly 2x the capacity and 1.4x higher
bandwidth of the NVIDIA H100 Tensor Core GPU. In addition, the broadest portfolio of
Supermicro NVIDIA MGX™ systems supports the upcoming
NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented
performance, scalability, and reliability, Supermicro's rack scale
AI solutions accelerate the performance of computationally
intensive generative AI, large language model (LLM) training, and
HPC applications while meeting the evolving demands of growing
model sizes. Using the building block architecture, Supermicro
can quickly bring new technology to market, enabling customers to
become more productive sooner.
![Supermicro 4U 8GPU H200](https://mma.prnewswire.com/media/2274161/03_2023__4U_8GPU_H200_1080.jpg)
Supermicro is also introducing the industry's highest-density
server, with NVIDIA HGX H100 8-GPUs in a liquid-cooled 4U
system utilizing the latest Supermicro liquid-cooling solution.
The industry's most compact high-performance GPU server enables
data center operators to reduce footprints and energy costs while
offering the highest-performance AI training capacity available in
a single rack. With the highest density GPU systems, organizations
can reduce their TCO by leveraging cutting-edge liquid cooling
solutions.
"Supermicro partners with NVIDIA to design the most advanced
systems for AI training and HPC applications," said Charles Liang, president and CEO of Supermicro.
"Our building block architecture enables us to be first to market
with the latest technology, allowing customers to deploy generative
AI faster than ever before. We can deliver these new systems to
customers faster with our worldwide manufacturing facilities. The
new systems, using the NVIDIA H200 GPU with NVIDIA®
NVLink™ and NVSwitch™ high-speed GPU-GPU interconnects at 900GB/s,
now provide up to 1.1TB of high-bandwidth HBM3e memory per node in
our rack scale AI solutions to deliver the highest performance of
model parallelism for today's LLMs and generative AI. We are also
excited to offer the world's most compact NVIDIA HGX 8-GPU liquid
cooled server, which doubles the density of our rack scale AI
solutions and reduces energy costs to achieve green computing for
today's accelerated data center."
Learn more about the Supermicro servers with NVIDIA GPUs
Supermicro designs and manufactures a broad portfolio of AI
servers with different form factors. The popular 8U and 4U
Universal GPU systems featuring four-way and eight-way NVIDIA HGX
H100 GPUs are now drop-in ready for the new H200 GPUs to train even
larger language models in less time. Each NVIDIA H200 GPU contains
141GB of memory with a bandwidth of 4.8TB/s.
"Supermicro's upcoming server designs using NVIDIA HGX H200 will
help accelerate generative AI and HPC workloads, so that
enterprises and organizations can get the most out of their AI
infrastructure," said Dion Harris,
director of data center product solutions for HPC, AI, and quantum
computing at NVIDIA. "The NVIDIA H200 GPU with high-speed HBM3e
memory will be able to handle massive amounts of data for a variety
of workloads."
Additionally, the recently launched Supermicro MGX servers with
the NVIDIA GH200 Grace Hopper Superchips are engineered to
incorporate the NVIDIA H200 GPU with HBM3e memory.
The new NVIDIA GPUs accelerate today's and future large language
models (LLMs) with hundreds of billions of parameters, allowing them
to fit in more compact and efficient clusters that train generative
AI in less time, while also enabling multiple larger models to fit in
a single system for real-time LLM inference, serving generative AI to
millions of users.
At SC23, Supermicro is showcasing the latest offering, a 4U
Universal GPU System featuring the eight-way NVIDIA HGX H100 with
its latest liquid-cooling innovations that further improve density
and efficiency to drive the evolution of AI. With Supermicro's
industry-leading GPU and CPU cold plates, CDU (cooling distribution
unit), and CDM (cooling distribution manifold) designed for green
computing, the new liquid-cooled 4U Universal GPU System is also
ready for the eight-way NVIDIA HGX H200, which will dramatically
reduce data center footprints, power costs, and deployment hurdles
through Supermicro's fully integrated liquid-cooling rack solutions
and our L10, L11, and L12 validation testing.
For more information, visit the Supermicro booth at SC23.
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in
Application-Optimized Total IT Solutions. Founded and operating in
San Jose, California, Supermicro
is committed to delivering first to market innovation for
Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are
a Total IT Solutions manufacturer with server, AI, storage, IoT,
switch systems, software, and support services. Supermicro's
motherboard, power, and chassis design expertise further enables
our development and production, enabling next generation innovation
from cloud to edge for our global customers. Our products are
designed and manufactured in-house (in the US, Taiwan, and the
Netherlands), leveraging global operations for scale and
efficiency and optimized to improve TCO and reduce environmental
impact (Green Computing). The award-winning portfolio of Server
Building Block Solutions® allows customers to optimize for their
exact workload and application by selecting from a broad family of
systems built from our flexible and reusable building blocks that
support a comprehensive set of form factors, processors, memory,
GPUs, storage, networking, power, and cooling solutions
(air-conditioned, free air cooling or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT
Green are trademarks and/or registered trademarks of Super Micro
Computer, Inc.
All other brands, names, and trademarks are the property of
their respective owners.
Photo - https://mma.prnewswire.com/media/2274161/03_2023__4U_8GPU_H200_1080.jpg
Logo - https://mma.prnewswire.com/media/1443241/Supermicro_Logo.jpg
View original content: https://www.prnewswire.co.uk/news-releases/supermicro-expands-ai-solutions-with-the-upcoming-nvidia-hgx-h200-and-mgx-grace-hopper-platforms-featuring-hbm3e-memory-301985347.html