Rent GPU
NVIDIA H100

The NVIDIA H100 GPU delivers exceptional performance, scalability, and security. With a dedicated Transformer Engine, it accelerates large language model inference by up to 30x over the previous generation, enabling efficient training and inference for trillion-parameter AI workloads.

Memory

94GB
GPU RAM

Bandwidth

3.9 TB/s
Memory Bandwidth

Form factor

SXM & NVL
Architecture

Interconnect

900 GB/s
NVLink Switch

Accelerated Workload & Computing

The NVIDIA H100 GPU is optimized for AI inference and enterprise computing, delivering low-latency, high-throughput performance for production AI workloads. It enables efficient deployment of large-scale models, supports secure multi-tenant environments, and scales seamlessly across data center and cloud infrastructure.

  • Transformational AI Training
  • Real-Time Deep Learning Inference
  • Enterprise-Ready Utilization

Architecture Comparison

Select the form factor optimized for your specific workload

Feature             H100 SXM                        H100 NVL (Dual-GPU)
Architecture        Hopper (SXM5)                   Hopper (Bridged)
Memory              80GB HBM3                       94GB HBM3
Memory Bandwidth    3.35 TB/s                       3.9 TB/s
Recommended For     Large-Scale AI Training & HPC   LLM Inference (GPT-3/4)
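The comparison above can guide a rough capacity check when choosing a form factor. As a minimal sketch (not part of any rental platform's tooling), the helper below estimates the memory a model's weights alone occupy at a given precision; the function name, the FP16 default, and the 70B example are illustrative assumptions, and real deployments also need headroom for activations, KV cache, and (for training) optimizer state.

```python
def model_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough footprint of model weights alone, in decimal GB.

    bytes_per_param: 2 for FP16/BF16, 1 for FP8/INT8 (illustrative defaults).
    Activations, KV cache, and optimizer state are NOT included.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model in FP16 needs ~140 GB for weights alone:
# larger than one 94GB H100 NVL GPU, but it fits across a bridged
# NVL pair (2 x 94GB = 188GB), or on a single GPU after INT8
# quantization (~70 GB).
print(model_memory_gb(70))     # 140.0 (GB, FP16)
print(model_memory_gb(70, 1))  # 70.0 (GB, INT8)
```

By this estimate, weight-only footprints above ~80 GB in FP16 point toward the NVL pair or a lower-precision format, which matches the table's recommendation of NVL for large LLM inference.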