Rent GPU
NVIDIA H100
The NVIDIA H100 GPU delivers exceptional performance, scalability, and security. Its dedicated Transformer Engine accelerates large language models by up to 30x over the previous-generation A100, enabling efficient training and inference for trillion-parameter AI workloads.
- Memory: 94GB GPU RAM
- Bandwidth: 3.9 TB/s memory bandwidth
- Form factor: SXM & NVL architectures
- Interconnect: 900 GB/s NVLink Switch
Accelerated Workload & Computing
The NVIDIA H100 GPU is optimized for AI inferencing and enterprise computing, delivering low-latency, high-throughput performance for production AI workloads. It enables efficient deployment of large-scale models, supports secure multi-tenant environments, and scales seamlessly across data center and cloud infrastructure.
- Transformational AI Training
- Real-Time Deep Learning Inference
- Enterprise-Ready Utilization
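As a rough illustration of what the memory specs mean in practice, the sketch below estimates how many H100s a model's weights alone require. The function name and the weights-only simplification are ours; real deployments also need headroom for the KV cache, activations, and framework overhead, so treat the result as a lower bound:

```python
import math

def h100s_needed(params_billions: float,
                 bytes_per_param: int = 2,    # FP16/BF16 weights
                 gpu_mem_gb: float = 94.0) -> int:
    """Minimum H100 NVL GPUs required just to hold the model weights."""
    weight_gb = params_billions * bytes_per_param  # 1B params = 1GB per byte
    return math.ceil(weight_gb / gpu_mem_gb)

print(h100s_needed(70))  # 70B model in FP16 needs 140GB of weights -> 2
print(h100s_needed(7))   # 7B model fits on a single GPU -> 1
```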
Architecture Comparison
Select the form factor optimized for your specific workload
| Feature | H100 SXM | H100 NVL (Dual-GPU) |
|---|---|---|
| Architecture | Hopper (SXM5) | Hopper (bridged dual-GPU) |
| Memory | 80GB HBM3 | 94GB HBM3 per GPU |
| Memory bandwidth | 3.35 TB/s | 3.9 TB/s per GPU |
| Recommended for | Large-scale AI training & HPC | LLM inference (GPT-3/4-class models) |