NVIDIA A100 80GB

  • NVIDIA Ampere GPU architecture
  • Compute-optimized GPU
  • 6912 NVIDIA CUDA Cores
  • 432 NVIDIA Tensor Cores
  • 80GB HBM2e memory with ECC
  • Up to 1,935 GB/s memory bandwidth
  • Max. power consumption: 300W
  • Graphics bus: PCI-E 4.0 x16
  • Thermal solution: Passive
Out of stock

Free delivery on orders over €100

Product intended for professional use only

Description

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. The A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.
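
The seven-way partitioning mentioned above refers to NVIDIA Multi-Instance GPU (MIG), which is configured at the driver level (typically with nvidia-smi) rather than in application code. As a minimal sketch of working against the card itself, the snippet below uses the standard CUDA Runtime API to query the properties quoted in this listing (device name, memory size, SM count, ECC); the device index 0 and the exact output strings are assumptions for illustration, not part of this listing.

    // Minimal device-query sketch (CUDA Runtime API), assuming device index 0.
    // An A100 80GB is expected to report roughly: ~80 GB of global memory,
    // 108 SMs (108 x 64 FP32 cores = 6912 CUDA cores), compute capability 8.0,
    // and ECC enabled.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);   // device 0 is an assumption

        printf("Device             : %s\n", prop.name);
        printf("Global memory      : %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("SM count           : %d\n", prop.multiProcessorCount);
        printf("Compute capability : %d.%d\n", prop.major, prop.minor);
        printf("ECC enabled        : %d\n", prop.ECCEnabled);
        return 0;
    }

Built with nvcc, this is only a sanity check that the driver and toolkit see the card; it does not measure performance or configure MIG.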

Technical specification

                                A100 80GB PCIe                          A100 80GB SXM
FP64                            9.7 TFLOPS                              9.7 TFLOPS
FP64 Tensor Core                19.5 TFLOPS                             19.5 TFLOPS
FP32                            19.5 TFLOPS                             19.5 TFLOPS
Tensor Float 32 (TF32)          156 TFLOPS | 312 TFLOPS*                156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core            312 TFLOPS | 624 TFLOPS*                312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core                312 TFLOPS | 624 TFLOPS*                312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core                624 TOPS | 1248 TOPS*                   624 TOPS | 1248 TOPS*
GPU Memory                      80GB HBM2e                              80GB HBM2e
GPU Memory Bandwidth            1,935 GB/s                              2,039 GB/s
Max Thermal Design Power (TDP)  300W                                    400W***
Multi-Instance GPU (MIG)        Up to 7 MIGs @ 10GB                     Up to 7 MIGs @ 10GB
Form Factor                     PCIe, dual-slot air-cooled or           SXM
                                single-slot liquid-cooled
Interconnect                    NVIDIA® NVLink® Bridge                  NVLink: 600 GB/s
                                for 2 GPUs: 600 GB/s**                  PCIe Gen4: 64 GB/s
                                PCIe Gen4: 64 GB/s
Server Options                  Partner and NVIDIA-Certified            NVIDIA HGX™ A100: Partner and
                                Systems™ with 1–8 GPUs                  NVIDIA-Certified Systems with
                                                                        4, 8, or 16 GPUs;
                                                                        NVIDIA DGX™ A100 with 8 GPUs

* With sparsity
** SXM4 GPUs via HGX A100 server boards; PCIe GPUs via NVLink Bridge for up to two GPUs
*** 400W TDP for standard configuration. HGX A100-80GB custom thermal solution (CTS) SKU can support TDPs up to 500W
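
The two bandwidth figures in the table (1,935 GB/s for the PCIe card, 2,039 GB/s for SXM) are theoretical peaks. A rough way to see how close a given system gets is to time a large device-to-device copy: each byte is read once and written once, so the effective bandwidth is 2 × bytes / time. The sketch below is a minimal, unvalidated illustration of that arithmetic using standard CUDA Runtime calls; the 4 GiB buffer size and the lack of error checking are simplifications assumed for brevity.

    // Rough device-to-device bandwidth check: a sketch, not a tuned benchmark.
    // Assumes enough free device memory for two 4 GiB buffers.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 4ULL << 30;                        // 4 GiB per buffer
        void *src = nullptr, *dst = nullptr;
        cudaMalloc(&src, bytes);
        cudaMalloc(&dst, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);  // warm-up

        cudaEventRecord(start);
        cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // Each byte is read once and written once, hence the factor of 2.
        printf("Effective bandwidth: %.0f GB/s\n", 2.0 * bytes / (ms * 1e-3) / 1e9);

        cudaFree(src);
        cudaFree(dst);
        return 0;
    }

Measured numbers will land below the peak figures in the table; that is expected, and the exact gap depends on the platform and clocks.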

Contact an Elmark specialist

Do you have questions? Need advice? Call or write to us!