AMD Radeon Instinct MI60 vs NVIDIA Tesla V100 PCIe 32 GB
What is the difference between the AMD Radeon Instinct MI60 and the NVIDIA Tesla V100 PCIe 32 GB? Find out which graphics card delivers better performance.
Graphics Processor (GPU)
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| Vega 20 | GPU Name | GV100 |
| GCN 5.1 | Architecture | Volta |
| TSMC | Foundry | TSMC |
| 7 nm | Process Size | 12 nm |
| 13,230 million | Transistors | 21,100 million |
| 331 mm² | Die Size | 815 mm² |
Graphics Card
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| Nov 18th, 2018 | Release Date | Mar 27th, 2018 |
| Radeon Instinct (MIx) | Family | Tesla (Vxx) |
| Active | Production | Active |
| PCIe 4.0 x16 | Bus Interface | PCIe 3.0 x16 |
Memory
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| 32 GB | Memory Size | 32 GB |
| HBM2 | Memory Type | HBM2 |
| 4096 bit | Memory Bus | 4096 bit |
| 1,024 GB/s | Bandwidth | 897.0 GB/s |
Performance
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| 115.2 GPixel/s | Pixel Fillrate | 176.6 GPixel/s |
| 460.8 GTexel/s | Texture Fillrate | 441.6 GTexel/s |
| 29.49 TFLOPS (2:1) | FP16 (half) Performance | 28.26 TFLOPS (2:1) |
| 14.75 TFLOPS | FP32 (float) Performance | 14.13 TFLOPS |
| 7.373 TFLOPS (1:2) | FP64 (double) Performance | 7.066 TFLOPS (1:2) |
Clock Speeds
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| 1200 MHz | Base Clock | 1230 MHz |
| 1800 MHz | Boost Clock | 1380 MHz |
| 1000 MHz (2000 Mbps effective) | Memory Clock | 876 MHz (1752 Mbps effective) |
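The bandwidth figures in the Memory table follow directly from the 4096-bit bus and the effective memory data rates listed here, using the standard theoretical-peak formula (bus width in bits × effective rate in Gbps ÷ 8). A minimal sketch of that arithmetic, with the values taken from the tables above:

```python
# Theoretical peak memory bandwidth = bus width (bits) * effective data rate (Gbps) / 8
def peak_bandwidth_gb_s(bus_width_bits: int, effective_rate_gbps: float) -> float:
    return bus_width_bits * effective_rate_gbps / 8

# Values from the Memory and Clock Speeds tables above
mi60 = peak_bandwidth_gb_s(4096, 2.000)   # HBM2 @ 1000 MHz (2000 Mbps effective)
v100 = peak_bandwidth_gb_s(4096, 1.752)   # HBM2 @ 876 MHz (1752 Mbps effective)

print(f"MI60: {mi60:.1f} GB/s")   # 1024.0 GB/s
print(f"V100: {v100:.1f} GB/s")   # 897.0 GB/s
```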
Render Config
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| 4096 | Shading Units | 5120 |
| 256 | TMUs | 320 |
| 64 | ROPs | 128 |
| 16 KB (per CU) | L1 Cache | 128 KB (per SM) |
| 4 MB | L2 Cache | 6 MB |
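The Performance figures above are theoretical peaks that fall out of these unit counts and each card's boost clock: FP32 throughput is 2 × shading units × boost clock (one FMA counted as two FLOPs), both cards rate FP16 at 2:1 and FP64 at 1:2 of FP32, and the fillrates are simply TMUs or ROPs × boost clock. A short sketch reproducing the listed numbers from those inputs:

```python
# Theoretical peak rates from unit counts and boost clock (GHz)
def peak_rates(shaders: int, tmus: int, rops: int, boost_ghz: float) -> dict:
    fp32_tflops = 2 * shaders * boost_ghz / 1000         # 2 FLOPs per FMA per shader
    return {
        "FP32 TFLOPS": fp32_tflops,
        "FP16 TFLOPS": fp32_tflops * 2,                   # both cards run FP16 at 2:1
        "FP64 TFLOPS": fp32_tflops / 2,                   # both cards run FP64 at 1:2
        "Texture GTexel/s": tmus * boost_ghz,
        "Pixel GPixel/s": rops * boost_ghz,
    }

mi60 = peak_rates(shaders=4096, tmus=256, rops=64,  boost_ghz=1.800)
v100 = peak_rates(shaders=5120, tmus=320, rops=128, boost_ghz=1.380)

for name in mi60:
    print(f"{name}: MI60 {mi60[name]:.2f} vs V100 {v100[name]:.2f}")
```

These are upper bounds on throughput, not measured performance; sustained figures depend on the workload and on how close each card stays to its boost clock.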
Board Design
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| Dual-slot | Slot Width | Dual-slot |
| 300 W | Thermal Design Power (TDP) | 250 W |
| 700 W | Suggested PSU | 600 W |
| 1x mini-DisplayPort | Display Connectors | No outputs |
| 1x 6-pin + 1x 8-pin | Power Connectors | 2x 8-pin |
API Support
| AMD Radeon Instinct MI60 | | NVIDIA Tesla V100 PCIe 32 GB |
| --- | --- | --- |
| 12 (12_1) | DirectX | 12 (12_1) |
| 4.6 | OpenGL | 4.6 |
| 2.1 | OpenCL | 3.0 |
| 1.2 | Vulkan | 1.2 |
| 6.4 | Shader Model | 6.6 |