Earlier this week Nvidia announced a new PCI-Express add-in card version of its flagship Tesla V100 HPC accelerator.

It is based on Nvidia’s next-generation “Volta” GPU architecture, built on 12nm “GV100” silicon, and is packaged as a multi-chip module that combines the GPU die and four HBM2 memory stacks on a silicon substrate.

To top that off, the Tesla V100 PCI-Express HPC Accelerator features a total of 5,120 CUDA cores and 640 Tensor cores, which are specialized cores designed to accelerate neural-network training, according to Nvidia.

The card runs at a GPU boost clock of 1,370 MHz and pairs the GPU with a 4,096-bit wide HBM2 memory interface.

That interface delivers 900GB/s of memory bandwidth, feeding an 815mm² GPU with a transistor count of 21 billion, yup, billion.
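For the curious, those headline numbers hang together. The quick back-of-the-envelope sketch below (Python) shows how the core count and clock translate to peak single-precision throughput, and how the 4,096-bit bus yields roughly the quoted 900GB/s; the HBM2 per-pin data rate of about 1.75Gb/s and the two FLOPs per CUDA core per clock are our assumptions, not figures from the announcement.

```python
# Back-of-the-envelope check of the headline specs (not official Nvidia figures).

cuda_cores = 5120          # CUDA cores on the Tesla V100
boost_clock_ghz = 1.37     # quoted GPU clock of 1,370 MHz
flops_per_core = 2         # assumes one fused multiply-add (2 FLOPs) per core per clock

peak_fp32_tflops = cuda_cores * flops_per_core * boost_clock_ghz / 1000
print(f"Peak FP32 throughput: ~{peak_fp32_tflops:.1f} TFLOPS")  # ~14.0 TFLOPS

bus_width_bits = 4096      # 4,096-bit HBM2 interface (four stacks x 1,024 bits each)
pin_rate_gbps = 1.75       # assumed HBM2 data rate per pin, in Gb/s

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"Memory bandwidth: ~{bandwidth_gbs:.0f} GB/s")  # ~896 GB/s, i.e. the quoted 900GB/s
```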

Of course, this card isn’t aimed at the average user.

Instead, Nvidia is aiming it at institutions that need more compute power, and says it will start shipping the card to them later this year.

You can find out more about the card at the links below.

Source: NVIDIA

Via: TPU

This article may include links to affiliates, and if you click on one of these affiliate links, we may receive a commission.