NVIDIA Hopper GPU pictured, single large monolithic die packed with VRMs and HBM
Six stacks of HBM memory are also visible (a visually similar configuration to the one seen on Intel's competing Habana Gaudi 2 chip), although depending on the exact type of HBM used, the specifications could vary wildly. At a bare minimum, we are looking at 48 GB of VRAM, and with HBM3, 96 GB or more is possible. NVIDIA will also be rolling out its standard server-based products; here is a quick summary:

[caution: this paragraph is highly speculative] If the SM to CUDA core ratio is the same as Turing (and that is a big if), there are exactly 64 CUDA cores per SM for a grand total of 9216 CUDA cores. Assuming a nominal frequency of 2.2 GHz, this is at least 40 TFLOPs of single-precision performance.

- WhyCry, Videocardz

The number of VRMs present on the PCB also indicates that the card is going to have one heck of a power draw, and I would not count out a TDP between 400 and 500 watts for the higher-end variants, considering the leaks about Ada Lovelace (the parallel, gaming-focused architecture). In any case, this thing is an absolute beast, and we shall see it announced by NVIDIA CEO Jensen Huang - the man in the black leather jacket - tomorrow. [/speculation]

NVIDIA's architectures are always named after computing pioneers, and this one appears to be no different. NVIDIA's Hopper architecture is named after Grace Hopper, one of the pioneers of computer science, one of the first programmers of the Harvard Mark I, and the inventor of one of the first linkers. She also popularized the idea of machine-independent programming languages, which led to the development of COBOL - an early high-level programming language still in use today. She enlisted in the U.S. Navy and supported the American war effort during World War II.
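Coming back to the quoted performance estimate above, here is a minimal back-of-the-envelope check of how the roughly 40 TFLOPs figure falls out of the leaked numbers. The core count, the clock speed, and the two-FLOPs-per-core-per-clock FMA assumption are all taken from the speculation, not from any confirmed NVIDIA specification:

    # Back-of-the-envelope throughput check using the speculative figures above.
    # Every input here is an assumption from the leak, not a confirmed specification.
    cuda_cores = 9216              # assumed: 64 CUDA cores per SM as quoted (implies 144 SMs)
    clock_ghz = 2.2                # assumed nominal clock from the quote
    flops_per_core_per_clock = 2   # one fused multiply-add (FMA) counts as two FLOPs

    peak_tflops = cuda_cores * clock_ghz * flops_per_core_per_clock / 1000
    print(f"Estimated peak FP32 throughput: {peak_tflops:.1f} TFLOPs")  # ~40.6 TFLOPs

The shipping figure will obviously depend on the final SM count, the real clock speeds, and whether Hopper keeps a Turing-style 64-core SM at all.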