It’s not often we get to hear really in-depth announcements about next-generation chip technologies – unless it’s NVIDIA at the helm. When the company started talking about pairing ARM CPUs with its in-house GPU technology, more than a few pundits scoffed at the idea. And yet, after a couple of generations, the Tegra family of products has really come into its own, with Tegra 3 gaining widespread adoption.
According to remarks that NVIDIA made at GTC this week, Tegra 4 is going to continue that trend by integrating with every sensor and chip under the sun – including camera enhancements – which, along with the previously announced support for LTE radios, could substantially improve prospects for the Tegra SoCs in both mobile and low-powered desktop environments.
Perhaps more relevant to our audience are the updates to NVIDIA’s discrete GPU lineup. After Kepler’s introduction last year, we can look forward to parts built on next-generation Maxwell GPU cores next year, and Volta sometime after that (NVIDIA favors an ‘N+2’ presentation strategy, introducing a new architecture such as Kepler while showing off the two platforms that follow it).
The big reveal for Maxwell was NVIDIA’s announcement that the GPU platform will be able to address main system memory, a move first seen in AMD’s unified memory technology. In the near term, this is unlikely to mean much for PC gamers – memory capacity on desktop video cards is rarely the bottleneck (though it’s entirely possible that this will change once 4K and higher resolutions start to hit the mainstream and high-end markets).
Rather, it’s NVIDIA’s workstation- and HPC-oriented Tesla cards that will benefit from this in the near term. Right now, the Tesla cards top out at around 8GB of GDDR memory, and the limitations imposed by cost and memory density prohibit anything more. Unified addressing will allow Tesla cards to tap the much (much) larger pool of system memory local to the machine, which suffers fewer latency issues than pooling memory across other Tesla GPUs.
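As a rough sketch of what this programming model looks like to a developer, CUDA’s managed-memory API (a later, real API, not something NVIDIA detailed in this announcement) lets a single allocation be touched by both the CPU and the GPU without explicit staging copies:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: doubles every element of the array in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *data;

    // One managed allocation, visible to both CPU and GPU --
    // no explicit cudaMemcpy between host and device buffers.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes

    scale<<<(n + 255) / 256, 256>>>(data, n);     // GPU reads and writes
    cudaDeviceSynchronize();                      // wait for the kernel

    printf("data[0] = %f\n", data[0]);            // CPU reads the result
    cudaFree(data);
    return 0;
}
```

The point of the model is visible in what’s absent: there are no separate host and device pointers and no copy calls, which is exactly the kind of simplification that matters when the working set is far larger than the card’s onboard memory.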
Moving to Volta, it’s clear that NVIDIA sees superior memory access as the future – the next-next generation of NVIDIA GPUs will feature DRAM chips stacked directly on the same package as the GPU. The extra cost may be hard for NVIDIA to justify, since the company wouldn’t ordinarily pay for the memory on the boards – that’s something a board partner would supply.
Again, the vast reduction in memory latency is something that would immediately benefit Tesla customers more than the consumer lineup – but as developers get better at off-loading computational work to the GPU, it’s easy to see how an improved ability for these video cards to do general-purpose number crunching can only help in the long run.
All of this, of course, leads up to Project Denver, which we’ll see hit somewhere around the Maxwell–Volta timeframe. Denver is NVIDIA’s plan to reproduce the Tegra project writ large – high-performance computing enabled by marrying high-end ARM CPUs (likely Cortex-A15 cores) with Tesla GPU technology. It’s not yet clear what kind of performance we can expect or what solutions it will enable, but given how adept NVIDIA has proven at both worming its way into new markets and creating others whole cloth, we certainly can’t count the company out.