AMD Unveils Heterogeneous Uniform Memory Access


Today AMD unveiled heterogeneous Uniform Memory Access (hUMA), a new memory architecture that will be an essential part of AMD’s upcoming A-series accelerated processing units (APUs), code-named Kaveri.

Full technical details about Kaveri haven’t been made public yet, but this next generation of A-series processors is rumored to feature up to four Steamroller x86 processor cores, GCN architecture-based AMD Radeon HD 7000 graphics, and a 128-bit memory controller that supports both DDR3 and GDDR5 memory.

But before spilling the secrets about Kaveri, AMD wants developers and consumers to pay attention to what hUMA means for the future of Heterogeneous System Architecture (HSA). For those who aren’t familiar with it, HSA is the computing architecture AMD uses to combine multi-core CPUs and a multi-core GPU on a single piece of silicon.

The biggest problem with HSA and AMD’s current APU chips is that the CPU and GPU elements of the processor share the same physical memory but cannot access the same memory at the same time. This is what is referred to as Non-Uniform Memory Access (NUMA): data used by the APU has to be managed across multiple pools with different address spaces. This adds significant programming complexity and limits the speed of the processor, because every time data the CPU side of the chip has been working on is needed by the GPU side, that data has to be copied, synchronized, and put through address translation before the GPU can use it.

When AMD’s Kaveri chips arrive later this year, hUMA will solve these problems with bi-directional coherent memory. In the simplest terms, any update made by one processing element will be seen by all other processing elements, GPU or CPU. This means both the CPU and the GPU have access to the entire memory space and can dynamically allocate memory as needed.

While this largely sounds like techno-babble to average PC users, it is a significant advancement for programmers, because existing multi-core CPU algorithms can be moved to the GPU without complex recoding to work around the absence of memory coherency. For the last several years PC makers have talked about the performance boosts that come from moving part of the processing workload from the CPU to the GPU, but the truth is that few current applications make the most of GPU computing because of the complexity of the code and the amount of data that has to be copied in memory.

AMD’s use of hUMA means there is one less hurdle to the promised performance improvements of HSA. The greater simplicity of programming should also translate into lower development costs, since teams of expert programmers aren’t needed to write complicated code just to maintain software-managed coherency of data between the CPU and GPU elements of the APU.

AMD also promises that hUMA translates into better experiences for consumers: the simplified coding makes it easier for developers to deliver a more visually rich user interface, along with longer battery life, since the same performance can be achieved with less power.

While we’ll have to wait for the next generation of A-series processors and HSA applications to arrive, this latest news from AMD certainly sounds promising.
