Supporting x86-64 address translation for 100s of GPU lanes

Jason Power, Mark D. Hill, and David A. Wood. HPCA 2014.

Paper on IEEE Xplore | Local Download | Presentation (pptx) | Reproducible Data

Abstract

Efficient memory sharing between CPU and GPU threads can greatly expand the effective set of GPGPU workloads. For increased programmability, this memory should be uniformly virtualized, necessitating compatible address translation support for GPU memory references. However, even a modest GPU might need 100s of translations per cycle (6 CUs * 64 lanes/CU) with memory access patterns designed for throughput more than locality. To drive GPU MMU design, we examine GPU memory reference behavior with the Rodinia benchmarks and a database sort to find: (1) the coalescer and scratchpad memory are effective TLB bandwidth filters (reducing the translation rate by 6.8x on average), (2) TLB misses occur in bursts (60 concurrently on average), and (3) post-coalescer TLBs have high miss rates (29% average). We show how a judicious combination of extant CPU MMU ideas satisfies GPU MMU demands for 4 KB pages with minimal overheads (an average of less than 2% over ideal address translation). This proof-of-concept design uses per-compute unit TLBs, a shared highly-threaded page table walker, and a shared page walk cache.
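The sketch below is not from the paper's simulator; it is a minimal illustration of the filtering effect the abstract describes, where coalescing a wavefront's per-lane addresses down to unique 4 KB pages is what keeps the post-coalescer TLB lookup rate manageable. The constants and function names (`PAGE_SHIFT`, `LANES_PER_CU`, `translations_needed`) are assumptions chosen for the example, with the lane count taken from the abstract.

```python
# Hypothetical sketch: how a coalescer acts as a TLB bandwidth filter.
# Each memory instruction supplies one address per active lane; only the
# distinct 4 KB pages among them need a translation lookup.

PAGE_SHIFT = 12          # 4 KB pages, the page size evaluated in the paper
LANES_PER_CU = 64        # lanes per compute unit, per the abstract

def translations_needed(lane_addresses):
    """Number of unique page translations a coalesced memory
    instruction requires, versus one lookup per active lane."""
    pages = {addr >> PAGE_SHIFT for addr in lane_addresses}
    return len(pages)

# Example: a unit-stride 4-byte access by all 64 lanes spans at most two
# pages, so 64 per-lane references become 1-2 TLB lookups.
base = 0x7f0000001000
addrs = [base + 4 * lane for lane in range(LANES_PER_CU)]
print(translations_needed(addrs))   # -> 1
```

Scatter/gather access patterns do not compress this way, which is consistent with the abstract's observations of bursty misses and high post-coalescer TLB miss rates.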

Jason Power, Mark D. Hill and David A. Wood, “Supporting x86-64 address translation for 100s of GPU lanes,” 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA), Orlando, FL, 2014, pp. 568-578. doi: 10.1109/HPCA.2014.6835965

@inproceedings{gpummu:Power:2014,
    author={Jason Power and Mark D. Hill and David A. Wood},
    booktitle={2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA)},
    title={Supporting x86-64 address translation for 100s of GPU lanes},
    year={2014},
    pages={568-578},
    doi={10.1109/HPCA.2014.6835965},
    ISSN={1530-0897},
    month={Feb},
}
