Pinnacle: IBM MXT in a Memory Controller Chip



Paper Title: Pinnacle: IBM MXT in a Memory Controller Chip
Link: https://ieeexplore.ieee.org/document/918003
Year: IEEE Micro 2001
Keyword: Compression; MXT; Pinnacle




The link to the original MXT paper: https://ieeexplore.ieee.org/document/5389099

This paper introduces Pinnacle, a memory controller chip designed for Pentium and Xeon platforms that features main memory compression. It implements IBM's MXT memory compression technology, which roughly doubles effective memory capacity (a 2:1 storage reduction) and achieves a maximum of 64:1 compression on certain content. As pointed out at the beginning of the paper, Pinnacle trades some access latency for greater functionality.

Pinnacle supports a memory hierarchy consisting of multiple cores with private L1 and L2 caches and a shared Last-Level Cache (LLC). The memory controller sits between the LLC and the memory modules, and also participates in cache coherence by snooping on the system bus. Pinnacle internally distinguishes between the conventional physical address space produced by MMU translation and the actual hardware addresses on the DRAM modules. The former is called the “real address space”, while the latter is called the “physical address space”. In the following discussion, we adhere to these terms and their definitions.

Cache lines are stored in compressed form in DRAM. They are decompressed when fetched into a special cache maintained by the Pinnacle controller itself, and compressed again on eviction. The organization of this cache is discussed below. Data is always compressed and decompressed in 1KB blocks, which are stored as 256-byte sectors in DRAM. Pinnacle assumes a 2:1 compression ratio on average, although in the best case it can compress a block to 1/64th of its original size. At system startup, after the memory check passes, the BIOS reports a real address space twice as large as the DRAM actually installed in the system. This creates the illusion to all upper-level software, including the OS, that the amount of usable RAM is doubled. The OS, however, must be slightly modified so that the virtual memory manager does not over-commit physical memory when the average compression ratio falls below 2:1, which would be a fatal error that crashes the system.
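As a back-of-the-envelope illustration of this constraint, here is a minimal C sketch of the check an MXT-aware virtual memory manager would have to perform; the function and its logic are hypothetical, not taken from the paper:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: the OS is shown 2x the installed DRAM as real
 * memory, but the DRAM actually consumed depends on the compression
 * ratio achieved. The virtual memory manager must stop committing real
 * pages before the physical sector pool is exhausted. */
static bool would_exhaust_dram(uint64_t real_bytes_in_use,
                               double achieved_ratio,  /* e.g., 1.6 */
                               uint64_t installed_bytes) {
    double physical_needed = (double)real_bytes_in_use / achieved_ratio;
    return physical_needed > (double)installed_bytes;
}
```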

This paper covers three major components of Pinnacle: the cache subsystem, the memory subsystem, and the compression/decompression circuit. We discuss the details of each in the following sections.

The cache subsystem, as its name suggests, maintains an on-chip cache private to the controller. This private cache serves two purposes. First, it accelerates access to compressed data by avoiding decompression latency on every access, since data is stored on-chip in uncompressed 1KB blocks. Second, the cache serves as an interface between the system LLC and the compression logic: the former uses a conventional cache block size of 64 bytes, optimized for processor access and bus transfer, while the latter operates on the significantly larger 1KB blocks required by the compression algorithm. Cache blocks can be accessed in units of 32 bytes by upper-level caches (I think this design decision is unrelated to compression and is made simply to comply with the bus protocol), with the critical unit delivered first. The cache itself is 4-way set-associative with 8192 sets, and uses LRU within each set for replacement. The cache also features a directory tracking the status of each block. In addition to conventional attributes such as the tag, valid bit, and dirty bit, the directory tracks which 256-byte sectors have been accessed by upper-level caches using a bit vector. If a sector has not been accessed since it was brought into the cache, the controller can simply invalidate it without sending coherence messages to the upper levels (I think they are assuming an inclusive cache hierarchy here).
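To make the directory organization concrete, here is a hypothetical C sketch of an entry; the field widths are invented for illustration, since the paper does not give the exact bit layout:

```c
#include <stdint.h>

/* Hypothetical Pinnacle-style cache directory entry: tag, valid/dirty
 * bits, an LRU rank within the 4-way set, and a bit vector marking which
 * of the four 256B sectors of the 1KB line were accessed since fill. */
typedef struct {
    uint32_t tag;           /* real-address tag */
    uint8_t  valid   : 1;
    uint8_t  dirty   : 1;
    uint8_t  lru     : 2;   /* rank within the 4-way set */
    uint8_t  touched : 4;   /* one bit per 256B sector */
} dir_entry_t;

enum { LINE_BYTES = 1024, NUM_SETS = 8192, NUM_WAYS = 4 };

/* Real address -> set index for 1KB lines and 8192 sets. */
static inline uint32_t set_index(uint64_t real_addr) {
    return (uint32_t)((real_addr / LINE_BYTES) % NUM_SETS);
}
```

Note that these parameters imply a data store of 8192 sets × 4 ways × 1KB = 32MB.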

Larger blocks in the cache can potentially introduce high latency on block eviction and fetch. The paper reveals that Pinnacle pipelines block eviction and fetch on a conflict miss to counter the long latency of these two operations. Each block has a state vector indicating the eviction/fetch status of its 32-byte sub-lines, aligned with the access granularity of the upper level. Each element of the vector can take one of three values: “invalid”, “old”, or “new”. An invalid sub-line is one that has already been evicted but whose new content has not yet arrived; accesses to invalid sub-lines stall until the new content is fetched. An “old” sub-line is one that has not yet been moved to the eviction buffer; it remains accessible until it becomes invalid. A “new” sub-line, on the contrary, is one whose content from the incoming block has already arrived. On initiating a line eviction, all sub-lines of the block are set to “old”. When a sub-line is moved from the data slot to the eviction buffer, it is set to “invalid”. When a new sub-line is fetched, it is set to “new”. This way, the controller keeps as much of the line accessible as possible and fully utilizes cycles without idling too much.
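The state machine is simple enough to model directly; the following C sketch (my reconstruction, not the paper's design) captures the transitions described above:

```c
/* Per-32B-sub-line state used to pipeline eviction and fetch on a
 * conflict miss. A 1KB block has 32 sub-lines. */
typedef enum { SUB_OLD, SUB_INVALID, SUB_NEW } sub_state_t;

#define SUBLINES (1024 / 32)

typedef struct { sub_state_t st[SUBLINES]; } line_state_t;

/* Eviction begins: every sub-line of the victim becomes OLD and stays
 * readable until it is actually drained. */
void begin_eviction(line_state_t *l) {
    for (int i = 0; i < SUBLINES; i++) l->st[i] = SUB_OLD;
}

/* A sub-line is copied into the eviction buffer: it becomes INVALID,
 * and any access to it must stall until replacement data arrives. */
void drained_to_evict_buffer(line_state_t *l, int i) { l->st[i] = SUB_INVALID; }

/* The replacement sub-line arrives from memory: it becomes NEW. */
void fetched(line_state_t *l, int i) { l->st[i] = SUB_NEW; }

/* Accesses proceed on OLD and NEW sub-lines; only INVALID stalls. */
int must_stall(const line_state_t *l, int i) { return l->st[i] == SUB_INVALID; }
```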

The memory subsystem is built around the address mapping scheme from real addresses to physical addresses. Address mapping is performed using a translation table located at the low end of the physical address space. Real addresses are first aligned to 1KB boundaries and then direct-mapped into the table. Each table entry takes 16 bytes, consisting of four pointers and a status field. Since each 16-byte entry maps 1KB of storage, the table adds a 1/64 space overhead. The status field describes the compression status of the 1KB block, such as its size and mode, while the four pointers point to four 256-byte sectors. Note that not all four pointers point to valid data at all times, since a compressed block may occupy fewer than four sectors. In fact, the space saving of Pinnacle comes precisely from using fewer than four sectors to store a block. In the most extreme case, if a block compresses to less than 114 bits, the entire block is stored in-line within the entry itself, eliminating explicit sector storage entirely and achieving a compression ratio of 64:1.
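Incidentally, the 114-bit figure is consistent with a 128-bit (16-byte) entry whose status field occupies the remaining 14 bits, although the paper does not spell out the exact layout. A minimal C sketch of the mapping arithmetic, with invented names:

```c
#include <stdint.h>

/* Hypothetical sketch of the real-to-physical translation: real
 * addresses are 1KB-aligned, then direct-mapped into a table of 16-byte
 * entries at the low end of physical memory (1/64 overhead). */
#define BLOCK_BYTES  1024u
#define ENTRY_BYTES  16u
#define INLINE_BITS  114u   /* blocks this small are stored in the entry */

static inline uint64_t entry_addr(uint64_t table_base, uint64_t real_addr) {
    uint64_t block_no = real_addr / BLOCK_BYTES;   /* 1KB alignment */
    return table_base + block_no * ENTRY_BYTES;    /* direct-mapped */
}

/* Number of 256B sectors a compressed block of `bits` bits occupies. */
static inline unsigned sectors_needed(uint64_t bits) {
    if (bits < INLINE_BITS) return 0;       /* in-line storage: 64:1 */
    uint64_t bytes = (bits + 7) / 8;
    return (unsigned)((bytes + 255) / 256); /* round up to sectors */
}
```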

Pinnacle manages the physical address space as a pool of 256-byte sectors (except for regions with compression disabled). Free sectors are tracked by a linked list, the head of which is stored in a register. The linked list nodes themselves are allocated from free sectors, and the list is initialized at system startup time. The paper also suggests that there is a hardware cache for the first two nodes of the list to accelerate sector allocation.
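The free list is a classic intrusive singly linked list; a minimal C model (hypothetical, mirroring the description above) looks like this:

```c
#include <stddef.h>

/* Free 256B sectors form a singly linked list whose head lives in a
 * controller register; a free sector's first word points to the next
 * free sector, so the list costs no extra storage. */
typedef struct sector { struct sector *next; } sector_t;

static sector_t *free_head;  /* stands in for the head register */

sector_t *alloc_sector(void) {
    sector_t *s = free_head;
    if (s != NULL) free_head = s->next;
    return s;                /* NULL: physical sector pool exhausted */
}

void free_sector(sector_t *s) {
    s->next = free_head;
    free_head = s;
}
```

The hardware cache of the first two list nodes mentioned in the paper would then keep the equivalents of `free_head` and `free_head->next` on-chip, so an allocation does not require a DRAM access.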

The paper also mentions that sectors can be shared by different 1KB blocks within the same 4KB page of the real address space. Such sharing happens at 32-byte granularity, which again aligns with the access granularity of the upper levels. This reduces internal fragmentation and thus storage consumption. The paper, however, does not reveal a detailed model of sector sharing and address mapping within a shared sector.

The last component is the hardware compression and decompression circuit. Pinnacle runs a variant of the LZ77 algorithm. A 1KB block is compressed after it has been fully moved to the eviction buffer by the private cache, and decompressed when it is fetched from memory. The compression logic divides the 1KB block into four 256-byte sectors and compresses each sector independently at a throughput of 1 byte per cycle, achieving a compression latency of 256 cycles.
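For intuition, here is a toy greedy LZ77 encoder over a single 256-byte sector. This is only the textbook base scheme; the parallel, hardware-oriented LZ77 derivative that MXT actually implements is not detailed in this paper:

```c
#include <stdio.h>
#include <string.h>

#define WINDOW 256   /* a 256B sector fits entirely in the window */

/* Toy greedy LZ77: emit (offset, length) back-references when a match of
 * at least 3 bytes exists in the window, literal bytes otherwise. */
static void lz77_sector(const unsigned char *buf, int n) {
    int pos = 0;
    while (pos < n) {
        int best_len = 0, best_off = 0;
        int start = (pos > WINDOW) ? pos - WINDOW : 0;
        for (int cand = start; cand < pos; cand++) {
            int len = 0;
            while (pos + len < n && buf[cand + len] == buf[pos + len]) len++;
            if (len > best_len) { best_len = len; best_off = pos - cand; }
        }
        if (best_len >= 3) {
            printf("(ref off=%d len=%d) ", best_off, best_len);
            pos += best_len;             /* overlapping matches are legal */
        } else {
            printf("(lit %02x) ", buf[pos]);
            pos++;
        }
    }
    printf("\n");
}

int main(void) {
    unsigned char sector[256];
    memset(sector, 0xAA, sizeof sector); /* highly compressible content */
    lz77_sector(sector, (int)sizeof sector);
    return 0;
}
```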