P-Inspect: Architectural Support for Programmable Non-Volatile Memory Frameworks



Paper Title: P-Inspect: Architectural Support for Programmable Non-Volatile Memory Frameworks
Link: https://www.microarch.org/micro53/papers/738300a509.pdf
Year: MICRO 2020
Keyword: NVM; P-Inspect




Highlight:

  1. Using transitive closures to ensure that every object reachable from a persistent object is itself persistent, so that no invalid pointer exists after recovery.

  2. Using forwarded objects as stubs to perform address translation. This is particularly useful when an object is moved but not all references to it can be updated immediately. The combination of forwarded objects and bulk updates amortizes the cost of object scanning.

  3. Using queued objects to block future references to an object until its entire transitive closure has been migrated to the NVM. This avoids race conditions during the migration process.

  4. The proposed efficient log write approach is a compromise between non-temporal writes, which bypass the cache hierarchy, and regular cached writes, where data must be fetched first and evicted later. It requires only one round trip instead of two compared with a regular write, yet it interacts better with cache coherence and memory consistency than non-temporal writes.

Questions

  1. The design assumes a managed language runtime, a particular instruction encoding, and a specific address space layout (e.g., for storing the filters and for distinguishing NVM from DRAM addresses), without explicitly stating these assumptions.

  2. Generally speaking, wouldn’t the hardware change be too large for a rather narrow improvement that experienced programmers and robust testing processes could have achieved in software?

  3. Is transitive closure computation serialized? The text (Page 8, bottom right corner) says “the TRANS filter is cleared by a thread executing the clearBFTRANS operation in Table II when it has completed processing a transitive closure. Finally, the Change Active FWD Filter operation is performed by the PUT thread when it wakes up.”, indicating that at most one such process is active at a time.

This paper proposes P-Inspect, a hardware enhancement that supports reachability-based programming for NVM applications. Conventional, software-based reachability tests either incur too much run-time cycle overhead or require tagging the entire address space. P-Inspect, as a hardware solution, achieves both efficiency and flexibility by moving high-frequency, easy-to-implement condition checks into hardware, while software still implements the majority of the functionality.

P-Inspect assumes a reachability-based NVM programming model. In this model, data structures, which consist of individual objects, can be allocated in both DRAM and NVM, but only those in NVM are preserved after a system crash or power loss. The programming model enforces two rules: (1) initially, objects can be allocated in DRAM for faster access, but such objects must not be pointed to by any object in persistent memory; and (2) if an object is to be referenced by a persistent object, it must first be moved to the NVM for consistency.

Prior work proposes software instrumentation that checks every load and store to ensure that the transitive closure of any persistent object is itself persistent. This work assumes a managed language environment in which reference fields (pointers) can be identified at run time. When a reference field is updated, the instrumentation stubs check whether the object being pointed to (the value object) is in DRAM and whether the host object is in NVM. If both hold, the run-time first duplicates the value object in the NVM and marks it as “Queued”. The original DRAM object is then marked as “Forward”, meaning that all accesses to the original object should be redirected to the NVM copy. In the last step, the run-time recursively moves all objects referred to by the value object into NVM as well, so that the transitive closure is entirely persistent. The “Queued” mark indicates that the transitive closure of the object is currently being moved to NVM; it blocks further attempts to reference the object from other host objects until the mark is cleared, since otherwise, in a multi-threaded environment, another thread could set a pointer to the value object and observe inconsistent state after a crash. After the transitive closure has been moved, the Queued mark is cleared and the pending pointer update is performed. DRAM objects marked “Forward” are garbage-collected once all references to them have been dropped. Note that the reachability-based framework has nothing to do with synchronization or atomic persistence: thread synchronization for shared objects still uses locks or other techniques, and atomic persistence is achieved with logging, both of which are orthogonal to the reachability test.
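
To make the framework concrete, below is a minimal sketch of what this instrumented pointer-store barrier might look like. The types and helpers (ObjectHeader, NVM_BASE, nvm_clone, and so on) are illustrative placeholders, not the interface of any actual framework, and a real run-time would handle concurrency and persistence far more carefully.

```cpp
// Minimal sketch of the instrumented pointer-store barrier (illustrative only).
#include <atomic>
#include <cstdint>
#include <cstdlib>

struct ObjectHeader {
    std::atomic<uint32_t> flags{0};   // QUEUED / FORWARD mark bits
    void* forward = nullptr;          // NVM copy, valid once FORWARD is set
    size_t size = 0;                  // object size in bytes
};

constexpr uint32_t QUEUED  = 1u << 0;
constexpr uint32_t FORWARD = 1u << 1;
constexpr uintptr_t NVM_BASE = 0x500000000000ull;   // assumed start of the NVM range

bool is_in_nvm(const void* p) {
    return reinterpret_cast<uintptr_t>(p) >= NVM_BASE;
}

// Placeholder: a real runtime would allocate from the NVM heap and persist the copy.
void* nvm_clone(ObjectHeader* obj) { return std::malloc(obj->size); }

void migrate_closure(ObjectHeader* value) {
    value->forward = nvm_clone(value);          // duplicate the value object in NVM
    value->flags.fetch_or(QUEUED | FORWARD);    // block new references, redirect old ones
    // ... recursively move every object reachable from `value` into NVM (omitted) ...
    value->flags.fetch_and(~QUEUED);            // closure is persistent: unblock referers
}

// Barrier executed on every reference-field store: host.field = value
void store_reference(ObjectHeader* host, void** field, ObjectHeader* value) {
    if (value != nullptr && is_in_nvm(host) && !is_in_nvm(value)) {
        while (value->flags.load() & QUEUED) { }    // another thread is migrating it
        if (!(value->flags.load() & FORWARD))
            migrate_closure(value);
        *field = value->forward;                    // store the NVM copy's address
    } else {
        *field = value;                             // stores from DRAM hosts need no action
    }
}
```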

The above software framework has two drawbacks that hinder its adoption. First, the extra instructions for checking conditions and querying the mapping table add significant cycle overhead. Second, most condition checks on the Queued and Forward bits evaluate to false, meaning that no special action is taken. Based on the second observation, the paper proposes that condition checks be delegated to hardware. There are three types of condition checks: (1) when a host object’s reference field is updated to point to a value object; (2) when a host object’s primitive field is updated; and (3) when any field of a host object is read. These three checks are implemented by three different instructions: CheckStoreBoth, CheckStoreH, and CheckLoad. Field addresses are encoded as base plus offset, so that both the field address and the object’s base address can be obtained from the instruction encoding.
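
As an illustration of where these checks sit, the compiled form of instrumented field accesses might look roughly like the following; the intrinsic names are hypothetical stand-ins for the three instructions, and the base-plus-offset arguments mirror the encoding described above.

```cpp
// Hypothetical intrinsics standing in for the three new instructions. Each takes the
// object's base plus a field offset, so hardware can recover both the field address
// and the object's base address from the same encoding.
#include <cstddef>

extern "C" void __check_store_both(void* host_base, size_t field_off, void* value_base);
extern "C" void __check_store_h(void* host_base, size_t field_off);
extern "C" void __check_load(void* host_base, size_t field_off);

struct Node { Node* next; long payload; };

void instrumented_accesses(Node* host, Node* value) {
    __check_store_both(host, offsetof(Node, next), value);  // (1) reference-field store
    host->next = value;

    __check_store_h(host, offsetof(Node, payload));         // (2) primitive-field store
    host->payload = 42;

    __check_load(host, offsetof(Node, payload));            // (3) field read
    long x = host->payload;
    (void)x;
}
```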

In addition, two global Bloom filters, “TRANS” and “FWD”, are added for “Queued” and “Forward” mark bookkeeping, respectively. These two filters are mapped into the process’s metadata page, which can be accessed by both system software and the cache controller. Each L1 cache also has a small buffer for both filters, which enables fast access. Bloom filter accesses from different cores are coordinated by the regular coherence protocol, and the small buffer also responds to coherence messages. To avoid race conditions when updating the multi-cache-line filters, the paper suggests that the L1 controller first acquire all cache lines of the filters and lock them in the small buffer before applying updates. This serializes update operations in the style of ascending-order two-phase locking, which is deadlock-free and guarantees a total order. On context switches, the small filter buffer must be flushed, since it essentially acts as a virtually addressed cache. Instructions for inserting into the filters and for clearing an entire filter are also added to the ISA.
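
The sketch below shows, in software terms, what a base-address Bloom filter of this kind might look like; the size, hash function, and class name are assumptions for illustration, whereas in P-Inspect the filters live in the metadata page and are probed by the cache controller through the small per-core buffers.

```cpp
// Rough sketch of a TRANS/FWD-style Bloom filter over object base addresses.
#include <array>
#include <cstdint>

class BaseAddrBloomFilter {
    static constexpr size_t kBits = 4096;                   // assumed filter size
    std::array<uint64_t, kBits / 64> bits_{};

    static size_t hash(uintptr_t base, uint64_t seed) {
        uint64_t h = (base ^ seed) * 0x9e3779b97f4a7c15ull; // simple mix, illustrative
        return (h >> 32) % kBits;
    }
public:
    void insert(void* base) {                               // e.g. InsertBFFWD / InsertBFTRANS
        for (uint64_t seed : {1ull, 2ull, 3ull}) {
            size_t b = hash(reinterpret_cast<uintptr_t>(base), seed);
            bits_[b / 64] |= 1ull << (b % 64);
        }
    }
    bool maybe_contains(void* base) const {                 // what the check instructions test
        for (uint64_t seed : {1ull, 2ull, 3ull}) {
            size_t b = hash(reinterpret_cast<uintptr_t>(base), seed);
            if (!(bits_[b / 64] & (1ull << (b % 64)))) return false;
        }
        return true;                                        // may be a false positive
    }
    void clear() { bits_.fill(0); }                         // e.g. clearBFTRANS after a migration
};

BaseAddrBloomFilter g_fwd_filter;    // "Forward" marks
BaseAddrBloomFilter g_trans_filter;  // "Queued" marks
```

A filter hit only means the object may be marked; the software handler still confirms the mark against the object header, as discussed below.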

We next discuss the three condition check instructions. The CheckStoreBoth instruction first tests whether the host object is in DRAM and has the FWD bit set in the Bloom filter, using its base address. If so, a software handler is invoked to perform address translation, returning the NVM address of the host object. Then, the instruction checks whether the value object is in DRAM and has the TRANS bit set, using its base address. If so, a software handler is invoked to wait for the bit to be cleared. Note that this check can be skipped if the host object is not in the NVM, since pointers originating from DRAM objects are allowed in all cases. Next, the instruction checks whether the value object has the FWD bit set, and if so, address translation is performed in the same way as for the host object. Lastly, the instruction checks whether a transaction is currently active, which is tracked by a bit in a control register. If a transaction is running and the host object is in the NVM, the store also generates an undo log entry and flushes it to the NVM with a persistence barrier, after which the field itself is flushed.
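
Putting these steps together, the behavior of CheckStoreBoth can be summarized by the following sketch; the helper functions are placeholders for the filter probes and software-handler traps, not the paper's actual microarchitectural interface.

```cpp
// Sketch of CheckStoreBoth semantics for a reference store host.field = value.
// The stubs below stand in for hardware filter probes and software-handler traps.
static bool  is_in_nvm(const void*)                { return false; }  // NVM address-range test
static bool  fwd_hit(const void*)                  { return false; }  // FWD filter probe (L1 buffer)
static bool  trans_hit(const void*)                { return false; }  // TRANS filter probe
static void* trap_translate(void* p)               { return p; }      // handler: DRAM -> NVM address
static void  trap_wait_until_unqueued(const void*) {}                 // handler: wait for Queued clear
static void  trap_undo_log(void*)                  {}                 // handler: create + flush undo entry

void* check_store_both(void* host_base, void* field_addr, void* value_base,
                       bool in_transaction) {
    void* host = host_base;
    if (!is_in_nvm(host) && fwd_hit(host))
        host = trap_translate(host);                 // host was moved: use its NVM address

    if (is_in_nvm(host) && !is_in_nvm(value_base) && trans_hit(value_base))
        trap_wait_until_unqueued(value_base);        // value's closure is still migrating;
                                                     // skipped when the host is not in NVM
    void* value = value_base;
    if (!is_in_nvm(value) && fwd_hit(value))
        value = trap_translate(value);               // point at the value's NVM copy instead

    if (in_transaction && is_in_nvm(host))
        trap_undo_log(field_addr);                   // undo log + persistence barrier,
                                                     // then the field itself is flushed
    return value;                                    // address actually written to the field
}
```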

Note that although Bloom filters can generate false positives, this does not affect correctness, since the software handler verifies whether an object is indeed Queued or Forwarded by reading its header. When the software handler moves a value object from DRAM to the NVM, it uses the InsertBFFWD and InsertBFTRANS instructions to add the base address of the object to the filters before starting the recursive computation of the transitive closure. Periodically, a background GC thread runs, scanning all non-forwarded objects and updating references to forwarded objects with their NVM addresses. This bulk update amortizes the cost of object scanning for pointer updates when objects change address. After the bulk update completes, the FWD filter is cleared.
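
A rough sketch of the false-positive handling and the bulk update pass is shown below; the header layout and helper names are illustrative assumptions rather than the paper's exact data structures.

```cpp
// Sketch: a FWD-filter hit is confirmed against the object header (false positives
// fall through), and a periodic bulk pass rewrites stale references to forwarded
// DRAM objects so the FWD filter can eventually be cleared.
#include <cstdint>
#include <vector>

struct ObjectHeader {
    uint32_t flags = 0;          // FORWARD / QUEUED bits maintained by the runtime
    void*    forward = nullptr;  // NVM copy, valid when FORWARD is set
};
constexpr uint32_t FORWARD = 1u << 1;

// Invoked by the software handler on a FWD-filter hit.
void* resolve_address(ObjectHeader* obj) {
    return (obj->flags & FORWARD) ? obj->forward : obj;   // header is the ground truth
}

// Bulk update pass over all reference slots found in non-forwarded objects.
void bulk_update(const std::vector<ObjectHeader**>& reference_slots) {
    for (ObjectHeader** slot : reference_slots) {
        ObjectHeader* target = *slot;
        if (target != nullptr && (target->flags & FORWARD))
            *slot = static_cast<ObjectHeader*>(target->forward);
    }
    // The FWD filter is cleared (or the active filter is switched) after the pass.
}
```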

The TRANS filter is cleared when a software handler completes the transitive closure of a migration. Although the paper does not state whether object migrations must be serialized, it appears they are: otherwise, different migrations could overlap and insert into the TRANS filter concurrently, making it difficult to decide when the filter can be cleared and objects unblocked.

The CheckStoreH instruction, on the other hand, only checks the host object. It tests whether the host object is in DRAM and forwarded; if so, the software handler is called for address translation. If a transaction is active, logging is also performed. Otherwise, the update is simply applied in place.

The CheckLoad instruction works almost the same as CheckStoreH, except that it does not check the transactional mode. Address translation is performed if the object is in DRAM and forwarded, and the translated address is used to fulfill the load.
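
For comparison with CheckStoreBoth, the CheckStoreH and CheckLoad behavior might look as in the sketch below; the stubs again stand in for filter probes and handler traps and are not the paper's interface.

```cpp
// Sketch of CheckStoreH and CheckLoad semantics (host-side checks only).
static bool  is_in_nvm(const void*)  { return false; }  // NVM address-range test
static bool  fwd_hit(const void*)    { return false; }  // FWD filter probe
static void* trap_translate(void* p) { return p; }      // handler: DRAM -> NVM address
static void  trap_undo_log(void*)    {}                 // handler: create + flush undo entry

// Primitive-field store: host.field = v
void check_store_h(void* host_base, long* field_addr, long v, bool in_transaction) {
    if (!is_in_nvm(host_base) && fwd_hit(host_base))
        field_addr = static_cast<long*>(trap_translate(field_addr));  // redirect to NVM copy
    if (in_transaction && is_in_nvm(field_addr))
        trap_undo_log(field_addr);       // log the old value before a persistent update
    *field_addr = v;                     // otherwise the update is applied in place
}

// Field load: the same host check, but no transactional logging.
long check_load(void* host_base, long* field_addr) {
    if (!is_in_nvm(host_base) && fwd_hit(host_base))
        field_addr = static_cast<long*>(trap_translate(field_addr));
    return *field_addr;
}
```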

The paper also proposes an efficient form of NVM write, which is a compromise between non-temporal (streaming) writes and regular cache-coherent writes. With regular writes, a cache block must first be fetched from the NVM before it can be updated in place in the cache hierarchy, and then evicted back to the NVM for persistence. This process requires two round trips between the L1 and the NVM. A non-temporal write, on the other hand, can push the request directly to the memory controller, requiring only one round-trip latency, but it does not interact well with coherence, since it bypasses the hierarchy entirely. The paper argues that an ideal write operation should both cooperate with coherence for easier programming and minimize latency. To this end, a new write primitive is proposed that does not fetch the block into the L1; instead, the request is pushed down through the hierarchy, allowing each level to incorporate the change, and is then sent to the memory controller for the NVM update. When the request arrives at the LLC, the directory entry is locked, so that only one outstanding request per address is allowed. The consistency model is therefore defined by the order in which LLC directory entries are locked.
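
To illustrate the trade-off, the snippet below contrasts the two existing write styles with the proposed one for persisting a single 8-byte word. The first two use standard x86 intrinsics (assuming CLWB support); __push_through_write is a hypothetical intrinsic standing in for the new primitive, which of course has no real ISA encoding today.

```cpp
// Three ways to persist one 8-byte log word to NVM.
#include <immintrin.h>
#include <cstdint>

void regular_cached_write(uint64_t* nvm_dst, uint64_t v) {
    *nvm_dst = v;                       // fetches the line into the cache (round trip 1)
    _mm_clwb(nvm_dst);                  // write it back toward NVM (round trip 2)
    _mm_sfence();
}

void non_temporal_write(uint64_t* nvm_dst, uint64_t v) {
    _mm_stream_si64(reinterpret_cast<long long*>(nvm_dst),
                    static_cast<long long>(v));   // bypasses the cache hierarchy
    _mm_sfence();                                  // one round trip, but weakly ordered
}

// Hypothetical: push the update down through the hierarchy without fetching the line,
// letting each level merge the change before it reaches the memory controller.
extern "C" void __push_through_write(uint64_t* nvm_dst, uint64_t v);

void proposed_write(uint64_t* nvm_dst, uint64_t v) {
    __push_through_write(nvm_dst, v);   // one round trip, still coherent
}
```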