FPTree: A Hybrid SCM-DRAM Persistent and Concurrent B-Tree for Storage Class Memory



Paper Title: FPTree: A Hybrid SCM-DRAM Persistent and Concurrent B-Tree for Storage Class Memory
Link: https://dl.acm.org/citation.cfm?id=2915251
Year: SIGMOD 2016
Keyword: NVM; B+Tree




This paper presents FPTree, a B+Tree designed for byte-addressable non-volatile memory. The paper points out, at a high level, several challenges of implementing data structures for NVM. First, the data structure should be able to recover to a consistent state after a crash. This is typically achieved by flushing certain data back to the NVM to enforce write ordering between metadata (commit mark, log entry, etc.) and data using persistence barriers. Considering that NVM writes are only atomic on 8-byte aligned words at the CPU side, and that persistence is only atomic at cache line granularity at the NVM side, partial writes might be observed after a crash if the crash happens during or shortly after a write. There are three common patterns for ensuring consistency. The first pattern writes data into an invalid area (assumed to be initialized to zero), which is what happens when inserting elements into a B+Tree node. In this case, we first acquire storage for the write, and then perform the write. A valid mark is set only after the write completes and has been flushed back to the NVM. The valid mark can be appended after the written data as a log record, or set in a separate header structure. A second flush on the valid mark commits the write operation. Since the mark can be set with a single 8-byte atomic write, the commit is also atomic with regard to failures. The second pattern also writes data into an invalid area, but allows data to be invalidated (deleted) before the area can be written into again. This corresponds to insert and delete operations on a tree node. The first half (insertion) is exactly the same as the previous pattern. The second half is achieved by resetting the valid mark set during the first half to a value indicating a deleted entry, and then flushing the mark, which commits the delete operation. Garbage collection is required to reclaim deleted entries to avoid fragmentation. In the third pattern, an entry is first inserted, which is handled as in the first pattern, and then updated without being deleted. If the update can be done with an 8-byte atomic write, then we simply perform the update and flush the value. Otherwise, we write a redo write-ahead log entry describing the operation and flush the log, after which the actual content is updated in place. The log can only be removed after we flush the in-place update back to the NVM. During recovery, the log entries are processed first by replaying the operations recorded in the log.
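A minimal sketch of the first pattern is shown below, assuming a persist() helper that flushes the affected cache lines (CLWB is used here for illustration; CLFLUSH/CLFLUSHOPT would also work) followed by a store fence. The node layout and names are illustrative, not the paper's actual code.

```cpp
#include <cstddef>
#include <cstdint>
#include <immintrin.h>

// Flush every cache line covering [addr, addr + len) and order the flushes
// before subsequent stores. Requires CPU/compiler support for CLWB.
static void persist(const void* addr, size_t len) {
    const char* p = static_cast<const char*>(addr);
    for (size_t off = 0; off < len; off += 64)
        _mm_clwb(const_cast<char*>(p + off));
    _mm_sfence();
}

struct Slot { uint64_t key; uint64_t value; };

struct Node {
    uint64_t valid_bitmap;   // one bit per slot; an 8-byte atomic store commits
    Slot     slots[64];      // slots whose bit is clear are "invalid area"
};

void append_and_commit(Node* n, int slot, uint64_t key, uint64_t value) {
    n->slots[slot] = Slot{key, value};       // 1. write into an invalid slot
    persist(&n->slots[slot], sizeof(Slot));  // 2. make the data durable
    n->valid_bitmap |= (1ULL << slot);       // 3. set the valid mark (8-byte atomic)
    persist(&n->valid_bitmap, 8);            // 4. second flush commits the insert
}
```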

The second challenge is data recovery. This paper assumes using direct access (DAX) provided by the file system to expose the NVM address space to users through virtual memory. Users call mmap() on the NVM file, which allocates a range of virtual addresses mapped to the physical pages that store the content of the file. DAX provides better control over access permission and storage management, since the file system only exposes the part of the physical address space that stores the file; unrelated NVM storage is protected by the MMU. When users expand the NVM file, the file system allocates new pages from the NVM address space and adds them to the file inode. Compared with conventional memory-mapped files on block storage, DAX on NVM does not use demand paging and does not rely on page faults to bring file contents into a memory buffer before access. Instead, the OS directly maps the requested virtual addresses to physical pages on the underlying NVM hardware. Furthermore, fsync() on an NVM DAX file no longer flushes a buffer cache; instead, flush instructions are issued on every cache line of the affected pages to ensure that they are persistent on NVM. This scheme, however, does not guarantee that the same NVM file is always mapped to the same virtual address on each map, since the address range can be occupied by libraries and/or other DAX files, resulting in virtual address pointers being invalidated. To preserve the semantics of pointers, this paper proposes using DAX-aware persistent pointers, each a 16-byte pair storing the file ID and the offset within the file. No special compiler instrumentation is needed, as this pointer format is only used within the B+Tree; users always observe normal virtual address pointers at runtime.
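The following is a small sketch of such a DAX-aware persistent pointer: a 16-byte (file ID, offset) pair translated to an ordinary virtual address at runtime. The per-file base-address registry is my own illustrative assumption, not the paper's design.

```cpp
#include <cstdint>
#include <unordered_map>

// 16-byte persistent pointer: stable across restarts even if the file is
// mapped at a different virtual address next time.
struct PPtr {
    uint64_t file_id;   // identifies the mmap'ed NVM file
    uint64_t offset;    // byte offset within that file
};

// Filled once per process after mmap() returns the base address of each file.
static std::unordered_map<uint64_t, char*> g_file_base;

template <typename T>
T* to_virtual(const PPtr& p) {
    return reinterpret_cast<T*>(g_file_base.at(p.file_id) + p.offset);
}
```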

The third challenge is memory allocation. Memory blocks already allocated to the application should not be re-allocated after a crash due to incorrectly recovered metadata. Similarly, memory blocks should always be tracked by either the application or the allocator. Failure to observe the second requirement results in a memory leak, which is harder to resolve than in volatile memory, since the leak itself is persistent. Prior research proposes changing the allocator interface to enable atomic ownership transfer: on memory allocation, the address of the target pointer that receives the allocated block is passed as an argument. The allocator ensures that either the memory block is allocated after recovery and the target word stores the address of the block, or the block is not allocated and the target word's value does not change. The same applies to deallocation: on memory free, the address of the target word holding the block to be freed is passed as an argument. The allocator ensures that either the block is returned after recovery and the target word is set to NULL, or the block is not returned and the target word's value is unchanged. The memory allocator can achieve this internally using redo logging.
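A sketch of the resulting allocator interface might look as follows; the names are illustrative, and only the contract described above matters.

```cpp
#include <cstddef>

// Persistent allocator with atomic ownership transfer. The caller passes the
// address of the persistent target word, so that after recovery either the
// word holds the new block or it is left unchanged (internally this can be
// implemented with redo logging).
struct PAllocator {
    // After recovery: either *target points to a block of `size` bytes, or
    // *target is unchanged and no block is leaked.
    void allocate(void** target, size_t size);

    // After recovery: either the block is returned and *target is NULL, or
    // *target is untouched and the block still belongs to the application.
    void deallocate(void** target);

    // Settle pending allocations/deallocations after a crash.
    void recover();
};
```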

We now describe the FPTree node layout. Internal nodes of FPTree are volatile and never persisted; their memory is allocated by a regular memory allocator in the volatile address space. An internal node consists of three parts: a counter recording the number of entries, an array of sorted keys, and an array of values (child pointers) corresponding to the keys. Internal nodes are always kept sorted and updated in place like regular B+Tree nodes. No lock field is needed for thread synchronization, since FPTree worker threads wrap the critical section in hardware transactions (Intel TSX). Leaf nodes of FPTree have more fields than internal nodes for three reasons. First, updates to leaf nodes must be persisted back to the NVM, which is slower than updating internal nodes. Leaf nodes therefore use log-structured updates without enforcing key ordering. In other words, an update to a leaf node simply appends the key and value to the next available slot, flushes the content of the slot, and sets a "valid" bit in the node header bitmap. Second, since keys are not sorted in leaf nodes, a key lookup requires scanning the leaf node. To reduce the extra cost of the node scan, a "fingerprint" field is added to help locate entries within the node. The fingerprint field stores 64 one-byte key hashes of all valid keys in the leaf node, supporting a maximum node capacity of 64 entries. The fingerprint field is also aligned to a cache line boundary, such that it can be read from NVM in one bus request and then checked against the hash of the search key efficiently using SIMD instructions. Analysis shows that hash collisions within a node are extremely rare, so on average only one extra probe is needed to determine whether a key truly exists. Third, since only leaf nodes are persisted on NVM, on recovery we must be able to find all leaf nodes and then rebuild the inner levels. To achieve this, every leaf node includes a "next node" pointer that stores the (relative) address of the next leaf. A pointer to the first leaf is stored at a well-known location to help the recovery process find the leaf chain. Finally, a "lock" bit in each leaf node is used to synchronize update operations on the leaf. Threads first traverse to the leaf node using HTM, lock the leaf node, and commit the HTM transaction. This can be thought of as coarse-grained hand-over-hand locking, as the entire upper-level (non-leaf) traversal is made atomic with HTM.
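An illustrative leaf layout following the description above (field order and sizes are assumptions; the paper's exact layout may differ):

```cpp
#include <cstdint>

constexpr int kLeafCapacity = 64;          // the fingerprint array caps fanout at 64

struct KV { uint64_t key; uint64_t value; };

struct LeafNode {
    uint64_t bitmap;                       // bit i set => slots[i] holds a valid entry
    uint8_t  lock;                         // leaf lock acquired after the HTM traversal
    uint8_t  fingerprints[kLeafCapacity];  // 1-byte key hashes, cache-line aligned in the paper
    KV       slots[kLeafCapacity];         // unsorted, log-structured key-value slots
    uint64_t next;                         // persistent (relative) pointer to the next leaf
};
```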

We next describe basic operations, beginning with reads. Read operations (and also the traversal stage of insert, delete, and update) use hardware transactional memory to ensure atomicity. The paper recommends using the speculative lock construct in Intel TBB as a mature solution. The speculative lock first attempts to execute the critical section in HTM mode, and falls back to software mode if HTM transactions abort repeatedly or abort for unrecoverable reasons. The software mode uses a single global lock to synchronize all threads on the tree. As discussed in the previous paragraph, the advantage of using HTM is ease of programming and verification. The tree traversal procedure can be thought of as coarse-grained hand-over-hand lock coupling, in which inner-level traversal is made atomic by the speculative lock without paying the serialization overhead of a single "upper level" lock, thanks to the fine-grained conflict detection provided by HTM.
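A rough sketch of a lookup under this scheme, using tbb::speculative_spin_mutex as one concrete choice of speculative lock and the LeafNode layout from the earlier sketch. For simplicity this sketch leaves the critical section and retries when the leaf is locked, whereas the paper explicitly aborts the HTM transaction; find_leaf() and search_in_leaf() are assumed helpers.

```cpp
#include <cstdint>
#include <tbb/spin_mutex.h>

LeafNode* find_leaf(uint64_t key);                                        // assumed helper
bool search_in_leaf(const LeafNode* leaf, uint64_t key, uint64_t& value); // see next sketch

tbb::speculative_spin_mutex tree_lock;     // single fallback lock for the whole tree

bool lookup(uint64_t key, uint64_t& value) {
    while (true) {
        // Runs in HTM if possible, otherwise falls back to taking the lock.
        tbb::speculative_spin_mutex::scoped_lock guard(tree_lock);
        LeafNode* leaf = find_leaf(key);   // traversal over the volatile inner nodes
        if (leaf->lock)                    // a writer holds the leaf: leave the
            continue;                      // critical section and retry
        return search_in_leaf(leaf, key, value);
    }
}
```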

Tree traversal begins at the root level after acquiring the speculative lock, and ends at the leaf level like a regular tree traversal. During the traversal, any attempt to modify an inner node conflicts with traversal threads if the inner node under modification is also in their read set. Once the leaf node is found, the thread first checks whether the lock bit in the leaf node is set. If it is, the thread aborts the transaction, since a concurrent update is modifying the leaf node, which makes it unsafe to read. The read also brings the leaf's lock word into the read set of the transaction, so that any later acquisition of the lock forces the transaction to abort, serializing the node read with concurrent node updates (all node updates acquire the lock physically). The thread then performs the key search within the node by reading the fingerprint field, checking the hash of the search key against the one-byte fingerprints, and probing the actual key slot(s) if there are one or more matches. A key is found if the key values match and the bitmap indicates a valid entry. The value is returned after the critical section is committed.
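A possible implementation of the in-leaf search using SSE2, matching the description above; hash1() is an assumed one-byte hash function and LeafNode comes from the earlier layout sketch.

```cpp
#include <cstdint>
#include <emmintrin.h>   // SSE2

uint8_t hash1(uint64_t key);   // assumed 1-byte key hash

bool search_in_leaf(const LeafNode* leaf, uint64_t key, uint64_t& value) {
    const __m128i needle = _mm_set1_epi8(static_cast<char>(hash1(key)));
    for (int base = 0; base < kLeafCapacity; base += 16) {
        // Compare 16 fingerprints at a time against the search key's hash.
        __m128i chunk = _mm_loadu_si128(
            reinterpret_cast<const __m128i*>(leaf->fingerprints + base));
        int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(chunk, needle));
        while (mask) {
            int i = base + __builtin_ctz(mask);   // candidate slot index
            mask &= mask - 1;
            // Probe the actual slot: it must be valid and hold the same key.
            if (((leaf->bitmap >> i) & 1) && leaf->slots[i].key == key) {
                value = leaf->slots[i].value;
                return true;
            }
        }
    }
    return false;
}
```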

Tree insertion proceeds the same way as a key search until reaching the leaf node. Once the leaf node is found, the thread sets the lock bit speculatively within the critical section and commits the transaction, which physically acquires the lock, potentially aborting concurrent read transactions on the leaf node. The thread then searches for the key as described in the previous paragraph. If the key cannot be found, the insertion is performed by first finding an empty slot in the leaf node using the bitmap and then writing the key-value pair into the slot. The insert is committed by persisting the key-value slot and the fingerprint first, then setting the corresponding bit in the bitmap and persisting the bitmap.
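The commit sequence then reduces to a few ordered persists; a sketch using the earlier persist() helper and leaf layout (the split path and duplicate handling are omitted):

```cpp
void leaf_insert(LeafNode* leaf, uint64_t key, uint64_t value) {
    int slot = __builtin_ctzll(~leaf->bitmap);   // first free slot (node assumed not full)
    leaf->slots[slot] = KV{key, value};
    leaf->fingerprints[slot] = hash1(key);
    persist(&leaf->slots[slot], sizeof(KV));     // data and fingerprint durable first
    persist(&leaf->fingerprints[slot], 1);
    leaf->bitmap |= (1ULL << slot);              // 8-byte atomic commit
    persist(&leaf->bitmap, 8);
}
```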

When the leaf node is full, the thread splits it as follows. After locking the leaf node, and before inserting the key-value pair, we check the size of the node. If it is full, the node split begins by allocating a log entry from the log pool. Log entries are maintained in the NVM address space as an array of log objects. Each log object has fields (possibly overloaded for different purposes) that describe a tree modification operation. The log pool is stored at a known location in the NVM file, and can be located after a crash to aid recovery. The log object for a node split contains two fields, a current leaf pointer and a new leaf pointer, both initialized to zero. The thread first writes the address of the current leaf into the "current leaf pointer" field and persists it. If the system crashes at this point, the recovery routine will allocate memory and complete the rest of the node split. The thread then allocates a new leaf node by passing the address of the "new leaf pointer" field to the allocator; the allocator ensures atomic ownership transfer between itself and the log object. If the system crashes and both fields are set, then no new allocation is needed during recovery, since the log object already contains the newly allocated leaf node. If, on the contrary, the new leaf pointer field is empty after a crash, then we know the allocation failed, which leads to a new allocation. The thread then populates the new node with the upper half of the current leaf node. The bitmap of the current node is also updated to remove the keys in the upper half. Both the new node and the current node are persisted before the log entry is invalidated.
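A hedged sketch of this split protocol, with an illustrative SplitLog layout and assumed helpers; persistent pointers are shown as raw pointers for brevity:

```cpp
struct SplitLog {
    LeafNode* cur_leaf;   // written and persisted first
    LeafNode* new_leaf;   // filled in by the allocator (atomic ownership transfer)
};

void copy_upper_half(const LeafNode* from, LeafNode* to);  // assumed helpers
uint64_t lower_half_mask(const LeafNode* leaf);
void invalidate(SplitLog* log);

void split_leaf(LeafNode* leaf, PAllocator& alloc, SplitLog* log) {
    log->cur_leaf = leaf;
    persist(&log->cur_leaf, sizeof(LeafNode*));      // persist point 1
    alloc.allocate(reinterpret_cast<void**>(&log->new_leaf), sizeof(LeafNode));
    LeafNode* right = log->new_leaf;                 // persist point 2 (inside the allocator)
    copy_upper_half(leaf, right);                    // move the upper half of the keys
    right->next = leaf->next;
    persist(right, sizeof(LeafNode));
    leaf->bitmap &= lower_half_mask(leaf);           // drop the moved keys from the left node
    leaf->next = reinterpret_cast<uint64_t>(right);
    persist(leaf, sizeof(LeafNode));
    invalidate(log);                                 // clear and persist the log entry
}
```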

In general, by logging essential data and persisting log fields at certain "persist points", the recovery procedure can always recognize roughly where the crash happened by checking the values of the fields in the log entry. As long as all steps between two persist points are idempotent, i.e., they can be performed more than once without changing the semantics of the operation, recovery by replaying these steps is correct. Non-idempotent operations (e.g., memory allocation, key appending), however, must be recorded in the log to ensure exactly-once semantics.

Note that the parent node update is not logged, since parent nodes are stored in volatile memory only and will be rebuilt on recovery anyway. Parent nodes are updated by acquiring the same speculative lock as in node traversal while holding the leaf node lock. This is comparable to the lock coupling protocol for updating internal nodes, in which the internal node is locked while holding the leaf node lock. The difference is that in the lock coupling approach, deadlock might happen since there is no global lock ordering: a thread traversing down the tree may deadlock with another thread attempting to lock the internal node while owning the leaf node lock. In our approach this is impossible, since HTM transactions abort eagerly on data conflicts (e.g., when the internal node is read while the current thread is inserting new elements). The thread first checks whether the internal node is the same node it used to traverse the tree. If not, the correct node is found by re-traversing the tree. Otherwise, the internal node is updated in the critical section. The node split operation is completed by committing the transaction and then releasing the leaf node lock.
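A sketch of this volatile parent update (InnerNode and the helpers are assumptions; the leaf lock is still held while this runs):

```cpp
struct InnerNode;                                           // volatile inner node (opaque here)
bool is_parent_of(const InnerNode* node, uint64_t sep_key); // assumed helpers
InnerNode* retraverse_to_parent(uint64_t sep_key);
void inner_insert(InnerNode* parent, uint64_t sep_key, LeafNode* right);

void update_parent(InnerNode* expected_parent, uint64_t sep_key, LeafNode* right) {
    tbb::speculative_spin_mutex::scoped_lock guard(tree_lock);  // HTM or fallback
    InnerNode* parent = expected_parent;
    if (!is_parent_of(parent, sep_key))        // the tree may have changed since traversal
        parent = retraverse_to_parent(sep_key);
    inner_insert(parent, sep_key, right);      // in-place sorted insert; volatile only
}   // committing the transaction publishes the update; the leaf lock is released afterwards
```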

Key deletion proceeds in a similar manner to key insertion. After acquiring the lock on the leaf node and locating the key in the node, we simply clear the corresponding bit in the bitmap and commit the deletion by persisting the bitmap. FPTree simplifies node merging by never merging nodes; it simply removes empty nodes from the leaf node chain. If a node would become empty after the deletion, both the lock of the node and that of its previous node (note: the paper does not mention how to find the previous node) are taken within the node traversal critical section (and the previous node's lock is also checked). The thread then acquires a new log object for the node delete operation. The delete log object stores the current node to be deleted and the previous node (in the corner case where the first node in the leaf chain is deleted, the previous node is not logged but set to a special value). The node delete is committed by flushing the log object, updating the previous node (or the head pointer of the leaf chain), flushing the updated fields, and deallocating the leaf node. To deallocate the node, we pass the address of the current node field of the log object to the memory allocator. The allocator ensures that either the node is returned to the allocator pool and the pointer is set to NULL, or the node still belongs to the application and the field is not nullified. The recovery routine checks whether the current node field of a delete log object is NULL. If it is, no memory block needs to be freed during recovery. Otherwise, the recovery routine redoes the node delete and deallocates the current leaf node stored in the log entry. The paper does not show in pseudocode how and when the parent node is updated; I guess it should be the same as when inserting a node, i.e., starting an HTM transaction while holding the lock on the leaf node and removing the entry in the internal node pointing to the node to be deleted.
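A sketch of deletion and empty-node removal following the steps above (DeleteLog and the helpers are illustrative; the head-of-chain corner case is omitted):

```cpp
struct DeleteLog {
    LeafNode* cur_leaf;    // node being unlinked; set to NULL by the deallocator
    LeafNode* prev_leaf;   // predecessor in the leaf chain (special value for the head)
};

void leaf_delete(LeafNode* leaf, int slot) {
    leaf->bitmap &= ~(1ULL << slot);          // clear the validity bit
    persist(&leaf->bitmap, 8);                // commits the deletion
}

void unlink_empty_leaf(LeafNode* leaf, LeafNode* prev,
                       PAllocator& alloc, DeleteLog* log) {
    log->cur_leaf = leaf;
    log->prev_leaf = prev;
    persist(log, sizeof(DeleteLog));          // persist point: the delete is now redo-able
    prev->next = leaf->next;                  // unlink from the persistent leaf chain
    persist(&prev->next, sizeof(uint64_t));
    alloc.deallocate(reinterpret_cast<void**>(&log->cur_leaf));  // NULLs cur_leaf atomically
}
```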

On recovery, the memory allocator's recovery routine is first invoked to determine block ownership for pending allocations and deallocations (although not mentioned in the paper, this is usually done by reapplying redo logs). The recovery handler for FPTree is then invoked, which first scans the log array and reapplies all active log entries to complete pending tree modifications and memory allocation/deallocation. The handler then rebuilds the internal levels of the tree using an algorithm similar to bulk insert. During this process, all leaf nodes are scanned. Leaf node locks are also released, in case a locked leaf node's cache line was evicted back to the NVM while the node was locked. Normal operations can resume after rebuilding the internal levels.
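A high-level sketch of this recovery path, reusing the earlier log and node sketches; all helpers are assumed:

```cpp
#include <cstddef>

bool is_active(const SplitLog* log);          // assumed helpers over the log pool
bool is_active(const DeleteLog* log);
void replay_split(SplitLog* log);
void replay_delete(DeleteLog* log);
LeafNode* next_leaf(const LeafNode* leaf);    // follows the persistent next pointer
void bulk_insert_separator(LeafNode* leaf);   // rebuilds volatile inner levels

void recover(PAllocator& alloc,
             SplitLog* split_logs, size_t n_split,
             DeleteLog* delete_logs, size_t n_delete,
             LeafNode* first_leaf) {
    alloc.recover();                          // settle pending (de)allocations first
    for (size_t i = 0; i < n_split; ++i)
        if (is_active(&split_logs[i])) replay_split(&split_logs[i]);
    for (size_t i = 0; i < n_delete; ++i)
        if (is_active(&delete_logs[i])) replay_delete(&delete_logs[i]);
    for (LeafNode* leaf = first_leaf; leaf; leaf = next_leaf(leaf)) {
        leaf->lock = 0;                       // clear locks that leaked to NVM
        bulk_insert_separator(leaf);          // rebuild the volatile inner levels
    }
}
```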