a4bd217b43
This patch introduces pblk, a host-side translation layer for Open-Channel SSDs that exposes them as block devices. The translation layer allows data placement decisions and I/O scheduling to be managed by the host, enabling users to optimize the SSD for their specific workloads.

An open-channel SSD has a set of LUNs (parallel units) and a collection of blocks. Each block can be read in any order, but writes must be sequential. Writes may also fail, and if a block requires it, it must be reset before new writes can be applied.

To manage these constraints, pblk maintains a logical to physical address (L2P) table, a write cache, garbage collection logic, a recovery scheme, and logic to rate-limit user I/Os versus garbage collection I/Os.

The L2P table is fully associative and manages sectors at a 4KB granularity. pblk stores the L2P table in two places: in the out-of-band area of the media and on the last page of a line. In the case of a power failure, pblk performs a scan to recover the L2P table.

User data is organized into lines. A line is data striped across blocks and LUNs. Lines reduce the amount of metadata the host must maintain alongside the user data and make it easier to implement RAID or erasure coding in the future.

pblk implements multi-tenant support and can be instantiated multiple times on the same drive. Each instance owns a portion of the SSD - both in terms of I/O bandwidth and capacity - providing I/O isolation for each instance.

Finally, pblk also exposes a sysfs interface that allows user space to peek into the internals of pblk. The interface is available at /dev/block/*/pblk/, where * is the name of the exposed block device.

This work also contains contributions from:
  Matias Bjørling <matias@cnexlabs.com>
  Simon A. F. Lund <slund@cnexlabs.com>
  Young Tack Jin <youngtack.jin@gmail.com>
  Huaicheng Li <huaicheng@cs.uchicago.edu>

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <matias@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
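As a rough illustration of the fully-associative L2P table described above, the user-space C sketch below keeps one physical address per 4KB logical sector in a flat array: writes record the new physical location of a sector, reads translate it back. The names (l2p_table, l2p_create, l2p_update, l2p_lookup) and the layout are hypothetical and only show the idea; they are not pblk's actual data structures or kernel code.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ADDR_EMPTY ((uint64_t)-1)       /* marker for an unmapped sector */

/* One entry per 4KB logical sector: the physical address it maps to. */
struct l2p_table {
        uint64_t nr_sectors;
        uint64_t *map;
};

struct l2p_table *l2p_create(uint64_t nr_sectors)
{
        struct l2p_table *t = malloc(sizeof(*t));

        if (!t)
                return NULL;
        t->nr_sectors = nr_sectors;
        t->map = malloc(nr_sectors * sizeof(*t->map));
        if (!t->map) {
                free(t);
                return NULL;
        }
        for (uint64_t i = 0; i < nr_sectors; i++)
                t->map[i] = ADDR_EMPTY;
        return t;
}

/* A write has hit the media: remember where the sector now lives. */
void l2p_update(struct l2p_table *t, uint64_t lba, uint64_t ppa)
{
        t->map[lba] = ppa;
}

/* A read: translate the logical address back to a physical one. */
uint64_t l2p_lookup(struct l2p_table *t, uint64_t lba)
{
        return t->map[lba];
}

int main(void)
{
        struct l2p_table *t = l2p_create(1024); /* 1024 * 4KB of logical space */

        if (!t)
                return 1;
        l2p_update(t, 7, 42);                   /* LBA 7 now lives at physical address 42 */
        printf("lba 7 -> ppa %llu\n", (unsigned long long)l2p_lookup(t, 7));
        free(t->map);
        free(t);
        return 0;
}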
46 lines
1.3 KiB
Plaintext
#
# Open-Channel SSD NVM configuration
#

menuconfig NVM
	bool "Open-Channel SSD target support"
	depends on BLOCK && HAS_DMA
	help
	  Say Y here to enable Open-channel SSDs.

	  Open-Channel SSDs implement a set of extensions to SSDs that
	  expose direct access to the underlying non-volatile memory.

	  If you say N, all options in this submenu will be skipped and
	  disabled; only do this if you know what you are doing.

if NVM

config NVM_DEBUG
	bool "Open-Channel SSD debugging support"
	default n
	---help---
	  Exposes a debug management interface to create/remove targets at:

	    /sys/module/lnvm/parameters/configure_debug

	  It is required to create/remove targets without IOCTLs.

config NVM_RRPC
	tristate "Round-robin Hybrid Open-Channel SSD target"
	---help---
	  Allows an open-channel SSD to be exposed as a block device to the
	  host. The target is implemented using a linear mapping table and
	  cost-based garbage collection. It is optimized for 4K IO sizes.

config NVM_PBLK
	tristate "Physical Block Device Open-Channel SSD target"
	---help---
	  Allows an open-channel SSD to be exposed as a block device to the
	  host. The target assumes the device exposes raw flash and must be
	  explicitly managed by the host.

	  Please note the disk format is considered EXPERIMENTAL for now.

endif # NVM
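For reference, the options above map directly to CONFIG_ symbols. A .config fragment that builds the subsystem in and both targets as modules could look like the following (an illustrative selection, not part of this patch):

CONFIG_NVM=y
CONFIG_NVM_DEBUG=y
CONFIG_NVM_RRPC=m
CONFIG_NVM_PBLK=m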