amiflop: convert to blk-mq
Omar Sandoval [Mon, 15 Oct 2018 15:16:37 +0000 (09:16 -0600)]
amiflop: convert to blk-mq

Straightforward conversion, just use the existing amiflop_lock to
serialize access to the controller. Compile-tested only.
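
For reference, a conversion of this shape typically ends up with a
queue_rq handler along the following lines (a hedged sketch based on the
description above, not a verbatim excerpt; amiflop_rw_cur_segment stands
in for the driver's existing transfer helper):

    static blk_status_t amiflop_queue_rq(struct blk_mq_hw_ctx *hctx,
                                         const struct blk_mq_queue_data *bd)
    {
            struct request *rq = bd->rq;
            struct amiga_floppy_struct *floppy = rq->rq_disk->private_data;
            blk_status_t err;

            /* one controller: amiflop_lock serializes all access to it */
            if (!spin_trylock_irq(&amiflop_lock))
                    return BLK_STS_DEV_RESOURCE;    /* blk-mq retries later */

            blk_mq_start_request(rq);

            do {
                    err = amiflop_rw_cur_segment(floppy, rq);
            } while (blk_update_request(rq, err, blk_rq_cur_bytes(rq)));
            blk_mq_end_request(rq, err);

            spin_unlock_irq(&amiflop_lock);
            return BLK_STS_OK;
    }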

Cc: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Converted to blk_mq_init_sq_queue()

Signed-off-by: Jens Axboe <axboe@kernel.dk>
amiflop: clean up on errors during setup
Omar Sandoval [Thu, 11 Oct 2018 19:20:46 +0000 (12:20 -0700)]
amiflop: clean up on errors during setup

The error handling in fd_probe_drives() doesn't clean up at all. Fix it
up in preparation for converting to blk-mq. While we're here, get rid of
the commented-out amiga_floppy_remove().

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
amiflop: fold headers into C file
Omar Sandoval [Thu, 11 Oct 2018 19:20:45 +0000 (12:20 -0700)]
amiflop: fold headers into C file

amifd.h and amifdreg.h are only used from amiflop.c, and they're pretty
small, so move the contents to amiflop.c and get rid of the .h files.
This is preparation for adding a struct blk_mq_tag_set to struct
amiga_floppy_struct.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
swim3: convert to blk-mq
Omar Sandoval [Mon, 15 Oct 2018 15:14:46 +0000 (09:14 -0600)]
swim3: convert to blk-mq

Pretty simple conversion. grab_drive() could probably be replaced by
some freeze/quiesce incantation, but I left it alone, and just used
freeze/quiesce for eject. Compile-tested only.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Converted to blk_mq_init_sq_queue().

Signed-off-by: Jens Axboe <axboe@kernel.dk>
swim3: add real error handling in setup
Omar Sandoval [Thu, 11 Oct 2018 19:20:43 +0000 (12:20 -0700)]
swim3: add real error handling in setup

The driver doesn't have support for removing a device that has already
been configured, but with more careful ordering we can avoid the need
for that and make sure that we don't leak generic resources.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
swim: convert to blk-mq
Omar Sandoval [Mon, 15 Oct 2018 15:12:12 +0000 (09:12 -0600)]
swim: convert to blk-mq

The only interesting thing here is that there may be two floppies (i.e.,
request queues) sharing the same controller, so we use the global struct
swim_priv->lock to check whether the controller is busy. Compile-tested
only.

Tested-by: Finn Thain <fthain@telegraphics.com.au>
Acked-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Converted to blk_mq_init_sq_queue()

Signed-off-by: Jens Axboe <axboe@kernel.dk>
swim: fix cleanup on setup error
Omar Sandoval [Thu, 11 Oct 2018 19:20:41 +0000 (12:20 -0700)]
swim: fix cleanup on setup error

If we fail to allocate the request queue for a disk, we still need to
free that disk, not just the previous ones. Additionally, we need to
clean up the previous request queues.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
mtd_blkdevs: convert to blk-mq
Jens Axboe [Tue, 16 Oct 2018 14:09:58 +0000 (08:09 -0600)]
mtd_blkdevs: convert to blk-mq

Straightforward conversion, using an internal list to enable the
driver to pull requests at will.

Dynamically allocate the tag set to avoid having to pull in the
block headers for blktrans.h, since various mtd drivers use
names for defines and functions that conflict with the block layer's.
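
The "internal list" idiom mentioned above generally has the following
shape (a sketch; dev->rq_list and mtd_blktrans_work are assumed names
based on the description, not quoted code): queue_rq only parks the
request on a driver-private list and kicks the existing worker, which
then pops requests at its own pace.

    static blk_status_t mtd_queue_rq(struct blk_mq_hw_ctx *hctx,
                                     const struct blk_mq_queue_data *bd)
    {
            struct mtd_blktrans_dev *dev = hctx->queue->queuedata;

            spin_lock_irq(&dev->queue_lock);
            /* park the request; the translation worker pulls it at will */
            list_add_tail(&bd->rq->queuelist, &dev->rq_list);
            mtd_blktrans_work(dev);         /* kick the worker */
            spin_unlock_irq(&dev->queue_lock);

            return BLK_STS_OK;
    }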

Cc: David Woodhouse <dwmw2@infradead.org>
Cc: linux-mtd@lists.infradead.org
Tested-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
xsysace: convert to blk-mq
Jens Axboe [Mon, 15 Oct 2018 15:05:59 +0000 (09:05 -0600)]
xsysace: convert to blk-mq

Straightforward conversion, using an internal list to enable the
driver to pull requests at will.

Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
paride: convert pf to blk-mq
Jens Axboe [Mon, 15 Oct 2018 14:38:08 +0000 (08:38 -0600)]
paride: convert pf to blk-mq

Tested-by: Ondrej Zary <linux@rainbow-software.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
paride: convert pd to blk-mq
Jens Axboe [Mon, 15 Oct 2018 19:53:50 +0000 (13:53 -0600)]
paride: convert pd to blk-mq

Tested-by: Ondrej Zary <linux@rainbow-software.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
paride: convert pcd to blk-mq
Jens Axboe [Mon, 15 Oct 2018 14:38:52 +0000 (08:38 -0600)]
paride: convert pcd to blk-mq

Tested-by: Ondrej Zary <linux@rainbow-software.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
ps3disk: convert to blk-mq
Jens Axboe [Mon, 15 Oct 2018 19:32:01 +0000 (13:32 -0600)]
ps3disk: convert to blk-mq

Convert from the old request_fn style driver to blk-mq.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
blk-mq: provide helper for setting up an SQ queue and tag set
Jens Axboe [Mon, 15 Oct 2018 14:40:37 +0000 (08:40 -0600)]
blk-mq: provide helper for setting up an SQ queue and tag set

This pattern is repeated throughout all the blk-mq conversions.
Provide a basic helper to get it done.
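
The helper bundles tag-set setup and queue allocation for the
single-queue case. Its signature, and a typical call site, look roughly
like this (the swim names in the example are illustrative):

    struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
                                               const struct blk_mq_ops *ops,
                                               unsigned int queue_depth,
                                               unsigned int set_flags);

    /* e.g. in a driver's setup path: */
    q = blk_mq_init_sq_queue(&swd->tag_set, &swim_mq_ops, 2,
                             BLK_MQ_F_SHOULD_MERGE);
    if (IS_ERR(q))
            goto exit_put;

On failure the helper returns an ERR_PTR and unwinds the tag set itself,
so callers have one less object to clean up.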

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
null_blk: remove set but not used variable 'q'
YueHaibing [Tue, 16 Oct 2018 01:45:26 +0000 (01:45 +0000)]
null_blk: remove set but not used variable 'q'

Fixes gcc '-Wunused-but-set-variable' warning:

drivers/block/null_blk_main.c: In function 'end_cmd':
drivers/block/null_blk_main.c:609:24: warning:
 variable 'q' set but not used [-Wunused-but-set-variable]

It is no longer used after commit
e50b1e327aeb ("null_blk: remove legacy IO path").

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
cdrom: don't attempt to fiddle with cdo->capability
Jens Axboe [Sun, 14 Oct 2018 19:20:48 +0000 (13:20 -0600)]
cdrom: don't attempt to fiddle with cdo->capability

We can't modify cdo->capability, as it is defined as const.
Change the modification hack to just WARN_ON_ONCE() if we hit
any of the invalid combinations.

This fixes a regression for pcd, which doesn't work after the
constify patch.

Fixes: 853fe1bf7554 ("cdrom: Make device operations read-only")
Tested-by: Ondrej Zary <linux@rainbow-software.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: remove bogus check for queue_lock assignment
Jens Axboe [Fri, 12 Oct 2018 15:24:57 +0000 (09:24 -0600)]
block: remove bogus check for queue_lock assignment

We just allocated the queue and haven't even set it up yet,
hence we know that checking if ->mq_ops is NULL is always
going to be true.

In fact we always need to assign a lock to ->queue_lock,
as we need it for the queue flag modifications.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
null_blk: remove legacy IO path
Jens Axboe [Thu, 11 Oct 2018 23:58:17 +0000 (17:58 -0600)]
null_blk: remove legacy IO path

We're planning on removing this code completely, kill the old
path.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
um: Convert ubd driver to blk-mq
Richard Weinberger [Sun, 26 Nov 2017 12:33:11 +0000 (13:33 +0100)]
um: Convert ubd driver to blk-mq

Convert the driver to the modern blk-mq framework.
As a byproduct we get rid of our open-coded restart logic and let
blk-mq handle it.

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
skd: fixup usage of legacy IO API
Jens Axboe [Thu, 11 Oct 2018 20:56:14 +0000 (14:56 -0600)]
skd: fixup usage of legacy IO API

We need to be using the mq variant of request requeue here.

Fixes: ca33dd92968b ("skd: Convert to blk-mq")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
aoe: convert aoeblk to blk-mq
Jens Axboe [Fri, 12 Oct 2018 16:03:14 +0000 (10:03 -0600)]
aoe: convert aoeblk to blk-mq

Straightforward conversion: instead of rewriting the internal buffer
retrieval logic, just replace the previous elevator peeking with an
internal list of requests.

Reviewed-by: "Ed L. Cashin" <ed.cashin@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
blk-mq: fallback to previous nr_hw_queues when updating fails
Jianchao Wang [Fri, 12 Oct 2018 10:07:28 +0000 (18:07 +0800)]
blk-mq: fallback to previous nr_hw_queues when updating fails

When we try to increase nr_hw_queues, we may fail due to a shortage
of memory or some other reason, in which case blk_mq_realloc_hw_ctxs
stops and some entries in q->queue_hw_ctx are left NULL. However,
because the queue map has already been updated with the new
nr_hw_queues, some CPUs end up mapped to a hw queue whose allocation
just failed, so blk_mq_map_queue can return NULL. This causes a panic
in the following blk_mq_map_swqueue.

To fix it, when increasing nr_hw_queues fails, fall back to the
previous nr_hw_queues and post a warning. At the same time, a driver's
.map_queues usually uses the completion irq affinity to map hw queues
to CPUs; with the fallback nr_hw_queues some CPUs would be left without
a mapping, so use the default blk_mq_map_queues to do the mapping.
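
Abridged, the fallback loop described above looks something like this
(a simplified sketch of the flow, not the exact patch):

    prev_nr_hw_queues = set->nr_hw_queues;
    set->nr_hw_queues = nr_hw_queues;
fallback:
    blk_mq_update_queue_map(set);
    list_for_each_entry(q, &set->tag_list, tag_set_list) {
            blk_mq_realloc_hw_ctxs(set, q);
            if (q->nr_hw_queues != set->nr_hw_queues) {
                    pr_warn("Increasing nr_hw_queues to %d fails, fallback to %d\n",
                            nr_hw_queues, prev_nr_hw_queues);
                    /* fall back, and use the default CPU-to-queue map */
                    set->nr_hw_queues = prev_nr_hw_queues;
                    blk_mq_map_queues(set);
                    goto fallback;
            }
            blk_mq_map_swqueue(q);
    }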

Reported-by: syzbot+83e8cbe702263932d9d4@syzkaller.appspotmail.com
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
blk-mq: realloc hctx when hw queue is mapped to another node
Jianchao Wang [Fri, 12 Oct 2018 10:07:27 +0000 (18:07 +0800)]
blk-mq: realloc hctx when hw queue is mapped to another node

When the hw queues and mq_map are updated, a hctx could be mapped
to a different NUMA node. In that case we need to realloc the
hctx. If that fails, keep using the previous hctx.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
blk-mq: change gfp flags to GFP_NOIO in blk_mq_realloc_hw_ctxs
Jianchao Wang [Fri, 12 Oct 2018 10:07:26 +0000 (18:07 +0800)]
blk-mq: change gfp flags to GFP_NOIO in blk_mq_realloc_hw_ctxs

blk_mq_realloc_hw_ctxs could be invoked while the hw queues are being
updated, at which point IO is blocked. Change the gfp flags from
GFP_KERNEL to GFP_NOIO to avoid hanging forever during memory
allocation in blk_mq_realloc_hw_ctxs.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
blk-mq: adjust debugfs and sysfs register when updating nr_hw_queues
Jianchao Wang [Fri, 12 Oct 2018 10:07:25 +0000 (18:07 +0800)]
blk-mq: adjust debugfs and sysfs register when updating nr_hw_queues

blk-mq debugfs and sysfs entries need to be removed before updating
the queue map, otherwise we get wrong results there. This patch fixes
that and removes the redundant debugfs and sysfs register/unregister
operations during __blk_mq_update_nr_hw_queues.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block, bfq: improve asymmetric scenarios detection
Federico Motta [Fri, 12 Oct 2018 09:55:57 +0000 (11:55 +0200)]
block, bfq: improve asymmetric scenarios detection

bfq defines as asymmetric a scenario where an active entity, say E
(representing either a single bfq_queue or a group of other entities),
has a higher weight than some other entities.  If the entity E does sync
I/O in such a scenario, then bfq plugs the dispatch of the I/O of the
other entities in the following situation: E is in service but
temporarily has no pending I/O request.  In fact, without this plugging,
all the times that E stops being temporarily idle, it may find the
internal queues of the storage device already filled with an
out-of-control number of extra requests, from other entities. So E may
have to wait for the service of these extra requests, before finally
having its own requests served. This may easily break service
guarantees, with E getting less than its fair share of the device
throughput.  Usually, the end result is that E gets the same fraction of
the throughput as the other entities, instead of getting more, according
to its higher weight.

Yet there are two other more subtle cases where E, even if its weight is
actually equal to or even lower than the weight of any other active
entities, may get less than its fair share of the throughput in case the
above I/O plugging is not performed:
1. other entities issue larger requests than E;
2. other entities contain more active child entities than E (or in
   general tend to have more backlog than E).

In the first case, other entities may get more service than E because
they get larger requests than those of E served during the temporary
idle periods of E.  In the second case, other entities get more service
because, by having many child entities, they have many requests ready
for dispatching while E is temporarily idle.

This commit addresses this issue by extending the definition of
asymmetric scenario: a scenario is asymmetric when
- active entities representing bfq_queues have differentiated weights,
  as in the original definition
or (inclusive)
- one or more entities representing groups of entities are active.

This broader definition makes sure that I/O plugging will be performed
in all the above cases, provided that there is at least one active
group.  Of course, this definition is very coarse, so it will trigger
I/O plugging also in cases where it is not needed, such as, e.g.,
multiple active entities with just one child each, and all with the same
I/O-request size.  The reason for this coarse definition is just that a
finer-grained definition would be rather heavy to compute.
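
In code terms, the broadened check boils down to something of this shape
(a sketch; the helper and field names are illustrative, not the exact
bfq symbols):

    /* asymmetric if bfq_queue weights are differentiated, or
     * (inclusive) if at least one group entity is active */
    static bool bfq_asymmetric_scenario(struct bfq_data *bfqd)
    {
            return bfq_varied_queue_weights(bfqd) ||
                   bfqd->num_active_groups > 0;
    }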

On the opposite end, even this new definition does not trigger I/O
plugging in all cases where there is no active group, and all bfq_queues
have the same weight.  So, in these cases some unfairness may occur if
there are asymmetries in I/O-request sizes.  We made this choice because
I/O plugging may lower throughput, and probably a user that has not
created any group cares more about throughput than about perfect
fairness.  At any rate, as for possible applications that may care about
service guarantees, bfq already guarantees a high responsiveness and a
low latency to soft real-time applications automatically.

Signed-off-by: Federico Motta <federico@willer.it>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
cfq: clear queue pointers from cfqg after unpinning them in cfq_pd_offline
Maciej S. Szmigiero [Wed, 10 Oct 2018 21:16:50 +0000 (23:16 +0200)]
cfq: clear queue pointers from cfqg after unpinning them in cfq_pd_offline

BFQ is already doing a similar thing in its .pd_offline_fn() method
implementation.

While it seems that, after commit 4c6994806f70
("blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()")
was reverted, leaving these pointers intact no longer causes crashes,
clearing them is still a sensible thing to do to make the code more robust.

Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: describe difference between flags IO_STAT and STATS
Konstantin Khlebnikov [Thu, 11 Oct 2018 07:07:06 +0000 (10:07 +0300)]
block: describe difference between flags IO_STAT and STATS

This adds reasonable comments, but the flags definitely need better names.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
drivers/block: remove redundant 'default n' from Kconfig-s
Bartlomiej Zolnierkiewicz [Tue, 9 Oct 2018 14:41:01 +0000 (16:41 +0200)]
drivers/block: remove redundant 'default n' from Kconfig-s

'default n' is the default value for any bool or tristate Kconfig
setting so there is no need to write it explicitly.

Also since commit f467c5640c29 ("kconfig: only write '# CONFIG_FOO
is not set' for visible symbols") the Kconfig behavior is the same
regardless of 'default n' being present or not:

    ...
    One side effect of (and the main motivation for) this change is making
    the following two definitions behave exactly the same:

        config FOO
                bool

        config FOO
                bool
                default n

    With this change, neither of these will generate a
    '# CONFIG_FOO is not set' line (assuming FOO isn't selected/implied).
    That might make it clearer to people that a bare 'default n' is
    redundant.
    ...

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: remove redundant 'default n' from Kconfig-s
Bartlomiej Zolnierkiewicz [Tue, 9 Oct 2018 14:32:54 +0000 (16:32 +0200)]
block: remove redundant 'default n' from Kconfig-s

'default n' is the default value for any bool or tristate Kconfig
setting so there is no need to write it explicitly.

Also since commit f467c5640c29 ("kconfig: only write '# CONFIG_FOO
is not set' for visible symbols") the Kconfig behavior is the same
regardless of 'default n' being present or not:

    ...
    One side effect of (and the main motivation for) this change is making
    the following two definitions behave exactly the same:

        config FOO
                bool

        config FOO
                bool
                default n

    With this change, neither of these will generate a
    '# CONFIG_FOO is not set' line (assuming FOO isn't selected/implied).
    That might make it clearer to people that a bare 'default n' is
    redundant.
    ...

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: guarantee that backpointer is respected on writer stall
Javier González [Tue, 9 Oct 2018 11:12:15 +0000 (13:12 +0200)]
lightnvm: pblk: guarantee that backpointer is respected on writer stall

pblk's write buffer must guarantee that it respects the device's
constraints for reads (i.e., mw_cunits). This is done by maintaining a
backpointer that updates the L2P table as entries wrap up, making them
point to the media instead of pointing to the write buffer.

This mechanism can race if the write thread stalls, as the write
pointer will protect the last written entry, thus disregarding the
read constraints.

This patch adds an extra check on wrap up, making sure that the
threshold is respected at all times, preventing new entries from
overwriting committed data, also in case of a write thread stall.

Reported-by: Heiner Litz <hlitz@ucsc.edu>
Signed-off-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Heiner Litz <hlitz@ucsc.edu>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: consider max hw sectors supported for max_write_pgs
Zhoujie Wu [Tue, 9 Oct 2018 11:12:14 +0000 (13:12 +0200)]
lightnvm: pblk: consider max hw sectors supported for max_write_pgs

When doing GC, the number of read/write sectors is determined by
max_write_pgs (see the gc_rq preparation in pblk_gc_line_prepare_ws).

max_write_pgs doesn't consider the max hw sectors supported by the
NVMe controller (128K), which leads to GC trying to read 64 * 4K in
one command, producing the error below from pblk_bio_map_addr in
pblk_submit_read_gc.

[ 2923.005376] pblk: could not add page to bio
[ 2923.005377] pblk: could not allocate GC bio (18446744073709551604)
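
The fix amounts to clamping max_write_pgs against the controller limit,
roughly as below (a sketch assuming the block layer's
queue_max_hw_sectors() helper; the field names follow pblk conventions):

    /* never build a GC read/write larger than the hw can take */
    pblk->max_write_pgs = min_t(int, pblk->max_write_pgs,
                    queue_max_hw_sectors(dev->q) /
                    (geo->csecs >> SECTOR_SHIFT));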

Signed-off-by: Zhoujie Wu <zjwu@marvell.com>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix error handling of pblk_lines_init()
Wei Yongjun [Tue, 9 Oct 2018 11:12:13 +0000 (13:12 +0200)]
lightnvm: pblk: fix error handling of pblk_lines_init()

In the too-many-bad-blocks error handling case, we should release all
the allocated resources, otherwise we leak memory.

Fixes: 2deeefc02dff ("lightnvm: pblk: fail gracefully on line alloc. failure")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: do not update csecs and sos on 1.2
Javier González [Tue, 9 Oct 2018 11:12:12 +0000 (13:12 +0200)]
lightnvm: do not update csecs and sos on 1.2

1.2 devices expose their data and metadata sizes through the separate
identify command. Make sure that the NVMe LBA format does not override
these values.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: guarantee mw_cunits on read buffer
Javier González [Tue, 9 Oct 2018 11:12:11 +0000 (13:12 +0200)]
lightnvm: pblk: guarantee mw_cunits on read buffer

OCSSD 2.0 defines the amount of data that the host must buffer per chunk
to guarantee reads through the geometry field mw_cunits. This value is
the base that pblk uses to determine the size of its read buffer.
Currently, this size is set to the closest power-of-2 to mw_cunits
times the number of parallel units available to the pblk instance for
each open line (currently one). When an entry (4KB) is put in the
buffer, the L2P table points to it. As the buffer wraps up, the L2P is
updated to point to addresses on the device, thus guaranteeing mw_cunits
at a chunk level.

However, given that pblk cannot write to the device under ws_min
(normally ws_opt), there might be a window in which the buffer starts
wrapping up and updating L2P entries before the mw_cunits value in a
chunk has been surpassed.

In order not to violate the mw_cunits constraint in this case, account
for ws_opt at read buffer creation.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: move ring buffer alloc/free rb init
Javier González [Tue, 9 Oct 2018 11:12:10 +0000 (13:12 +0200)]
lightnvm: pblk: move ring buffer alloc/free rb init

pblk's read/write buffer currently takes a buffer and its size and
builds the metadata around it so it can be used as a ring buffer. This
puts the responsibility of allocating/freeing ring buffer memory on the
ring buffer user. Instead, move it inside the ring buffer helpers
(pblk-rb.c). This simplifies the creation/destruction routines.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: encapsulate rb pointer operations
Javier González [Tue, 9 Oct 2018 11:12:09 +0000 (13:12 +0200)]
lightnvm: pblk: encapsulate rb pointer operations

pblk's read/write buffer is always a power-of-2, thus wrapping up the
buffer can be done with a bit mask. Since this is an implementation
detail internal to the write buffer, make a helper that hides the
pointer increment + wrap, and makes it possible to transparently relax
this assumption in the future.
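
The helper is essentially a masked increment; a sketch of its shape
(assuming rb->nr_entries is the power-of-2 size mentioned above):

    static unsigned int pblk_rb_ptr_wrap(struct pblk_rb *rb, unsigned int p,
                                         unsigned int nr_entries)
    {
            /* valid only while rb->nr_entries is a power of two */
            return (p + nr_entries) & (rb->nr_entries - 1);
    }

Since callers advance pointers only through the helper, moving to a
modulo (and a non-power-of-2 size) later would be a local change.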

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: remove unused function
Javier González [Tue, 9 Oct 2018 11:12:08 +0000 (13:12 +0200)]
lightnvm: pblk: remove unused function

Remove an unused function in pblk-rb.c.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix race on sysfs line state
Javier González [Tue, 9 Oct 2018 11:12:07 +0000 (13:12 +0200)]
lightnvm: pblk: fix race on sysfs line state

pblk exposes a sysfs interface that represents its internal state. Part
of this state is the map bitmap for the current open line, which should
be protected by the line lock to avoid a race when freeing the line
metadata. Currently, it is not.

This patch makes sure that the line state is consistent and NULL
bitmap pointers are not dereferenced.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: add SPDX license tag
Javier González [Tue, 9 Oct 2018 11:12:06 +0000 (13:12 +0200)]
lightnvm: pblk: add SPDX license tag

Add the GPL-2.0 SPDX license tag to all pblk files.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: recover open lines on 2.0 devices
Javier González [Tue, 9 Oct 2018 11:12:05 +0000 (13:12 +0200)]
lightnvm: pblk: recover open lines on 2.0 devices

In the OCSSD 2.0 spec, each chunk reports its write pointer. This means
that pblk does not need to scan open lines to find the write pointer,
but instead, it can retrieve it directly (and verify it).

This patch uses the write pointer on open lines to (i) recover the line
up until the last written lba and (ii) reconstruct the map bitmap and
rest of line metadata so that the line can be used for new data.

Since the 1.2 path in lightnvm core has been re-implemented to populate
the chunk structure and thus recover the write pointer on
initialization, this patch removes 1.2 specific recovery, as the 2.0
path can be reused.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: take write semaphore on metadata
Javier González [Tue, 9 Oct 2018 11:12:04 +0000 (13:12 +0200)]
lightnvm: pblk: take write semaphore on metadata

pblk guarantees write ordering at a chunk level through a per open chunk
semaphore. At this point, since we only have an open I/O stream for both
user and GC data, the semaphore is per parallel unit.

For the metadata I/O that is synchronous, the semaphore is not needed as
ordering is guaranteed. However, if the metadata scheme changes or
multiple streams are open, this guarantee might not be preserved.

This patch makes sure that all writes go through the semaphore, even for
synchronous I/O. This is consistent with pblk's write I/O model. It also
simplifies maintenance since changes in the metadata scheme could cause
ordering issues.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: refactor metadata paths
Javier González [Tue, 9 Oct 2018 11:12:03 +0000 (13:12 +0200)]
lightnvm: pblk: refactor metadata paths

pblk maintains two different metadata paths for smeta and emeta, which
store metadata at the start of the line and at the end of the line,
respectively. Until now, these paths have been common for writing and
retrieving metadata; however, as they diverge, the common code
becomes less clear and unnecessarily complicated.

In preparation for further changes to the metadata write path, this
patch separates the write and read paths for smeta and emeta and
removes the synchronous emeta path, as it is not used anymore (emeta is
scheduled asynchronously to prevent jittering due to internal I/Os).

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: encapsulate rqd dma allocations
Javier González [Tue, 9 Oct 2018 11:12:02 +0000 (13:12 +0200)]
lightnvm: pblk: encapsulate rqd dma allocations

dma allocations for ppa_list and meta_list in rqd are replicated in
several places across the pblk codebase. Make helpers to encapsulate
creation and deletion to simplify the code.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: use internal allocation for chunk log page
Javier González [Tue, 9 Oct 2018 11:12:01 +0000 (13:12 +0200)]
lightnvm: use internal allocation for chunk log page

The lightnvm subsystem provides helpers to retrieve chunk metadata,
where the target needs to provide a buffer to store the metadata. An
implicit assumption is that this buffer is contiguous and can be used to
retrieve the data from the device. If the device exposes too many
chunks, then kmalloc might fail, thus failing instance creation.

This patch removes this assumption by implementing an internal buffer in
the lightnvm subsystem to retrieve chunk metadata. Targets can then
use virtual memory allocations. Since this is a target API change, adapt
pblk accordingly.

Signed-off-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix two sleep-in-atomic-context bugs
Jia-Ju Bai [Tue, 9 Oct 2018 11:12:00 +0000 (13:12 +0200)]
lightnvm: pblk: fix two sleep-in-atomic-context bugs

The driver may sleep while holding a spinlock.

The function call paths (from bottom to top) in Linux-4.16 are:

[FUNC] nvm_dev_dma_alloc(GFP_KERNEL)
drivers/lightnvm/pblk-core.c, 754:
nvm_dev_dma_alloc in pblk_line_submit_smeta_io
drivers/lightnvm/pblk-core.c, 1048:
pblk_line_submit_smeta_io in pblk_line_init_bb
drivers/lightnvm/pblk-core.c, 1434:
pblk_line_init_bb in pblk_line_replace_data
drivers/lightnvm/pblk-recovery.c, 980:
pblk_line_replace_data in pblk_recov_l2p
drivers/lightnvm/pblk-recovery.c, 976:
spin_lock in pblk_recov_l2p

[FUNC] bio_map_kern(GFP_KERNEL)
drivers/lightnvm/pblk-core.c, 762:
bio_map_kern in pblk_line_submit_smeta_io
drivers/lightnvm/pblk-core.c, 1048:
pblk_line_submit_smeta_io in pblk_line_init_bb
drivers/lightnvm/pblk-core.c, 1434:
pblk_line_init_bb in pblk_line_replace_data
drivers/lightnvm/pblk-recovery.c, 980:
pblk_line_replace_data in pblk_recov_l2p
drivers/lightnvm/pblk-recovery.c, 976:
spin_lock in pblk_recov_l2p

To fix these bugs, the call to pblk_line_replace_data()
is moved out of the spinlock protection.

These bugs were found by my static analysis tool, DSAC.
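
Schematically, the fix moves the sleeping call out from under the lock
(a sketch of the shape of the change, not the literal diff):

    spin_lock(&l_mg->free_lock);
    /* only the bookkeeping that genuinely needs the lock stays here */
    spin_unlock(&l_mg->free_lock);

    /* may sleep (GFP_KERNEL dma alloc, bio_map_kern): called unlocked now */
    pblk_line_replace_data(pblk);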

Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix mapping issue on failed writes
Hans Holmberg [Tue, 9 Oct 2018 11:11:59 +0000 (13:11 +0200)]
lightnvm: pblk: fix mapping issue on failed writes

On 1.2 devices, the mapping-out of the remaining sectors in a failed
write's block can result in an infinite loop, stalling the write
pipeline; fix this.

Fixes: 6a3abf5beef6 ("lightnvm: pblk: rework write error recovery path")
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: stop recreating global caches
Hans Holmberg [Tue, 9 Oct 2018 11:11:58 +0000 (13:11 +0200)]
lightnvm: pblk: stop recreating global caches

Pblk should not create a set of global caches every time
a pblk instance is created. The global caches should be
made available only while one or more pblk instances exist.

This patch bundles the global caches together with a kref
keeping track of whether the caches should be available or not.

Also, turn the global pblk lock into a mutex that explicitly
protects the caches (as this was the only purpose of the lock).
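
A sketch of the kref-plus-mutex pattern this describes (the structure
and helper names are illustrative):

    static struct pblk_global_caches {
            struct kref     kref;
            struct mutex    mutex;  /* protects cache creation/destruction */
            /* the shared kmem_cache pointers live here */
    } pblk_caches;

    static int pblk_get_global_caches(void)
    {
            int ret = 0;

            mutex_lock(&pblk_caches.mutex);
            if (kref_get_unless_zero(&pblk_caches.kref))
                    goto out;       /* caches already exist; ref taken */

            ret = pblk_create_global_caches();
            if (!ret)
                    kref_init(&pblk_caches.kref);   /* first user */
    out:
            mutex_unlock(&pblk_caches.mutex);
            return ret;
    }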

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: calculate line pad distance in helper
Javier González [Tue, 9 Oct 2018 11:11:57 +0000 (13:11 +0200)]
lightnvm: pblk: calculate line pad distance in helper

If a line is padded, calculate the pad distance directly in the helper
used for this purpose.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: move ppa transformations to core
Javier González [Tue, 9 Oct 2018 11:11:56 +0000 (13:11 +0200)]
lightnvm: move ppa transformations to core

Continuing the effort of moving 1.2 and 2.0 specific code to core, move
64_to_32 and 32_to_64 ppa helpers from pblk to core.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: add tracing for chunk resets
Hans Holmberg [Tue, 9 Oct 2018 11:11:55 +0000 (13:11 +0200)]
lightnvm: pblk: add tracing for chunk resets

Trace state of chunk resets.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: add trace events for pblk state changes
Hans Holmberg [Tue, 9 Oct 2018 11:11:54 +0000 (13:11 +0200)]
lightnvm: pblk: add trace events for pblk state changes

Add trace events for tracking pblk state changes.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: add trace events for line state changes
Hans Holmberg [Tue, 9 Oct 2018 11:11:53 +0000 (13:11 +0200)]
lightnvm: pblk: add trace events for line state changes

Add trace events for logging line state changes.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: add trace events for chunk states
Hans Holmberg [Tue, 9 Oct 2018 11:11:52 +0000 (13:11 +0200)]
lightnvm: pblk: add trace events for chunk states

Introduce trace points for tracking chunk states in pblk. This is
useful for inspecting the entire state of the drive, and really handy
for both fw and pblk debugging.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: remove debug from pblk_[down/up]_page
Matias Bjørling [Tue, 9 Oct 2018 11:11:51 +0000 (13:11 +0200)]
lightnvm: pblk: remove debug from pblk_[down/up]_page

Remove the debug-only iteration within __pblk_down_page, which then
allows us to reduce the arguments of the functions that call it down
to pblk and the parallel unit, simplifying the callers' logic
considerably.

Also, rename the functions pblk_[down/up]_page to
pblk_[down/up]_chunk, to communicate that they manage the write
pointer of the chunk. Note that they also protect the parallel unit
such that at most one chunk is active per parallel unit.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix write amplification calculation
Hans Holmberg [Tue, 9 Oct 2018 11:11:50 +0000 (13:11 +0200)]
lightnvm: pblk: fix write amplification calculation

When the user data counter exceeds 32 bits, the write amplification
calculation does not provide the right value. Fix this by using
div64_u64 instead of div64.

Fixes: 76758390f83e ("lightnvm: pblk: export write amplification counters to sysfs")
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix up prints in pblk_read_check_rand
Hans Holmberg [Tue, 9 Oct 2018 11:11:49 +0000 (13:11 +0200)]
lightnvm: pblk: fix up prints in pblk_read_check_rand

The prefix when printing ppas in pblk_read_check_rand should be "rnd"
not "seq", so fix this so we can differentiate between lba missmatches
in random and sequential reads. Also change the print order so
we align with pblk_read_check_seq, printing read lba first.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: remove unused parameters in pblk_up_rq
Hans Holmberg [Tue, 9 Oct 2018 11:11:48 +0000 (13:11 +0200)]
lightnvm: pblk: remove unused parameters in pblk_up_rq

The parameters nr_ppas and ppa_list are not used, so remove them.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: allocate line map bitmaps using a mempool
Hans Holmberg [Tue, 9 Oct 2018 11:11:47 +0000 (13:11 +0200)]
lightnvm: pblk: allocate line map bitmaps using a mempool

Line map bitmap allocations are fairly large and can fail. Allocation
failures are fatal to pblk, stopping the write pipeline. To avoid this,
allocate the bitmaps using a mempool instead.

Mempool allocations never fail if called from a process context,
and pblk *should* only allocate map bitmaps in process context,
but keep the failure handling for robustness' sake.
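
A sketch of the mempool setup this implies (the names and the pool size
are assumptions; PBLK_DATA_LINES stands in for however many lines can
hold map bitmaps at once):

    l_mg->bitmap_cache = kmem_cache_create("pblk_lm_bitmap",
                                           lm->sec_bitmap_len, 0, 0, NULL);
    /* preallocate enough bitmaps for the lines that can be in flight */
    l_mg->bitmap_pool = mempool_create_slab_pool(PBLK_DATA_LINES,
                                                 l_mg->bitmap_cache);

    /* later, per line: cannot fail when called from process context */
    bitmap = mempool_alloc(l_mg->bitmap_pool, GFP_KERNEL);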

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: introduce nvm_rq_to_ppa_list
Hans Holmberg [Tue, 9 Oct 2018 11:11:46 +0000 (13:11 +0200)]
lightnvm: introduce nvm_rq_to_ppa_list

There are a number of places in the lightnvm subsystem where the user
iterates over the ppa list. Before iterating, the user must know if it
is a single or multiple LBAs due to vector commands using either the
nvm_rq ->ppa_addr or ->ppa_list fields on command submission, which
leads to open-coding the if/else statement.

Instead of having multiple if/else's, move it into a function that can
be called by its users.

A nice side effect of this cleanup is that this patch fixes up a
bunch of cases where we don't consider the single-ppa case in pblk.
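
The helper itself is essentially a one-liner of this shape (sketch):

    static inline struct ppa_addr *nvm_rq_to_ppa_list(struct nvm_rq *rqd)
    {
            /* vector commands carry >1 ppa in ppa_list; single-ppa
             * commands use the inline ppa_addr field instead */
            return (rqd->nr_ppas > 1) ? rqd->ppa_list : &rqd->ppa_addr;
    }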

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: guarantee emeta on line close
Javier González [Tue, 9 Oct 2018 11:11:45 +0000 (13:11 +0200)]
lightnvm: pblk: guarantee emeta on line close

If a line is recovered from open chunks, the memory structures for
emeta have not necessarily been properly set on line initialization.
When closing a line, make sure that emeta is consistent so that the line
can be recovered on the fast path on next reboot.

Also, remove a couple of empty lines at the end of the function.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: remove unused variable.
Javier González [Tue, 9 Oct 2018 11:11:44 +0000 (13:11 +0200)]
lightnvm: pblk: remove unused variable.

Remove an unused struct ppa_addr variable.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix comment typo
Javier González [Tue, 9 Oct 2018 11:11:43 +0000 (13:11 +0200)]
lightnvm: pblk: fix comment typo

Fix comment typo Decrese -> Decrease

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: improve line helpers
Javier González [Tue, 9 Oct 2018 11:11:42 +0000 (13:11 +0200)]
lightnvm: pblk: improve line helpers

The current helper to obtain a line from a ppa returns the line id,
which requires its users to explicitly retrieve the pointer to the line
with the id.

Make 2 different helpers: one returning the line id and one returning
the line directly.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: add helpers for chunk addresses
Javier González [Tue, 9 Oct 2018 11:11:41 +0000 (13:11 +0200)]
lightnvm: pblk: add helpers for chunk addresses

Implement helpers to go from ppas to a chunk within a line and an
address within a chunk.

These helpers will be used in the patches adding trace support in pblk,
which will be sent in this window.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: refactor put line fn on read completion
Matias Bjørling [Tue, 9 Oct 2018 11:11:40 +0000 (13:11 +0200)]
lightnvm: pblk: refactor put line fn on read completion

The read completion path uses the put_line variable to decide whether
the reference on a line should be released. The function name used for
that is pblk_read_put_rqd_kref, which could lead one to believe that it
is the rqd that is releasing the reference, while it is the line
reference that is put.

Rename and also split the function in two to account for either rqd or
single ppa callers and move it to core, such that it later can be used
in the write path as well.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Heiner Litz <hlitz@ucsc.edu>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: remove size and out of bounds read check
Matias Bjørling [Tue, 9 Oct 2018 11:11:39 +0000 (13:11 +0200)]
lightnvm: pblk: remove size and out of bounds read check

The I/O size and capacity checks are already done by the block layer.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix incorrect min_write_pgs
Matias Bjørling [Tue, 9 Oct 2018 11:11:38 +0000 (13:11 +0200)]
lightnvm: pblk: fix incorrect min_write_pgs

The calculation of pblk->min_write_pgs should only use the optimal
write size attribute provided by the drive; it does not correlate
with the memory page size of the system, which can be smaller or
larger than the reported LBA size.
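
In other words, the fix is expected to reduce the calculation to the
drive's attribute alone, roughly (a sketch; geo->ws_opt is the optimal
write size the drive reports):

    /* before (sketch): scaled by the host page size
     *   pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE);
     * after: use the drive's optimal write size directly */
    pblk->min_write_pgs = geo->ws_opt;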

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: unify vector max req constants
Matias Bjørling [Tue, 9 Oct 2018 11:11:37 +0000 (13:11 +0200)]
lightnvm: pblk: unify vector max req constants

Both NVM_MAX_VLBA and PBLK_MAX_REQ_ADDRS define how many LBAs are
available in a vector command. pblk uses them interchangeably
in its implementation. Use NVM_MAX_VLBA as the main one and remove
usages of PBLK_MAX_REQ_ADDRS.

Also remove the power representation that only has one user, and
instead calculate it at runtime.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: move bad block and chunk state logic to core
Matias Bjørling [Tue, 9 Oct 2018 11:11:36 +0000 (13:11 +0200)]
lightnvm: move bad block and chunk state logic to core

pblk implements two data paths for recovering line state, one for 1.2
and another for 2.0. Instead of having pblk implement these, combine
them in the core to reduce complexity and make them available to other
targets.

The new interface will adhere to the 2.0 chunk definition,
including managing open chunks with an active write pointer. To provide
this interface, a 1.2 device recovers the state of the chunks by
manually detecting if a chunk is either free/open/close/offline, and if
open, scanning the flash pages sequentially to find the next writeable
page. This process takes on average ~10 seconds on a device with 64 dies,
1024 blocks and 60us read access time. The process can be parallelized
but is left out for maintenance simplicity, as the 1.2 specification is
deprecated. For 2.0 devices, the logic is maintained internally in the
drive and retrieved through the 2.0 interface.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix race condition on metadata I/O
Javier González [Tue, 9 Oct 2018 11:11:35 +0000 (13:11 +0200)]
lightnvm: pblk: fix race condition on metadata I/O

In pblk, when a new line is allocated, metadata for the previously
written line is scheduled. This is done through a fixed memory region
that is shared through time and contexts across different lines and
therefore protected by a lock. Unfortunately, this lock does not
properly cover all the metadata used for sharing this memory region,
resulting in a race condition.

This patch fixes this race condition by protecting this metadata
properly.

Fixes: dd2a43437337 ("lightnvm: pblk: sched. metadata on write thread")
Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: move device L2P detection to core
Matias Bjørling [Tue, 9 Oct 2018 11:11:34 +0000 (13:11 +0200)]
lightnvm: move device L2P detection to core

A 1.2 device is able to manage the logical to physical mapping
table internally or leave it to the host.

A target only supports one of those approaches, and therefore must
check on initialization. Move this check to core to avoid having each
target implement the check.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: pblk: fix rqd.error return value in pblk_blk_erase_sync
Matias Bjørling [Tue, 9 Oct 2018 11:11:33 +0000 (13:11 +0200)]
lightnvm: pblk: fix rqd.error return value in pblk_blk_erase_sync

rqd.error is masked by the return value of pblk_submit_io_sync.
The rqd structure is then passed on to the end_io function, which
assumes that any error should lead to a chunk being marked
offline/bad. Since pblk_submit_io_sync can fail before the
command is issued to the device, the error value may not correspond
to a media failure, leading to chunks being prematurely retired.

Also, the pblk_blk_erase_sync function prints an error message in case
the erase fails. Since the caller prints an error message by itself,
remove the error message in this function.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: combine 1.2 and 2.0 command flags
Matias Bjørling [Tue, 9 Oct 2018 11:11:32 +0000 (13:11 +0200)]
lightnvm: combine 1.2 and 2.0 command flags

Add an nvm_set_flags helper to enable the core to appropriately
set the command flags for read/write/erase depending on which version
a drive supports.

The flags arguments can be distilled into the access hint,
scrambling, and program/erase suspend. Replace the access hint with
a "is_seq" parameter. The rest of the flags are dependent on the
command opcode, which is trivial to detect and set.
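
A sketch of what such a helper can look like, following the description
above (the NVM_* flag names are from the lightnvm headers; the exact
body is an assumption):

    static int nvm_set_flags(struct nvm_geo *geo, struct nvm_rq *rqd)
    {
            int flags = 0;

            if (geo->version == NVM_OCSSD_SPEC_20)
                    return 0;       /* 2.0 drives take no 1.2-style hints */

            if (rqd->is_seq)        /* access hint distilled to "is_seq" */
                    flags |= geo->pln_mode >> 1;

            if (rqd->opcode == NVM_OP_PREAD)
                    flags |= (NVM_IO_SCRAMBLE_ENABLE | NVM_IO_SUSPEND);
            else if (rqd->opcode == NVM_OP_PWRITE)
                    flags |= NVM_IO_SCRAMBLE_ENABLE;

            return flags;
    }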

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
lightnvm: remove dependencies on BLK_DEV_NVME and PCI
Matias Bjørling [Tue, 9 Oct 2018 11:11:31 +0000 (13:11 +0200)]
lightnvm: remove dependencies on BLK_DEV_NVME and PCI

No need to force the NVMe device driver to be compiled in if the
lightnvm subsystem is selected. Also no need for PCI to be selected,
as it would be selected by the device driver that hooks into
the subsystem.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
blk-mq: complete req in softirq context in case of single queue
Ming Lei [Fri, 28 Sep 2018 08:42:20 +0000 (16:42 +0800)]
blk-mq: complete req in softirq context in case of single queue

Lots of controllers may have only one irq vector for completing IO
requests. Usually the affinity of that one irq vector is all possible
CPUs; however, on most architectures only one specific CPU ends up
handling the interrupt.

So if all IOs are completed in hardirq context, IO performance
inevitably degrades because of increased irq latency.

This patch tries to address this issue by allowing requests to be
completed in softirq context, like the legacy IO path.

An IOPS improvement of ~13% is observed in the following randread test
on raid0 over virtio-scsi.

mdadm --create --verbose /dev/md0 --level=0 --chunk=1024 --raid-devices=8 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

fio --time_based --name=benchmark --runtime=30 --filename=/dev/md0 --nrfiles=1 --ioengine=libaio --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=32 --rw=randread --blocksize=4k

Cc: Dongli Zhang <dongli.zhang@oracle.com>
Cc: Zach Marano <zmarano@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: panic fix for making cache device
Dongbo Cao [Mon, 8 Oct 2018 12:41:21 +0000 (20:41 +0800)]
bcache: panic fix for making cache device

When the nbuckets of a cache device is smaller than 1024, making the
cache device will trigger a BUG_ON in the kernel; add a condition to
avoid this.

Reported-by: nitroxis <n@nxs.re>
Signed-off-by: Dongbo Cao <cdbdyx@163.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: split combined if-condition code into separate ones
Dongbo Cao [Mon, 8 Oct 2018 12:41:20 +0000 (20:41 +0800)]
bcache: split combined if-condition code into separate ones

Split the combined '||' statements in the if() check, to make the code
easier to debug.

Signed-off-by: Dongbo Cao <cdbdyx@163.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: use MAX_CACHES_PER_SET instead of magic number 8 in __bch_bucket_alloc_set
Shenghui Wang [Mon, 8 Oct 2018 12:41:19 +0000 (20:41 +0800)]
bcache: use MAX_CACHES_PER_SET instead of magic number 8 in __bch_bucket_alloc_set

A cache_set has at most MAX_CACHES_PER_SET caches, and the macro
is used for

    struct cache *cache_by_alloc[MAX_CACHES_PER_SET];

in the definition of struct cache_set.

Use MAX_CACHES_PER_SET instead of the magic number 8 in
__bch_bucket_alloc_set.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: replace hard coded number with BUCKET_GC_GEN_MAX
Coly Li [Mon, 8 Oct 2018 12:41:18 +0000 (20:41 +0800)]
bcache: replace hard coded number with BUCKET_GC_GEN_MAX

In extents.c:bch_extent_bad(), the number 96 is used as a parameter to
call btree_bug_on(). The purpose is to check whether the stale gen
value exceeds BUCKET_GC_GEN_MAX, so it is better to use the macro
BUCKET_GC_GEN_MAX to make the code more understandable.

Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: remove useless parameter of bch_debug_init()
Dongbo Cao [Mon, 8 Oct 2018 12:41:17 +0000 (20:41 +0800)]
bcache: remove useless parameter of bch_debug_init()

Parameter "struct kobject *kobj" in bch_debug_init() is useless,
remove it in this patch.

Signed-off-by: Dongbo Cao <cdbdyx@163.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: remove unused bch_passthrough_cache
Shenghui Wang [Mon, 8 Oct 2018 12:41:16 +0000 (20:41 +0800)]
bcache: remove unused bch_passthrough_cache

struct kmem_cache *bch_passthrough_cache is not used in
bcache code. Remove it.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: recal cached_dev_sectors on detach
Shenghui Wang [Mon, 8 Oct 2018 12:41:15 +0000 (20:41 +0800)]
bcache: recal cached_dev_sectors on detach

Recalculate cached_dev_sectors on cached_dev detach, as is done on
cached_dev attach.

Update cached_dev_sectors before bcache_device_detach is called, as
bcache_device_detach will set bcache_device->c to NULL.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: fix miss key refill->end in writeback
Tang Junhui [Mon, 8 Oct 2018 12:41:14 +0000 (20:41 +0800)]
bcache: fix miss key refill->end in writeback

refill->end records the last key of a writeback pass. For example, the
first time around, keys (1,128K) to (1,1024K) are flushed to the backing
device, but the end key (1,1024K) is not included, because of the code
below:

        if (bkey_cmp(k, refill->end) >= 0) {
                ret = MAP_DONE;
                goto out;
        }

The next time we refill the writeback keybuf, we search for keys
starting from (1,1024K) and get a key bigger than it, so the key
(1,1024K) is missed.

This patch modifies the above code to let the end key be included in
the writeback key buffer.
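
Given that description, the fix is presumably the one-character change
in the comparison (sketch):

        if (bkey_cmp(k, refill->end) > 0) {     /* was: >= 0 */
                ret = MAP_DONE;
                goto out;
        }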

Signed-off-by: Tang Junhui <tang.junhui.linux@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: Populate writeback_rate_minimum attribute
Ben Peddell [Mon, 8 Oct 2018 12:41:13 +0000 (20:41 +0800)]
bcache: Populate writeback_rate_minimum attribute

Somewhere between Michael Lyle's original
"bcache: PI controller for writeback rate V2" patch dated 07 Sep 2017
and commit 1d316e6 ("bcache: implement PI controller for writeback rate"),
the mapping of the writeback_rate_minimum attribute was dropped.

Re-add the missing sysfs writeback_rate_minimum attribute mapping to
"allow the user to specify a minimum rate at which dirty blocks are
retired."

Fixes: 1d316e6 ("bcache: implement PI controller for writeback rate")
Signed-off-by: Ben Peddell <klightspeed@killerwolves.net>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: correct dirty data statistics
Tang Junhui [Mon, 8 Oct 2018 12:41:12 +0000 (20:41 +0800)]
bcache: correct dirty data statistics

When a bcache device is clean, dirty keys may still exist after
journal replay, so we need to count these dirty keys even when the
device is in clean status; otherwise, after writeback, the amount
of dirty data would be incorrect.

Signed-off-by: Tang Junhui <tang.junhui.linux@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: fix typo in code comments of closure_return_with_destructor()
Coly Li [Mon, 8 Oct 2018 12:41:11 +0000 (20:41 +0800)]
bcache: fix typo in code comments of closure_return_with_destructor()

The code comment of closure_return_with_destructor() in closure.h marks
the function name as closure_return(). This patch fixes this typo with
the correct name: closure_return_with_destructor.

Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: fix ioctl in flash device
Tang Junhui [Mon, 8 Oct 2018 12:41:10 +0000 (20:41 +0800)]
bcache: fix ioctl in flash device

When doing an ioctl on a flash device, ioctl_dev() in super.c is
called; we should not get the cached device there, since a flash-only
device has no backing device. This patch moves the dc->io_disable
judgement to cached_dev_ioctl() to make ioctls on flash devices work
correctly.

Fixes: 0f0709e6bfc3c ("bcache: stop bcache device when backing device is offline")
Signed-off-by: Tang Junhui <tang.junhui.linux@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: use REQ_PRIO to indicate bio for metadata
Coly Li [Mon, 8 Oct 2018 12:41:09 +0000 (20:41 +0800)]
bcache: use REQ_PRIO to indicate bio for metadata

In cached_dev_cache_miss() and check_should_bypass(), REQ_META is used
to check whether a bio is for a metadata request. But REQ_META is used
for blktrace; the correct REQ_ flag is REQ_PRIO. This flag means the
bio should have priority over other bios, and it is frequently used to
indicate metadata IO in file system code.

This patch replaces REQ_META with the correct flag, REQ_PRIO.

CC Adam Manzanares because he explained to me what REQ_PRIO is for.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Adam Manzanares <adam.manzanares@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
bcache: trace missed reading by cache_missed
Tang Junhui [Mon, 8 Oct 2018 12:41:08 +0000 (20:41 +0800)]
bcache: trace missed reading by cache_missed

Missed read IOs are identified by s->cache_missed, not s->cache_miss,
so use s->cache_missed in trace_bcache_read() to identify whether the
IO missed or not.

Signed-off-by: Tang Junhui <tang.junhui.linux@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
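
The shape of the fix in the read-completion path (the surrounding call is
assumed from drivers/md/bcache/request.c):

	/* cached_dev_read_done_bh(): use s->cache_missed, not s->cache_miss */
	bch_mark_cache_accesses(s->iop.c, s->d, s->cache_missed, s->iop.bypass);
	trace_bcache_read(s->orig_bio, !s->cache_missed, s->iop.bypass);
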
5 years agobcache: account size of buckets used in uuid write to ca->meta_sectors_written
Shenghui Wang [Mon, 8 Oct 2018 12:41:07 +0000 (20:41 +0800)]
bcache: account size of buckets used in uuid write to ca->meta_sectors_written

UUIDs are considered metadata. __uuid_write() should add the number of
buckets written to disk (in sectors) to ca->meta_sectors_written.
Currently only one bucket is used per uuid write.

Steps to test:
1) create a fresh backing device and a fresh cache device separately.
   The backing device is not attached to any cache set.
2) cd /sys/block/<cache device>/bcache
   cat metadata_written      // record the output value
   cat bucket_size
3) attach the backing device to cache set
4) cat metadata_written
   The output value is almost the same as the value in step 2
   before the change.
   After the change, the value is bigger by about one bucket size.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Reviewed-by: Tang Junhui <tang.junhui.linux@gmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
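
A sketch of the accounting added after the uuid write completes, assuming
PTR_CACHE() resolves the (single) bucket's cache as elsewhere in bcache:

	uuid_io(c, REQ_OP_WRITE, 0, &k.key, &cl);
	closure_sync(&cl);

	/* Only one bucket is used per uuid write; account its size
	 * (in sectors) as metadata written. */
	ca = PTR_CACHE(c, &k.key, 0);
	atomic_long_add(ca->sb.bucket_size, &ca->meta_sectors_written);
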
5 years agoblk-mq-debugfs: Also show requests that have not yet been started
Bart Van Assche [Thu, 4 Oct 2018 17:35:24 +0000 (10:35 -0700)]
blk-mq-debugfs: Also show requests that have not yet been started

When debugging, e.g., the SCSI timeout handler, it is important that
requests that have not yet been started or that have already completed
are also reported through debugfs.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
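
A hedged sketch of the relaxed filter in the debugfs "busy" iterator
callback; the exact pre-patch condition and the callback signature of
that kernel are assumptions:

	static void hctx_show_busy_rq(struct request *rq, void *data, bool reserved)
	{
		const struct show_busy_params *params = data;

		/* Previously this also required the request to have been
		 * started; now every request mapped to this hctx is shown,
		 * whether idle, in flight, or already completed. */
		if (blk_mq_map_queue(rq->q, rq->mq_ctx->cpu) == params->hctx)
			__blk_mq_debugfs_rq_show(params->m,
						 list_entry_rq(&rq->queuelist));
	}
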
5 years agoMerge branch 'nvme-4.20' of git://git.infradead.org/nvme into for-4.20/block
Jens Axboe [Fri, 5 Oct 2018 14:15:12 +0000 (08:15 -0600)]
Merge branch 'nvme-4.20' of git://git.infradead.org/nvme into for-4.20/block

Pull NVMe updates from Christoph:

"A relatively boring merge window:

 - better AEN tracing (Chaitanya)
 - NUMA aware PCIe multipathing (me)
 - RDMA workqueue fixes (Sagi)
 - better bio usage in the target (Sagi)
 - FC rework for target removal (James)
 - better multipath handling of ->queue_rq failures (James)
 - various cleanups (Milan)"

* 'nvme-4.20' of git://git.infradead.org/nvme:
  nvmet-rdma: use a private workqueue for delete
  nvme: take node locality into account when selecting a path
  nvmet: don't split large I/Os unconditionally
  nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O
  nvme-core: add async event trace helper
  nvme_fc: add 'nvme_discovery' sysfs attribute to fc transport device
  nvmet_fc: support target port removal with nvmet layer
  nvme-fc: fix for a minor typos
  nvmet: remove redundant module prefix
  nvme: fix typo in nvme_identify_ns_descs

5 years agonvmet-rdma: use a private workqueue for delete
Sagi Grimberg [Thu, 27 Sep 2018 18:00:31 +0000 (11:00 -0700)]
nvmet-rdma: use a private workqueue for delete

Queue deletion is done asynchronously when the last reference on the
queue is dropped.  Thus, in order to make sure we don't over-allocate
under a connect/disconnect storm, we let queue deletion complete before
making forward progress.

However, given that we flush the system_wq from rdma_cm context which
runs from a workqueue context, we can have a circular locking complaint
[1]. Fix that by using a private workqueue for queue deletion.

[1]:
======================================================
WARNING: possible circular locking dependency detected
4.19.0-rc4-dbg+ #3 Not tainted
------------------------------------------------------
kworker/5:0/39 is trying to acquire lock:
00000000a10b6db9 (&id_priv->handler_mutex){+.+.}, at: rdma_destroy_id+0x6f/0x440 [rdma_cm]

but task is already holding lock:
00000000331b4e2c ((work_completion)(&queue->release_work)){+.+.}, at: process_one_work+0x3ed/0xa20

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 ((work_completion)(&queue->release_work)){+.+.}:
       process_one_work+0x474/0xa20
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #2 ((wq_completion)"events"){+.+.}:
       flush_workqueue+0xf3/0x970
       nvmet_rdma_cm_handler+0x133d/0x1734 [nvmet_rdma]
       cma_ib_req_handler+0x72f/0xf90 [rdma_cm]
       cm_process_work+0x2e/0x110 [ib_cm]
       cm_req_handler+0x135b/0x1c30 [ib_cm]
       cm_work_handler+0x2b7/0x38cd [ib_cm]
       process_one_work+0x4ae/0xa20
nvmet_rdma:nvmet_rdma_cm_handler: nvmet_rdma: disconnected (10): status 0 id 0000000040357082
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30
nvme nvme0: Reconnecting in 10 seconds...

-> #1 (&id_priv->handler_mutex/1){+.+.}:
       __mutex_lock+0xfe/0xbe0
       mutex_lock_nested+0x1b/0x20
       cma_ib_req_handler+0x6aa/0xf90 [rdma_cm]
       cm_process_work+0x2e/0x110 [ib_cm]
       cm_req_handler+0x135b/0x1c30 [ib_cm]
       cm_work_handler+0x2b7/0x38cd [ib_cm]
       process_one_work+0x4ae/0xa20
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

-> #0 (&id_priv->handler_mutex){+.+.}:
       lock_acquire+0xc5/0x200
       __mutex_lock+0xfe/0xbe0
       mutex_lock_nested+0x1b/0x20
       rdma_destroy_id+0x6f/0x440 [rdma_cm]
       nvmet_rdma_release_queue_work+0x8e/0x1b0 [nvmet_rdma]
       process_one_work+0x4ae/0xa20
       worker_thread+0x63/0x5a0
       kthread+0x1cf/0x1f0
       ret_from_fork+0x24/0x30

Fixes: 777dc82395de ("nvmet-rdma: occasionally flush ongoing controller teardown")
Reported-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
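
A sketch of the moving parts; the workqueue name and flags are plausible
assumptions:

	static struct workqueue_struct *nvmet_rdma_delete_wq;

	/* module init: */
	nvmet_rdma_delete_wq = alloc_workqueue("nvmet-rdma-delete-wq",
			WQ_UNBOUND | WQ_MEM_RECLAIM, 0);

	/* queue release work on the private workqueue instead of system_wq: */
	queue_work(nvmet_rdma_delete_wq, &queue->release_work);

	/* ...and flush the private workqueue, never system_wq, from cm context: */
	flush_workqueue(nvmet_rdma_delete_wq);
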
5 years agoblock: Finish renaming REQ_DISCARD into REQ_OP_DISCARD
Bart Van Assche [Wed, 3 Oct 2018 20:56:25 +0000 (13:56 -0700)]
block: Finish renaming REQ_DISCARD into REQ_OP_DISCARD

Some time ago REQ_DISCARD was renamed into REQ_OP_DISCARD. Some comments
and documentation files, however, were not updated. Update these comments
and documentation files. See also commit 4e1b2d52a80d ("block, fs,
drivers: remove REQ_OP compat defs and related code").

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Philipp Reisner <philipp.reisner@linbit.com>
Cc: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agocdrom: fix improper type cast, which can lead to information leak.
Young_X [Wed, 3 Oct 2018 12:54:29 +0000 (12:54 +0000)]
cdrom: fix improper type cast, which can lead to information leak.

There is another cast from unsigned long to int which causes
a bounds check to fail with specially crafted input. The value is
then used as an index in the slot array in cdrom_slot_status().

This issue is similar to CVE-2018-16658 and CVE-2018-10940.

Signed-off-by: Young_X <YangX92@hotmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
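
A sketch of the corrected bounds check in cdrom_ioctl_media_changed();
the exact original cast is in the commit itself, but the point is to
compare in the full width of the unsigned long argument rather than
after truncation to int:

	static int cdrom_ioctl_media_changed(struct cdrom_device_info *cdi,
					     unsigned long arg)
	{
		/* ... capability checks ... */

		/* Compare as unsigned long; a truncating int cast here let
		 * crafted values slip past the check and index the slot
		 * array out of bounds in cdrom_slot_status(). */
		if (arg >= cdi->capacity)
			return -EINVAL;

		/* ... */
	}
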
5 years agopktcdvd: fix fall-through annotation
Gustavo A. R. Silva [Tue, 2 Oct 2018 10:24:40 +0000 (12:24 +0200)]
pktcdvd: fix fall-through annotation

Replace "fallthru" with a proper "fall through" annotation.

This fix is part of the ongoing effort to enable
-Wimplicit-fallthrough.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
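
For context, a fall-through annotation in the style the kernel effort
standardizes on (the state names and helpers here are illustrative, not
pktcdvd's):

	switch (state) {
	case STATE_PREPARE:
		prepare();
		/* fall through */
	case STATE_RUN:
		run();
		break;
	}
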
5 years agonvme: take node locality into account when selecting a path
Christoph Hellwig [Tue, 11 Sep 2018 07:51:29 +0000 (09:51 +0200)]
nvme: take node locality into account when selecting a path

Make current_path an array with an entry for every possible node, and
cache the best path on a per-node basis.  Take the node distance into
account when selecting it.  This is primarily useful for dual-ported PCIe
devices which are connected to PCIe root ports on different sockets.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
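
A sketch of the per-node path cache, assuming the shape of the multipath
head structure and its SRCU protection:

	struct nvme_ns_head {
		/* ... */
		/* one cached "best" path per possible NUMA node: */
		struct nvme_ns __rcu	*current_path[];
	};

	static struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
	{
		int node = numa_node_id();
		struct nvme_ns *ns;

		ns = srcu_dereference(head->current_path[node], &head->srcu);
		if (unlikely(!ns))
			/* picks the closest live path by node_distance() */
			ns = __nvme_find_path(head, node);
		return ns;
	}
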
5 years agonvmet: don't split large I/Os unconditionally
Sagi Grimberg [Fri, 28 Sep 2018 22:40:43 +0000 (15:40 -0700)]
nvmet: don't split large I/Os unconditionally

If we know that the I/O size exceeds our inline bio_vec, there is no
point in starting with it only to split the rest off afterwards;
allocate a properly sized bio from the start. We could in theory reuse
the inline bio and only allocate the bio_vec, but it's really not worth
optimizing for.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
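
A sketch of the resulting choice in the target's read/write path;
NVMET_MAX_INLINE_DATA_LEN is assumed as the name for the inline bio_vec
capacity:

	struct bio *bio;

	if (req->data_len <= NVMET_MAX_INLINE_DATA_LEN) {
		/* small I/O: the inline bio and its inline bvecs suffice */
		bio = &req->b.inline_bio;
		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
	} else {
		/* large I/O: allocate a right-sized bio up front instead
		 * of starting inline and splitting later */
		bio = bio_alloc(GFP_KERNEL, min(req->sg_cnt, BIO_MAX_PAGES));
	}
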
5 years agonvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O
James Smart [Thu, 27 Sep 2018 23:58:54 +0000 (16:58 -0700)]
nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O

When an I/O is rejected by nvmf_check_ready() due to validation of the
controller state, nvmf_fail_nonready_command() will normally return
BLK_STS_RESOURCE to requeue and retry.  However, if the controller is
dying or the I/O is marked for NVMe multipath, the I/O is failed so that
the controller can terminate or so that the I/O can be issued on a
different path.  Unfortunately, as this reject point is before the
transport has accepted the command, blk-mq ends up completing the I/O
and never calls nvme_complete_rq(), which is where multipath may preserve
or re-route the I/O. The end result is that the device user sees an
EIO error.

Example: single path connectivity, controller is under load, and a reset
is induced.  An I/O is received:

  a) while the reset state has been set but the queues have yet to be
     stopped; or
  b) after queues are started (at end of reset) but before the reconnect
     has completed.

The I/O finishes with an EIO status.

This patch makes the following changes:

  - Adds the HOST_PATH_ERROR pathing status from TP4028
  - Modifies the reject point such that the command appears to queue
    successfully, but is actually completed with the new pathing status
    and nvme_complete_rq() is called.
  - nvme_complete_rq() recognizes the new status, avoids resetting the
    controller (a reset was likely already done in order to get this new
    status), and calls the multipath code to clear the current path that
    errored. This allows the next command (retry or new command) to
    select a new path if there is one.

Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
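
A sketch of the modified reject point, assuming the change lands in the
shared fabrics helper nvmf_fail_nonready_command():

	blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
			struct request *rq)
	{
		if (ctrl->state != NVME_CTRL_DELETING &&
		    ctrl->state != NVME_CTRL_DEAD &&
		    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
			return BLK_STS_RESOURCE;	/* normal requeue/retry */

		/* dying controller or multipath I/O: fail through the core
		 * so the multipath code sees the status and can clear the
		 * errored path before the retry */
		nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
		blk_mq_start_request(rq);
		nvme_complete_rq(rq);
		return BLK_STS_OK;	/* to blk-mq, the queueing "succeeded" */
	}
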