author    Bart Van Assche <bvanassche@acm.org>    2024-03-13 14:42:18 -0700
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>    2024-04-03 15:28:33 +0200
commit    9a31f4b61448fccc4f240a95decb369d141c3360 (patch)
tree      848247d91a124e4059282fc0d4360568482de315 /block
parent    f1d93b2a010c2c55db86c41ef60c452cf7f99ebc (diff)
Revert "block/mq-deadline: use correct way to throttling write requests"
[ Upstream commit 256aab46e31683d76d45ccbedc287b4d3f3e322b ]

The code "max(1U, 3 * (1U << shift) / 4)" comes from the Kyber I/O
scheduler. The Kyber I/O scheduler maintains one internal queue per hwq
and hence derives its async_depth from the number of hwq tags. Using
this approach for the mq-deadline scheduler is wrong since the
mq-deadline scheduler maintains one internal queue for all hwqs
combined. Hence this revert.

Cc: stable@vger.kernel.org
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Cc: Zhiguo Niu <Zhiguo.Niu@unisoc.com>
Fixes: d47f9717e5cf ("block/mq-deadline: use correct way to throttling write requests")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20240313214218.1736147-1-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
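For illustration, the following standalone C sketch contrasts the two
formulas. It is not kernel code; the values shift = 4 and nr_requests =
256 are assumptions picked only to show the magnitude of the difference
between a per-hwq sbitmap-word depth and a queue-wide depth.

#include <stdio.h>

int main(void)
{
	unsigned int shift = 4;          /* assumed sbitmap word shift of one hwq's tag map */
	unsigned long nr_requests = 256; /* assumed scheduler tag pool shared by all hwqs */

	/* Reverted formula: 3/4 of one sbitmap word (2^shift bits).
	 * Appropriate for Kyber, which keeps one internal queue per hwq. */
	unsigned int per_hwq_word = 3 * (1U << shift) / 4;
	if (per_hwq_word < 1)
		per_hwq_word = 1;

	/* Restored formula: 3/4 of the whole request pool. Appropriate for
	 * mq-deadline, which keeps one internal queue for all hwqs combined. */
	unsigned long queue_wide = 3 * nr_requests / 4;
	if (queue_wide < 1)
		queue_wide = 1;

	printf("per-hwq-word async_depth: %u\n", per_hwq_word);
	printf("queue-wide async_depth:   %lu\n", queue_wide);
	return 0;
}

With these assumed values the per-word formula yields 12 while the
queue-wide formula yields 192, which shows how severely the reverted
computation could throttle asynchronous writes against a 256-request
pool.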
Diffstat (limited to 'block')
-rw-r--r-- block/mq-deadline.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index f958e79277b8..02a916ba62ee 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -646,9 +646,8 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
-	unsigned int shift = tags->bitmap_tags.sb.shift;
 
-	dd->async_depth = max(1U, 3 * (1U << shift) / 4);
+	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
 
 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }
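A note on the unchanged last line: after the revert, async_depth again
scales with the scheduler-wide request pool (q->nr_requests) rather than
with a single hwq's sbitmap word. The sbitmap_queue_min_shallow_depth()
call then informs the tag bitmap of the smallest shallow depth that will
be passed to shallow tag allocation, which the sbitmap code uses to size
its wakeup batches correctly.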