path: root/crypto/rmd320.c
author: Tejun Heo <tj@kernel.org> 2010-09-03 11:56:16 +0200
committer: Jens Axboe <jaxboe@fusionio.com> 2010-09-10 12:35:36 +0200
commit: 28e7d1845216538303bb95d679d8fd4de50e2f1a (patch)
tree: 0ef56dc0d7c894657c4ae71a3e8da6e1164fb933 /crypto/rmd320.c
parent: dd831006d5be7f74c3fe7aef82380c51c3637960 (diff)
block: drop barrier ordering by queue draining
Filesystems will take all the responsibility for ordering requests around commit writes and will only indicate how the commit writes themselves should be handled by block layers. This patch drops barrier ordering by queue draining from the block layer. The ordering-by-draining implementation was somewhat invasive to request handling. Notable changes follow.

* Each queue has a 1-bit color which is flipped on each barrier issue. This is used to track whether a given request was issued before the current barrier or not. The REQ_ORDERED_COLOR flag and the coloring implementation in __elv_add_request() are removed.

* Requests which shouldn't be processed yet for draining were stalled by returning -EAGAIN from blk_do_ordered() according to the comparison between blk_ordered_req_seq() and blk_ordered_cur_seq(). This logic is removed.

* The draining completion logic in elv_completed_request() is removed.

* All barrier sequence requests were queued to the request queue and then trickled to the lower layer according to progress, and thus maintaining request order during requeue was necessary. This is replaced by queueing the next request in the barrier sequence only after the current one is complete, from blk_ordered_complete_seq(), which removes the need for multiple proxy requests in struct request_queue and the request sorting logic in the ELEVATOR_INSERT_REQUEUE path of elv_insert().

* As barriers no longer have ordering constraints, there's no need to dump the whole elevator onto the dispatch queue on each barrier. Barriers are inserted at the front instead.

* If other barrier requests come to the front of the dispatch queue while one is already in progress, they are stored in q->pending_barriers and restored to the dispatch queue one-by-one after each barrier completion from blk_ordered_complete_seq() (see the sketch after this message).

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
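The last bullet above describes how concurrent barriers are serialized. The standalone C sketch below models that behaviour only as an illustration: the request and queue structures, the helper functions, and the printouts are invented for the example; only the names q->pending_barriers and blk_ordered_complete_seq() come from the commit message, and this is not the code touched by the commit.

/*
 * Illustrative sketch only -- not the kernel implementation.  While one
 * barrier is in flight, further barriers that reach the front of the
 * dispatch queue are parked on a pending list and restored to the dispatch
 * queue one at a time as each barrier completes.
 */
#include <stdbool.h>
#include <stdio.h>

struct req {
	int id;
	struct req *next;
};

struct queue {
	struct req *dispatch;		/* front of the dispatch queue */
	struct req *pending_barriers;	/* barriers waiting their turn */
	bool barrier_in_flight;
};

/* push a request at the front of a singly linked list */
static void push_front(struct req **list, struct req *rq)
{
	rq->next = *list;
	*list = rq;
}

static struct req *pop_front(struct req **list)
{
	struct req *rq = *list;

	if (rq)
		*list = rq->next;
	return rq;
}

/*
 * A barrier reaching the front of the dispatch queue: issue it if nothing
 * is in flight, otherwise park it (the commit message's q->pending_barriers).
 */
static void issue_barrier(struct queue *q, struct req *rq)
{
	if (q->barrier_in_flight) {
		push_front(&q->pending_barriers, rq);
		printf("barrier %d parked\n", rq->id);
		return;
	}
	q->barrier_in_flight = true;
	printf("barrier %d issued\n", rq->id);
}

/*
 * Completion side: once the current barrier finishes, restore one parked
 * barrier to the dispatch queue (one-by-one, as described above).
 */
static void complete_barrier(struct queue *q)
{
	struct req *next;

	q->barrier_in_flight = false;
	next = pop_front(&q->pending_barriers);
	if (next) {
		push_front(&q->dispatch, next);
		printf("barrier %d restored to dispatch queue\n", next->id);
	}
}

/* issue whatever sits at the front of the dispatch queue */
static void run_queue(struct queue *q)
{
	struct req *rq = pop_front(&q->dispatch);

	if (rq)
		issue_barrier(q, rq);
}

int main(void)
{
	struct queue q = { 0 };
	struct req a = { .id = 1 };
	struct req b = { .id = 2 };

	issue_barrier(&q, &a);	/* issued immediately              */
	issue_barrier(&q, &b);	/* parked: barrier 1 still in flight */
	complete_barrier(&q);	/* barrier 2 restored to dispatch  */
	run_queue(&q);		/* barrier 2 issued                */
	complete_barrier(&q);
	return 0;
}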
Diffstat (limited to 'crypto/rmd320.c')
0 files changed, 0 insertions, 0 deletions