| author | KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> | 2014-02-06 12:04:28 -0800 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2014-02-20 11:06:11 -0800 |
| commit | 7e405afc255142ceb5d8cc3d97a7cffac46f0e4e (patch) | |
| tree | 692d37667fd923f8367156bc48fa8454a10cf036 /fs/buffer.c | |
| parent | dce0b4fcf24d431eea39d7675c72a8b830ae6a65 (diff) | |
| download | linux-3.10-7e405afc255142ceb5d8cc3d97a7cffac46f0e4e.tar.gz linux-3.10-7e405afc255142ceb5d8cc3d97a7cffac46f0e4e.tar.bz2 linux-3.10-7e405afc255142ceb5d8cc3d97a7cffac46f0e4e.zip | |
mm: __set_page_dirty uses spin_lock_irqsave instead of spin_lock_irq
commit 227d53b397a32a7614667b3ecaf1d89902fb6c12 upstream.
Using spin_{un}lock_irq is dangerous if the caller has already disabled interrupts.
During aio buffer migration, the following call stack is possible:
aio_migratepage  [interrupts disabled]
  migrate_page_copy
    clear_page_dirty_for_io
      set_page_dirty
        __set_page_dirty_buffers
          __set_page_dirty
            spin_lock_irq
This means the current aio migration path can deadlock: spin_unlock_irq
unconditionally re-enables interrupts while aio_migratepage still holds a lock
that can also be taken from interrupt context. spin_lock_irqsave is the safer
alternative and we should use it.
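For context, here is a minimal sketch of the hazard. All function and lock names below (outer_path_*, callee_*, outer_lock, inner_lock) are hypothetical stand-ins, not kernel symbols; only the spin_lock_* APIs are real. The point it illustrates: spin_unlock_irq() unconditionally re-enables interrupts, so a callee using the plain _irq variants silently breaks a caller that entered with interrupts disabled, whereas the irqsave/irqrestore pair restores the caller's saved state.

```c
/*
 * Sketch only: outer_path_*() plays the role of aio_migratepage(),
 * the callees play the role of __set_page_dirty().
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(outer_lock);	/* think ctx->completion_lock */
static DEFINE_SPINLOCK(inner_lock);	/* think mapping->tree_lock */

/* Pre-patch pattern: assumes interrupts were enabled on entry. */
static void callee_irq(void)
{
	spin_lock_irq(&inner_lock);
	/* ... tag the page dirty ... */
	spin_unlock_irq(&inner_lock);	/* unconditionally re-enables IRQs */
}

/* Post-patch pattern: preserves whatever IRQ state the caller set up. */
static void callee_irqsave(void)
{
	unsigned long flags;

	spin_lock_irqsave(&inner_lock, flags);
	/* ... tag the page dirty ... */
	spin_unlock_irqrestore(&inner_lock, flags);
}

static void outer_path_buggy(void)
{
	unsigned long flags;

	spin_lock_irqsave(&outer_lock, flags);	/* IRQs disabled, lock held */
	callee_irq();		/* BUG: returns with IRQs enabled; an interrupt
				 * handler contending for outer_lock can now
				 * deadlock against us */
	spin_unlock_irqrestore(&outer_lock, flags);
}

static void outer_path_fixed(void)
{
	unsigned long flags;

	spin_lock_irqsave(&outer_lock, flags);	/* IRQs disabled, lock held */
	callee_irqsave();	/* IRQs stay disabled until we restore them */
	spin_unlock_irqrestore(&outer_lock, flags);
}
```

The diff below applies exactly this change to __set_page_dirty().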
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reported-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'fs/buffer.c')
-rw-r--r-- | fs/buffer.c | 6 |
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index d2a4d1bb2d5..75964d73444 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -620,14 +620,16 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
 static void __set_page_dirty(struct page *page,
 		   struct address_space *mapping, int warn)
 {
-	spin_lock_irq(&mapping->tree_lock);
+	unsigned long flags;
+
+	spin_lock_irqsave(&mapping->tree_lock, flags);
 	if (page->mapping) {	/* Race with truncate? */
 		WARN_ON_ONCE(warn && !PageUptodate(page));
 		account_page_dirtied(page, mapping);
 		radix_tree_tag_set(&mapping->page_tree,
 				page_index(page), PAGECACHE_TAG_DIRTY);
 	}
-	spin_unlock_irq(&mapping->tree_lock);
+	spin_unlock_irqrestore(&mapping->tree_lock, flags);
 	__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 }