author | Hyeong-Jun Kim <hj514.kim@samsung.com> | 2021-12-10 13:30:12 +0900
---|---|---
committer | Jaegeuk Kim <jaegeuk@kernel.org> | 2021-12-14 11:17:55 -0800
commit | 7377e853967ba45bf409e3b5536624d2cbc99f21 |
tree | c6d4559a1905816d6bb72ecbf549e239f5dea081 |
parent | 19bdba5265624ba6b9d9dd936a0c6ccc167cfe80 |
f2fs: compress: fix potential deadlock of compress file
There is a potential deadlock between the writeback process and a process
performing write_begin() or write_cache_pages() while both are writing the
same compress-enabled file whose data turns out not to be compressible, as
shown below:
[Process A] - doing checkpoint
[Process B]                         [Process C]
f2fs_write_cache_pages()
- lock_page() [all pages in cluster, 0-31]
- f2fs_write_multi_pages()
 - f2fs_write_raw_pages()
  - f2fs_write_single_data_page()
   - f2fs_do_write_data_page()
    - return -EAGAIN [f2fs_trylock_op() failed]
  - unlock_page(page) [e.g., page 0]
                                    - generic_perform_write()
                                     - f2fs_write_begin()
                                      - f2fs_prepare_compress_overwrite()
                                       - prepare_compress_overwrite()
                                        - lock_page() [e.g., page 0]
                                        - lock_page() [e.g., page 1]
  - lock_page(page) [e.g., page 0]
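This is the classic AB-BA lock-ordering problem: Process B keeps holding the
other pages of the cluster (e.g., page 1) while it retries the lock on page 0,
which Process C now holds while in turn waiting for page 1. The following
user-space sketch reproduces the same pattern with POSIX mutexes standing in
for the page locks; the thread functions, mutex names, and the sleep()-based
ordering are illustrative only and are not part of f2fs:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for the page locks of page 0 and page 1 of the cluster. */
static pthread_mutex_t page0 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page1 = PTHREAD_MUTEX_INITIALIZER;

/* "Process B": takes every page lock, drops page 0 after the failed
 * f2fs_trylock_op(), then retries page 0 while still holding page 1. */
static void *writeback(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&page0);
	pthread_mutex_lock(&page1);
	pthread_mutex_unlock(&page0);	/* unlock_page(page) [e.g., page 0] */
	sleep(2);			/* give the writer time to grab page 0 */
	pthread_mutex_lock(&page0);	/* retry: blocks forever */
	printf("writeback: unreachable\n");
	return NULL;
}

/* "Process C": the write_begin() path locks page 0, then wants page 1. */
static void *writer(void *arg)
{
	(void)arg;
	sleep(1);			/* let writeback run first */
	pthread_mutex_lock(&page0);	/* free, writeback dropped it */
	pthread_mutex_lock(&page1);	/* blocks forever: writeback holds it */
	printf("writer: unreachable\n");
	return NULL;
}

int main(void)
{
	pthread_t b, c;

	pthread_create(&b, NULL, writeback, NULL);
	pthread_create(&c, NULL, writer, NULL);
	pthread_join(b, NULL);		/* never returns: AB-BA deadlock */
	pthread_join(c, NULL);
	return 0;
}
```

Compiled with -lpthread, the program never exits: both joins block, mirroring
the hung writeback and writer tasks.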
Since no compression is performed on this path, it is no longer necessary to
hold the locks on every page of the cluster within f2fs_write_raw_pages().
This patch changes f2fs_write_raw_pages() to release all page locks first and
then perform the writes one page at a time, the same way
f2fs_write_cache_pages() handles non-compressed files.
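The shape of that fix can be sketched in the same user-space analogy as above
(pthread mutexes standing in for page locks; write_raw_pages_fixed() and
CLUSTER_SIZE are illustrative names, not f2fs symbols): once compression is
ruled out, drop every page lock and then write each page under its own lock,
so the task never sleeps on one page lock while holding another.

```c
#include <pthread.h>
#include <stdio.h>

#define CLUSTER_SIZE 32		/* pages per compress cluster in this example */

static pthread_mutex_t page[CLUSTER_SIZE];

static void write_one_page(int i)
{
	printf("wrote page %d\n", i);
}

/* Caller holds the lock of every page in the cluster (as the writeback
 * path did before deciding the cluster is not compressible). */
static void write_raw_pages_fixed(void)
{
	int i;

	/* 1. Release all page locks taken for the whole cluster. */
	for (i = 0; i < CLUSTER_SIZE; i++)
		pthread_mutex_unlock(&page[i]);

	/* 2. Write each page under its own lock, the way the
	 *    non-compressed path in f2fs_write_cache_pages() does. */
	for (i = 0; i < CLUSTER_SIZE; i++) {
		pthread_mutex_lock(&page[i]);
		write_one_page(i);
		pthread_mutex_unlock(&page[i]);
	}
}

int main(void)
{
	int i;

	for (i = 0; i < CLUSTER_SIZE; i++) {
		pthread_mutex_init(&page[i], NULL);
		pthread_mutex_lock(&page[i]);	/* hold the whole cluster */
	}
	write_raw_pages_fixed();
	return 0;
}
```

Because every re-acquisition happens with no other page lock held, a
concurrent write_begin() locking pages in ascending order can always make
progress.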
Fixes: 4c8ff7095bef ("f2fs: support data compression")
Signed-off-by: Hyeong-Jun Kim <hj514.kim@samsung.com>
Signed-off-by: Sungjong Seo <sj1557.seo@samsung.com>
Signed-off-by: Youngjin Gil <youngjin.gil@samsung.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>