author:    Mel Gorman <mgorman@suse.de>  2012-07-31 16:44:26 -0700
committer: Linus Torvalds <torvalds@linux-foundation.org>  2012-07-31 18:42:46 -0700
commit:    b4b9e3558508980fc0cd161a545ffb55a1f13ee9
tree:      abb2ab54f4b201b1cbdaf181ec16912c3dd889eb  /include
parent:    0614002bb5f7411e61ffa0dfe5be1f2c84df3da3
netvm: set PF_MEMALLOC as appropriate during SKB processing
In order to make sure pfmemalloc packets receive all memory needed to proceed, ensure processing of pfmemalloc SKBs happens under PF_MEMALLOC. This is limited to a subset of protocols that are expected to be used for writing to swap. Taps are not allowed to use PF_MEMALLOC as these are expected to communicate with userspace processes which could be paged out.

[a.p.zijlstra@chello.nl: Ideas taken from various patches]
[jslaby@suse.cz: Lock imbalance fix]

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Neil Brown <neilb@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include')
-rw-r--r--  include/net/sock.h  |  5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 81198632ac2a..43a470d40d76 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -754,8 +754,13 @@ static inline __must_check int sk_add_backlog(struct sock *sk, struct sk_buff *s
 	return 0;
 }
 
+extern int __sk_backlog_rcv(struct sock *sk, struct sk_buff *skb);
+
 static inline int sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
 {
+	if (sk_memalloc_socks() && skb_pfmemalloc(skb))
+		return __sk_backlog_rcv(sk, skb);
+
 	return sk->sk_backlog_rcv(sk, skb);
 }
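The declaration above is only half of the change: the out-of-line helper itself lives in net/core/sock.c and is outside this include-only diff. As a rough, hedged sketch of what that slow path is expected to do per the commit message (run the protocol's backlog receive with PF_MEMALLOC set so allocations made while processing a pfmemalloc skb may dip into the reserves), not the verbatim kernel implementation:

/* Sketch only: the real __sk_backlog_rcv is defined in net/core/sock.c,
 * not shown in this include-only diff.  The idea is to process a
 * pfmemalloc skb with PF_MEMALLOC set on the current task so that any
 * allocations made along the receive path may use the memory reserves.
 */
int __sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
{
	unsigned long pflags = current->flags;
	int ret;

	/* Only SOCK_MEMALLOC sockets should ever see pfmemalloc skbs;
	 * other sockets and taps drop them before queueing. */
	BUG_ON(!sock_flag(sk, SOCK_MEMALLOC));

	current->flags |= PF_MEMALLOC;
	ret = sk->sk_backlog_rcv(sk, skb);

	/* Restore PF_MEMALLOC to its previous state rather than clearing
	 * it unconditionally, in case the task already had it set. */
	current->flags = (current->flags & ~PF_MEMALLOC) | (pflags & PF_MEMALLOC);

	return ret;
}

With this in place, the inline sk_backlog_rcv() wrapper shown in the diff keeps the common case (no memalloc sockets, or a normal skb) on the fast path and only takes the PF_MEMALLOC detour for pfmemalloc skbs on SOCK_MEMALLOC sockets.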