author     Eric Dumazet <dada1@cosmosbay.com>       2009-01-26 21:35:35 -0800
committer  David S. Miller <davem@davemloft.net>    2009-01-26 21:35:35 -0800
commit     98322f22eca889478045cf896b572250d03dc45f
tree       22a06e97ece02db900f7d4f496639582b828a4ee
parent     8527bec548e01a29c6d1928d20d6d3be71861482
udp: optimize bind(0) if many ports are in use
commit 9088c5609584684149f3fb5b065aa7f18dcb03ff
(udp: Improve port randomization) introduced a regression in the UDP bind() syscall
for a null port (requesting a random port) when many ports are already in use.
This is because we perform about 28000 scans of very long hash chains (220 sockets per chain),
with many spin_lock_bh()/spin_unlock_bh() calls.
Fix this by using a bitmap (64 bytes for the current value of UDP_HTABLE_SIZE)
so that each chain is scanned at most once.
Instead of 250 ms per bind() call, the time after the patch is 2.9 ms.
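
For illustration, here is a minimal user-space sketch of the technique, not the
kernel patch itself; the names (sock_stub, udp_chain, pick_port_bitmap) and the
simplified hash (port & (HTABLE_SIZE - 1)) are invented for the example.  With
UDP_HTABLE_SIZE = 128, each chain covers 65536 / 128 = 512 candidate ports, so
the per-chain bitmap is 512 bits, i.e. the 64 bytes mentioned above.  The chain
is walked once to mark in-use ports, and free candidates are then found by bit
tests:

/*
 * Illustration only -- a simplified user-space model of the bitmap idea,
 * not the kernel patch.  sock_stub, udp_chain and pick_port_bitmap are
 * made-up names; the hash is reduced to port & (HTABLE_SIZE - 1).
 */
#include <stdio.h>
#include <string.h>

#define HTABLE_SIZE     128                     /* UDP_HTABLE_SIZE at the time */
#define PORTS_PER_CHAIN (65536 / HTABLE_SIZE)   /* 512 ports -> 64-byte bitmap */

struct sock_stub {
        unsigned short num;             /* port the socket is bound to */
        struct sock_stub *next;         /* next socket on the same hash chain */
};

static struct sock_stub *udp_chain[HTABLE_SIZE];

/*
 * Find a free port on the chain that 'first' hashes to.  The chain is
 * walked exactly once to fill the bitmap (the real code does this under
 * the chain spinlock); candidate ports are then tested by bit lookups
 * instead of repeated chain walks.
 */
static int pick_port_bitmap(unsigned short first)
{
        unsigned char bitmap[PORTS_PER_CHAIN / 8];
        unsigned int hash = first & (HTABLE_SIZE - 1);
        unsigned int start = first / HTABLE_SIZE;
        struct sock_stub *sk;
        unsigned int i;

        memset(bitmap, 0, sizeof(bitmap));

        /* Single pass over the (possibly very long) chain. */
        for (sk = udp_chain[hash]; sk; sk = sk->next) {
                unsigned int slot = sk->num / HTABLE_SIZE;

                bitmap[slot / 8] |= 1u << (slot % 8);
        }

        /* Ports on this chain are hash, hash + HTABLE_SIZE, hash + 2*HTABLE_SIZE, ... */
        for (i = 0; i < PORTS_PER_CHAIN; i++) {
                unsigned int slot = (start + i) % PORTS_PER_CHAIN;
                unsigned int port = slot * HTABLE_SIZE + hash;

                if (port && !(bitmap[slot / 8] & (1u << (slot % 8))))
                        return (int)port;
        }
        return -1;                      /* all ports on this chain are taken */
}

int main(void)
{
        static struct sock_stub busy = { .num = 1025, .next = NULL };

        /* Pretend port 1025 is already bound, then ask for a port near it. */
        udp_chain[1025 & (HTABLE_SIZE - 1)] = &busy;
        printf("free port: %d\n", pick_port_bitmap(1025));
        return 0;
}

The real patch of course keeps the locking and recheck details of the in-kernel
port selection; the point of the sketch is only that a bind(0) attempt walks a
hash chain once, instead of once per candidate port.
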
Based on a report from Vitaly Mayatskikh
Reported-by: Vitaly Mayatskikh <v.mayatskih@gmail.com>
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Tested-by: Vitaly Mayatskikh <v.mayatskih@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>