author     Paolo Valente <paolo.valente@unimore.it>  2013-07-10 15:46:08 +0200
committer  David S. Miller <davem@davemloft.net>     2013-07-11 13:01:07 -0700
commit     87f1369d6e2e820c77cf9eac542eed4dcf036f64 (patch)
tree       65cbaa15e9378cc680aa2db5b00d99abdfea1a61 /net
parent     cdbaa0bb26d8116d00be24e6b49043777b382f3a (diff)
pkt_sched: sch_qfq: improve efficiency of make_eligible
In make_eligible, a mask is used to decide which groups must become eligible:
the i-th group becomes eligible only if the i-th bit of the mask (counting from
the right) is set. The mask is computed by left-shifting a 1 by a given number
of places and decrementing the result. The shift is performed on a ULL to avoid
an out-of-range shift when the number of places to shift is greater than 31. On
a 32-bit machine, such a 64-bit shift is more costly than one performed on a
UL. This patch replaces the costly operation with two cheaper branches.
The trick is based on the following fact: with a shift of at least 32 places,
the resulting mask has at least its 32 least significant bits set, whereas the
total number of groups is less than 32. In that case it is therefore enough to
set the 32 least significant bits of the mask with a cheaper ~0UL. Otherwise,
the shift can be safely performed on a UL.
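
The equivalence can be checked in isolation. Below is a minimal userspace
sketch (not kernel code) of the argument above: it assumes, as with the
kernel's 32-bit fls(), that the shift count lies in 0..32, and it uses an
illustrative group count (any value below 32 works); the helper names and the
harness itself are made up for the example.

#include <assert.h>
#include <stdio.h>

#define NUM_GROUPS 24	/* illustrative: total number of groups is below 32 */

/* Original computation: forces a 64-bit shift, costly on a 32-bit machine. */
static unsigned long mask_ull(int n)
{
	return (1ULL << n) - 1;
}

/* Patched computation: two cheap branches, shift stays on an unsigned long. */
static unsigned long mask_branch(int n)
{
	if (n > 31)		/* higher than the number of groups */
		return ~0UL;	/* make all groups eligible */
	return (1UL << n) - 1;
}

int main(void)
{
	unsigned long group_bits = (1UL << NUM_GROUPS) - 1;
	int n;

	/* n plays the role of fls(vslot ^ old_vslot), taken to lie in 0..32. */
	for (n = 0; n <= 32; n++)
		assert((mask_ull(n) & group_bits) ==
		       (mask_branch(n) & group_bits));

	printf("both masks select the same groups for every shift count\n");
	return 0;
}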
Reported-by: David S. Miller <davem@davemloft.net>
Reported-by: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
-rw-r--r--   net/sched/sch_qfq.c   9
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 7c195d972bf0..8d86a8b5522a 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -821,7 +821,14 @@ static void qfq_make_eligible(struct qfq_sched *q)
 	unsigned long old_vslot = q->oldV >> q->min_slot_shift;
 
 	if (vslot != old_vslot) {
-		unsigned long mask = (1ULL << fls(vslot ^ old_vslot)) - 1;
+		unsigned long mask;
+		int last_flip_pos = fls(vslot ^ old_vslot);
+
+		if (last_flip_pos > 31) /* higher than the number of groups */
+			mask = ~0UL;   /* make all groups eligible */
+		else
+			mask = (1UL << last_flip_pos) - 1;
+
 		qfq_move_groups(q, mask, IR, ER);
 		qfq_move_groups(q, mask, IB, EB);
 	}
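
For readability, the helper should read roughly as follows once the hunk above
is applied. Everything except the vslot declaration comes straight from the
diff; that one line sits just above the hunk's context and is reproduced here
as it is expected to appear in the surrounding source.

static void qfq_make_eligible(struct qfq_sched *q)
{
	unsigned long vslot = q->V >> q->min_slot_shift;
	unsigned long old_vslot = q->oldV >> q->min_slot_shift;

	if (vslot != old_vslot) {
		unsigned long mask;
		int last_flip_pos = fls(vslot ^ old_vslot);

		if (last_flip_pos > 31) /* higher than the number of groups */
			mask = ~0UL;   /* make all groups eligible */
		else
			mask = (1UL << last_flip_pos) - 1;

		qfq_move_groups(q, mask, IR, ER);
		qfq_move_groups(q, mask, IB, EB);
	}
}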