| author | Arnd Bergmann <arnd@arndb.de> | 2009-08-06 16:02:50 -0700 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2009-08-09 16:13:04 +0200 |
| commit | 8e5b59a2d728e6963b35dba8bb36e0b70267462e (patch) | |
| tree | f7441413cd5d6d58102f09189242c988944f0d8d /include/linux/hardirq.h | |
| parent | bcf08df3b23b3d13bf8c4ad6bd744a6ad30015fb (diff) | |
sched: Add default defines for PREEMPT_ACTIVE
The PREEMPT_ACTIVE setting doesn't actually need to be
arch-specific, so set up a sane default for all arches to
(hopefully) migrate to.
> if we look at linux/hardirq.h, it makes this claim:
> * - bit 28 is the PREEMPT_ACTIVE flag
> if that's true, then why are we letting any arch set this define? A
> quick survey shows that half the arches (11) are using 0x10000000 (bit
> 28) while the other half (10) are using 0x4000000 (bit 26). And then
> there is the ia64 oddity which uses bit 30. The exact value here
> shouldn't really matter across arches, though, should it?
Actually alpha, arm and avr32 also use bit 30 (0x40000000); there are
only five (or eight, depending on how you count) architectures
(blackfin, h8300, m68k, s390 and sparc) using bit 26.
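The hex constants in the discussion above are just `1UL << n` for different n. As a quick illustration (not part of the patch, and the constants below are written out only for this check), a stand-alone program can confirm which bit each per-arch value corresponds to:

```c
#include <stdio.h>

/* The three PREEMPT_ACTIVE values seen across architectures in the
 * survey quoted above, written as plain constants for illustration. */
static const unsigned long arch_values[] = {
	0x04000000UL,	/* bit 26: blackfin, h8300, m68k, s390, sparc */
	0x10000000UL,	/* bit 28: the value the hardirq.h comment documents */
	0x40000000UL,	/* bit 30: alpha, arm, avr32, ia64 */
};

int main(void)
{
	for (unsigned i = 0; i < sizeof(arch_values) / sizeof(arch_values[0]); i++)
		printf("0x%08lx is bit %d\n", arch_values[i],
		       __builtin_ctzl(arch_values[i]));	/* count trailing zeros = bit index */
	return 0;
}
```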
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include/linux/hardirq.h')
-rw-r--r-- | include/linux/hardirq.h | 6 |
1 file changed, 6 insertions, 0 deletions
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index 8246c697863..0d885fd7511 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -64,6 +64,12 @@
 #define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT)
 #define NMI_OFFSET (1UL << NMI_SHIFT)
 
+#ifndef PREEMPT_ACTIVE
+#define PREEMPT_ACTIVE_BITS 1
+#define PREEMPT_ACTIVE_SHIFT (NMI_SHIFT + NMI_BITS)
+#define PREEMPT_ACTIVE (__IRQ_MASK(PREEMPT_ACTIVE_BITS) << PREEMPT_ACTIVE_SHIFT)
+#endif
+
 #if PREEMPT_ACTIVE < (1 << (NMI_SHIFT + NMI_BITS))
 #error PREEMPT_ACTIVE is too low!
 #endif
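For reference, a minimal user-space sketch of what the new fallback works out to, assuming the preempt_count field widths linux/hardirq.h used around this time (PREEMPT_BITS = 8, SOFTIRQ_BITS = 8, HARDIRQ_BITS = 10, NMI_BITS = 1 — these widths are assumptions here and come from the rest of the header in the real kernel, where they can vary by configuration). The fallback and the #error guard are copied from the patch; __IRQ_MASK is a local copy of the helper hardirq.h provides.

```c
#include <stdio.h>

/* Assumed field widths; in the kernel these come from linux/hardirq.h. */
#define PREEMPT_BITS	8
#define SOFTIRQ_BITS	8
#define HARDIRQ_BITS	10
#define NMI_BITS	1

#define PREEMPT_SHIFT	0
#define SOFTIRQ_SHIFT	(PREEMPT_SHIFT + PREEMPT_BITS)
#define HARDIRQ_SHIFT	(SOFTIRQ_SHIFT + SOFTIRQ_BITS)
#define NMI_SHIFT	(HARDIRQ_SHIFT + HARDIRQ_BITS)

/* Local copy of the hardirq.h helper: a mask of x low bits. */
#define __IRQ_MASK(x)	((1UL << (x)) - 1)

/* The fallback added by this patch: one flag bit directly above NMI. */
#ifndef PREEMPT_ACTIVE
#define PREEMPT_ACTIVE_BITS	1
#define PREEMPT_ACTIVE_SHIFT	(NMI_SHIFT + NMI_BITS)
#define PREEMPT_ACTIVE	(__IRQ_MASK(PREEMPT_ACTIVE_BITS) << PREEMPT_ACTIVE_SHIFT)
#endif

/* The sanity check added by this patch: the flag must not overlap NMI. */
#if PREEMPT_ACTIVE < (1 << (NMI_SHIFT + NMI_BITS))
#error PREEMPT_ACTIVE is too low!
#endif

int main(void)
{
	printf("PREEMPT_ACTIVE default = 0x%08lx (bit %d)\n",
	       (unsigned long)PREEMPT_ACTIVE, NMI_SHIFT + NMI_BITS);
	return 0;
}
```

Under these assumed widths the fallback lands on bit 27, directly above the NMI field, so the new #error guard is satisfied; an architecture that still provides its own PREEMPT_ACTIVE skips the #ifndef block but must pass the same check.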