|
Cleanups based on comments from Sam Ravnborg:
* remove references to the currently unused af_names.h
* add rlim_names.h to clean-files:
* rework cmd_make-XXX to make them more readable by adding comments,
reworking the expressions to put logical components on individual lines,
and keeping lines < 80 characters.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
|
|
In tomoyo_check_open_permission() since 2.6.36, TOMOYO was erroneously
recalculating an already-calculated pathname when checking the allow_rewrite
permission. As a result, memory leaks whenever a file is opened for writing
without the O_APPEND flag. Performance also degrades, because TOMOYO
calculates the pathname regardless of the profile configuration.
This patch fixes the leak and the performance degradation.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
The original ima_must_measure() function based its results on cached
iint information, which required an iint be allocated for all files.
Currently, an iint is allocated only for files in policy. As a result,
for those files in policy, ima_must_measure() is now called twice: once
to determine if the inode is in the measurement policy and, the second
time, to determine if it needs to be measured/re-measured.
The second call to ima_must_measure() unnecessarily checks whether the
file is in policy. As we already know the file is in policy, this
patch removes the second, unnecessary call to ima_must_measure(), removes
the vestigial iint parameter, and just checks the iint directly to determine
whether the inode has been measured or needs to be measured/re-measured.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
|
|
Now that i_readcount is maintained by the VFS layer, remove the
imbalance checking in IMA. Cleans up the IMA code nicely.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
|
|
ima_counts_get() updated the readcount and invalidated the PCR as
necessary. Only update the i_readcount in the VFS layer, and move the
PCR invalidation checks to ima_file_check(), where they belong.
Maintaining the i_readcount in the VFS layer will allow other
subsystems to use i_readcount.
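A minimal userspace sketch of that split (illustrative names, with C11
atomic_int standing in for the kernel's atomic_t; this is not the actual
VFS or IMA code): the VFS side is the only writer of the per-inode read
counter, and interested subsystems only read it.

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for the inode field; the kernel uses atomic_t. */
struct fake_inode {
	atomic_int i_readcount;
};

/* "VFS" side: the only place the counter is updated. */
static void vfs_open_for_read(struct fake_inode *inode)
{
	atomic_fetch_add(&inode->i_readcount, 1);
}

static void vfs_release_read(struct fake_inode *inode)
{
	atomic_fetch_sub(&inode->i_readcount, 1);
}

/* Consumer side: only reads the counter, e.g. to decide whether an
 * open-for-write races with existing readers. */
static int readers_present(struct fake_inode *inode)
{
	return atomic_load(&inode->i_readcount) > 0;
}

int main(void)
{
	struct fake_inode inode = { .i_readcount = 0 };

	vfs_open_for_read(&inode);
	printf("readers present: %d\n", readers_present(&inode));
	vfs_release_read(&inode);
	printf("readers present: %d\n", readers_present(&inode));
	return 0;
}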
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
|
|
Convert the inode's i_readcount from an unsigned int to atomic.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
|
|
The mmap policy enforcement checks the access of the
SMACK64MMAP subject against the current subject incorrectly.
The check as written works correctly only if the access
rules involved have the same access. This is the common
case, so initial testing did not find a problem.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
|
|
Kill the unused macros SMACK_LIST_MAX, MAY_ANY and MAY_ANYWRITE.
v2: Per Casey Schaufler's advice, also remove MAY_ANY.
Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
|
|
The mmap policy enforcement was not properly handling the
interaction between the global and local rule lists.
Instead of going through one and then the other, which
missed the important case where a rule specified that
there should be no access, combine the access limitations
where there is a rule in each list.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Add calls to path-based security hooks into CacheFiles as, unlike inode-based
security, these aren't implicit in the vfs_mkdir() and similar calls.
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Kill unused MAX_AVTAB_HASH_MASK and ebitmap_startbit.
Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
In the embedded world there are often situations
where libraries are updated from a variety of sources,
for a variety of reasons, and with any number of
security characteristics. These differences
might include the privilege required for a given
library-provided interface to function properly, as occurs
from time to time in graphics libraries. There are
also cases where it is important to limit use of
libraries based on the provider of the library, and
a security-aware application may make choices
based on those criteria.
These issues are addressed by providing an additional
Smack label that may optionally be assigned to an object,
the SMACK64MMAP attribute. An mmap operation is allowed
if there is no such attribute.
If there is a SMACK64MMAP attribute, the mmap is permitted
only if a subject with that label has all of the access
permitted to a subject with the current task's label.
Security-aware applications may from time to time
wish to reduce their "privilege" to avoid accidental use
of privilege. One case where this arises is an
environment in which multiple sources provide libraries
that perform the same functions. An application may know
that it should eschew services made available from a
particular vendor, or of a particular version.
In support of this, a secondary list of Smack rules has
been added that is local to the task. This list is
consulted only in the case where the global list has
approved access; it can only further restrict access.
Unlike the global list, if no entry is found on the
local list, access is granted. An application can add
entries to its own list by writing to /smack/load-self.
The changes appear large because they involve refactoring
the list handling to accommodate there being more
than one rule list.
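A hedged sketch of the combination rule (illustrative names and access
bits, not Smack's actual functions): the local list is consulted only
after the global list grants access, and it can only take permissions
away, never add them.

#include <stdio.h>

/* Illustrative access bits, roughly mirroring Smack's rwxa modes. */
#define MAY_READ   0x1
#define MAY_WRITE  0x2
#define MAY_EXEC   0x4
#define MAY_APPEND 0x8

/*
 * may_global: access granted by the global rule list (already checked).
 * have_local: nonzero if the task's local (load-self) list has a rule
 *             for this subject/object pair.
 * may_local:  access allowed by that local rule, if any.
 *
 * If no local entry exists, the global result stands; otherwise the
 * result is the intersection of the two.
 */
static int combined_access(int may_global, int have_local, int may_local)
{
	if (!have_local)
		return may_global;
	return may_global & may_local;
}

int main(void)
{
	/* Global rule grants rwx, local rule restricts to read only. */
	int allowed = combined_access(MAY_READ | MAY_WRITE | MAY_EXEC,
				      1, MAY_READ);

	printf("requesting write: %s\n",
	       (allowed & MAY_WRITE) ? "granted" : "denied");
	return 0;
}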
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
|
|
|
|
Conflicts:
security/smack/smack_lsm.c
Verified and added fix by Stephen Rothwell <sfr@canb.auug.org.au>
Ok'd by Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Merge branch 'vfs-scale-working' of git://git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin
* 'vfs-scale-working' of git://git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin: (57 commits)
fs: scale mntget/mntput
fs: rename vfsmount counter helpers
fs: implement faster dentry memcmp
fs: prefetch inode data in dcache lookup
fs: improve scalability of pseudo filesystems
fs: dcache per-inode inode alias locking
fs: dcache per-bucket dcache hash locking
bit_spinlock: add required includes
kernel: add bl_list
xfs: provide simple rcu-walk ACL implementation
btrfs: provide simple rcu-walk ACL implementation
ext2,3,4: provide simple rcu-walk ACL implementation
fs: provide simple rcu-walk generic_check_acl implementation
fs: provide rcu-walk aware permission i_ops
fs: rcu-walk aware d_revalidate method
fs: cache optimise dentry and inode for rcu-walk
fs: dcache reduce branches in lookup path
fs: dcache remove d_mounted
fs: fs_struct use seqlock
fs: rcu-walk for path lookup
...
|
|
Perform common cases of path lookups without any stores or locking in the
ancestor dentry elements. This is called rcu-walk, as opposed to the current
algorithm which is a refcount based walk, or ref-walk.
This results in far fewer atomic operations on every path element,
significantly improving path lookup performance. It also avoids cacheline
bouncing on common dentries, significantly improving scalability.
The overall design is like this:
* LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
* Take the RCU lock for the entire path walk, starting with the acquiring
of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
not required for dentry persistence.
* synchronize_rcu is called when unregistering a filesystem, so we can
access d_ops and i_ops during rcu-walk.
* Similarly take the vfsmount lock for the entire path walk. So now mnt
refcounts are not required for persistence. Also we are free to perform mount
lookups, and to assume dentry mount points and mount roots are stable up and
down the path.
* Have a per-dentry seqlock to protect the dentry name, parent, and inode,
so we can load this tuple atomically, and also check whether any of its
members have changed.
* Dentry lookups (based on parent, candidate string tuple) recheck the parent
sequence after the child is found in case anything changed in the parent
during the path walk.
* inode is also RCU protected so we can load d_inode and use the inode for
limited things.
* i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
* i_op can be loaded.
When we reach the destination dentry, we lock it, recheck the lookup
sequence, and increment its refcount and mountpoint refcount. RCU and
vfsmount locks are dropped. This is termed "dropping rcu-walk". If the
sequence recheck fails, we can not drop rcu-walk gracefully at the current
point in the lookup, so instead return -ECHILD (for want of a better errno).
This signals the path walking code to re-do the entire lookup with a ref-walk.
Aside from the final dentry, there are other situations that may be
encountered where we cannot continue rcu-walk. In that case, we drop rcu-walk
(ie. take a reference on the last good dentry) and continue with a ref-walk.
Again, if we cannot drop rcu-walk gracefully, we return -ECHILD and do the
whole lookup using ref-walk. But it is very important that we can continue
with ref-walk for most cases, particularly to avoid the overhead of double
lookups, and to gain the scalability advantages on common path elements
(like cwd and root).
The cases where rcu-walk cannot continue are:
* NULL dentry (ie. any uncached path element)
* parent with d_inode->i_op->permission or ACLs
* dentries with d_revalidate
* Following links
In future patches, permission checks and d_revalidate become rcu-walk aware. It
may be possible eventually to make following links rcu-walk aware.
Uncached path elements will always require dropping to ref-walk mode, at the
very least because i_mutex needs to be grabbed, and objects allocated.
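A userspace sketch of the per-dentry sequence-count read described above
(hand-rolled with C11 atomics; the kernel uses its seqlock primitives and
proper barriers/RCU, so the types and names here are purely illustrative):
a writer makes the counter odd while it modifies the fields, and a
lockless reader retries until it sees a stable, even value.

#include <stdatomic.h>
#include <stdio.h>

/* Simplified dentry: just the fields the walk needs to read. */
struct mini_dentry {
	atomic_uint seq;		/* odd while a rename is in progress */
	const char *name;
	struct mini_dentry *parent;
};

/* Lockless read of the (name, parent) tuple, retrying on concurrent
 * modification -- the same idea as rcu-walk's per-dentry seqcount. */
static void read_dentry(struct mini_dentry *d,
			const char **name, struct mini_dentry **parent)
{
	unsigned int start;

	for (;;) {
		start = atomic_load(&d->seq);
		if (start & 1)		/* writer in progress, retry */
			continue;
		*name = d->name;
		*parent = d->parent;
		if (atomic_load(&d->seq) == start)
			break;		/* tuple was read consistently */
	}
}

int main(void)
{
	struct mini_dentry root = { .name = "/", .parent = NULL };
	struct mini_dentry child = { .name = "etc", .parent = &root };
	const char *name;
	struct mini_dentry *parent;

	read_dentry(&child, &name, &parent);
	printf("%s under %s\n", name, parent->name);
	return 0;
}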
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
|
|
dget_locked was a shortcut to avoid the lazy lru manipulation when we already
held dcache_lock (lru manipulation was relatively cheap at that point).
However, now that the lru lock is an innermost one, we never hold it at any
caller, so the lock cost can now be avoided. We already have a well-working
lazy dcache LRU, so it should be fine to defer LRU manipulations to scan time.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
|
|
dcache_lock no longer protects anything. Remove it.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
|
|
Protect d_subdirs and d_child with d_lock, except in filesystems that aren't
using dcache_lock for these anyway (eg. using i_mutex).
Note: if we change the locking rule in future so that ->d_child protection is
provided only with ->d_parent->d_lock, it may allow us to reduce some locking.
But it would be an exception to an otherwise regular locking scheme, so we'd
have to see some good results. Probably not worthwhile.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
|
|
Protect the d_unhashed(dentry) condition with d_lock. This means keeping
the DCACHE_UNHASHED bit in sync with hash manipulations.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1436 commits)
cassini: Use local-mac-address prom property for Cassini MAC address
net: remove the duplicate #ifdef __KERNEL__
net: bridge: check the length of skb after nf_bridge_maybe_copy_header()
netconsole: clarify stopping message
netconsole: don't announce stopping if nothing happened
cnic: Fix the type field in SPQ messages
netfilter: fix export secctx error handling
netfilter: fix the race when initializing nf_ct_expect_hash_rnd
ipv4: IP defragmentation must be ECN aware
net: r6040: Return proper error for r6040_init_one
dcb: use after free in dcb_flushapp()
dcb: unlock on error in dcbnl_ieee_get()
net: ixp4xx_eth: Return proper error for eth_init_one
include/linux/if_ether.h: Add #define ETH_P_LINK_CTL for HPNA and wlan local tunnel
net: add POLLPRI to sock_def_readable()
af_unix: Avoid socket->sk NULL OOPS in stream connect security hooks.
net_sched: pfifo_head_drop problem
mac80211: remove stray extern
mac80211: implement off-channel TX using hw r-o-c offload
mac80211: implement hardware offload for remain-on-channel
...
|
|
unix_release() can asynchronously set socket->sk to NULL, and
it does so without holding the unix_state_lock() on "other"
during stream connects.
However, the reverse mapping, sk->sk_socket, is only transitioned
to NULL under the unix_state_lock().
Therefore make the security hooks follow the reverse mapping instead
of the forward mapping.
Reported-by: Jeremy Fitzhardinge <jeremy@goop.org>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If security_filter_rule_init() doesn't return a rule, then not everything
is as fine as the return code implies.
This bug only occurs when the LSM (eg. SELinux) is disabled at runtime.
Adding an empty LSM rule causes ima_match_rules() to always succeed,
ignoring any remaining rules.
default IMA TCB policy:
# PROC_SUPER_MAGIC
dont_measure fsmagic=0x9fa0
# SYSFS_MAGIC
dont_measure fsmagic=0x62656572
# DEBUGFS_MAGIC
dont_measure fsmagic=0x64626720
# TMPFS_MAGIC
dont_measure fsmagic=0x01021994
# SECURITYFS_MAGIC
dont_measure fsmagic=0x73636673
< LSM specific rule >
dont_measure obj_type=var_log_t
measure func=BPRM_CHECK
measure func=FILE_MMAP mask=MAY_EXEC
measure func=FILE_CHECK mask=MAY_READ uid=0
Thus, without the patch, booting with the parameters 'tcb selinux=0' and
adding the above 'dont_measure obj_type=var_log_t' rule to the default IMA
TCB measurement policy would result in nothing being measured. The patch
prevents the default TCB policy from being replaced.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Cc: James Morris <jmorris@namei.org>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Cc: David Safford <safford@watson.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
net/ipv4/fib_frontend.c
|
|
In construct_alloc_key(), up_write() is called in the error path if
__key_link_begin() fails, but this is incorrect as __key_link_begin() only
returns with the nominated keyring locked if it returns successfully.
Without this patch, you might see the following in dmesg:
=====================================
[ BUG: bad unlock balance detected! ]
-------------------------------------
mount.cifs/5769 is trying to release lock (&key->sem) at:
[<ffffffff81201159>] request_key_and_link+0x263/0x3fc
but there are no more locks to release!
other info that might help us debug this:
3 locks held by mount.cifs/5769:
#0: (&type->s_umount_key#41/1){+.+.+.}, at: [<ffffffff81131321>] sget+0x278/0x3e7
#1: (&ret_buf->session_mutex){+.+.+.}, at: [<ffffffffa0258e59>] cifs_get_smb_ses+0x35a/0x443 [cifs]
#2: (root_key_user.cons_lock){+.+.+.}, at: [<ffffffff81201000>] request_key_and_link+0x10a/0x3fc
stack backtrace:
Pid: 5769, comm: mount.cifs Not tainted 2.6.37-rc6+ #1
Call Trace:
[<ffffffff81201159>] ? request_key_and_link+0x263/0x3fc
[<ffffffff81081601>] print_unlock_inbalance_bug+0xca/0xd5
[<ffffffff81083248>] lock_release_non_nested+0xc1/0x263
[<ffffffff81201159>] ? request_key_and_link+0x263/0x3fc
[<ffffffff81201159>] ? request_key_and_link+0x263/0x3fc
[<ffffffff81083567>] lock_release+0x17d/0x1a4
[<ffffffff81073f45>] up_write+0x23/0x3b
[<ffffffff81201159>] request_key_and_link+0x263/0x3fc
[<ffffffffa026fe9e>] ? cifs_get_spnego_key+0x61/0x21f [cifs]
[<ffffffff812013c5>] request_key+0x41/0x74
[<ffffffffa027003d>] cifs_get_spnego_key+0x200/0x21f [cifs]
[<ffffffffa026e296>] CIFS_SessSetup+0x55d/0x1273 [cifs]
[<ffffffffa02589e1>] cifs_setup_session+0x90/0x1ae [cifs]
[<ffffffffa0258e7e>] cifs_get_smb_ses+0x37f/0x443 [cifs]
[<ffffffffa025a9e3>] cifs_mount+0x1aa1/0x23f3 [cifs]
[<ffffffff8111fd94>] ? alloc_debug_processing+0xdb/0x120
[<ffffffffa027002c>] ? cifs_get_spnego_key+0x1ef/0x21f [cifs]
[<ffffffffa024cc71>] cifs_do_mount+0x165/0x2b3 [cifs]
[<ffffffff81130e72>] vfs_kern_mount+0xaf/0x1dc
[<ffffffff81131007>] do_kern_mount+0x4d/0xef
[<ffffffff811483b9>] do_mount+0x6f4/0x733
[<ffffffff8114861f>] sys_mount+0x88/0xc2
[<ffffffff8100ac42>] system_call_fastpath+0x16/0x1b
Reported-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-and-Tested-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit 2f90b865 added two new netlink message types to the netlink route
socket. SELinux has hooks to define whether netlink messages are allowed to
be sent or received, but it did not know about these two new message
types. By default we allow such actions, so no one likely noticed. This
patch adds the proper definitions and thus proper permissions
enforcement.
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
Cleanup based on David Howells suggestions:
- use static const char arrays instead of #define
- rename init_sdesc to alloc_sdesc
- convert 'unsigned int' definitions to 'size_t'
- revert remaining 'const unsigned int' definitions to 'unsigned int'
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Verify that the hex ASCII datablob length is correct before converting the
IV, encrypted data, and HMAC to binary.
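A self-contained sketch of that kind of check (illustrative helper, not
the encrypted-keys code itself): reject the hex ASCII blob unless its
length is exactly twice the expected binary length and every character is
a hex digit, before converting anything.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Decode 'hex' into 'bin' only if it is well formed and encodes exactly
 * 'binlen' bytes; return 0 on success, -1 otherwise. Validating the
 * length first means a short or corrupt blob is rejected before the
 * IV, ciphertext, or HMAC are converted. */
static int hex2bin_checked(unsigned char *bin, const char *hex, size_t binlen)
{
	size_t i;

	if (strlen(hex) != 2 * binlen)
		return -1;

	for (i = 0; i < binlen; i++) {
		if (!isxdigit((unsigned char)hex[2 * i]) ||
		    !isxdigit((unsigned char)hex[2 * i + 1]))
			return -1;
		sscanf(&hex[2 * i], "%2hhx", &bin[i]);
	}
	return 0;
}

int main(void)
{
	unsigned char iv[4];

	printf("good blob:  %d\n", hex2bin_checked(iv, "deadbeef", sizeof(iv)));
	printf("short blob: %d\n", hex2bin_checked(iv, "deadbe", sizeof(iv)));
	return 0;
}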
Reported-by: David Howells <dhowells@redhat.com>
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Cleanup based on David Howells suggestions:
- replace kzalloc, where possible, with kmalloc
- revert 'const unsigned int' definitions to 'unsigned int'
Signed-off-by: David Safford <safford@watson.ibm.com>
Acked-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Previously not all TSS return codes were tested, as they were all eventually
caught by the TPM. Now all returns are tested and handled immediately.
This patch also fixes memory leaks in error and non-error paths.
Signed-off-by: David Safford <safford@watson.ibm.com>
Acked-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Serge E. Hallyn <serge@hallyn.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
In a situation where Smack access rules allow processes
with multiple labels to write to a directory, it is easy
for the directory to get cluttered with files that the
owner can't deal with: while they could be written to the
directory, a process at the label of the directory can't
write to them. This is generally the desired behavior,
but when it isn't, it is a real issue.
This patch introduces a new attribute SMACK64TRANSMUTE that
instructs Smack to create the file with the label of the directory
under certain circumstances.
A new access mode, "t" for transmute, is made available to
Smack access rules, which are expanded from "rwxa" to "rwxat".
If a file is created in a directory marked as transmutable
and if access was granted to perform the operation by a rule
that included the transmute mode, then the file gets the
Smack label of the directory instead of the Smack label of the
creating process.
Note that this is equivalent to creating an empty file at the
label of the directory and then having the other process write
to it. The transmute scheme requires both that the access rule
allow transmutation and that the directory be explicitly marked.
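A hedged sketch of the label choice at create time (illustrative names
and access bits, not the Smack implementation): the new file takes the
directory's label only when the directory is marked transmutable and the
rule that granted the create carried the "t" mode.

#include <stdbool.h>
#include <stdio.h>

#define MAY_WRITE     0x2
#define MAY_TRANSMUTE 0x10

/*
 * Decide which Smack label a newly created file receives.
 * task_label:    label of the creating process.
 * dir_label:     label of the parent directory.
 * dir_transmute: the directory carries SMACK64TRANSMUTE.
 * rule_access:   access granted by the rule that allowed the create.
 */
static const char *new_file_label(const char *task_label,
				  const char *dir_label,
				  bool dir_transmute,
				  int rule_access)
{
	if (dir_transmute && (rule_access & MAY_TRANSMUTE))
		return dir_label;
	return task_label;
}

int main(void)
{
	/* Transmuting directory + "wt" rule: file gets the directory label. */
	printf("%s\n", new_file_label("App", "Shared", true,
				      MAY_WRITE | MAY_TRANSMUTE));
	/* Same directory, rule without "t": file keeps the task label. */
	printf("%s\n", new_file_label("App", "Shared", true, MAY_WRITE));
	return 0;
}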
Signed-off-by: Jarkko Sakkinen <ext-jarkko.2.sakkinen@nokia.com>
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
|
|
sidtab_context_to_sid takes up a large share of time when creating large
numbers of new inodes (~30-40% in oprofile runs). This patch implements a
cache of 3 entries which is checked before we do a full context_to_sid lookup.
On one system this showed over a x3 improvement in the number of inodes that
could be created per second and around a 20% improvement on another system.
Any time we look up the same context string successively (imagine ls -lZ) we
should hit this cache hot. A cache miss should have a relatively minor effect
on performance next to doing the full table search.
All operations on the cache are done COMPLETELY lockless. We know that all
struct sidtab_node objects created will never be deleted until a new policy is
loaded, thus we never have to worry about dereferencing a stale pointer. Since
we also know that pointer assignment is atomic, we know that the cache will
always have valid pointers. Given this information we implement a FIFO cache
in an array of 3 pointers. Every result (whether a cache hit or table lookup)
will be placed in the 0 spot of the cache and the rest of the entries moved
down one spot. The 3rd entry will be lost.
Races are possible and are even likely to happen. Let's assume that 4 tasks
are hitting sidtab_context_to_sid. The first task checks against the first
entry in the cache and it is a miss. Now let's assume a second task updates
the cache with a new entry. This will push the first entry back to the second
spot. Now the first task might check against the second entry (which it
already checked) and will miss again. Now say some third task updates the
cache and pushes the second entry to the third spot. The first task may check
the third entry (for the third time!) and again have a miss. At which point
it will just do a full table lookup. No big deal!
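A single-threaded userspace sketch of the 3-entry FIFO idea (simplified
types and a stand-in lookup; the real code caches pointers into the sidtab
and is lockless): every result is shifted into slot 0, the oldest entry
falls off the end, and a miss simply falls back to the full search.

#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 3

struct cache_entry {
	const char *context;	/* looked-up key */
	unsigned int sid;	/* cached result */
};

static struct cache_entry cache[CACHE_SLOTS];

/* Stand-in for the full table search the kernel does on a miss. */
static unsigned int full_lookup(const char *context)
{
	return (unsigned int)strlen(context);	/* not a real sid */
}

/* Push the newest result into slot 0, sliding the others down one
 * place; the entry in the last slot is simply dropped. */
static void cache_insert(const char *context, unsigned int sid)
{
	memmove(&cache[1], &cache[0], (CACHE_SLOTS - 1) * sizeof(cache[0]));
	cache[0].context = context;
	cache[0].sid = sid;
}

static unsigned int context_to_sid(const char *context)
{
	unsigned int sid;
	int i;

	for (i = 0; i < CACHE_SLOTS; i++) {
		if (cache[i].context && !strcmp(cache[i].context, context)) {
			sid = cache[i].sid;
			cache_insert(context, sid);	/* keep it hot */
			return sid;
		}
	}

	sid = full_lookup(context);			/* cache miss */
	cache_insert(context, sid);
	return sid;
}

int main(void)
{
	printf("%u\n", context_to_sid("system_u:object_r:tmp_t:s0"));
	printf("%u\n", context_to_sid("system_u:object_r:tmp_t:s0"));
	return 0;
}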
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
selinux_inode_init_security computes transition sids even for filesystems
that use mount point labeling. It shouldn't do that; it should just use
the mount point label, always and no matter what.
This causes 2 problems: 1) it makes file creation slower than it needs to be,
since we calculate the transition sid, and 2) it allows files to be created
with a different label than the mount point!
# id -Z
staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023
# sesearch --type --class file --source sysadm_t --target tmp_t
Found 1 semantic te rules:
type_transition sysadm_t tmp_t : file user_tmp_t;
# mount -o loop,context="system_u:object_r:tmp_t:s0" /tmp/fs /mnt/tmp
# ls -lZ /mnt/tmp
drwx------. root root system_u:object_r:tmp_t:s0 lost+found
# touch /mnt/tmp/file1
# ls -lZ /mnt/tmp
-rw-r--r--. root root staff_u:object_r:user_tmp_t:s0 file1
drwx------. root root system_u:object_r:tmp_t:s0 lost+found
Whoops, we have a mount point labeled filesystem tmp_t with a user_tmp_t
labeled file!
Signed-off-by: Eric Paris <eparis@redhat.com>
Reviewed-by: James Morris <jmorris@namei.org>
|
|
A new Smack security attribute, SMACK64EXEC, defines the label that is
used while the task is running.
Exception: in smack_task_wait() the child task is checked
for write access to the parent task using the label inherited
from the task that forked it.
Fixed issues from the previous submission:
- SMACK64EXEC was not read when SMACK64 was not set.
- the inode security blob was not updated after setting
SMACK64EXEC
- the inode security blob was not updated when removing
SMACK64EXEC
|
|
We duplicate functionality in policydb_index_classes() and
policydb_index_others(). This patch merges those functions just to make it
clear there is nothing special happening here.
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
The sym_val_to_name type array can be quite large, as it grows linearly with
the number of types. With known policies having over 5k types, these
allocations are growing large enough that they are likely to fail. Convert
those to flex_array so no allocation is larger than PAGE_SIZE.
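A hedged userspace sketch of the underlying idea (not the kernel
flex_array API; the names and page size here are made up for
illustration): split one large linear array into page-sized chunks so no
single allocation has to exceed a page.

#include <stdio.h>
#include <stdlib.h>

#define FAKE_PAGE_SIZE 4096
#define CHUNK_ELEMS (FAKE_PAGE_SIZE / sizeof(char *))

/* Two-level "flex array": a table of chunk pointers, each chunk at most
 * one page, so a 5000+ entry sym_val_to_name style table never needs a
 * large contiguous allocation. */
struct chunked_array {
	size_t nr_elems;
	char **chunks[];	/* one pointer per page-sized chunk */
};

static struct chunked_array *chunked_alloc(size_t nr_elems)
{
	size_t nr_chunks = (nr_elems + CHUNK_ELEMS - 1) / CHUNK_ELEMS;
	struct chunked_array *a;
	size_t i;

	a = calloc(1, sizeof(*a) + nr_chunks * sizeof(a->chunks[0]));
	if (!a)
		return NULL;
	a->nr_elems = nr_elems;
	for (i = 0; i < nr_chunks; i++) {
		a->chunks[i] = calloc(CHUNK_ELEMS, sizeof(char *));
		if (!a->chunks[i])
			return NULL;	/* sketch leaks here; real code unwinds */
	}
	return a;
}

static void chunked_put(struct chunked_array *a, size_t idx, char *val)
{
	a->chunks[idx / CHUNK_ELEMS][idx % CHUNK_ELEMS] = val;
}

static char *chunked_get(struct chunked_array *a, size_t idx)
{
	return a->chunks[idx / CHUNK_ELEMS][idx % CHUNK_ELEMS];
}

int main(void)
{
	struct chunked_array *names = chunked_alloc(6000);

	if (!names)
		return 1;
	chunked_put(names, 5123, "user_tmp_t");
	printf("%s\n", chunked_get(names, 5123));
	return 0;
}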
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
In rawhide, type_val_to_struct will allocate 26848 bytes, an order-3
allocation. While this hasn't been seen to fail, it isn't outside the
realm of possibility on systems with severe memory fragmentation. Convert
to flex_array so no allocation will ever be bigger than PAGE_SIZE.
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
selinuxfs carefully uses i_ino to figure out what the inode refers to. The
VFS used to set this value generically and we would reset it to something
usable. After 85fe4025c616, each filesystem sets this value to a default
if needed. Since selinuxfs doesn't use the default value and it can only
lead to problems (I'd rather have 2 inodes with i_ino == 0 than one
pointing to the wrong data), let's just stop setting a default.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: James Morris <jmorris@namei.org>
|
|
security_netlbl_secattr_to_sid is difficult to follow, especially the
return codes. Try to make the function obvious.
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
selinuxfs.c has lots of different standards on how to handle return paths on
error. For the most part transition to
rc=errno
if (failure)
goto out;
[...]
out:
cleanup()
return rc;
Instead of doing cleanup mid function, or having multiple returns or other
options. This doesn't do that for every function, but most of the complex
functions which have cleanup routines on error.
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
selinuxfs.c has lots of different standards on how to handle return paths on
error. For the most part transition to
rc=errno
if (failure)
goto out;
[...]
out:
cleanup()
return rc;
Instead of doing cleanup mid function, or having multiple returns or other
options. This doesn't do that for every function, but most of the complex
functions which have cleanup routines on error.
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
policydb.c has lots of different standards on how to handle return paths on
error. For the most part transition to
rc=errno
if (failure)
goto out;
[...]
out:
cleanup()
return rc;
Instead of doing cleanup mid function, or having multiple returns or other
options. This doesn't do that for every function, but most of the complex
functions which have cleanup routines on error.
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
This patch fixes the linux-next powerpc build errors as reported by
Stephen Rothwell.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Tested-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch addresses a number of long standing issues
with the way Smack treats UNIX domain sockets.
All access control was being done based on the label of
the file system object. This is inconsistent with the
internet domain, in which access is done based on the
IPIN and IPOUT attributes of the socket. As a result
of the inode label policy it was not possible to use
a UDS socket for label-cognizant services, including
dbus and the X11 server.
Support for SCM_PEERSEC on UDS sockets is also provided.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Define a new kernel key-type called 'encrypted'. Encrypted keys are kernel
generated random numbers, which are encrypted/decrypted with a 'trusted'
symmetric key. Encrypted keys are created/encrypted/decrypted in the kernel.
Userspace only ever sees/stores encrypted blobs.
Changelog:
- bug fix: replaced master-key rcu based locking with semaphore
(reported by David Howells)
- Removed memset of crypto_shash_digest() digest output
- Replaced verification of 'key-type:key-desc' using strcspn(), with
one based on string constants.
- Moved documentation to Documentation/keys-trusted-encrypted.txt
- Replace hash with shash (based on comments by David Howells)
- Make lengths/counts size_t where possible (based on comments by David Howells)
Could not convert most lengths, as crypto expects 'unsigned int'
(size_t: on 32 bit is defined as unsigned int, but on 64 bit is unsigned long)
- Add 'const' where possible (based on comments by David Howells)
- allocate derived_buf dynamically to support arbitrary length master key
(fixed by Roberto Sassu)
- wait until late_initcall for crypto libraries to be registered
- cleanup security/Kconfig
- Add missing 'update' keyword (reported/fixed by Roberto Sassu)
- Free epayload on failure to create key (reported/fixed by Roberto Sassu)
- Increase the data size limit (requested by Roberto Sassu)
- Crypto return codes are always 0 on success and negative on failure,
remove unnecessary tests.
- Replaced kzalloc() with kmalloc()
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: David Safford <safford@watson.ibm.com>
Reviewed-by: Roberto Sassu <roberto.sassu@polito.it>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Define a new kernel key-type called 'trusted'. Trusted keys are random
number symmetric keys, generated and RSA-sealed by the TPM. The TPM
only unseals the keys, if the boot PCRs and other criteria match.
Userspace can only ever see encrypted blobs.
Based on suggestions by Jason Gunthorpe, several new options have been
added to support additional usages.
The new options are:
migratable= designates that the key may/may not ever be updated
(resealed under a new key, new pcrinfo or new auth.)
pcrlock=n extends the designated PCR 'n' with a random value,
so that a key sealed to that PCR may not be unsealed
again until after a reboot.
keyhandle= specifies the sealing/unsealing key handle.
keyauth= specifies the sealing/unsealing key auth.
blobauth= specifies the sealed data auth.
Implementation of a kernel reserved locality for trusted keys will be
investigated for a possible future extension.
Changelog:
- Updated and added examples to Documentation/keys-trusted-encrypted.txt
- Moved generic TPM constants to include/linux/tpm_command.h
(David Howells' suggestion.)
- trusted_defined.c: replaced kzalloc with kmalloc, added pcrlock failure
error handling, added const qualifiers where appropriate.
- moved to late_initcall
- updated from hash to shash (suggestion by David Howells)
- reduced worst stack usage (tpm_seal) from 530 to 312 bytes
- moved documentation to Documentation directory (suggestion by David Howells)
- all the other code cleanups suggested by David Howells
- Add pcrlock CAP_SYS_ADMIN dependency (based on comment by Jason Gunthorpe)
- New options: migratable, pcrlock, keyhandle, keyauth, blobauth (based on
discussions with Jason Gunthorpe)
- Free payload on failure to create key (reported/fixed by Roberto Sassu)
- Updated Kconfig and other descriptions (based on Serge Hallyn's suggestion)
- Replaced kzalloc() with kmalloc() (reported by Serge Hallyn)
Signed-off-by: David Safford <safford@watson.ibm.com>
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Privileged syslog operations currently require CAP_SYS_ADMIN. Split
this off into a new CAP_SYSLOG privilege which we can sanely take away
from a container through the capability bounding set.
With this patch, an lxc container can be prevented from messing with
the host's syslog (i.e. dmesg -c).
Changelog: mar 12 2010: add selinux capability2:cap_syslog perm
Changelog: nov 22 2010:
. port to new kernel
. add a WARN_ONCE if userspace isn't using CAP_SYSLOG
Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com>
Acked-by: Andrew G. Morgan <morgan@kernel.org>
Acked-By: Kees Cook <kees.cook@canonical.com>
Cc: James Morris <jmorris@namei.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: "Christopher J. PeBenito" <cpebenito@tresys.com>
Cc: Eric Paris <eparis@parisplace.org>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
The SELinux ip postroute code indicates when policy rejected a packet and
passes the error back up the stack. The compat code does not. This patch
sends the same kind of error back up the stack in the compat code.
Based-on-patch-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
Reviewed-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Some of the SELinux netlink code returns a fatal error when the error might
actually be transient. This patch just silently drops packets on
potentially transient errors but continues to return a permanent error
indicator when the denial was because of policy.
Based-on-comments-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
Reviewed-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The SELinux netfilter hooks just return NF_DROP if they drop a packet. We
want to signal that a drop in this hook is a permanent fatal error and is not
transient. If we do this, the error will be passed back up the stack in some
places and applications will get a faster indication that something went
wrong.
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|