Age | Commit message | Author |
|
Move the inode integrity data (iint) management up to the integrity directory
in order to share the iint among the different integrity models.
Changelog:
- don't define MAX_DIGEST_SIZE
- rename several globally visible 'ima_'-prefixed functions, structs,
locks, etc. to 'integrity_'
- replace '20' with SHA1_DIGEST_SIZE
- reflect location change in appropriate Kconfig and Makefiles
- remove unnecessary initialization of iint_initialized to 0
- rebased on current ima_iint.c
- define integrity_iint_store/lock as static
There should be no other functional changes.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Serge Hallyn <serge.hallyn@ubuntu.com>
|
|
Now that i_readcount is maintained by the VFS layer, remove the
imbalance checking in IMA. Cleans up the IMA code nicely.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
|
|
ima_counts_get() updated the readcount and invalidated the PCR,
as necessary. Only update the i_readcount in the VFS layer.
Move the PCR invalidation checks to ima_file_check(), where they
belong.
Maintaining the i_readcount in the VFS layer will allow other
subsystems to use i_readcount.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
|
|
Convert the inode's i_readcount from an unsigned int to an atomic_t.
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
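As a rough sketch (not necessarily the exact patch; the i_readcount_inc()/
i_readcount_dec() helper names and the CONFIG_IMA guard are assumptions for
illustration), the converted counter helpers look something like:

#ifdef CONFIG_IMA
static inline void i_readcount_dec(struct inode *inode)
{
	BUG_ON(!atomic_read(&inode->i_readcount));
	atomic_dec(&inode->i_readcount);
}

static inline void i_readcount_inc(struct inode *inode)
{
	atomic_inc(&inode->i_readcount);
}
#else
static inline void i_readcount_dec(struct inode *inode) { }
static inline void i_readcount_inc(struct inode *inode) { }
#endif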
|
|
Currently for every removed inode IMA must take a global lock and search
the IMA rbtree looking for an associated integrity structure. Instead
we explicitly mark an inode when we add an integrity structure so we
only have to take the global lock and do the removal if it exists.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
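A sketch of the idea, with the flag and lock names assumed for illustration:
mark the inode when an iint is attached, and test the mark on teardown before
touching the global lock.

	/* when an integrity structure is attached to the inode */
	inode->i_flags |= S_IMA;		/* flag name illustrative */

	/* on inode removal */
	void ima_inode_free(struct inode *inode)
	{
		if (!IS_IMA(inode))		/* no iint was ever allocated */
			return;
		spin_lock(&ima_iint_lock);	/* lock name assumed */
		/* ... find and free the associated iint ... */
		spin_unlock(&ima_iint_lock);
	}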
|
|
Since finding a struct ima_iint_cache requires a valid struct inode, and
the struct ima_iint_cache is supposed to have the same lifetime as a
struct inode (technically they die together but don't need to be created
at the same time) we don't have to worry about the ima_iint_cache
outliving or dying before the inode. So the refcnt isn't useful. Just
get rid of it and free the structure when the inode is freed.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
IMA currently allocates an inode integrity structure for every inode in
core. This structure is about 120 bytes long. Most files however
(especially on a system which doesn't make use of IMA) will never need
any of this space. The problem is that if IMA is enabled we need to
know information about the number of readers and the number of writers
for every inode on the box. At the moment we collect that information
in the per inode iint structure and waste the rest of the space. This
patch moves those counters into the struct inode so we can eventually
stop allocating an IMA integrity structure except when absolutely
needed.
This patch does the minimum needed to move the location of the data.
Further cleanups, especially the location of counter updates, may still
be possible.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
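Conceptually the end state is a counter embedded directly in the inode,
something like the following (the CONFIG_IMA guard and field placement are
assumptions for illustration):

struct inode {
	/* ... existing fields ... */
	atomic_t		i_writecount;	/* already maintained by the VFS */
#ifdef CONFIG_IMA
	unsigned int		i_readcount;	/* struct files holding it read-only;
						 * converted to atomic_t by a newer
						 * patch earlier in this log */
#endif
	/* ... */
};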
|
|
IMA tracks the number of struct files which are holding a given inode
readonly and the number which are holding the inode write or r/w. It
needs this information so when a new reader or writer comes in it can
tell if this new file will be able to invalidate results it already made
about existing files.
aka if a task is holding a struct file open RO, IMA measured the file
and recorded those measurements, and then a task opens the file RW: IMA
needs to note in the logs that the old measurement may not be correct.
This is called a "Time of Measure, Time of Use" (ToMToU) issue. The same is
true if a RO file is opened on an inode which has an open writer. We
cannot, with any validity, measure the file in question since it could
be changing.
This patch attempts to use the i_writecount field to track writers. The
i_writecount field actually embeds more information in its value than
IMA needs, but it should work for our purposes and allow us to shrink the
struct inode even more.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
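The check itself boils down to comparing the mode of the new open against the
existing reader/writer counts, roughly as below. This is a sketch only: the
helper name ima_add_violation() and the counter layout (shown with the atomic
in-inode counters the newer entries above converge on) are illustrative.

static void ima_rdwr_violation_check(struct file *file)
{
	struct inode *inode = file->f_dentry->d_inode;

	if (file->f_mode & FMODE_WRITE) {
		/* a writer arrives while measured readers exist: ToMToU */
		if (atomic_read(&inode->i_readcount))
			ima_add_violation(inode, "ToMToU");
	} else {
		/* a reader arrives while someone holds the inode for write */
		if (atomic_read(&inode->i_writecount) > 0)
			ima_add_violation(inode, "open_writers");
	}
}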
|
|
Currently IMA uses 2 longs in struct inode. To save space (and as it
seems impossible to overflow 32 bits) we switch these to unsigned int.
The switch to unsigned does require slightly different checks for
underflow, but it isn't complex.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
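With a signed counter an imbalance showed up as a negative value; with an
unsigned one the check has to happen before the decrement. A sketch of the
pattern (the counter name is a stand-in):

	/* was (signed): if (--readcount < 0) report the imbalance */
	if (unlikely(readcount == 0)) {
		printk(KERN_INFO "%s: readcount imbalance\n", __func__);
		return;
	}
	readcount--;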
|
|
The opencount was used as a debugging aid to make sure that everything
which created a struct file also correctly made the IMA calls. Since we
moved all of that into the VFS this isn't as necessary. We should be
able to get the same amount of debugging out of just the reader and
writer counts.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The IMA code needs to store the number of tasks which have an open fd
granting permission to write a file even when IMA is not in use. It
needs this information in order to be enabled at a later point in time
without losing its integrity guarantees.
At the moment that means we store a little bit of data about every inode
in a cache. We use a radix tree keyed on the inode's memory address.
Dave Chinner pointed out that a radix tree is a terrible data structure
for such a sparse key space. This patch switches to using an rbtree
which should be more efficient.
Bug report from Dave:
"I just noticed that slabtop was reporting an awfully high usage of
radix tree nodes:
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
4200331 2778082 66% 0.55K 144839 29 2317424K radix_tree_node
2321500 2060290 88% 1.00K 72581 32 2322592K xfs_inode
2235648 2069791 92% 0.12K 69864 32 279456K iint_cache
That is, 2.7M radix tree nodes are allocated, and the cache itself is
consuming 2.3GB of RAM. I know that the XFS inode caches are indexed
by radix tree node, but for 2 million cached inodes that would mean a
density of 1 inode per radix tree node, which for a system with 16M
inodes in the filesystems is an impossibly low density. The worst I've
seen in a production system like kernel.org is about 20-25% density,
which would mean about 150-200k radix tree nodes for that many inodes.
So it's not the inode cache.
So I looked up what the iint_cache was. It appears to be used for
storing per-inode IMA information, and uses a radix tree for indexing.
It uses the *address* of the struct inode as the indexing key. That
means the key space is extremely sparse - for XFS the struct inode
addresses are approximately 1000 bytes apart, which means the closest
the radix tree index keys get is ~1000. Which means that there is a
single entry per radix tree leaf node, so the radix tree is using
roughly 550 bytes for every 120-byte structure being cached. For the
above example, it's probably wasting close to 1GB of RAM...."
Reported-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
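Because the tree is keyed on the struct inode address, the lookup is a plain
pointer-compare walk. A sketch of the rbtree variant (structure and global
names assumed for illustration):

static struct ima_iint_cache *__ima_iint_find(struct inode *inode)
{
	struct rb_node *n = ima_iint_tree.rb_node;	/* name assumed */
	struct ima_iint_cache *iint;

	while (n) {
		iint = rb_entry(n, struct ima_iint_cache, rb_node);

		if (inode < iint->inode)
			n = n->rb_left;
		else if (inode > iint->inode)
			n = n->rb_right;
		else
			return iint;	/* exact match on the inode pointer */
	}
	return NULL;
}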
|
|
commit 8262bb85da allocated the inode integrity struct (iint) before any
inodes were created. Only after IMA was initialized in late_initcall were
the counters updated. This patch updates the counters, whether or not IMA
has been initialized, to resolve 'imbalance' messages.
This patch fixes the bug reported in bugzilla 15673: when the i915
driver is built in, the ring_buffer is initialized before IMA, causing the
imbalance message on suspend.
Reported-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Tested-by: Thomas Meyer <thomas@m3y3r.de>
Tested-by: David Safford <safford@watson.ibm.com>
Cc: Stable Kernel <stable@kernel.org>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Of the three uses of kref_set in the kernel:
- one really should be kref_put, as the code is letting go of a
reference;
- two really should be kref_init, because the kref is being
initialised.
This suggests that making kref_set available encourages bad code.
So fix the three uses and remove kref_set completely.
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
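For reference, the distinction the patch enforces, in illustrative form (the
field and release-function names are assumptions):

	/* a brand new object: the count starts at 1 */
	kref_init(&iint->refcount);

	/* letting go of a reference, possibly freeing the object */
	kref_put(&iint->refcount, iint_free);

	/* kref_set(&iint->refcount, 1) is gone: hand-setting the count
	 * invites exactly the misuse being fixed here */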
|
|
include cleanup: Update gfp.h and slab.h includes to prepare for breaking
implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, while for others adding it to an
implementation .h or embedding .c file was more appropriate. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
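In each affected file the change simply makes the dependency explicit, along
these lines (the file path is hypothetical):

--- a/drivers/foo/bar.c
+++ b/drivers/foo/bar.c
 #include <linux/module.h>
+#include <linux/slab.h>	/* kmalloc()/kfree() no longer arrive via percpu.h */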
|
|
As noted by checkpatch.pl, __func__ should be used instead of the
gcc-specific __FUNCTION__.
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: James Morris <jmorris@namei.org>
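The change is mechanical; an illustrative example:

-	printk(KERN_INFO "%s: invalid argument\n", __FUNCTION__);	/* gcc extension */
+	printk(KERN_INFO "%s: invalid argument\n", __func__);		/* standard C99 */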
|
|
If radix_tree_preload() fails in ima_inode_alloc(), we must not call
radix_tree_preload_end(): preemption is only disabled when the preload
succeeds, so it is still enabled here.
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
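A sketch of the corrected error path (the function layout, lock, and tree
names are assumed from the surrounding entries):

int ima_inode_alloc(struct inode *inode)
{
	struct ima_iint_cache *iint;
	int rc;

	iint = kmem_cache_alloc(iint_cache, GFP_NOFS);
	if (!iint)
		return -ENOMEM;

	rc = radix_tree_preload(GFP_NOFS);
	if (rc < 0)
		goto out_free;		/* preload failed: preemption was never
					 * disabled, so no radix_tree_preload_end() */

	spin_lock(&ima_iint_lock);
	rc = radix_tree_insert(&ima_iint_store, (unsigned long)inode, iint);
	spin_unlock(&ima_iint_lock);
	radix_tree_preload_end();	/* balances the successful preload */
	if (!rc)
		return 0;
out_free:
	kmem_cache_free(iint_cache, iint);
	return rc;
}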
|
|
ima wants to create an inode information struct (iint) when inodes are
allocated. This means that at least the part of ima which does this
allocation (the allocation is filled with information later) must be set
up before any inodes are created. To accomplish this we split the ima
initialization routine, placing the kmem cache allocator inside a
security_initcall() function. Since this makes use of radix trees we also
need to make sure radix tree support is initialized before security_initcall().
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
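The split looks roughly like this (a sketch; the cache name matches the
iint_cache seen in the slabtop output above, the constructor is assumed):

static struct kmem_cache *iint_cache __read_mostly;

static int __init ima_iintcache_init(void)
{
	iint_cache = kmem_cache_create("iint_cache",
				       sizeof(struct ima_iint_cache), 0,
				       SLAB_PANIC, init_once);
	return 0;
}
/* early enough that every inode allocation can get an iint */
security_initcall(ima_iintcache_init);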
|
|
ima_inode_free() hides behind a funky #define whose only effect is to
confuse: the function is actually defined as
void ima_iint_delete(struct inode *inode)
yet everything calls ima_inode_free() and nothing calls
ima_iint_delete() directly.
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
iints are supposed to be allocated when an inode is allocated (during
security_inode_alloc()), but we have code which will attempt to allocate
an iint during measurement calls. If we couldn't allocate the iint and we
cared, we should have died during security_inode_alloc(), not made the
code more complex and less efficient.
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
ima_inode_alloc() returns 0 or 1, but the LSM hook expects an errno.
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
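In other words, the failure path should hand back a negative errno rather
than 1; a sketch:

	iint = kmem_cache_alloc(iint_cache, GFP_NOFS);
	if (!iint)
		return -ENOMEM;	/* not "1", which the LSM hook callers
				 * would not recognise as an error code */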
|
|
While running fsstress tests on NFSv4-mounted ext3 and ext4
filesystems, the following call trace was generated on the NFS
server machine.
Replace GFP_KERNEL with GFP_NOFS in ima_iint_insert() to avoid a
potential deadlock.
=================================
[ INFO: inconsistent lock state ]
2.6.31-31.el6.x86_64 #1
---------------------------------
inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
kswapd2/75 [HC0[0]:SC0[0]:HE1:SE1] takes:
(jbd2_handle){+.+.?.}, at: [<ffffffff811edd5e>] jbd2_journal_start+0xfe/0x13f
{RECLAIM_FS-ON-W} state was registered at:
[<ffffffff81091e40>] mark_held_locks+0x65/0x99
[<ffffffff81091f31>] lockdep_trace_alloc+0xbd/0xf5
[<ffffffff81126fdd>] kmem_cache_alloc+0x40/0x185
[<ffffffff812344d7>] ima_iint_insert+0x3d/0xf1
[<ffffffff812345b0>] ima_inode_alloc+0x25/0x44
[<ffffffff811484ac>] inode_init_always+0xec/0x271
[<ffffffff81148682>] alloc_inode+0x51/0xa1
[<ffffffff81148700>] new_inode+0x2e/0x94
[<ffffffff811b2f08>] ext4_new_inode+0xb8/0xdc9
[<ffffffff811be611>] ext4_create+0xcf/0x175
[<ffffffff8113e2cd>] vfs_create+0x82/0xb8
[<ffffffff8113f337>] do_filp_open+0x32c/0x9ee
[<ffffffff811309b9>] do_sys_open+0x6c/0x12c
[<ffffffff81130adc>] sys_open+0x2e/0x44
[<ffffffff81011e42>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
irq event stamp: 90371
hardirqs last enabled at (90371): [<ffffffff8112708d>]
kmem_cache_alloc+0xf0/0x185
hardirqs last disabled at (90370): [<ffffffff81127026>]
kmem_cache_alloc+0x89/0x185
softirqs last enabled at (89492): [<ffffffff81068ecf>]
__do_softirq+0x1bf/0x1eb
softirqs last disabled at (89477): [<ffffffff8101312c>] call_softirq+0x1c/0x30
other info that might help us debug this:
2 locks held by kswapd2/75:
#0: (shrinker_rwsem){++++..}, at: [<ffffffff810f98ba>] shrink_slab+0x44/0x177
#1: (&type->s_umount_key#25){++++..}, at: [<ffffffff811450ba>]
Reported-by: Muni P. Beerakam <mbeeraka@in.ibm.com>
Reported-by: Amit K. Arora <amitarora@in.ibm.com>
Cc: stable@kernel.org
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
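The fix itself is a one-line change to the allocation flags; GFP_NOFS keeps
the allocator from recursing into filesystem reclaim while a journal handle
is held (sketch, assuming the iint slab cache used elsewhere in this log):

-	iint = kmem_cache_alloc(iint_cache, GFP_KERNEL);
+	iint = kmem_cache_alloc(iint_cache, GFP_NOFS);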
|
|
A number of IMA functions that are only used during init are not marked
with __init. Add those annotations so their memory is freed automatically
after boot.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
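An illustrative example of the annotation (the function text then lives in
.init.text and is discarded after boot):

-static int ima_init(void)
+static int __init ima_init(void)
 {
 	...
 }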
|
|
Based on Andrew Morton's comments:
- add missing locks around radix_tree_lookup in ima_iint_insert()
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Morris <jmorris@namei.org>
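I.e. the lookup now runs under the iint lock rather than bare; a sketch with
assumed lock and tree names:

	spin_lock(&ima_iint_lock);
	iint = radix_tree_lookup(&ima_iint_store, (unsigned long)inode);
	spin_unlock(&ima_iint_lock);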
|
|
The number of calls to ima_path_check()/ima_file_free()
should be balanced. An extra call to fput() indicates
the file could have been accessed without first being
measured.
Although f_count is incremented/decremented in places other
than fget/fput, like fget_light/fput_light and get_file, the
current task must already hold a file refcnt. The call to
__fput() is delayed until the refcnt becomes 0, resulting
in ima_file_free() flagging any changes.
- add hook to increment opencount for IPC shared memory (SYSV),
shmat files, and /dev/zero
- moved NULL iint test in opencount_get()
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
IMA provides hardware (TPM) based measurement and attestation for
file measurements. As the Trusted Computing (TPM) model requires,
IMA measures all files before they are accessed in any way (on the
integrity_bprm_check, integrity_path_check and integrity_file_mmap
hooks), and commits the measurements to the TPM. Once added to the
TPM, measurements cannot be removed.
In addition, IMA maintains a list of these file measurements, which
can be used to validate the aggregate value stored in the TPM. The
TPM can sign these measurements, and thus the system can prove, to
itself and to a third party, the system's integrity in a way that
cannot be circumvented by malicious or compromised software.
- alloc ima_template_entry before calling ima_store_template()
- log ima_add_boot_aggregate() failure
- removed unused IMA_TEMPLATE_NAME_LEN
- replaced hard-coded string length with a #define name
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
|