That's a more declarative interface.
Signed-off-by: Ran Benita <ran234@gmail.com>
Signed-off-by: Ran Benita <ran234@gmail.com>
Signed-off-by: Ran Benita <ran234@gmail.com>
size_t is too large; if we ever need it, that's the least of our
problems. Besides, when we roll our own (e.g. in keymap.h) it's already
unsigned int. Instead, add an emergency overflow check. So, why do it?
- It plays nicer with all the other uint32_t's and unsigned int's (no
extensions, etc.).
- Reduces keymap memory usage by 5% or so as a bonus.
Signed-off-by: Ran Benita <ran234@gmail.com>
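A minimal sketch of what such an emergency overflow check can look like when counts are tracked as unsigned int; the function name and shape here are illustrative, not lifted from the tree:

    #include <limits.h>
    #include <stdlib.h>

    /* Illustrative only: grow a buffer whose element count is an unsigned
     * int, bailing out if the request would overflow the count type. */
    static void *
    grow_buffer(void *ptr, unsigned int nmemb, size_t elem_size)
    {
        /* The "emergency" check: nmemb * elem_size must not wrap. */
        if (elem_size != 0 && nmemb > UINT_MAX / elem_size)
            abort();

        return realloc(ptr, (size_t) nmemb * elem_size);
    }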
We have quite diverged from the upstream file, so let's make it at least
easier to look at. Remove some unused macros and rename some for
consistency.
Signed-off-by: Ran Benita <ran234@gmail.com>
clang doesn't like the use of typeof with our default flags, so just
don't use it.
Signed-off-by: Ran Benita <ran234@gmail.com>
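For context, the kind of macro in question is the GNU statement-expression/typeof style; the extension-free form below avoids typeof at the cost of evaluating its arguments twice. The MAX example is an assumption about which macro was involved, not a quote from the tree:

    /* typeof-based form (GNU extension) that clang objects to under the
     * default flags:
     *   #define MAX(a, b) \
     *       ({ typeof(a) _a = (a); typeof(b) _b = (b); _a > _b ? _a : _b; })
     * Plain C form, no extension, but evaluates a and b twice: */
    #define MAX(a, b) ((a) > (b) ? (a) : (b))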
Before, the groups array in KeyInfo was a static array of size XKB_NUM_GROUPS.
The previous cleanups made this transition a bit easier. This is a
first step for removing the XKB_NUM_GROUPS hardcoded limit; but for now
we still check that the groups are < XKB_NUM_GROUPS (e.g. in
ResolveGroup and GetGroupIndex) until the keymap, etc. is worked out as
well.
This also makes us alloc quite a bit less (this is just rulescomp):
Before:
==51999== total heap usage: 291,474 allocs, 291,474 frees, 21,458,334 bytes allocated
After:
==31394== total heap usage: 293,595 allocs, 293,595 frees, 18,150,110 bytes allocated
This is because most rmlvo's don't use the full 4 layouts that KeyInfo
had always alloced statically before.
Signed-off-by: Ran Benita <ran234@gmail.com>
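Roughly, the shape of the change is a fixed per-key array becoming a dynamic one; the type and field names below are illustrative stand-ins, not the tree's actual definitions:

    #define XKB_NUM_GROUPS 4

    /* Minimal stand-in for the darray type used in the tree. */
    #define darray(type) \
        struct { type *item; unsigned int size; unsigned int alloc; }

    struct group_info { int dummy; };

    /* Before: every KeyInfo statically reserved XKB_NUM_GROUPS group
     * slots, whether the keymap used them or not. */
    struct key_info_before {
        struct group_info groups[XKB_NUM_GROUPS];
    };

    /* After: only as many entries as the key actually defines get
     * allocated, which is why a typical one- or two-layout setup ends up
     * allocating fewer bytes overall. */
    struct key_info_after {
        darray(struct group_info) groups;
    };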
Signed-off-by: Ran Benita <ran234@gmail.com>
This way we don't need to look up the key every time. We now only deal
with keycodes in the public API and in keycodes.c.
Also adds an xkb_foreach_key macro, which is used a lot.
Signed-off-by: Ran Benita <ran234@gmail.com>
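A hedged sketch of what such an iteration macro can look like, assuming the keymap keeps its keys in one contiguous array; the keys/num_keys field names are assumptions for illustration, and the real macro may be built on the darray iteration helpers instead:

    #define xkb_foreach_key(iter, keymap) \
        for ((iter) = (keymap)->keys; \
             (iter) < (keymap)->keys + (keymap)->num_keys; \
             (iter)++)

    /* Typical use:
     *   struct xkb_key *key;
     *   xkb_foreach_key(key, keymap)
     *       process(key);
     */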
The .uncrustify.cfg is committed for future reference as well, but I had to
manually fix up a few things: it really likes justifying struct initialisers.
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
- Make darray_free also initialize the array back to an empty state, and
stop worrying about it everywhere.
- Add darray_mem, to access the underlying memory, which we previously did
manually using &darray_item(arr, 0). This makes it a bit clearer when we
actually mean to take the address of a specific item.
- Add darray_copy, to make a deep copy of a darray.
- Add darray_same, to test whether two darrays have the same underlying
memory (e.g. if the struct itself was value copied). This should be used
where previously two arrays were compared for pointer equality.
Signed-off-by: Ran Benita <ran234@gmail.com>
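A rough sketch of how these helpers can look on top of a CCAN-style darray with item/size/alloc fields; the bodies are illustrative, not the exact macros from darray.h:

    #include <stdlib.h>
    #include <string.h>

    #define darray(type) \
        struct { type *item; unsigned int size; unsigned int alloc; }
    #define darray_init(arr) \
        ((arr).item = NULL, (arr).size = 0, (arr).alloc = 0)

    /* darray_free also puts the array back into the empty state, so
     * callers no longer need to re-initialize it afterwards. */
    #define darray_free(arr) \
        do { free((arr).item); darray_init(arr); } while (0)

    /* Explicit access to the underlying memory, instead of the old
     * &darray_item(arr, 0) idiom. */
    #define darray_mem(arr, i) ((arr).item + (i))

    /* Deep copy (sketch, assumes 'to' was initialized): give 'to' its own
     * buffer holding the same contents as 'from'. */
    #define darray_copy(to, from) \
        do { \
            darray_free(to); \
            if ((from).size) { \
                (to).item = malloc((from).size * sizeof(*(from).item)); \
                memcpy((to).item, (from).item, \
                       (from).size * sizeof(*(from).item)); \
                (to).size = (to).alloc = (from).size; \
            } \
        } while (0)

    /* True when two darrays share the same underlying buffer, e.g. after
     * a value copy of the containing struct. */
    #define darray_same(arr1, arr2) ((arr1).item == (arr2).item)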
Here are some quick numbers from valgrind, running rulescomp only with a
simple, common "us,de" rule set:
before darray: cb047bb
total heap usage: 44,924 allocs, 44,924 frees, 3,162,342 bytes allocated
after darray: c87468e
total heap usage: 52,670 allocs, 52,670 frees, 2,844,517 bytes allocated
tweaking specific initial allocation sizes:
total heap usage: 52,652 allocs, 52,652 frees, 2,841,814 bytes allocated
changing initial alloc = 2 globally
total heap usage: 47,802 allocs, 47,802 frees, 2,833,614 bytes allocated
changing initial alloc = 3 globally
total heap usage: 47,346 allocs, 47,346 frees, 3,307,110 bytes allocated
changing initial alloc = 4 globally
total heap usage: 44,643 allocs, 44,643 frees, 2,853,646 bytes allocated
[ Changing the geometric progression constant from 2 only made things
worse. I tried the golden ratio - not so golden :) ]
The last one is obviously the best, so it was chosen, with the specific
tweaks thrown in as well (these were there before but don't make much
difference). Overall it seems to do better than the previous manual
allocations, which is a bit surprising.
Signed-off-by: Ran Benita <ran234@gmail.com>
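In sketch form, the policy those numbers point at is an initial allocation of 4 elements with plain doubling from there; the helper below is illustrative, not the darray.h internals:

    /* Illustrative growth policy: start at 4 elements and double until
     * the requested count fits.  (A geometric constant of 2 measured
     * better than the alternatives tried above.) */
    static unsigned int
    next_alloc_size(unsigned int current, unsigned int needed)
    {
        unsigned int alloc = current ? current : 4;

        while (alloc < needed)
            alloc *= 2;

        return alloc;
    }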
Signed-off-by: Ran Benita <ran234@gmail.com>
Signed-off-by: Ran Benita <ran234@gmail.com>