Age | Commit message | Author | Files | Lines |
|
task_cgroup_path_from_hierarchy() was added for the planned new users
and none of the currently planned users wants to know about multiple
hierarchies. This patch drops the multiple hierarchy part and makes
it always return the path in the first non-dummy hierarchy.
As unified hierarchy will always have id 1, this is guaranteed to
return the path for the unified hierarchy if mounted; otherwise, it
will return the path from the hierarchy which happens to occupy the
lowest hierarchy id, which will usually be the first hierarchy mounted
after boot.
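A minimal caller sketch (not part of the original patch) assuming the simplified helper keeps the buffer-based signature and returns 0 on success; the printing and buffer size are illustrative:
    #include <linux/cgroup.h>
    #include <linux/sched.h>
    #include <linux/printk.h>

    static void log_task_cgroup(struct task_struct *task)
    {
        char buf[128];

        /* Path in the first non-dummy (lowest hierarchy id) hierarchy. */
        if (!task_cgroup_path(task, buf, sizeof(buf)))
            pr_info("%s is in cgroup %s\n", task->comm, buf);
    }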
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
Cc: Lennart Poettering <lennart@poettering.net>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Cc: Jan Kaluža <jkaluza@redhat.com>
Change-Id: Iaa199f7332f01a03f791def776b5403f6fa459b3
Origin: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=913ffdb54366f94eec65c656cae8c6e00e1ab1b0
Backported-by: Maciej Wereski <m.wereski@partner.samsung.com>
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
|
|
kdbus folks want a sane way to determine the cgroup path that a given
task belongs to on a given hierarchy, which is a reasonable thing to
expect from cgroup core.
Implement task_cgroup_path_from_hierarchy().
v2: Dropped unnecessary NULL check on the return value of
task_cgroup_from_root() as suggested by Li Zefan.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Greg Kroah-Hartman <greg@kroah.com>
Acked-by: Li Zefan <lizefan@huawei.com>
Cc: Kay Sievers <kay@vrfy.org>
Cc: Lennart Poettering <lennart@poettering.net>
Cc: Daniel Mack <daniel@zonque.org>
Change-Id: Ifd630e09163b8272627c2ef8be1866c5e9dc05f9
Origin: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=857a2beb09ab83e9a8185821ae16db7dfbe8b837
Backported-by: Maciej Wereski <m.wereski@partner.samsung.com>
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
|
|
memfd_create() is similar to mmap(MAP_ANON), but returns a file-descriptor
that you can pass to mmap(). It can support sealing and avoids any
connection to user-visible mount-points. Thus, it's not subject to quotas
on mounted file-systems, and can be used like malloc()'ed memory, but with
a file-descriptor to it.
memfd_create() returns the raw shmem file, so calls like ftruncate() can
be used to modify the underlying inode. Also calls like fstat() will
return proper information and mark the file as a regular file. If you want
sealing, you can specify MFD_ALLOW_SEALING. Otherwise, sealing is not
supported (like on all other regular files).
Compared to O_TMPFILE, it does not require a tmpfs mount-point and is not
subject to a filesystem size limit. It is still properly accounted to
memcg limits, though, and to the same overcommit or no-overcommit
accounting as all user memory.
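A hedged userspace sketch of the intended usage. memfd_create() had no glibc wrapper at the time, so it is invoked via syscall(); __NR_memfd_create must exist in the target headers, and MFD_ALLOW_SEALING is defined locally for self-containment:
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define MFD_ALLOW_SEALING 0x0002U   /* from linux/memfd.h */

    int main(void)
    {
        const size_t size = 4096;

        /* Anonymous, mountpoint-free shmem file; the name is for debugging only. */
        int fd = syscall(__NR_memfd_create, "example", MFD_ALLOW_SEALING);
        if (fd < 0)
            return 1;

        ftruncate(fd, size);            /* regular-file semantics: resize the inode */

        char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        strcpy(p, "hello");             /* use it like malloc()'ed memory */

        munmap(p, size);
        close(fd);
        return 0;
    }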
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Ryan Lortie <desrt@desrt.ca>
Cc: Lennart Poettering <lennart@poettering.net>
Cc: Daniel Mack <zonque@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I2ac7e2b47a1d68d4b83680f4527e5ed2aa9a420c
Origin: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=9183df25fe7b194563db3fec6dc3202a5855839c
Backported-by: Maciej Wereski <m.wereski@partner.samsung.com>
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
|
|
If two processes share a common memory region, they usually want some
guarantees to allow safe access. This often includes:
- one side cannot overwrite data while the other reads it
- one side cannot shrink the buffer while the other accesses it
- one side cannot grow the buffer beyond previously set boundaries
If there is a trust-relationship between both parties, there is no need
for policy enforcement. However, if there's no trust relationship (e.g.,
for general-purpose IPC) sharing memory-regions is highly fragile and
often not possible without local copies. Look at the following two
use-cases:
1) A graphics client wants to share its rendering-buffer with a
graphics-server. The memory-region is allocated by the client for
read/write access and a second FD is passed to the server. While
scanning out from the memory region, the server has no guarantee that
the client doesn't shrink the buffer at any time, requiring rather
cumbersome SIGBUS handling.
2) A process wants to perform an RPC on another process. To avoid huge
bandwidth consumption, zero-copy is preferred. After a message is
assembled in-memory and a FD is passed to the remote side, both sides
want to be sure that neither modifies this shared copy, anymore. The
source may have put sensitive data into the message without a separate
copy and the target may want to parse the message inline, to avoid a
local copy.
While SIGBUS handling, POSIX mandatory locking and MAP_DENYWRITE provide
ways to achieve most of this, the first one is unproportionally ugly to
use in libraries and the latter two are broken/racy or even disabled due
to denial of service attacks.
This patch introduces the concept of SEALING. If you seal a file, a
specific set of operations is blocked on that file forever. Unlike locks,
seals can only be set, never removed. Hence, once you verified a specific
set of seals is set, you're guaranteed that no-one can perform the blocked
operations on this file, anymore.
An initial set of SEALS is introduced by this patch:
- SHRINK: If SEAL_SHRINK is set, the file in question cannot be reduced
in size. This affects ftruncate() and open(O_TRUNC).
- GROW: If SEAL_GROW is set, the file in question cannot be increased
in size. This affects ftruncate(), fallocate() and write().
- WRITE: If SEAL_WRITE is set, no write operations (besides resizing)
are possible. This affects fallocate(PUNCH_HOLE), mmap() and
write().
- SEAL: If SEAL_SEAL is set, no further seals can be added to a file.
This basically prevents the F_ADD_SEAL operation on a file and
can be set to prevent others from adding further seals that you
don't want.
The described use-cases can easily use these seals to provide safe use
without any trust-relationship:
1) The graphics server can verify that a passed file-descriptor has
SEAL_SHRINK set. This allows safe scanout, while the client is
allowed to increase buffer size for window-resizing on-the-fly.
Concurrent writes are explicitly allowed.
2) For general-purpose IPC, both processes can verify that SEAL_SHRINK,
SEAL_GROW and SEAL_WRITE are set. This guarantees that neither
process can modify the data while the other side parses it.
Furthermore, it guarantees that even with writable FDs passed to the
peer, it cannot increase the size to hit memory-limits of the source
process (in case the file-storage is accounted to the source).
The new API is an extension to fcntl(), adding two new commands:
F_GET_SEALS: Return a bitset describing the seals on the file. This
can be called on any FD if the underlying file supports
sealing.
F_ADD_SEALS: Change the seals of a given file. This requires WRITE
access to the file and F_SEAL_SEAL may not already be set.
Furthermore, the underlying file must support sealing and
there may not be any existing shared mapping of that file.
Otherwise, EBADF/EPERM is returned.
The given seals are _added_ to the existing set of seals
on the file. You cannot remove seals again.
The fcntl() handler is currently specific to shmem and disabled on all
files. A file needs to explicitly support sealing for this interface to
work. A separate syscall is added in a follow-up, which creates files that
support sealing. There is no intention to support this on other
file-systems. Semantics are unclear for non-volatile files and we lack any
use-case right now. Therefore, the implementation is specific to shmem.
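A hedged sketch of how the two processes in the IPC use-case might use the new fcntl() commands. The constants match the uapi values of the time but are defined locally here for self-containment; treat the helper names as illustrative:
    #include <fcntl.h>

    /* Sealing uapi constants (normally from fcntl.h). */
    #ifndef F_ADD_SEALS
    #define F_ADD_SEALS   (1024 + 9)
    #define F_GET_SEALS   (1024 + 10)
    #define F_SEAL_SEAL   0x0001
    #define F_SEAL_SHRINK 0x0002
    #define F_SEAL_GROW   0x0004
    #define F_SEAL_WRITE  0x0008
    #endif

    /* Sender: freeze the buffer before passing the fd to an untrusted peer. */
    static int seal_message(int fd)
    {
        return fcntl(fd, F_ADD_SEALS,
                     F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE | F_SEAL_SEAL);
    }

    /* Receiver: only trust the shared copy if the required seals are present. */
    static int message_is_sealed(int fd)
    {
        int seals = fcntl(fd, F_GET_SEALS);

        if (seals < 0)
            return 0;   /* file does not support sealing */
        return (seals & (F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE)) ==
               (F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE);
    }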
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Ryan Lortie <desrt@desrt.ca>
Cc: Lennart Poettering <lennart@poettering.net>
Cc: Daniel Mack <zonque@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I58642ae2db7fef5d952b22beada3525526dd3a20
Origin: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=40e041a2c858b3caefc757e26cb85bfceae5062b
Backported-by: Maciej Wereski <m.wereski@partner.samsung.com>
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
|
|
This patch (of 6):
The i_mmap_writable field counts existing writable mappings of an
address_space. To allow drivers to prevent new writable mappings, make
this counter signed and prevent new writable mappings if it is negative.
This is modelled after i_writecount and DENYWRITE.
This will be required by the shmem-sealing infrastructure to prevent any
new writable mappings after the WRITE seal has been set. In case there
exists a writable mapping, this operation will fail with EBUSY.
Note that we rely on the fact that iff you already own a writable mapping,
you can increase the counter without using the helpers. This is the same
that we do for i_writecount.
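A sketch of the counter helpers such a signed count implies, modelled on the i_writecount analogy described above; the names and exact error codes should be read as illustrative rather than as the final merged API:
    #include <linux/fs.h>
    #include <linux/atomic.h>

    /* i_mmap_writable: > 0 writable mappings exist, < 0 new ones are denied. */
    static inline int mapping_map_writable(struct address_space *mapping)
    {
        return atomic_inc_unless_negative(&mapping->i_mmap_writable) ?
            0 : -EPERM;
    }

    static inline void mapping_unmap_writable(struct address_space *mapping)
    {
        atomic_dec(&mapping->i_mmap_writable);
    }

    /* Used by sealing: fails with -EBUSY while a writable mapping exists. */
    static inline int mapping_deny_writable(struct address_space *mapping)
    {
        return atomic_dec_unless_positive(&mapping->i_mmap_writable) ?
            0 : -EBUSY;
    }

    static inline void mapping_allow_writable(struct address_space *mapping)
    {
        atomic_inc(&mapping->i_mmap_writable);
    }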
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Ryan Lortie <desrt@desrt.ca>
Cc: Lennart Poettering <lennart@poettering.net>
Cc: Daniel Mack <zonque@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: If33fdcedbcf202ab177c4e21afc7eec261088a8b
Origin: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=4bb5f5d9395bc112d93a134d8f5b05611eddc9c0
Backported-by: Maciej Wereski <m.wereski@partner.samsung.com>
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
|
|
This is intended for use in loops which read data protected by RCU and may
have a large number of iterations. Such an example is dumping the list of
connections known to IPVS: ip_vs_conn_array() and ip_vs_conn_seq_next().
The benefits are for CONFIG_PREEMPT_RCU=y where we save CPU cycles
by moving rcu_read_lock and rcu_read_unlock out of large loops
but still allowing the current task to be preempted after every
loop iteration for the CONFIG_PREEMPT_RCU=n case.
The call to cond_resched() is not needed when CONFIG_PREEMPT_RCU=y.
Thanks to Paul E. McKenney for explaining this and for the
final version that checks the context with CONFIG_DEBUG_ATOMIC_SLEEP=y
for all possible configurations.
The function can be empty in the CONFIG_PREEMPT_RCU case;
rcu_read_lock and rcu_read_unlock are not needed in this case
because the task can be preempted on indication from the scheduler.
Thanks to Peter Zijlstra for catching this and for his help
in trying a solution that changes __might_sleep.
Initial cond_resched_rcu_lock() function suggested by Eric Dumazet.
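A hedged usage sketch for a long RCU-protected walk; the entry type and processing function are illustrative placeholders, and whether the reschedule actually happens depends on the PREEMPT_RCU configuration as explained above:
    #include <linux/rculist.h>
    #include <linux/sched.h>

    struct my_conn {                    /* illustrative entry type */
        struct list_head list;
    };

    void process_connection(struct my_conn *cp);   /* illustrative */

    static void dump_connections(struct list_head *head)
    {
        struct my_conn *cp;

        rcu_read_lock();
        list_for_each_entry_rcu(cp, head, list) {
            process_connection(cp);
            /*
             * !CONFIG_PREEMPT_RCU: briefly drops the RCU read lock and
             * calls cond_resched(); PREEMPT_RCU: effectively a no-op.
             * Only safe mid-walk if the current entry is guaranteed to
             * stay valid across the resched, as in the IPVS dump code.
             */
            cond_resched_rcu();
        }
        rcu_read_unlock();
    }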
Tested-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Change-Id: I5f36f86484198f9064725d424c3d91d5fac8e1d4
Origin: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f6f3c437d09e2f62533034e67bfb4385191e992c
Backported-by: Maciej Wereski <m.wereski@partner.samsung.com>
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
|
|
For the casual device driver writer, it is hard to remember when to use
init_completion (to init a completion structure) or INIT_COMPLETION (to
*reinit* a completion structure). Furthermore, while all other
completion functions expect a pointer as a parameter, INIT_COMPLETION
does not. To make it easier to remember which function to use and to
make code more readable, introduce a new inline function with the proper
name and consistent argument type. Update the kernel-doc for
init_completion while we are here.
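A small sketch contrasting the two calls (the completing side is elided; assume some other context calls complete(&done)):
    #include <linux/completion.h>

    static struct completion done;

    static void wait_twice(void)
    {
        init_completion(&done);         /* first-time initialisation */

        wait_for_completion(&done);     /* ... someone calls complete(&done) */

        /* Re-arm for reuse; replaces the old INIT_COMPLETION(done) macro. */
        reinit_completion(&done);

        wait_for_completion(&done);
    }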
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Linus Walleij <linus.walleij@linaro.org> (personally at LCE13)
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The DMA Buffer synchronization API provides a buffer synchronization
mechanism based on the DMA buffer sharing mechanism[1] and the dma-fence
and reservation frameworks[2];
i.e., buffer access control between CPU and DMA, and easy-to-use interfaces
for device drivers and user applications. This API can be used
for all DMA devices using system memory as a DMA buffer, especially
for most ARM based SoCs.
For more details, please refer to Documentation/dma-buf-sync.txt
[1] http://lwn.net/Articles/470339/
[2] https://lkml.org/lkml/2014/2/24/824
Change-Id: I3b2084a3c331fc06992fa8d2a4c71378e88b10b5
Signed-off-by: Inki Dae <inki.dae@samsung.com>
|
|
Currently seqlocks and seqcounts don't support lockdep.
After running across a seqcount related deadlock in the timekeeping
code, I used a less-refined and more focused variant of this patch
to narrow down the cause of the issue.
This is a first-pass attempt to properly enable lockdep functionality
on seqlocks and seqcounts.
Since seqcounts are used in the vdso gettimeofday code, I've provided
non-lockdep accessors for those needs.
I've also handled one case where there were nested seqlock writers
and there may be more edge cases.
Comments and feedback would be appreciated!
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Link: http://lkml.kernel.org/r/1381186321-4906-3-git-send-email-john.stultz@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The sequence lock (seqlock) was originally designed for the cases where
the readers do not need to block the writers by making the readers retry
the read operation when the data change.
Since then, the use cases have been expanded to include situations where
a thread does not need to change the data (effectively a reader) at all
but have to take the writer lock because it can't tolerate changes to
the protected structure. Some examples are the d_path() function and
the getcwd() syscall in fs/dcache.c where the functions take the writer
lock on rename_lock even though they don't need to change anything in
the protected data structure at all. This is inefficient as a reader is
now blocking other sequence number reading readers from moving forward
by pretending to be a writer.
This patch tries to eliminate this inefficiency by introducing a new
type of locking reader to the seqlock locking mechanism. This new
locking reader will try to take an exclusive lock preventing other
writers and locking readers from going forward. However, it won't
affect the progress of the other sequence number reading readers as the
sequence number won't be changed.
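A hedged sketch of the new reader type next to the classic sequence reader; the read_seqlock_excl()/read_sequnlock_excl() names follow this patch's "locking reader" description and the data being protected is illustrative:
    #include <linux/seqlock.h>

    static DEFINE_SEQLOCK(example_lock);

    /* Sequence reader: retries if a writer changed the data meanwhile. */
    static unsigned int read_value(const unsigned int *val)
    {
        unsigned int seq, v;

        do {
            seq = read_seqbegin(&example_lock);
            v = *val;
        } while (read_seqretry(&example_lock, seq));
        return v;
    }

    /*
     * Locking reader: blocks writers and other locking readers, but does not
     * bump the sequence number, so concurrent sequence readers never retry.
     */
    static unsigned int read_value_stable(const unsigned int *val)
    {
        unsigned int v;

        read_seqlock_excl(&example_lock);
        v = *val;
        read_sequnlock_excl(&example_lock);
        return v;
    }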
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
In lockdep.h, the spinlock/mutex/rwsem/rwlock/lock_map acquire macros have
different definitions based on the value of CONFIG_PROVE_LOCKING. We have
separate ifdefs for each of these definitions, which seems redundant.
Introduce lock_acquire_{exclusive,shared,shared_recursive} helpers which
will have different definitions based on CONFIG_PROVE_LOCKING. Then all
other helper macros can be defined based on the above ones, which reduces
the amount of ifdefined code.
Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/20130708212350.6DD1931C15E@corp2gmr1-1.hot.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
This adds some extra functions to deal with rcu.
reservation_object_get_fences_rcu() will obtain the list of shared
and exclusive fences without obtaining the ww_mutex.
reservation_object_wait_timeout_rcu() will wait on all fences of the
reservation_object, without obtaining the ww_mutex.
reservation_object_test_signaled_rcu() will test if all fences of the
reservation_object are signaled without using the ww_mutex.
reservation_object_get_excl and reservation_object_get_list require
the reservation object to be held, updating requires
write_seqcount_begin/end. If only the exclusive fence is needed,
rcu_dereference followed by fence_get_rcu can be used, if the shared
fences are needed it's recommended to use the supplied functions.
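A hedged usage sketch of the lockless helpers; the argument order follows the descriptions above and the timeout handling is illustrative, so treat it as an approximation rather than the exact merged API:
    #include <linux/reservation.h>
    #include <linux/fence.h>
    #include <linux/jiffies.h>
    #include <linux/errno.h>

    /* Wait for all fences (shared + exclusive) without taking obj->lock. */
    static int wait_for_buffer_idle(struct reservation_object *obj)
    {
        long ret;

        if (reservation_object_test_signaled_rcu(obj, true))
            return 0;                       /* already idle */

        ret = reservation_object_wait_timeout_rcu(obj, true, true,
                                                  msecs_to_jiffies(100));
        if (ret == 0)
            return -ETIMEDOUT;
        return ret < 0 ? ret : 0;
    }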
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Move the list of shared fences to a struct, and return it in
reservation_object_get_list().
Add reservation_object_get_excl to get the exclusive fence.
Add reservation_object_reserve_shared(), which reserves space
in the reservation_object for 1 more shared fence.
reservation_object_add_shared_fence() and
reservation_object_add_excl_fence() are used to assign a new
fence to a reservation_object pointer, to complete a reservation.
Changes since v1:
- Add reservation_object_get_excl, reorder code a bit.
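A hedged sketch of the intended calling sequence; the function names come from the description above, while the ww_mutex member access and the missing acquire context are simplifications for a single-object case:
    #include <linux/reservation.h>
    #include <linux/fence.h>

    static int attach_shared_fence(struct reservation_object *obj,
                                   struct fence *fence)
    {
        int ret;

        ww_mutex_lock(&obj->lock, NULL);    /* no acquire ctx: single object */

        /* Make room for one more shared fence before publishing it. */
        ret = reservation_object_reserve_shared(obj);
        if (!ret)
            reservation_object_add_shared_fence(obj, fence);

        ww_mutex_unlock(&obj->lock);
        return ret;
    }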
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Thanks to Fengguang Wu for spotting a missing static cast.
v2:
- Kill unused variable need_shared.
v3:
- Clarify the BUG() in dma_buf_release some more. (Rob Clark)
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/dma-buf/dma-buf.c
Change-Id: I6c0d192dfd53809a16d3564e3863c1d1f0f348c7
|
|
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Move the definitions for wound/wait mutexes out to a separate
header, ww_mutex.h. This reduces clutter in mutex.h, and
increases readability.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Dave Airlie <airlied@gmail.com>
Link: http://lkml.kernel.org/r/51D675DC.3000907@canonical.com
[ Tidied up the code a bit. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
This allows reservation objects to be used in dma-buf. It's required
for implementing polling support on the fences that belong to a dma-buf.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Mauro Carvalho Chehab <m.chehab@samsung.com> #drivers/media/v4l2-core/
Acked-by: Thomas Hellstrom <thellstrom@vmware.com> #drivers/gpu/drm/ttm
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Vincent Stehlé <vincent.stehle@laposte.net> #drivers/gpu/drm/armada/
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/gpu/drm/armada/armada_gem.c
drivers/gpu/drm/drm_prime.c
drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
drivers/gpu/drm/i915/i915_gem_dmabuf.c
drivers/gpu/drm/nouveau/nouveau_drm.c
drivers/gpu/drm/nouveau/nouveau_gem.h
drivers/gpu/drm/nouveau/nouveau_prime.c
drivers/gpu/drm/radeon/radeon_drv.c
drivers/gpu/drm/tegra/gem.c
drivers/gpu/drm/ttm/ttm_object.c
drivers/staging/android/ion/ion.c
Change-Id: I44fbb1f41500deaf9067eb5d7e1c6ed758231d69
|
|
This type of fence can be used with hardware synchronization for simple
hardware that can block execution until the condition
(dma_buf[offset] - value) >= 0 has been met when WAIT_GEQUAL is used,
or (dma_buf[offset] != 0) has been met when WAIT_NONZERO is set.
A software fallback still has to be provided in case the fence is used
with a device that doesn't support this mechanism. It is useful to expose
this for graphics cards that have an op to support this.
Some cards like i915 can export those, but don't have an option to wait,
so they need the software fallback.
I extended the original patch by Rob Clark.
v1: Original
v2: Renamed from bikeshed to seqno, moved into dma-fence.c since
not much was left of the file. Lots of documentation added.
v3: Use fence_ops instead of custom callbacks. Moved to own file
to avoid circular dependency between dma-buf.h and fence.h
v4: Add spinlock pointer to seqno_fence_init
v5: Add condition member to allow wait for != 0.
Fix small style errors pointed out by checkpatch.
v6: Move to a separate file. Fix up api changes in fences.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com> #v4
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
A fence can be attached to a buffer which is being filled or consumed
by hw, to allow userspace to pass the buffer without waiting to another
device. For example, userspace can call page_flip ioctl to display the
next frame of graphics after kicking the GPU but while the GPU is still
rendering. The display device sharing the buffer with the GPU would
attach a callback to get notified when the GPU's rendering-complete IRQ
fires, to update the scan-out address of the display, without having to
wake up userspace.
A driver must allocate a fence context for each execution ring that can
run in parallel. The function for this takes an argument with how many
contexts to allocate:
+ fence_context_alloc()
A fence is a transient, one-shot deal. It is allocated and attached
to one or more dma-bufs. When the one that attached it is done with
the pending operation, it can signal the fence:
+ fence_signal()
To have a rough approximation whether a fence is fired, call:
+ fence_is_signaled()
The dma-buf-mgr handles tracking, and waiting on, the fences associated
with a dma-buf.
The one pending on the fence can add an async callback:
+ fence_add_callback()
The callback can optionally be cancelled with:
+ fence_remove_callback()
To wait synchronously, optionally with a timeout:
+ fence_wait()
+ fence_wait_timeout()
When emitting a fence, call:
+ trace_fence_emit()
To annotate that a fence is blocking on another fence, call:
+ trace_fence_annotate_wait_on(fence, on_fence)
A default software-only implementation is provided, which can be used
by drivers attaching a fence to a buffer when they have no other means
for hw sync. But a memory backed fence is also envisioned, because it
is common that GPU's can write to, or poll on some memory location for
synchronization. For example:
    fence = custom_get_fence(...);
    if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
        struct dma_buf *fence_buf = seqno_fence->sync_buf;
        get_dma_buf(fence_buf);
        ... tell the hw the memory location to wait on ...
        custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
    } else {
        /* fall-back to sw sync */
        fence_add_callback(fence, my_cb);
    }
On SoC platforms, if some other hw mechanism is provided for synchronizing
between IP blocks, it could be supported as an alternate implementation
with its own fence ops in a similar way.
enable_signaling callback is used to provide sw signaling in case a cpu
waiter is requested or no compatible hardware signaling could be used.
The intention is to provide a userspace interface (presumably via eventfd)
later, to be used in conjunction with dma-buf's mmap support for sw access
to buffers (or for userspace apps that would prefer to do their own
synchronization).
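A hedged sketch of the producer/consumer flow using the calls listed above; my_fence_ops, my_lock, the callback and the context/seqno handling are illustrative placeholders, and a real fence_ops must at least provide enable_signaling, wait and the name callbacks:
    #include <linux/fence.h>
    #include <linux/slab.h>

    static DEFINE_SPINLOCK(my_lock);
    static const struct fence_ops my_fence_ops;     /* .enable_signaling etc. */
    static struct fence_cb my_cb;

    static void my_cb_func(struct fence *f, struct fence_cb *cb)
    {
        /* runs once the fence signals */
    }

    static struct fence *producer_emit(unsigned context, unsigned seqno)
    {
        struct fence *f = kmalloc(sizeof(*f), GFP_KERNEL);

        if (f)  /* context obtained earlier via fence_context_alloc(1) */
            fence_init(f, &my_fence_ops, &my_lock, context, seqno);
        return f;
    }

    static void consumer_wait_async(struct fence *f)
    {
        if (!fence_is_signaled(f))
            fence_add_callback(f, &my_cb, my_cb_func);  /* async notify */
    }

    static void producer_complete(struct fence *f)
    {
        fence_signal(f);        /* wakes waiters and runs callbacks */
    }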
v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
that dma-fence didn't need to care about the sw->hw signaling path
(it can be handled same as sw->sw case), and therefore the fence->ops
can be simplified and more handled in the core. So remove the signal,
add_callback, cancel_callback, and wait ops, and replace with a simple
enable_signaling() op which can be used to inform a fence supporting
hw->hw signaling that one or more devices which do not support hw
signaling are waiting (and therefore it should enable an irq or do
whatever is necessary in order that the CPU is notified when the
fence is passed).
v3: Fix locking fail in attach_fence() and get_fence()
v4: Remove tie-in w/ dma-buf.. after discussion w/ danvet and mlankhorst
we decided that we need to be able to attach one fence to N dma-buf's,
so using the list_head in dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
about checking if fence fired or not. This is broken by design.
waitqueue_active during destruction is now fatal, since the signaller
should be holding a reference in enable_signalling until it signalled
the fence. Pass the original dma_fence_cb along, and call __remove_wait
in the dma_fence_callback handler, so that no cleanup needs to be
performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
fence wasn't signaled yet, for example for hardware fences that may
choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
header and fixed include mess. dma-fence.h now includes dma-buf.h
All members are now initialized, so kmalloc can be used for
allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of
enable_signaling to bool. Rework dma_fence_wait. Added
dma_fence_is_signaled and dma_fence_wait_timeout.
s/dma// and change exports to non GPL. Added fence_is_signaled and
fence_enable_sw_signaling calls, add ability to override default
wait operation.
v10: remove event_queue, use a custom list, export try_to_wake_up from
scheduler. Remove fence lock and use a global spinlock instead,
this should hopefully remove all the locking headaches I was having
on trying to implement this. enable_signaling is called with this
lock held.
v11:
Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
However I kept the guarantee that after fence_signal returns, it is
guaranteed that enable_signaling has either been called to completion,
or will not be called any more.
Add contexts and seqno to base fence implementation. This allows you
to wait for less fences, by testing for seqno + signaled, and then only
wait on the later fence.
Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE
spam, and another runtime option can turn it off at runtime.
v12:
Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
and fence->seqno members.
v13:
Fixup CONFIG_FENCE_TRACE kconfig description.
Move fence_context_alloc to fence.
Simplify fence_later.
Kill priv member to fence_cb.
v14:
Remove priv argument from fence_add_callback, oops!
v15:
Remove priv from documentation.
Explicitly include linux/atomic.h.
v16:
Add trace events.
Import changes required by android syncpoints.
v17:
Use wake_up_state instead of try_to_wake_up. (Colin Cross)
Fix up commit description for seqno_fence. (Rob Clark)
v18:
Rename release_fence to fence_release.
Move to drivers/dma-buf/.
Rename __fence_is_signaled and __fence_signal to *_locked.
Rename __fence_init to fence_init.
Make fence_default_wait return a signed long, and fix wait ops too.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Conflicts:
drivers/base/Kconfig
Change-Id: Ie62c8c33a0cb7ca3df596f47ef328c33c4468139
|
|
Russell King observed 'weird' looking output from debugfs, and also suggested
better ways of getting device names (use KBUILD_MODNAME, dev_name()).
This patch addresses these issues to make the debugfs output correct and
better looking.
While at it, replace seq_printf with seq_puts to remove the checkpatch.pl
warnings.
Reported-by: Russell King - ARM Linux <linux@arm.linux.org.uk>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
|
|
This adds support for a generic reservations framework that can be
hooked up to ttm and dma-buf and allows easy sharing of reservations
across devices.
The idea is that a dma-buf and ttm object both will get a pointer
to a struct reservation_object, which has to be reserved before
anything is done with the contents of the dma-buf.
Changes since v1:
- Fix locking issue in ticket_reserve, which could cause mutex_unlock
to be called too many times.
Changes since v2:
- All fence related calls and members have been taken out for now,
what's left is the bare minimum to be useful for ttm locking conversion.
Changes since v3:
- Removed helper functions too. The documentation has an example
implementation for locking. With the move to ww_mutex there is no
need to have much logic any more.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
This reverts commit 7a9958fedb90ef4000b6461d77a5c6dfd795c1c1.
|
|
This reverts commit 5c6a3a47e9a5b4286e4219bd70e9917b8ffee414.
|
|
This reverts commit 5d2749a0ac3be2a3ed43a24a88d821e26097bf1e.
|
|
This reverts commit 4439a419906d4fe3d7e5093292bd2f4f4fbfc8c2.
|
|
This reverts commit cf7e07ce2d9843105d2ed8f9d30ee66c06d83bb0.
|
|
Add an additional polling mode for the sleep state so that it can be defined
differently from the normal state's. With this change, charger-manager can
behave differently in the normal and sleep states, e.g. always polling in the
normal state but polling only while charging in the sleep state. If no
polling mode is defined for the sleep state, it simply follows the normal
state's. The polling rate, however, stays the same in sleep.
Change-Id: I787a3abd646bdc0be81dccbafbd635c22c84951c
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
In the current code, charger-manager may attempt to read the battery state
more than once in the same period; it already does so when the user accesses
a uevent, reading the battery voltage several times.
This patch makes charger-manager read the current battery state, including
SoC, voltage and temperature, at the start of monitoring and re-use it for
the whole period.
Change-Id: Ic69ddb465a75237ded0488cb3eb41f47bb909311
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
In cm_monitor(), where charging management starts, various charging
conditions are checked sequentially to decide the next charging operation.
However, because it follows a sequential process (cascaded if statements), it
repeats work already done in a previous stage, which delays decision making.
Moreover, the points where charging is started are spread all around, which
makes the code hard to maintain and debug.
Both of these problems disappear if battery charging is managed by focusing
on the battery state rather than on sequential condition checking. Now,
cm_monitor() walks a battery state diagram and performs the optimal operation
for the current state. As a result, it cuts the whole monitoring time roughly
in half.
Change-Id: I1c8cfe57cea6fc8c02a4c0bf7bde6a0d8395b786
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
To avoid a NULL pointer dereference and also follow the common convention
of 'for loop' control, fix the driver to use the scalar variable
'num_chargers' when it iterates over the power_supply pointer array.
Change-Id: Iab834cf93822b1a03419ca9933e82d6396cc0474
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
cm_notify_event() was introduced to receive battery status events from
outside the driver, but it has never been used. Moreover, it makes the
charger-manager driver more complicated. This patch drops the function and
all related data to simplify the driver.
Change-Id: I89d802f57a3005c9102e8d342318f2bf77ccce48
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
Monitoring battery temperature becomes mandatory for charger-manager to
work. If there is no way to measure temperature, it stops probing and won't
work. It will also use only the thermal subsystem interface, to keep the
driver simple.
Change-Id: Idef5f8f29a104f5f51532c3321f6bebe887deb24
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
To guarantee proper charging and battery management even in suspend,
charger-manager has used an rtc device through the rtc framework interface.
However, it is better to use alarmtimer for cleaner and more appropriate
operation.
This patch makes the driver use alarmtimer for polling work in suspend and
removes all deprecated code related to the rtc interface.
Change-Id: I0c84ccd62653b50fc9c9b687fddcaba7ea58e8ca
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
Drop all local commits and adjust the driver to the up-to-date mainline
version to make it easier to maintain.
Change-Id: Id5dc3314afd6498e704bcc1bdebe2c226b8fa07c
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
Add functions needed for hooking up alarmtimer to timerfd:
* alarm_restart: Similar to hrtimer_restart, restart an alarmtimer after
the expires time has already been updated (as with alarm_forward).
* alarm_forward_now: Similar to hrtimer_forward_now, move the expires
time forward to an interval from the current time of the associated clock.
* alarm_start_relative: Start an alarmtimer with an expires time relative to
the current time of the associated clock.
* alarm_expires_remaining: Similar to hrtimer_expires_remaining, return the
amount of time remaining until alarm expiry.
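A hedged sketch of a periodic re-arm using the new helpers; the alarm is assumed to have been alarm_init()'d against CLOCK_REALTIME elsewhere, and the exact return types are approximations of the listed functions:
    #include <linux/alarmtimer.h>
    #include <linux/ktime.h>

    /* Re-arm a periodic alarm, e.g. from a timerfd-style expiry handler. */
    static void rearm_alarm(struct alarm *alrm, ktime_t interval)
    {
        /* Push the expiry forward by whole intervals from "now" ... */
        alarm_forward_now(alrm, interval);
        /* ... and queue it again with the already-updated expiry. */
        alarm_restart(alrm);
    }

    static ktime_t time_left(struct alarm *alrm)
    {
        return alarm_expires_remaining(alrm);
    }

    static void start_in_five_seconds(struct alarm *alrm)
    {
        alarm_start_relative(alrm, ktime_set(5, 0));
    }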
Change-Id: I609e268d53725b2c2b62b618ccc388f4329997f5
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
|
|
This patch introduces a device tree binding for
describing the hardware thermal behavior and limits.
Also presented is a parser to read and interpret the data
and feed it into the thermal framework.
This patch introduces a thermal data parser for device
tree. The parsed data is used to build thermal zones
and thermal binding parameters. The output data
can then be used to deploy thermal policies.
This patch adds also documentation regarding this
API and how to define tree nodes to use
this infrastructure.
Note that, in order to be able to have control
on the sensor registration on the DT thermal zone,
it was required to allow changing the thermal zone
.get_temp callback. For this reason, this patch
also removes the 'const' modifier from the .ops
field of thermal zone devices.
Change-Id: Iaecd480e8a5e21f0d3154cc9bf782bbfd051d40a
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: linux-pm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Eduardo Valentin <eduardo.valentin@ti.com>
|
|
Currently, all power supply devices are registered with a wakeup source,
which results in every power_supply_changed() invocation bringing
the system out of the suspend-to-freeze state.
This is overkill as some device drivers, e.g. ACPI battery driver,
have the ability to check the device status and wake up the system
from sleeping only when necessary.
Thus introduce a new API which allows a device to be registered
w/o a wakeup source.
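A hedged sketch of the intended registration call, assuming it takes the same arguments as power_supply_register() in this kernel generation; the supply description and error handling are illustrative:
    #include <linux/power_supply.h>

    static struct power_supply acpi_battery_psy = {
        .name = "acpi-battery",
        .type = POWER_SUPPLY_TYPE_BATTERY,
        /* .properties / .get_property filled in as usual */
    };

    static int register_battery(struct device *parent)
    {
        /*
         * No wakeup source: power_supply_changed() from this device will
         * not abort suspend-to-freeze on its own.
         */
        return power_supply_register_no_ws(parent, &acpi_battery_psy);
    }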
Change-Id: If0ea9720c9c2161e3a33db0988bcd464b79f2b91
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Add method to get power supply by device tree phandle.
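A hedged usage sketch; the "battery" property name is illustrative, and the caller should check the result for NULL/ERR_PTR per the implementation's error convention:
    #include <linux/power_supply.h>
    #include <linux/of.h>

    static struct power_supply *find_battery(struct device_node *np)
    {
        /* Looks up the power supply referenced by the "battery" phandle. */
        return power_supply_get_by_phandle(np, "battery");
    }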
Change-Id: I486cb12098d96a0c0b1a930b194dd899fb5a61c8
Signed-off-by: Sebastian Reichel <sre@debian.org>
Reviewed-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Anton Vorontsov <anton@enomsg.org>
|
|
This patch adds a notifier chain to power_supply; this helps drivers
in other subsystems listen to changes in the power supply subsystem.
This would help to take some actions in those drivers on changing the
power supply properties. One such scenario is to increase/decrease system
performance based on the battery capacity/voltage. Another scenario is to
adjust the h/w peak current detection voltage/current thresholds based on
battery voltage/capacity. The notifier helps drivers listen to changes
in the power_supply subsystem without polling the power_supply properties.
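A hedged sketch of a consumer of the new notifier chain; the PSY_EVENT_PROP_CHANGED event and register/unregister helpers follow the merged API, while the reaction logic is a placeholder:
    #include <linux/power_supply.h>
    #include <linux/notifier.h>
    #include <linux/printk.h>

    static int psy_event(struct notifier_block *nb, unsigned long event,
                         void *data)
    {
        struct power_supply *psy = data;

        if (event == PSY_EVENT_PROP_CHANGED) {
            pr_debug("power supply %s changed\n", psy->name);
            /* e.g. adjust performance or peak-current thresholds here */
        }
        return NOTIFY_OK;
    }

    static struct notifier_block psy_nb = { .notifier_call = psy_event };

    static int hook_power_supply(void)
    {
        return power_supply_reg_notifier(&psy_nb);
    }

    static void unhook_power_supply(void)
    {
        power_supply_unreg_notifier(&psy_nb);
    }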
Change-Id: I06d5b614d1ad836826f87f24791c94d2fa6c4aa6
Signed-off-by: Jenny TC <jenny.tc@intel.com>
Signed-off-by: Pali Rohár <pali.rohar@gmail.com>
Acked-by: Jenny TC <jenny.tc@intel.com>
Signed-off-by: Anton Vorontsov <anton@enomsg.org>
|
|
This patch, originally authored by Arve Hjonnevag and Todd Poynor,
prevents the system from entering suspend mode until the power supply
plug, unplug, or any other change of state event is fully processed. This
guarantees that the screen lights up and displays the battery charging
state. The implementation uses the power supply wakeup_source object.
Change-Id: I541e6a5d0567f2a731f14abc16fda30256ae93fe
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Arve Hjonnevag <arve@android.com>
Cc: Todd Poynor <toddpoynor@google.com>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Zoran Markovic <zoran.markovic@linaro.org>
Signed-off-by: Anton Vorontsov <anton@enomsg.org>
|
|
Some devices depend on the machine revision, so this patch adds a function
to get the machine revision. The revision is read from the device tree; if
the device tree has no 'revision' property, the function returns zero.
Change-Id: Ide2bdd314db334643e5acaabf31d8abc133a51dc
Signed-off-by: Beomho Seo <beomho.seo@samsung.com>
|
|
This patch adds a new 'load_table' debugfs file to show previously accumulated
CPU load data at the following path, and adds a CPUFREQ_LOADCHECK notification
to the CPUFREQ_TRANSITION_NOTIFIER notifier chain.
- /sys/kernel/debug/cpufreq/cpuX/load_table
When the governor calculates CPU load in dbs_check_cpu(), it sends a
CPUFREQ_LOADCHECK notification with the CPU load, so that cpufreq_stats
accumulates the calculated load in the 'load_table' storage.
This debugfs file can be used from user-space to judge the system state or to
determine suitable system resources according to the current CPU load.
This debugfs file include following data:
- Measurement point of time
- CPU frequency
- Per-CPU load
Changes since v6:
- Remove unnecessary memory free/allocation operation on
cpufreq_stats_reset_debugfs()
- Get correct index of cpu_debugfs[] array according to cpu number
- Reset 'load_table' data when the cpufreq governor is changed or updated,
because some governors (e.g., performance/powersave) don't use the
'load_table' debugfs file.
Changes since v5:
- Determine index value of policy->cpu_debugfs[] according to
cpumask_weight(policy->cpus) value
- Bug fix, store 'policy->cpu' to 'freq->cpu' before notify
CPUFREQ_LOADCHECK notification
Changes since v4:
- Reset the data of CPUs load when cpufreq governor is changed
- Move code about creating debugfs directory to below first patch
: [PATCH 1/2] cpufreq: Add debugfs directory for cpufreq
Changes since v3:
- Extend a range of accumulated data (10 ~ 1000)
- Add unit information of time/freq and align 'Time' field as left for readability
- Use CONFIG_CPU_FREQ_STAT dependency instead of CONFIG_CPU_FREQ_STAT_DETAILS
- Initialize load of offline CPUx as zero(0)
- Create/remove debugfs root directory on cpufreq_stats_init/exit() because
debugfs root is used on all CPUs.
Changes since v2:
- Code clean according to Viresh Kumar's comment
- Show both old frequency and new frequency on 'load_table' debugfs file
- Change debugfs file path as below
old: /sys/kernel/debug/cpufreq/load_table
new: /sys/kernel/debug/cpufreq/cpuX/load_table
Changes since v1:
- Set maximum storage size to save CPUs load on Kconfig
- Use spinlock to synchronize read/write operation for CPUs load
- Use local variable instead of global variable(struct cpufreq_freqs *freqs)
- Use pointer of data structure to get correct size of data structure
in sizeof() macro instead of structure name
: sizeof(struct cpufreq_freqs) -> sizeof(*stat->load_table)
- Change time unit from nanosecond to microsecond
- Remove unnecessary memory copy
Change-Id: I14e68196360a3ec00a36e7357b8c73c887abddce
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
|
|
This patch creates the debugfs root directory and child directories according
to the number of CPUs for cpufreq, at the following debugfs path:
- /sys/kernel/debug/cpufreq/cpuX
If many CPUs share a single cpufreq policy, the other CPUs (except the first)
create a symbolic link to the debugfs directory of CPU0.
- link: /sys/kernel/debug/cpufreq/cpu[1-(N-1)] -> /sys/kernel/debug/cpufreq/cpu0
Cpufreq may then create specific debugfs files below its own debugfs
directory (e.g., /sys/kernel/debug/cpufreq/cpu0/load_table).
Changes since v6:
- Use 'policy->related_cpus' instead of 'policy->cpus' when getting the number
of CPUs included in the same package
- Get correct index of cpu_debugfs[] array according to cpu number
- Refactoring cpufreq_move_debugfs_dir() / cpufreq_create_debugfs_symlink()
- Use for_each_cpu() to support multi cluster instead of for_each_present_cpu()
Changes since v5:
- Refactoring patch v4
- Create again symbolic link of debugfs directory when first CPU dev is removed
(In this case, many CPUs share only one cpufreq policy)
Change-Id: Ibd84118e6dd3b1e3bc624e1871d39425c99b1673
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
|
|
Introduce common of_flat_dt_match_machine and
of_flat_dt_get_machine_name functions to unify architectures' handling
of machine level model and compatible properties.
Several architectures match the root compatible string with an arch
specific list of machine descriptors duplicating the same search
algorithm. Create a common implementation with a simple architecture
specific hook to iterate over each machine's match table.
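A hedged, ARM-flavoured sketch of the per-arch hook this enables; the machine_desc type, the __arch_info_* section symbols and the wrapper name are illustrative of how an architecture's setup code would feed its descriptor table to the common matcher:
    #include <linux/of_fdt.h>
    #include <linux/printk.h>
    #include <asm/mach/arch.h>      /* ARM: struct machine_desc, __arch_info_* */

    /* Hand each machine descriptor's compatible list back to the matcher. */
    static const void * __init arch_get_next_mach(const char *const **match)
    {
        static const struct machine_desc *mdesc = __arch_info_begin;
        const struct machine_desc *m = mdesc;

        if (m >= __arch_info_end)
            return NULL;

        mdesc++;
        *match = m->dt_compat;
        return m;
    }

    static const struct machine_desc * __init pick_machine(void)
    {
        const struct machine_desc *mdesc;

        mdesc = of_flat_dt_match_machine(NULL, arch_get_next_mach);
        pr_info("Machine model: %s\n", of_flat_dt_get_machine_name());
        return mdesc;
    }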
Change-Id: I77acb5c560e2b08591c37b57d5d87023aa3fbe91
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Grant Likely <grant.likely@linaro.org>
|
|
This patch adds helper functions to configure clock parents and rates
as specified through 'assigned-clock-parents', 'assigned-clock-rates'
DT properties for a clock provider or clock consumer device.
The helpers are now being called by the bus code for the platform, I2C
and SPI busses, before the driver probing and also in the clock core
after registration of a clock provider.
Change-Id: I96d98c9c9d576fcbf0dfc90d1cc75feb9fdf97cb
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
[s.nawrocki@samsung.com: backported to v3.10]
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
|
|
WORKAROUND: Temporary workaround for bluetooth enable.
The wake pin is used for waking up the hw, and the host wake pin is used
for waking up the host processor during sleep.
Additionally, this patch adds a reset-gpio property and uses the
devm_gpio_request_one function.
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
[This patch rebased by Beomho Seo]
Signed-off-by: Beomho Seo <beomho.seo@samsung.com>
Change-Id: Ibb2f74cae6e1d11ae84172d0f49a10563fc57e7f
|
|
This patch cleans up the code and adds callback functions.
- Add open/close callback functions.
- Remove code for unused configuration, functions and enums.
Signed-off-by: Beomho Seo <beomho.seo@samsung.com>
|
|
Add support for MAX77836 chipset and its additional two LDO regulators.
These LDO regulators are controlled by the PMIC block with additional
regmap (different I2C slave address).
The MAX77836 charger and safeout regulators are almost identical to
MAX14577. The register layout is the same, except for the charger's current
values. The patch adds a simple mapping between device type and the current
supported by the charger regulator.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reviewed-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
Some systems might want to be notified when the battery SoC changes, so this
patch makes charger-manager trigger a uevent whenever the battery SoC changes.
This behaviour is configurable via the 'cm-poll-batt-soc' flag in DT.
Signed-off-by: Jonghwa Lee <jonghwa3.lee@samsung.com>
|
|
This patch adds a sensorhub driver to communicate with the MCU called
sensorhub. It requires a firmware driver for the MCU.
Signed-off-by: Jaewon Kim <jaewon02.kim@samsung.com>
|