<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/include/linux/list.h, branch v4.11-rc3</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>list: introduce list_for_each_entry_from_reverse helper</title>
<updated>2017-02-03T21:35:42+00:00</updated>
<author>
<name>Jiri Pirko</name>
<email>jiri@mellanox.com</email>
</author>
<published>2017-02-03T09:29:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=b862815c3ee7b49ec20a9ab25da55a5f0bcbb95e'/>
<id>b862815c3ee7b49ec20a9ab25da55a5f0bcbb95e</id>
<content type='text'>
Similar to list_for_each_entry_continue and its reverse variant
list_for_each_entry_continue_reverse, introduce a reverse helper for
list_for_each_entry_from.
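
For reference, a sketch of what such a helper boils down to, following
the style of the existing from/continue macros (the authoritative
definition is the one in include/linux/list.h):

  #define list_for_each_entry_from_reverse(pos, head, member)  \
          for (; &amp;pos-&gt;member != (head);                        \
               pos = list_prev_entry(pos, member))

It keeps iterating backwards from the current position, including pos
itself, until the list head is reached.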

Signed-off-by: Jiri Pirko &lt;jiri@mellanox.com&gt;
Acked-by: Ido Schimmel &lt;idosch@mellanox.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>list: Split list_del() debug checking into separate function</title>
<updated>2016-10-31T20:01:57+00:00</updated>
<author>
<name>Kees Cook</name>
<email>keescook@chromium.org</email>
</author>
<published>2016-08-17T21:42:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0cd340dcb05c4a43742fe156f36737bb2a321bfd'/>
<id>0cd340dcb05c4a43742fe156f36737bb2a321bfd</id>
<content type='text'>
Similar to the list_add() debug consolidation, this commit consolidates
the debug checking performed under CONFIG_DEBUG_LIST into a new
__list_del_entry_valid() function, and stops list updates when corruption
is found.

Refactored from the same hardening in PaX and Grsecurity.
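
A sketch of the resulting shape (assuming the usual split, with the
checks themselves living in lib/list_debug.c when CONFIG_DEBUG_LIST is
enabled): the deletion path now bails out rather than updating a
corrupted list.

  static inline void __list_del_entry(struct list_head *entry)
  {
          if (!__list_del_entry_valid(entry))
                  return;         /* corruption: leave the list as-is */

          __list_del(entry-&gt;prev, entry-&gt;next);
  }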

Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Acked-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Acked-by: Rik van Riel &lt;riel@redhat.com&gt;
</content>
</entry>
<entry>
<title>list: Split list_add() debug checking into separate function</title>
<updated>2016-10-31T20:01:56+00:00</updated>
<author>
<name>Kees Cook</name>
<email>keescook@chromium.org</email>
</author>
<published>2016-08-17T21:42:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=d7c816733d501b59dbdc2483f2cc8e4431fd9160'/>
<id>d7c816733d501b59dbdc2483f2cc8e4431fd9160</id>
<content type='text'>
Right now, the __list_add() code is duplicated in list.h and
list_debug.c, and the only difference between the two versions is the
debug checks. This commit therefore extracts those debug checks into a
separate __list_add_valid() function and consolidates __list_add().
Additionally, the new __list_add_valid() function stops list
manipulations if corruption is detected, instead of allowing further
corruption that may lead to even worse conditions.

This is a slight refactoring of the same hardening done in PaX and Grsecurity.
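
A sketch of the consolidated form (with CONFIG_DEBUG_LIST disabled,
__list_add_valid() is expected to be a stub that always returns true, so
the fast path is unchanged):

  static inline void __list_add(struct list_head *new,
                                struct list_head *prev,
                                struct list_head *next)
  {
          if (!__list_add_valid(new, prev, next))
                  return;         /* corruption: do not touch the list */

          next-&gt;prev = new;
          new-&gt;next = next;
          new-&gt;prev = prev;
          WRITE_ONCE(prev-&gt;next, new);
  }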

Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Acked-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Acked-by: Rik van Riel &lt;riel@redhat.com&gt;
</content>
</entry>
<entry>
<title>list: Expand list_first_entry_or_null()</title>
<updated>2016-09-14T19:57:43+00:00</updated>
<author>
<name>Chris Wilson</name>
<email>chris@chris-wilson.co.uk</email>
</author>
<published>2016-07-23T18:27:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=12adfd882c5f37548acaba4f043a158b3c54468b'/>
<id>12adfd882c5f37548acaba4f043a158b3c54468b</id>
<content type='text'>
Due to the use of READ_ONCE() in list_empty(), the compiler cannot
optimise !list_empty() ? list_first_entry() : NULL very well. By
manually expanding list_first_entry_or_null() we can take advantage of
the READ_ONCE() to avoid the list element changing under the test,
while the compiler can generate smaller code.
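
Roughly, the macro goes from the conditional form to an open-coded
statement expression that reads head-&gt;next exactly once (a sketch; the
double-underscore temporaries are the usual convention for avoiding
argument re-evaluation):

  /* before */
  #define list_first_entry_or_null(ptr, type, member) \
          (!list_empty(ptr) ? list_first_entry(ptr, type, member) : NULL)

  /* after: one READ_ONCE() serves both the emptiness test and the
   * returned entry */
  #define list_first_entry_or_null(ptr, type, member) ({ \
          struct list_head *head__ = (ptr); \
          struct list_head *pos__ = READ_ONCE(head__-&gt;next); \
          pos__ != head__ ? list_entry(pos__, type, member) : NULL; \
  })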

Signed-off-by: Chris Wilson &lt;chris@chris-wilson.co.uk&gt;
Cc: "Paul E. McKenney" &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Dan Williams &lt;dan.j.williams@intel.com&gt;
Cc: Jan Kara &lt;jack@suse.cz&gt;
Cc: Josef Bacik &lt;jbacik@fb.com&gt;
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>hlist: Add hlist_is_singular_node() helper</title>
<updated>2016-07-07T08:35:07+00:00</updated>
<author>
<name>Thomas Gleixner</name>
<email>tglx@linutronix.de</email>
</author>
<published>2016-07-04T09:50:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=15dba1e37b5cfd7fab9bc84e0f05f35c918f4eef'/>
<id>15dba1e37b5cfd7fab9bc84e0f05f35c918f4eef</id>
<content type='text'>
Required to figure out whether the entry is the only one in the hlist.
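
A sketch of the helper: a node is the only one in its hlist when it has
no successor and its pprev back-pointer still points at the head's
-&gt;first field.

  static inline bool
  hlist_is_singular_node(struct hlist_node *n, struct hlist_head *h)
  {
          return !n-&gt;next &amp;&amp; n-&gt;pprev == &amp;h-&gt;first;
  }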

Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Reviewed-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Arjan van de Ven &lt;arjan@infradead.org&gt;
Cc: Chris Mason &lt;clm@fb.com&gt;
Cc: Eric Dumazet &lt;edumazet@google.com&gt;
Cc: George Spelvin &lt;linux@sciencehorizons.net&gt;
Cc: Josh Triplett &lt;josh@joshtriplett.org&gt;
Cc: Len Brown &lt;lenb@kernel.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Rik van Riel &lt;riel@redhat.com&gt;
Cc: rt@linutronix.de
Link: http://lkml.kernel.org/r/20160704094341.867631372@linutronix.de
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>list: kill list_force_poison()</title>
<updated>2016-03-09T23:43:42+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2016-03-09T22:08:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=d77a117e6871ff78a06def46583d23752593de60'/>
<id>d77a117e6871ff78a06def46583d23752593de60</id>
<content type='text'>
Given that uninitialized list_heads are being passed to list_add(), it
is inevitable that some of those uninitialized values will randomly
match the poison value, especially since a list_add() operation will
seed the stack with the poison value for later stack allocations to
trip over.
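
The pattern behind the reports is an on-stack node that is linked
without ever being initialized, roughly like this hypothetical
simplification of the semaphore/rwsem wait paths in the traces below
(struct waiter and wait_on() are illustrative names, not kernel APIs):

  struct waiter {
          struct list_head list;          /* never INIT_LIST_HEAD()ed */
          struct task_struct *task;
  };

  static void wait_on(struct list_head *wait_list)
  {
          struct waiter w;

          /* w.list still holds whatever the previous stack user left
           * behind, possibly a leftover poison value, when the debug
           * checks in list_add_tail() run. */
          list_add_tail(&amp;w.list, wait_list);
          /* ... sleep, then list_del(&amp;w.list) before returning ... */
  }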

For example, see these two false positive reports:

  list_add attempted on force-poisoned entry
  WARNING: at lib/list_debug.c:34
  [..]
  NIP [c00000000043c390] __list_add+0xb0/0x150
  LR [c00000000043c38c] __list_add+0xac/0x150
  Call Trace:
    __list_add+0xac/0x150 (unreliable)
    __down+0x4c/0xf8
    down+0x68/0x70
    xfs_buf_lock+0x4c/0x150 [xfs]

  list_add attempted on force-poisoned entry(0000000000000500),
   new-&gt;next == d0000000059ecdb0, new-&gt;prev == 0000000000000500
  WARNING: at lib/list_debug.c:33
  [..]
  NIP [c00000000042db78] __list_add+0xa8/0x140
  LR [c00000000042db74] __list_add+0xa4/0x140
  Call Trace:
    __list_add+0xa4/0x140 (unreliable)
    rwsem_down_read_failed+0x6c/0x1a0
    down_read+0x58/0x60
    xfs_log_commit_cil+0x7c/0x600 [xfs]

Fixes: commit 5c2c2587b132 ("mm, dax, pmem: introduce {get|put}_dev_pagemap() for dax-gup")
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Reported-by: Eryu Guan &lt;eguan@redhat.com&gt;
Tested-by: Eryu Guan &lt;eguan@redhat.com&gt;
Cc: Ross Zwisler &lt;ross.zwisler@linux.intel.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, dax, pmem: introduce {get|put}_dev_pagemap() for dax-gup</title>
<updated>2016-01-16T01:56:32+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2016-01-16T00:56:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=5c2c2587b13235bf8b5c9027589f22eff68bdf49'/>
<id>5c2c2587b13235bf8b5c9027589f22eff68bdf49</id>
<content type='text'>
get_dev_pagemap() enables paths like get_user_pages() to pin a
dynamically mapped pfn-range (devm_memremap_pages()) while the resulting
struct page objects are in use.  Unlike get_page(), it may fail if the
device is, or is in the process of being, disabled.  While the initial
lookup of the range may be an expensive list walk, the result is cached
to speed up subsequent lookups, which are likely to be in the same
mapped range.

devm_memremap_pages() now requires a reference counter to be specified
at init time.  For pmem this means moving request_queue allocation into
pmem_alloc() so the existing queue usage counter can track "device
pages".

ZONE_DEVICE pages always have an elevated count and will never be on an
LRU reclaim list.  That space in 'struct page' can be redirected to
other uses, but for safety this commit introduces a poison value that
will always trip the __list_add() assertion.  This allows half of the
struct list_head storage to be reclaimed, with some assurance backing
the assumption that the page count never goes to zero and a list_add()
is never attempted.
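
A hedged usage sketch (the second argument is the caller's cached pgmap
from a previous call, letting the repeated-lookup fast path skip the
list walk; the error code here is illustrative):

  struct dev_pagemap *pgmap = NULL;

  /* pin the mapping backing this pfn; fails if the device is disabled
   * or in the process of being disabled */
  pgmap = get_dev_pagemap(pfn, pgmap);
  if (!pgmap)
          return -EFAULT;
  /* ... operate on the ZONE_DEVICE pages ... */
  put_dev_pagemap(pgmap);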

Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Tested-by: Logan Gunthorpe &lt;logang@deltatee.com&gt;
Cc: Dave Hansen &lt;dave@sr71.net&gt;
Cc: Matthew Wilcox &lt;willy@linux.intel.com&gt;
Cc: Ross Zwisler &lt;ross.zwisler@linux.intel.com&gt;
Cc: Alexander Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>list: Use WRITE_ONCE() when initializing list_head structures</title>
<updated>2015-12-04T20:34:33+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-10-12T23:56:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=2f073848c3cc8aff2655ab7c46d8c0de90cf4e50'/>
<id>2f073848c3cc8aff2655ab7c46d8c0de90cf4e50</id>
<content type='text'>
Code that does lockless emptiness testing of non-RCU lists is relying
on INIT_LIST_HEAD() to write the list head's -&gt;next pointer atomically,
particularly when INIT_LIST_HEAD() is invoked from list_del_init().
This commit therefore adds WRITE_ONCE() to this function's pointer stores
that could affect the head's -&gt;next pointer.
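
The resulting initializer, as a sketch (the -&gt;prev store needs no
annotation because a lockless list_empty() only inspects -&gt;next):

  static inline void INIT_LIST_HEAD(struct list_head *list)
  {
          WRITE_ONCE(list-&gt;next, list);
          list-&gt;prev = list;
  }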

Reported-by: Andrey Konovalov &lt;andreyknvl@google.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>list: Use READ_ONCE() when testing for empty lists</title>
<updated>2015-11-23T18:37:35+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-09-21T00:03:16+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1658d35ead5d8dd76f2b2d6ad0e32c08d123faa2'/>
<id>1658d35ead5d8dd76f2b2d6ad0e32c08d123faa2</id>
<content type='text'>
Most of the list-empty-check macros (list_empty(), hlist_empty(),
hlist_bl_empty(), and hlist_nulls_empty()) use
an unadorned load to check the list header.  Given that these macros
are sometimes invoked without the protection of a lock, this is
not sufficient.  This commit therefore adds READ_ONCE() calls to
them.  This commit does not touch llist_empty() because it already
has the needed ACCESS_ONCE().
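
For the plain list_head case the annotated test looks like this sketch;
READ_ONCE() keeps the compiler from tearing, refetching, or caching the
-&gt;next load when the check runs without a lock:

  static inline int list_empty(const struct list_head *head)
  {
          return READ_ONCE(head-&gt;next) == head;
  }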

Reported-by: Dmitry Vyukov &lt;dvyukov@google.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>list: Use WRITE_ONCE() when adding to lists and hlists</title>
<updated>2015-11-23T18:37:35+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-09-21T05:02:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1c97be677f72b3c338312aecd36d8fff20322f32'/>
<id>1c97be677f72b3c338312aecd36d8fff20322f32</id>
<content type='text'>
Code that does lockless emptiness testing of non-RCU lists is relying
on the list-addition code to write the list head's -&gt;next pointer
atomically.  This commit therefore adds WRITE_ONCE() to list-addition
pointer stores that could affect the head's -&gt;next pointer.
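
A sketch of the annotated insertion: the new node is fully wired up
first, and the single WRITE_ONCE() store to prev-&gt;next is what makes it
reachable from the head, so a racing lockless list_empty() sees either
the old or the new pointer, never a torn value:

  static inline void __list_add(struct list_head *new,
                                struct list_head *prev,
                                struct list_head *next)
  {
          next-&gt;prev = new;
          new-&gt;next = next;
          new-&gt;prev = prev;
          WRITE_ONCE(prev-&gt;next, new);
  }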

Reported-by: Dmitry Vyukov &lt;dvyukov@google.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
</feed>
