Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Martin Schwidefsky:
"There is only one new feature in this pull for the 4.4 merge window,
most of it is small enhancements, cleanup and bug fixes:
- Add the s390 backend for the software dirty bit tracking. This
adds two new pgtable functions pte_clear_soft_dirty and
pmd_clear_soft_dirty which is why there is a hit to
arch/x86/include/asm/pgtable.h in this pull request.
- A series of cleanup patches for the AP bus, this includes the
removal of the support for two outdated crypto cards (PCICC and
PCICA).
- The irq handling / signaling on buffer full in the runtime
instrumentation code is dropped.
- Some micro optimizations: remove unnecessary memory barriers for a
couple of functions: [smp_]rmb, [smp_]wmb, atomics, bitops, and for
spin_unlock. Use the builtin bswap if available and make
test_and_set_bit_lock more cache friendly.
- Statistics and a tracepoint for the diagnose calls to the
hypervisor.
- The CPU measurement facility support to sample KVM guests is
improved.
- The vector instructions are now always enabled for user space
processes if the hardware has the vector facility. This simplifies
the FPU handling code. The fpu-internal.h header is split into fpu
internals, api and types just like x86.
- Cleanup and improvements for the common I/O layer.
- Rework udelay to solve a problem with kprobe. udelay has busy loop
semantics but still uses an idle processor state for the wait"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (66 commits)
s390: remove runtime instrumentation interrupts
s390/cio: de-duplicate subchannel validation
s390/css: unneeded initialization in for_each_subchannel
s390/Kconfig: use builtin bswap
s390/dasd: fix disconnected device with valid path mask
s390/dasd: fix invalid PAV assignment after suspend/resume
s390/dasd: fix double free in dasd_eckd_read_conf
s390/kernel: fix ptrace peek/poke for floating point registers
s390/cio: move ccw_device_stlck functions
s390/cio: move ccw_device_call_handler
s390/topology: reduce per_cpu() invocations
s390/nmi: reduce size of percpu variable
s390/nmi: fix terminology
s390/nmi: remove casts
s390/nmi: remove pointless error strings
s390: don't store registers on disabled wait anymore
s390: get rid of __set_psw_mask()
s390/fpu: split fpu-internal.h into fpu internals, api, and type headers
s390/dasd: fix list_del corruption after lcu changes
s390/spinlock: remove unneeded serializations at unlock
...
|
|
cio_validate_io_subchannel() and cio_validate_msg_subchannel() are
identical, as the called functions already take care of the
differences between subchannel types.
Just inline the code into the only user,
cio_validate_subchannel(), instead.
Signed-off-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The ret variable is always set by the fn function.
There is no need to initialize it.
Signed-off-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Path verification is either done via dasd_eckd_read_conf() which is
triggered during online processing and resume or via
do_path_verification_work() which is triggered after path events.
The dasd_eckd_read_conf() version added paths unconditionally and did
not check if the path mask was empty. This led to devices having the
disconnected stop flag set but a valid path mask, so they were not
working although they had paths validated successfully. After a resume
this state could not be resolved even by adding further paths.
Fix by checking for an empty path mask in dasd_eckd_read_conf() and
clearing the device stop bits for a newly added channel path.
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
For a valid PAV assignment the DASD driver needs to notice possibly
changed configuration data. Thus the failing of read configuration
data should also fail the device restore to prevent invalid PAV
assignment. The failed device may get restored once additional paths
become available later on.
If the restore fails after the device was added to the lcu alias
handling, it needs to be removed from the alias handling before exiting
the restore function.
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The configuration data is stored per path, and the first valid
configuration data is also stored per device. When dasd_eckd_read_conf
is called again after a path was lost, the device configuration data is
cleared, but possibly not the per-path configuration data. This might
lead to a double free when the lost path becomes operational again.
Fix by clearing all per-path configuration data when the first valid
configuration data is received and stored.
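A hedged sketch of the free-then-clear pattern described here; the field name
and the loop bound of eight channel paths are assumptions, not the exact
dasd_eckd structures:

    for (i = 0; i < 8; i++) {
            /* drop stale per-path data and clear the pointer so a path
             * that comes back cannot be freed a second time */
            kfree(private->path_conf_data[i]);
            private->path_conf_data[i] = NULL;
    }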
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
device_ops.c should only contain functions that are called by ccw device
drivers. Move the cio internal functions that handle unconditional
reserve + release to device_pgid.c
Acked-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
device_ops.c should only contain functions that are called by ccw device
drivers. Move the cio internal function ccw_device_call_handler to
device_fsm.c where it's used. Remove some useless comments while at it.
Acked-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Remove all the casts to and from the machine check interruption code.
This patch changes struct mci to a union, which contains an anonymous
structure with the already known bits and, in addition, an unsigned
long field that holds the raw machine check interruption code.
This allows the interruption code value to be assigned and decoded
directly, without the need for all those casts we had before.
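A minimal sketch of that pattern; the bit names below are illustrative and
not the full s390 machine check layout (u64 comes from <linux/types.h>):

    union mci {
            unsigned long val;      /* raw machine check interruption code */
            struct {
                    u64 sd :  1;    /* system damage */
                    u64 pd :  1;    /* instruction-processing damage */
                    u64    : 62;    /* remaining bits omitted in this sketch */
            };
    };

The interruption code can then be copied into mci.val with one assignment and
the named bits read directly, no casts required.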
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
A summary unit check occurs when the lcu updates the PAV configuration,
e.g. the base PAV assignment or the PAV mode in general. This requires a
reset of the driver's internal pavgroups. Therefore the alias devices are
flushed and moved via a temporary list to the active_devices list,
where they are not associated with a pavgroup. In conjunction with
updates to the base device the pavgroup may be removed since both
base_list and alias_list are empty. Unfortunately, during the alias flush
and move from alias_list to the active_devices list, the pavgroup
pointer is not cleared in the device private structure. This leads to
a list_del corruption if another lcu update tries to move the device
into the non-existent pavgroup.
Fix by clearing the pavgroup pointer after the alias device was moved
to the active_devices list.
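Schematically, with identifiers taken from the description above rather than
the exact dasd_alias code:

    /* move the alias back to active_devices and forget the stale group,
     * so a later lcu update cannot dereference it */
    list_move(&device->alias_list, &lcu->active_devices);
    private->pavgroup = NULL;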
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
There is a system work queue, system_long_wq, for long-running work.
Use this work queue for the AP bus scan loop.
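For illustration, assuming a work item named ap_scan_work that wraps the
existing scan function:

    static void ap_scan_bus(struct work_struct *unused);
    static DECLARE_WORK(ap_scan_work, ap_scan_bus);

    /* run the potentially long bus scan on system_long_wq instead of
     * the default system workqueue */
    queue_work(system_long_wq, &ap_scan_work);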
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Remove the code for the really old crypto cards, PCICC and PCICA.
These cards have been out of service for several years.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Replace the two fields 'unregistered' and 'reset' with a device
state that has five possible values. Introduce two events for the AP
devices, device poll and device timeout. With the state machine it is
easier to deal with device initialization and suspend/resume. Device
polling is simpler as well; the arcane 'flags' passing is gone.
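As an illustration only (the state and event names are assumptions, not
necessarily the final ap_bus code):

    enum ap_state {
            AP_STATE_RESET_START,
            AP_STATE_RESET_WAIT,
            AP_STATE_SETIRQ_WAIT,
            AP_STATE_IDLE,
            AP_STATE_UNBOUND,       /* five states replacing the two flags */
    };

    enum ap_event {
            AP_EVENT_POLL,
            AP_EVENT_TIMEOUT,
    };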
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
If an AP device is removed while messages are still pending, the requests
are cancelled by calling the message receive function with an error pointer
for the reply. The message type receive handler recognizes this and creates
a fake hardware error TYPE82_RSP_CODE / REP82_ERROR_MACHINE_FAILURE.
The message with the hardware error then causes a printk and a return
code of -EAGAIN.
Replace the intricate scheme with an explicit return code for this situation
and avoid the error message.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Set the configuration timer at the end of the ap_scan_bus function.
Make use of setup_timer and remove some unnecessary add_timer, mod_timer
and del_timer_sync calls. Replace the complicated timer_pending, mod_timer
and add_timer code in ap_config_time_store with a simple mod_timer.
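Schematically, as a fragment; the timer and handler names follow the
description and may differ in detail. mod_timer() also activates a timer that
is not yet pending, so the old timer_pending()/add_timer() special cases can
be dropped:

    static struct timer_list ap_config_timer;
    static void ap_config_timeout(unsigned long ptr);

    setup_timer(&ap_config_timer, ap_config_timeout, 0);

    /* restart or start the configuration timer in one call */
    mod_timer(&ap_config_timer, jiffies + ap_config_time * HZ);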
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
If there are no devices on the AP bus, there will not be a single
call to the per-device ap_bus_suspend function. Even worse,
there will not be a call to the per-device ap_bus_resume either,
and the AP bus will fail to resume correctly.
Introduce a bus-specific dev_pm_ops to suspend / resume the AP
bus related things. While we are at it, simplify the power management
code of the AP bus.
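A hedged sketch of a bus-level dev_pm_ops; the callback names and their
contents are assumptions:

    static int ap_pm_suspend(struct device *dev)
    {
            return 0;       /* bus-wide suspend work would go here */
    }

    static int ap_pm_resume(struct device *dev)
    {
            return 0;       /* re-initialize the AP bus even without devices */
    }

    static SIMPLE_DEV_PM_OPS(ap_bus_pm_ops, ap_pm_suspend, ap_pm_resume);

    static struct bus_type ap_bus_type = {
            .name = "ap",
            .pm   = &ap_bus_pm_ops,
    };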
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The ap_queue_message function will call device_unregister if the
unregistered field of the device has been set while trying to queue
a message. This races with other device_unregister calls, e.g. from
the ap_scan_bus. Remove the call to device_unregister from
ap_queue_message and let ap_scan_bus deal with it.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The ap_query_configuration function allocates the ap_config_info
structure, but there is no code to free the structure.
Allocate the structure in the module_init function and free it
again in module_exit.
While we are at it, simplify a few functions with regard to the
AP configuration data.
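A minimal sketch of the changed lifetime; ap_config_info is the structure
named above, the init/exit function names are assumptions:

    static struct ap_config_info *ap_configuration;

    static int __init ap_module_init(void)
    {
            ap_configuration = kzalloc(sizeof(*ap_configuration), GFP_KERNEL);
            if (!ap_configuration)
                    return -ENOMEM;
            return 0;
    }

    static void __exit ap_module_exit(void)
    {
            kfree(ap_configuration);
    }

    module_init(ap_module_init);
    module_exit(ap_module_exit);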
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
ap_test_queue, ap_query_facilities and __ap_query_functions all use
the same PQAP(TAPQ) command. Consolidate the three into a single
ap_test_queue function that returns the AP status and the 64-bit
result. The exception table entry for PQAP(TAPQ) can be avoided
if the T bit for the APFT facility is only set when test_facility(15)
indicates that the facility is present.
Integrate ap_query_functions into ap_query_queue to avoid calling
PQAP(TAPQ) twice.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The sclp console and tty code currently uses several message text
objects in a single message event to print several lines with one
SCCB. This causes the output of these lines to be fused into a
block, which is noticeable when selecting text in the operating system
message panel.
Instead, use several message events with a single message text object
each to print every line on its own. This changes the SCCB layout
from
struct sccb_header
struct evbuf_header
struct mdb_header
struct go
struct mto
...
struct mto
to
struct sccb_header
struct evbuf_header
struct mdb_header
struct go
struct mto
...
struct evbuf_header
struct mdb_header
struct go
struct mto
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Introduce /sys/kernel/debug/diag_stat with statistics on how many diagnose
calls have been made by each CPU in the system.
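A hedged sketch of such a per-CPU counter (the enum entries are examples;
needs <linux/percpu.h>):

    enum diag_stat_enum {
            DIAG_STAT_X044,
            DIAG_STAT_X09C,
            NR_DIAG_STAT,
    };

    struct diag_stat {
            unsigned int counter[NR_DIAG_STAT];
    };

    static DEFINE_PER_CPU(struct diag_stat, diag_stat);

    void diag_stat_inc(enum diag_stat_enum nr)
    {
            this_cpu_inc(diag_stat.counter[nr]);
    }

The debugfs file then only has to sum the per-CPU counters when it is read.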
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
We often need to correlate an 8 bit path mask with the position
in a channel path array. Introduce and use pathmask_to_pos for
that task.
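A plausible implementation, assuming the left-most bit (0x80) of the path mask
corresponds to position 0:

    /* ffs() returns 8 for 0x80 and 1 for 0x01, so this maps the
     * left-most set bit to position 0 and the right-most to 7 */
    static inline u8 pathmask_to_pos(u8 mask)
    {
            return 8 - ffs(mask);
    }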
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
During resume from hibernate we already reenable measurement block
updates on a per device basis. In addition to that we also need to
activate channel measurement globally using the set channel monitor
instruction.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Extended measurement blocks need to be 64 byte aligned. To achieve this,
128 bytes are allocated for each measurement block and an align callback
returns a 64 byte aligned address inside this area.
Replace this code with kmem_cache allocations.
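A hedged sketch; the cache name, the structure and the init function are
assumptions, but kmem_cache_create() takes the required 64-byte alignment
directly, so the over-allocation and the align callback go away:

    struct cmbe {
            u32 data[16];           /* 64 bytes of measurement data */
    };

    static struct kmem_cache *cmbe_cache;

    static int __init cmbe_cache_init(void)
    {
            cmbe_cache = kmem_cache_create("cmbe_cache", sizeof(struct cmbe),
                                           64, 0, NULL);
            return cmbe_cache ? 0 : -ENOMEM;
    }

Individual blocks are then taken with kmem_cache_zalloc(cmbe_cache,
GFP_KERNEL) and returned with kmem_cache_free().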
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The measurement block for the extended measurement data is not freed when
switching off per-device measurement. Free the measurement block after the
hardware has stopped accessing it.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
During allocation of extended measurement blocks we check if the device is
already active for channel measurement and add the device to a list of
devices with active channel measurement. The check is done under ccwlock
protection and the list modification is guarded by a different lock.
To guarantee that both states are in sync make sure that both locks
are held during the allocation process (like it's already done for the
"normal" measurement block allocation).
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Devices with active channel measurement are included in a list. When a
device is removed without deactivating channel measurement first, the
list_head is freed but still used. Fix this by making sure that
channel measurement is deactivated during device deregistration.
For devices that we deregister because they are no longer accessible,
deactivating channel measurement will fail. In this case we can report
success because the firmware will no longer access the measurement block.
In addition to these steps, keep an extra device reference while
channel measurement is active.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Hold the device_lock during [de]activation of the channel measurement
block to synchronize concurrent usage of these functions.
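Schematically, with enable_cmf() standing in for the existing cio activation
interface:

    device_lock(&cdev->dev);
    ret = enable_cmf(cdev);         /* or disable_cmf(cdev) on teardown */
    device_unlock(&cdev->dev);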
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Ensure that we hold the ccwlock when accessing private data. Return errors
that occur during measurement enabling to userspace. Apply some cleanups
while at it.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
We were able to reduce the CPU overhead of big paging scenarios
when announcing our paging disks as non-rotational.
Almost all dasd devices are implemented in storage servers with
cache, raid, striping and lots of magic. There is no point in
optimizing the disk schedulers and swap code for single-platter,
moving-arm rotational disks. Given the complexity of the setup
and the fact that this change is mostly about disabling the additional
overhead in the swap code, let's keep the other functionality unchanged
and not disable this device as an entropy source, unlike other
non-rotational devices.
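A hedged sketch of the queue flag side of this; block->request_queue is
assumed to be the dasd block device queue:

    /* announce the queue as non-rotational; QUEUE_FLAG_ADD_RANDOM is
     * left untouched, so the device keeps feeding the entropy pool */
    queue_flag_set_unlocked(QUEUE_FLAG_NONROT, block->request_queue);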
Suggested-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
In the past only even modulus sizes were allowed for RSA keys in
CRT format. This restriction was based on the limited RSA key generation
of older crypto adapters, which provided only even modulus sizes. This
restriction is no longer valid.
Lift the restriction so that crypto requests with an odd RSA modulus
length in CRT format can be serviced.
Signed-off-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
If HiperSockets Completion Queueing is enabled, qdio always
issues a warning, since the condition is always met.
This patch fixes the condition in WARN_ON_ONCE that was always
true.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com>
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In layer2 mode of the qeth driver, MAC address lists
from struct net_device require mapping to the OSA card.
The existing implementation is inefficient for lists with
more than a few MAC addresses, since for every
ndo_set_rx_mode callback it removes all MAC addresses first
and then registers the current MAC address list.
This patch changes the implementation of the ndo_set_rx_mode callback
in qeth to only perform hardware registration/removal for
new/deleted addresses. To speed up the lookup of MAC addresses,
registered addresses are kept in a hashtable instead of a
linear list.
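A hedged sketch of the data structure side; the struct, the hash key and the
helper are illustrative, not the exact qeth code (needs <linux/hashtable.h>,
<linux/etherdevice.h> and <linux/crc32.h>):

    struct qeth_mac {
            u8 mac_addr[ETH_ALEN];
            struct hlist_node hnode;
    };

    static DEFINE_HASHTABLE(mac_htable, 4);

    static void qeth_mac_hash_add(const u8 *addr)
    {
            struct qeth_mac *mac = kzalloc(sizeof(*mac), GFP_ATOMIC);

            if (!mac)
                    return;
            ether_addr_copy(mac->mac_addr, addr);
            hash_add(mac_htable, &mac->hnode, crc32_le(~0, addr, ETH_ALEN));
    }

On each ndo_set_rx_mode call the table is compared against the current
net_device list, and only the differences are registered with or removed from
the card.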
Signed-off-by: Lakhvich Dmitriy <ldmitriy@ru.ibm.com>
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Reviewed-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com>
Reviewed-by: Thomas Richter <tmricht@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for GRO (generic receive offload) in the layer 2
part of the qeth device driver. This results in a performance
improvement when GRO is turned on.
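Schematically; card->dev and card->napi as the qeth net_device and NAPI
context are assumptions here:

    /* advertise GRO on the layer 2 net_device */
    card->dev->features |= NETIF_F_GRO;

    /* in the NAPI poll path, hand frames to the GRO engine instead of
     * calling netif_receive_skb() directly */
    napi_gro_receive(&card->napi, skb);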
Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Conflicts:
net/ipv4/arp.c
The net/ipv4/arp.c conflict was one commit adding a new
local variable while another commit was deleting one.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The iucv code uses arrays as arguments. Even though this does not
really cause a problem, it could be misleading, since the compiler
turns array arguments into plain pointer arguments. To be more
precise, this patch changes the array arguments into pointers.
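A hypothetical example of the change (not the actual iucv prototypes):

    /* before: the [8] is only documentation, the parameter is adjusted
     * to a pointer by the compiler anyway */
    int example_path_connect(u8 userid[8]);

    /* after: the declaration says what is really passed */
    int example_path_connect(u8 *userid);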
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Checksum offloading for send and receive is already
supported for layer 3 (the IP layer). This patch
adds support for RX and TX hardware checksum offloading
for layer 2 (the MAC layer). The hardware calculates the checksum
for IP, UDP and TCP packets.
This patch moves the hardware checksum offloading setup
to the set of common functions in qeth_core_main.c.
Layer 2 and layer 3 now simply call the same common functions.
Also note that TX checksum offloading is always enabled.
The device driver relies on the TCP/IP stack to make use of
this feature.
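As an illustration of the feature flags involved (card->dev again assumed to
be the layer 2 net_device):

    /* advertise TX and RX checksum offload to the stack */
    card->dev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM;

    /* on receive, a frame already verified by the OSA hardware is
     * marked so the stack does not checksum it again */
    skb->ip_summed = CHECKSUM_UNNECESSARY;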
Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Reviewed-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
An OSA-Express port name was required to identify a shared OSA port.
All operating system instances that shared the port had to use the
same port name. This requirement no longer applies.
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Users are not allowed to write to the bridge_state sysfs file.
Fix the attribute permissions so they do not mislead the user.
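A hedged sketch of a read-only sysfs attribute; the show routine name is an
assumption:

    /* mode 0444 and a NULL store callback: readable, never writable */
    static DEVICE_ATTR(bridge_state, 0444, qeth_bridge_state_show, NULL);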
Signed-off-by: Lakhvich Dmitriy <ldmitriy@ru.ibm.com>
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Reported-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Reviewed-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com>
Reviewed-by: Thomas Richter <tmricht@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A length specifier in the %pM format is not supported (at least, it is
not documented). Remove it, along with an extraneous '&' before the array.
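For example, with mac being the 6-byte address array (message text is
illustrative, not the actual qeth output):

    /* %pM always prints a full 6-byte MAC address; no length modifier,
     * and the array name already decays to the required pointer */
    pr_warn("could not register MAC %pM\n", mac);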
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com>
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pull virtio fixes and cleanups from Michael Tsirkin:
"This fixes the virtio-test tool, and improves the error handling for
virtio-ccw"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
virtio/s390: handle failures of READ_VQ_CONF ccw
tools/virtio: propagate V=X to kernel build
vhost: move features to core
tools/virtio: fix build after 4.2 changes
|
|
In virtio_ccw_read_vq_conf() the return value of ccw_io_helper()
was not checked.
If the configuration could not be read properly, we'd wrongly assume a
queue size of 0.
Let's propagate any I/O error to virtio_ccw_setup_vq() so it may
properly fail.
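One possible shape of the change, using the identifiers visible in
virtio_ccw.c; the actual patch may hand the queue size back through a pointer
instead:

    ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
    if (ret)
            return ret;     /* let virtio_ccw_setup_vq() fail properly */
    return vcdev->config_block->num;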
Signed-off-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Instead of a custom approach, let's use the recently introduced
seq_hex_dump() helper.
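A typical call looks like this (the buffer and length names are placeholders):

    /* 16 bytes per row, single-byte groups, no offset prefix and no
     * trailing ASCII column */
    seq_hex_dump(m, "       ", DUMP_PREFIX_NONE, 16, 1, buf, len, false);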
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Ingo Tuchscherer <ingo.tuchscherer@de.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Joe Perches <joe@perches.com>
Cc: Tadeusz Struk <tadeusz.struk@intel.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Vladimir Kondratiev <qca_vkondrat@qca.qualcomm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Merge tag 'libnvdimm-for-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm updates from Dan Williams:
"This update has successfully completed a 0day-kbuild run and has
appeared in a linux-next release. The changes outside of the typical
drivers/nvdimm/ and drivers/acpi/nfit.[ch] paths are related to the
removal of IORESOURCE_CACHEABLE, the introduction of memremap(), and
the introduction of ZONE_DEVICE + devm_memremap_pages().
Summary:
- Introduce ZONE_DEVICE and devm_memremap_pages() as a generic
mechanism for adding device-driver-discovered memory regions to the
kernel's direct map.
This facility is used by the pmem driver to enable pfn_to_page()
operations on the page frames returned by DAX ('direct_access' in
'struct block_device_operations').
For now, the 'memmap' allocation for these "device" pages comes
from "System RAM". Support for allocating the memmap from device
memory will arrive in a later kernel.
- Introduce memremap() to replace usages of ioremap_cache() and
ioremap_wt(). memremap() drops the __iomem annotation for these
mappings to memory that do not have i/o side effects. The
replacement of ioremap_cache() with memremap() is limited to the
pmem driver to ease merging the api change in v4.3.
Completion of the conversion is targeted for v4.4.
- Similar to the usage of memcpy_to_pmem() + wmb_pmem() in the pmem
driver, update the VFS DAX implementation and PMEM api to provide
persistence guarantees for kernel operations on a DAX mapping.
- Convert the ACPI NFIT 'BLK' driver to map the block apertures as
cacheable to improve performance.
- Miscellaneous updates and fixes to libnvdimm including support for
issuing "address range scrub" commands, clarifying the optimal
'sector size' of pmem devices, a clarification of the usage of the
ACPI '_STA' (status) property for DIMM devices, and other minor
fixes"
* tag 'libnvdimm-for-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (34 commits)
libnvdimm, pmem: direct map legacy pmem by default
libnvdimm, pmem: 'struct page' for pmem
libnvdimm, pfn: 'struct page' provider infrastructure
x86, pmem: clarify that ARCH_HAS_PMEM_API implies PMEM mapped WB
add devm_memremap_pages
mm: ZONE_DEVICE for "device memory"
mm: move __phys_to_pfn and __pfn_to_phys to asm/generic/memory_model.h
dax: drop size parameter to ->direct_access()
nd_blk: change aperture mapping from WC to WB
nvdimm: change to use generic kvfree()
pmem, dax: have direct_access use __pmem annotation
dax: update I/O path to do proper PMEM flushing
pmem: add copy_from_iter_pmem() and clear_pmem()
pmem, x86: clean up conditional pmem includes
pmem: remove layer when calling arch_has_wmb_pmem()
pmem, x86: move x86 PMEM API to new pmem.h header
libnvdimm, e820: make CONFIG_X86_PMEM_LEGACY a tristate option
pmem: switch to devm_ allocations
devres: add devm_memremap
libnvdimm, btt: write and validate parent_uuid
...
|
|
Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking and atomic updates from Ingo Molnar:
"Main changes in this cycle are:
- Extend atomic primitives with coherent logic op primitives
(atomic_{or,and,xor}()) and deprecate the old partial APIs
(atomic_{set,clear}_mask())
The old ops were incoherent with incompatible signatures across
architectures and with incomplete support. Now every architecture
supports the primitives consistently (by Peter Zijlstra)
- Generic support for 'relaxed atomics':
- _acquire/release/relaxed() flavours of xchg(), cmpxchg() and {add,sub}_return()
- atomic_read_acquire()
- atomic_set_release()
This came out of porting qrwlock code to arm64 (by Will Deacon)
- Clean up the fragile static_key APIs that were causing repeat bugs,
by introducing a new one:
DEFINE_STATIC_KEY_TRUE(name);
DEFINE_STATIC_KEY_FALSE(name);
which define a key of different types with an initial true/false
value.
Then allow:
static_branch_likely()
static_branch_unlikely()
to take a key of either type and emit the right instruction for the
case. To be able to know the 'type' of the static key we encode it
in the jump entry (by Peter Zijlstra)
- Static key self-tests (by Jason Baron)
- qrwlock optimizations (by Waiman Long)
- small futex enhancements (by Davidlohr Bueso)
- ... and misc other changes"
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (63 commits)
jump_label/x86: Work around asm build bug on older/backported GCCs
locking, ARM, atomics: Define our SMP atomics in terms of _relaxed() operations
locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h
locking/qrwlock: Make use of _{acquire|release|relaxed}() atomics
locking/qrwlock: Implement queue_write_unlock() using smp_store_release()
locking/lockref: Remove homebrew cmpxchg64_relaxed() macro definition
locking, asm-generic: Add _{relaxed|acquire|release}() variants for 'atomic_long_t'
locking, asm-generic: Rework atomic-long.h to avoid bulk code duplication
locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations
locking, compiler.h: Cast away attributes in the WRITE_ONCE() magic
locking/static_keys: Make verify_keys() static
jump label, locking/static_keys: Update docs
locking/static_keys: Provide a selftest
jump_label: Provide a self-test
s390/uaccess, locking/static_keys: employ static_branch_likely()
x86, tsc, locking/static_keys: Employ static_branch_likely()
locking/static_keys: Add selftest
locking/static_keys: Add a new static_key interface
locking/static_keys: Rework update logic
locking/static_keys: Add static_key_{en,dis}able() helpers
...
|
|
Pull core block updates from Jens Axboe:
"This first core part of the block IO changes contains:
- Cleanup of the bio IO error signaling from Christoph. We used to
rely on the uptodate bit and passing around of an error; now we
store the error in the bio itself.
- Improvement of the above from myself, by shrinking the bio size
down again to fit in two cachelines on x86-64.
- Revert of the max_hw_sectors cap removal from an earlier revision,
from Jeff Moyer. This caused performance regressions in various
tests. Reinstate the limit, bump it to a more reasonable size
instead.
- Make /sys/block/<dev>/queue/discard_max_bytes writeable, by me.
Most devices have huge trim limits, which can cause nasty latencies
when deleting files. Enable the admin to configure the size down.
We will look into having a more sane default instead of UINT_MAX
sectors.
- Improvement of the SGP gaps logic from Keith Busch.
- Enable the block core to handle arbitrarily sized bios, which
enables a nice simplification of bio_add_page() (which is an IO hot
path). From Kent.
- Improvements to the partition io stats accounting, making it
faster. From Ming Lei.
- Also from Ming Lei, a basic fixup for overflow of the sysfs pending
file in blk-mq, as well as a fix for a blk-mq timeout race
condition.
- Ming Lin has been carrying Kent's above-mentioned patches forward
for a while, and testing them. Ming also did a few fixes around
that.
- Sasha Levin found and fixed a use-after-free problem introduced by
the bio->bi_error changes from Christoph.
- Small blk cgroup cleanup from Viresh Kumar"
* 'for-4.3/core' of git://git.kernel.dk/linux-block: (26 commits)
blk: Fix bio_io_vec index when checking bvec gaps
block: Replace SG_GAPS with new queue limits mask
block: bump BLK_DEF_MAX_SECTORS to 2560
Revert "block: remove artifical max_hw_sectors cap"
blk-mq: fix race between timeout and freeing request
blk-mq: fix buffer overflow when reading sysfs file of 'pending'
Documentation: update notes in biovecs about arbitrarily sized bios
block: remove bio_get_nr_vecs()
fs: use helper bio_add_page() instead of open coding on bi_io_vec
block: kill merge_bvec_fn() completely
md/raid5: get rid of bio_fits_rdev()
md/raid5: split bio for chunk_aligned_read
block: remove split code in blkdev_issue_{discard,write_same}
btrfs: remove bio splitting and merge_bvec_fn() calls
bcache: remove driver private bio splitting code
block: simplify bio_add_page()
block: make generic_make_request handle arbitrarily sized bios
blk-cgroup: Drop unlikely before IS_ERR(_OR_NULL)
block: don't access bio->bi_error after bio_put()
block: shrink struct bio down to 2 cache lines again
...
|
|
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Martin Schwidefsky:
"The big one is support for fake NUMA, splitting a really large machine
in more manageable piece improves performance in some cases, e.g. for
a KVM host.
The FICON Link Incident handling has been improved, this helps the
operator to identify degraded or non-operational FICON connections.
The save and restore of floating point and vector registers has been
overhauled to allow the future use of vector registers in the kernel.
A few small enhancements, magic sys-requests for the vt220 console via
SCLP, some more assembler code has been converted to C, the PCI error
handling is improved.
And the usual cleanup and bug fixing"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (59 commits)
s390/jump_label: Use %*ph to print small buffers
s390/sclp_vt220: support magic sysrequests
s390/ctrlchar: improve handling of magic sysrequests
s390/numa: remove superfluous ARCH_WANT defines
s390/3270: redraw screen on unsolicited device end
s390/dcssblk: correct out of bounds array indexes
s390/mm: simplify page table alloc/free code
s390/pci: move debug messages to debugfs
s390/nmi: initialize control register 0 earlier
s390/zcrypt: use msleep() instead of mdelay()
s390/hmcdrv: fix interrupt registration
s390/setup: fix novx parameter
s390/uaccess: remove uaccess_primary kernel parameter
s390: remove unneeded sizeof(void *) comparisons
s390/facilities: remove transactional-execution bits
s390/numa: re-add DIE sched_domain_topology_level
s390/dasd: enhance CUIR scope detection
s390/dasd: fix failing path verification
s390/vdso: emit a GNU hash
s390/numa: make core to node mapping data dynamic
...
|
|
None of the implementations currently use it. The common
bdev_direct_access() entry point handles all the size checks before
calling ->direct_access().
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
Implement magic sysrequest handling for the VT220 terminal (also known as
the Integrated ASCII console on the HMC/SE).
To invoke a "magic sysrequest" function, press "Ctrl+o" followed by a
second character that designates the debugging function.
The handling of the sysrq is scheduled away from the SCLP IRQ context
because large amounts of sysrq output might fill up the console buffers;
the console might then deadlock because it cannot empty the buffers while
still in the receiving IRQ context. This behavior is the same as for the
SCLP console.
Reported-by: Horst Weber <hweber@de.ibm.com>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Extract the sysrq handling from the ctrlchar_handle() into a separate
function that can be directly used by other users.
Introduce a new sysrq_work structure to embed the work_struct and to
specify the magic sysrq function to be invoked.
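A hedged sketch of such a structure and its work function; handle_sysrq() is
the existing kernel entry point, the helper name is taken from the
description:

    struct sysrq_work {
            int key;                        /* magic sysrq function to invoke */
            struct work_struct work;
    };

    static void sysrq_work_fn(struct work_struct *work)
    {
            struct sysrq_work *sysrq = container_of(work, struct sysrq_work, work);

            handle_sysrq(sysrq->key);
    }

    void schedule_sysrq_work(struct sysrq_work *sw)
    {
            INIT_WORK(&sw->work, sysrq_work_fn);
            schedule_work(&sw->work);
    }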
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|