When using the Promise TX2+ SATA controller on PA-RISC, the system often
crashes with kernel panic, for example just writing data with the dd
utility will make it crash.
Kernel panic - not syncing: drivers/parisc/sba_iommu.c: I/O MMU @ 000000000000a000 is out of mapping resources
CPU: 0 PID: 18442 Comm: mkspadfs Not tainted 4.4.0-rc2 #2
Backtrace:
[<000000004021497c>] show_stack+0x14/0x20
[<0000000040410bf0>] dump_stack+0x88/0x100
[<000000004023978c>] panic+0x124/0x360
[<0000000040452c18>] sba_alloc_range+0x698/0x6a0
[<0000000040453150>] sba_map_sg+0x260/0x5b8
[<000000000c18dbb4>] ata_qc_issue+0x264/0x4a8 [libata]
[<000000000c19535c>] ata_scsi_translate+0xe4/0x220 [libata]
[<000000000c19a93c>] ata_scsi_queuecmd+0xbc/0x320 [libata]
[<0000000040499bbc>] scsi_dispatch_cmd+0xfc/0x130
[<000000004049da34>] scsi_request_fn+0x6e4/0x970
[<00000000403e95a8>] __blk_run_queue+0x40/0x60
[<00000000403e9d8c>] blk_run_queue+0x3c/0x68
[<000000004049a534>] scsi_run_queue+0x2a4/0x360
[<000000004049be68>] scsi_end_request+0x1a8/0x238
[<000000004049de84>] scsi_io_completion+0xfc/0x688
[<0000000040493c74>] scsi_finish_command+0x17c/0x1d0
The cause of the crash is not exhaustion of the IOMMU space; there are
plenty of free pages. The function sba_alloc_range is called with size
0x11000, thus the pages_needed variable is 0x11. The function
sba_search_bitmap is called with bits_wanted 0x11 and boundary size is
0x10 (because dma_get_seg_boundary(dev) returns 0xffff).
The function sba_search_bitmap attempts to allocate 17 pages that must not
cross a 16-page boundary - it can't satisfy this requirement
(iommu_is_span_boundary always returns true) and fails even if there are
many free entries in the IOMMU space.
How did it happen that we try to allocate 17 pages that don't cross a
16-page boundary? The cause is in the function iommu_coalesce_chunks. This
function tries to coalesce adjacent entries in the scatterlist. The
function does several checks if it may coalesce one entry with the next,
one of those checks is this:
	if (startsg->length + dma_len > max_seg_size)
		break;
When it finishes coalescing adjacent entries, it allocates the mapping:
	sg_dma_len(contig_sg) = dma_len;
	dma_len = ALIGN(dma_len + dma_offset, IOVP_SIZE);
	sg_dma_address(contig_sg) =
		PIDE_FLAG
		| (iommu_alloc_range(ioc, dev, dma_len) << IOVP_SHIFT)
		| dma_offset;
It is possible that (startsg->length + dma_len > max_seg_size) is false
(we are just near the 0x10000 max_seg_size boundary), so the function
decides to coalesce this entry with the next entry. When the coalescing
succeeds, the function performs
dma_len = ALIGN(dma_len + dma_offset, IOVP_SIZE);
And now, because of non-zero dma_offset, dma_len is greater than 0x10000.
iommu_alloc_range (a pointer to sba_alloc_range) is called and it attempts
to allocate 17 pages for a device that must not cross a 16-page boundary.
To fix the bug, we must make sure that dma_len after addition of
dma_offset and alignment doesn't cross the segment boundary. I.e. change
	if (startsg->length + dma_len > max_seg_size)
		break;
to
	if (ALIGN(dma_len + dma_offset + startsg->length, IOVP_SIZE) > max_seg_size)
		break;
This patch makes this change (it precalculates max_seg_boundary at the
beginning of the function iommu_coalesce_chunks). I also added a check
that the mapping length doesn't exceed dma_get_seg_boundary(dev) (it is
not needed for Promise TX2+ SATA, but it may be needed for other devices
that have dma_get_seg_boundary lower than dma_get_max_seg_size).
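Sketching the resulting shape of iommu_coalesce_chunks() (variable names as
above; the boundary is folded into max_seg_size once, and the coalescing test
aligns before comparing):
	/* sketch only: precompute the effective per-segment limit */
	unsigned int max_seg_size = dma_get_max_seg_size(dev);
	unsigned int max_seg_boundary = dma_get_seg_boundary(dev) + 1;

	if (max_seg_boundary)	/* the +1 may overflow to 0 for a ~0 boundary */
		max_seg_size = min(max_seg_size, max_seg_boundary);
	...
	/* aligning before the comparison keeps dma_offset from pushing the
	 * mapping past the boundary after the fact */
	if (ALIGN(dma_len + dma_offset + startsg->length, IOVP_SIZE) > max_seg_size)
		break;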
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:
"Five stable fixes:
- Two DM btree bufio buffer leak fixes that resolve reported BUG_ONs
during DM thinp metadata close's dm_bufio_client_destroy().
- A DM thinp range discard fix to handle discarding a partially
mapped range.
- A DM thinp metadata snapshot fix to make sure the btree roots saved
in the metadata snapshot are the most current.
- A DM space map metadata refcounting fix that improves both DM thinp
and DM cache metadata"
* tag 'dm-4.4-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm btree: fix bufio buffer leaks in dm_btree_del() error path
dm space map metadata: fix ref counting bug when bootstrapping a new space map
dm thin metadata: fix bug when taking a metadata snapshot
dm thin metadata: fix bug in dm_thin_remove_range()
dm btree: fix leak of bufio-backed block in btree_split_sibling error path
|
|
Pull drm fixes from Dave Airlie:
"Not too much this time.
- One nouveau workaround extended to a few more GPUs
- Some amdgpu big endian fixes, and a regression fixer
- Some vmwgfx fixes
- One ttm locking fix
- One vgaarb fix"
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
vgaarb: fix signal handling in vga_get()
radeon: Fix VCE IB test on Big-Endian systems
radeon: Fix VCE ring test for Big-Endian systems
radeon/cik: Fix GFX IB test on Big-Endian
drm/amdgpu: fix the lost duplicates checking
drm/nouveau/pmu: remove whitelist for PGOB-exit WAR, enable by default
drm/vmwgfx: Implement the cursor_set2 callback v2
drm/vmwgfx: fix a warning message
drm/ttm: Fixed a read/write lock imbalance
|
|
There are a few defects in vga_get() related to signal handling:
- we shouldn't check for pending signals in the TASK_UNINTERRUPTIBLE
case;
- if we find a pending signal we must remove ourselves from the wait
queue and change the task state back to running;
- -ERESTARTSYS is more appropriate, I guess (see the sketch below).
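A sketch of a corrected wait loop along these lines (the availability helper
is hypothetical; the waitqueue name follows vgaarb's vga_wait_queue):
	DEFINE_WAIT(wait);
	int rc = 0;

	for (;;) {
		prepare_to_wait(&vga_wait_queue, &wait,
				interruptible ? TASK_INTERRUPTIBLE
					      : TASK_UNINTERRUPTIBLE);
		if (vga_resource_available())		/* hypothetical helper */
			break;
		if (interruptible && signal_pending(current)) {
			rc = -ERESTARTSYS;		/* restartable, unlike -EINTR */
			break;
		}
		schedule();
	}
	finish_wait(&vga_wait_queue, &wait);	/* dequeue and set TASK_RUNNING again */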
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Cc: stable@vger.kernel.org
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
into drm-fixes
some big endian fixes and one regression fix.
* 'drm-fixes-4.4' of git://people.freedesktop.org/~agd5f/linux:
radeon: Fix VCE IB test on Big-Endian systems
radeon: Fix VCE ring test for Big-Endian systems
radeon/cik: Fix GFX IB test on Big-Endian
drm/amdgpu: fix the lost duplicates checking
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma
Pull rdma fixes from Doug Ledford:
"Most are minor to important fixes.
There is one performance enhancement that I took on the grounds that
failing to check if other processes can run before running what's
intended to be a background, idle-time task is a bug, even though the
primary effect of the fix is to improve performance (and it was a very
simple patch)"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma:
IB/mlx5: Postpone remove_keys under knowledge of coming preemption
IB/mlx4: Use vmalloc for WR buffers when needed
IB/mlx4: Use correct order of variables in log message
iser-target: Remove explicit mlx4 work-around
mlx4: Expose correct max_sge_rd limit
IB/mad: Require CM send method for everything except ClassPortInfo
IB/cma: Add a missing rcu_read_unlock()
IB core: Fix ib_sg_to_pages()
IB/srp: Fix srp_map_sg_fr()
IB/srp: Fix indirect data buffer rkey endianness
IB/srp: Initialize dma_length in srp_map_idb
IB/srp: Fix possible send queue overflow
IB/srp: Fix a memory leak
IB/sa: Put netlink request into the request list before sending
IB/iser: use sector_div instead of do_div
IB/core: use RCU for uverbs id lookup
IB/qib: Minor fixes to qib per SFF 8636
IB/core: Fix user mode post wr corruption
IB/qib: Fix qib_mr structure
|
|
If dm_btree_del()'s call to push_frame() fails, e.g. due to
btree_node_validator finding invalid metadata, the dm_btree_del() error
path must unlock all frames (which have active dm-bufio buffers) that
were pushed onto the del_stack.
Otherwise, dm_bufio_client_destroy() will BUG_ON() because dm-bufio
buffers have leaked, e.g.:
device-mapper: bufio: leaked buffer 3, hold count 1, list 0
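A sketch of the cleanup the error path needs (field names are assumptions
about the del_stack layout):
	static void unlock_all_frames(struct del_stack *s)
	{
		while (s->top >= 0) {
			struct frame *f = s->spine + s->top--;

			dm_tm_unlock(s->tm, f->b);	/* drop the locked dm-bufio buffer */
		}
	}
dm_btree_del() would then call unlock_all_frames(s) before freeing the stack
whenever push_frame() or node processing fails.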
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
|
|
Pull VFIO fixes from Alex Williamson:
- Various fixes for removing redundancy, const'ifying structs, avoiding
stack usage, fixing WARN usage (Krzysztof Kozlowski, Julia Lawall,
Kees Cook, Dan Carpenter)
- Revert No-IOMMU mode as the intended user has not emerged (Alex
Williamson)
* tag 'vfio-v4.4-rc5' of git://github.com/awilliam/linux-vfio:
Revert: "vfio: Include No-IOMMU mode"
vfio: fix a warning message
vfio: platform: remove needless stack usage
vfio-pci: constify pci_error_handlers structures
vfio: Drop owner assignment from platform_driver
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Pull DT fixes from Rob Herring:
"I think this should be all for 4.4:
- Fix incorrect warning about overlapping memory regions
- Export of_irq_find_parent again which was made static in 4.4, but
has users pending for 4.5.
- Fix of_msi_map_rid declaration location
- Fix re-entrancy for of_fdt_unflatten_tree
- Clean-up of phys_addr_t printks"
* tag 'devicetree-fixes-for-4.4-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
of/irq: move of_msi_map_rid declaration to the correct ifdef section
of/irq: Export of_irq_find_parent again
of/fdt: Add mutex protection for calls to __unflatten_device_tree()
of/address: fix typo in comment block of of_translate_one()
of: do not use 0x in front of %pa
of: Fix comparison of reserved memory regions
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clk fixes from Stephen Boyd:
"One small build fix, a couple do_div() fixes, and a fix for the gpio
basic clock type are the major changes here. There's also a couple
fixes for the TI, sunxi, and scpi clock drivers"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
clk: sunxi: pll2: Fix clock running too fast
clk: scpi: add missing of_node_put
clk: qoriq: fix memory leak
imx/clk-pllv2: fix wrong do_div() usage
imx/clk-pllv1: fix wrong do_div() usage
clk: mmp: add linux/clk.h includes
clk: ti: drop locking code from mux/divider drivers
clk: ti816x: Add missing dmtimer clkdev entries
clk: ti: fapll: fix wrong do_div() usage
clk: ti: clkt_dpll: fix wrong do_div() usage
clk: gpio: Get parent clk names in of_gpio_clk_setup()
|
|
Pull IPMI fix from Corey Minyard:
"Fix an Oops if an interrupt occurs at startup. This can happen on
some hardware"
* tag 'for-linus-4.4-1' of git://git.code.sf.net/p/openipmi/linux-ipmi:
ipmi: move timer init to before irq is setup
|
|
We encountered a panic on boot in ipmi_si on a dell per320 due to an
uninitialized timer as follows.
static int smi_start_processing(void *send_info,
				ipmi_smi_t intf)
{
	/* Try to claim any interrupts. */
	if (new_smi->irq_setup)
		new_smi->irq_setup(new_smi);
--> IRQ arrives here and irq handler tries to modify uninitialized timer
which triggers BUG_ON(!timer->function) in __mod_timer().
Call Trace:
<IRQ>
[<ffffffffa0532617>] start_new_msg+0x47/0x80 [ipmi_si]
[<ffffffffa053269e>] start_check_enables+0x4e/0x60 [ipmi_si]
[<ffffffffa0532bd8>] smi_event_handler+0x1e8/0x640 [ipmi_si]
[<ffffffff810f5584>] ? __rcu_process_callbacks+0x54/0x350
[<ffffffffa053327c>] si_irq_handler+0x3c/0x60 [ipmi_si]
[<ffffffff810efaf0>] handle_IRQ_event+0x60/0x170
[<ffffffff810f245e>] handle_edge_irq+0xde/0x180
[<ffffffff8100fc59>] handle_irq+0x49/0xa0
[<ffffffff8154643c>] do_IRQ+0x6c/0xf0
[<ffffffff8100ba53>] ret_from_intr+0x0/0x11
	/* Set up the timer that drives the interface. */
	setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);
The following patch fixes the problem.
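In other words, the timer must be valid before the IRQ can fire; a sketch of
the reordering, based on the snippet above:
	static int smi_start_processing(void *send_info,
					ipmi_smi_t intf)
	{
		struct smi_info *new_smi = send_info;

		/* Set up the timer that drives the interface *before* claiming
		 * the IRQ, so an early interrupt finds timer->function set. */
		setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);

		/* Try to claim any interrupts. */
		if (new_smi->irq_setup)
			new_smi->irq_setup(new_smi);
		...
	}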
To: Openipmi-developer@lists.sourceforge.net
To: Corey Minyard <minyard@acm.org>
CC: linux-kernel@vger.kernel.org
Signed-off-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Tony Camuso <tcamuso@redhat.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: stable@vger.kernel.org # Applies cleanly to 3.10-, needs small rework before
|
|
When applying block operations (BOPs) do not remove them from the
uncommitted BOP ring-buffer until after they've been applied -- in case
we recurse.
Also, perform BOP_INC operation, in dm_sm_metadata_create() and
sm_metadata_extend(), in terms of the uncommitted BOP ring-buffer rather
than using direct calls to sm_ll_inc().
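A sketch of the apply loop with that ordering (the ring-buffer helper names
are assumptions):
	/* peek, apply, and only then pop: if commit_bop() recurses and pushes
	 * more BOPs, the entry being applied is still safely on the ring */
	while (!brb_empty(&smm->uncommitted)) {
		struct block_op bop;

		r = brb_peek(&smm->uncommitted, &bop);	/* read without removing */
		if (r)
			break;

		r = commit_bop(smm, &bop);
		if (r)
			break;

		brb_pop(&smm->uncommitted);	/* remove only after it was applied */
	}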
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
|
|
When you take a metadata snapshot the btree roots for the mapping and
details tree need to have their reference counts incremented so they
persist for the lifetime of the metadata snap.
The roots being incremented were those currently written in the
superblock, which could possibly be out of date if concurrent IO is
triggering new mappings, breaking of sharing, etc.
Fix this by performing a commit with the metadata lock held while taking
a metadata snapshot.
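Conceptually, the ordering under the lock looks like this (a sketch; the
internal helper names are assumptions about the dm-thin metadata code):
	down_write(&pmd->root_lock);

	/* flush the live mapping/details roots into the superblock first, so
	 * the snapshot copies current roots rather than possibly stale ones */
	r = __commit_transaction(pmd);
	if (!r)
		r = __reserve_metadata_snap(pmd);

	up_write(&pmd->root_lock);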
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI fixes from Bjorn Helgaas:
"These are more fixes I'd like to have in v4.4. Several for the Altera
driver added for v4.4, and one for an MSI domain problem that affects
several arm64 platforms:
MSI:
- Only use the generic MSI layer when domain is hierarchical (Marc
Zyngier)
Altera host bridge driver:
- Fix loop in tlp_read_packet() (Dan Carpenter)
- Fix Requester ID for config accesses (Ley Foon Tan)
- Check TLP completion status (Ley Foon Tan)
- Fix error when INTx is 4 (Ley Foon Tan)"
* tag 'pci-v4.4-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
PCI: altera: Fix error when INTx is 4
PCI: altera: Check TLP completion status
PCI: altera: Fix Requester ID for config accesses
PCI: altera: Fix loop in tlp_read_packet()
PCI/MSI: Only use the generic MSI layer when domain is hierarchical
|
|
of_irq_find_parent was made static since it had no users outside of
of_irq.c. Export it again since we are going to use it again.
Signed-off-by: Carlo Caione <carlo@endlessm.com>
[robh: move of_irq_find_parent to correct ifdef section]
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
This patch makes the VCE IB test pass on Big-Endian systems. It converts
to little-endian the contents of the VCE message.
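Schematically, filling the test message becomes (dword values illustrative):
	/* every dword of the VCE message is stored little-endian, so wrap the
	 * writes in cpu_to_le32(); this is a no-op on little-endian hosts */
	ib.ptr[i++] = cpu_to_le32(0x0000000c);		/* message length */
	ib.ptr[i++] = cpu_to_le32(0x00000001);		/* session command */
	ib.ptr[i++] = cpu_to_le32(handle);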
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
This patch fixes the VCE ring test when running on Big-Endian machines.
Every write to the ring needs to be translated to little-endian.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
This patch makes the IB test on the GFX ring pass for CI-based cards
installed in Big-Endian machines.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
Cc: stable@vger.kernel.org
|
|
git://people.freedesktop.org/~thomash/linux into drm-fixes
Pull request of 2015-12-08
A couple of fixes for vmwgfx. A WARN() fix by Dan Carpenter,
a TTM read/write lock imbalance causing occasional hangs with Wayland and
an implementation of cursor_set2 to fix incorrectly offset Wayland cursors.
* tag 'vmwgfx-fixes-4.4-151208' of git://people.freedesktop.org/~thomash/linux:
drm/vmwgfx: Implement the cursor_set2 callback v2
drm/vmwgfx: fix a warning message
drm/ttm: Fixed a read/write lock imbalance
|
|
Just the one commit I mentioned earlier, making the PGOB workaround the
default.
* 'linux-4.4' of https://github.com/skeggsb/linux:
drm/nouveau/pmu: remove whitelist for PGOB-exit WAR, enable by default
|
|
NVIDIA have indicated that the workaround is required on all GK10[467]
boards that have the PGOB fuse set.
I've left the commandline option in place for now, as paranoia.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
The remove_keys() logic is performed as a garbage collection task. Such
a task is intended to run when no other active processes are running.
The need_resched() call will return TRUE if there are user tasks to be
activated in the near future.
In such a case, we don't execute remove_keys() and instead postpone
the garbage collection work to try again in the next cycle,
in order to free CPU resources for other tasks.
The possible pseudo-code to trigger such scenario:
1. Allocate a lot of MR to fill the cache above the limit.
2. Wait a small amount of time "to calm" the system.
3. Start CPU extensive operations on multi-node cluster.
4. Expect performance degradation during MR cache shrink operation.
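A sketch of the check in the cache work function (names illustrative of the
mlx5 MR cache code):
	if (need_resched() || someone_adding(cache)) {
		/* runnable user tasks exist (or the cache is being refilled):
		 * postpone the shrink and try again on a later pass */
		queue_delayed_work(cache->wq, &ent->dwork,
				   msecs_to_jiffies(300));
		return;
	}
	remove_keys(dev, i, 1);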
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
There are several reports of WR buffer allocation (kmalloc) failures.
The failures occur at order-3 and/or order-4 contiguous page allocations,
while at the same time 100MB+ of memory is actually free but badly fragmented.
So fall back to vmalloc when kmalloc fails.
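The fallback is the familiar kmalloc-then-vmalloc pattern; a minimal sketch
(today kvmalloc()/kvfree() expresses the same idea):
	buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);	/* don't warn, we have a fallback */
	if (!buf)
		buf = vmalloc(size);	/* order-0 pages, immune to fragmentation */
	if (!buf)
		return -ENOMEM;
	...
	kvfree(buf);	/* frees either a kmalloc'ed or a vmalloc'ed buffer */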
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Acked-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
The variables in an mlx4 log message are in the wrong order. Fix it.
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Acked-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata
Pull libata fixes from Tejun Heo:
"Nothing too interesting. All are device specific additions and
workarounds"
* 'for-4.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata:
ata/sata_fsl.c: add ATA_FLAG_NO_LOG_PAGE to blacklist the controller for log page reads
libata-eh.c: Introduce new ata port flag for controller which lockup on read log page
sata_sil: disable trim
AHCI: Fix softreset failed issue of Port Multiplier
sata/mvebu: use #ifdef around suspend/resume code
ahci: Order SATA device IDs for codename Lewisburg
ahci: Add Device ID for Intel Sunrise Point PCH
|
|
The driver now exposes sufficient limits, so we can
avoid having an mlx4-specific work-around.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
mlx4 devices (ConnectX-2, ConnectX-3) have a limitation
where RDMA READ work queue entries cannot exceed 512 bytes.
An RDMA READ WQE needs to fit in 512 bytes:
- wqe control segment (16 bytes)
- rdma segment (16 bytes)
- scatter elements (16 bytes each)
So max_sge_rd should be: (512 - 16 - 16) / 16 = 30.
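So the device query should simply report that value; a sketch:
	/* 512-byte WQE budget: 16-byte control segment + 16-byte RDMA segment,
	 * the remainder holds 16-byte scatter entries => (512 - 32) / 16 = 30 */
	props->max_sge_rd = 30;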
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Receipt of a CM MAD with any method other than Send, for an attribute
other than ClassPortInfo, is invalid.
CM attributes other than ClassPortInfo only use the Send method.
The SRP initiator does not maintain a timeout policy for CM connect
requests and relies on the CM layer to do that. The result was that
the SRP initiator hung because the connect request never completed.
A new SRP target has been observed to respond to a Send of CM REQ
with a GetResp of CM REQ with bad status. This is non-conformant
with the IBA spec but exposes a vulnerability in the current MAD/CM
code, which will respond to the incoming GetResp of CM REQ as if
it were a valid incoming Send of CM REQ rather than tossing
it on the floor. It also causes the MAD layer not to
retransmit the original REQ even though it has not received a REP.
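A sketch of the resulting check in MAD validation (field names per struct
ib_mad_hdr; the ClassPortInfo attribute ID is 0x0001):
	if (mad_hdr->mgmt_class == IB_MGMT_CLASS_CM) {
		/* only Send is valid for the CM class, except for ClassPortInfo */
		if (mad_hdr->method != IB_MGMT_METHOD_SEND &&
		    be16_to_cpu(mad_hdr->attr_id) != 0x0001)
			return 0;	/* toss it on the floor */
	}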
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Hal Rosenstock <hal@mellanox.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Ensure that validate_ipv4_net_dev() calls rcu_read_unlock() if
fib_lookup() fails. Detected by sparse. Compile-tested only.
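The change is the usual error-path unlock; a sketch (surrounding variables
abridged):
	rcu_read_lock();
	err = fib_lookup(net, &fl4, &res, 0);
	if (err) {
		rcu_read_unlock();	/* previously missing on this path */
		return false;
	}
	ret = FIB_RES_DEV(res) == net_dev;
	rcu_read_unlock();
	return ret;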
Fixes: "IB/cma: Validate routing of incoming requests" (commit f887f2ac87c2).
Cc: Haggai Eran <haggaie@mellanox.com>
Cc: stable <stable@vger.kernel.org>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Haggai Eran <haggaie@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
__unflatten_device_tree() calls unflatten_dt_node(), which declares
a static variable. It is therefore not reentrant.
One of the callers of __unflatten_device_tree(), unflatten_device_tree(),
is only called once during early initialization and does not need to be
protected. The other caller, of_fdt_unflatten_tree(), can be called at
any time, possibly multiple times in parallel. This can happen, for
example, if multiple devicetree overlays have to be loaded and installed.
Without this protection, errors such as the following may be seen.
kernel: End of tree marker overwritten: e6a3a458
kernel: find_target_node:
Failed to find target-indirect node at /fragment@0
kernel: __of_overlay_create: of_build_overlay_info() failed for tree@/
Add a mutex to of_fdt_unflatten_tree() to make the call reentrant.
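A sketch of the serialization (the allocator argument name is an assumption):
	static DEFINE_MUTEX(of_fdt_unflatten_mutex);

	void of_fdt_unflatten_tree(unsigned long *blob,
				   struct device_node **mynodes)
	{
		/* unflatten_dt_node() keeps static state, so serialize every
		 * caller that can run after early init */
		mutex_lock(&of_fdt_unflatten_mutex);
		__unflatten_device_tree(blob, mynodes, &kernel_tree_alloc);
		mutex_unlock(&of_fdt_unflatten_mutex);
	}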
Cc: Pantelis Antoniou <pantelis.antoniou@konsulko.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: stable@vger.kernel.org # v4.1+
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
Fixes native drm clients like Fedora 23 Wayland which now appears to
be able to use cursor hotspots without strange cursor offsets.
Also fixes a couple of ignored error paths.
Since the core drm cursor hotspot is incompatible with the legacy vmwgfx
hotspot (the core drm hotspot is reset when the drm_mode_cursor ioctl
is used), we need to keep track of both and add them when the device
hotspot is set. We assume that either is always zero.
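Schematically, programming the device hotspot becomes (field and helper names
hypothetical):
	/* one of the two is always zero, so the sum is whichever interface
	 * (legacy vmwgfx ioctl or core drm cursor_set2) supplied the hotspot */
	hotspot_x = du->core_hotspot_x + du->hotspot_x;
	hotspot_y = du->core_hotspot_y + du->hotspot_y;
	vmw_cursor_update_hotspot(dev_priv, hotspot_x, hotspot_y);	/* hypothetical helper */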
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
|
|
On 12/03/2015 01:18 AM, Christoph Hellwig wrote:
> The patch looks good to me, but while we touch this area, how about
> throwing in a few cosmetic fixes as well?
How about the patch below? In that version of the ib_sg_to_pages() fix
these concerns have been addressed and additionally two more bugs have been fixed.
------------
[PATCH] IB core: Fix ib_sg_to_pages()
Fix the code for detecting gaps. A gap occurs not only if the
second or later scatterlist element is not aligned but also if
any scatterlist element other than the last does not end at a
page boundary.
In the code for coalescing contiguous elements, ensure that
mr->length is correct and that last_page_addr is up-to-date.
Ensure that this function returns a negative
error code instead of zero if the first set_page() call fails.
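A sketch of the two key changes inside the scatterlist walk (variable names
illustrative):
	/* a gap exists if this element does not start on a page boundary (unless
	 * it is the first one) or the previous element did not end on one */
	if (i && ((dma_addr & ~PAGE_MASK) ||
		  ((prev_addr + prev_len) & ~PAGE_MASK)))
		return i;	/* stop: report the pages coalesced so far */

	ret = set_page(mr, page_addr);
	if (unlikely(ret < 0))
		return i ? i : ret;	/* propagate the error if nothing was mapped yet */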
Fixes: commit 4c67e2bfc8b7 ("IB/core: Introduce new fast registration API")
Reported-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
After dma_map_sg() has been called the return value of that function
must be used as the number of elements in the scatterlist instead of
scsi_sg_count().
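Schematically (direction shown for the write case):
	count = ib_dma_map_sg(ibdev, scat, scsi_sg_count(cmd), DMA_TO_DEVICE);
	...
	/* iterate over 'count' mapped entries, not scsi_sg_count(cmd): the
	 * IOMMU may have coalesced adjacent elements into fewer mappings */
	for_each_sg(scat, sg, count, i)
		...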
Fixes: commit f7f7aab1a5c0 ("IB/srp: Convert to new registration API")
Reported-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: stable <stable@vger.kernel.org> # v4.4+
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Detected by sparse.
Fixes: commit 330179f2fa93 ("IB/srp: Register the indirect data buffer descriptor")
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: stable <stable@vger.kernel.org> # v4.3+
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Without this, sg_dma_len() will return 0 on architectures that have
the dma_length field.
Fixes: commit f7f7aab1a5c0 ("IB/srp: Convert to new registration API")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
When using work request based memory registration (fast_reg)
we must reserve SQ entries for registration and invalidation
in addition to send operations. Each IO consumes 3 SQ entries
(registration, send, invalidation) so we need to allocate 3x
larger send-queue instead of 2x.
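In QP-creation terms, a sketch (the multiplier is the point, not the exact
attribute plumbing):
	/* with fast_reg each I/O posts: registration WR + send WR + invalidation WR */
	init_attr->cap.max_send_wr = 3 * target->queue_size;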
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
CC: Stable <stable@vger.kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
If srp_connect_ch() returns a positive value then that is considered
by its caller as a connection failure but this does not result in a
scsi_host_put() call and additionally causes the srp_create_target()
function to return a positive value while it should return a negative
value. Avoid all this confusion and additionally fix a memory leak by
ensuring that srp_connect_ch() always returns a value that is <= 0.
This patch avoids that a rejected login triggers the following memory
leak:
unreferenced object 0xffff88021b24a220 (size 8):
comm "srp_daemon", pid 56421, jiffies 4295006762 (age 4240.750s)
hex dump (first 8 bytes):
68 6f 73 74 35 38 00 a5 host58..
backtrace:
[<ffffffff8151014a>] kmemleak_alloc+0x7a/0xc0
[<ffffffff81165c1e>] __kmalloc_track_caller+0xfe/0x160
[<ffffffff81260d2b>] kvasprintf+0x5b/0x90
[<ffffffff81260e2d>] kvasprintf_const+0x8d/0xb0
[<ffffffff81254b0c>] kobject_set_name_vargs+0x3c/0xa0
[<ffffffff81337e3c>] dev_set_name+0x3c/0x40
[<ffffffff81355757>] scsi_host_alloc+0x327/0x4b0
[<ffffffffa03edc8e>] srp_create_target+0x4e/0x8a0 [ib_srp]
[<ffffffff8133778b>] dev_attr_store+0x1b/0x20
[<ffffffff811f27fa>] sysfs_kf_write+0x4a/0x60
[<ffffffff811f1e8e>] kernfs_fop_write+0x14e/0x180
[<ffffffff81176eef>] __vfs_write+0x2f/0xf0
[<ffffffff811771e4>] vfs_write+0xa4/0x100
[<ffffffff81177c64>] SyS_write+0x54/0xc0
[<ffffffff8151b257>] entry_SYSCALL_64_fastpath+0x12/0x6f
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
It was found by Saurabh Sengar that the netlink code tried to allocate
memory with GFP_KERNEL while holding a spinlock. While it is possible
to fix the issue by replacing GFP_KERNEL with GFP_ATOMIC, it is better
to get rid of the spinlock while sending the packet. However, in order
to protect against a race condition in which a quick response may be received
before the request is put on the request list, we need to put the request
on the list first.
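The ordering, sketched (lock and list names are assumptions about the ib_sa
netlink code):
	spin_lock_irqsave(&ib_nl_request_lock, flags);
	/* list the request before it goes on the wire, so even an immediate
	 * response finds it; the allocation in the send path now happens
	 * outside the spinlock and may keep using GFP_KERNEL */
	list_add_tail(&query->list, &ib_nl_request_list);
	spin_unlock_irqrestore(&ib_nl_request_lock, flags);

	ret = ib_nl_send_msg(query, gfp_mask);
	if (ret) {
		spin_lock_irqsave(&ib_nl_request_lock, flags);
		list_del(&query->list);		/* undo on send failure */
		spin_unlock_irqrestore(&ib_nl_request_lock, flags);
	}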
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reported-by: Saurabh Sengar <saurabh.truth@gmail.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
do_div is the wrong way to divide a sector_t, as it is less
efficient when sector_t is 32-bit wide. With the upcoming
do_div optimizations, the kernel starts warning about this:
drivers/infiniband/ulp/iser/iser_verbs.c:1296:4: note: in expansion of macro 'do_div'
include/asm-generic/div64.h:224:22: warning: passing argument 1 of '__div64_32' from incompatible pointer type
This changes the code to use sector_div instead, which always
produces optimal code.
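For reference, the replacement idiom (sector_div() divides its first argument
in place and returns the remainder):
	sector_t off = lba;	/* sector_t may be 32- or 64-bit wide */
	unsigned int rem = sector_div(off, sectors_per_block);
	/* 'off' now holds the quotient; do_div() would have forced a 64-bit
	 * division through a u64 *, which sector_t is not guaranteed to be */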
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
The current implementation gets a spin_lock, and at any scale with
qib and hfi1 post send, the lock contention grows exponentially
with the number of QPs.
idr_find() is RCU-compatible, so the read doesn't need the lock.
Change to use rcu_read_lock() and rcu_read_unlock() in
__idr_get_uobj().
kfree_rcu() is used to ensure a grace period between the
idr removal and the actual free.
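A sketch of the lookup after the change (object lifetime details elided):
	rcu_read_lock();
	uobj = idr_find(idr, id);	/* idr_find() may be called under rcu_read_lock() */
	if (uobj) {
		if (uobj->live)
			kref_get(&uobj->ref);	/* take a reference inside the read section */
		else
			uobj = NULL;
	}
	rcu_read_unlock();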
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Minor errors found via code inspection during future development.
SFF 8636 defines bit position 2 to hold the status indication of
QSFP memory paging. The mask used to test for the value was
incorrect and is fixed in this patch. Additionally, the dump
function had a mismatch between the field being printed out and
the field used to source the data which was fixed.
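For the paging status, the test mask must select bit 2; a sketch (surrounding
logic elided):
	#define QSFP_PAGING_STATUS_MASK	(1 << 2)	/* SFF 8636: bit position 2 */

	if (status_byte & QSFP_PAGING_STATUS_MASK)
		...	/* paging status of the QSFP memory map */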
Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reported-by: Easwar Hariharan <easwar.hariharan@intel.com>
Signed-off-by: Easwar Hariharan <easwar.hariharan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Commit e622f2f4ad21 ("IB: split struct ib_send_wr")
introduced a regression for HCAs whose user mode post
sends go through ib_uverbs_post_send().
The code didn't account for the fact that the first sge is
offset by an operation-dependent length. The allocation did,
but the pointer to the destination sge list was computed without
that knowledge. The sge list copy_from_user() then corrupts
fields in the work request.
Store the operation dependent length in a local variable and
compute the sge list copy_from_user() destination using that length.
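A sketch of the corrected pointer arithmetic (names loosely follow
ib_uverbs_post_send(); the op-specific struct shown is just one case):
	/* the same op-dependent length that sized the allocation also locates
	 * the destination sge list -- it does not start right after ib_send_wr */
	size_t op_len = sizeof(struct ib_rdma_wr);	/* chosen per wr opcode */
	struct ib_sge *sg = (void *)next + op_len;

	if (copy_from_user(sg, user_sge_ptr,
			   num_sge * sizeof(struct ib_sge)))
		goto out;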
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
struct qib_mr requires that the mr member be last, because struct
qib_mregion contains a dynamic array at the end. Any added
members should have been placed before this member, as the
comment noted.
Failure to do so was causing random memory corruption. Reproducing
this bug was easy to do by running the client and server of
ib_write_bw -s 8 -n 5 on the same node.
This BUG() was tripped in a slab debug kernel:
kernel BUG at mm/slab.c:2572!
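The required layout, sketched (members abridged):
	struct qib_mr {
		struct ib_mr		ibmr;
		struct ib_umem		*umem;
		/* new members belong here ... */
		struct qib_mregion	mr;	/* must be last: ends in a flexible array */
	};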
Fixes: 38071a461f0a ("IB/qib: Support the new memory registration API")
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Improves cacheline transfer flow of available ring header.
Virtqueues are implemented as a pair of rings, one producer->consumer
avail ring and one consumer->producer used ring; preceding the
avail ring in memory are two contiguous u16 fields -- avail->flags
and avail->idx. A producer posts work by writing to avail->idx and
a consumer reads avail->idx.
The flags and idx fields only need to be written by a producer CPU
and only read by a consumer CPU; when the producer and consumer are
running on different CPUs and the virtio_ring code is structured to
only have source writes/sink reads, we can continuously transfer the
avail header cacheline between 'M' states between cores. This flow
optimizes core -> core bandwidth on certain CPUs.
(see: "Software Optimization Guide for AMD Family 15h Processors",
Section 11.6; similar language appears in the 10h guide and should
apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
Unfortunately the existing virtio_ring code issued reads to the
avail->idx and read-modify-writes to avail->flags on the producer.
This change shadows the flags and index fields in producer memory;
the vring code now reads from the shadows and only ever writes to
avail->flags and avail->idx, allowing the cacheline to transfer
core -> core optimally.
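A sketch of the producer-side shadowing (field names follow the description;
details abridged):
	/* the shadow is read-modify-written locally; the device-visible field
	 * becomes write-only from the guest's point of view */
	avail = vq->avail_idx_shadow & (vq->vring.num - 1);
	vq->vring.avail->ring[avail] = cpu_to_virtio16(vq->vq.vdev, head);

	vq->avail_idx_shadow++;
	vq->vring.avail->idx = cpu_to_virtio16(vq->vq.vdev, vq->avail_idx_shadow);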
In a concurrent version of vring_bench, the time required for
10,000,000 buffer checkout/returns was reduced by ~2% (average
across many runs) on an AMD Piledriver (15h) CPU:
(w/o shadowing):
Performance counter stats for './vring_bench':
5,451,082,016 L1-dcache-loads
...
2.221477739 seconds time elapsed
(w/ shadowing):
Performance counter stats for './vring_bench':
5,405,701,361 L1-dcache-loads
...
2.168405376 seconds time elapsed
The further away (in a NUMA sense) virtio producers and consumers are
from each other, the more we expect to benefit. Physical implementations
of virtio devices and implementations of virtio where the consumer polls
vring avail indexes (vhost) should also benefit.
Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
b92b1b89a33c ("virtio: force vring descriptors to be allocated from
lowmem") tried to exclude highmem pages for descriptors so it cleared
__GFP_HIGHMEM from a given gfp mask. The patch also cleared __GFP_HIGH
which doesn't make much sense for this fix because __GFP_HIGH only
controls access to memory reserves and it doesn't have any influence
on the zone selection. Some of the call paths use GFP_ATOMIC, and
dropping __GFP_HIGH will reduce their chances of success because of the
lack of access to memory reserves.
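So the masking should only strip the zone modifier; a sketch:
	/* keep descriptors out of highmem, but leave __GFP_HIGH alone so
	 * GFP_ATOMIC callers retain their access to memory reserves */
	gfp &= ~__GFP_HIGHMEM;
	desc = kmalloc(total_sg * sizeof(struct vring_desc), gfp);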
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mel Gorman <mgorman@techsingularity.net>
|
|
We know vring num is a power of 2, so use &
to mask the high bits.
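I.e., a sketch:
	/* vring.num is a power of two, so masking is equivalent to (and cheaper
	 * than) a modulo when turning a free-running index into a ring slot */
	slot = idx & (vq->vring.num - 1);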
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The virtio core uses a static ida named virtio_index_ida for
assigning index numbers to virtio devices during registration.
The ida core may allocate some internal idr cache layers and
an ida bitmap upon any ida allocation, and all these layers are
truely freed only upon the ida destruction. The virtio_index_ida
is not destroyed at present, leading to a memory leak when using
the virtio core as a module and atleast one virtio device is
registered and unregistered.
Fix this by invoking ida_destroy() in the virtio core module
exit.
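A sketch of the module-exit change:
	static void __exit virtio_exit(void)
	{
		bus_unregister(&virtio_bus);
		ida_destroy(&virtio_index_ida);	/* frees the ida's cached idr layers and bitmap */
	}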
Cc: stable@vger.kernel.org
Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
commit 5d9a07b0de512b77bf28d2401e5fe3351f00a240 ("vhost: relax used
address alignment") fixed the alignment for the used virtual address,
but not for the physical address used for logging.
That's a mistake: the alignment should clearly be the same for virtual and
physical addresses.
Cc: stable@vger.kernel.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|