path: root/drivers/scsi
Age    Commit message    Author
36 hours    Merge tag 'block-6.19-20251208' of ↵    Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux Pull block updates from Jens Axboe: "Followup set of fixes and updates for block for the 6.19 merge window. NVMe had some late minute debates which lead to dropping some patches from that tree, which is why the initial PR didn't have NVMe included. It's here now. This pull request contains: - NVMe pull request via Keith: - Subsystem usage cleanups (Max) - Endpoint device fixes (Shin'ichiro) - Debug statements (Gerd) - FC fabrics cleanups and fixes (Daniel) - Consistent alloc API usages (Israel) - Code comment updates (Chu) - Authentication retry fix (Justin) - Fix a memory leak in the discard ioctl code, if the task is being interrupted by a signal at just the wrong time - Zoned write plugging fixes - Add ioctls for for persistent reservations - Enable per-cpu bio caching by default - Various little fixes and tweaks" * tag 'block-6.19-20251208' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux: (27 commits) nvme-fabrics: add ENOKEY to no retry criteria for authentication failures nvme-auth: use kvfree() for memory allocated with kvcalloc() nvmet-tcp: use kvcalloc for commands array nvmet-rdma: use kvcalloc for commands and responses arrays nvme: fix typo error in nvme target nvmet-fc: use pr_* print macros instead of dev_* nvmet-fcloop: remove unused lsdir member. nvmet-fcloop: check all request and response have been processed nvme-fc: check all request and response have been processed block: fix memory leak in __blkdev_issue_zero_pages block: fix comment for op_is_zone_mgmt() to include RESET_ALL block: Clear BLK_ZONE_WPLUG_PLUGGED when aborting plugged BIOs blk-mq: Abort suspend when wakeup events are pending blk-mq: add blk_rq_nr_bvec() helper block: add IOC_PR_READ_RESERVATION ioctl block: add IOC_PR_READ_KEYS ioctl nvme: reject invalid pr_read_keys() num_keys values scsi: sd: reject invalid pr_read_keys() num_keys values block: enable per-cpu bio cache by default block: use bio_alloc_bioset for passthru IO by default ...
4 days    Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi    Linus Torvalds
Pull SCSI updates from James Bottomley: "Usual driver updates (ufs, lpfc, target, qla2xxx) plus assorted cleanups and fixes including the WQ_PERCPU series. The biggest core change is the new allocation of pseudo-devices which allow the sending of internal commands to a given SCSI target" * tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (147 commits) scsi: MAINTAINERS: Add the UFS include directory scsi: scsi_debug: Support injecting unaligned write errors scsi: qla2xxx: Fix improper freeing of purex item scsi: ufs: rockchip: Fix compile error without CONFIG_GPIOLIB scsi: ufs: rockchip: Reset controller on PRE_CHANGE of hce enable notify scsi: ufs: core: Use scsi_device_busy() scsi: ufs: core: Fix single doorbell mode support scsi: pm80xx: Add WQ_PERCPU to alloc_workqueue() users scsi: target: Add WQ_PERCPU to alloc_workqueue() users scsi: qedi: Add WQ_PERCPU to alloc_workqueue() users scsi: target: ibmvscsi: Add WQ_PERCPU to alloc_workqueue() users scsi: qedf: Add WQ_PERCPU to alloc_workqueue() users scsi: bnx2fc: Add WQ_PERCPU to alloc_workqueue() users scsi: be2iscsi: Add WQ_PERCPU to alloc_workqueue() users scsi: message: fusion: Add WQ_PERCPU to alloc_workqueue() users scsi: lpfc: WQ_PERCPU added to alloc_workqueue() users scsi: scsi_transport_fc: WQ_PERCPU added to alloc_workqueue users() scsi: scsi_dh_alua: WQ_PERCPU added to alloc_workqueue() users scsi: qla2xxx: WQ_PERCPU added to alloc_workqueue() users scsi: target: sbp: Replace use of system_unbound_wq with system_dfl_wq ...
5 days    Merge tag 'pci-v6.19-changes' of ↵    Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci Pull PCI updates from Bjorn Helgaas: "Enumeration: - Enable host bridge emulation for PCI_DOMAINS_GENERIC platforms (Dan Williams) - Switch vmd from custom domain number allocator to the common allocator to prevent a potential race with new non-VMD buses (Dan Williams) - Enable Precision Time Measurement (PTM) only if device advertises support for a relevant role, to prevent invalid PTM Requests that cause ACS violations that are reported as AER Uncorrectable Non-Fatal errors (Mika Westerberg) Resource management: - Prevent resource tree corruption when BAR resize fails (Ilpo Järvinen) - Restore BARs to the original size if a BAR resize fails (Ilpo Järvinen) - Remove BAR release from BAR resize attempts by the xe, i915, and amdgpu drivers so the PCI core can restore BARs if the resize fails (Ilpo Järvinen) - Move Resizable BAR code to rebar.c (Ilpo Järvinen) - Add pci_rebar_size_supported() and use it in i915 and xe (Ilpo Järvinen) - Add pci_rebar_get_max_size() and use it in xe and amdgpu (Ilpo Järvinen) Power management and error handling: - For drivers using PCI legacy suspend, save config state at suspend so that state (not any earlier state from enumeration, probe, or error recovery) will be restored when resuming (Lukas Wunner) - For devices with no driver or a driver that lacks power management, save config state at hibernate so that state (not any earlier state from enumeration, probe, or error recovery) will be restored when resuming (Lukas Wunner) - Save device config space on device addition, before driver binding, so error recovery works more reliably (Lukas Wunner) - Drop pci_save_state() from several drivers that no longer need it since the PCI core always does it and pci_restore_state() no longer invalidates the saved state (Lukas Wunner) - Document use of pci_save_state() by drivers to capture the state they want restored during error recovery (Lukas Wunner) Power control: - Add a struct pci_ops.assert_perst() function pointer to assert/deassert PCIe PERST# and implement it for the qcom driver (Krishna Chaitanya Chundru) - Add DT binding and pwrctrl driver for the Toshiba TC9563 PCIe switch, which must be held in reset after poweron so the pwrctrl driver can configure the switch via I2C before bringing up the links (Krishna Chaitanya Chundru) Endpoint framework: - Convert the endpoint doorbell test to use a threaded IRQ to fix a 'sleeping while atomic' issue (Bhanu Seshu Kumar Valluri) - Add endpoint VNTB MSI doorbell support to reduce latency between host and endpoint (Frank Li) New native PCIe controller drivers: - Add CIX Sky1 host controller DT binding and driver (Hans Zhang) - Add NXP S32G host controller DT binding and driver (Vincent Guittot) - Add Renesas RZ/G3S host controller DT binding and driver (Claudiu Beznea) - Add SpacemiT K1 host controller DT binding and driver (Alex Elder) Amlogic Meson PCIe controller driver: - Update DT binding to name DBI region 'dbi', not 'elbi', and update driver to support both (Manivannan Sadhasivam) Apple PCIe controller driver: - Move struct pci_host_bridge allocation from pci_host_common_init() to callers, which significantly simplifies pcie-apple (Marc Zyngier) Broadcom STB PCIe controller driver: - Disable advertising ASPM L0s support correctly (Jim Quinlan) - Add a panic/die handler to print diagnostic info in case PCIe caused an unrecoverable abort (Jim Quinlan) Cadence PCIe controller driver: - Add module support for Cadence platform host and endpoint controller driver 
(Manikandan K Pillai) - Split headers into 'legacy' (LGA) and 'high perf' (HPA) to prepare for new CIX Sky1 driver (Manikandan K Pillai) MediaTek PCIe controller driver: - Convert DT binding to YAML schema (Christian Marangi) - Add Airoha AN7583 DT compatible and driver support (Christian Marangi) Qualcomm PCIe controller driver: - Add Qualcomm Kaanapali to SM8550 DT binding (Qiang Yu) - Add required 'power-domains' and 'resets' to qcom sa8775p, sc7280, sc8280xp, sm8150, sm8250, sm8350, sm8450, sm8550, x1e80100 DT schemas (Krzysztof Kozlowski) - Look up OPP using both frequency and data rate (not just frequency) so RPMh votes can account for both (Krishna Chaitanya Chundru) Rockchip DesignWare PCIe controller driver: - Add Rockchip RK3528 compatible strings in DT binding (Yao Zi) STMicroelectronics STM32MP25 PCIe controller driver: - Fix a race between link training and endpoint register initialization (Christian Bruel) - Align endpoint allocations to match the ATU requirements (Christian Bruel) Synopsys DesignWare PCIe controller driver: - Clear L1 PM Substate Capability 'Supported' bits unless glue driver says it's supported, which prevents users from enabling non-working L1SS. Currently only qcom and tegra194 support L1SS (Bjorn Helgaas) - Remove now-superfluous L1SS disable code from tegra194 (Bjorn Helgaas) - Configure L1SS support in dw-rockchip when DT says 'supports-clkreq' (Shawn Lin) TI Keystone PCIe controller driver: - Fail the probe instead of silently succeeding if ks_pcie_of_data didn't specify Root Complex or Endpoint mode (Siddharth Vadapalli) - Make keystone buildable as a loadable module, except on ARM32 where hook_fault_code() is __init (Siddharth Vadapalli)" * tag 'pci-v6.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (100 commits) MAINTAINERS: Add Manivannan Sadhasivam as PCI/pwrctrl maintainer MAINTAINERS: Add CIX Sky1 PCIe controller driver maintainer PCI: sky1: Add PCIe host support for CIX Sky1 dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings PCI: cadence: Add support for High Perf Architecture (HPA) controller MAINTAINERS: Add NXP S32G PCIe controller driver maintainer PCI: s32g: Add NXP S32G PCIe controller driver (RC) PCI: dwc: Add register and bitfield definitions dt-bindings: PCI: s32g: Add NXP S32G PCIe controller PCI: Add Renesas RZ/G3S host controller driver PCI: host-generic: Move bridge allocation outside of pci_host_common_init() dt-bindings: PCI: Add Renesas RZ/G3S PCIe controller binding PCI: Validate pci_rebar_size_supported() input Documentation: PCI: Amend error recovery doc with pci_save_state() rules treewide: Drop pci_save_state() after pci_restore_state() PCI/ERR: Ensure error recoverability at all times PCI/PM: Stop needlessly clearing state_saved on enumeration and thaw PCI/PM: Reinstate clearing state_saved in legacy and !PM codepaths PCI: dw-rockchip: Configure L1SS support PCI: tegra194: Remove unnecessary L1SS disable code ...
6 days    scsi: sd: reject invalid pr_read_keys() num_keys values    Stefan Hajnoczi
The pr_read_keys() interface has a u32 num_keys parameter. The SCSI PERSISTENT RESERVE IN command has a maximum READ KEYS service action size of 65536 bytes. Reject num_keys values that are too large to fit into the SCSI command. This will become important when pr_read_keys() is exposed to untrusted userspace via an <linux/pr.h> ioctl. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
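A minimal sketch of the bound check described above; the constant, helper name, and exact limit handling are illustrative rather than the actual sd.c hunk:

    /* READ KEYS parameter data: 8-byte header plus 8 bytes per key; the whole
     * response must fit in the 65536-byte ceiling quoted above. */
    #define SD_PR_IN_MAX_LEN	65536	/* hypothetical constant name */

    static int sd_pr_check_num_keys(u32 num_keys)
    {
        if (num_keys > (SD_PR_IN_MAX_LEN - 8) / 8)
            return -EINVAL;
        return 0;
    }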
6 days    Merge tag 'for-6.19/block-20251201' of ↵    Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux Pull block updates from Jens Axboe: - Fix head insertion for mq-deadline, a regression from when priority support was added - Series simplifying and improving the ublk user copy code - Various ublk related cleanups - Fixup REQ_NOWAIT handling in loop/zloop, clearing NOWAIT when the request is punted to a thread for handling - Merge and then later revert loop dio nowait support, as it ended up causing excessive stack usage for when the inline issue code needs to dip back into the full file system code - Improve auto integrity code, making it less deadlock prone - Speedup polled IO handling, by manually managing the hctx lookups - Fixes for blk-throttle for SSD devices - Small series with fixes for the S390 dasd driver - Add support for caching zones, avoiding unnecessary report zone queries - MD pull requests via Yu: - fix null-ptr-dereference regression for dm-raid0 - fix IO hang for raid5 when array is broken with IO inflight - remove legacy 1s delay to speed up system shutdown - change maintainer's email address - data can be lost if array is created with different lbs devices, fix this problem and record lbs of the array in metadata - fix rcu protection for md_thread - fix mddev kobject lifetime regression - enable atomic writes for md-linear - some cleanups - bcache updates via Coly - remove useless discard and cache device code - improve usage of per-cpu workqueues - Reorganize the IO scheduler switching code, fixing some lockdep reports as well - Improve the block layer P2P DMA support - Add support to the block tracing code for zoned devices - Segment calculation improvements, and memory alignment flexibility improvements - Set of prep and cleanup patches for ublk batching support. The actual batching hasn't been added yet, but helps shrink down the workload of getting that patchset ready for 6.20 - Fix for how the ps3 block driver handles segment offsets - Improve how block plugging handles batch tag allocations - nbd fixes for use-after-free of the configuration on device clear/put - Set of improvements and fixes for zloop - Add Damien as maintainer of the block zoned device code handling - Various other fixes and cleanups * tag 'for-6.19/block-20251201' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux: (162 commits) block/rnbd: correct all kernel-doc complaints blk-mq: use queue_hctx in blk_mq_map_queue_type md: remove legacy 1s delay in md_notify_reboot md/raid5: fix IO hang when array is broken with IO inflight md: warn about updating super block failure md/raid0: fix NULL pointer dereference in create_strip_zones() for dm-raid sbitmap: fix all kernel-doc warnings ublk: add helper of __ublk_fetch() ublk: pass const pointer to ublk_queue_is_zoned() ublk: refactor auto buffer register in ublk_dispatch_req() ublk: add `union ublk_io_buf` with improved naming ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() kfifo: add kfifo_alloc_node() helper for NUMA awareness blk-mq: fix potential uaf for 'queue_hw_ctx' blk-mq: use array manage hctx map instead of xarray ublk: prevent invalid access with DEBUG s390/dasd: Use scnprintf() instead of sprintf() s390/dasd: Move device name formatting into separate function s390/dasd: Remove unnecessary debugfs_create() return checks s390/dasd: Fix gendisk parent after copy pair swap ...
7 days    Merge tag 'printk-for-6.19' of ↵    Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/printk/linux Pull printk updates from Petr Mladek: - Allow creating nbcon console drivers with an unsafe write_atomic() callback that can only be called by the final nbcon_atomic_flush_unsafe(). Otherwise, the driver would rely on the kthread. It is going to be used as a best-effort approach for an experimental nbcon netconsole driver, see https://lore.kernel.org/r/20251121-nbcon-v1-2-503d17b2b4af@debian.org Note that a safe .write_atomic() callback is supposed to work in NMI context. But some networking drivers are not safe even in IRQ context: https://lore.kernel.org/r/oc46gdpmmlly5o44obvmoatfqo5bhpgv7pabpvb6sjuqioymcg@gjsma3ghoz35 In an ideal world, all networking drivers would be fixed first and the atomic flush would be blocked only in NMI context. But it raises the question of how reliable networking drivers are when the system is in a bad state. They might block flushing more reliable serial consoles which are more suitable for serious debugging anyway. - Allow use of the last 4 bytes of the printk ring buffer. - Prevent queuing IRQ work and block printk kthreads when consoles are suspended. Otherwise, they create unnecessary churn or even block the suspend. - Release console_lock() between each record in the kthread used for legacy consoles on RT. It might significantly speed up the boot. - Release nbcon context between each record in the atomic flush. It prevents stalls of the related printk kthread after it has lost the ownership in the middle of a record - Add support for NBCON consoles into KDB - Add %ptSp modifier for printing struct timespec64 and use it where possible - Misc code clean up * tag 'printk-for-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/printk/linux: (48 commits) printk: Use console_is_usable on console_unblank arch: um: kmsg_dump: Use console_is_usable drivers: serial: kgdboc: Drop checks for CON_ENABLED and CON_BOOT lib/vsprintf: Unify FORMAT_STATE_NUM handlers printk: Avoid irq_work for printk_deferred() on suspend printk: Avoid scheduling irq_work on suspend printk: Allow printk_trigger_flush() to flush all types tracing: Switch to use %ptSp scsi: snic: Switch to use %ptSp scsi: fnic: Switch to use %ptSp s390/dasd: Switch to use %ptSp ptp: ocp: Switch to use %ptSp pps: Switch to use %ptSp PCI: epf-test: Switch to use %ptSp net: dsa: sja1105: Switch to use %ptSp mmc: mmc_test: Switch to use %ptSp media: av7110: Switch to use %ptSp ipmi: Switch to use %ptSp igb: Switch to use %ptSp e1000e: Switch to use %ptSp ...
12 days    Merge branch 'pm-sleep'    Rafael J. Wysocki
Merge updates related to system suspend and hibernation for 6.19-rc1: - Replace snprintf() with scnprintf() in show_trace_dev_match() (Kaushlendra Kumar) - Fix memory allocation error handling in pm_vt_switch_required() (Malaya Kumar Rout) - Introduce CALL_PM_OP() macro and use it to simplify code in generic PM operations (Kaushlendra Kumar) - Add module param to backtrace all CPUs in the device power management watchdog (Sergey Senozhatsky) - Rework message printing in swsusp_save() (Rafael Wysocki) - Make it possible to change the number of hibernation compression threads (Xueqin Luo) - Clarify that only cgroup1 freezer uses PM freezer (Tejun Heo) - Add document on debugging shutdown hangs to PM documentation and correct a mistaken configuration option in it (Mario Limonciello) - Shut down wakeup source timer before removing the wakeup source from the list (Kaushlendra Kumar, Rafael Wysocki) - Introduce new PMSG_POWEROFF event for system shutdown handling with the help of PM device callbacks (Mario Limonciello) - Make pm_test delay interruptible by wakeup events (Riwen Lu) - Clean up kernel-doc comment style usage in the core hibernation code and remove unuseful comments from it (Sunday Adelodun, Rafael Wysocki) - Add support for handling wakeup events and aborting the suspend process while it is syncing file systems (Samuel Wu, Rafael Wysocki) * pm-sleep: (21 commits) PM: hibernate: Extra cleanup of comments in swap handling code PM: sleep: Call pm_sleep_fs_sync() instead of ksys_sync_helper() PM: sleep: Add support for wakeup during filesystem sync PM: hibernate: Clean up kernel-doc comment style usage PM: suspend: Make pm_test delay interruptible by wakeup events usb: sl811-hcd: Add PM_EVENT_POWEROFF into suspend callbacks scsi: Add PM_EVENT_POWEROFF into suspend callbacks PM: Introduce new PMSG_POWEROFF event PM: wakeup: Update after recent wakeup source removal ordering change PM: wakeup: Delete timer before removing wakeup source from list Documentation: power: Correct a mistaken configuration option Documentation: power: Add document on debugging shutdown hangs freezer: Clarify that only cgroup1 freezer uses PM freezer PM: hibernate: add sysfs interface for hibernate_compression_threads PM: hibernate: make compression threads configurable PM: hibernate: dynamically allocate crc->unc_len/unc for configurable threads PM: hibernate: Rework message printing in swsusp_save() PM: dpm_watchdog: add module param to backtrace all CPUs PM: sleep: Introduce CALL_PM_OP() macro to simplify code PM: console: Fix memory allocation error handling in pm_vt_switch_required() ...
2025-11-24    treewide: Drop pci_save_state() after pci_restore_state()    Lukas Wunner
In 2009, commit c82f63e411f1 ("PCI: check saved state before restore") changed the behavior of pci_restore_state() such that it became necessary to call pci_save_state() afterwards, lest recovery from subsequent PCI errors fails. The commit has just been reverted and so all the pci_save_state() after pci_restore_state() calls that have accumulated in the tree are now superfluous. Drop them. Two drivers chose a different approach to achieve the same result: drivers/scsi/ipr.c and drivers/net/ethernet/intel/e1000e/netdev.c set the pci_dev's "state_saved" flag to true before calling pci_restore_state(). Drop this as well. Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> # qat Link: https://patch.msgid.link/c2b28cc4defa1b743cf1dedee23c455be98b397a.1760274044.git.lukas@wunner.de
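The treewide change removes the now-redundant second call in patterns like this (schematic resume/recovery path, not any specific driver):

    /* before: the saved state had to be re-captured because
     * pci_restore_state() used to invalidate it */
    pci_restore_state(pdev);
    pci_save_state(pdev);

    /* after: the PCI core keeps the saved state valid, so the restore alone suffices */
    pci_restore_state(pdev);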
2025-11-19    Merge branch 6.18/scsi-fixes into 6.19/scsi-staging    Martin K. Petersen
Pull in fixes branch to resolve UFS merge conflict. Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-19    scsi: sg: Do not sleep in atomic context    Bart Van Assche
sg_finish_rem_req() calls blk_rq_unmap_user(). The latter function may sleep. Hence, call sg_finish_rem_req() with interrupts enabled instead of disabled. Reported-by: syzbot+c01f8e6e73f20459912e@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-scsi/691560c4.a70a0220.3124cb.001a.GAE@google.com/ Cc: Hannes Reinecke <hare@suse.de> Cc: stable@vger.kernel.org Fixes: 97d27b0dd015 ("scsi: sg: close race condition in sg_remove_sfp_usercontext()") Signed-off-by: Bart Van Assche <bvanassche@acm.org> Link: https://patch.msgid.link/20251113181643.1108973-1-bvanassche@acm.org Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
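Schematically, the fix moves the potentially sleeping call out of the interrupts-disabled region; the lock and variables shown are illustrative, only sg_finish_rem_req() and blk_rq_unmap_user() are named in the commit:

    /* before: blk_rq_unmap_user() may sleep while interrupts are disabled */
    spin_lock_irqsave(&sfp->rq_list_lock, iflags);
    sg_finish_rem_req(srp);
    spin_unlock_irqrestore(&sfp->rq_list_lock, iflags);

    /* after: drop the lock (re-enabling interrupts) before the sleeping call */
    spin_unlock_irqrestore(&sfp->rq_list_lock, iflags);
    sg_finish_rem_req(srp);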
2025-11-19    scsi: scsi_debug: Support injecting unaligned write errors    Bart Van Assche
Allow user space software, e.g. a blktests test, to inject unaligned write errors. Acked-by: Douglas Gilbert <dgilbert@interlog.com> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Ming Lei <ming.lei@redhat.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Link: https://patch.msgid.link/20251113174151.1095574-1-bvanassche@acm.org Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-19    scsi: qla2xxx: Fix improper freeing of purex item    Zilin Guan
In qla2xxx_process_purls_iocb(), an item is allocated via qla27xx_copy_multiple_pkt(), which internally calls qla24xx_alloc_purex_item(). The qla24xx_alloc_purex_item() function may return a pre-allocated item from a per-adapter pool for small allocations, instead of dynamically allocating memory with kzalloc(). An error handling path in qla2xxx_process_purls_iocb() incorrectly uses kfree() to release the item. If the item was from the pre-allocated pool, calling kfree() on it is a bug that can lead to memory corruption. Fix this by using the correct deallocation function, qla24xx_free_purex_item(), which properly handles both dynamically allocated and pre-allocated items. Fixes: 875386b98857 ("scsi: qla2xxx: Add Unsolicited LS Request and Response Support for NVMe") Signed-off-by: Zilin Guan <zilin@seu.edu.cn> Reviewed-by: Himanshu Madhani <hmadhani2024@gmail.com> Link: https://patch.msgid.link/20251113151246.762510-1-zilin@seu.edu.cn Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
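The shape of the fix, per the description above (error-path context trimmed):

    /* before: wrong when the item came from the per-adapter pool */
    kfree(item);

    /* after: handles both pool-backed and kzalloc()'d purex items */
    qla24xx_free_purex_item(item);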
2025-11-19    scsi: snic: Switch to use %ptSp    Andy Shevchenko
Use %ptSp instead of open coded variants to print content of struct timespec64 in human readable format. Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Karan Tilak Kumar <kartilak@cisco.com> Link: https://patch.msgid.link/20251113150217.3030010-21-andriy.shevchenko@linux.intel.com Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-11-19    scsi: fnic: Switch to use %ptSp    Andy Shevchenko
Use %ptSp instead of open coded variants to print content of struct timespec64 in human readable format. Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Karan Tilak Kumar <kartilak@cisco.com> Link: https://patch.msgid.link/20251113150217.3030010-20-andriy.shevchenko@linux.intel.com [pmladek@suse.com: Fixed output ordering and last_read_time update.] Signed-off-by: Petr Mladek <pmladek@suse.com>
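For reference, a minimal example of the specifier both of these conversions switch to; the message text is made up and the %ptSp extension is the one added by the printk series merged above:

    struct timespec64 ts;

    ktime_get_real_ts64(&ts);
    pr_info("last completion time: %ptSp\n", &ts);	/* timespec64 printed in human-readable form */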
2025-11-17    Merge back earlier material related to system sleep for 6.19    Rafael J. Wysocki
2025-11-14    scsi: Add PM_EVENT_POWEROFF into suspend callbacks    Mario Limonciello (AMD)
If the PM core uses hibernation callbacks for powering off the system, drivers will receive PM_EVENT_POWEROFF and should handle it the same as they previously handled PM_EVENT_HIBERNATE. Support this case in the scsi driver. No functional changes. Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Tested-by: Eric Naim <dnaim@cachyos.org> Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org> Link: https://patch.msgid.link/20251112224025.2051702-3-superm1@kernel.org Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
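A schematic of the kind of change; the callback is hypothetical and only the PM_EVENT_* values come from the series:

    static int example_scsi_suspend(struct device *dev, pm_message_t msg)
    {
        switch (msg.event) {
        case PM_EVENT_SUSPEND:
            /* ... regular suspend path ... */
            return 0;
        case PM_EVENT_HIBERNATE:
        case PM_EVENT_POWEROFF:	/* new event, handled exactly like hibernate */
            /* ... hibernate/poweroff path ... */
            return 0;
        default:
            return 0;
        }
    }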
2025-11-12Merge patch series "replace old wq(s), added WQ_PERCPU to alloc_workqueue"Martin K. Petersen
Marco Crivellari <marco.crivellari@suse.com> says: Hi, === Current situation: problems === Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is set to the housekeeping CPUs, for !WQ_UNBOUND the local CPU is selected. This leads to different scenarios if a work item is scheduled on an isolated CPU where "delay" value is 0 or greater than 0: schedule_delayed_work(, 0); This will be handled by __queue_work() that will queue the work item on the current local (isolated) CPU, while: schedule_delayed_work(, 1); Will move the timer to a housekeeping CPU, and schedule the work there. Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. === Recent changes to the WQ API === The following are the recent changes in the Workqueue API: - commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") - commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") The old workqueues will be removed in a future release cycle. === Introduced Changes by this series === 1) [P 1] Replace uses of system_wq and system_unbound_wq system_unbound_wq is to be used when locality is not required. Because of that, system_unbound_wq has been replaced with system_dfl_wq, to make clear it should be used when locality is not required. 2) [P 2-3-4] WQ_PERCPU added to alloc_workqueue() This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. Thanks! Link: https://patch.msgid.link/20251031095643.74246-1-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
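In practice, each per-driver patch in the series amounts to a one-flag change of this shape (the workqueue name is illustrative):

    struct workqueue_struct *wq;

    /* before: per-CPU behaviour is only implied by the absence of WQ_UNBOUND */
    wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 0);

    /* after: per-CPU behaviour is requested explicitly */
    wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);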
2025-11-12    scsi: pm80xx: Add WQ_PERCPU to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in: commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") This adds a new WQ_PERCPU flag to explicitly request to alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251107155257.316728-1-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qedi: Add WQ_PERCPU to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in: commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251107151618.281250-1-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: target: ibmvscsi: Add WQ_PERCPU to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in: commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251107150542.271229-1-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qedf: Add WQ_PERCPU to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in: commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251107150155.267651-3-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: bnx2fc: Add WQ_PERCPU to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in: commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251107150155.267651-2-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: be2iscsi: Add WQ_PERCPU to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This continues the effort to refactor workqueue APIs, which began with the introduction of new workqueues and a new alloc_workqueue flag in: commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251107144949.256894-1-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: lpfc: WQ_PERCPU added to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. This patch continues the effort to refactor workqueue APIs, which began with the change introducing new workqueues and a new alloc_workqueue flag: commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq") commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag") With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Reviewed-by: Justin Tee <justin.tee@broadcom.com> Link: https://patch.msgid.link/20251104110808.123424-1-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: scsi_transport_fc: WQ_PERCPU added to alloc_workqueue users()    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Reviewed-by: Justin Tee <justin.tee@broadcom.com> Link: https://patch.msgid.link/20251031095643.74246-5-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: scsi_dh_alua: WQ_PERCPU added to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251031095643.74246-4-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qla2xxx: WQ_PERCPU added to alloc_workqueue() users    Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they’re needed and reducing noise when CPUs are isolated. This change adds a new WQ_PERCPU flag to explicitly request alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND must now use WQ_PERCPU. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251031095643.74246-3-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: scsi_transport_iscsi: Replace use of system_unbound_wq with system_dfl_wq    Marco Crivellari
Currently if a user enqueue a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. system_unbound_wq should be the default workqueue so as not to enforce locality constraints for random work whenever it's not required. Adding system_dfl_wq to encourage its use when unbound work should be used. The old system_unbound_wq will be kept for a few release cycles. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251031095643.74246-2-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qla2xxx: Replace use of system_unbound_wq with system_dfl_wq    Marco Crivellari
Currently if a user enqueue a work item using schedule_delayed_work() the used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work() that is using system_wq and queue_work(), that makes use again of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. system_unbound_wq should be the default workqueue so as not to enforce locality constraints for random work whenever it's not required. Adding system_dfl_wq to encourage its use when unbound work should be used. The old system_unbound_wq will be kept for a few release cycles. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://patch.msgid.link/20251031095643.74246-2-marco.crivellari@suse.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
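The replacement described in these two patches is mechanical; a sketch with a hypothetical work item:

    static DECLARE_WORK(example_work, example_work_fn);

    /* before */
    queue_work(system_unbound_wq, &example_work);

    /* after: same unbound semantics under the preferred name */
    queue_work(system_dfl_wq, &example_work);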
2025-11-12    scsi: scsi_debug: Fix uninitialized pointers with __free attr    Ally Heev
Uninitialized pointers with the '__free' attribute can cause undefined behaviour, as the random address held by the pointer is freed automatically when the pointer goes out of scope. scsi doesn't have any bugs related to this as of now, but it is better to declare and initialize pointers with the '__free' attr in one statement to ensure proper scope-based cleanup. Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Closes: https://lore.kernel.org/all/aPiG_F5EBQUjZqsl@stanley.mountain/ Signed-off-by: Ally Heev <allyheev@gmail.com> Link: https://patch.msgid.link/20251105-aheev-uninitialized-free-attr-scsi-v1-1-d28435a0a7ea@gmail.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
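A sketch of the pattern the commit recommends, with hypothetical names; __free(kfree) comes from <linux/cleanup.h>:

    /* problematic: an early return before the assignment makes the
     * scope-based cleanup call kfree() on an uninitialized pointer */
    char *buf __free(kfree);
    /* ... any early return here would free garbage ... */
    buf = kzalloc(len, GFP_KERNEL);

    /* preferred: declare and initialize in one statement */
    char *buf __free(kfree) = kzalloc(len, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;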
2025-11-12    scsi: pm: Drop unneeded call to pm_runtime_mark_last_busy()    Nuno Sá
There's no need to explicitly call pm_runtime_mark_last_busy(): pm_runtime_autosuspend() has been doing it itself since commit 08071e64cb64 ("PM: runtime: Mark last busy stamp in pm_runtime_autosuspend()"). Signed-off-by: Nuno Sá <nuno.sa@analog.com> Link: https://patch.msgid.link/20251111-scsi-pm-improv-v2-1-626b8491f4b4@analog.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
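The resulting simplification, schematically:

    /* before */
    pm_runtime_mark_last_busy(dev);
    pm_runtime_autosuspend(dev);

    /* after: pm_runtime_autosuspend() updates the last-busy stamp itself */
    pm_runtime_autosuspend(dev);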
2025-11-12    scsi: sim710: Fix resource leak by adding missing ioport_unmap() calls    Haotian Zhang
The driver calls ioport_map() to map I/O ports in sim710_probe_common() but never calls ioport_unmap() to release the mapping. This causes resource leaks both in the error path when request_irq() fails and in the normal device removal path via sim710_device_remove(). Add ioport_unmap() calls in the out_release error path and in sim710_device_remove(). Fixes: 56fece20086e ("[PATCH] finally fix 53c700 to use the generic iomem infrastructure") Signed-off-by: Haotian Zhang <vulab@iscas.ac.cn> Link: https://patch.msgid.link/20251029032555.1476-1-vulab@iscas.ac.cn Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
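Schematic of the error-path part of the fix; the request_irq() arguments, mapping size, and labels are illustrative, and the same ioport_unmap() is added to sim710_device_remove():

    void __iomem *base_addr = ioport_map(base_io, 64);

    if (request_irq(irq, irq_handler, IRQF_SHARED, "sim710", host)) {
        ioport_unmap(base_addr);	/* added: undo ioport_map() before bailing out */
        goto out_release;
    }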
2025-11-12    scsi: aic94xx: fix use-after-free in device removal path    Junrui Luo
The asd_pci_remove() function fails to synchronize with pending tasklets before freeing the asd_ha structure, leading to a potential use-after-free vulnerability. When a device removal is triggered (via hot-unplug or module unload), a race condition can occur. The fix adds tasklet_kill() before freeing the asd_ha structure, ensuring all scheduled tasklets complete before cleanup proceeds. Reported-by: Yuhao Jiang <danisjiang@gmail.com> Reported-by: Junrui Luo <moonafterrain@outlook.com> Fixes: 2908d778ab3e ("[SCSI] aic94xx: new driver") Cc: stable@vger.kernel.org Signed-off-by: Junrui Luo <moonafterrain@outlook.com> Link: https://patch.msgid.link/ME2PR01MB3156AB7DCACA206C845FC7E8AFFDA@ME2PR01MB3156.ausprd01.prod.outlook.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
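A sketch of the ordering the fix enforces; the tasklet field is an assumption, only asd_pci_remove() and the asd_ha structure are named in the commit:

    static void asd_pci_remove(struct pci_dev *dev)
    {
        struct asd_ha_struct *asd_ha = pci_get_drvdata(dev);

        /* ... existing interrupt and hardware teardown ... */

        /* new: wait for any scheduled or running tasklet to finish before
         * asd_ha is freed (tasklet field name assumed for illustration) */
        tasklet_kill(&asd_ha->seq.dl_tasklet);

        /* ... existing cleanup that ultimately frees asd_ha ... */
    }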
2025-11-12Merge patch series "qla2xxx target mode improvements"Martin K. Petersen
Tony Battersby <tonyb@cybernetics.com> says: This patch series improves the qla2xxx FC driver in target mode. I developed these patches using the out-of-tree SCST target-mode subsystem (https://scst.sourceforge.net/), although most of the improvements will also apply to the other target-mode subsystems such as the in-tree LIO. Unfortunately qla2xxx+LIO does not pass all of my tests, but my patches do not make it any worse (results below). These patches have been well-tested at my employer with qla2xxx+SCST in both initiator mode and target mode and with a variety of FC HBAs and initiators. Since SCST is out-of-tree, some of the patches have parts that apply in-tree and other parts that apply out-of-tree to SCST. The SCST patches can be found in the v2 posting linked above. All patches apply to linux 6.17 and SCST 3.10 master branch. Summary of patches: - bugfixes - cleanups - improve handling of aborts and task management requests - improve log message - add back SLER / SRR support (removed in 2017) Some of these patches improve handling of aborts and task management requests. This is some of the testing that I did: Test 1: Use /dev/sg to queue random disk I/O with short timeouts; make sure cmds are aborted successfully. Test 2: Queue lots of disk I/O, then use "sg_reset -N -d /dev/sg" on initiator to reset logical unit. Test 3: Queue lots of disk I/O, then use "sg_reset -N -t /dev/sg" on initiator to reset target. Test 4: Queue lots of disk I/O, then use "sg_reset -N -b /dev/sg" on initiator to reset bus. Test 5: Queue lots of disk I/O, then use "sg_reset -N -H /dev/sg" on initiator to reset host. Test 6: Use fiber channel attenuator to trigger SRR during write/read/compare test; check data integrity. With my patches, SCST passes all of these tests. Results with in-tree LIO target-mode subsystem: Test 1: Seems to abort the same cmd multiple times (both qlt_24xx_retry_term_exchange() and __qlt_send_term_exchange()). But cmds get aborted, so give it a pass? Test 2: Seems to work; cmds are aborted. Test 3: Target reset doesn't seem to abort cmds, instead, a few seconds later: qla2xxx [0000:04:00.0]-f058:9: qla_target(0): tag 1314312, op 2a: CTIO with TIMEOUT status 0xb received (state 1, port 51:40:2e:c0:18:1d:9f:cc, LUN 0) Tests 4 and 5: The initiator is unable to log back in to the target; the following messages are repeated over and over on the target: qla2xxx [0000:04:00.0]-e01c:9: Sending TERM ELS CTIO (ha=00000000f8811390) qla2xxx [0000:04:00.0]-f097:9: Linking sess 000000008df5aba8 [0] wwn 51:40:2e:c0:18:1d:9f:cc with PLOGI ACK to wwn 51:40:2e:c0:18:1d:9f:cc s_id 00:00:01, ref=2 pla 00000000835a9271 link 0 Test 6: passes with my patches; SRR not supported previously. So qla2xxx+LIO seems a bit flaky when handling exceptions, but my patches do not make it any worse. Perhaps someone who is more familiar with LIO can look at the difference between LIO and SCST and figure out how to improve it. Tony Battersby https://www.cybernetics.com/ v1: https://lore.kernel.org/r/f8977250-638c-4d7d-ac0c-65f742b8d535@cybernetics.com/ v2: https://lore.kernel.org/linux-scsi/e95ee7d0-3580-4124-b854-7f73ca3a3a84@cybernetics.com/ Link: https://patch.msgid.link/aaea0ab0-da8b-4153-9369-60db7507ff7a@cybernetics.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qla2xxx: target: Improve safety of cmd lookup by handle    Tony Battersby
The driver associates two different structs with numeric handles and passes the handles to the hardware. When the hardware passes the handle back to the driver, the driver consults a table of void * to convert the handle back to the struct without checking the type of struct. This can lead to type confusion if the HBA firmware misbehaves (and some firmware versions do). So verify the type of struct is what is expected before using it. But we can also do better than that. Also verify that the exchange address of the message sent from the hardware matches the exchange address of the command being returned. This adds an extra guard against buggy HBA firmware that returns duplicate messages multiple times (which has also been seen) in case the driver has reused the handle for a different command of the same type. These problems were seen on a QLE2694L with firmware 9.08.02 when testing SLER / SRR support. The SRR caused the HBA to flood the response queue with hundreds of bogus entries. Signed-off-by: Tony Battersby <tonyb@cybernetics.com> Link: https://patch.msgid.link/7c7cb574-fe62-42ae-b800-d136d8dd89ca@cybernetics.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qla2xxx: target: Add back SRR support    Tony Battersby
Background: loading qla2xxx with "ql2xtgt_tape_enable=1" enables Sequence Level Error Recovery (SLER), which is most commonly used for tape drives. With SLER enabled, if there is a recoverable I/O error during a SCSI command, a Sequence Retransmission Request (SRR) will be used to retry the I/O at a low-level completely within the driver without propagating the error to the upper levels of the SCSI stack. SRR support was removed in 2017 by commit 2c39b5ca2a8c ("qla2xxx: Remove SRR code"). Add it back, new and improved. The old removed SRR code used sequence numbers to correlate the SRR CTIOs with SRR immediate notify messages. I don't see how that would work reliably with MSI-X interrupts and multiple queues. So instead use the exchange address to find the command associated with the immediate notify (qlt_srr_to_cmd). The old removed SRR code had a function qlt_check_srr_debug() to simulate a SRR, but it didn't work for me. Instead I just used fiber optic attenuators attached to the FC cable to reduce the strength of the signal and induce errors. Unfortunately this only worked for inducing SRRs on Data-Out (write) commands, so that is all I was able to test. The code to build a new scatterlist for a SRR with nonzero offset has been improved to reduce memory requirements and has been well-tested. However it does not support protection information. When a single cmd gets multiple SRRs, the old removed SRR code would restore the data buffer from the values in cmd->se_cmd before processing the new SRR. That might be needed if the offset for the new SRR was lower than the offset for the previous SRR, but I am not sure if that can happen. In my testing, when a single cmd gets multiple SRRs, the SRR offset always increases or stays the same. But in case it can decrease, I added the function qlt_restore_orig_sg(). If this is not supposed to happen then qlt_restore_orig_sg() can be removed to simplify the code. I ran into some HBA firmware bugs with QLE269x, QLE27xx, and QLE28xx firmware 9.05.xx - 9.08.xx where a SRR would cause the HBA to misbehave badly. Since SRRs are rare and therefore difficult to test, I figured it would be worth checking for the buggy firmware and disabling SLER with a warning instead of letting others run into the same problem on the rare occasion that they get a SRR. This turned out to be difficult because the firmware version isn't known in the normal NVRAM config routine, so I added a second NVRAM config routine that is called after the firmware version is known. Signed-off-by: Tony Battersby <tonyb@cybernetics.com> Link: https://patch.msgid.link/654b7181-b79e-40ed-a15b-6d6e441a5d5f@cybernetics.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qla2xxx: target: Improve cmd logging    Tony Battersby
- Add the command tag to various messages so that different messages about the same command can be correlated. - For CTIO errors (i.e. when the HW reports an error about a cmd), print the cmd tag, opcode, state, initiator WWPN, and LUN. This info helps an administrator determine what is going wrong. - When a command experiences a transport error, log a message when it is freed. This makes debugging exceptions easier. Signed-off-by: Tony Battersby <tonyb@cybernetics.com> Link: https://patch.msgid.link/c579987d-5658-41ae-9653-f0e58c9d1880@cybernetics.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qla2xxx: target: Add cmd->rsp_sent    Tony Battersby
Add cmd->rsp_sent to indicate that the SCSI status has been sent successfully, so that SCST can be informed of any transport errors. This will also be used for logging in later patches. Signed-off-by: Tony Battersby <tonyb@cybernetics.com> Link: https://patch.msgid.link/d4b0203f-7817-4517-9789-5866bb24fad7@cybernetics.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12    scsi: qla2xxx: target: Fix invalid memory access with big CDBs    Tony Battersby
struct atio7_fcp_cmnd is a variable-length data structure because of add_cdb_len, but it is embedded in struct atio_from_isp and copied around like a fixed-length data structure. For big CDBs > 16 bytes, get_datalen_for_atio() called on a fixed-length copy of the atio will access invalid memory. In some cases this can be fixed by moving the atio to the end of the data structure and using a variable-length allocation. In other cases such as allocating struct qla_tgt_cmd, the fixed-length data structures are preallocated for speed, so in the case that add_cdb_len != 0, allocate a separate buffer for the CDB. Also add memcpy_atio() as a safeguard against invalid memory accesses. Signed-off-by: Tony Battersby <tonyb@cybernetics.com> Link: https://patch.msgid.link/306a9d0b-3c89-42fc-a69c-eebca8171347@cybernetics.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12scsi: qla2xxx: Fix TMR failure handlingTony Battersby
(target mode)

If handle_tmr() fails:

- The code for QLA_TGT_ABTS results in memory-use-after-free and double-free:

  qlt_do_tmr_work()
    qlt_build_abts_resp_iocb()
      qpair->req->outstanding_cmds[h] = (srb_t *)mcmd;
    mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);    <-- FIRST FREE

  qlt_handle_abts_completion()
    mcmd = qlt_ctio_to_cmd()
      cmd = req->outstanding_cmds[h];
      return cmd;
    vha = mcmd->vha;                                 <-- USE-AFTER-FREE
    ha->tgt.tgt_ops->free_mcmd(mcmd);                <-- SECOND FREE

- qlt_send_busy() makes no sense because it sends a SCSI command response instead of a TMR response.

Instead just call qlt_xmit_tm_rsp() to send a TMR failed response, since that code is well-tested and handles a number of corner cases. But it would be incorrect to call ha->tgt.tgt_ops->free_mcmd() after handle_tmr() failed, so add a flag to mcmd indicating the proper way to free the mcmd so that qlt_xmit_tm_rsp() can be used for both cases.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/09a1ff3d-6738-4953-a31b-10e89c540462@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
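The "flag to mcmd indicating the proper way to free" approach generalizes to recording, when the management command is set up, which release path owns it, so that every exit path frees it exactly once and in the same way. Below is a hedged sketch of that idea only; the enum, struct, and helper names are invented and do not match the qla2xxx data structures.

	/*
	 * Idea sketch only; the enum, struct, and helpers are invented and
	 * do not match the driver's data structures.
	 */
	enum mcmd_release {
		MCMD_RELEASE_TGT_OPS,	/* give back to the target midlevel */
		MCMD_RELEASE_MEMPOOL,	/* free directly to the mempool */
	};

	struct mgmt_cmd_example {
		enum mcmd_release release;
		/* ... */
	};

	/* hypothetical release helpers standing in for the real free paths */
	static void release_to_tgt_ops(struct mgmt_cmd_example *mcmd) { }
	static void release_to_mempool(struct mgmt_cmd_example *mcmd) { }

	static void mgmt_cmd_put(struct mgmt_cmd_example *mcmd)
	{
		/* single place that frees, driven by how the cmd was set up */
		switch (mcmd->release) {
		case MCMD_RELEASE_TGT_OPS:
			release_to_tgt_ops(mcmd);
			break;
		case MCMD_RELEASE_MEMPOOL:
			release_to_mempool(mcmd);
			break;
		}
	}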
2025-11-12scsi: qla2xxx: target: Improve checks in qlt_xmit_response() / qlt_rdy_to_xfer()Tony Battersby
Similar fixes to both functions:

qlt_xmit_response:

- If the cmd cannot be processed, remember to call ->free_cmd() to prevent the target-mode midlevel from seeing a cmd lockup.
- Do not try to send the response if the exchange has been terminated.
- Check for chip reset once after lock instead of both before and after lock.
- Give errors from qlt_pre_xmit_response() a lower priority to compensate for removing the first check for chip reset.

qlt_rdy_to_xfer:

- Check for chip reset after lock instead of before lock to avoid races.
- Do not try to receive data if the exchange has been terminated.
- Give errors from qlt_pci_map_calc_cnt() a lower priority to compensate for moving the check for chip reset.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/cd6ccd31-33fa-4454-be36-507bf578a546@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12scsi: qla2xxx: target: Fix races with aborting commandsTony Battersby
cmd->cmd_lock only protects cmd->aborted, but when deciding how to process a cmd, it is necessary to consider other factors such as cmd->state and whether the chip has been reset, which are protected by qpair->qp_lock_ptr. So replace cmd_lock with qp_lock_ptr, which makes it possible to check additional values and make decisions about what to do without racing with the CTIO handler and other code.

- Lock cmd->qpair->qp_lock_ptr when aborting a cmd.
- Eliminate cmd->cmd_lock and change cmd->aborted to a bitfield since it is now protected by qp_lock_ptr just like all the other flags.
- Add another command state QLA_TGT_STATE_DONE to avoid any possible races between qlt_abort_cmd() and tgt_ops->free_cmd().
- Add the cmd->sent_term_exchg flag to indicate if qlt_send_term_exchange() has already been called.
- Export qlt_send_term_exchange() for SCST so that it can be called directly instead of trying to make qlt_abort_cmd() work for both TMR abort and HW timeout.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/2c8d03e4-308b-4d5a-a418-a334be23f815@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
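To make the lock ownership concrete, here is a minimal sketch of the abort pattern described above: take the queue-pair lock, inspect state and flags under it, and terminate the exchange at most once. The types and helper here are invented; only qp_lock_ptr, aborted, sent_term_exchg, and the idea of a DONE state come from the patch description, so this is not the actual qlt_abort_cmd().

	/*
	 * Pattern sketch with invented types; not the actual qlt_abort_cmd().
	 */
	#include <linux/spinlock.h>

	struct tgt_cmd_example {
		spinlock_t	*qp_lock_ptr;	/* shared with the CTIO handler */
		int		state;
		unsigned int	aborted:1;
		unsigned int	sent_term_exchg:1;
	};

	#define EXAMPLE_STATE_DONE	3	/* stands in for QLA_TGT_STATE_DONE */

	static void terminate_exchange_example(struct tgt_cmd_example *cmd)
	{
		/* the real driver sends a terminate-exchange IOCB here */
	}

	static void abort_cmd_example(struct tgt_cmd_example *cmd)
	{
		unsigned long flags;

		spin_lock_irqsave(cmd->qp_lock_ptr, flags);

		/* state and flags are only stable while qp_lock_ptr is held */
		if (cmd->state != EXAMPLE_STATE_DONE && !cmd->aborted) {
			cmd->aborted = 1;
			if (!cmd->sent_term_exchg) {
				cmd->sent_term_exchg = 1;
				terminate_exchange_example(cmd);
			}
		}

		spin_unlock_irqrestore(cmd->qp_lock_ptr, flags);
	}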
2025-11-12scsi: qla2xxx: Clear cmds after chip resetTony Battersby
Commit aefed3e5548f ("scsi: qla2xxx: target: Fix offline port handling and host reset handling") caused two problems:

1. Commands sent to the FW after a chip reset got stuck and were never freed, since the FW is not going to respond to them anymore.

2. BUG_ON(cmd->sg_mapped) in qlt_free_cmd().

Commit 26f9ce53817a ("scsi: qla2xxx: Fix missed DMA unmap for aborted commands") attempted to fix this, but introduced another bug under different circumstances when two different CPUs were racing to call qlt_unmap_sg() at the same time:

BUG_ON(!valid_dma_direction(dir)) in dma_unmap_sg_attrs().

So revert "scsi: qla2xxx: Fix missed DMA unmap for aborted commands" and partially revert "scsi: qla2xxx: target: Fix offline port handling and host reset handling" at __qla2x00_abort_all_cmds.

Fixes: aefed3e5548f ("scsi: qla2xxx: target: Fix offline port handling and host reset handling")
Fixes: 26f9ce53817a ("scsi: qla2xxx: Fix missed DMA unmap for aborted commands")
Co-developed-by: Dmitry Bogdanov <d.bogdanov@yadro.com>
Signed-off-by: Dmitry Bogdanov <d.bogdanov@yadro.com>
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/0e7e5d26-e7a0-42d1-8235-40eeb27f3e98@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12Merge patch series "Optimize the hot path in the UFS driver"Martin K. Petersen
Bart Van Assche <bvanassche@acm.org> says:

Hi Martin,

This patch series optimizes the hot path of the UFS driver by making struct scsi_cmnd and struct ufshcd_lrb adjacent. Making these two data structures adjacent is achieved as follows:

@@ -9040,6 +9046,7 @@ static const struct scsi_host_template ufshcd_driver_template = {
 	.name			= UFSHCD,
 	.proc_name		= UFSHCD,
 	.map_queues		= ufshcd_map_queues,
+	.cmd_size		= sizeof(struct ufshcd_lrb),
 	.init_cmd_priv		= ufshcd_init_cmd_priv,
 	.queuecommand		= ufshcd_queuecommand,
 	.mq_poll		= ufshcd_poll,

The following changes had to be made prior to making these two data structures adjacent:

* Add support for driver-internal and reserved commands in the SCSI core.

* Instead of making the reserved command slot (hba->reserved_slot) invisible to the SCSI core, let the SCSI core allocate a reserved command.

* Remove all UFS data structure members that are no longer needed because struct scsi_cmnd and struct ufshcd_lrb are now adjacent.

* Call ufshcd_init_lrb() from inside the code for queueing a command instead of calling this function before I/O starts. This is necessary because ufshcd_memory_alloc() allocates fewer instances than the block layer allocates requests. See also the following code in the block layer core:

	if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx,
				hctx->numa_node))

  Although the UFS driver could be modified such that ufshcd_init_lrb() is called from ufshcd_init_cmd_priv(), realizing this would require moving the memory allocations that happen inside ufshcd_memory_alloc() into ufshcd_init_cmd_priv(). That would make this patch series even larger. Although ufshcd_init_lrb() is called for each command, the benefits of reduced indirection and better cache efficiency outweigh the small overhead of per-command lrb initialization.

* ufshcd_add_scsi_host() now happens before any device management commands are submitted. This change is necessary because this patch makes device management command allocation happen when the SCSI host is allocated.

* Allocate as many command slots as the host controller supports. Decrease host->cmds_per_lun if necessary once it is clear whether or not the UFS device supports fewer command slots than the host controller.

Please consider this patch series for the next merge window.

Thanks,

Bart.

Link: https://patch.msgid.link/20251031204029.2883185-1-bvanassche@acm.org
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
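For readers unfamiliar with the .cmd_size mechanism shown in the hunk above: setting .cmd_size in the SCSI host template makes the SCSI core allocate that many extra bytes directly behind each struct scsi_cmnd, and scsi_cmd_priv() returns a pointer to that area. Below is a minimal, generic sketch of that mechanism; it is not the actual ufshcd code, and the struct and function names are placeholders.

	/*
	 * Generic sketch of .cmd_size/scsi_cmd_priv(); not the ufshcd code.
	 * The SCSI core allocates cmd_size bytes adjacent to each
	 * struct scsi_cmnd, so no separate per-command lookup is needed.
	 */
	#include <scsi/scsi_cmnd.h>
	#include <scsi/scsi_host.h>

	struct my_lrb {			/* stands in for struct ufshcd_lrb */
		u32	task_tag;
		/* ... per-command driver state ... */
	};

	static int my_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
	{
		struct my_lrb *lrb = scsi_cmd_priv(cmd);	/* adjacent to cmd */

		lrb->task_tag = scsi_cmd_to_rq(cmd)->tag;
		/* ... build and issue the command ... */
		return 0;
	}

	static const struct scsi_host_template my_template = {
		.name		= "example",
		.cmd_size	= sizeof(struct my_lrb),  /* allocated next to scsi_cmnd */
		.queuecommand	= my_queuecommand,
	};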
2025-11-12scsi: qla2xxx: target: Fix term exchange when cmd_sent_to_fw == 1Tony Battersby
Properly set the nport_handle field of the terminate exchange message. Previously when this field was not set properly, the term exchange would fail when cmd_sent_to_fw == 1 but work when cmd_sent_to_fw == 0 (i.e. it would fail when the HW was actively transferring data or status for the cmd but work when the HW was idle).

With this change, term exchange works in any cmd state, which now makes it possible to abort a command that is locked up in the HW.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/1a221699-969b-4f28-8ea4-395d2f7a7c0a@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12scsi: qla2xxx: target: Improve debug output for term exchangeTony Battersby
Print better debug info when terminating a command, and print the response status from the hardware.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/22f8a0b6-0e24-474d-9f28-9d65c9b7af03@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12scsi: qla2xxx: target: Remove code for unsupported hardwareTony Battersby
As far as I can tell, CONTINUE_TGT_IO_TYPE and CTIO_A64_TYPE are message types from non-FWI2 boards (older than ISP24xx), which are not supported by qla_target.c. Removing them makes it possible to turn a void * into the real type and avoid some typecasts.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/cb006628-e321-4e30-a60b-08b37b8685a5@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-11-12scsi: qla2xxx: Use reinit_completion on mbx_intr_compTony Battersby
If a mailbox command completes immediately after wait_for_completion_timeout() times out, ha->mbx_intr_comp could be left in an inconsistent state, causing the next mailbox command not to wait for the hardware. Fix by reinitializing the completion before use.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/11b6485e-0bfd-4784-8f99-c06a196dad94@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
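The completion pattern above is generic: if complete() can fire after wait_for_completion_timeout() has already returned 0, the completion's done count stays incremented and the next waiter returns immediately without waiting for the hardware. Below is a generic sketch of the fix; the names are placeholders and this is not the qla2xxx mailbox code.

	/*
	 * Generic sketch of the race the patch addresses (not the qla2xxx
	 * code): a late complete() after a timed-out wait leaves ->done
	 * nonzero, so the next wait returns immediately unless the
	 * completion is reinitialized first.
	 */
	#include <linux/completion.h>
	#include <linux/errno.h>
	#include <linux/jiffies.h>

	static DECLARE_COMPLETION(mbx_intr_comp);

	static int issue_mbx_command(void)
	{
		/*
		 * Discard any stale completion left over from a command
		 * that completed just after its wait timed out.
		 */
		reinit_completion(&mbx_intr_comp);

		/* ... write the mailbox registers and ring the doorbell ... */

		if (!wait_for_completion_timeout(&mbx_intr_comp, 30 * HZ))
			return -ETIMEDOUT;	/* interrupt may still fire later */

		return 0;
	}

	/* interrupt handler side */
	static void mbx_interrupt(void)
	{
		complete(&mbx_intr_comp);
	}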
2025-11-12scsi: qla2xxx: Fix lost interrupts with qlini_mode=disabledTony Battersby
When qla2xxx is loaded with qlini_mode=disabled, ha->flags.disable_msix_handshake is used before it is set, resulting in the wrong interrupt handler being used on certain HBAs (qla2xxx_msix_rsp_q_hs() is used when qla2xxx_msix_rsp_q() should be used).

The only difference between these two interrupt handlers is that the _hs() version writes to a register to clear the "RISC" interrupt, whereas the other version does not. So this bug results in the RISC interrupt being cleared when it should not be. This occasionally causes a different interrupt handler qla24xx_msix_default() for a different vector to see ((stat & HSRX_RISC_INT) == 0) and ignore its interrupt, which then causes problems like:

qla2xxx [0000:02:00.0]-d04c:6: MBX Command timeout for cmd 20, iocontrol=8 jiffies=1090c0300 mb[0-3]=[0x4000 0x0 0x40 0xda] mb7 0x500 host_status 0x40000010 hccr 0x3f00
qla2xxx [0000:02:00.0]-101e:6: Mailbox cmd timeout occurred, cmd=0x20, mb[0]=0x20. Scheduling ISP abort

(the cmd varies; sometimes it is 0x20, 0x22, 0x54, 0x5a, 0x5d, or 0x6a)

This problem can be reproduced with a 16 or 32 Gbps HBA by loading qla2xxx with qlini_mode=disabled and running a high IOPS test while triggering frequent RSCN database change events.

While analyzing the problem I discovered that even with disable_msix_handshake forced to 0, it is not necessary to clear the RISC interrupt from qla2xxx_msix_rsp_q_hs() (more below). So just completely remove qla2xxx_msix_rsp_q_hs() and the logic for selecting it, which also fixes the bug with qlini_mode=disabled.

The test below describes the justification for not needing qla2xxx_msix_rsp_q_hs():

Force disable_msix_handshake to 0:

qla24xx_config_rings():
	if (0 && (ha->fw_attributes & BIT_6) && (IS_MSIX_NACK_CAPABLE(ha)) &&
	    (ha->flags.msix_enabled)) {

In qla24xx_msix_rsp_q() and qla2xxx_msix_rsp_q_hs(), check:

	(rd_reg_dword(&reg->host_status) & HSRX_RISC_INT)

Count the number of calls to each function with HSRX_RISC_INT set and the number with HSRX_RISC_INT not set while performing some I/O.

If qla2xxx_msix_rsp_q_hs() clears the RISC interrupt (original code):

	qla24xx_msix_rsp_q:     50% of calls have HSRX_RISC_INT set
	qla2xxx_msix_rsp_q_hs:   5% of calls have HSRX_RISC_INT set
	(# of qla2xxx_msix_rsp_q_hs interrupts) = (# of qla24xx_msix_rsp_q interrupts) * 3

If qla2xxx_msix_rsp_q_hs() does not clear the RISC interrupt (patched code):

	qla24xx_msix_rsp_q:    100% of calls have HSRX_RISC_INT set
	qla2xxx_msix_rsp_q_hs:   9% of calls have HSRX_RISC_INT set
	(# of qla2xxx_msix_rsp_q_hs interrupts) = (# of qla24xx_msix_rsp_q interrupts) * 3

In the case of the original code, qla24xx_msix_rsp_q() was seeing HSRX_RISC_INT set only 50% of the time because qla2xxx_msix_rsp_q_hs() was clearing it when it shouldn't have been. In the patched code, qla24xx_msix_rsp_q() sees HSRX_RISC_INT set 100% of the time, which makes sense if that interrupt handler needs to clear the RISC interrupt (which it does). qla2xxx_msix_rsp_q_hs() sees HSRX_RISC_INT only 9% of the time, which is just overlap from the other interrupt during the high IOPS test.

Tested with SCST on:

	QLE2742  FW:v9.08.02 (32 Gbps 2-port)
	QLE2694L FW:v9.10.11 (16 Gbps 4-port)
	QLE2694L FW:v9.08.02 (16 Gbps 4-port)
	QLE2672  FW:v8.07.12 (16 Gbps 2-port)

both initiator and target mode.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/56d378eb-14ad-49c7-bae9-c649b6c7691e@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
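The counting experiment described above can be reproduced with simple per-handler counters. The sketch below shows only the general instrumentation pattern; read_host_status() and the counter names are invented, and the HSRX_RISC_INT definition here is a placeholder rather than the driver's actual register definition.

	/*
	 * Instrumentation sketch only; read_host_status() and the counters
	 * are invented, and HSRX_RISC_INT is defined here as a placeholder.
	 */
	#include <linux/atomic.h>
	#include <linux/bits.h>
	#include <linux/interrupt.h>
	#include <linux/types.h>

	#define HSRX_RISC_INT	BIT(15)	/* placeholder; use the driver's definition */

	static atomic_t risc_int_set, risc_int_clear;

	static u32 read_host_status(void *dev_id)
	{
		/* rd_reg_dword(&reg->host_status) in the real driver */
		return 0;
	}

	static irqreturn_t counted_msix_rsp_q(int irq, void *dev_id)
	{
		if (read_host_status(dev_id) & HSRX_RISC_INT)
			atomic_inc(&risc_int_set);	/* RISC interrupt pending */
		else
			atomic_inc(&risc_int_clear);	/* vector fired without it */

		/* ... normal response-queue processing here ... */
		return IRQ_HANDLED;
	}

Comparing the two counters for each handler, with and without the _hs() handler clearing the RISC interrupt, reproduces the percentages quoted in the message above.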
2025-11-12scsi: qla2xxx: Fix initiator mode with qlini_mode=exclusiveTony Battersby
When given the module parameter qlini_mode=exclusive, qla2xxx in initiator mode is initially unable to successfully send SCSI commands to devices it finds while scanning, resulting in an escalating series of resets until an adapter reset clears the issue. Fix by checking the active mode instead of the module parameter.

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Link: https://patch.msgid.link/1715ec14-ba9a-45dc-9cf2-d41aa6b81b5e@cybernetics.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>