path: root/kernel
Age         Commit message                                              Author
2025-10-24  kdb: Adapt kdb_msg_write to work with NBCON consoles  (Marcos Paulo de Souza)

Function kdb_msg_write was calling con->write for any found console, but that
does not work on NBCON consoles. For those we should acquire ownership of the
console using NBCON_PRIO_EMERGENCY, since printing kdb messages should only be
interrupted by a panic.

At this point, the console is required to use the atomic callback. The console
is skipped if the write_atomic callback is not set or if the context could not
be acquired. The NBCON validation is done by the console_is_usable helper. The
context is released right after write_atomic finishes.

The oops_in_progress handling is only needed for legacy consoles, so it is now
done around the con->write callback only.

Suggested-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Link: https://patch.msgid.link/20251016-nbcon-kgdboc-v6-5-866aac60a80e@suse.com
[pmladek@suse.com: Fixed compilation with !CONFIG_PRINTK.]
Signed-off-by: Petr Mladek <pmladek@suse.com>
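As a rough illustration of the resulting write path (a sketch only: the
nbcon_kdb_* helper names and exact signatures are assumptions drawn from this
series, not verified code):

    static void kdb_msg_write(const char *msg, int msg_len)
    {
        struct nbcon_write_context wctxt = { };
        struct console *con;
        int cookie;

        cookie = console_srcu_read_lock();
        for_each_console_srcu(con) {
            short flags = console_srcu_read_flags(con);

            if (flags & CON_NBCON) {
                /* Also skips consoles without a write_atomic callback. */
                if (!console_is_usable(con, flags, true))
                    continue;
                /* Assumed helper: acquire at NBCON_PRIO_EMERGENCY. */
                if (!nbcon_kdb_try_acquire(con, &wctxt))
                    continue;
                nbcon_write_context_set_buf(&wctxt, (char *)msg, msg_len);
                con->write_atomic(con, &wctxt);
                nbcon_kdb_release(&wctxt); /* assumed helper */
            } else if (con->write) {
                /* oops_in_progress handling is legacy-only now. */
                ++oops_in_progress;
                con->write(con, msg, msg_len);
                --oops_in_progress;
            }
        }
        console_srcu_read_unlock(cookie);
    }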
2025-10-24  printk: nbcon: Export nbcon_write_context_set_buf  (Marcos Paulo de Souza)

This function will be used in the next patch to allow a driver to set both the
message and message length of an nbcon_write_context. This is necessary
because the function also initializes the ->unsafe_takeover struct member. By
using this helper we ensure that the struct is initialized correctly.

Reviewed-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Link: https://patch.msgid.link/20251016-nbcon-kgdboc-v6-4-866aac60a80e@suse.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
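For reference, the helper is roughly of this shape (reconstructed from the
changelog; the exact body is an assumption):

    void nbcon_write_context_set_buf(struct nbcon_write_context *wctxt,
                                     char *buf, unsigned int len)
    {
        struct console *con = wctxt->console;
        struct nbcon_state cur;

        wctxt->outbuf = buf;
        wctxt->len = len;

        /* Also initializes ->unsafe_takeover, which open-coded
         * assignments of outbuf/len would forget. */
        nbcon_state_read(con, &cur);
        wctxt->unsafe_takeover = cur.unsafe_takeover;
    }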
2025-10-24  printk: nbcon: Allow KDB to acquire the NBCON context  (Marcos Paulo de Souza)

KDB can interrupt any console to execute its "mirrored printing" at any time,
so add an exception to nbcon_context_try_acquire_direct that allows acquiring
the context when the current CPU is the same as kdb_printf_cpu.

This change will be necessary for the next patch, which fixes kdb_msg_write to
work with NBCON consoles by calling ->write_atomic on such consoles. But to
print, KDB first needs to acquire ownership of the console, so
nbcon_context_try_acquire_direct is fixed here.

Reviewed-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Link: https://patch.msgid.link/20251016-nbcon-kgdboc-v6-3-866aac60a80e@suse.com
[pmladek@suse.com: Fix compilation with !CONFIG_KGDB_KDB.]
Signed-off-by: Petr Mladek <pmladek@suse.com>
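The added exception boils down to a check of this shape (illustrative only;
the helper name is hypothetical):

    static bool kdb_owns_nbcon_context(void)
    {
        /* kdb_printf_cpu is -1 unless kdb is actively printing. */
        return IS_ENABLED(CONFIG_KGDB_KDB) &&
               READ_ONCE(kdb_printf_cpu) == smp_processor_id();
    }

nbcon_context_try_acquire_direct() can then grant ownership directly when this
returns true, regardless of the current owner's priority.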
2025-10-24  printk: nbcon: Introduce KDB helpers  (Marcos Paulo de Souza)

These helpers will be used when calling console->write_atomic from KDB code in
the next patch. It's basically the same implementation as
nbcon_device_try_acquire, but using NBCON_PRIO_EMERGENCY when acquiring the
context. If the acquire succeeds, the message and message length are assigned
to nbcon_write_context so ->write_atomic can print the message.

After release, try to flush the console, since there may be a backlog of
messages in the ringbuffer: the kthread console printers do not get a chance
to run while kdb is active.

Reviewed-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Link: https://patch.msgid.link/20251016-nbcon-kgdboc-v6-2-866aac60a80e@suse.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
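A minimal sketch of the acquire helper, assuming it mirrors
nbcon_device_try_acquire() with only the priority changed:

    bool nbcon_kdb_try_acquire(struct console *con,
                               struct nbcon_write_context *wctxt)
    {
        struct nbcon_context *ctxt = &ACCESS_PRIVATE(wctxt, ctxt);

        memset(wctxt, 0, sizeof(*wctxt));
        ctxt->console = con;
        ctxt->prio = NBCON_PRIO_EMERGENCY; /* only a panic may take over */

        return nbcon_context_try_acquire(ctxt);
    }

The matching release helper would presumably release the context and then
flush any pending ringbuffer backlog, along the lines of
nbcon_atomic_flush_pending().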
2025-10-24  printk: nbcon: Export console_is_usable  (Marcos Paulo de Souza)

The helper will be used in KDB code in the next commits.

Reviewed-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Link: https://patch.msgid.link/20251016-nbcon-kgdboc-v6-1-866aac60a80e@suse.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-10-24  genirq/manage: Add buslock back in to enable_irq()  (Charles Keepax)

The locking was changed from a buslock to a plain lock, but the patch
description states there was no functional change. Assuming this was
accidental, revert to using the buslock.

Fixes: bddd10c55407 ("genirq/manage: Rework enable_irq()")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251023154901.1333755-4-ckeepax@opensource.cirrus.com
2025-10-24  genirq/manage: Add buslock back in to __disable_irq_nosync()  (Charles Keepax)

The locking was changed from a buslock to a plain lock, but the patch
description states there was no functional change. Assuming this was
accidental, revert to using the buslock.

Fixes: 1b7444446724 ("genirq/manage: Rework __disable_irq_nosync()")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251023154901.1333755-3-ckeepax@opensource.cirrus.com
2025-10-24  genirq/chip: Add buslock back in to irq_set_handler()  (Charles Keepax)

The locking was changed from a buslock to a plain lock, but the patch
description states there was no functional change. Assuming this was
accidental, revert to using the buslock.

Fixes: 5cd05f3e2315 ("genirq/chip: Rework irq_set_handler() variants")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251023154901.1333755-2-ckeepax@opensource.cirrus.com
2025-10-23  Merge tag 'trace-rv-v6.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace  (Linus Torvalds)

Pull tracing fixes from Steven Rostedt:
"A couple of fixes for Runtime Verification:

 - A bug that caused a kernel panic when reading enabled_monitors was
   reported. Change the callback functions to always use list_head
   iterators and, by doing so, fix the wrong pointer that was leading
   to the panic.

 - The rtapp/pagefault monitor relies on the MMU being present
   (pagefaults exist) but that was not enforced via Kconfig, leading to
   potential build errors on systems without an MMU. Add that Kconfig
   dependency"

* tag 'trace-rv-v6.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  rv: Make rtapp/pagefault monitor depends on CONFIG_MMU
  rv: Fully convert enabled_monitors to use list_head as iterator
2025-10-23  printk: Use console_flush_one_record for legacy printer kthread  (Andrew Murray)

The legacy printer kthread uses console_lock and __console_flush_and_unlock to
flush records to the console. This approach results in the console_lock being
held for the entire duration of a flush, which can mean long waits for anyone
else waiting on console_lock, especially where there is a large volume of
records or where the console is slow (e.g. serial).

This contention is observed during boot, as the call to filp_open in
console_on_rootfs will delay progression to userspace until any in-flight
flush is completed.

Let's instead use console_flush_one_record and release/reacquire the
console_lock between records.

On a PocketBeagle 2, with the following boot args:
"console=ttyS2,9600 initcall_debug=1 loglevel=10"

Without this patch:

[ 5.613166] +console_on_rootfs/filp_open
[ 5.643473] mmc1: SDHCI controller on fa00000.mmc [fa00000.mmc] using ADMA 64-bit
[ 5.643823] probe of fa00000.mmc returned 0 after 258244 usecs
[ 5.710520] mmc1: new UHS-I speed SDR104 SDHC card at address 5048
[ 5.721976] mmcblk1: mmc1:5048 SD32G 29.7 GiB
[ 5.747258] mmcblk1: p1 p2
[ 5.753324] probe of mmc1:5048 returned 0 after 40002 usecs
[ 15.595240] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to 30040000.pruss
[ 15.595282] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to e010000.watchdog
[ 15.595297] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to e000000.watchdog
[ 15.595437] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to 30300000.crc
[ 146.275961] -console_on_rootfs/filp_open

... and with:

[ 5.477122] +console_on_rootfs/filp_open
[ 5.595814] mmc1: SDHCI controller on fa00000.mmc [fa00000.mmc] using ADMA 64-bit
[ 5.596181] probe of fa00000.mmc returned 0 after 312757 usecs
[ 5.662813] mmc1: new UHS-I speed SDR104 SDHC card at address 5048
[ 5.674367] mmcblk1: mmc1:5048 SD32G 29.7 GiB
[ 5.699320] mmcblk1: p1 p2
[ 5.705494] probe of mmc1:5048 returned 0 after 39987 usecs
[ 6.418682] -console_on_rootfs/filp_open
...
[ 15.593509] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to 30040000.pruss
[ 15.593551] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to e010000.watchdog
[ 15.593566] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to e000000.watchdog
[ 15.593704] ti_sci_pm_domains 44043000.system-controller:power-controller: sync_state() pending due to 30300000.crc

Here I've added a printk pair surrounding the call to filp_open in
console_on_rootfs.

Suggested-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Andrew Murray <amurray@thegoodpenguin.co.uk>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Link: https://patch.msgid.link/20251020-printk_legacy_thread_console_lock-v3-3-00f1f0ac055a@thegoodpenguin.co.uk
[pmladek@suse.com: Fixed ordering of variable definition suggested by John.]
Signed-off-by: Petr Mladek <pmladek@suse.com>
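Conceptually the kthread now loops like the sketch below; the exact
console_flush_one_record() signature is an assumption based on the semantics
described in the next entry:

    u64 next_seq = 0;
    bool takeover = false, try_again, flushed;

    for (;;) {
        try_again = false;

        console_lock();
        /* Flush at most one record to all usable consoles. */
        flushed = console_flush_one_record(false, &next_seq,
                                           &try_again, &takeover);
        console_unlock(); /* console_lock waiters (e.g. filp_open) run here */

        if (!flushed || !try_again)
            break;
    }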
2025-10-23  printk: console_flush_one_record() code cleanup  (Petr Mladek)

console_flush_one_record() and console_flush_all() duplicate several checks.
They both want to tell the caller that consoles are no longer usable in this
context, either because the caller has lost the lock or because the lock has
to be reserved for the panic CPU.

Remove the duplication by changing the semantics of the
console_flush_one_record() return value and parameters. The function will
return true when it is able to do the job, meaning that there is at least one
usable console and the flushing was not interrupted by a takeover or
panic_on_other_cpu(). Also replace the @any_usable parameter with @try_again,
which will be set to true when the function could do the job and at least one
console made progress.

Motivation: the callers need to know when

 + they should continue flushing   => @try_again
 + the console is flushed          => can_do_the_job(return) && !@try_again
 + @next_seq is valid              => same as flushed
 + they lost console_lock          => @takeover

The proposed change makes it clear when the function can do the job, which
simplifies the answer to the other questions. Also, the return value from
console_flush_one_record() can be used as the return value from
console_flush_all().

Reviewed-by: John Ogness <john.ogness@linutronix.de>
Link: https://patch.msgid.link/20251020-printk_legacy_thread_console_lock-v3-2-00f1f0ac055a@thegoodpenguin.co.uk
[pmladek@suse.com: Fixed type of any_usable variable reported by John]
Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-10-23  printk: Introduce console_flush_one_record  (Andrew Murray)

console_flush_all prints all remaining records to all usable consoles whilst
its caller holds console_lock. This can result in large waiting times for
those waiting for console_lock, especially where there is a large volume of
records or where the console is slow (e.g. serial).

Let's extract the parts of this function which print a single record into a
new function named console_flush_one_record. This can later be used by
functions that will release and reacquire console_lock between records.

This commit should not change existing functionality.

Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Andrew Murray <amurray@thegoodpenguin.co.uk>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Link: https://patch.msgid.link/20251020-printk_legacy_thread_console_lock-v3-1-00f1f0ac055a@thegoodpenguin.co.uk
Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-10-23Revert "PM: sleep: Make pm_wakeup_clear() call more clear"Samuel Wu
This reverts commit 56a232d93cea0ba14da5e3157830330756a45b4c. The above commit changed the position of pm_wakeup_clear() for the suspend call path, but other call paths with references to freeze_processes() were not updated. This means that other call paths, such as hibernate(), will not have pm_wakeup_clear() called. Suggested-by: Saravana Kannan <saravanak@google.com> Signed-off-by: Samuel Wu <wusamuel@google.com> [ rjw: Changelog edits ] Link: https://patch.msgid.link/20251022222830.634086-1-wusamuel@google.com Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  Merge tag 'mm-hotfixes-stable-2025-10-22-12-43' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds)

Pull hotfixes from Andrew Morton:
"17 hotfixes. 12 are cc:stable and 14 are for MM.

 There's a two-patch DAMON series from SeongJae Park which addresses a
 missed check and possible memory leak.

 Apart from that it's all singletons - please see the changelogs for
 details"

* tag 'mm-hotfixes-stable-2025-10-22-12-43' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  csky: abiv2: adapt to new folio flags field
  mm/damon/core: use damos_commit_quota_goal() for new goal commit
  mm/damon/core: fix potential memory leak by cleaning ops_filter in damon_destroy_scheme
  hugetlbfs: move lock assertions after early returns in huge_pmd_unshare()
  vmw_balloon: indicate success when effectively deflating during migration
  mm/damon/core: fix list_add_tail() call on damon_call()
  mm/mremap: correctly account old mapping after MREMAP_DONTUNMAP remap
  mm: prevent poison consumption when splitting THP
  ocfs2: clear extent cache after moving/defragmenting extents
  mm: don't spin in add_stack_record when gfp flags don't allow
  dma-debug: don't report false positives with DMA_BOUNCE_UNALIGNED_KMALLOC
  mm/damon/sysfs: dealloc commit test ctx always
  mm/damon/sysfs: catch commit test ctx alloc failure
  hung_task: fix warnings caused by unaligned lock pointers
2025-10-22  audit: fix comment misindentation in audit.h  (Ricardo Robaina)

Minor comment misindentation adjustment.

Signed-off-by: Ricardo Robaina <rrobaina@redhat.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
2025-10-22  sched_ext: Use rhashtable_lookup() instead of rhashtable_lookup_fast()  (Tejun Heo)

The find_user_dsq() function is called from contexts that are already under
RCU read lock protection. Switch from rhashtable_lookup_fast() to
rhashtable_lookup() to avoid redundant RCU locking.

Requires: bee8a520eb84 ("rhashtable: Use rcu_dereference_all and rcu_dereference_all_check")
Acked-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
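With callers already inside an RCU read-side section, the lookup reduces to
something like the sketch below (the hash table and params names are
assumptions about kernel/sched/ext.c):

    static struct scx_dispatch_q *find_user_dsq(struct scx_sched *sch, u64 dsq_id)
    {
        /* Callers hold the RCU read lock; the plain variant skips the
         * redundant rcu_read_lock()/rcu_read_unlock() pair that
         * rhashtable_lookup_fast() would take internally. */
        return rhashtable_lookup(&sch->dsq_hash, &dsq_id, dsq_hash_params);
    }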
2025-10-22  sched_ext: Rename pnt_seq to kick_sync  (Tejun Heo)

The pnt_seq field and related infrastructure were originally named for "pick
next task sequence", reflecting their original implementation in
scx_next_task_picked(). However, the sequence counter is now incremented in
both put_prev_task_scx() and pick_task_scx(), and its purpose is to
synchronize kick operations via SCX_KICK_WAIT, not specifically to track
pick_next_task events.

Rename to better reflect the actual semantics:

- pnt_seq -> kick_sync
- scx_kick_pseqs -> scx_kick_syncs
- pseqs variables -> ksyncs
- Update comments to refer to "kick_sync sequence" instead of "pick_task
  sequence"

This is a pure renaming with no functional changes.

Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2025-10-22  sched_ext: Fix SCX_KICK_WAIT to work reliably  (Tejun Heo)

SCX_KICK_WAIT is used to synchronously wait for the target CPU to complete a
reschedule and can be used to implement operations like core scheduling. This
used to be implemented by scx_next_task_picked() incrementing pnt_seq, which
was always called when a CPU picked the next task to run, allowing
SCX_KICK_WAIT to reliably wait for the target CPU to enter the scheduler and
pick the next task.

However, commit b999e365c298 ("sched_ext: Replace scx_next_task_picked() with
switch_class()") replaced scx_next_task_picked() with the switch_class()
callback, which is only called when switching between sched classes. This
broke SCX_KICK_WAIT because pnt_seq would no longer be reliably incremented
unless the previous task was SCX and the next task was not.

This fix leverages commit 4c95380701f5 ("sched/ext: Fold balance_scx() into
pick_task_scx()") which refactored the pick path, making put_prev_task_scx()
the natural place to track task switches for SCX_KICK_WAIT. The fix moves the
pnt_seq increment to put_prev_task_scx() and also increments it in
pick_task_scx() to handle cases where the same task is re-selected, whether by
BPF scheduler decision or slice refill.

The semantics: if the current task on the target CPU is SCX, SCX_KICK_WAIT
waits until the CPU enters the scheduling path. This provides a sufficient
guarantee for use cases like core scheduling while keeping the operation
self-contained within SCX.

v2: - Also increment pnt_seq in pick_task_scx() to handle same-task
      re-selection (Andrea Righi).
    - Use smp_cond_load_acquire() for the busy-wait loop for better
      architecture optimization (Peter Zijlstra).

Reported-by: Wen-Fang Liu <liuwenfang@honor.com>
Link: http://lkml.kernel.org/r/228ebd9e6ed3437996dffe15735a9caa@honor.com
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
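The waiting side then becomes, roughly (a sketch; the per-CPU variable name
follows the pre-rename code in this series):

    unsigned long *pseq = per_cpu_ptr(scx_kick_pseqs, cpu);
    unsigned long seq = READ_ONCE(*pseq);

    /* ... kick the target CPU ... */

    /* Busy-wait until the target CPU passes through the scheduling
     * path and bumps its counter past the sampled value. */
    smp_cond_load_acquire(pseq, VAL != seq);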
2025-10-22  sched_ext: Don't kick CPUs running higher classes  (Tejun Heo)

When a sched_ext scheduler tries to kick a CPU, the CPU may be running a
higher class task. sched_ext has no control over such CPUs. A sched_ext
scheduler couldn't have expected to get access to the CPU after kicking it
anyway. Skip kicking when the target CPU is running a higher class.

Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2025-10-22  PM: EM: Notify an event when the performance domain changes  (Changwoo Min)

Send an event to userspace when a performance domain is created or deleted, or
its energy model is updated.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-11-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  PM: EM: Implement em_notify_pd_created/updated()  (Changwoo Min)

Implement two event notifications for when a performance domain is created
(EM_CMD_PD_CREATED) and updated (EM_CMD_PD_UPDATED). The message format of
these two event notifications is the same as EM_CMD_GET_PD_TABLE --
containing the performance domain's ID and its energy model table.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-10-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  PM: EM: Implement em_notify_pd_deleted()  (Changwoo Min)

Add the event notification infrastructure and implement the event notification
for when a performance domain is deleted (EM_CMD_PD_DELETED). The event
contains the ID of the performance domain (EM_A_PD_TABLE_PD_ID) so that
userspace can identify the changed performance domain for further processing.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-9-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  PM: EM: Implement em_nl_get_pd_table_doit()  (Changwoo Min)

When userspace requests EM_CMD_GET_PD_TABLE with the ID of a performance
domain, the kernel reports back the energy model table of the specified
performance domain. The message format of the response is as follows:

  EM_A_PD_TABLE_PD_ID (NLA_U32)
  EM_A_PD_TABLE_PS (NLA_NESTED)*
      EM_A_PS_PERFORMANCE (NLA_U64)
      EM_A_PS_FREQUENCY (NLA_U64)
      EM_A_PS_POWER (NLA_U64)
      EM_A_PS_COST (NLA_U64)
      EM_A_PS_FLAGS (NLA_U64)

where EM_A_PD_TABLE_PS can be repeated as many times as there are performance
states (struct em_perf_state).

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-8-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
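A sketch of how one nested performance state could be emitted with the
standard netlink helpers (attribute names are from the changelog; EM_A_PS_PAD
is a hypothetical padding attribute):

    static int em_nl_fill_ps(struct sk_buff *msg, const struct em_perf_state *ps)
    {
        struct nlattr *nest = nla_nest_start(msg, EM_A_PD_TABLE_PS);

        if (!nest)
            return -EMSGSIZE;

        if (nla_put_u64_64bit(msg, EM_A_PS_PERFORMANCE, ps->performance, EM_A_PS_PAD) ||
            nla_put_u64_64bit(msg, EM_A_PS_FREQUENCY, ps->frequency, EM_A_PS_PAD) ||
            nla_put_u64_64bit(msg, EM_A_PS_POWER, ps->power, EM_A_PS_PAD) ||
            nla_put_u64_64bit(msg, EM_A_PS_COST, ps->cost, EM_A_PS_PAD) ||
            nla_put_u64_64bit(msg, EM_A_PS_FLAGS, ps->flags, EM_A_PS_PAD)) {
            nla_nest_cancel(msg, nest);
            return -EMSGSIZE;
        }

        nla_nest_end(msg, nest);
        return 0;
    }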
2025-10-22  PM: EM: Implement em_nl_get_pds_doit()  (Changwoo Min)

When userspace requests EM_CMD_GET_PDS, the kernel responds with information
on all performance domains. The message format of the response is as follows:

  EM_A_PDS_PD (NLA_NESTED)*
      EM_A_PD_PD_ID (NLA_U32)
      EM_A_PD_FLAGS (NLA_U64)
      EM_A_PD_CPUS (NLA_STRING)

where EM_A_PDS_PD can be repeated as many times as there are performance
domains, and EM_A_PD_CPUS is a hexadecimal string representing a CPU bitmask.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-7-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  PM: EM: Add an iterator and accessor for the performance domain  (Changwoo Min)

Add an iterator function (for_each_em_perf_domain) that iterates over all the
performance domains in the global list. A passed callback function (cb) is
called for each performance domain. Additionally, add a lookup function
(em_perf_domain_get_by_id) that searches for a performance domain by matching
the ID in the global list.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-6-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
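A minimal sketch of the iterator, assuming the global list and mutex
introduced earlier in the series (the list member name "node" is an
assumption):

    int for_each_em_perf_domain(int (*cb)(struct em_perf_domain *pd, void *data),
                                void *data)
    {
        struct em_perf_domain *pd;
        int ret = 0;

        mutex_lock(&em_pd_list_mutex);
        list_for_each_entry(pd, &em_pd_list, node) {
            ret = cb(pd, data);
            if (ret)
                break; /* a non-zero return stops the iteration */
        }
        mutex_unlock(&em_pd_list_mutex);

        return ret;
    }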
2025-10-22  PM: EM: Add a skeleton code for netlink notification  (Changwoo Min)

Add boilerplate code for netlink notification to register the new protocol
family, and initialize and register the netlink family during boot. The
initialization is called at the postcore level, which is late enough after the
generic netlink is initialized. Finally, update MAINTAINERS to include the new
files.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-5-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  PM: EM: Add em.yaml and autogen files  (Changwoo Min)

Add a generic netlink spec in YAML format and autogenerate boilerplate code
using ynl-regen.sh to introduce a generic netlink family for the energy model.
It allows a userspace program to read a performance domain and its energy
model, and it notifies the userspace program through a multicast interface
when a performance domain is created or deleted or its energy model is
updated.

Specifically, it supports two commands:

- EM_CMD_GET_PDS: Get the list of information for all performance domains.
- EM_CMD_GET_PD_TABLE: Get the energy model table of a performance domain.

Also, it supports three notification events:

- EM_CMD_PD_CREATED: When a performance domain is created.
- EM_CMD_PD_DELETED: When a performance domain is deleted.
- EM_CMD_PD_UPDATED: When the energy model table of a performance domain is
  updated.

Finally, update MAINTAINERS to include the new files.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-4-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  PM: EM: Expose the ID of a performance domain via debugfs  (Changwoo Min)

For ease of debugging, let's expose the assigned ID of a performance domain
through debugfs (e.g., /sys/kernel/debug/energy_model/cpu0/id).

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-3-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-10-22  PM: EM: Assign a unique ID when creating a performance domain  (Changwoo Min)

It is necessary to refer to a specific performance domain from userspace, for
example when the energy model of a particular performance domain is updated.
To this end, assign a unique ID to each performance domain to address it, and
manage the domains in a global linked list so that a specific one can be
looked up by matching ID. An IDA is used for ID assignment, and a mutex is
used to protect the global list from concurrent access. Note that this mutex
(em_pd_list_mutex) is not supposed to be held while holding em_pd_mutex, to
avoid an ABBA deadlock.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://patch.msgid.link/20251020220914.320832-2-changwoo@igalia.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
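A sketch of the creation-side bookkeeping described above (the member names
"id" and "node" are assumptions):

    static DEFINE_IDA(em_pd_ida);
    static LIST_HEAD(em_pd_list);
    static DEFINE_MUTEX(em_pd_list_mutex);

    static int em_pd_assign_id(struct em_perf_domain *pd)
    {
        int id = ida_alloc(&em_pd_ida, GFP_KERNEL);

        if (id < 0)
            return id;
        pd->id = id;

        /* em_pd_list_mutex must not nest inside em_pd_mutex (ABBA). */
        mutex_lock(&em_pd_list_mutex);
        list_add_tail(&pd->node, &em_pd_list);
        mutex_unlock(&em_pd_list_mutex);

        return 0;
    }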
2025-10-22  blktrace: handle BLKTRACESETUP2 ioctl  (Johannes Thumshirn)

Handle the BLKTRACESETUP2 ioctl, requesting an extended version of the
blktrace protocol from user-space.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: trace zone write plugging operations  (Johannes Thumshirn)

Trace zone write plugging operations on block devices. As tracing of zoned
block commands needs the upper 32bit of the widened 64bit action, only add
traces to blktrace if user-space has requested version 2 of the blktrace
protocol.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: expose ZONE APPEND completions to blktrace  (Johannes Thumshirn)

Expose ZONE APPEND completions as a block trace completion action to blktrace.
As tracing of zoned block commands needs the upper 32bit of the widened 64bit
action, only add traces to blktrace if user-space has requested version 2 of
the blktrace protocol.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: add block trace commands for zone operations  (Johannes Thumshirn)

Add block trace commands for zone operations. These commands can only be
handled with version 2 of the blktrace protocol. For version 1, warn if a
command is passed in that does not fit into the 16 bits reserved for the
command in that version.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: move ftrace blk_io_tracer to blk_io_trace2  (Johannes Thumshirn)

Move ftrace's blk_io_tracer to the new blk_io_trace2 infrastructure.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: move trace_note to blk_io_trace2  (Johannes Thumshirn)

Move trace_note() to the new blk_io_trace2 infrastructure.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: differentiate between blk_io_trace versions  (Johannes Thumshirn)

Differentiate between blk_io_trace and blk_io_trace2 when relaying to
user-space, depending on which version has been requested by the blktrace
utility.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: add definitions for struct blk_io_trace2  (Johannes Thumshirn)

Add definitions for the extended version of the blktrace protocol, using a
wider action type to be able to record new actions in the kernel.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: pass blk_user_trace2 to setup functions  (Johannes Thumshirn)

Pass struct blk_user_trace_setup2 to blktrace_setup_finalize(). This prepares
for the incoming extension of the blktrace protocol with a 64bit act_mask.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: add definitions for blk_user_trace_setup2  (Johannes Thumshirn)

Add definitions for a version 2 of the blk_user_trace_setup ioctl. This new
ioctl will enable a different struct layout of the binary data passed to
user-space when using a new version of the blktrace utility requesting the new
struct layout.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: split do_blk_trace_setup into two functions  (Johannes Thumshirn)

Split do_blk_trace_setup into two functions to prepare for an incoming new
BLKTRACESETUP2 ioctl(2) which can receive extended parameters from user-space.
Also move the size verification logic to the callers in preparation for using
a new internal structure later.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: change the internal action to 64bit  (Johannes Thumshirn)

Change the internal use of the action in blktrace to 64bit, although for now
only the lower 32bits will be used. With the upcoming version 2 of the
blktrace user-space protocol, the upper 32bit will also be utilized.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: untangle if/else sequence in __blk_add_trace  (Johannes Thumshirn)

Untangle the if/else sequence setting the trace action in __blk_add_trace()
and turn it into a switch statement for better extensibility.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: split out relaying a blktrace event  (Johannes Thumshirn)

Split out the code relaying a blktrace event to user-space using relayfs. This
enables adding a second variant supporting a new version of the protocol.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: factor out recording a blktrace event  (Johannes Thumshirn)

Factor out the recording of a blktrace event into its own function,
deduplicating the code. This also enables recording different versions of the
blktrace protocol later on.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  blktrace: only calculate trace length once  (Johannes Thumshirn)

De-duplicate the calculation of the trace length instead of doing the
calculation twice, once for calling trace_buffer_lock_reserve() and once for
calling relay_reserve().

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-22  sched/fair: Start a cfs_rq on throttled hierarchy with PELT clock throttled  (K Prateek Nayak)

Matteo reported hitting the assert_list_leaf_cfs_rq() warning from
enqueue_task_fair() post commit fe8d238e646e ("sched/fair: Propagate load for
throttled cfs_rq") which transitioned to using the
cfs_rq_pelt_clock_throttled() check for leaf cfs_rq insertions in
propagate_entity_cfs_rq().

The "cfs_rq->pelt_clock_throttled" flag is used to indicate if the hierarchy
has its PELT frozen. If a cfs_rq's PELT is marked frozen, all its descendants
should have their PELT frozen too, or weird things can happen as a result of
children accumulating PELT signals when the parents have their PELT clock
stopped. Another side effect of this is the loss of integrity of the leaf
cfs_rq list.

As debugged by Aaron, consider the following hierarchy:

            root(#)
            /    \
          A(#)   B(*)
                  |
                  C   <--- new cgroup
                  |
                  D   <--- new cgroup

    # - Already on leaf cfs_rq list
    * - Throttled with PELT frozen

The newly created cgroups don't have their "pelt_clock_throttled" signal
synced with cgroup B. Next, the following series of events occurs:

1. online_fair_sched_group() for cgroup D will call
   propagate_entity_cfs_rq(). (The same can happen if a throttled task is
   moved to cgroup C and enqueue_task_fair() returns early.)

   propagate_entity_cfs_rq() adds the cfs_rq of cgroup C to
   "rq->tmp_alone_branch" since its PELT clock is not marked throttled and
   the cfs_rq of cgroup B is not on the list. The cfs_rq of cgroup B is
   skipped since its PELT is throttled. The root cfs_rq already exists on
   the leaf cfs_rq list, leading to list_add_leaf_cfs_rq() returning early.
   The cfs_rq of cgroup C is left dangling on "rq->tmp_alone_branch".

2. A new task wakes up on cgroup A. Since the whole hierarchy is already on
   the leaf cfs_rq list, list_add_leaf_cfs_rq() keeps returning early without
   any modifications to "rq->tmp_alone_branch". The final
   assert_list_leaf_cfs_rq() in enqueue_task_fair() sees the dangling
   reference to cgroup C's cfs_rq in "rq->tmp_alone_branch".

   !!! Splat !!!

Syncing the "pelt_clock_throttled" indicator with the parent cfs_rq is not
enough, since the new cfs_rq is not yet enqueued on the hierarchy. A dequeue
on another subtree of the throttled hierarchy can freeze the PELT clock for
the parent hierarchy without setting the indicators for this newly added
cfs_rq which was never enqueued.

Since there are no tasks on the new hierarchy, start a cfs_rq on a throttled
hierarchy with its PELT clock throttled. The first enqueue, or the
distribution (whichever happens first), will unfreeze the PELT clock and queue
the cfs_rq on the leaf cfs_rq list.

While at it, add an assert_list_leaf_cfs_rq() in propagate_entity_cfs_rq() to
catch such cases in the future.

Closes: https://lore.kernel.org/lkml/58a587d694f33c2ea487c700b0d046fa@codethink.co.uk/
Fixes: e1fad12dcb66 ("sched/fair: Switch to task based throttle model")
Reported-by: Matteo Martelli <matteo.martelli@codethink.co.uk>
Suggested-by: Aaron Lu <ziqianlu@bytedance.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Aaron Lu <ziqianlu@bytedance.com>
Tested-by: Aaron Lu <ziqianlu@bytedance.com>
Tested-by: Matteo Martelli <matteo.martelli@codethink.co.uk>
Link: https://patch.msgid.link/20251021053522.37583-1-kprateek.nayak@amd.com
2025-10-22  printk_ringbuffer: don't needlessly wrap data blocks around  (Daniil Tatianin)

Previously, data blocks that perfectly fit the data ring buffer would get
wrapped around to the beginning for no reason, since the calculated offset of
the next data block would belong to the next wrap. Since this offset is not
actually part of the data block, but rather the offset of where the next data
block is going to start, there is no reason to include it when deciding
whether the current block fits the buffer.

Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Tested-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Tested-by: John Ogness <john.ogness@linutronix.de>
Link: https://patch.msgid.link/20250905144152.9137-2-d-tatianin@yandex-team.ru
[pmladek@suse.com: Updated indentation.]
Signed-off-by: Petr Mladek <pmladek@suse.com>
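In terms of the ring's wrap arithmetic, the idea is to test the last byte the
block actually occupies rather than the offset of the next block (a conceptual
sketch using the DATA_WRAPS() macro from printk_ringbuffer.c; not the literal
patch):

    next_lpos = begin_lpos + size; /* where the *next* block would start */

    /* Old behavior: wrapped whenever next_lpos fell into the next wrap,
     * even when the block itself ended exactly at the buffer boundary.
     * New idea: only wrap when the block's own bytes cross the boundary. */
    if (DATA_WRAPS(data_ring, begin_lpos) != DATA_WRAPS(data_ring, next_lpos - 1)) {
        /* ... wrap the block around to the beginning ... */
    }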
2025-10-21  sched_ext: Fix scx_bpf_dsq_insert() backward binary compatibility  (Tejun Heo)

cded46d97159 ("sched_ext: Make scx_bpf_dsq_insert*() return bool") introduced
a new bool-returning scx_bpf_dsq_insert() and renamed the old void-returning
version to scx_bpf_dsq_insert___compat, with the expectation that libbpf would
match old binaries to the ___compat variant, maintaining backward binary
compatibility.

However, while libbpf ignores a ___suffix on the BPF side when matching
symbols, it doesn't do so for kernel-side symbols. Old binaries compiled with
the original scx_bpf_dsq_insert() could no longer resolve the symbol.

Fix by reversing the naming: keep scx_bpf_dsq_insert() as the old
void-returning interface and add ___v2 to the new bool-returning version. This
allows old binaries to continue working while new code can use the ___v2
variant. Once libbpf is updated to ignore a kernel-side ___SUFFIX, the ___v2
suffix can be dropped when the compat interface is removed.

v2: Use ___v2 instead of ___new.

Fixes: cded46d97159 ("sched_ext: Make scx_bpf_dsq_insert*() return bool")
Signed-off-by: Tejun Heo <tj@kernel.org>
2025-10-21  bpf: make bpf_insn_successors return a pointer  (Anton Protopopov)

The bpf_insn_successors() function is used to return the successors of a BPF
instruction. So far, an instruction could have 0, 1 or 2 successors. Prepare
the verifier code for the introduction of instructions with more than 2
successors (namely, indirect jumps). To do this, introduce a new struct,
struct bpf_iarray, containing an array of BPF instruction indexes, and make
bpf_insn_successors() return a pointer of that type. The storage for all
instructions is allocated in env->succ, which holds an array of size 2, to be
used for all instructions.

Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20251019202145.3944697-10-a.s.protopopov@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
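The shape implied by the changelog (field names are assumptions):

    struct bpf_iarray {
        int cnt;     /* number of successors */
        u32 items[]; /* successor instruction indexes */
    };

    /* Callers iterate instead of handling the 0/1/2 cases explicitly: */
    struct bpf_iarray *succ = bpf_insn_successors(env, insn_idx);

    for (int i = 0; i < succ->cnt; i++)
        visit_insn(env, succ->items[i]); /* illustrative consumer */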
2025-10-21  bpf: generalize and export map_get_next_key for arrays  (Anton Protopopov)

The kernel/bpf/array.c file defines the array_map_get_next_key() function,
which finds the next key for array maps. It doesn't actually use any map
fields besides the generic max_entries field. Generalize it, and export it as
bpf_array_get_next_key() such that it can be re-used by other array-like maps.

Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20251019202145.3944697-4-a.s.protopopov@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
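The logic only touches max_entries, which is why it generalizes; a sketch of
the exported form, following the long-standing array_map_get_next_key()
behavior:

    int bpf_array_get_next_key(struct bpf_map *map, void *key, void *next_key)
    {
        u32 index = key ? *(u32 *)key : U32_MAX;
        u32 *next = next_key;

        if (index >= map->max_entries) {
            *next = 0; /* invalid key: restart from the first slot */
            return 0;
        }

        if (index == map->max_entries - 1)
            return -ENOENT; /* already at the last slot */

        *next = index + 1;
        return 0;
    }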