author     Linus Torvalds <torvalds@linux-foundation.org>   2025-05-27 16:48:47 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2025-05-27 16:48:47 -0700
commit     c89756bcf406af313d191cfe3709e7c175c5b0cd (patch)
tree       46259271bfd32051a26a9c5f26c455960ffbdf51 /drivers/opp
parent     3702a515edec515fcc7e085053da636fefac88d6 (diff)
parent     3e0c509fbdb106ba2d2fa13beafe58f4ba11e13d (diff)
Merge tag 'pm-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"Once again, the changes are dominated by cpufreq updates, but this
time the majority of them are cpufreq core changes, mostly related to
the introduction of policy locking guards and __free() usage, and
fixes related to boost handling.
Still, there is also a significant update of the intel_pstate driver
making it register an energy model when running on a hybrid platform
which is used for enabling energy-aware scheduling (EAS) if the driver
operates in the passive mode (and schedutil is used as the cpufreq
governor for all CPUs which is the passive mode default).
There are some amd-pstate driver updates too, for good measure,
including the "Requested CPU Min frequency" BIOS option support and
new online/offline callbacks.
In the cpuidle space, the most significant change is the addition of a
C1 demotion on/off sysfs knob to intel_idle which should help some
users to configure their systems more precisely. There is also the
conversion of the PSCI cpuidle driver to a faux device one and there
are two small updates of cpuidle governors.
Device power management is also modified quite a bit, especially the
handling of devices with asynchronous suspend and resume enabled
during system transitions. They are now going to be handled more
asynchronously during suspend transitions and somewhat less
aggressively during resume transitions.
Apart from the above, the operating performance points (OPP) library
is now going to use mutex locking guards and scope-based cleanup
helpers and there is the usual bunch of assorted fixes and code
cleanups.
Specifics:
- Fix potential division-by-zero error in em_compute_costs() (Yaxiong
Tian)
- Fix typos in energy model documentation and example driver code
(Moon Hee Lee, Atul Kumar Pant)
- Rearrange the energy model management code and add a new function
for adjusting a CPU energy model after adjusting the capacity of
the given CPU to it (Rafael Wysocki)
- Refactor cpufreq_online(), add and use cpufreq policy locking
guards, use __free() in policy reference counting, and clean up
core cpufreq code on top of that (Rafael Wysocki)
- Fix boost handling on CPU suspend/resume and sysfs updates (Viresh
Kumar)
- Fix des_perf clamping with max_perf in amd_pstate_update()
(Dhananjay Ugwekar)
- Add offline, online and suspend callbacks to the amd-pstate driver,
rename and use the existing amd_pstate_epp callbacks in it
(Dhananjay Ugwekar)
- Add support for the "Requested CPU Min frequency" BIOS option to
the amd-pstate driver (Dhananjay Ugwekar)
- Reset amd-pstate driver mode after running selftests (Swapnil
Sapkal)
- Avoid shadowing ret in amd_pstate_ut_check_driver() (Nathan
Chancellor)
- Add helper for governor checks to the schedutil cpufreq governor
and move cpufreq-specific EAS checks to cpufreq (Rafael Wysocki)
- Populate the cpu_capacity sysfs entries from the intel_pstate
driver after registering asym capacity support (Ricardo Neri)
- Add support for enabling Energy-aware scheduling (EAS) to the
intel_pstate driver when operating in the passive mode on a hybrid
platform (Rafael Wysocki)
- Drop redundant cpus_read_lock() from store_local_boost() in the
cpufreq core (Seyediman Seyedarab)
- Replace sscanf() with kstrtouint() in the cpufreq code and use a
symbol instead of a raw number in it (Bowen Yu)
- Add support for autonomous CPU performance state selection to the
CPPC cpufreq driver (Lifeng Zheng)
- OPP: Add dev_pm_opp_set_level() (Praveen Talari)
- Introduce scope-based cleanup headers and mutex locking guards in
OPP core (Viresh Kumar)
- Switch OPP to use kmemdup_array() (Zhang Enpei)
- Optimize bucket assignment when next_timer_ns equals KTIME_MAX in
the menu cpuidle governor (Zhongqiu Han)
- Convert the cpuidle PSCI driver to a faux device one (Sudeep Holla)
- Add C1 demotion on/off sysfs knob to the intel_idle driver (Artem
Bityutskiy)
- Fix typos in two comments in the teo cpuidle governor (Atul Kumar
Pant)
- Fix denying of auto suspend in pm_suspend_timer_fn() (Charan Teja
Kalla)
- Move debug runtime PM attributes to runtime_attrs[] (Rafael
Wysocki)
- Add new devm_ functions for enabling runtime PM and runtime PM
reference counting (Bence Csókás)
- Remove size arguments from strscpy() calls in the hibernation core
code (Thorsten Blum)
- Adjust the handling of devices with asynchronous suspend enabled
during system suspend and resume to start resuming them immediately
after resuming their parents and to start suspending such a device
immediately after suspending its first child (Rafael Wysocki)
- Adjust messages printed during tasks freezing to avoid using
pr_cont() (Andrew Sayers, Paul Menzel)
- Clean up unnecessary usage of !! in pm_print_times_init() (Zihuan
Zhang)
- Add missing wakeup source attribute relax_count to sysfs and remove
the space character at the end of the string produced by
pm_show_wakelocks() (Zijun Hu)
- Add configurable pm_test delay for hibernation (Zihuan Zhang)
- Disable asynchronous suspend in ucsi_ccg_probe() to prevent the
cypd4226 device on Tegra boards from suspending prematurely (Jon
Hunter)
- Unbreak printing PM debug messages during hibernation and clean up
some related code (Rafael Wysocki)
- Add a systemd service to run cpupower and change cpupower binding's
Makefile to use -lcpupower (John B. Wyatt IV, Francesco Poli)"
* tag 'pm-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (72 commits)
cpufreq: CPPC: Add support for autonomous selection
cpufreq: Update sscanf() to kstrtouint()
cpufreq: Replace magic number
OPP: switch to use kmemdup_array()
PM: freezer: Rewrite restarting tasks log to remove stray *done.*
PM: runtime: fix denying of auto suspend in pm_suspend_timer_fn()
cpufreq: drop redundant cpus_read_lock() from store_local_boost()
cpupower: do not install files to /etc/default/
cpupower: do not call systemctl at install time
cpupower: do not write DESTDIR to cpupower.service
PM: sleep: Introduce pm_sleep_transition_in_progress()
cpufreq/amd-pstate: Avoid shadowing ret in amd_pstate_ut_check_driver()
cpufreq: intel_pstate: Document hybrid processor support
cpufreq: intel_pstate: EAS: Increase cost for CPUs using L3 cache
cpufreq: intel_pstate: EAS support for hybrid platforms
PM: EM: Introduce em_adjust_cpu_capacity()
PM: EM: Move CPU capacity check to em_adjust_new_capacity()
PM: EM: Documentation: Fix typos in example driver code
cpufreq: Drop policy locking from cpufreq_policy_is_good_for_eas()
PM: sleep: Introduce pm_suspend_in_progress()
...
Diffstat (limited to 'drivers/opp')
-rw-r--r--  drivers/opp/core.c  | 428
-rw-r--r--  drivers/opp/cpu.c   |  30
-rw-r--r--  drivers/opp/of.c    | 205
-rw-r--r--  drivers/opp/opp.h   |   1
4 files changed, 234 insertions, 430 deletions
diff --git a/drivers/opp/core.c b/drivers/opp/core.c index 73e9a3b2f29b..edbd60501cf0 100644 --- a/drivers/opp/core.c +++ b/drivers/opp/core.c @@ -40,17 +40,14 @@ static DEFINE_XARRAY_ALLOC1(opp_configs); static bool _find_opp_dev(const struct device *dev, struct opp_table *opp_table) { struct opp_device *opp_dev; - bool found = false; - mutex_lock(&opp_table->lock); + guard(mutex)(&opp_table->lock); + list_for_each_entry(opp_dev, &opp_table->dev_list, node) - if (opp_dev->dev == dev) { - found = true; - break; - } + if (opp_dev->dev == dev) + return true; - mutex_unlock(&opp_table->lock); - return found; + return false; } static struct opp_table *_find_opp_table_unlocked(struct device *dev) @@ -58,10 +55,8 @@ static struct opp_table *_find_opp_table_unlocked(struct device *dev) struct opp_table *opp_table; list_for_each_entry(opp_table, &opp_tables, node) { - if (_find_opp_dev(dev, opp_table)) { - _get_opp_table_kref(opp_table); - return opp_table; - } + if (_find_opp_dev(dev, opp_table)) + return dev_pm_opp_get_opp_table_ref(opp_table); } return ERR_PTR(-ENODEV); @@ -80,18 +75,13 @@ static struct opp_table *_find_opp_table_unlocked(struct device *dev) */ struct opp_table *_find_opp_table(struct device *dev) { - struct opp_table *opp_table; - if (IS_ERR_OR_NULL(dev)) { pr_err("%s: Invalid parameters\n", __func__); return ERR_PTR(-EINVAL); } - mutex_lock(&opp_table_lock); - opp_table = _find_opp_table_unlocked(dev); - mutex_unlock(&opp_table_lock); - - return opp_table; + guard(mutex)(&opp_table_lock); + return _find_opp_table_unlocked(dev); } /* @@ -319,18 +309,13 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_is_turbo); */ unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev) { - struct opp_table *opp_table; - unsigned long clock_latency_ns; + struct opp_table *opp_table __free(put_opp_table); opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return 0; - clock_latency_ns = opp_table->clock_latency_ns_max; - - dev_pm_opp_put_opp_table(opp_table); - - return clock_latency_ns; + return opp_table->clock_latency_ns_max; } EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_clock_latency); @@ -342,7 +327,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_clock_latency); */ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev) { - struct opp_table *opp_table; + struct opp_table *opp_table __free(put_opp_table); struct dev_pm_opp *opp; struct regulator *reg; unsigned long latency_ns = 0; @@ -358,33 +343,31 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev) /* Regulator may not be required for the device */ if (!opp_table->regulators) - goto put_opp_table; + return 0; count = opp_table->regulator_count; uV = kmalloc_array(count, sizeof(*uV), GFP_KERNEL); if (!uV) - goto put_opp_table; - - mutex_lock(&opp_table->lock); + return 0; - for (i = 0; i < count; i++) { - uV[i].min = ~0; - uV[i].max = 0; + scoped_guard(mutex, &opp_table->lock) { + for (i = 0; i < count; i++) { + uV[i].min = ~0; + uV[i].max = 0; - list_for_each_entry(opp, &opp_table->opp_list, node) { - if (!opp->available) - continue; + list_for_each_entry(opp, &opp_table->opp_list, node) { + if (!opp->available) + continue; - if (opp->supplies[i].u_volt_min < uV[i].min) - uV[i].min = opp->supplies[i].u_volt_min; - if (opp->supplies[i].u_volt_max > uV[i].max) - uV[i].max = opp->supplies[i].u_volt_max; + if (opp->supplies[i].u_volt_min < uV[i].min) + uV[i].min = opp->supplies[i].u_volt_min; + if (opp->supplies[i].u_volt_max > uV[i].max) + uV[i].max = opp->supplies[i].u_volt_max; + } } } - mutex_unlock(&opp_table->lock); - /* * 
The caller needs to ensure that opp_table (and hence the regulator) * isn't freed, while we are executing this routine. @@ -397,8 +380,6 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev) } kfree(uV); -put_opp_table: - dev_pm_opp_put_opp_table(opp_table); return latency_ns; } @@ -428,7 +409,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_transition_latency); */ unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev) { - struct opp_table *opp_table; + struct opp_table *opp_table __free(put_opp_table); unsigned long freq = 0; opp_table = _find_opp_table(dev); @@ -438,8 +419,6 @@ unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev) if (opp_table->suspend_opp && opp_table->suspend_opp->available) freq = dev_pm_opp_get_freq(opp_table->suspend_opp); - dev_pm_opp_put_opp_table(opp_table); - return freq; } EXPORT_SYMBOL_GPL(dev_pm_opp_get_suspend_opp_freq); @@ -449,15 +428,13 @@ int _get_opp_count(struct opp_table *opp_table) struct dev_pm_opp *opp; int count = 0; - mutex_lock(&opp_table->lock); + guard(mutex)(&opp_table->lock); list_for_each_entry(opp, &opp_table->opp_list, node) { if (opp->available) count++; } - mutex_unlock(&opp_table->lock); - return count; } @@ -470,21 +447,16 @@ int _get_opp_count(struct opp_table *opp_table) */ int dev_pm_opp_get_opp_count(struct device *dev) { - struct opp_table *opp_table; - int count; + struct opp_table *opp_table __free(put_opp_table); opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { - count = PTR_ERR(opp_table); - dev_dbg(dev, "%s: OPP table not found (%d)\n", - __func__, count); - return count; + dev_dbg(dev, "%s: OPP table not found (%ld)\n", + __func__, PTR_ERR(opp_table)); + return PTR_ERR(opp_table); } - count = _get_opp_count(opp_table); - dev_pm_opp_put_opp_table(opp_table); - - return count; + return _get_opp_count(opp_table); } EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count); @@ -551,7 +523,7 @@ static struct dev_pm_opp *_opp_table_find_key(struct opp_table *opp_table, if (assert && !assert(opp_table, index)) return ERR_PTR(-EINVAL); - mutex_lock(&opp_table->lock); + guard(mutex)(&opp_table->lock); list_for_each_entry(temp_opp, &opp_table->opp_list, node) { if (temp_opp->available == available) { @@ -566,8 +538,6 @@ static struct dev_pm_opp *_opp_table_find_key(struct opp_table *opp_table, dev_pm_opp_get(opp); } - mutex_unlock(&opp_table->lock); - return opp; } @@ -578,8 +548,7 @@ _find_key(struct device *dev, unsigned long *key, int index, bool available, unsigned long opp_key, unsigned long key), bool (*assert)(struct opp_table *opp_table, unsigned int index)) { - struct opp_table *opp_table; - struct dev_pm_opp *opp; + struct opp_table *opp_table __free(put_opp_table); opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { @@ -588,12 +557,8 @@ _find_key(struct device *dev, unsigned long *key, int index, bool available, return ERR_CAST(opp_table); } - opp = _opp_table_find_key(opp_table, key, index, available, read, - compare, assert); - - dev_pm_opp_put_opp_table(opp_table); - - return opp; + return _opp_table_find_key(opp_table, key, index, available, read, + compare, assert); } static struct dev_pm_opp *_find_key_exact(struct device *dev, @@ -1187,10 +1152,9 @@ static void _find_current_opp(struct device *dev, struct opp_table *opp_table) * make special checks to validate current_opp. 
*/ if (IS_ERR(opp)) { - mutex_lock(&opp_table->lock); - opp = list_first_entry(&opp_table->opp_list, struct dev_pm_opp, node); - dev_pm_opp_get(opp); - mutex_unlock(&opp_table->lock); + guard(mutex)(&opp_table->lock); + opp = dev_pm_opp_get(list_first_entry(&opp_table->opp_list, + struct dev_pm_opp, node)); } opp_table->current_opp = opp; @@ -1329,8 +1293,7 @@ static int _set_opp(struct device *dev, struct opp_table *opp_table, dev_pm_opp_put(old_opp); /* Make sure current_opp doesn't get freed */ - dev_pm_opp_get(opp); - opp_table->current_opp = opp; + opp_table->current_opp = dev_pm_opp_get(opp); return ret; } @@ -1348,11 +1311,10 @@ static int _set_opp(struct device *dev, struct opp_table *opp_table, */ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq) { - struct opp_table *opp_table; + struct opp_table *opp_table __free(put_opp_table); + struct dev_pm_opp *opp __free(put_opp) = NULL; unsigned long freq = 0, temp_freq; - struct dev_pm_opp *opp = NULL; bool forced = false; - int ret; opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { @@ -1369,9 +1331,8 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq) * equivalent to a clk_set_rate() */ if (!_get_opp_count(opp_table)) { - ret = opp_table->config_clks(dev, opp_table, NULL, - &target_freq, false); - goto put_opp_table; + return opp_table->config_clks(dev, opp_table, NULL, + &target_freq, false); } freq = clk_round_rate(opp_table->clk, target_freq); @@ -1386,10 +1347,9 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq) temp_freq = freq; opp = _find_freq_ceil(opp_table, &temp_freq); if (IS_ERR(opp)) { - ret = PTR_ERR(opp); - dev_err(dev, "%s: failed to find OPP for freq %lu (%d)\n", - __func__, freq, ret); - goto put_opp_table; + dev_err(dev, "%s: failed to find OPP for freq %lu (%ld)\n", + __func__, freq, PTR_ERR(opp)); + return PTR_ERR(opp); } /* @@ -1402,14 +1362,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq) forced = opp_table->current_rate_single_clk != freq; } - ret = _set_opp(dev, opp_table, opp, &freq, forced); - - if (freq) - dev_pm_opp_put(opp); - -put_opp_table: - dev_pm_opp_put_opp_table(opp_table); - return ret; + return _set_opp(dev, opp_table, opp, &freq, forced); } EXPORT_SYMBOL_GPL(dev_pm_opp_set_rate); @@ -1425,8 +1378,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_rate); */ int dev_pm_opp_set_opp(struct device *dev, struct dev_pm_opp *opp) { - struct opp_table *opp_table; - int ret; + struct opp_table *opp_table __free(put_opp_table); opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { @@ -1434,10 +1386,7 @@ int dev_pm_opp_set_opp(struct device *dev, struct dev_pm_opp *opp) return PTR_ERR(opp_table); } - ret = _set_opp(dev, opp_table, opp, NULL, false); - dev_pm_opp_put_opp_table(opp_table); - - return ret; + return _set_opp(dev, opp_table, opp, NULL, false); } EXPORT_SYMBOL_GPL(dev_pm_opp_set_opp); @@ -1462,9 +1411,8 @@ struct opp_device *_add_opp_dev(const struct device *dev, /* Initialize opp-dev */ opp_dev->dev = dev; - mutex_lock(&opp_table->lock); - list_add(&opp_dev->node, &opp_table->dev_list); - mutex_unlock(&opp_table->lock); + scoped_guard(mutex, &opp_table->lock) + list_add(&opp_dev->node, &opp_table->dev_list); /* Create debugfs entries for the opp_table */ opp_debug_register(opp_dev, opp_table); @@ -1688,14 +1636,10 @@ static void _opp_table_kref_release(struct kref *kref) kfree(opp_table); } -void _get_opp_table_kref(struct opp_table *opp_table) +struct opp_table *dev_pm_opp_get_opp_table_ref(struct 
opp_table *opp_table) { kref_get(&opp_table->kref); -} - -void dev_pm_opp_get_opp_table_ref(struct opp_table *opp_table) -{ - _get_opp_table_kref(opp_table); + return opp_table; } EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_table_ref); @@ -1729,9 +1673,10 @@ static void _opp_kref_release(struct kref *kref) kfree(opp); } -void dev_pm_opp_get(struct dev_pm_opp *opp) +struct dev_pm_opp *dev_pm_opp_get(struct dev_pm_opp *opp) { kref_get(&opp->kref); + return opp; } EXPORT_SYMBOL_GPL(dev_pm_opp_get); @@ -1750,27 +1695,25 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put); */ void dev_pm_opp_remove(struct device *dev, unsigned long freq) { + struct opp_table *opp_table __free(put_opp_table); struct dev_pm_opp *opp = NULL, *iter; - struct opp_table *opp_table; opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return; if (!assert_single_clk(opp_table, 0)) - goto put_table; - - mutex_lock(&opp_table->lock); + return; - list_for_each_entry(iter, &opp_table->opp_list, node) { - if (iter->rates[0] == freq) { - opp = iter; - break; + scoped_guard(mutex, &opp_table->lock) { + list_for_each_entry(iter, &opp_table->opp_list, node) { + if (iter->rates[0] == freq) { + opp = iter; + break; + } } } - mutex_unlock(&opp_table->lock); - if (opp) { dev_pm_opp_put(opp); @@ -1780,32 +1723,26 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq) dev_warn(dev, "%s: Couldn't find OPP with freq: %lu\n", __func__, freq); } - -put_table: - /* Drop the reference taken by _find_opp_table() */ - dev_pm_opp_put_opp_table(opp_table); } EXPORT_SYMBOL_GPL(dev_pm_opp_remove); static struct dev_pm_opp *_opp_get_next(struct opp_table *opp_table, bool dynamic) { - struct dev_pm_opp *opp = NULL, *temp; + struct dev_pm_opp *opp; + + guard(mutex)(&opp_table->lock); - mutex_lock(&opp_table->lock); - list_for_each_entry(temp, &opp_table->opp_list, node) { + list_for_each_entry(opp, &opp_table->opp_list, node) { /* * Refcount must be dropped only once for each OPP by OPP core, * do that with help of "removed" flag. 
*/ - if (!temp->removed && dynamic == temp->dynamic) { - opp = temp; - break; - } + if (!opp->removed && dynamic == opp->dynamic) + return opp; } - mutex_unlock(&opp_table->lock); - return opp; + return NULL; } /* @@ -1829,20 +1766,14 @@ static void _opp_remove_all(struct opp_table *opp_table, bool dynamic) bool _opp_remove_all_static(struct opp_table *opp_table) { - mutex_lock(&opp_table->lock); - - if (!opp_table->parsed_static_opps) { - mutex_unlock(&opp_table->lock); - return false; - } + scoped_guard(mutex, &opp_table->lock) { + if (!opp_table->parsed_static_opps) + return false; - if (--opp_table->parsed_static_opps) { - mutex_unlock(&opp_table->lock); - return true; + if (--opp_table->parsed_static_opps) + return true; } - mutex_unlock(&opp_table->lock); - _opp_remove_all(opp_table, false); return true; } @@ -1855,16 +1786,13 @@ bool _opp_remove_all_static(struct opp_table *opp_table) */ void dev_pm_opp_remove_all_dynamic(struct device *dev) { - struct opp_table *opp_table; + struct opp_table *opp_table __free(put_opp_table); opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return; _opp_remove_all(opp_table, true); - - /* Drop the reference taken by _find_opp_table() */ - dev_pm_opp_put_opp_table(opp_table); } EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic); @@ -2049,17 +1977,15 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct list_head *head; int ret; - mutex_lock(&opp_table->lock); - head = &opp_table->opp_list; + scoped_guard(mutex, &opp_table->lock) { + head = &opp_table->opp_list; - ret = _opp_is_duplicate(dev, new_opp, opp_table, &head); - if (ret) { - mutex_unlock(&opp_table->lock); - return ret; - } + ret = _opp_is_duplicate(dev, new_opp, opp_table, &head); + if (ret) + return ret; - list_add(&new_opp->node, head); - mutex_unlock(&opp_table->lock); + list_add(&new_opp->node, head); + } new_opp->opp_table = opp_table; kref_init(&new_opp->kref); @@ -2161,8 +2087,8 @@ static int _opp_set_supported_hw(struct opp_table *opp_table, if (opp_table->supported_hw) return 0; - opp_table->supported_hw = kmemdup(versions, count * sizeof(*versions), - GFP_KERNEL); + opp_table->supported_hw = kmemdup_array(versions, count, + sizeof(*versions), GFP_KERNEL); if (!opp_table->supported_hw) return -ENOMEM; @@ -2706,18 +2632,16 @@ struct dev_pm_opp *dev_pm_opp_xlate_required_opp(struct opp_table *src_table, return ERR_PTR(-EBUSY); for (i = 0; i < src_table->required_opp_count; i++) { - if (src_table->required_opp_tables[i] == dst_table) { - mutex_lock(&src_table->lock); + if (src_table->required_opp_tables[i] != dst_table) + continue; + scoped_guard(mutex, &src_table->lock) { list_for_each_entry(opp, &src_table->opp_list, node) { if (opp == src_opp) { - dest_opp = opp->required_opps[i]; - dev_pm_opp_get(dest_opp); + dest_opp = dev_pm_opp_get(opp->required_opps[i]); break; } } - - mutex_unlock(&src_table->lock); break; } } @@ -2749,7 +2673,6 @@ int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, unsigned int pstate) { struct dev_pm_opp *opp; - int dest_pstate = -EINVAL; int i; /* @@ -2783,22 +2706,17 @@ int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, return -EINVAL; } - mutex_lock(&src_table->lock); + guard(mutex)(&src_table->lock); list_for_each_entry(opp, &src_table->opp_list, node) { - if (opp->level == pstate) { - dest_pstate = opp->required_opps[i]->level; - goto unlock; - } + if (opp->level == pstate) + return opp->required_opps[i]->level; } pr_err("%s: Couldn't find matching OPP (%p: %p)\n", __func__, src_table, dst_table); 
-unlock: - mutex_unlock(&src_table->lock); - - return dest_pstate; + return -EINVAL; } /** @@ -2853,46 +2771,38 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_add_dynamic); static int _opp_set_availability(struct device *dev, unsigned long freq, bool availability_req) { - struct opp_table *opp_table; - struct dev_pm_opp *tmp_opp, *opp = ERR_PTR(-ENODEV); - int r = 0; + struct dev_pm_opp *opp __free(put_opp) = ERR_PTR(-ENODEV), *tmp_opp; + struct opp_table *opp_table __free(put_opp_table); /* Find the opp_table */ opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { - r = PTR_ERR(opp_table); - dev_warn(dev, "%s: Device OPP not found (%d)\n", __func__, r); - return r; + dev_warn(dev, "%s: Device OPP not found (%ld)\n", __func__, + PTR_ERR(opp_table)); + return PTR_ERR(opp_table); } - if (!assert_single_clk(opp_table, 0)) { - r = -EINVAL; - goto put_table; - } + if (!assert_single_clk(opp_table, 0)) + return -EINVAL; - mutex_lock(&opp_table->lock); + scoped_guard(mutex, &opp_table->lock) { + /* Do we have the frequency? */ + list_for_each_entry(tmp_opp, &opp_table->opp_list, node) { + if (tmp_opp->rates[0] == freq) { + opp = dev_pm_opp_get(tmp_opp); - /* Do we have the frequency? */ - list_for_each_entry(tmp_opp, &opp_table->opp_list, node) { - if (tmp_opp->rates[0] == freq) { - opp = tmp_opp; - break; - } - } + /* Is update really needed? */ + if (opp->available == availability_req) + return 0; - if (IS_ERR(opp)) { - r = PTR_ERR(opp); - goto unlock; + opp->available = availability_req; + break; + } + } } - /* Is update really needed? */ - if (opp->available == availability_req) - goto unlock; - - opp->available = availability_req; - - dev_pm_opp_get(opp); - mutex_unlock(&opp_table->lock); + if (IS_ERR(opp)) + return PTR_ERR(opp); /* Notify the change of the OPP availability */ if (availability_req) @@ -2902,14 +2812,7 @@ static int _opp_set_availability(struct device *dev, unsigned long freq, blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_DISABLE, opp); - dev_pm_opp_put(opp); - goto put_table; - -unlock: - mutex_unlock(&opp_table->lock); -put_table: - dev_pm_opp_put_opp_table(opp_table); - return r; + return 0; } /** @@ -2929,9 +2832,9 @@ int dev_pm_opp_adjust_voltage(struct device *dev, unsigned long freq, unsigned long u_volt_max) { - struct opp_table *opp_table; - struct dev_pm_opp *tmp_opp, *opp = ERR_PTR(-ENODEV); - int r = 0; + struct dev_pm_opp *opp __free(put_opp) = ERR_PTR(-ENODEV), *tmp_opp; + struct opp_table *opp_table __free(put_opp_table); + int r; /* Find the opp_table */ opp_table = _find_opp_table(dev); @@ -2941,49 +2844,36 @@ int dev_pm_opp_adjust_voltage(struct device *dev, unsigned long freq, return r; } - if (!assert_single_clk(opp_table, 0)) { - r = -EINVAL; - goto put_table; - } + if (!assert_single_clk(opp_table, 0)) + return -EINVAL; - mutex_lock(&opp_table->lock); + scoped_guard(mutex, &opp_table->lock) { + /* Do we have the frequency? */ + list_for_each_entry(tmp_opp, &opp_table->opp_list, node) { + if (tmp_opp->rates[0] == freq) { + opp = dev_pm_opp_get(tmp_opp); - /* Do we have the frequency? */ - list_for_each_entry(tmp_opp, &opp_table->opp_list, node) { - if (tmp_opp->rates[0] == freq) { - opp = tmp_opp; - break; - } - } + /* Is update really needed? */ + if (opp->supplies->u_volt == u_volt) + return 0; - if (IS_ERR(opp)) { - r = PTR_ERR(opp); - goto adjust_unlock; - } - - /* Is update really needed? 
*/ - if (opp->supplies->u_volt == u_volt) - goto adjust_unlock; + opp->supplies->u_volt = u_volt; + opp->supplies->u_volt_min = u_volt_min; + opp->supplies->u_volt_max = u_volt_max; - opp->supplies->u_volt = u_volt; - opp->supplies->u_volt_min = u_volt_min; - opp->supplies->u_volt_max = u_volt_max; + break; + } + } + } - dev_pm_opp_get(opp); - mutex_unlock(&opp_table->lock); + if (IS_ERR(opp)) + return PTR_ERR(opp); /* Notify the voltage change of the OPP */ blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ADJUST_VOLTAGE, opp); - dev_pm_opp_put(opp); - goto put_table; - -adjust_unlock: - mutex_unlock(&opp_table->lock); -put_table: - dev_pm_opp_put_opp_table(opp_table); - return r; + return 0; } EXPORT_SYMBOL_GPL(dev_pm_opp_adjust_voltage); @@ -2997,9 +2887,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_adjust_voltage); */ int dev_pm_opp_sync_regulators(struct device *dev) { - struct opp_table *opp_table; + struct opp_table *opp_table __free(put_opp_table); struct regulator *reg; - int i, ret = 0; + int ret, i; /* Device may not have OPP table */ opp_table = _find_opp_table(dev); @@ -3008,23 +2898,20 @@ int dev_pm_opp_sync_regulators(struct device *dev) /* Regulator may not be required for the device */ if (unlikely(!opp_table->regulators)) - goto put_table; + return 0; /* Nothing to sync if voltage wasn't changed */ if (!opp_table->enabled) - goto put_table; + return 0; for (i = 0; i < opp_table->regulator_count; i++) { reg = opp_table->regulators[i]; ret = regulator_sync_voltage(reg); if (ret) - break; + return ret; } -put_table: - /* Drop reference taken by _find_opp_table() */ - dev_pm_opp_put_opp_table(opp_table); - return ret; + return 0; } EXPORT_SYMBOL_GPL(dev_pm_opp_sync_regulators); @@ -3076,18 +2963,13 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_disable); */ int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb) { - struct opp_table *opp_table; - int ret; + struct opp_table *opp_table __free(put_opp_table); opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return PTR_ERR(opp_table); - ret = blocking_notifier_chain_register(&opp_table->head, nb); - - dev_pm_opp_put_opp_table(opp_table); - - return ret; + return blocking_notifier_chain_register(&opp_table->head, nb); } EXPORT_SYMBOL(dev_pm_opp_register_notifier); @@ -3101,18 +2983,13 @@ EXPORT_SYMBOL(dev_pm_opp_register_notifier); int dev_pm_opp_unregister_notifier(struct device *dev, struct notifier_block *nb) { - struct opp_table *opp_table; - int ret; + struct opp_table *opp_table __free(put_opp_table); opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return PTR_ERR(opp_table); - ret = blocking_notifier_chain_unregister(&opp_table->head, nb); - - dev_pm_opp_put_opp_table(opp_table); - - return ret; + return blocking_notifier_chain_unregister(&opp_table->head, nb); } EXPORT_SYMBOL(dev_pm_opp_unregister_notifier); @@ -3125,7 +3002,7 @@ EXPORT_SYMBOL(dev_pm_opp_unregister_notifier); */ void dev_pm_opp_remove_table(struct device *dev) { - struct opp_table *opp_table; + struct opp_table *opp_table __free(put_opp_table); /* Check for existing table for 'dev' */ opp_table = _find_opp_table(dev); @@ -3146,8 +3023,5 @@ void dev_pm_opp_remove_table(struct device *dev) **/ if (_opp_remove_all_static(opp_table)) dev_pm_opp_put_opp_table(opp_table); - - /* Drop reference taken by _find_opp_table() */ - dev_pm_opp_put_opp_table(opp_table); } EXPORT_SYMBOL_GPL(dev_pm_opp_remove_table); diff --git a/drivers/opp/cpu.c b/drivers/opp/cpu.c index 12c429b407ca..97989d4fe336 100644 --- a/drivers/opp/cpu.c +++ 
b/drivers/opp/cpu.c @@ -43,7 +43,6 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev, struct cpufreq_frequency_table **opp_table) { - struct dev_pm_opp *opp; struct cpufreq_frequency_table *freq_table = NULL; int i, max_opps, ret = 0; unsigned long rate; @@ -57,6 +56,8 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev, return -ENOMEM; for (i = 0, rate = 0; i < max_opps; i++, rate++) { + struct dev_pm_opp *opp __free(put_opp); + /* find next rate */ opp = dev_pm_opp_find_freq_ceil(dev, &rate); if (IS_ERR(opp)) { @@ -69,8 +70,6 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev, /* Is Boost/turbo opp ? */ if (dev_pm_opp_is_turbo(opp)) freq_table[i].flags = CPUFREQ_BOOST_FREQ; - - dev_pm_opp_put(opp); } freq_table[i].driver_data = i; @@ -155,10 +154,10 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table); int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask) { + struct opp_table *opp_table __free(put_opp_table); struct opp_device *opp_dev; - struct opp_table *opp_table; struct device *dev; - int cpu, ret = 0; + int cpu; opp_table = _find_opp_table(cpu_dev); if (IS_ERR(opp_table)) @@ -186,9 +185,7 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, opp_table->shared_opp = OPP_TABLE_ACCESS_SHARED; } - dev_pm_opp_put_opp_table(opp_table); - - return ret; + return 0; } EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus); @@ -204,33 +201,26 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus); */ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask) { + struct opp_table *opp_table __free(put_opp_table); struct opp_device *opp_dev; - struct opp_table *opp_table; - int ret = 0; opp_table = _find_opp_table(cpu_dev); if (IS_ERR(opp_table)) return PTR_ERR(opp_table); - if (opp_table->shared_opp == OPP_TABLE_ACCESS_UNKNOWN) { - ret = -EINVAL; - goto put_opp_table; - } + if (opp_table->shared_opp == OPP_TABLE_ACCESS_UNKNOWN) + return -EINVAL; cpumask_clear(cpumask); if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) { - mutex_lock(&opp_table->lock); + guard(mutex)(&opp_table->lock); list_for_each_entry(opp_dev, &opp_table->dev_list, node) cpumask_set_cpu(opp_dev->dev->id, cpumask); - mutex_unlock(&opp_table->lock); } else { cpumask_set_cpu(cpu_dev->id, cpumask); } -put_opp_table: - dev_pm_opp_put_opp_table(opp_table); - - return ret; + return 0; } EXPORT_SYMBOL_GPL(dev_pm_opp_get_sharing_cpus); diff --git a/drivers/opp/of.c b/drivers/opp/of.c index a24f76f5fd01..505d79821584 100644 --- a/drivers/opp/of.c +++ b/drivers/opp/of.c @@ -45,7 +45,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node); struct opp_table *_managed_opp(struct device *dev, int index) { struct opp_table *opp_table, *managed_table = NULL; - struct device_node *np; + struct device_node *np __free(device_node); np = _opp_of_get_opp_desc_node(dev->of_node, index); if (!np) @@ -60,17 +60,13 @@ struct opp_table *_managed_opp(struct device *dev, int index) * But the OPPs will be considered as shared only if the * OPP table contains a "opp-shared" property. 
*/ - if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) { - _get_opp_table_kref(opp_table); - managed_table = opp_table; - } + if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) + managed_table = dev_pm_opp_get_opp_table_ref(opp_table); break; } } - of_node_put(np); - return managed_table; } @@ -80,18 +76,13 @@ static struct dev_pm_opp *_find_opp_of_np(struct opp_table *opp_table, { struct dev_pm_opp *opp; - mutex_lock(&opp_table->lock); + guard(mutex)(&opp_table->lock); list_for_each_entry(opp, &opp_table->opp_list, node) { - if (opp->np == opp_np) { - dev_pm_opp_get(opp); - mutex_unlock(&opp_table->lock); - return opp; - } + if (opp->np == opp_np) + return dev_pm_opp_get(opp); } - mutex_unlock(&opp_table->lock); - return NULL; } @@ -104,27 +95,20 @@ static struct device_node *of_parse_required_opp(struct device_node *np, /* The caller must call dev_pm_opp_put_opp_table() after the table is used */ static struct opp_table *_find_table_of_opp_np(struct device_node *opp_np) { + struct device_node *opp_table_np __free(device_node); struct opp_table *opp_table; - struct device_node *opp_table_np; opp_table_np = of_get_parent(opp_np); if (!opp_table_np) - goto err; + return ERR_PTR(-ENODEV); - /* It is safe to put the node now as all we need now is its address */ - of_node_put(opp_table_np); + guard(mutex)(&opp_table_lock); - mutex_lock(&opp_table_lock); list_for_each_entry(opp_table, &opp_tables, node) { - if (opp_table_np == opp_table->np) { - _get_opp_table_kref(opp_table); - mutex_unlock(&opp_table_lock); - return opp_table; - } + if (opp_table_np == opp_table->np) + return dev_pm_opp_get_opp_table_ref(opp_table); } - mutex_unlock(&opp_table_lock); -err: return ERR_PTR(-ENODEV); } @@ -149,9 +133,8 @@ static void _opp_table_free_required_tables(struct opp_table *opp_table) opp_table->required_opp_count = 0; opp_table->required_opp_tables = NULL; - mutex_lock(&opp_table_lock); + guard(mutex)(&opp_table_lock); list_del(&opp_table->lazy); - mutex_unlock(&opp_table_lock); } /* @@ -163,7 +146,7 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table, struct device_node *opp_np) { struct opp_table **required_opp_tables; - struct device_node *required_np, *np; + struct device_node *np __free(device_node); bool lazy = false; int count, i, size; @@ -171,30 +154,32 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table, np = of_get_next_available_child(opp_np, NULL); if (!np) { dev_warn(dev, "Empty OPP table\n"); - return; } count = of_count_phandle_with_args(np, "required-opps", NULL); if (count <= 0) - goto put_np; + return; size = sizeof(*required_opp_tables) + sizeof(*opp_table->required_devs); required_opp_tables = kcalloc(count, size, GFP_KERNEL); if (!required_opp_tables) - goto put_np; + return; opp_table->required_opp_tables = required_opp_tables; opp_table->required_devs = (void *)(required_opp_tables + count); opp_table->required_opp_count = count; for (i = 0; i < count; i++) { + struct device_node *required_np __free(device_node); + required_np = of_parse_required_opp(np, i); - if (!required_np) - goto free_required_tables; + if (!required_np) { + _opp_table_free_required_tables(opp_table); + return; + } required_opp_tables[i] = _find_table_of_opp_np(required_np); - of_node_put(required_np); if (IS_ERR(required_opp_tables[i])) lazy = true; @@ -206,23 +191,15 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table, * The OPP table is not held while allocating the table, take it * now to avoid corruption to the 
lazy_opp_tables list. */ - mutex_lock(&opp_table_lock); + guard(mutex)(&opp_table_lock); list_add(&opp_table->lazy, &lazy_opp_tables); - mutex_unlock(&opp_table_lock); } - - goto put_np; - -free_required_tables: - _opp_table_free_required_tables(opp_table); -put_np: - of_node_put(np); } void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, int index) { - struct device_node *np, *opp_np; + struct device_node *np __free(device_node), *opp_np; u32 val; /* @@ -243,8 +220,6 @@ void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, /* Get OPP table node */ opp_np = _opp_of_get_opp_desc_node(np, index); - of_node_put(np); - if (!opp_np) return; @@ -298,15 +273,13 @@ void _of_clear_opp(struct opp_table *opp_table, struct dev_pm_opp *opp) static int _link_required_opps(struct dev_pm_opp *opp, struct opp_table *required_table, int index) { - struct device_node *np; + struct device_node *np __free(device_node); np = of_parse_required_opp(opp->np, index); if (unlikely(!np)) return -ENODEV; opp->required_opps[index] = _find_opp_of_np(required_table, np); - of_node_put(np); - if (!opp->required_opps[index]) { pr_err("%s: Unable to find required OPP node: %pOF (%d)\n", __func__, opp->np, index); @@ -370,19 +343,22 @@ static int lazy_link_required_opps(struct opp_table *opp_table, static void lazy_link_required_opp_table(struct opp_table *new_table) { struct opp_table *opp_table, *temp, **required_opp_tables; - struct device_node *required_np, *opp_np, *required_table_np; struct dev_pm_opp *opp; int i, ret; - mutex_lock(&opp_table_lock); + guard(mutex)(&opp_table_lock); list_for_each_entry_safe(opp_table, temp, &lazy_opp_tables, lazy) { + struct device_node *opp_np __free(device_node); bool lazy = false; /* opp_np can't be invalid here */ opp_np = of_get_next_available_child(opp_table->np, NULL); for (i = 0; i < opp_table->required_opp_count; i++) { + struct device_node *required_np __free(device_node) = NULL; + struct device_node *required_table_np __free(device_node) = NULL; + required_opp_tables = opp_table->required_opp_tables; /* Required opp-table is already parsed */ @@ -393,9 +369,6 @@ static void lazy_link_required_opp_table(struct opp_table *new_table) required_np = of_parse_required_opp(opp_np, i); required_table_np = of_get_parent(required_np); - of_node_put(required_table_np); - of_node_put(required_np); - /* * Newly added table isn't the required opp-table for * opp_table. 
@@ -405,8 +378,7 @@ static void lazy_link_required_opp_table(struct opp_table *new_table) continue; } - required_opp_tables[i] = new_table; - _get_opp_table_kref(new_table); + required_opp_tables[i] = dev_pm_opp_get_opp_table_ref(new_table); /* Link OPPs now */ ret = lazy_link_required_opps(opp_table, new_table, i); @@ -417,8 +389,6 @@ static void lazy_link_required_opp_table(struct opp_table *new_table) } } - of_node_put(opp_np); - /* All required opp-tables found, remove from lazy list */ if (!lazy) { list_del_init(&opp_table->lazy); @@ -427,22 +397,22 @@ static void lazy_link_required_opp_table(struct opp_table *new_table) _required_opps_available(opp, opp_table->required_opp_count); } } - - mutex_unlock(&opp_table_lock); } static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table) { - struct device_node *np, *opp_np; + struct device_node *opp_np __free(device_node) = NULL; + struct device_node *np __free(device_node) = NULL; struct property *prop; if (!opp_table) { + struct device_node *np __free(device_node); + np = of_node_get(dev->of_node); if (!np) return -ENODEV; opp_np = _opp_of_get_opp_desc_node(np, 0); - of_node_put(np); } else { opp_np = of_node_get(opp_table->np); } @@ -453,15 +423,12 @@ static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table) /* Checking only first OPP is sufficient */ np = of_get_next_available_child(opp_np, NULL); - of_node_put(opp_np); if (!np) { dev_err(dev, "OPP table empty\n"); return -EINVAL; } prop = of_find_property(np, "opp-peak-kBps", NULL); - of_node_put(np); - if (!prop || !prop->length) return 0; @@ -471,7 +438,7 @@ static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table) int dev_pm_opp_of_find_icc_paths(struct device *dev, struct opp_table *opp_table) { - struct device_node *np; + struct device_node *np __free(device_node) = of_node_get(dev->of_node); int ret, i, count, num_paths; struct icc_path **paths; @@ -481,15 +448,13 @@ int dev_pm_opp_of_find_icc_paths(struct device *dev, else if (ret <= 0) return ret; - ret = 0; - - np = of_node_get(dev->of_node); if (!np) return 0; + ret = 0; + count = of_count_phandle_with_args(np, "interconnects", "#interconnect-cells"); - of_node_put(np); if (count < 0) return 0; @@ -992,15 +957,14 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table) struct dev_pm_opp *opp; /* OPP table is already initialized for the device */ - mutex_lock(&opp_table->lock); - if (opp_table->parsed_static_opps) { - opp_table->parsed_static_opps++; - mutex_unlock(&opp_table->lock); - return 0; - } + scoped_guard(mutex, &opp_table->lock) { + if (opp_table->parsed_static_opps) { + opp_table->parsed_static_opps++; + return 0; + } - opp_table->parsed_static_opps = 1; - mutex_unlock(&opp_table->lock); + opp_table->parsed_static_opps = 1; + } /* We have opp-table node now, iterate over it and add OPPs */ for_each_available_child_of_node(opp_table->np, np) { @@ -1040,15 +1004,14 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table) const __be32 *val; int nr, ret = 0; - mutex_lock(&opp_table->lock); - if (opp_table->parsed_static_opps) { - opp_table->parsed_static_opps++; - mutex_unlock(&opp_table->lock); - return 0; - } + scoped_guard(mutex, &opp_table->lock) { + if (opp_table->parsed_static_opps) { + opp_table->parsed_static_opps++; + return 0; + } - opp_table->parsed_static_opps = 1; - mutex_unlock(&opp_table->lock); + opp_table->parsed_static_opps = 1; + } prop = of_find_property(dev->of_node, 
"operating-points", NULL); if (!prop) { @@ -1306,8 +1269,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table); int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask) { - struct device_node *np, *tmp_np, *cpu_np; - int cpu, ret = 0; + struct device_node *np __free(device_node); + int cpu; /* Get OPP descriptor node */ np = dev_pm_opp_of_get_opp_desc_node(cpu_dev); @@ -1320,9 +1283,12 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, /* OPPs are shared ? */ if (!of_property_read_bool(np, "opp-shared")) - goto put_cpu_node; + return 0; for_each_possible_cpu(cpu) { + struct device_node *cpu_np __free(device_node) = NULL; + struct device_node *tmp_np __free(device_node) = NULL; + if (cpu == cpu_dev->id) continue; @@ -1330,29 +1296,22 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, if (!cpu_np) { dev_err(cpu_dev, "%s: failed to get cpu%d node\n", __func__, cpu); - ret = -ENOENT; - goto put_cpu_node; + return -ENOENT; } /* Get OPP descriptor node */ tmp_np = _opp_of_get_opp_desc_node(cpu_np, 0); - of_node_put(cpu_np); if (!tmp_np) { pr_err("%pOF: Couldn't find opp node\n", cpu_np); - ret = -ENOENT; - goto put_cpu_node; + return -ENOENT; } /* CPUs are sharing opp node */ if (np == tmp_np) cpumask_set_cpu(cpu, cpumask); - - of_node_put(tmp_np); } -put_cpu_node: - of_node_put(np); - return ret; + return 0; } EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus); @@ -1369,9 +1328,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus); */ int of_get_required_opp_performance_state(struct device_node *np, int index) { - struct dev_pm_opp *opp; - struct device_node *required_np; - struct opp_table *opp_table; + struct device_node *required_np __free(device_node); + struct opp_table *opp_table __free(put_opp_table) = NULL; + struct dev_pm_opp *opp __free(put_opp) = NULL; int pstate = -EINVAL; required_np = of_parse_required_opp(np, index); @@ -1382,13 +1341,13 @@ int of_get_required_opp_performance_state(struct device_node *np, int index) if (IS_ERR(opp_table)) { pr_err("%s: Failed to find required OPP table %pOF: %ld\n", __func__, np, PTR_ERR(opp_table)); - goto put_required_np; + return PTR_ERR(opp_table); } /* The OPP tables must belong to a genpd */ if (unlikely(!opp_table->is_genpd)) { pr_err("%s: Performance state is only valid for genpds.\n", __func__); - goto put_required_np; + return -EINVAL; } opp = _find_opp_of_np(opp_table, required_np); @@ -1399,15 +1358,8 @@ int of_get_required_opp_performance_state(struct device_node *np, int index) } else { pstate = opp->level; } - dev_pm_opp_put(opp); - } - dev_pm_opp_put_opp_table(opp_table); - -put_required_np: - of_node_put(required_np); - return pstate; } EXPORT_SYMBOL_GPL(of_get_required_opp_performance_state); @@ -1424,7 +1376,7 @@ EXPORT_SYMBOL_GPL(of_get_required_opp_performance_state); */ bool dev_pm_opp_of_has_required_opp(struct device *dev) { - struct device_node *opp_np, *np; + struct device_node *np __free(device_node) = NULL, *opp_np __free(device_node); int count; opp_np = _opp_of_get_opp_desc_node(dev->of_node, 0); @@ -1432,14 +1384,12 @@ bool dev_pm_opp_of_has_required_opp(struct device *dev) return false; np = of_get_next_available_child(opp_np, NULL); - of_node_put(opp_np); if (!np) { dev_warn(dev, "Empty OPP table\n"); return false; } count = of_count_phandle_with_args(np, "required-opps", NULL); - of_node_put(np); return count > 0; } @@ -1475,7 +1425,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_of_node); static int __maybe_unused _get_dt_power(struct device *dev, unsigned long *uW, 
unsigned long *kHz) { - struct dev_pm_opp *opp; + struct dev_pm_opp *opp __free(put_opp); unsigned long opp_freq, opp_power; /* Find the right frequency and related OPP */ @@ -1485,7 +1435,6 @@ _get_dt_power(struct device *dev, unsigned long *uW, unsigned long *kHz) return -EINVAL; opp_power = dev_pm_opp_get_power(opp); - dev_pm_opp_put(opp); if (!opp_power) return -EINVAL; @@ -1516,8 +1465,8 @@ _get_dt_power(struct device *dev, unsigned long *uW, unsigned long *kHz) int dev_pm_opp_calc_power(struct device *dev, unsigned long *uW, unsigned long *kHz) { - struct dev_pm_opp *opp; - struct device_node *np; + struct dev_pm_opp *opp __free(put_opp) = NULL; + struct device_node *np __free(device_node); unsigned long mV, Hz; u32 cap; u64 tmp; @@ -1528,7 +1477,6 @@ int dev_pm_opp_calc_power(struct device *dev, unsigned long *uW, return -EINVAL; ret = of_property_read_u32(np, "dynamic-power-coefficient", &cap); - of_node_put(np); if (ret) return -EINVAL; @@ -1538,7 +1486,6 @@ int dev_pm_opp_calc_power(struct device *dev, unsigned long *uW, return -EINVAL; mV = dev_pm_opp_get_voltage(opp) / 1000; - dev_pm_opp_put(opp); if (!mV) return -EINVAL; @@ -1555,20 +1502,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_calc_power); static bool _of_has_opp_microwatt_property(struct device *dev) { - unsigned long power, freq = 0; - struct dev_pm_opp *opp; + struct dev_pm_opp *opp __free(put_opp); + unsigned long freq = 0; /* Check if at least one OPP has needed property */ opp = dev_pm_opp_find_freq_ceil(dev, &freq); if (IS_ERR(opp)) return false; - power = dev_pm_opp_get_power(opp); - dev_pm_opp_put(opp); - if (!power) - return false; - - return true; + return !!dev_pm_opp_get_power(opp); } /** @@ -1584,8 +1526,8 @@ static bool _of_has_opp_microwatt_property(struct device *dev) */ int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus) { + struct device_node *np __free(device_node) = NULL; struct em_data_callback em_cb; - struct device_node *np; int ret, nr_opp; u32 cap; @@ -1620,7 +1562,6 @@ int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus) * user about the inconsistent configuration. */ ret = of_property_read_u32(np, "dynamic-power-coefficient", &cap); - of_node_put(np); if (ret || !cap) { dev_dbg(dev, "Couldn't find proper 'dynamic-power-coefficient' in DT\n"); ret = -EINVAL; diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h index 5c7c81190e41..9eba63e01a9e 100644 --- a/drivers/opp/opp.h +++ b/drivers/opp/opp.h @@ -251,7 +251,6 @@ struct opp_table { /* Routines internal to opp core */ bool _opp_remove_all_static(struct opp_table *opp_table); -void _get_opp_table_kref(struct opp_table *opp_table); int _get_opp_count(struct opp_table *opp_table); struct opp_table *_find_opp_table(struct device *dev); struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table); |