| author | Ville Syrjälä <ville.syrjala@linux.intel.com> | 2026-03-23 11:43:04 +0200 |
|---|---|---|
| committer | Ville Syrjälä <ville.syrjala@linux.intel.com> | 2026-03-24 15:57:31 +0200 |
| commit | ff854b32b604526100c8468f8915150bd4387288 | |
| tree | 1e84fee2df76c98eec29b3281e70cff0d4d2b1fe /drivers/gpu/drm/xe | |
| parent | 56d2a47e6b495e7d382d00b91ce182ff2c6a3741 | |
drm/i915/de: Implement register polling in the display code
The plan is to move all the mmio stuff into the display code itself.
As a first step implement the register polling in intel_de.c.
Currently i915 and xe implement this stuff in slightly different
ways, so there are some functional changes here. Try to go for a
reasonable middle ground between the i915 and xe implementations:
- the exponential backoff limit is the simpler approach taken
by i915 (== just clamp the max sleep duration to 1 ms)
- the fast vs. slow timeout handling is similar to i915 where
we first try the fast timeout and then again the slow timeout
if the condition still isn't satisfied. xe just adds the
timeouts together, which is a bit weird.
- the atomic wait variant uses udelay() like xe, whereas i915
has no udelay()s in its atomic loop. As a compromise go for a
fixed 1 usec delay for short waits, instead of the somewhat
peculiar xe behaviour where it effectively just does one
iteration of the loop.
- keep the "use udelay() for < 10 usec waits" logic (which
more or less mirrors fsleep()), but include an explicit
might_sleep() even for these short waits when called from
a non-atomic intel_de_wait*() function. This should prevent
people from calling the non-atomic functions from the wrong
place.
Eventually we may want to switch over to poll_timeout*(),
but that lacks the exponential backoff, so it would be a bit
too radical a change to make in one go.
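The clamped exponential backoff described above (the simpler i915-style limit) can be sketched in plain C. The helper name, the clamp constant expressed as a plain `1000`, and the standalone form are illustrative assumptions for this example, not the actual intel_de.c code:

```c
#include <assert.h>

/*
 * Illustrative sketch: the sleep duration doubles on every
 * iteration of the poll loop, but the maximum sleep is simply
 * clamped to 1 ms, rather than using a more elaborate limit.
 * The function name is an assumption for this example only.
 */
static unsigned int next_backoff_us(unsigned int wait_us)
{
	unsigned int next = wait_us * 2;

	return next > 1000 ? 1000 : next;	/* clamp max sleep to 1 ms */
}
```

Starting from e.g. a 10 usec sleep, the per-iteration sleeps would run 20, 40, 80, ..., 640, 1000, 1000, ... usec instead of growing without bound.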
v2: Initialize ret in intel_de_wait_for_register() to avoid a
warning from the compiler. This is actually a false positive,
since we always have fast_timeout_us != 0 when slow_timeout_ms != 0,
but the compiler can't see that.
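The v2 note describes a classic maybe-uninitialized false positive. A minimal standalone reproduction of the pattern (all names and stand-in values here are invented for this sketch, not the intel_de.c code):

```c
#include <assert.h>

/*
 * 'ret' is only assigned when fast_us != 0, and only re-read when
 * slow_ms != 0. If the call sites guarantee that slow_ms != 0
 * implies fast_us != 0, the read is safe, but the compiler cannot
 * prove that from this function alone, so 'ret' gets an explicit
 * initializer to silence the warning.
 */
static int poll_once(int fast_us, int slow_ms)
{
	int ret = 0;	/* explicit init silences the false positive */

	if (fast_us)
		ret = -1;	/* stand-in for the fast-timeout poll failing */

	if (slow_ms && ret)
		ret = 0;	/* stand-in for the slow-timeout retry succeeding */

	return ret;
}
```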
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patch.msgid.link/20260323094304.8171-1-ville.syrjala@linux.intel.com
Diffstat (limited to 'drivers/gpu/drm/xe')
| -rw-r--r-- | drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h | 31 |
1 file changed, 0 insertions, 31 deletions
```diff
diff --git a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
index a8cfd65119e0..08d7ab933672 100644
--- a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
+++ b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
@@ -98,37 +98,6 @@ static inline u32 intel_uncore_rmw(struct intel_uncore *uncore,
 	return xe_mmio_rmw32(__compat_uncore_to_mmio(uncore), reg, clear, set);
 }
 
-static inline int
-__intel_wait_for_register(struct intel_uncore *uncore, i915_reg_t i915_reg,
-			  u32 mask, u32 value, unsigned int fast_timeout_us,
-			  unsigned int slow_timeout_ms, u32 *out_value)
-{
-	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
-	bool atomic;
-
-	/*
-	 * Replicate the behavior from i915 here, in which sleep is not
-	 * performed if slow_timeout_ms == 0. This is necessary because
-	 * of some paths in display code where waits are done in atomic
-	 * context.
-	 */
-	atomic = !slow_timeout_ms && fast_timeout_us > 0;
-
-	return xe_mmio_wait32(__compat_uncore_to_mmio(uncore), reg, mask, value,
-			      fast_timeout_us + 1000 * slow_timeout_ms,
-			      out_value, atomic);
-}
-
-static inline int
-__intel_wait_for_register_fw(struct intel_uncore *uncore, i915_reg_t i915_reg,
-			     u32 mask, u32 value, unsigned int fast_timeout_us,
-			     unsigned int slow_timeout_ms, u32 *out_value)
-{
-	return __intel_wait_for_register(uncore, i915_reg, mask, value,
-					 fast_timeout_us, slow_timeout_ms,
-					 out_value);
-}
-
 static inline u32 intel_uncore_read_fw(struct intel_uncore *uncore, i915_reg_t i915_reg)
 {
```
