<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/drivers/gpu/host1x, branch v5.13-rc3</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>gpu: host1x: Add early init and late exit callbacks</title>
<updated>2021-03-31T15:42:14+00:00</updated>
<author>
<name>Thierry Reding</name>
<email>treding@nvidia.com</email>
</author>
<published>2021-03-26T14:51:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=933deb8c7b8e3f83e3dbd0b08e3cad51350d44c4'/>
<id>933deb8c7b8e3f83e3dbd0b08e3cad51350d44c4</id>
<content type='text'>
These callbacks can be used by client drivers to run code during early
init and during late exit. Early init callbacks are run prior to the
regular init callbacks, while late exit callbacks run after the regular
exit callbacks.
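
A minimal sketch of the resulting ops table (the client driver and its
callbacks here are hypothetical; early_init and late_exit are the fields
this patch adds to struct host1x_client_ops):

  static const struct host1x_client_ops foo_client_ops = {
          /* runs before the regular init callbacks */
          .early_init = foo_early_init,
          .init = foo_init,
          .exit = foo_exit,
          /* runs after the regular exit callbacks */
          .late_exit = foo_late_exit,
  };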

Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Fix Tegra194 syncpt interrupt threshold</title>
<updated>2021-03-31T15:42:14+00:00</updated>
<author>
<name>Jon Hunter</name>
<email>jonathanh@nvidia.com</email>
</author>
<published>2021-03-29T13:38:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=d3555eb7f8c01b9c16d400af9533555757a2c264'/>
<id>d3555eb7f8c01b9c16d400af9533555757a2c264</id>
<content type='text'>
Syncpoint interrupts are not working as expected on Tegra194. The
problem is that the syncpoint interrupt threshold being used is the
global interrupt threshold and not the virtual interrupt threshold.
Fix this by using the virtual interrupt threshold, which aligns with
the downstream driver.
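
Illustratively (the register macros below are hypothetical; the actual
change corrects the register offset in the host1x07 per-VM register
definitions):

  /* before: threshold written through the global register view */
  host1x_sync_writel(host, thresh, SYNCPT_THRESH_GLOBAL(id));
  /* after: threshold written through the per-VM view, as downstream does */
  host1x_sync_writel(host, thresh, SYNCPT_THRESH_VM(id));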

Signed-off-by: Jon Hunter &lt;jonathanh@nvidia.com&gt;
Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Assign intr waiter inside lock</title>
<updated>2021-03-31T15:42:14+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=5a8d95d20c406c673258edd4c2bd308c22304657'/>
<id>5a8d95d20c406c673258edd4c2bd308c22304657</id>
<content type='text'>
Move the assignment of the ref out-pointer in host1x_intr_add_action
to happen within the spinlock. With the current arrangement,
it is possible for the waiter to complete before the assignment
has happened, which breaks horribly if the waiter completion
callback tries to use the reference.

In practice, there is currently no situation where this issue can
manifest -- it was first noticed with the upcoming DMA fence
implementation patches. As such, this doesn't need to be backported.
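
The shape of the fix, sketched (simplified from host1x_intr_add_action;
the real code inserts the waiter into the list in threshold order):

  spin_lock(&amp;syncpt-&gt;intr.lock);
  list_add_tail(&amp;waiter-&gt;list, &amp;syncpt-&gt;intr.wait_head);
  /* publish the out-pointer while still holding the lock so a racing
     completion cannot invoke the callback before *ref is valid */
  if (ref)
          *ref = waiter;
  spin_unlock(&amp;syncpt-&gt;intr.lock);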

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Reserve VBLANK syncpoints at initialization</title>
<updated>2021-03-31T15:42:13+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f5ba33fb9690566c382624637125827b5512e766'/>
<id>f5ba33fb9690566c382624637125827b5512e766</id>
<content type='text'>
On T20-T148 chips, the bootloader can set up a boot splash
screen with DC configured to increment syncpoints 26 and 27
at VBLANK. Because of this, we shouldn't allow these syncpoints
to be allocated until DC has been reset and will no longer
increment them in the background.

As such, on these chips, reserve those two syncpoints at
initialization, and only mark them free once the DC
driver has indicated it's safe to do so.
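
Condensed sketch of the mechanism (following this patch's changes to
syncpt.c; reserve_vblank_syncpts is the new host1x_info flag):

  /* at init, hold a reference so syncpoints 26/27 cannot be handed out */
  if (host-&gt;info-&gt;reserve_vblank_syncpts) {
          kref_init(&amp;host-&gt;syncpt[26].ref);
          kref_init(&amp;host-&gt;syncpt[27].ref);
  }

Once DC has been reset, the DC driver drops the reservation through
host1x_syncpt_release_vblank_reservation(), which puts that reference
and returns the syncpoint to the free pool.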

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Reset max value when freeing a syncpoint</title>
<updated>2021-03-31T15:42:13+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=aded42ada6eacfa11d349b158e993f66e4741aa7'/>
<id>aded42ada6eacfa11d349b158e993f66e4741aa7</id>
<content type='text'>
With job recovery becoming optional, syncpoints may have a mismatch
between their value and max value when freed. As such, when freeing,
set the max value to the current value of the syncpoint so that it
is in a sane state for the next user.
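
The fix is a one-liner in the syncpoint release path (sketched; sp is
the syncpoint being freed):

  /* fast-forward max_val to the current hardware value so the next
     user starts from a consistent state */
  atomic_set(&amp;sp-&gt;max_val, host1x_syncpt_read(sp));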

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Cleanup and refcounting for syncpoints</title>
<updated>2021-03-31T15:42:13+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=2aed4f5ab04af922a7cf1b616701845c9ed2473f'/>
<id>2aed4f5ab04af922a7cf1b616701845c9ed2473f</id>
<content type='text'>
Add reference counting for allocated syncpoints to allow keeping
them allocated while jobs are referencing them. Additionally,
clean up various places using syncpoint IDs to use host1x_syncpt
pointers instead.
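
Illustrative use of the refcounted API (host1x_syncpt_get() and
host1x_syncpt_put() are the helpers introduced here; the job variable
is hypothetical):

  /* keep the syncpoint alive for as long as the job references it */
  job-&gt;syncpt = host1x_syncpt_get(sp);

  /* when the job is freed, drop the reference */
  host1x_syncpt_put(job-&gt;syncpt);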

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Use HW-equivalent syncpoint expiration check</title>
<updated>2021-03-30T17:53:24+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f63b42cbc86e12f7d960d1fdaaf93b4373c06c65'/>
<id>f63b42cbc86e12f7d960d1fdaaf93b4373c06c65</id>
<content type='text'>
Make syncpoint expiration checks always use the same logic used by
the hardware. This ensures that there are no race conditions where
the hardware triggers a syncpoint interrupt but the driver then
disagrees that the syncpoint has expired.

One situation where this could occur is if a job incremented a
syncpoint too many times -- then the hardware would trigger an
interrupt, but the driver would assume that a syncpoint value
greater than the syncpoint's max value is in the future, and not
clean up the job.
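
The resulting check, sketched (wrap-safe over the 32-bit value space,
matching the hardware comparison; correct as long as fewer than 2^31
increments are outstanding):

  bool host1x_syncpt_is_expired(struct host1x_syncpt *sp, u32 thresh)
  {
          u32 current_val = (u32)atomic_read(&amp;sp-&gt;min_val);

          return ((current_val - thresh) &amp; 0x80000000U) == 0U;
  }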

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Remove cancelled waiters immediately</title>
<updated>2021-03-30T17:53:24+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ecfb888ade427e2da437b48cafd8fc824e80c909'/>
<id>ecfb888ade427e2da437b48cafd8fc824e80c909</id>
<content type='text'>
Before this patch, cancelled waiters would only be cleaned up
once their threshold value was reached. Make host1x_intr_put_ref
process the cancellation immediately to fix this.
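
Sketch of the change in host1x_intr_put_ref (simplified; waiter state
names per intr.h):

  spin_lock(&amp;syncpt-&gt;intr.lock);
  /* if the waiter was cancelled but is still queued, drop it now
     instead of leaving it listed until its threshold is reached */
  if (atomic_cmpxchg(&amp;waiter-&gt;state, WLS_CANCELLED, WLS_HANDLED) ==
      WLS_CANCELLED) {
          list_del(&amp;waiter-&gt;list);
          kref_put(&amp;waiter-&gt;refcount, waiter_release);
  }
  spin_unlock(&amp;syncpt-&gt;intr.lock);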

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Show number of pending waiters in debugfs</title>
<updated>2021-03-30T17:53:24+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=49a5fb1679952a76861bd2580f785e33e3de712c'/>
<id>49a5fb1679952a76861bd2580f785e33e3de712c</id>
<content type='text'>
Show the number of pending waiters in the debugfs status file.
This is useful for testing to verify that waiters do not leak
or accumulate incorrectly.
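
Sketch of the counting added to the debugfs show path (simplified; the
real code holds the syncpoint's intr lock while walking the list):

  list_for_each_entry(waiter, &amp;sp-&gt;intr.wait_head, list)
          waiters++;
  seq_printf(o, " # of waiters: %d\n", waiters);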

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Allow syncpoints without associated client</title>
<updated>2021-03-30T17:53:24+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=86cec7ece3e62517e2bc0fd796a8a8da4193e7e5'/>
<id>86cec7ece3e62517e2bc0fd796a8a8da4193e7e5</id>
<content type='text'>
Syncpoints don't need to be associated with any client, so remove
the client property and expose host1x_syncpt_alloc. This allows
allocating syncpoints without prior knowledge of the engine they
will be used with.
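
Illustrative allocation without a client (signature as exposed by this
patch; the host pointer and name are placeholders):

  struct host1x_syncpt *sp;

  sp = host1x_syncpt_alloc(host, HOST1X_SYNCPT_CLIENT_MANAGED, "my-syncpt");
  if (!sp)
          return -ENOMEM;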

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
</feed>
