<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/include/linux/blkdev.h, branch v2.6.26-rc5</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>Improve queue_is_locked()</title>
<updated>2008-04-29T19:36:54+00:00</updated>
<author>
<name>Jens Axboe</name>
<email>jens.axboe@oracle.com</email>
</author>
<published>2008-04-29T19:31:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=7663c1e2792a9662b23dec6e19bfcd3d55360b8f'/>
<id>7663c1e2792a9662b23dec6e19bfcd3d55360b8f</id>
<content type='text'>
spin_is_locked() doesn't work on UP without spinlock debugging. Make it
safer and just return 1 on UP, so we don't get false positives. The plan
is to kill this debug function during the -rc cycle.
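
Roughly, the resulting helper looks like this (a sketch, assuming the
2.6.26-era queue_is_locked() in include/linux/blkdev.h; the exact body
may differ):

        static inline int queue_is_locked(struct request_queue *q)
        {
        #ifdef CONFIG_SMP
                spinlock_t *lock = q-&gt;queue_lock;

                return lock &amp;&amp; spin_is_locked(lock);
        #else
                return 1;       /* spin_is_locked() is unreliable on UP */
        #endif
        }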

Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>block: fix queue locking verification</title>
<updated>2008-04-29T17:16:38+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2008-04-29T17:16:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8f45c1a58a25c3a1a2f42521445e1e786c4c0b92'/>
<id>8f45c1a58a25c3a1a2f42521445e1e786c4c0b92</id>
<content type='text'>
The new queue_flag_set/clear() functions verify that the queue is
locked, but in doing so they will actually oops if the queue lock
hasn't been initialized at all.

So fix the lock debug test to consider the "no lock" case to be
unlocked.  This way you get a nice WARN_ON_ONCE() instead of a fatal
oops.

Bug introduced by commit 75ad23bc0fcb4f992a5d06982bf0857ab1738e9e
("block: make queue flags non-atomic").

Cc: Jens Axboe &lt;jens.axboe@oracle.com&gt;
Cc: Nick Piggin &lt;npiggin@suse.de&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>block: Skip I/O merges when disabled</title>
<updated>2008-04-29T12:48:55+00:00</updated>
<author>
<name>Alan D. Brunelle</name>
<email>Alan.Brunelle@hp.com</email>
</author>
<published>2008-04-29T12:44:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ac9fafa1243640349aa481adf473db283a695766'/>
<id>ac9fafa1243640349aa481adf473db283a695766</id>
<content type='text'>
The block I/O + elevator + I/O scheduler code spends a lot of time
trying to merge I/Os -- rightfully so under "normal" circumstances.
However, if one knows that the incoming I/O stream is /very/ random in
nature, those cycles are wasted.

This patch adds a per-request_queue tunable that (when set) disables
merge attempts (beyond the simple one-hit cache check), thus freeing up
a non-trivial number of CPU cycles.
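
Roughly, the tunable can be wired up like this (a sketch; the flag
value and helper name are assumptions modeled on the existing
queue-flag helpers):

        #define QUEUE_FLAG_NOMERGES     10      /* disable merge attempts */

        #define blk_queue_nomerges(q)   \
                test_bit(QUEUE_FLAG_NOMERGES, &amp;(q)-&gt;queue_flags)

        /* early in elv_merge(): skip all merge lookups when disabled */
        if (blk_queue_nomerges(q))
                return ELEVATOR_NO_MERGE;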

Signed-off-by: Alan D. Brunelle &lt;alan.brunelle@hp.com&gt;
Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
</content>
</entry>
<entry>
<title>block: add large command support</title>
<updated>2008-04-29T12:48:55+00:00</updated>
<author>
<name>FUJITA Tomonori</name>
<email>fujita.tomonori@lab.ntt.co.jp</email>
</author>
<published>2008-04-29T07:54:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=d7e3c3249ef23b4617393c69fe464765b4ff1645'/>
<id>d7e3c3249ef23b4617393c69fe464765b4ff1645</id>
<content type='text'>
This patch changes rq-&gt;cmd from a static array to a pointer to
support large commands.

We rarely handle large commands. So, as an optimization, a struct
request still has a static array for a command, and rq_init sets the
rq-&gt;cmd pointer to that static array.
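
A sketch of the resulting layout (field names follow the description
above; the embedded __cmd array is an assumption consistent with it):

        struct request {
                ...
                unsigned char *cmd;     /* points at __cmd by default */
                ...
                unsigned char __cmd[BLK_MAX_CDB]; /* the common, small case */
        };

        /* in rq_init(): */
        rq-&gt;cmd = rq-&gt;__cmd;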

Signed-off-by: FUJITA Tomonori &lt;fujita.tomonori@lab.ntt.co.jp&gt;
Cc: Jens Axboe &lt;jens.axboe@oracle.com&gt;
Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
</content>
</entry>
<entry>
<title>block: rename and export rq_init()</title>
<updated>2008-04-29T12:48:55+00:00</updated>
<author>
<name>FUJITA Tomonori</name>
<email>fujita.tomonori@lab.ntt.co.jp</email>
</author>
<published>2008-04-29T07:54:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=2a4aa30c5f967eb6ae874c67fa6fceeee84815f9'/>
<id>2a4aa30c5f967eb6ae874c67fa6fceeee84815f9</id>
<content type='text'>
This renames rq_init() to blk_rq_init() and exports it. Any path that
hands a request to the block layer needs to call it to initialize the
request.

This is a preparation for large command support, which needs to
initialize the request in a proper way (that is, just doing a memset()
will not work).
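
A sketch of the exported initializer (the full body sets up more
fields than shown here):

        void blk_rq_init(struct request_queue *q, struct request *rq)
        {
                memset(rq, 0, sizeof(*rq));

                INIT_LIST_HEAD(&amp;rq-&gt;queuelist);
                rq-&gt;q = q;
                /* ... remaining fields get their non-zero defaults ... */
        }
        EXPORT_SYMBOL(blk_rq_init);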

Signed-off-by: FUJITA Tomonori &lt;fujita.tomonori@lab.ntt.co.jp&gt;
Cc: Jens Axboe &lt;jens.axboe@oracle.com&gt;
Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
</content>
</entry>
<entry>
<title>block: make queue flags non-atomic</title>
<updated>2008-04-29T12:48:33+00:00</updated>
<author>
<name>Nick Piggin</name>
<email>npiggin@suse.de</email>
</author>
<published>2008-04-29T12:48:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=75ad23bc0fcb4f992a5d06982bf0857ab1738e9e'/>
<id>75ad23bc0fcb4f992a5d06982bf0857ab1738e9e</id>
<content type='text'>
We can save some atomic ops in the IO path if we clearly define the
rules for how to modify the queue flags.
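
The rule: flags are only modified with the queue lock held, so the
non-atomic __set_bit/__clear_bit variants suffice. A sketch of the
helpers this implies (shape assumed from the description):

        static inline void queue_flag_set(unsigned int flag,
                                          struct request_queue *q)
        {
                WARN_ON_ONCE(!queue_is_locked(q));
                __set_bit(flag, &amp;q-&gt;queue_flags); /* non-atomic; lock held */
        }

        static inline void queue_flag_clear(unsigned int flag,
                                            struct request_queue *q)
        {
                WARN_ON_ONCE(!queue_is_locked(q));
                __clear_bit(flag, &amp;q-&gt;queue_flags);
        }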

Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
</content>
</entry>
<entry>
<title>block: fix memory hotplug and bouncing in block layer</title>
<updated>2008-04-21T07:51:05+00:00</updated>
<author>
<name>Andi Kleen</name>
<email>andi@firstfloor.org</email>
</author>
<published>2008-04-21T07:51:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=2472892a3ce17b177cc0d8099a6391949c75abf2'/>
<id>2472892a3ce17b177cc0d8099a6391949c75abf2</id>
<content type='text'>
Only noticed this while hacking something else, no test case.

blk_max_low_pfn is initialized once at bootup by the block layer from
max_low_pfn.  But max_low_pfn is not necessarily constant over the
runtime of the system when you consider memory hotplug.  What could
happen is that if someone adds memory later, the block layer wouldn't
get updated and would then start bouncing memory unnecessarily.

Also, on 64bit blk_max_low_pfn actually isn't needed because it
essentially just disables bouncing and there is no highmem.  And nobody
can pass pfns &gt; max_low_pfn to the block layer, because those wouldn't
have a struct page, and I suspect the block layer wouldn't be very
happy without that.

So set BLK_BOUNCE_HIGH to infinity (-1ULL) on 64bit.  That avoids the problem
of having to update it on memory hotadd.

On 32bit I kept the same behaviour because at least on i386
memory hotadd only adds HIGHMEM, never lowmem.

BLK_BOUNCE_ANY is always set to infinity on both 32 and 64bit.
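
Roughly (a sketch of the resulting definitions; the exact preprocessor
conditionals may differ):

        #if BITS_PER_LONG == 64
        #define BLK_BOUNCE_HIGH         -1ULL   /* no highmem: never bounce */
        #else
        #define BLK_BOUNCE_HIGH         ((u64)blk_max_low_pfn &lt;&lt; PAGE_SHIFT)
        #endif
        #define BLK_BOUNCE_ANY          (-1ULL)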

Signed-off-by: Andi Kleen &lt;ak@suse.de&gt;
Cc: Jens Axboe &lt;jens.axboe@oracle.com&gt;
Acked-by: Yasunori Goto &lt;y-goto@jp.fujitsu.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
</content>
</entry>
<entry>
<title>block: move the padding adjustment to blk_rq_map_sg</title>
<updated>2008-04-21T07:50:08+00:00</updated>
<author>
<name>FUJITA Tomonori</name>
<email>fujita.tomonori@lab.ntt.co.jp</email>
</author>
<published>2008-04-11T10:56:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f18573abcc57844a7c3c12699d40eead8728cd8a'/>
<id>f18573abcc57844a7c3c12699d40eead8728cd8a</id>
<content type='text'>
blk_rq_map_user adjusts bi_size of the last bio. It breaks the rule
that req-&gt;data_len (the true data length) is equal to sum(bio). It
broke the scsi command completion code.

Commit e97a294ef6938512b655b1abf17656cf2b26f709 was introduced to fix
the above issue. However, the partial completion code doesn't work
with it. The commit is also a layer violation (the scsi mid-layer
should not know about the block layer's padding).

This patch moves the padding adjustment to blk_rq_map_sg (suggested by
James). The padding works like the drain buffer. This patch breaks the
rule that req-&gt;data_len is equal to sum(sg); however, the drain buffer
already broke it. So this patch just restores the rule that
req-&gt;data_len is equal to sum(bio) without breaking anything new.

Now when a low level driver needs padding, blk_rq_map_user and
blk_rq_map_user_iov guarantee there's enough room for padding.
blk_rq_map_sg can safely extend the last entry of a scatter list.

blk_rq_map_sg must extend the last entry of a scatter list only for a
request that went through bio_copy_user_iov. This patch introduces a
new REQ_COPY_USER flag.
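
A sketch of the adjustment at the end of blk_rq_map_sg (assuming the
dma_pad_mask and extra_len fields from the earlier padding patches):

        /* at the end of blk_rq_map_sg(): */
        if (unlikely(rq-&gt;cmd_flags &amp; REQ_COPY_USER) &amp;&amp;
            (rq-&gt;data_len &amp; q-&gt;dma_pad_mask)) {
                unsigned int pad_len =
                        (q-&gt;dma_pad_mask &amp; ~rq-&gt;data_len) + 1;

                sg-&gt;length += pad_len;    /* grow the last sg entry */
                rq-&gt;extra_len += pad_len; /* data_len keeps the true length */
        }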

Signed-off-by: FUJITA Tomonori &lt;fujita.tomonori@lab.ntt.co.jp&gt;
Cc: Tejun Heo &lt;htejun@gmail.com&gt;
Cc: Mike Christie &lt;michaelc@cs.wisc.edu&gt;
Cc: James Bottomley &lt;James.Bottomley@HansenPartnership.com&gt;
Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
</content>
</entry>
<entry>
<title>block: separate out padding from alignment</title>
<updated>2008-03-04T10:18:17+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>htejun@gmail.com</email>
</author>
<published>2008-03-04T10:18:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=e3790c7d42a545e8fe8b38b513613ca96687b670'/>
<id>e3790c7d42a545e8fe8b38b513613ca96687b670</id>
<content type='text'>
Block layer alignment was used for two different purposes - memory
alignment and padding.  This causes problems in lower layers because
drivers which only require memory alignment end up with an adjusted
rq-&gt;data_len.  Separate out padding such that padding occurs iff the
driver explicitly requests it.

Tomo: restore the code to update bio in blk_rq_map_user
      introduced by the commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa
      according to padding alignment.
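
A sketch of the separated interface: a driver that needs padding now
asks for it explicitly, while blk_queue_dma_alignment() keeps meaning
memory alignment only (the setter name follows the commit's wording):

        void blk_queue_dma_pad(struct request_queue *q, unsigned int mask)
        {
                q-&gt;dma_pad_mask = mask;
        }
        EXPORT_SYMBOL(blk_queue_dma_pad);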

Signed-off-by: Tejun Heo &lt;htejun@gmail.com&gt;
Signed-off-by: FUJITA Tomonori &lt;fujita.tomonori@lab.ntt.co.jp&gt;
Signed-off-by: Jens Axboe &lt;jens.axboe@oracle.com&gt;
</content>
</entry>
<entry>
<title>block: restore the meaning of rq-&gt;data_len to the true data length</title>
<updated>2008-03-04T10:17:11+00:00</updated>
<author>
<name>FUJITA Tomonori</name>
<email>fujita.tomonori@lab.ntt.co.jp</email>
</author>
<published>2008-03-04T10:17:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=7a85f8896f4b4a4a0249563b92af9e3161a6b467'/>
<id>7a85f8896f4b4a4a0249563b92af9e3161a6b467</id>
<content type='text'>
The meaning of rq-&gt;data_len was changed from the true data length to
the length of the allocated buffer. That breaks SG_IO and friends, and
bsg. This patch restores the meaning of rq-&gt;data_len to the true data
length and adds rq-&gt;extra_len to store the extended length (due to the
drain buffer and padding).

This patch also removes the code to update the bio in blk_rq_map_user
introduced by commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa. That
commit adjusts the bio according to memory alignment
(queue_dma_alignment). However, memory alignment is NOT padding
alignment. This adjustment also breaks SG_IO and friends, and bsg.
Padding alignment needs to be fixed in a proper way (by a separate
patch).
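
Roughly (a sketch; field placement is assumed from the description):

        struct request {
                ...
                unsigned int data_len;   /* true data length: sum(bio) */
                unsigned int extra_len;  /* drain buffer + padding on top */
                ...
        };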

Signed-off-by: FUJITA Tomonori &lt;fujita.tomonori@lab.ntt.co.jp&gt;
Signed-off-by: Jens Axboe &lt;axboe@carl.home.kernel.dk&gt;
</content>
</content>
</entry>
</feed>
