<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/include/linux/blkdev.h, branch Colibri_T30_LinuxImageV2.1Beta2_20140206</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>block: initialize request_queue's numa node during</title>
<updated>2012-01-11T17:26:34+00:00</updated>
<author>
<name>Mike Snitzer</name>
<email>snitzer@redhat.com</email>
</author>
<published>2011-11-23T09:59:13+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=b5f50e1779db150d7c907f1ecf3caa7e31458787'/>
<id>b5f50e1779db150d7c907f1ecf3caa7e31458787</id>
<content type='text'>
commit 5151412dd4338b273afdb107c3772528e9e67d92 upstream.

struct request_queue is allocated with __GFP_ZERO so its "node" field is
zero before initialization.  This causes an oops if node 0 is offline in
the page allocator because its zonelists are not initialized.  From Dave
Young's dmesg:

	SRAT: Node 1 PXM 2 0-d0000000
	SRAT: Node 1 PXM 2 100000000-330000000
	SRAT: Node 0 PXM 1 330000000-630000000
	Initmem setup node 1 0000000000000000-000000000affb000
	...
	Built 1 zonelists in Node order, mobility grouping on.
	...
	BUG: unable to handle kernel paging request at 0000000000001c08
	IP: [&lt;ffffffff8111c355&gt;] __alloc_pages_nodemask+0xb5/0x870

and __alloc_pages_nodemask+0xb5 translates to a NULL pointer on
zonelist-&gt;_zonerefs.

The fix is to initialize q-&gt;node at the time of allocation so the correct
node is passed to the slab allocator later.
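
A minimal sketch of the shape of the fix in blk_alloc_queue_node()
(surrounding code abridged; the q-&gt;node assignment is the point):

	q = kmem_cache_alloc_node(blk_requestq_cachep,
				  gfp_mask | __GFP_ZERO, node_id);
	if (!q)
		return NULL;

	q-&gt;node = node_id;	/* was silently left at 0 by __GFP_ZERO */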

Since blk_init_allocated_queue_node() is no longer needed, merge it with
blk_init_allocated_queue().

[rientjes@google.com: changelog, initializing q-&gt;node]
Reported-by: Dave Young &lt;dyoung@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Signed-off-by: David Rientjes &lt;rientjes@google.com&gt;
Tested-by: Dave Young &lt;dyoung@redhat.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

Change-Id: I24b14588aef6226f3bcdf37e78af61cbe9a31fd2
Reviewed-on: http://git-master/r/74168
Reviewed-by: Varun Wadekar &lt;vwadekar@nvidia.com&gt;
Tested-by: Varun Wadekar &lt;vwadekar@nvidia.com&gt;
</content>
</entry>
<entry>
<title>block: simplify force plug flush code a little bit</title>
<updated>2011-08-24T14:04:34+00:00</updated>
<author>
<name>Shaohua Li</name>
<email>shaohua.li@intel.com</email>
</author>
<published>2011-08-24T14:04:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=56ebdaf2fa3c5276be201c5d1aff1490b682ecf2'/>
<id>56ebdaf2fa3c5276be201c5d1aff1490b682ecf2</id>
<content type='text'>
Clean up the code a little bit. attempt_plug_merge() traverses the plug
list anyway, so we can do the request counting there, which also reduces
stack usage a little.
The motivation is that I suspect we should count the requests for each
queue (a task could be handling multiple disks in the meantime), but my
tests don't show it's worth doing. If somebody proves we should do it, the
change below will make that easier.
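
As a rough sketch, the counting can ride along with the existing merge
scan in attempt_plug_merge() (surrounding details abridged):

	list_for_each_entry_reverse(rq, &amp;plug-&gt;list, queuelist) {
		(*request_count)++;	/* count while we scan anyway */

		if (rq-&gt;q != q)
			continue;
		...
	}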

Signed-off-by: Shaohua Li &lt;shli@kernel.org&gt;
Signed-off-by: Shaohua Li &lt;shaohua.li@intel.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: fix flush machinery for stacking drivers with differing flush flags</title>
<updated>2011-08-15T19:37:25+00:00</updated>
<author>
<name>Jeff Moyer</name>
<email>jmoyer@redhat.com</email>
</author>
<published>2011-08-15T19:37:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4853abaae7e4a2af938115ce9071ef8684fb7af4'/>
<id>4853abaae7e4a2af938115ce9071ef8684fb7af4</id>
<content type='text'>
Commit ae1b1539622fb46e51b4d13b3f9e5f4c713f86ae ("block: reimplement
FLUSH/FUA to support merge") introduced a performance regression when
running any sort of fsyncing workload using dm-multipath and certain
storage (in our case, an HP EVA).  The test I ran was fs_mark, and it
dropped from ~800 files/sec on ext4 to ~100 files/sec.  It turns out
that dm-multipath always advertised flush+fua support and passed
commands on down the stack, where those flags used to get stripped off.
The above commit changed that behavior:

static inline struct request *__elv_next_request(struct request_queue *q)
{
        struct request *rq;

        while (1) {
-               while (!list_empty(&amp;q-&gt;queue_head)) {
+               if (!list_empty(&amp;q-&gt;queue_head)) {
                        rq = list_entry_rq(q-&gt;queue_head.next);
-                       if (!(rq-&gt;cmd_flags &amp; (REQ_FLUSH | REQ_FUA)) ||
-                           (rq-&gt;cmd_flags &amp; REQ_FLUSH_SEQ))
-                               return rq;
-                       rq = blk_do_flush(q, rq);
-                       if (rq)
-                               return rq;
+                       return rq;
                }

Note that previously, a command would come in here, have
REQ_FLUSH|REQ_FUA set, and then get handed off to blk_do_flush:

struct request *blk_do_flush(struct request_queue *q, struct request *rq)
{
        unsigned int fflags = q-&gt;flush_flags; /* may change, cache it */
        bool has_flush = fflags &amp; REQ_FLUSH, has_fua = fflags &amp; REQ_FUA;
        bool do_preflush = has_flush &amp;&amp; (rq-&gt;cmd_flags &amp; REQ_FLUSH);
        bool do_postflush = has_flush &amp;&amp; !has_fua &amp;&amp;
                            (rq-&gt;cmd_flags &amp; REQ_FUA);
        unsigned skip = 0;
...
        if (blk_rq_sectors(rq) &amp;&amp; !do_preflush &amp;&amp; !do_postflush) {
                rq-&gt;cmd_flags &amp;= ~REQ_FLUSH;
		if (!has_fua)
			rq-&gt;cmd_flags &amp;= ~REQ_FUA;
	        return rq;
	}

So, the flush machinery was bypassed in such cases (q-&gt;flush_flags == 0
&amp;&amp; rq-&gt;cmd_flags &amp; (REQ_FLUSH|REQ_FUA)).

Now, however, we don't get into the flush machinery at all.  Instead,
__elv_next_request just hands a request with flush and fua bits set to
the scsi_request_fn, even if the underlying request_queue does not
support flush or fua.

The agreed-upon approach is to fix the flush machinery to allow
stacking.  While this isn't used in practice (since there is only one
request-based dm target, and that target will now reflect the flush
flags of the underlying device), it future-proofs the solution and
makes it function as designed.
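
Roughly, blk_insert_flush() now consults the queue's own flush_flags and
completes a flush that degenerates into a no-op; a sketch of the idea
(abridged from the patch, details may differ):

	unsigned int fflags = q-&gt;flush_flags;	/* may change, cache it */
	unsigned int policy = blk_flush_policy(fflags, rq);

	/* strip off the bits the underlying queue cannot honour */
	rq-&gt;cmd_flags &amp;= ~REQ_FLUSH;
	if (!(fflags &amp; REQ_FUA))
		rq-&gt;cmd_flags &amp;= ~REQ_FUA;

	/* an empty flush on a queue with no cache is a no-op: complete it */
	if (!policy) {
		__blk_end_bidi_request(rq, 0, 0, 0);
		return;
	}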

In order to make this work, I had to add a field to the struct request,
inside the flush structure (to store the original req-&gt;end_io).  Shaohua
had suggested overloading the union with rb_node and completion_data,
but the completion data is used by device mapper and can also be used by
other drivers.  So, I didn't see a way around the additional field.
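
For reference, a sketch of the addition inside struct request (field
names as in the patch; exact placement in the structure may differ):

	struct {
		unsigned int		seq;
		struct list_head	list;
		rq_end_io_fn		*saved_end_io;	/* new: original rq-&gt;end_io */
	} flush;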

I tested this patch on an HP EVA with both ext4 and xfs, and it recovers
the lost performance.  Comments and other testers, as always, are
appreciated.

Cheers,
Jeff

Signed-off-by: Jeff Moyer &lt;jmoyer@redhat.com&gt;
Acked-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: add bsg helper library</title>
<updated>2011-07-31T20:05:09+00:00</updated>
<author>
<name>Mike Christie</name>
<email>michaelc@cs.wisc.edu</email>
</author>
<published>2011-07-31T20:05:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=aa387cc895672b00f807ad7c734a2defaf677712'/>
<id>aa387cc895672b00f807ad7c734a2defaf677712</id>
<content type='text'>
This moves the FC class's bsg code to the block layer and
makes it a library so that other classes like iSCSI and SAS can use it.

It is helpful because working with the request queue, bios,
creating scatterlists, etc. is a pain that the LLD does not
have to worry about with normal I/Os and should not have to
worry about for bsg requests.

Signed-off-by: Mike Christie &lt;michaelc@cs.wisc.edu&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: strict rq_affinity</title>
<updated>2011-07-23T18:44:25+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2011-07-23T18:44:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=5757a6d76cdf6dda2a492c09b985c015e86779b1'/>
<id>5757a6d76cdf6dda2a492c09b985c015e86779b1</id>
<content type='text'>
Some systems benefit from completions always being steered to the strict
requester CPU rather than the looser "per-socket" steering that
blk_cpu_to_group() attempts by default. This is because the first
CPU in the group mask ends up being completely overloaded with work,
while the others (including the original submitter) have power left
to spare.

Allow the strict mode to be set by writing '2' to the sysfs control
file. This is identical to the scheme used for the nomerges file,
where '2' is a more aggressive setting than just being turned on.

echo 2 &gt; /sys/block/&lt;bdev&gt;/queue/rq_affinity
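
Internally the '2' setting maps onto a queue flag; a sketch of the
completion-side decision (flag and helper names assumed from this era
of the code):

	ccpu = req-&gt;cpu;
	if (!test_bit(QUEUE_FLAG_SAME_FORCE, &amp;q-&gt;queue_flags)) {
		/* default: per-socket group steering is good enough */
		ccpu = blk_cpu_to_group(ccpu);
		group_cpu = blk_cpu_to_group(cpu);
	}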

Cc: Christoph Hellwig &lt;hch@infradead.org&gt;
Cc: Roland Dreier &lt;roland@purestorage.com&gt;
Tested-by: Dave Jiang &lt;dave.jiang@intel.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: reorder request_queue to remove 64 bit alignment padding</title>
<updated>2011-07-13T19:17:49+00:00</updated>
<author>
<name>Richard Kennedy</name>
<email>richard@rsk.demon.co.uk</email>
</author>
<published>2011-07-13T19:17:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=d7b7630130e52361af66ce3b994696e2357ba7de'/>
<id>d7b7630130e52361af66ce3b994696e2357ba7de</id>
<content type='text'>
Reorder request_queue to remove 16 bytes of alignment padding in 64 bit
builds.

On my config this shrinks the structure from 1608 to 1592 bytes, so it
needs one fewer cacheline.
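
For illustration only (these are not the actual request_queue fields),
reordering removes holes like this on 64-bit:

	struct before {
		unsigned int	a;	/* 4 bytes + 4 bytes padding */
		void		*p;	/* 8 bytes */
		unsigned int	b;	/* 4 bytes + 4 bytes padding */
	};				/* sizeof == 24 */

	struct after {
		void		*p;	/* 8 bytes */
		unsigned int	a;	/* 4 bytes */
		unsigned int	b;	/* 4 bytes, fills the hole */
	};				/* sizeof == 16 */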

Also trivially move the open bracket { to be on the same line as the
structure name to make it easier to grep.

Signed-off-by: Richard Kennedy &lt;richard@rsk.demon.co.uk&gt;
Acked-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: document blk_plug list access</title>
<updated>2011-07-08T06:19:21+00:00</updated>
<author>
<name>Shaohua Li</name>
<email>shaohua.li@intel.com</email>
</author>
<published>2011-07-08T06:19:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=316cc67d5e03801a5ee4ac660a4dfe9e02aed475'/>
<id>316cc67d5e03801a5ee4ac660a4dfe9e02aed475</id>
<content type='text'>
I'm often confused about why we don't disable preemption when changing the
blk_plug list. It would be better to add comments here in case others have
similar concerns.
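
A sketch of the pattern the comments describe, from blk_flush_plug_list()
(abridged):

	LIST_HEAD(list);

	/*
	 * The plug list is only ever touched by its owning task, so no
	 * preempt protection is needed; splice onto a local list before
	 * dispatching so a later preemption cannot see a half-flushed plug.
	 */
	list_splice_init(&amp;plug-&gt;list, &amp;list);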

Signed-off-by: Shaohua Li &lt;shaohua.li@intel.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: avoid building too big plug list</title>
<updated>2011-07-08T06:19:20+00:00</updated>
<author>
<name>Shaohua Li</name>
<email>shaohua.li@intel.com</email>
</author>
<published>2011-07-08T06:19:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=55c022bbddb2c056b5dff1bd1b1758d31b6d64c9'/>
<id>55c022bbddb2c056b5dff1bd1b1758d31b6d64c9</id>
<content type='text'>
When I tested an fio script with a big I/O depth, I found the total throughput
drops compared to a relatively small I/O depth. The reason is that the thread
accumulates big requests in its plug list, which causes some delays (surely
this depends on CPU speed).
I thought we'd better have a threshold for requests. When the threshold is
reached, it means there is no request merging and queue lock contention isn't
severe when pushing per-task requests to the queue, so the main advantages of
blk plug don't exist. We can force a plug list flush in this case.
With this, my test throughput actually increases and almost equals that of a
small I/O depth. Another side effect is that IRQ-off time decreases in
blk_flush_plug_list() for a big I/O depth.
BLK_MAX_REQUEST_COUNT is chosen arbitrarily, but 16 reduces lock contention
efficiently for me. I'm open here; 32 is OK in my test too.
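
A sketch of the forced flush (this patch adds a running count to the
on-stack plug; constant and call as described above):

#define BLK_MAX_REQUEST_COUNT 16

	if (plug-&gt;count &gt;= BLK_MAX_REQUEST_COUNT)
		blk_flush_plug_list(plug, false);
	list_add_tail(&amp;req-&gt;queuelist, &amp;plug-&gt;list);
	plug-&gt;count++;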

Signed-off-by: Shaohua Li &lt;shaohua.li@intel.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'for-linus' into for-3.1/core</title>
<updated>2011-07-01T14:17:13+00:00</updated>
<author>
<name>Jens Axboe</name>
<email>jaxboe@fusionio.com</email>
</author>
<published>2011-07-01T14:17:13+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=04bf7869ca0fd12009aee301cac2264a36df4d98'/>
<id>04bf7869ca0fd12009aee301cac2264a36df4d98</id>
<content type='text'>
Conflicts:
	block/blk-throttle.c
	block/cfq-iosched.c

Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: fix the comment error in blkdev.h</title>
<updated>2011-06-13T08:45:38+00:00</updated>
<author>
<name>Wanlong Gao</name>
<email>wanlong.gao@gmail.com</email>
</author>
<published>2011-06-13T08:45:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4d0d98b60eba726e0a4f3e6617628b070c444707'/>
<id>4d0d98b60eba726e0a4f3e6617628b070c444707</id>
<content type='text'>
There is no function rq_init in block/blk-core.c; the function is blk_rq_init.

Signed-off-by: Wanlong Gao &lt;wanlong.gao@gmail.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
</feed>
