|
Pull UBI update from Artem Bityutskiy:
"Nothing exciting, just clean-ups and nicification. Oh, and one small
optimization which makes UBI to use less RAM."
* tag 'upstream-3.8-rc1' of git://git.infradead.org/linux-ubi:
UBI: embed ubi_debug_info field in ubi_device struct
UBI: introduce helpers dbg_chk_{io, gen}
UBI: replace memcpy with struct assignment
UBI: remove spurious comment
UBI: gluebi: rename misleading variables
UBI: do not allocate the memory unnecessarily
UBI: use list_move_tail instead of list_del/list_add_tail
|
|
The ubi_debug_info struct was dynamically allocated, which is
always suboptimal because it tends to fragment memory and makes
the code error-prone.
Fix this by embedding it in the ubi_device struct.
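A minimal sketch of the change (struct names and fields are illustrative;
see debug.h for the real definitions):

struct ubi_debug_info {
	unsigned int chk_gen:1;		/* general self-checks enabled */
	unsigned int chk_io:1;		/* I/O self-checks enabled */
};

/* Before (illustrative): one extra allocation per UBI device */
struct ubi_device_before {
	struct ubi_debug_info *dbg;	/* kzalloc()'ed and kfree()'d separately */
};

/* After: the debug info lives inside the device object itself */
struct ubi_device_after {
	struct ubi_debug_info dbg;	/* allocated and freed with the device,
					   no failure path, no NULL checks */
};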
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
With this patch the code is a bit more readable, and there is no
impact on the generated code or functionality.
Furthermore, this abstracts implementation details and
will allow changing ubi_debug_info in a less invasive way.
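A sketch of what such helpers look like, assuming the embedded 'dbg'
field from the previous patch:

static inline int ubi_dbg_chk_io(const struct ubi_device *ubi)
{
	return ubi->dbg.chk_io;
}

static inline int ubi_dbg_chk_gen(const struct ubi_device *ubi)
{
	return ubi->dbg.chk_gen;
}

/* Call sites then test ubi_dbg_chk_gen(ubi) instead of poking
 * ubi->dbg.chk_gen directly. */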
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
As ubi_self_check_all_ff() might sleep, we are not allowed
to call it from atomic context.
For now we call it only from ubi_wl_get_peb().
There are some code paths where it would also make sense,
but these paths are currently atomic and only enabled
when fastmap is used.
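A simplified sketch (not the in-tree implementation) of why the check
may sleep - both the buffer allocation and the MTD read can block:

static int check_all_ff_sketch(struct ubi_device *ubi, int pnum,
			       int offset, int len)
{
	size_t read;
	int err, i;
	u8 *buf = vmalloc(len);			/* may sleep */

	if (!buf)
		return -ENOMEM;

	err = mtd_read(ubi->mtd, (loff_t)pnum * ubi->peb_size + offset,
		       len, &read, buf);	/* may sleep on flash I/O */
	if (!err || mtd_is_bitflip(err)) {
		err = 0;
		for (i = 0; i < len; i++)
			if (buf[i] != 0xFF) {	/* region is not erased */
				err = -EINVAL;
				break;
			}
	}
	vfree(buf);
	return err;
}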
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
If UBI is built without fastmap, get_peb_for_wl() has to
remove the PEB manually from the free tree.
Otherwise the requested PEB lives in two trees at once.
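A sketch of the fixed non-fastmap path, assuming the helper names and
signatures used in wl.c:

static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi)
{
	struct ubi_wl_entry *e;

	e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF);
	self_check_in_wl_tree(ubi, e, &ubi->free);
	rb_erase(&e->u.rb, &ubi->free);	/* the missing step: take the
					   PEB off the free tree */
	return e;
}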
Reported-by: Zach Sadecki <zsadecki@itwatchdogs.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
This kind of memcpy() is error-prone. Its replacement with a struct
assignment is preferred because it's type-safe and much easier to read.
Found by coccinelle. Hand patched and reviewed.
Tested by compilation only.
A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
identifier struct_name;
struct struct_name to;
struct struct_name from;
expression E;
@@
-memcpy(&(to), &(from), E);
+to = from;
// </smpl>
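Applied to a hypothetical UBI example, the transformation looks like this:

void example(struct ubi_volume_info from)
{
	struct ubi_volume_info to;

	/* before: size and types are unchecked by the compiler */
	memcpy(&to, &from, sizeof(to));

	/* after: the compiler verifies both sides have the same type */
	to = from;
}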
Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
This line of comment looks completely bogus.
It was introduced in:
commit d99383b00eba9c6ac3dea462d718b2849012ee03
Author: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Date: Wed May 18 14:47:34 2011 +0300
UBI: change the interface of a debugging check function
Remove it.
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Both names 'total_read' and 'total_written' are actually used
to hold the number of bytes left to read or write.
Fix this confusion by renaming both to 'bytes_left'.
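A simplified sketch of the read loop after the rename (not verbatim
gluebi.c code):

static int gluebi_read_sketch(struct gluebi_device *gluebi, int lnum, int offs,
			      size_t len, unsigned char *buf)
{
	struct mtd_info *mtd = &gluebi->mtd;
	int err = 0, bytes_left = len;	/* what the old code called 'total_read' */

	while (bytes_left) {
		size_t to_read = mtd->erasesize - offs;

		if (to_read > bytes_left)
			to_read = bytes_left;

		err = ubi_read(gluebi->desc, lnum, (char *)buf, offs, to_read);
		if (err)
			break;

		lnum += 1;		/* continue in the next LEB */
		offs = 0;
		bytes_left -= to_read;
		buf += to_read;
	}
	return err;
}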
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
UBI reserves an LEB sized buffer for various needs. We can use this buffer
while scanning, instead of allocating another one. This patch was originally
created by Jan Luebbe <jlu@pengutronix.de>, but then he dropped it and I picked
it up and tweaked it a little.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Use list_move_tail() instead of list_del() + list_add_tail().
The dpatch engine was used to auto-generate this patch.
(https://github.com/weiyj/dpatch)
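For example (entry and list names illustrative):

/* Before: two list operations */
list_del(&e->u.list);
list_add_tail(&e->u.list, &used);

/* After: one equivalent call from <linux/list.h> */
list_move_tail(&e->u.list, &used);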
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Pull UBI fastmap changes from Artem Bityutskiy:
"This pull request contains the UBI fastmap support implemented by
Richard Weinberger from Linutronix. Fastmap is designed to address
UBI's slow scanning issues. Namely, it introduces a new on-flash
data-structure called "fastmap", which stores the information about
logical<->physical eraseblock mappings. So now, to get this
information, UBI just reads the fastmap instead of doing a full scan. More
information can be found in Richard's announcement on LKML
(Subject: UBI: Fastmap request for inclusion (v19)):
http://thread.gmane.org/gmane.linux.kernel/1364922/focus=1369109
One thing I want to explicitly say is that fastmap did not have large
enough linux-next exposure. It is partially my fault - I did not
respond quickly enough. I _really_ apologize for this. But it had
good testing and is disabled by default, so I do not expect that we'll
break anything.
Fastmap is declared as experimental so far, and it is off by default.
We did declare that the on-flash format may be changed. The reason
for this is that no one has used it in real production yet, so there is
a high risk that something is missing. Besides, we do not have
user-space tools supporting fastmap so far.
Nevertheless, I suggest we merge this feature. Many people want UBI's
scanning bottleneck to be fixed and merging fastmap now should
accelerate its production use. The plan is to make it bullet-proof,
clean it up somewhat, and make it the default for UBI. I do not know
how many kernel releases that will take.
Basically, what I want to do for fastmap is something like what Linus
did for btrfs a few years ago."
* tag 'upstream-3.7-rc1-fastmap' of git://git.infradead.org/linux-ubi:
UBI: Wire-up fastmap
UBI: Add fastmap core
UBI: Add fastmap support to the WL sub-system
UBI: Add fastmap stuff to attach.c
UBI: Wire-up ->fm_sem
UBI: Add fastmap bits to build.c
UBI: Add self_check_eba()
UBI: Export next_sqnum()
UBI: Add fastmap stuff to ubi.h
UBI: Add fastmap on-flash data structures
|
|
Make fastmap known to Kconfig, UBI Makefile and MAINTAINERS.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
To make fastmap possible, the WL sub-system needs some
changes, mostly to support fastmap's pools.
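The central addition is a pool of pre-selected free PEBs that the WL code
can hand out cheaply; a rough sketch of such a pool structure (illustrative,
the real definition lives in ubi.h):

struct ubi_fm_pool {
	int pebs[UBI_FM_MAX_POOL_SIZE];	/* pre-selected free PEB numbers */
	int used;			/* next unused slot in 'pebs' */
	int size;			/* current fill level */
	int max_size;			/* capacity of the pool */
};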
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
- Export compare_lebs() as fastmap needs this function.
- Implement fastmap scan logic.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Fastmap uses ->fm_sem to stop EBA changes while writing
a new fastmap.
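A sketch of the locking pattern; ubi_write_fastmap() is an assumed
helper name:

/* EBA side: mapping updates take the semaphore in read mode, so they
 * can run concurrently with each other */
down_read(&ubi->fm_sem);
vol->eba_tbl[lnum] = new_pnum;
up_read(&ubi->fm_sem);

/* Fastmap side: write mode blocks all EBA changes while a consistent
 * fastmap is written out */
down_write(&ubi->fm_sem);
err = ubi_write_fastmap(ubi, new_fm);	/* assumed helper name */
up_write(&ubi->fm_sem);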
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
self_check_eba() compares two ubi_attach_info objects.
Fastmap uses this function for self checks.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Fastmap needs next_sqnum(), so rename it to ubi_next_sqnum()
and make it non-static.
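The function itself is small; a sketch of what is exported (locking as
in eba.c, treat the details as illustrative):

unsigned long long ubi_next_sqnum(struct ubi_device *ubi)
{
	unsigned long long sqnum;

	spin_lock(&ubi->ltree_lock);
	sqnum = ubi->global_sqnum++;	/* monotonically increasing */
	spin_unlock(&ubi->ltree_lock);

	return sqnum;
}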
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
This patch adds fastmap-specific data structures to ubi.h.
It also moves struct ubi_work to ubi.h, as it is now needed
by more than one .c file.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Add the on-flash data structures needed by fastmap
to ubi-media.h.
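All UBI on-flash structures follow the same pattern: fixed-width
big-endian fields, explicit padding, and __packed, so the layout is
identical on every architecture. A hypothetical structure in that style
(not the actual fastmap layout):

struct ubi_fm_example_hdr {
	__be32 magic;		/* identifies the structure on flash */
	__u8   version;		/* on-flash format version */
	__u8   padding[3];	/* explicit padding, not compiler-chosen */
	__be32 data_crc;	/* CRC32 over the payload that follows */
	__be64 sqnum;		/* global sequence number */
} __packed;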
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Pull UBI changes from Artem Bityutskiy:
"The main change is the way we reserve eraseblocks for bad blocks
handling. We used to reserve 2% of the partition, but now we are more
aggressive and we reserve 2% of the entire chip, which is what
manufacturers actually specify in data sheets. We introduced an
option to users to override the default, though.
There are a couple of fixes as well, and a number of cleanups."
* tag 'upstream-3.7-rc1' of git://git.infradead.org/linux-ubi: (24 commits)
UBI: fix trivial typo 'it' => 'is'
UBI: load after mtd device drivers
UBI: print less
UBI: use pr_ helper instead of printk
UBI: comply with coding style
UBI: erase free PEB with bitflip in EC header
UBI: fix autoresize handling in R/O mode
UBI: add max_beb_per1024 to attach ioctl
UBI: allow specifying bad PEBs limit using module parameter
UBI: check max_beb_per1024 value in ubi_attach_mtd_dev
UBI: prepare for max_beb_per1024 module parameter addition
UBI: introduce MTD_PARAM_MAX_COUNT
UBI: separate bad_peb_limit in a function
arm: sam9_l9260_defconfig: correct CONFIG_MTD_UBI_BEB_LIMIT
UBI: use the whole MTD device size to get bad_peb_limit
mtd: mtdparts: introduce mtd_get_device_size
mtd: mark mtd_is_partition argument as constant
arm: sam9_l9260_defconfig: remove non-existing config option
UBI: kill CONFIG_MTD_UBI_BEB_RESERVE
UBI: limit amount of reserved eraseblocks for bad PEB handling
...
|
|
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Use 'late_initcall()' in UBI to make sure it initializes after MTD drivers.
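The change is essentially a one-liner in build.c (sketch):

/* Before: same initcall level as many MTD drivers, so initialization
 * order depended on link order */
module_init(ubi_init);

/* After: late_initcall() runs after device_initcall()-level MTD
 * drivers when UBI is built in */
late_initcall(ubi_init);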
Signed-off-by: Jiang Lu <lu.jiang@windriver.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
UBI was mistakenly using 'kfree()' instead of 'kmem_cache_free()' when
freeing "attach eraseblock" structures in vtbl.c. Thankfully, this happened
only when we were doing auto-format, so many systems were unaffected. However,
there are still many affected users.
Strangely, the system did not crash and nothing bad happened when the SLUB
memory allocator was used, but in the case of SLOB we observed a crash
right away.
This problem was introduced in 2.6.39 by commit
"6c1e875 UBI: add slab cache for ubi_scan_leb objects"
A note for stable trees:
Because variables were renamed, this won't cleanly apply to older kernels.
Changing names like this should help:
1. ai -> si
2. aeb_slab_cache -> seb_slab_cache
3. new_aeb -> new_seb
Reported-by: Richard Genoud <richard.genoud@gmail.com>
Tested-by: Richard Genoud <richard.genoud@gmail.com>
Tested-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: stable@vger.kernel.org [v2.6.39+]
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
UBI currently prints a lot of information when it mounts a volume, which
bothers some people. Make it less chatty - print only important information
by default.
Get rid of 'dbg_msg()' macro completely.
Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Use 'pr_err()' instead of 'printk(KERN_ERR', etc.
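A representative conversion (message text illustrative):

/* Before */
printk(KERN_ERR "UBI error: cannot attach mtd%d\n", mtd->index);

/* After: shorter, and the log level cannot be accidentally omitted */
pr_err("UBI error: cannot attach mtd%d\n", mtd->index);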
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Join all the split printk lines in order to stop checkpatch complaining.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Without this patch, these PEBs are not scrubbed until we put data in them.
Bitflips can accumulate later and we can lose the EC header (but the VID
header should be intact and allow the data to be recovered).
Signed-off-by: Matthieu Castet <matthieu.castet@parrot.com>
Cc: stable@vger.kernel.org
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Currently UBI fails in autoresize when it is in R/O mode (e.g., because the
underlying MTD device is R/O). This patch fixes the issue - we just skip
autoresize and print a warning.
Reported-by: Pali Rohár <pali.rohar@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
This patch provides a possibility to set the "maximum expected number of
bad blocks per 1024 blocks" (max_beb_per1024) for each mtd device using
the UBI_IOCATT ioctl.
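A hypothetical user-space sketch of the new knob (error handling minimal;
values illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <mtd/ubi-user.h>

int main(void)
{
	struct ubi_attach_req req;
	int fd = open("/dev/ubi_ctrl", O_RDWR);

	if (fd < 0) {
		perror("open /dev/ubi_ctrl");
		return 1;
	}
	memset(&req, 0, sizeof(req));
	req.ubi_num = UBI_DEV_NUM_AUTO;	/* let UBI pick the device number */
	req.mtd_num = 0;		/* attach mtd0 */
	req.max_beb_per1024 = 20;	/* the new field added by this patch */

	if (ioctl(fd, UBI_IOCATT, &req) < 0) {
		perror("UBI_IOCATT");
		return 1;
	}
	printf("attached as ubi%d\n", req.ubi_num);	/* filled in on return */
	return 0;
}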
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
This patch provides the possibility to adjust the "maximum expected number of
bad blocks per 1024 blocks" (max_beb_per1024) for each mtd device.
The majority of NAND devices have their max_beb_per1024 equal to 20, but
sometimes it's more.
Now, we can adjust that via a kernel parameter:
ubi.mtd=<name|num|path>[,<vid_hdr_offs>[,max_beb_per1024]]
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
max_beb_per1024 shouldn't be negative, and a 0 value will be treated as
the default value. For the upper bound, 768/1024 should be enough.
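A sketch of the check; the macro names are assumptions based on the
values above:

/* in ubi_attach_mtd_dev() (sketch) */
if (max_beb_per1024 < 0 || max_beb_per1024 > MAX_MTD_UBI_BEB_LIMIT)
	return -EINVAL;	/* MAX_MTD_UBI_BEB_LIMIT being 768 */

if (!max_beb_per1024)
	max_beb_per1024 = CONFIG_MTD_UBI_BEB_LIMIT;	/* 0 means default */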
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
This patch prepares the way for the addition of the max_beb_per1024
module parameter. There's no functional change.
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Reviewed-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
No functional changes here; this just prepares for the next patch.
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
On NAND flash devices, UBI reserves some physical erase blocks (PEBs) for
bad block handling. Today, the number of reserved PEBs can only be set as a
percentage of the total number of PEBs in each MTD partition. For example, for
a NAND flash with 128KiB PEBs, two MTD partitions of 20MiB (mtd0) and 100MiB
(mtd1) and 2% reserved PEBs:
- the UBI device on mtd0 will have 2 PEBs reserved
- the UBI device on mtd1 will have 16 PEBs reserved
The problem with this behaviour is that NAND flash manufacturers give a
minimum number of valid blocks (NVB) during the endurance life of the
device, e.g.:
Parameter Symbol Min Max Unit Notes
--------------------------------------------------------------
Valid block number NVB 1004 1024 Blocks 1
From this number we can deduce the maximum number of bad PEBs that a device
will contain during its endurance life: a 128MiB NAND flash (1024 PEBs) will
have no more than 20 bad blocks during the flash endurance life.
But the manufacturer doesn't say where those bad blocks will appear, nor
whether they will be evenly distributed across the whole device (and I'm
pretty sure they won't be). So, according to the datasheets, we should
reserve the maximum number of bad PEBs for each UBI device (worst case
scenario: 20 bad blocks appear on the smallest MTD partition).
So this patch makes UBI use the whole MTD device size to calculate the
maximum number of expected bad eraseblocks.
The Kconfig option is expressed per 1024 blocks, so it can have a default
value of 20, which is *very* common for NAND devices.
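A sketch of the resulting computation; mtd_get_device_size() is the
helper introduced elsewhere in this series:

static int get_bad_peb_limit(const struct ubi_device *ubi,
			     int max_beb_per1024)
{
	uint64_t device_pebs;

	/* use the whole chip, not just this partition */
	device_pebs = mtd_div_by_eb(mtd_get_device_size(ubi->mtd),
				    ubi->mtd);
	/* e.g. 1024 PEBs * 20 / 1024 = 20 reserved PEBs */
	device_pebs *= max_beb_per1024;
	do_div(device_pebs, 1024);

	return device_pebs;
}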
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
CONFIG_MTD_UBI_BEB_RESERVE and MIN_RESEVED_PEBS are no longer used,
since the amount of reserved eraseblocks for bad PEB handling is now
derived from 'ubi->bad_peb_limit' (ubi's maximum expected bad
eraseblocks).
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
|
|
The existing mechanism of reserving PEBs for bad PEB handling has two
flaws:
- It is calculated as a percentage of good PEBs instead of total PEBs.
- There's no limit on the amount of PEBs UBI reserves for future bad
eraseblock handling.
This patch changes the mechanism to overcome these flaws.
The desired level of PEBs reserved for bad PEB handling (beb_rsvd_level)
is set to the maximum expected bad eraseblocks (bad_peb_limit) minus the
existing number of bad eraseblocks (bad_peb_count).
The actual amount of PEBs reserved for bad PEB handling is usually set
to the desired level (but in some circumstances may be lower than the
desired level, e.g. when attaching to a device that has too few
available PEBs to satisfy the desired level).
If the device has too many bad PEBs (above the expected limit), then the
desired level and the actual amount of PEBs reserved are set to zero. No
PEBs will be set aside for future bad eraseblock handling - even if some
PEBs are made available (e.g. by shrinking a volume).
If another PEB goes bad and there are available PEBs, then the eraseblock
will be marked bad (consuming one available PEB). But if there are no
available PEBs, UBI will go into read-only mode.
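A simplified sketch of the sizing logic (field names from ubi.h):

static void init_beb_rsvd(struct ubi_device *ubi)
{
	/* desired reserve: expected-bad minus already-bad */
	int level = ubi->bad_peb_limit - ubi->bad_peb_count;

	if (level < 0)
		level = 0;	/* over the limit: reserve nothing */

	ubi->beb_rsvd_level = level;
	/* the actual reserve is capped by what is really available */
	ubi->beb_rsvd_pebs = min(level, ubi->avail_pebs);
	ubi->avail_pebs -= ubi->beb_rsvd_pebs;
}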
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
|
|
Introduce 'ubi->bad_peb_limit', which specifies an upper limit of PEBs
UBI expects to go bad. Currently, it is initialized to a fixed percentage
of total PEBs in the UBI device (configurable via CONFIG_MTD_UBI_BEB_LIMIT).
The 'bad_peb_limit' is intended to be used for calculating the amount of PEBs
UBI needs to reserve for bad eraseblock handling.
Artem: minor amendments.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
|
|
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Currently, there are several locations where an attempt to reserve more
PEBs for bad PEB handling is made, with the same code being duplicated.
Harmonize it by introducing 'ubi_update_reserved()'.
Also, improve the debug message issued, making it more descriptive.
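A sketch of what the helper consolidates (simplified; field names from
ubi.h):

void ubi_update_reserved(struct ubi_device *ubi)
{
	int need = ubi->beb_rsvd_level - ubi->beb_rsvd_pebs;

	if (need <= 0 || ubi->avail_pebs == 0)
		return;

	need = min_t(int, need, ubi->avail_pebs);
	ubi->avail_pebs -= need;
	ubi->rsvd_pebs += need;
	ubi->beb_rsvd_pebs += need;
	ubi_msg("reserved more %d PEBs for bad PEB handling", need);
}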
Artem: amended the patch a little.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
|
|
The function name within the comment was not aligned with the actual
function name.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
|
|
Signed-off-by: Peter Meerwald <pmeerw@pmeerw.net>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
|
|
The current value (1%) is too low for real NAND devices; the huge majority
of devices (SLC or MLC) specify a 2% maximum of bad blocks.
(That is 20 blocks on a 1024-block device, 40 on 2048, and so on.)
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
|
|
Commit "e9b4cf2 UBI: fix debugfs-less systems support" fixed one
regression but introduced a different regression - the debugfs is now always
compiled out. Root cause: IS_ENABLED() arguments should be used with the
CONFIG_* prefix.
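The failure mode in a representative line (both variants shown for
contrast):

if (!IS_ENABLED(DEBUG_FS))		/* wrong: expands to 0 whether or
					   not debugfs is enabled, so this
					   branch always fires */
	return 0;

if (!IS_ENABLED(CONFIG_DEBUG_FS))	/* right */
	return 0;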
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Commit "62f38455 UBI: modify ubi_wl_flush function to clear work queue for a lnum"
takes the 'work_sem' semaphore in write mode for the entire loop, which is not
very good because it will block other workers for potentially long time. We do
not need to have it in write mode - read mode is enough, and we do not need to
hole it over the entire loop. So this patch turns changes the locking: takes
'work_sem' in read mode and pushes it down to the loop.
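A simplified sketch of the resulting shape of ubi_wl_flush():

int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum)
{
	int err = 0, found = 1;

	while (found) {
		struct ubi_work *wrk;

		found = 0;
		down_read(&ubi->work_sem);	/* read mode, per iteration */
		spin_lock(&ubi->wl_lock);
		list_for_each_entry(wrk, &ubi->works, list) {
			if ((vol_id == UBI_ALL || wrk->vol_id == vol_id) &&
			    (lnum == UBI_ALL || wrk->lnum == lnum)) {
				list_del(&wrk->list);
				ubi->works_count -= 1;
				spin_unlock(&ubi->wl_lock);

				err = wrk->func(ubi, wrk, 0);
				found = 1;
				break;
			}
		}
		if (!found)
			spin_unlock(&ubi->wl_lock);
		up_read(&ubi->work_sem);
		if (err)
			return err;
	}
	return 0;
}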
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|
|
Commit "aa44d1d UBI: remove Kconfig debugging option" broke UBI and it
refuses to initialize if debugfs (CONFIG_DEBUG_FS) is disabled. I incorrectly
assumed that debugfs files creation function will return success if debugfs
is disabled, but they actually return -ENODEV. This patch fixes the issue.
Reported-by: Paul Parsons <lost.distance@yahoo.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Tested-by: Paul Parsons <lost.distance@yahoo.com>
|
|
This patch modifies ubi_wl_flush to force the erasure of
particular volume id / logical eraseblock number pairs. The previous
functionality is preserved when passing UBI_ALL for both values. The locations
where ubi_wl_flush was called are changed appropriately: ubi_leb_erase only
flushes for the erased LEB, and ubi_create_volume forces flushing only for its
volume id.
External code can call this new feature via the new function ubi_flush() added
to kapi.c, which simply passes through to ubi_wl_flush().
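A sketch of the pass-through, modeled on the other kapi.c wrappers:

int ubi_flush(int ubi_num, int vol_id, int lnum)
{
	struct ubi_device *ubi;
	int err;

	ubi = ubi_get_device(ubi_num);
	if (!ubi)
		return -ENODEV;

	err = ubi_wl_flush(ubi, vol_id, lnum);
	ubi_put_device(ubi);
	return err;
}
EXPORT_SYMBOL_GPL(ubi_flush);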
This was tested by disabling the call to do_work in the UBI thread, which
leaves the work queue in place unless it is explicitly flushed. UBIFS was
changed to call ubifs_leb_change 50 times for four different LEBs. Then the
new function was called to clear the queue: passing wrong volume ids / lnums,
correct ones, and finally UBI_ALL for both to ensure it was all cleared. The
work queue was dumped each time and the selective removal of the particular
LEB numbers was observed. Extra checks were enabled and UBIFS's integck was
also run. Finally, the drive was repeatedly filled and emptied to ensure that
the queue was cleared normally.
Artem: amended the patch.
Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
|