Pull ARM updates from Russell King:
"Another mixture of changes this time around:
- Split the XIP linker file from the main linker file to make it more
maintainable, plus various XIP fixes and a cleanup of a resulting
macro.
- Decompressor cleanups from Masahiro Yamada
- Avoid printing an error for a missing L2 cache
- Remove some duplicated symbols in System.map, and move
vectors/stubs back into kernel VMA
- Various low priority fixes from Arnd
- Updates to allow bus match functions to return negative errno
values, touching some drivers and the driver core. Greg has acked
these changes.
- Virtualisation platform updates from Jean-Philippe Brucker.
- Security enhancements from Kees Cook
- Rework some Kconfig dependencies and move PSCI idle management code
out of arch/arm into drivers/firmware/psci.c
- ARM DMA mapping updates, touching media, acked by Mauro.
- Fix places in ARM code which should be using virt_to_idmap() so
that Keystone2 can work.
- Fix Marvell Tauros2 to work again with non-DT boots.
- Provide a delay timer for ARM Orion platforms"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (45 commits)
ARM: 8546/1: dma-mapping: refactor to fix coherent+cma+gfp=0
ARM: 8547/1: dma-mapping: store buffer information
ARM: 8543/1: decompressor: rename suffix_y to compress-y
ARM: 8542/1: decompressor: merge piggy.*.S and simplify Makefile
ARM: 8541/1: decompressor: drop redundant FORCE in Makefile
ARM: 8540/1: decompressor: use clean-files instead of extra-y to clean files
ARM: 8539/1: decompressor: drop more unneeded assignments to "targets"
ARM: 8538/1: decompressor: drop unneeded assignments to "targets"
ARM: 8532/1: uncompress: mark putc as inline
ARM: 8531/1: turn init_new_context into an inline function
ARM: 8530/1: remove VIRT_TO_BUS
ARM: 8537/1: drop unused DEBUG_RODATA from XIP_KERNEL
ARM: 8536/1: mm: hide __start_rodata_section_aligned for non-debug builds
ARM: 8535/1: mm: DEBUG_RODATA makes no sense with XIP_KERNEL
ARM: 8534/1: virt: fix hyp-stub build for pre-ARMv7 CPUs
ARM: make the physical-relative calculation more obvious
ARM: 8512/1: proc-v7.S: Adjust stack address when XIP_KERNEL
ARM: 8411/1: Add default SPARSEMEM settings
ARM: 8503/1: clk_register_clkdev: remove format string interface
ARM: 8529/1: remove 'i' and 'zi' targets
...
---
There are a few things about the *pte_alloc*() helpers worth cleaning up:
- The 'vma' argument is unused; let's drop it.
- Most __pte_alloc() callers do a speculative check for pmd_none()
before taking the ptl: let's introduce a pte_alloc() macro which does
the check.
The only direct user of __pte_alloc() left is userfaultfd, which has
different expectations about atomicity with respect to the pmd.
- pte_alloc_map() and pte_alloc_map_lock() are redefined using
pte_alloc().
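As a sketch of the resulting shape (the exact parameter lists here are
an assumption, not a quote from the patch):

#define pte_alloc(mm, pmd) \
        (unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd))

#define pte_alloc_map(mm, pmd, address) \
        (pte_alloc(mm, pmd) ? NULL : pte_offset_map(pmd, address))

This keeps the fast path (pmd already populated) free of any function
call, while the slow path falls through to __pte_alloc().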
[sudeep.holla@arm.com: fix build for arm64 hugetlbpage]
[sfr@canb.auug.org.au: fix arch/arm/mm/mmu.c some more]
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
For an XIP build, _etext does not represent the end of the
binary image that needs to stay mapped into the MODULES_VADDR area.
Years ago, data came before text in the memory map. However,
now that the order is text/init/data, an XIP_KERNEL needs to map
up to the data location in order to keep from cutting off
parts of the kernel that are needed.
We only map up to the beginning of data because data has already been
copied, so there's no reason to keep it around anymore.
A new symbol is created to make it clear what we are referring to.
This fixes the bug where you might lose the end of your kernel area
after page table setup is complete.
Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC multiplatform code updates from Arnd Bergmann:
"This branch is the culmination of 5 years of effort to bring the ARMv6
and ARMv7 platforms together such that they can all be enabled and
boot the same kernel. It has been a tremendous amount of cleanup and
refactoring by a huge number of people, and the creation of several new
(and major) subsystems to better abstract out all the platform details
in an appropriate manner.
The bulk of this branch is a large patchset from Arnd that brings
several of the more minor and older platforms we have closer to
multiplatform support. Among these are MMP, S3C64xx, Orion5x, mv78xx0
and realview. Much of this is moving around header files from old mach
directories, but there are also some cleanup patches of debug_ll
(lowlevel debug per-platform options) and other parts.
Linus Walleij also has some patches to clean up the older ARM Realview
platforms by finally introducing DT support, and Rob Herring has some
for ARM Versatile which is now DT-only. Both of these platforms are
now multiplatform.
Finally, a couple of patches from Russell for Dove PMU, and a fix from
Valentin Rothberg for Exynos ADC, which were rebased on top of the
series to avoid conflicts"
* tag 'armsoc-multiplatform' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (75 commits)
ARM: realview: don't select SMP_ON_UP for UP builds
ARM: s3c: simplify s3c_irqwake_{e,}intallow definition
ARM: s3c64xx: fix pm-debug compilation
iio: exynos-adc: fix irqf_oneshot.cocci warnings
ARM: realview: build realview-dt SMP support only when used
ARM: realview: select apropriate targets
ARM: realview: clean up header files
ARM: realview: make all header files local
ARM: no longer make CPU targets visible separately
ARM: integrator: use explicit core module options
ARM: realview: enable multiplatform
ARM: make default platform work for NOMMU
ARM: debug-ll: move DEBUG_LL_UART_EFM32 to correct Kconfig location
ARM: defconfig: use correct debug_ll settings
ARM: versatile: convert to multi-platform
ARM: versatile: merge mach code into a single file
ARM: versatile: switch to DT only booting and remove legacy code
ARM: versatile: add DT based PCI detection
ARM: pxa: mark ezx structures as __maybe_unused
ARM: pxa: mark raumfeld init functions as __maybe_unused
...
---
The VMSA field of MMFR0 (bottom 4 bits) is incremented for each
added feature. PXN is supported if the value is >= 4 and LPAE
is supported if it is >= 5.
If a kernel with CONFIG_ARM_LPAE disabled is used on a processor that
supports LPAE, we can still use PXN in short descriptors. So check for
>= 4, not == 4.
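A sketch of the adjusted test, reusing the kernel's existing CPUID
helpers (the surrounding context is assumed):

        /* VMSA field is ID_MMFR0[3:0]; >= 4 means PXN is implemented */
        if (cpu_architecture() == CPU_ARCH_ARMv7 &&
            (read_cpuid_ext(CPUID_EXT_MMFR0) & 0xf) >= 4)
                user_pmd_table |= PMD_PXNTABLE;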
Signed-off-by: Jungseung Lee <js07.lee@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Take the new memblock attribute MEMBLOCK_NOMAP into account when
deciding whether a certain region is or should be covered by the
kernel direct mapping.
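As a hedged sketch of what this means for ARM's pfn_valid() (assumed
shape, not the literal patch):

int pfn_valid(unsigned long pfn)
{
        /* was memblock_is_memory(); MEMBLOCK_NOMAP regions must not count */
        return memblock_is_map_memory(__pfn_to_phys(pfn));
}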
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
This implements create_mapping_late(), which we will use to populate
the UEFI Runtime Services page tables.
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
Add support to the kernel translation table population routines for
creating non-global mappings. This will be used by the UEFI runtime
services, which will use temporary mappings in the userland range.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
To allow __create_mapping() to be used for populating UEFI Runtime
Services page tables, factor out the allocation routine 'early_alloc'
and pass it down as a function pointer into alloc_init_[pud|pmd|pte].
This way, new users of __create_mapping() can supply another allocation
function.
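Roughly, the allocation becomes a callback rather than a hard-coded
call; the function names and signatures below are assumptions based on
the description:

static void *__init early_alloc(unsigned long sz)
{
        void *ptr = __va(memblock_alloc(sz, sz));
        memset(ptr, 0, sz);
        return ptr;
}

static pte_t *__init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
                                   unsigned long prot,
                                   void *(*alloc)(unsigned long sz))
{
        if (pmd_none(*pmd)) {
                pte_t *pte = alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
                __pmd_populate(pmd, __pa(pte), prot);
        }
        return pte_offset_kernel(pmd, addr);
}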
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
In order to be able to reuse the core mapping logic of create_mapping()
for mapping the UEFI Runtime Services into a private set of page tables,
split it off into a separate function, __create_mapping(), which we will
wire up in a subsequent patch.
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
This enables the generic early_ioremap implementation for ARM.
It uses the fixmap region reserved for kmap. Since early_ioremap
is only supported before paging_init(), and kmap is only supported
afterwards, this is guaranteed not to cause any clashes.
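For illustration, typical usage looks like this (phys_base is a
placeholder):

        void __iomem *regs = early_ioremap(phys_base, SZ_4K);
        u32 id = readl(regs);           /* probe before paging_init() */
        early_iounmap(regs, SZ_4K);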
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
In a multiplatform configuration, we may end up building a kernel for
both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the
xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a
build error from the coprocessor instructions.
Since we know this code will only have to run on an actual xscale
processor, we can simply build the entire file for ARMv5TE.
Related to this, we need to handle the iWMMXT initialization sequence
differently during boot, to ensure we don't try to touch xscale
specific registers on other CPUs from the xscale_cp0_init initcall.
cpu_is_xscale() used to be hardcoded to '1' in any configuration that
enables any XScale-compatible core, but this breaks once we can have a
combined kernel with MMP1 and something else.
In this patch, I replace the existing cpu_is_xscale() macro with a new
cpu_is_xscale_family() macro that evaluates to true for xscale, xsc3 and
mohawk, which makes the behavior more deterministic.
The two existing users of cpu_is_xscale() are modified accordingly, but
this slightly changes behavior for kernels that enable CPU_MOHAWK
without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would
leave PMD_BIT4 in the page tables untouched; now they clear it, as we've
always done for kernels that enable both MOHAWK and the support for the
older CPU types.
Since the previous behavior was inconsistent, I assume it was
unintentional.
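A sketch consistent with the description (the CPUID constant names are
assumptions):

static inline int cpu_is_xscale_family(void)
{
        unsigned int id = read_cpuid_id();

        switch (id & ARM_CPU_XSCALE_ARCH_MASK) {
        case ARM_CPU_XSCALE_ARCH_V1:
        case ARM_CPU_XSCALE_ARCH_V2:
        case ARM_CPU_XSCALE_ARCH_V3:
                return 1;               /* xscale, xsc3 or mohawk */
        default:
                return 0;
        }
}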
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
Install a non-faulting handler just before unmasking imprecise aborts
and switch back to the regular one after unmasking is done.
This catches any pending imprecise abort that the firmware/bootloader
may have left behind that would normally crash the kernel at that point.
As there are apparently a lot of bootloaders out there that do such a
thing, it makes sense to handle it in the common startup code.
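A sketch of the swap (the hook and table names are assumptions about
the fault-handling code):

static int __init dummy_abort_handler(unsigned long addr, unsigned int fsr,
                                      struct pt_regs *regs)
{
        return 0;       /* swallow anything the bootloader left pending */
}

static void __init early_abt_enable(void)
{
        fsr_info[FSR_FS_AEA].fn = dummy_abort_handler;
        local_abt_enable();     /* a latent abort fires here, harmlessly */
        fsr_info[FSR_FS_AEA].fn = do_bad;  /* restore the regular handler */
}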
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Tyler Baker <tyler.baker@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
This patch adds imprecise abort enable/disable macros and uses them to
enable imprecise aborts early when starting the kernel.
This helps in tracking down the real cause of such imprecise aborts, as
they are handled as soon as they occur. Until now, those aborts would
only be enabled when entering userspace and would, as a consequence,
crash the first userspace process if any abort had been raised during
kernel startup.
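The macros boil down to toggling the A (asynchronous abort mask) bit in
the CPSR; a sketch for ARMv6+ (earlier architectures need a different
sequence):

#define local_abt_enable()      __asm__("cpsie a" : : : "memory", "cc")
#define local_abt_disable()     __asm__("cpsid a" : : : "memory", "cc")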
Signed-off-by: Fabrice Gasnier <fabrice.gasnier@st.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
for-linus
---
Keep the machine vectors in their own domain to prevent software-based
user access control from making the vector code inaccessible, and
thereby deadlocking the machine.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Add early fixmap support, initially to support permanent, fixed
mapping support for early console. A temporary, early pte is
created which is migrated to a permanent mapping in paging_init.
This is also needed since the attributes may change as the memory
types are initialized. The 3MiB range of fixmap spans two pte
tables, but currently only one pte is created for early fixmap
support.
Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since
the index for kmap does not start at zero anymore. This reverts
4221e2e6b316 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and
FIX_KMAP_END") to some extent.
Cc: Mark Salter <msalter@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Add a message suggesting that HIGHMEM be enabled when physical memory
is truncated due to lack of virtual address space to map it in the low
memory mapping.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
The memblock limit is currently used in find_limits to find the bounds
for ZONE_NORMAL. The memblock limit may need to be rounded down by a
PMD size, though, to ensure allocations are fully mapped. This has the
side effect of reducing the amount of memory in ZONE_NORMAL. Once all
lowmem is mapped, it's safe to change the memblock limit back to
include the unaligned section. Adjust the memblock limit after lowmem
mapping is complete.
Before:
# cat /proc/zoneinfo | grep managed
managed 62907
managed 424
After:
# cat /proc/zoneinfo | grep managed
managed 63331
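Schematically, the adjustment is a one-liner (its exact placement is an
assumption):

        /* once lowmem is fully mapped, e.g. at the end of paging_init() */
        memblock_set_current_limit(arm_lowmem_limit);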
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Eliminate the needless nommu version of this function, and get rid of
the proc_info_list structure argument - we no longer need this in order
to fix up the page table entries.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Re-implement the physical address space switching to be architecturally
compliant. This involves flushing the caches, disabling the MMU, and
only then updating the page tables. Once that is complete, the system
can be brought back up again.
Since we disable the MMU, we need to do the update in assembly code.
Luckily, the entries which need updating are fairly trivial, and are
all set up by the early assembly code. We can merely adjust each entry
by the delta required.
Not only does this fix the code to be architecturally compliant, but it
fixes a couple of bugs too:
1. The original code would only ever update the first L2 entry covering
a fraction of the kernel; the remainder were left untouched.
2. The L2 entries covering the DTB blob were likewise untouched.
This solution fixes up all entries.
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
The init_meminfo() method is not about initialising meminfo - it's about
fixing up the physical to virtual translation so that we use a different
physical address space, possibly above the 4GB physical address space.
Therefore, the name "init_meminfo()" is confusing.
Rename it to pv_fixup() instead.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
There is no point in platform code doing this; let's move it into the
generic code so it doesn't get duplicated.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Make the init_meminfo function return the offset to be applied to the
phys-to-virt translation constants. This allows us to move the update
into generic code, along with the requirements for this update.
This avoids platforms having to know the details of the phys-to-virt
translation support.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
At boot time we round the memblock limit down to section size in an
attempt to ensure that we will have mapped this RAM with section
mappings prior to allocating from it. When mapping RAM we iterate over
PMD-sized chunks, creating these section mappings.
Section mappings are only created when the end of a chunk is aligned to
section size. Unfortunately, with classic page tables (where PMD_SIZE is
2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
size the first 1M will not be mapped despite having been accounted for
in the memblock limit. This has been observed to result in page tables
being allocated from unmapped memory, causing boot-time hangs.
This patch modifies the memblock limit rounding to always round down to
PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
will round the memblock limit down to a 2M boundary, matching the limits
on section mappings, and preventing allocations from unmapped memory.
For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
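A sketch of the change (the surrounding function is assumed):

        /* was: round_down(memblock_limit, SECTION_SIZE) */
        if (memblock_limit)
                memblock_limit = round_down(memblock_limit, PMD_SIZE);
        memblock_set_current_limit(memblock_limit);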
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Acked-by: Laura Abbott <labbott@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
The local variables kernel_x_start and kernel_x_end are defined using
the 'unsigned long' type, which is wrong because they represent a
physical memory range and will be calculated incorrectly if LPAE is
enabled. As a result, all of the following code in map_lowmem() will
not work correctly.
For example, Keystone 2 boot is broken because
  kernel_x_start == 0x0000 0000
  kernel_x_end   == 0x0080 0000
instead of
  kernel_x_start == 0x0000 0008 0000 0000
  kernel_x_end   == 0x0000 0008 0080 0000
and as a result the whole of low memory is mapped with MT_MEMORY_RW
permissions by this code (since start >= kernel_x_end):
        } else if (start >= kernel_x_end) {
                map.pfn = __phys_to_pfn(start);
                map.virtual = __phys_to_virt(start);
                map.length = end - start;
                map.type = MT_MEMORY_RW;
                create_mapping(&map);
        }
Hence, fix it by using the phys_addr_t type for the variables
kernel_x_start and kernel_x_end.
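That is (the initializers shown are the usual map_lowmem() bounds and
are an assumption here):

        phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
        phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);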
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
The set_memory_* functions have the same implementation except for the
memory attribute. This patch makes them use a common function and pulls
them out into arch/arm/mm/pageattr.c, as arm64 did. This reduces code
size and enhances readability.
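The shape of the consolidation, as a sketch (helper names follow the
arm64 precedent mentioned above and are assumptions here):

struct page_change_data {
        pgprot_t set_mask;
        pgprot_t clear_mask;
};

/* change_page_range() (not shown) applies the masks to each pte */
static int change_memory_common(unsigned long addr, int numpages,
                                pgprot_t set_mask, pgprot_t clear_mask)
{
        struct page_change_data data = { set_mask, clear_mask };

        return apply_to_page_range(&init_mm, addr & PAGE_MASK,
                                   (unsigned long)numpages << PAGE_SHIFT,
                                   change_page_range, &data);
}

int set_memory_ro(unsigned long addr, int numpages)
{
        return change_memory_common(addr, numpages,
                                    __pgprot(L_PTE_RDONLY), __pgprot(0));
}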
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Modern ARMv7-A/R cores optionally implement the following new hardware
feature:
- PXN:
Privileged execute-never (PXN) is a security feature. The PXN bit
determines whether the processor can execute software from the region;
it is an effective mitigation against ret2usr attacks. On an
implementation that does not include LPAE, PXN is optionally supported.
This patch sets the PXN bit in user page tables to prevent user code
from being executed in privileged mode.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Convert many (but not all) printk(KERN_*) calls to pr_* to simplify the
code. We take the opportunity to join some printk lines together so we
don't split the message across several lines, and we also add a few
levels to some messages which were previously missing them.
Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux into devel-stable
generic fixmaps
ARM support for CONFIG_DEBUG_RODATA
---
Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions
into section-sized areas that can have different permissions. Performs
the NX permission changes during free_initmem, so that init memory can
be reclaimed.
This uses section size instead of PMD size to reduce memory lost to
padding on non-LPAE systems.
Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.
Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
This is used from set_fixmap() and clear_fixmap() via asm-generic/fixmap.h.
Also makes sure that the fixmap allocation fits into the expected range.
Based on patch by Rabin Vincent.
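A sketch of the hook that asm-generic/fixmap.h expects (details here
are assumptions, not the literal patch):

void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
{
        unsigned long vaddr = __fix_to_virt(idx);
        pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr);

        BUG_ON(idx >= __end_of_fixed_addresses);        /* range check */

        if (pgprot_val(prot))
                set_pte_at(&init_mm, vaddr, pte,
                           pfn_pte(phys >> PAGE_SHIFT, prot));
        else
                pte_clear(&init_mm, vaddr, pte);
        local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
}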
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Rabin Vincent <rabin@rab.in>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
With commit a05e54c103b0 ("ARM: 8031/2: change fixmap mapping region to
support 32 CPUs"), the fixmap region was expanded to 2MB, but it
precluded any other uses of the fixmap region. In order to support other
uses, the fixmap region needs to be expanded beyond 2MB. Fortunately, the
adjacent 1MB range 0xffe00000-0xfff00000 is available.
Remove the fixmap_page_table pointer and look up the page table via the
virtual address so that the fixmap region can span more than one pmd.
The 2nd pmd is already created since it is shared with the vector page.
Signed-off-by: Rob Herring <robh@kernel.org>
[kees: fixed CONFIG_DEBUG_HIGHMEM get_fixmap() calls]
[kees: moved pte allocation outside of CONFIG_HIGHMEM]
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
Use the more common pr_warn.
Other miscellanea:
o Coalesce formats
o Realign arguments
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Add further comments to the early page table remap code to explain what
the code is doing and why, but more importantly to explain that the
code is not architecturally compliant and is squarely in
"UNPREDICTABLE" behaviour territory.
Add a warning, and taint the kernel too.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
This does the same as the previous commit, but for the S bit, which also
needs to match the initial value which the assembly code used for the
same reasons. Again, we add a check for SMP to ensure that the page
tables are correctly setup for SMP.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Fix a long-standing bug where, for ARMv6+, we don't fully ensure that
the C code sets the same cache policy as the assembly code. This was
introduced partially by commit 11179d8ca28d ("[ARM] 4497/1: Only allow
safe cache configurations on ARMv6 and later") and also by adding SMP
support.
This patch sets the default cache policy based on the flags used by the
assembly code, and then ensures that when a cache policy command line
argument is used, we verify that on ARMv6, it matches the initial setup.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
adjust_cr() is not used anymore, so let's get rid of it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Keep all bits of alignment handling together.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Several places open-code this manipulation; let's consolidate it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
memblock is now fully integrated into the kernel and is the preferred
method for tracking memory. Rather than reinvent the wheel with
meminfo, migrate to using memblock directly instead of meminfo as an
intermediate.
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
"dsb st" can be used to ensure completion of pending cache maintenance
operations, so use it for the v7 cache maintenance operations.
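For reference, the C-level spelling of the same barrier follows the
arch/arm barrier.h pattern (a sketch; the patch itself touches the v7
assembly):

#define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")

so a maintenance sequence can end with dsb(st) rather than a full
dsb(sy).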
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
In 32-bit ARM systems, the fixmap mapping region can support no more
than 14 CPUs (total: 896KB; one CPU: 64KB), while NR_CPUS can be
configured up to 32. So there is a mismatch.
This patch moves the fixmap mapping region downwards to the region
0xffc00000-0xffe00000. The fixmap mapping region can then support up to
32 CPUs.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
'unstable/sa11x0' into for-next
---
CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is
because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do
not have hardware thread registers. The lack of these registers requires
the kernel to update the vectors page at each context switch in order to
write a new TLS pointer. This write must be done via the userspace
mapping, since aliasing caches can lead to expensive flushing when using
kmap. Finally, this requires the vectors page to be mapped r/w for
kernel and r/o for user, which has implications for things like put_user
which must trigger CoW appropriately when targeting user pages.
The upshot of all this is that a v6/v7 kernel makes use of domains to
segregate kernel and user memory accesses. This has the nasty
side-effect of making device mappings executable, which has been
observed to cause subtle bugs on recent cores (e.g. Cortex-A15
performing a speculative instruction fetch from the GIC and acking an
interrupt in the process).
This patch solves this problem by removing the remaining domain support
from ARMv6. A new memory type is added specifically for the vectors page
which allows that page (and only that page) to be mapped as user r/o,
kernel r/w. All other user r/o pages are mapped also as kernel r/o.
Patch co-developed with Russell King.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>