- 28 Jan, 2008 24 commits
-
-
Stuart Menefy authored
Presently most of the 29-bit physical parts do P1/P2 segmentation with a 1:1 cached/uncached mapping, jumping between the two to control the caching behaviour. This provides the basic infrastructure to maintain this behaviour on 32-bit physical parts that don't map P1/P2 at all, using a shiny new linker section and corresponding fixmap entry.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
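For context, a minimal sketch of the 29-bit P1/P2 aliasing this refers to; the segment bases and mask are the conventional SH values, and the helper names are purely illustrative, not the kernel's:

    /* Illustrative only: how a 1:1 cached/uncached alias works on 29-bit SH.
     * P1 (0x80000000) is the cached window, P2 (0xA0000000) the uncached one;
     * both map the same 29 bits of physical address space. */
    #define PHYS_MASK   0x1fffffffUL
    #define P1_BASE     0x80000000UL    /* cached */
    #define P2_BASE     0xa0000000UL    /* uncached */

    static inline unsigned long to_uncached(unsigned long cached_vaddr)
    {
            return (cached_vaddr & PHYS_MASK) | P2_BASE;
    }

    static inline unsigned long to_cached(unsigned long uncached_vaddr)
    {
            return (uncached_vaddr & PHYS_MASK) | P1_BASE;
    }

On 32-bit physical parts there is no fixed window to mask into, which is why the commit introduces a dedicated uncached section plus a fixmap entry instead.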
-
Nobuhiro Iwamatsu authored
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Stuart Menefy authored
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Consolidates the HUGETLB definitions and others.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
We intend to share the mm options, so move the SH-only subtypes up a level.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 19 Nov, 2007 2 commits
-
-
Paul Mundt authored
With the refactored update_mmu_cache() introduced in older kernels, there's no longer any need to take the page_table_lock in this path, so simply drop it completely. Without this, performance degradation is seen on SMP on heavily threaded workloads that don't use the split ptlock, and ultimately we have no reason to contend for the lock in the first place.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
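Purely as an illustration of the locking change being described (simplified, with a hypothetical __update_tlb() helper standing in for the actual TLB load):

    /* Sketch only: the fast path no longer serializes on the coarse
     * mm->page_table_lock; the caller already holds the pte lock. */
    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t pte)
    {
            /* was: spin_lock(&vma->vm_mm->page_table_lock); */
            __update_tlb(vma, address, pte);    /* hypothetical helper */
            /* was: spin_unlock(&vma->vm_mm->page_table_lock); */
    }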
-
Paul Mundt authored
The __do_page_fault() fast path contains a UTLB flush in order to force an ITLB reload. This isn't needed in practice, as the ITLB is auto-reloaded from the UTLB anyway, which is already displaced by the manual 'ldtlb' in the update_mmu_cache() path. Removing it provides a measurable speed-up in the TLB miss fast path.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 07 Nov, 2007 5 commits
-
-
Paul Mundt authored
Now that copy_to_user_page()/copy_from_user_page() are wired up, we can drop the old __copy_xxx() implementations. Now that the page colouring scheme has changed via kmap_coherent(), we can avoid the flush in these specific helpers.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This moves copy_{to,from}_user_page() out of line on SH-4 and converts them to the kmap_coherent() API. Based on the MIPS implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
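A rough sketch of what a kmap_coherent()-based copy_to_user_page() looks like, modelled loosely on the MIPS approach; the kmap/kunmap helper signatures and the final executable-page flush are simplified here:

    /* Sketch: copy through a congruent kernel alias of the user page so the
     * data lands in the right cache lines without a separate flush. */
    void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
                           unsigned long vaddr, void *dst,
                           const void *src, int len)
    {
            void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);

            memcpy(vto, src, len);
            kunmap_coherent(vto);

            if (vma->vm_flags & VM_EXEC)
                    flush_cache_page(vma, vaddr, page_to_pfn(page));
    }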
-
Paul Mundt authored
With the kmap_coherent() API in place, this is trivial to implement, and lets us avoid the cache flush in certain cases.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
The ST40 stuff in-tree hasn't built for some time, and hasn't been updated for over 3 years. ST maintains their own out-of-tree changes and rebases occasionally, and that's ultimately where all of the ST40 users end up anyway. In order for the ST40 code to be brought up to date, most of the stuff removed in this changeset would have to be rewritten anyway, so there's very little benefit in keeping the remnants around either.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Follow the MIPS and sparc64 changes for -Werror instrumentation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 02 Nov, 2007 1 commit
-
-
Stuart Menefy authored
movca.l is restricted to SH-4 and up only, though toolchains that are unable to support ISA tuning (especially older versions of binutils) will happily assemble the bogus opcode for older parts. Conditionalize it to fix the SH-3 regressions noted by Kristoffer.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
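For reference, the kind of conditionalisation being described looks roughly like this (an illustrative fragment only; the real change lives in the SH cache/page code, not in this exact form, and the helper name is made up):

    /* Sketch: use movca.l (store with cache-block allocate) only where the
     * ISA provides it; fall back to a plain store elsewhere. */
    static inline void store_word(void *addr, unsigned long val)
    {
    #ifdef CONFIG_CPU_SH4
            __asm__ __volatile__("movca.l %0, @%1"
                                 : : "z" (val), "r" (addr) : "memory");
    #else
            *(volatile unsigned long *)addr = val;
    #endif
    }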
-
- 19 Oct, 2007 1 commit
-
-
Serge E. Hallyn authored
is_init() is an ambiguous name for the pid==1 check. Split it into is_global_init() and is_container_init(). A cgroup init has its tsk->pid == 1. A global init also has its tsk->pid == 1 and its active pid namespace is the init_pid_ns. But rather than check the active pid namespace, compare the task structure with 'init_pid_ns.child_reaper', which is initialized during boot to the /sbin/init process and never changes.
Changelog:
2.6.22-rc4-mm2-pidns1:
- Use 'init_pid_ns.child_reaper' to determine if a given task is the global init (/sbin/init) process. This improves performance and removes the dependence on task_pid().
2.6.21-mm2-pidns2:
- [Sukadev Bhattiprolu] Changed is_container_init() calls in {powerpc,ppc,avr32}/traps.c for the _exception() call to is_global_init(). This way, we kill only the cgroup if the cgroup's init has a bug rather than force a kernel panic.
[akpm@linux-foundation.org: fix comment]
[sukadev@us.ibm.com: Use is_global_init() in arch/m32r/mm/fault.c]
[bunk@stusta.de: kernel/pid.c: remove unused exports]
[sukadev@us.ibm.com: Fix capability.c to work with threaded init]
Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
Acked-by: Pavel Emelianov <xemul@openvz.org>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Herbert Poetzel <herbert@13thfloor.at>
Cc: Kirill Korotaev <dev@sw.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
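The distinction boils down to a pointer comparison. A hedged sketch of the check, following the description above rather than the exact in-tree helper:

    /* Sketch: the global init is whatever task init_pid_ns designated as its
     * child reaper at boot (/sbin/init), so no pid-namespace walk is needed. */
    static inline int is_global_init(struct task_struct *tsk)
    {
            return tsk == init_pid_ns.child_reaper;
    }

A typical caller is an arch fault or OOM path that refuses to kill the task when is_global_init(current) is true, while a container's init only matches is_container_init() inside its own namespace.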
-
- 17 Oct, 2007 2 commits
-
-
Ralf Baechle authored
dma_cache_(wback|inv|wback_inv) were the earliest attempt at a generalized cache management API for I/O purposes. Originally it was basically the raw MIPS low-level cache API exported to the entire world. The API has suffered from a lack of documentation, was never as widely used as its more modern counterparts, and can easily be replaced by dma_cache_sync. So remove it, or rather turn the surviving bits back into an arch-private API, as discussed on linux-arch.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Paul Mackerras <paulus@samba.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@parisc-linux.org>
Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
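As an illustration of the replacement being described, a driver-side conversion might look like this (hedged example; dev, buf and len are placeholders, not taken from any particular driver):

    /* Before: the old, undocumented arch-private style. */
    dma_cache_wback_inv((unsigned long)buf, len);

    /* After: the documented DMA API equivalent for a non-coherent buffer. */
    dma_cache_sync(dev, buf, len, DMA_BIDIRECTIONAL);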
-
Christoph Lameter authored
Slab constructors currently have a flags parameter that is never used, and the order of the arguments is the opposite of other slab functions: the object pointer is placed before the kmem_cache pointer. Convert ctor(void *object, struct kmem_cache *s, unsigned long flags) to ctor(struct kmem_cache *s, void *object) throughout the kernel.
[akpm@linux-foundation.org: coupla fixes]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
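A hedged before/after for a typical constructor, showing the conversion described; the struct and function names are made up for illustration:

    struct foo { int bar; };

    /* Before: unused flags argument, object pointer first. */
    static void foo_ctor_old(void *obj, struct kmem_cache *cachep,
                             unsigned long flags)
    {
            memset(obj, 0, sizeof(struct foo));
    }

    /* After: kmem_cache pointer first, no unused flags. */
    static void foo_ctor(struct kmem_cache *cachep, void *obj)
    {
            memset(obj, 0, sizeof(struct foo));
    }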
-
- 16 Oct, 2007 2 commits
-
-
KAMEZAWA Hiroyuki authored
Now, arch-dependent code around CONFIG_MEMORY_HOTREMOVE is a mess. This patch cleans it up. This is against 2.6.23-rc6-mm1.
- Fix compile failure on ia64 for the CONFIG_MEMORY_HOTPLUG && !CONFIG_MEMORY_HOTREMOVE case.
- For !CONFIG_MEMORY_HOTREMOVE, add a generic no-op remove_memory(), which returns -EINVAL.
- Remove remove_pages(), only used in powerpc.
- Remove the no-op remove_memory() in i386, sh, sparc64, x86_64.
- Only powerpc returned -ENOSYS at memory hot remove (no-op); change it to return -EINVAL.
Note: Currently, only ia64 supports CONFIG_MEMORY_HOTREMOVE. I welcome other archs if there are requirements and testers.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
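The generic no-op mentioned above is essentially this (a sketch following the description rather than quoting a specific header):

    #ifndef CONFIG_MEMORY_HOTREMOVE
    /* Sketch: without hot-remove support, removal simply reports failure. */
    static inline int remove_memory(u64 start, u64 size)
    {
            return -EINVAL;
    }
    #endif /* CONFIG_MEMORY_HOTREMOVE */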
-
Will Schmidt authored
We have had complaints where a threaded application is left in a bad state after one of its threads is killed when we hit a VM out_of_memory condition. Killing just one of the process threads can leave the application in a bad state, whereas killing the entire process group would allow for the application to restart, or be otherwise handled, and makes it very obvious that something has gone wrong. This change allows the entire process group to be taken down, rather than just the one thread.
Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ian Molton <spyro@f2s.com>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andi Kleen <ak@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Matthew Wilcox <willy@debian.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Richard Curnow <rc@rc0.org.uk>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
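In an arch fault handler's out-of-memory path, the change described amounts to roughly this (an illustrative fragment, not any specific architecture's exact code):

    /* Sketch: when a user task cannot be given memory, take down the whole
     * thread group instead of just the faulting thread. */
    out_of_memory:
            if (is_global_init(current)) {
                    yield();
                    goto survive;               /* never kill init */
            }
            if (user_mode(regs))
                    do_group_exit(SIGKILL);     /* was: do_exit(SIGKILL) */
            goto no_context;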
-
- 28 Sep, 2007 1 commit
-
-
Stuart Menefy authored
This implements a fast path for small (less than 12 byte) copies, with the existing path treated as the slow path and left as the default behaviour for all other copy sizes.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
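The shape of the idea, as a hedged user-space sketch (the real change is in the SH copy_user assembly; only the 12-byte threshold is taken from the description):

    #include <string.h>

    /* Sketch: short copies take an inline byte loop; everything else falls
     * through to the general-purpose (slow-path) routine. */
    static void *copy_small_fast(void *dst, const void *src, size_t len)
    {
            if (len < 12) {
                    char *d = dst;
                    const char *s = src;
                    while (len--)
                            *d++ = *s++;
                    return dst;
            }
            return memcpy(dst, src, len);   /* stand-in for the slow path */
    }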
-
- 27 Sep, 2007 2 commits
-
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
When using URAM in NUMA mode, another active region is needed. Bump this up so we don't trigger the region truncation in add_active_range().
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-