Commit Graph

42488 Commits

Author SHA1 Message Date
Andi Kleen
2fff0a4841 [PATCH] Generic: Move __user cast into probe_kernel_address
Caller of probe_kernel_address shouldn't need to know that
pka is internally implemented with __get_user. So move the
__user cast into pka.

Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:05 +01:00
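
For illustration, a minimal sketch of the kind of wrapper described above, with the __user cast done inside the macro so callers can pass a plain kernel pointer. The pagefault_disable()/set_fs() bracketing and the exact shape are assumptions, not the patch text:

	/* Sketch only: read a value from a possibly-invalid kernel address
	 * without letting the caller fault.  The __user cast required by
	 * __get_user() is hidden inside the macro. */
	#define probe_kernel_address(addr, retval)				\
	({									\
		long ret;							\
		mm_segment_t old_fs = get_fs();					\
										\
		set_fs(KERNEL_DS);						\
		pagefault_disable();						\
		ret = __get_user(retval,					\
				 (__force typeof(retval) __user *)(addr));	\
		pagefault_enable();						\
		set_fs(old_fs);							\
		ret;								\
	})
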
Yinghai Lu
5df0287ecc [PATCH] x86-64: Extend clear_irq_vector
Clear the irq-related entries in irq_vector, irq_domain and vector_irq
instead of clearing irq_vector only, so that when a new irq is created
it can reuse that vector (it is actually the second loop, scanning from
FIRST_DEVICE_VECTOR+8, that reuses it). This avoids the vectors being
used up after enough module insertions and removals.

Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Yinghai Lu <yinghai.lu@amd.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:05 +01:00
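
As a rough illustration of the bookkeeping the commit above describes (the array names come from the commit text; the code shape is an assumption, not the actual patch):

	/* Sketch: drop everything recorded for an irq so a later
	 * vector allocation can hand its vector out again. */
	static void clear_irq_vector(int irq)
	{
		int vector = irq_vector[irq];

		if (!vector)
			return;

		vector_irq[vector] = -1;		/* vector -> irq mapping */
		irq_vector[irq] = 0;			/* irq -> vector mapping */
		irq_domain[irq] = CPU_MASK_NONE;	/* cpus the vector was bound to */
	}
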
Andi Kleen
3760dd6efa [PATCH] i386: Use CLFLUSH instead of WBINVD in change_page_attr
CLFLUSH is a lot faster than WBINVD so try to use that.

Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:05 +01:00
Andi Kleen
770d132f03 [PATCH] i386: Retrieve CLFLUSH size from CPUID
Also report it in /proc/cpuinfo similar to x86-64.

Needed for a follow-on patch.

Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:05 +01:00
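
The CLFLUSH line size is reported by CPUID leaf 1 in EBX bits 15:8, in units of 8 bytes; a hedged sketch of the detection (c is the cpuinfo_x86 being filled in; not the actual patch):

	/* Sketch: CPUID(1).EBX[15:8] gives the CLFLUSH line size in
	 * 8-byte chunks; remember it for later cache-flushing code. */
	unsigned int eax, ebx, ecx, edx;

	cpuid(1, &eax, &ebx, &ecx, &edx);
	if (edx & (1 << 19))		/* CLFSH feature bit */
		c->x86_clflush_size = ((ebx >> 8) & 0xff) * 8;
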
Andi Kleen
ea7322decb [PATCH] x86-64: Speed and clean up cache flushing in change_page_attr
CLFLUSH is a lot faster than WBINVD, so avoid the latter if at all
possible.

Always pass the complete list of pages to other CPUs to cut down
the number of IPIs.

Minor other cleanup and sync with i386 version.

Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:05 +01:00
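
A minimal sketch of the flushing strategy these two change_page_attr commits describe: flush the affected pages one cache line at a time with CLFLUSH when the CPU supports it, and fall back to WBINVD otherwise (assumed shape, not the actual code):

	/* Sketch: flush one page via CLFLUSH, one cache line at a time,
	 * using the line size detected from CPUID. */
	static void cache_flush_page(void *addr)
	{
		int i;

		for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
			asm volatile("clflush (%0)" :: "r" (addr + i));
	}

	static void flush_pages(struct list_head *pages)
	{
		struct page *pg;

		if (!cpu_has_clflush) {
			/* No CLFLUSH: dump the whole cache instead. */
			asm volatile("wbinvd" ::: "memory");
			return;
		}
		list_for_each_entry(pg, pages, lru)
			cache_flush_page(page_address(pg));
	}
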
Joe Korty
74b47a7844 [PATCH] i386: Fix entry.S code with !CONFIG_VM86
The entry.S code at work_notifysig is surely wrong.  It drops into unrelated
code if the branch to work_notifysig_v86 is taken, and CONFIG_VM86=n.

	[PATCH] Make vm86 support optional
	tree 9b5daef528
	pushed to git Jan 8, 2006, and first appears in 2.6.16

The 'fix' here is to also compile out the vm86 test & branch when
CONFIG_VM86=n.

Signed-off-by: Joe Korty <joe.korty@ccur.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:04 +01:00
Avi Kivity
249e83fe83 [PATCH] x86-64: Extract segment descriptor definitions for use outside
Code that wants to use struct desc_struct cannot do so on i386 because
desc.h contains other code that will only compile on x86_64.

So extract the structure definitions into a new asm-x86_64/desc_defs.h.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 include/asm-x86_64/desc.h      |   53 -------------------------------
 include/asm-x86_64/desc_defs.h |   69 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+), 52 deletions(-)
2006-12-07 02:14:04 +01:00
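
For reference, the kind of segment-descriptor definition such a desc_defs.h would carry; the field names below follow later kernels and are an assumption, not a quote of the patch:

	/* Sketch: an 8-byte GDT/LDT segment descriptor broken into its
	 * architectural bitfields. */
	struct desc_struct {
		u16 limit0;
		u16 base0;
		unsigned base1:8, type:4, s:1, dpl:2, p:1;
		unsigned limit:4, avl:1, l:1, d:1, g:1, base2:8;
	} __attribute__((packed));
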
Vivek Goyal
4c7aa6c3b2 [PATCH] i386: Mark CONFIG_RELOCATABLE EXPERIMENTAL
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:04 +01:00
Vivek Goyal
be274eeaf2 [PATCH] i386: extend bzImage protocol for relocatable protected mode kernel
Extend bzImage protocol to enable bootloaders to load a completely relocatable
bzImage.  Now the protected mode component of the kernel is also relocatable,
and a boot loader can load the protected mode component at a physical
address other than 1MB (if the kernel was built with CONFIG_RELOCATABLE).

Kexec can make use of it to load this kernel at a different physical address
to capture kernel crash dumps.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:04 +01:00
Vivek Goyal
e69f202d0a [PATCH] i386: Implement CONFIG_PHYSICAL_ALIGN
o CONFIG_PHYSICAL_START is now being replaced with CONFIG_PHYSICAL_ALIGN.
  Hardcoding the kernel's physical start address creates a problem in the
  relocatable kernel context due to boot loader limitations. For example, if
  somebody compiles a relocatable kernel meant to run from address 4MB, it
  will still run from 1MB, because grub loads the kernel at physical address
  1MB and a relocatable kernel runs from wherever it has been loaded. So
  somebody wanting to run the kernel from a 4MB-aligned location (for
  improved performance) can't do that.

o Hence, Eric proposed that CONFIG_PHYSICAL_ALIGN makes more sense in the
  relocatable kernel context. At run time the kernel will move itself to a
  physical address that meets the user-specified alignment restriction.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:04 +01:00
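
The alignment restriction amounts to rounding the address the kernel was loaded at up to the next CONFIG_PHYSICAL_ALIGN boundary; an illustrative helper (assumed, not the patch):

	/* Sketch: CONFIG_PHYSICAL_ALIGN is a power of two, so rounding up
	 * is a mask operation. */
	static unsigned long align_kernel_addr(unsigned long load_addr)
	{
		return (load_addr + CONFIG_PHYSICAL_ALIGN - 1) &
		       ~(CONFIG_PHYSICAL_ALIGN - 1);
	}
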
Vivek Goyal
6a044b3a0a [PATCH] i386: Warn upon absolute relocations being present
o Relocations generated w.r.t. absolute symbols are not processed, since by
  definition absolute symbols are not to be relocated. Explicitly warn the
  user at compile time about any absolute relocations that are present.

o These relocations get introduced either due to linker optimizations or
  some programming oversights.

o Also create a list of symbols which have been audited to be safe and
  don't emit warnings for these.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:04 +01:00
Eric W. Biederman
968de4f026 [PATCH] i386: Relocatable kernel support
This patch modifies the i386 kernel so that if CONFIG_RELOCATABLE is
selected it will be able to be loaded at any 4K aligned address below
1G.  The technique used is to compile the decompressor with -fPIC and
modify it so the decompressor is fully relocatable.  For the main
kernel, relocations are generated, resulting in a kernel that is relocatable
with no runtime overhead and no need to modify the source code.

A reserved 32-bit word in the boot parameters has been assigned
to serve as a stack so we can figure out where we are running.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:04 +01:00
Eric W. Biederman
fd593d1277 [PATCH] relocatable kernel: Kallsyms generate relocatable symbols
Print the addresses of non-absolute symbols relative to _text
so that ld will generate relocations, allowing a relocatable
kernel to relocate them.  We can't actually use the symbol names
because kallsyms includes static symbols that are not exported
from their object files.

Add the _text symbol definition to the architectures which don't
define it, otherwise the linker will fail.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:04 +01:00
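
A sketch of the idea in scripts/kallsyms.c: emit non-absolute addresses as offsets from _text so the generated assembly carries relocations rather than fixed numbers (sym_is_absolute() and text_addr are illustrative names, not the patch's):

	/* Sketch: absolute symbols keep their numeric value; everything
	 * else is printed relative to _text so ld emits a relocation. */
	if (sym_is_absolute(s))
		printf("\tPTR\t%#llx\n", s->addr);
	else
		printf("\tPTR\t_text + %#llx\n", s->addr - text_addr);
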
Eric W. Biederman
2a43f3ede4 [PATCH] i386: CONFIG_PHYSICAL_START cleanup
Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works but
it triggers a full kernel rebuild for the silliest of reasons.  This
modifies the users to use CONFIG_PHYSICAL_START and linux/config.h
directly, which prevents the full-rebuild problem and makes the code much
more maintainer-friendly and hopefully user-friendly.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:04 +01:00
Eric W. Biederman
8621b81c74 [PATCH] i386: Reserve kernel memory starting from _text
Currently when we are reserving the memory the kernel text
resides in we start at __PHYSICAL_START which happens to be
correct but not very obvious.  In addition when we start relocating
the kernel __PHYSICAL_START is the wrong value, as it is an
absolute symbol that does not get relocated.

By starting the reservation at __pa_symbol(_text)
the code is clearer and will be correct when relocated.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
Eric W. Biederman
9f45accf17 [PATCH] i386: define __pa_symbol()
On x86_64 we have to be careful with calculating the physical
address of kernel symbols, both because of compiler oddities
and because the symbols live in a different range of the virtual
address space.

Having a definition of __pa_symbol that works on both x86_64 and
i386 simplifies writing code that works for both x86_64 and
i386 that has these kinds of dependencies.

So this patch adds the trivial i386 __pa_symbol definition.

Added assembly magic similar to RELOC_HIDE as suggested by Andi Kleen.
Just picked it up from x86_64.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
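
A sketch of the kind of definition described, using the x86_64 trick of hiding the symbol's address from gcc before converting it (not necessarily the exact patch text):

	/* Sketch: the empty asm stops gcc from applying its compile-time
	 * knowledge of the symbol's address; __pa() then does the normal
	 * virtual-to-physical conversion. */
	#define __pa_symbol(x)				\
	({						\
		unsigned long v;			\
		asm("" : "=r" (v) : "0" (x));		\
		__pa(v);				\
	})
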
Vivek Goyal
6ed018845f [PATCH] i386: Add comment for align to vmlinux.lds
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
Vivek Goyal
6569580de7 [PATCH] i386: Distinguish absolute symbols
ld knows about two kinds of symbols, absolute and section-relative.
Section-relative symbols change value when a section is moved;
absolute symbols do not.

Currently the linker script has several labels marking the beginning
and end of sections; they are themselves placed outside of any section,
which makes them absolute symbols.  Having a mixture of absolute and
section-relative symbols referring to the same data is currently
harmless, but it is confusing.

This must be done carefully, as newer revs of ld do not place
symbols that appear in sections without data; instead ld makes
those symbols global :(

My ultimate goal is to build a relocatable kernel.  The
safest and least intrusive technique is to generate
relocation entries so the kernel can be relocated at load
time.  The only penalty would be an increase in the size
of the kernel binary.  The problem is that if absolute and
relocatable symbols are not properly specified, absolute symbols
will be relocated or section-relative symbols won't be, either of
which is fatal.

The practical motivation is that when generating kernels that
will run from a reserved area for analyzing what caused
a kernel panic, it is simpler if you don't need to hard code
the physical memory location they will run at, especially
for the distributions.

[AK: and merged:]

o Also put a message so that in future people can be aware of it and
  avoid introducing absolute symbols.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
Andi Kleen
67fd44fea2 [PATCH] x86-64: Implement compat code for SIOCSIFHWBROADCAST
This network ioctl wasn't handled before.

Reported by Alexandra.Kossovsky@oktetlabs.ru (Alexandra Kossovsky)
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
Andi Kleen
9c5f8be462 [PATCH] x86: Mention PCI instead of RAM in NMI parity error message
On modern systems RAM errors don't cause NMIs, but it's usually
caused by PCI SERR. Mention PCI instead of RAM in the printk.

Reported by r_hayashi@ctc-g.co.jp (Ryutaro Hayashi)

Cc:  r_hayashi@ctc-g.co.jp

Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
Alan Cox
7cd8b6861e [PATCH] x86: remove last two pci_find offenders in the core code
Resending as I believe the discussion about them established they were
correct.

Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:03 +01:00
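
The conversion follows the usual pci_find -> pci_get pattern; a generic illustration of that pattern (not the specific hunks of this patch):

	/* Sketch: pci_get_device() returns a refcounted device and drops
	 * the reference on 'dev' from the previous iteration, unlike the
	 * unrefcounted pci_find_device(). */
	struct pci_dev *dev = NULL;

	while ((dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) {
		/* ... inspect dev ... */
	}
	/* If the loop is left early, pci_dev_put(dev) must release the
	 * reference still held on the current device. */
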
Andi Kleen
72690a2118 [PATCH] x86: Don't use nested idle loops
Currently the idle loop has two nested loops -- one high-level loop
in cpu_idle() and another one in some of the low-level idle functions.

Looping in the low level idle functions breaks the idle notifiers
because interrupts waking up sleep states need to execute
exit_idle() which is only in cpu_idle().

So don't do that, only loop in cpu_idle(). This only removes
code.

In some cases, e.g. poll_idle, the idle loop is a little longer
now because cpu_idle() checks more things; I hope that isn't a problem.
ACPI idle doesn't change behaviour because it never looped anyway.

Cc: len.brown@intel.com
Cc: eranian@hpl.hp.com
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
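
A rough sketch of the resulting structure, with the only loop in cpu_idle() and the low-level routine doing a single iteration (simplified, not the actual code):

	/* Sketch: the low-level idle routine no longer loops itself. */
	static void poll_idle(void)
	{
		local_irq_enable();
		cpu_relax();		/* one iteration; cpu_idle() loops */
	}

	void cpu_idle(void)
	{
		for (;;) {
			while (!need_resched()) {
				void (*idle)(void) = pm_idle;

				if (!idle)
					idle = default_idle;
				enter_idle();
				idle();
				__exit_idle();
			}
			schedule();
		}
	}
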
Andi Kleen
63cb683c6e [PATCH] i386: PDA: Fix math emulator for new pt_regs
This patch fixes the math emulator, which had not been adjusted
to match the changed struct pt_regs.

AK: extracted from larger patch by Jeremy.
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
Jeremy Fitzhardinge
70463daca8 [PATCH] i386: Store the interrupt regs pointer in the PDA
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:03 +01:00
Jeremy Fitzhardinge
ec7fcaabbf [PATCH] i386: Implement "current" with the PDA
Use the pcurrent field in the PDA to implement the "current" macro.  This ends
up compiling down to a single instruction to get the current task.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:03 +01:00
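
The single-instruction "current" the commit describes boils down to a %gs-relative load from the PDA; a minimal sketch assuming the read_pda() accessor from the PDA patches:

	/* Sketch: fetch the current task pointer straight from the per-cpu
	 * PDA through the %gs segment. */
	static __always_inline struct task_struct *get_current(void)
	{
		return read_pda(pcurrent);
	}

	#define current	get_current()
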
Jeremy Fitzhardinge
b2938f8808 [PATCH] i386: Implement smp_processor_id() with the PDA
Use the cpu_number in the PDA to implement raw_smp_processor_id.  This is a
little simpler than using thread_info, though the cpu field in thread_info
cannot be removed since it is used for things other than getting the current
CPU in common code.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:03 +01:00
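
Correspondingly, a one-line sketch of raw_smp_processor_id() built on the PDA's cpu_number field:

	/* Sketch: the CPU number lives in the PDA, so no thread_info
	 * dereference is needed. */
	#define raw_smp_processor_id()	(read_pda(cpu_number))
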
Jeremy Fitzhardinge
49d26b6eaa [PATCH] i386: Update sys_vm86 to cope with changed pt_regs and %gs usage
sys_vm86 uses a struct kernel_vm86_regs, which is identical to pt_regs, but
adds an extra space for all the segment registers.  Previously this structure
was completely independent, so changes in pt_regs had to be reflected in
kernel_vm86_regs.  This changes just embeds pt_regs in kernel_vm86_regs, and
makes the appropriate changes to vm86.c to deal with the new naming.

Also, since %gs is dealt with differently in the kernel, this change adjusts
vm86.c to reflect this.

While making these changes, I also cleaned up some frankly bizarre code which
was added when auditing was added to sys_vm86.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:03 +01:00
Jeremy Fitzhardinge
66e10a44d7 [PATCH] i386: Fix places where using %gs changes the usermode ABI
There are a few places where the change in struct pt_regs and the use of %gs
affect the userspace ABI.  These are primarily debugging interfaces where
thread state can be inspected or extracted.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:02 +01:00
Jeremy Fitzhardinge
f95d47caae [PATCH] i386: Use %gs as the PDA base-segment in the kernel
This patch is the meat of the PDA change.  This patch makes several related
changes:

1: Most significantly, %gs is now used in the kernel.  This means that on
   entry, the old value of %gs is saved away, and it is reloaded with
   __KERNEL_PDA.

2: entry.S constructs the stack in the shape of struct pt_regs, and this
   is passed around the kernel so that the process's saved register
   state can be accessed.

   Unfortunately struct pt_regs doesn't currently have space for %gs
   (or %fs). This patch extends pt_regs to add space for gs (no space
   is allocated for %fs, since it won't be used, and it would just
   complicate the code in entry.S to work around the space).

3: Because %gs is now saved on the stack like %ds, %es and the integer
   registers, there are a number of places where it no longer needs to
   be handled specially; namely context switch, and saving/restoring the
   register state in a signal context.

4: And since kernel threads run in kernel space and call normal kernel
   code, they need to be created with their %gs == __KERNEL_PDA.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:02 +01:00
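
For orientation, a sketch of an i386 struct pt_regs with the extra slot for the saved %gs; the field names follow the old i386 convention and are an assumption, not the patch itself:

	/* Sketch: kernel-side view of the saved register frame, with a new
	 * xgs slot next to xds/xes; no slot is added for %fs. */
	struct pt_regs {
		long ebx, ecx, edx, esi, edi, ebp, eax;
		int  xds, xes;
		int  xgs;		/* new: saved %gs */
		long orig_eax, eip;
		int  xcs;
		long eflags, esp;
		int  xss;
	};
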
Jeremy Fitzhardinge
6211119580 [PATCH] i386: Initialize the per-CPU data area
When a CPU is brought up, a PDA and GDT are allocated for it.  The GDT's
__KERNEL_PDA entry is pointed to the allocated PDA memory, so that all
references using this segment descriptor will refer to the PDA.

This patch rearranges CPU initialization a bit, so that the GDT/PDA are set up
as early as possible in cpu_init().  Also for secondary CPUs, GDT+PDA are
preallocated and initialized so all the secondary CPU needs to do is set up
the ldt and load %gs.  This will be important once smp_processor_id() and
current use the PDA.

In all cases, the PDA is set up in head.S, before a CPU starts running C code,
so the PDA is always available.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Cc: James Bottomley <James.Bottomley@SteelEye.com>
Cc: Matt Tolentino <matthew.e.tolentino@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:02 +01:00
Jeremy Fitzhardinge
9ca36101a8 [PATCH] i386: Basic definitions for i386-pda
This patch has the basic definitions of struct i386_pda, and the segment
selector in the GDT.

asm-i386/pda.h is more or less a direct copy of asm-x86_64/pda.h.  The most
interesting difference is the use of _proxy_pda, which is used to give gcc a
model for the actual memory operations on the real pda structure.  No actual
reference is ever made to _proxy_pda, so it is never defined.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:02 +01:00
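
A condensed sketch of what such a pda.h provides: the structure and a %gs-relative accessor that uses the never-defined _proxy_pda purely as a memory model for gcc (simplified, no size dispatch; details are assumptions):

	/* Sketch: per-cpu data reachable through the %gs segment. */
	struct i386_pda {
		struct i386_pda *_pda;		/* pointer to self */
		int cpu_number;
		struct task_struct *pcurrent;	/* current process */
	};

	extern struct i386_pda _proxy_pda;	/* never defined */

	#define pda_offset(field)	offsetof(struct i386_pda, field)

	#define read_pda(field)						\
	({								\
		typeof(_proxy_pda.field) ret__;				\
		asm("mov %%gs:%c1, %0"					\
		    : "=r" (ret__)					\
		    : "i" (pda_offset(field)),				\
		      "m" (_proxy_pda.field));				\
		ret__;							\
	})
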
Jeremy Fitzhardinge
eb5b7b9d86 [PATCH] i386: Use asm-offsets for the offsets of registers into the pt_regs struct
Use asm-offsets for the offsets of registers into the pt_regs struct,
rather than having hard-coded constants.

I left the constants in the comments of entry.S because they're useful for
reference; the code in entry.S is very dependent on the layout of pt_regs,
even when using asm-offsets.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Keith Owens <kaos@ocs.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:02 +01:00
Jan Beulich
bcddc0155f [PATCH] x86-64: miscellaneous entry.S adjustments
This patch:
- makes ret_from_sys_call no longer global (all external users were
  previously switched to use int_ret_from_sys_call)
- adjusts placement of a CFI_{REMEMBER,RESTORE}_STATE pair to better
  fit logic flow
- eliminates an unnecessary pair of CFI_{REMEMBER,RESTORE}_STATE
- glues together function- and unwinder-wise the previously separate
  system_call and int_ret_from_sys_call function fragments

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:02 +01:00
Andrew Morton
da68933e0a [PATCH] x86-64: dump_trace() atomicity fix
Fix

BUG: using smp_processor_id() in preemptible [00000001] code:

in backtracer on preemptible debug kernels.

Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:02 +01:00
Stephane Eranian
bb0d977ed4 [PATCH] i386: add Intel Core related PMU MSRs
- add Intel Precise-Event Based sampling (PEBS) related MSR
- add Intel Data Save (DS) Area related MSR
- add Intel Core microarchitecture performance counter MSRs

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:02 +01:00
Stephane Eranian
86efef50cf [PATCH] x86-64: x86-64 add Intel Core related PMU MSRs definitions
Add to the x86-64 tree a bunch of MSRs related to performance
monitoring for processors based on the Intel Core microarchitecture.
It also adds some architectural MSRs for PEBS. A similar patch for i386 will
follow.

changelog:
	- add Intel Precise-Event Based sampling (PEBS) related MSR
	- add Intel Data Save (DS) Area related MSR
	- add Intel Core microarchitecture performance counter MSRs

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:02 +01:00
Amol Lad
fa5cecd111 [PATCH] i386: add missing iounmap in i386 hpet clocksource code
ioremap must be balanced by an iounmap and failing to do so can result
in a memory leak.

Tested (compilation only):
- using allmodconfig
- making sure the files compile without any warning/error due to the
new changes

Signed-off-by: Amol Lad <amol@verismonetworks.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:02 +01:00
Amol Lad
c0e84b9901 [PATCH] i386: Add iounmap in error paths in hpet code
Signed-off-by: Amol Lad <amol@verismonetworks.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:02 +01:00
Aaron Durbin
399287229c [PATCH] x86-64: Insert Local and IO APIC(s) into resource map
Insert the Local APIC and IO APIC(s) into the resource tree.  It allows the
APIC resources to be visible within /proc/iomem.  The patch also takes into
account IO APIC(s) mapped in the PCI space by deferring the insertion until
after PCI has allocated its necessary resources.

Signed-off-by: Aaron Durbin <adurbin@google.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@muc.de>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:01 +01:00
Chuck Ebbert
acc207616a [PATCH] i386: add sleazy FPU optimization
i386 port of the sLeAZY-fpu feature.  Chuck reports that this gives him a +/-
0.4% improvement on his simple benchmark.

x86_64 description follows:

Right now the kernel on x86-64 has a 100% lazy fpu behavior: after *every*
context switch a trap is taken for the first FPU use to restore the FPU
context lazily.  This is of course great for applications that have very
sporadic or no FPU use (since then you avoid doing the expensive save/restore
all the time).  However for very frequent FPU users...  you take an extra trap
every context switch.

The patch below adds a simple heuristic to this code: After 5 consecutive
context switches of FPU use, the lazy behavior is disabled and the context
gets restored every context switch.  If the app indeed uses the FPU, the trap
is avoided.  (The chance of the 6th time slice using the FPU after the
previous 5 have done so is obviously quite high.)

After 256 switches, this is reset and the lazy behavior is restored (until
there are 5 consecutive FPU-using switches again).  The reason for this is
to give apps that do longer bursts of FPU use the lazy behavior back after
some time.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:01 +01:00
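
A sketch of the heuristic's two halves as described above (the fpu_counter name and threshold come from the description; the exact placement is an assumption):

	/* Sketch, in the device-not-available trap path: each lazy restore
	 * counts as FPU use.  The counter is a narrow integer, so it wraps
	 * and the lazy behavior eventually returns. */
	tsk->fpu_counter++;

	/* Sketch, in the context-switch path: after more than 5 consecutive
	 * FPU-using timeslices, restore the state eagerly instead of waiting
	 * for the trap. */
	if (next_p->fpu_counter > 5)
		math_state_restore();
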
Stas Sergeev
be44d2aabc [PATCH] i386: espfix cleanup
Clean up the espfix code:

- Introduced PER_CPU() macro to be used from asm
- Introduced GET_DESC_BASE() macro to be used from asm
- Rewrote the fixup code in asm, as calling C code with the altered %ss
  appeared to be unsafe
- No longer altering the stack from a .fixup section
- The 16-bit per-cpu stack is no longer used; instead the stack segment base
  is patched in such a way that the high word of the kernel and user %esp
  are the same.
- Added the limit-patching for the espfix segment. (Chuck Ebbert)

[jeremy@goop.org: use the x86 scaling addressing mode rather than shifting]
Signed-off-by: Stas Sergeev <stsp@aknet.ru>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Zachary Amsden <zach@vmware.com>
Acked-by: Chuck Ebbert <76306.1226@compuserve.com>
Acked-by: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:01 +01:00
Andrew Morton
bb81a09e55 [PATCH] x86: all cpu backtrace
When a spinlock lockup occurs, arrange for the NMI code to emit an all-cpu
backtrace, so we get to see which CPU is holding the lock, and where.

Cc: Andi Kleen <ak@muc.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:01 +01:00
Jeremy Fitzhardinge
e5e3a04289 [PATCH] i386: remove default_ldt, and simplify ldt-setting.
This patch removes the default_ldt[] array, as it has been unused since
iBCS stopped being supported.  This means it is now possible to actually
set an empty LDT segment.

In order to deal with this, the set_ldt_desc/load_LDT pair has been
replaced with a single set_ldt() operation which is responsible for both
setting up the LDT descriptor in the GDT, and reloading the LDT register.
If there are no LDT entries, the LDT register is loaded with a NULL
descriptor.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Acked-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:01 +01:00
Alexey Dobriyan
e2764a1e30 [PATCH] x86-64: use BUILD_BUG_ON in FPU code
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:01 +01:00
Stephane Eranian
42ed458aa5 [PATCH] i386: i386 add X86_FEATURE_PEBS and detection
Here is a patch (used by perfmon2) to detect the presence of the Precise Event
Based Sampling (PEBS) feature for i386.  The patch also adds the cpu_has_pebs
macro.

- adds X86_FEATURE_PEBS

- adds cpu_has_pebs to test for X86_FEATURE_PEBS

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:01 +01:00
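
A sketch of how such a synthetic feature bit can be derived: DS comes straight from CPUID, and PEBS is usable only when DS is present and the "PEBS unavailable" bit of IA32_MISC_ENABLE is clear (bit positions per the Intel documentation; the code shape is an assumption, not the patch):

	/* Sketch: cpu_has_ds reflects CPUID.1:EDX bit 21 (Debug Store);
	 * PEBS becomes a synthetic flag derived from IA32_MISC_ENABLE. */
	if (cpu_has_ds) {
		unsigned int l1, l2;

		rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
		if (!(l1 & (1 << 12)))	/* bit 12: PEBS unavailable */
			set_bit(X86_FEATURE_PEBS, c->x86_capability);
	}
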
Stephane Eranian
d7731c0ff6 [PATCH] i386: i386 rename X86_FEATURE_DTES to X86_FEATURE_DS
Here is a patch (used by perfmon2) that renames X86_FEATURE_DTES to
X86_FEATURE_DS to match Intel's documentation for the Debug Store save area on
i386.  The patch also adds cpu_has_ds.

- rename X86_FEATURE_DTES to X86_FEATURE_DS to match documentation

- adds cpu_has_ds to test for X86_FEATURE_DS

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07 02:14:01 +01:00
Yinghai Lu
d8cebe65ea [PATCH] x86-64: remove duplicated cpu_mask_to_apicid in x86_64 smp.h
The inline function cpu_mask_to_apicid in smp.h duplicates the macro
in mach_apic.h.

Signed-off-by: Yinghai Lu <yinghai.lu@amd.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:01 +01:00
Stephane Eranian
36b2a8d5af [PATCH] x86-64: add X86_FEATURE_PEBS and detection
Here is a patch (used by perfmon2) to detect the presence of the
Precise Event Based Sampling (PEBS) feature for Intel 64-bit processors.
The patch also adds the cpu_has_pebs macro.

changelog:
	- adds X86_FEATURE_PEBS
	- adds cpu_has_pebs to test for X86_FEATURE_PEBS

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:01 +01:00
Stephane Eranian
bd1d599518 [PATCH] x86-64: x86_64 rename X86_FEATURE_DTES to X86_FEATURE_DS
Here is a patch (used by perfmon2) that renames X86_FEATURE_DTES
to X86_FEATURE_DS to match Intel's documentation for the Debug Store
save area. The patch also adds cpu_has_ds.

changelog:
	- rename X86_FEATURE_DTES to X86_FEATURE_DS to match documentation
	- adds cpu_has_ds to test for X86_FEATURE_DS

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:00 +01:00
Andi Kleen
87e1652c78 [PATCH] x86-64: Don't keep interrupts disabled while spinning in spinlocks
Follows i386.

Based on a patch from some folks at Google (MikeW, Edward G.?), but
completely redone by AK.

Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07 02:14:00 +01:00