path: root/arch/ia64/kernel
Commit message | Author | Age | Files | Lines
* [IA64] Use "PER_CPU" form of EXPORT macroTony Luck2005-05-311-1/+1
| | | | | | I was gently reminded that there are per-cpu forms of the EXPORT_SYMBOL macros. Signed-off-by: Tony Luck <tony.luck@intel.com>
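For reference, the per-cpu export form looks roughly like this (a minimal sketch; the variable name is illustrative, not the symbol touched by the patch):

    #include <linux/module.h>
    #include <linux/percpu.h>

    /* Hypothetical per-cpu variable made visible to modules. */
    DEFINE_PER_CPU(unsigned long, example_counter);
    EXPORT_PER_CPU_SYMBOL(example_counter);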
* [IA64] sys_mmap doesn't follow posix.1 when parameter len=0 | Zhang Yanmin | 2005-05-26 | 1 | -7/+0
In the IA64 kernel, sys_mmap calls do_mmap2, and do_mmap2 returns addr when len=0, which means the mmap syscall succeeds. POSIX.1 says: "The mmap() function shall fail if: [EINVAL] The value of len is zero." Here is a patch to fix it.
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Acked-by: David Mosberger <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
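To illustrate the POSIX rule the fix enforces, a minimal user-space check (hypothetical test program, not part of the patch): a zero-length mapping must fail with EINVAL rather than appear to succeed.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* POSIX.1: mmap() shall fail with EINVAL when len is zero. */
            void *p = mmap(NULL, 0, PROT_READ,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED && errno == EINVAL)
                    printf("ok: len=0 rejected with EINVAL\n");
            else
                    printf("unexpected: mmap(len=0) did not fail with EINVAL\n");
            return 0;
    }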
* [IA64] initialize spinlock pfm_alt_install_check | Tony Luck | 2005-05-18 | 1 | -1/+1
I applied the penultimate version of the perfmon patch, which didn't have the initialization of the new spinlock that was added.
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] alternate perfmon handler | Tony Luck | 2005-05-18 | 1 | -15/+160
Patch from Charles Spirakis.
Some Linux customers want to optimize their applications on the latest hardware but are not yet willing to upgrade to the latest kernel. This patch provides a way to plug in an alternate, basic, and GPL'ed PMU subsystem to help with their monitoring needs or for specialty work. It can also be used in case of serious unexpected bugs in perfmon. Mutual exclusion between the two subsystems is guaranteed, hence no conflict can arise from both subsystems being present.
Acked-by: Stephane Eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] cpe interrupts are not being enabled. | Russ Anderson | 2005-05-17 | 1 | -2/+2
acpi_request_vector() is called in ia64_mca_init() to get the cpe_vector. The problem is that acpi_request_vector() looks in platform_intr_list[] to get the vector, but platform_intr_list[] is not initialized with a valid vector until later (in sn_setup()). Without a valid vector the code defaults to polling mode. This patch moves the call to acpi_request_vector() from ia64_mca_init() to ia64_mca_late_init(), which is after platform_intr_list[] is initialized.
Signed-off-by: Russ Anderson (rja@sgi.com)
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Correct convert_to_non_syscall() | David Mosberger-Tang | 2005-05-17 | 1 | -3/+17
convert_to_non_syscall() has the same problem that unwind_to_user() used to have. Fix it likewise.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Avoid .spillpsp directive in handcoded assembly | David Mosberger-Tang | 2005-05-10 | 1 | -2/+2
Some time ago, GAS was fixed to bring the .spillpsp directive in line with the Intel assembler manual (there was some disagreement as to whether or not there is a built-in 16-byte offset). Unfortunately, there are two places in the kernel where this directive is used in handwritten assembly files, and those of course relied on the "buggy" behavior. As a result, when using a "fixed" assembler, the kernel picks up the UNaT bits from the wrong place (off by 16) and randomly sets NaT bits on the scratch registers. This can be noticed easily by looking at a coredump and finding various scratch registers with unexpected NaT values. The patch below fixes this by using the .spillsp directive instead, which works correctly no matter what assembler is in use.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] fix "section mismatch" compile-time-errorDavid Mosberger-Tang2005-05-091-1/+1
| | | | | | | | | | | | | | I noticed this typo when trying to compile a kernel which had CONFIG_HOTPLUG turned off. In that case, __devinit is no longer a no-op and the compiler then detects a section-conflict. Fix by using __devinitdata instead of __devinit. Same patch also submitted by Darren Williams to fix compilation error using sim_defconfig (which has CONFIG_HOTPLUG=n). Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Darren Williams <dsw@gelato.unsw.edu.au> Signed-off-by: Tony Luck <tony.luck@intel.com>
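The distinction, in a hedged example (names are made up): __devinit annotates functions and __devinitdata annotates data, so data tagged __devinit is destined for the init text section and the compiler reports a section conflict once CONFIG_HOTPLUG=n makes the annotations real.

    #include <linux/init.h>

    /* Data must use __devinitdata; __devinit is for functions. */
    static int example_table[] __devinitdata = { 1, 2, 3 };

    static int __devinit example_probe(void)
    {
            return example_table[0];
    }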
* [IA64] Fix stack placement when INIT hits in kernel mode. | David Mosberger-Tang | 2005-05-06 | 1 | -2/+1
Without this patch, the stack is placed _below_ the current task structure, which is risky at best. Tony, I think this patch needs to go into 2.6.12, since it fixes a real bug. Without it, INIT may cause secondary errors, which would be most unpleasant.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git | David Woodhouse | 2005-05-05 | 7 | -54/+93
|\
| * [IA64] Fix two warnings introduced by perfmon patches. | Tony Luck | 2005-05-03 | 1 | -3/+1
Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] another perfmon fix (take2) | stephane eranian | 2005-05-03 | 1 | -10/+20
- pfm_context_load(): change return value from EINVAL to EBUSY when context is already loaded.
- pfm_check_task_state(): pass test if context state is MASKED. It is safe to give access on PFM_CTX_MASKED because the PMU state (PMD) is stable and saved in software state. This helps multiplexing programs such as the example given in libpfm-3.1.
Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
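A hedged sketch of the first item, the return-code change only (the state constants mirror perfmon's naming, but the function here is illustrative):

    #include <errno.h>

    enum pfm_ctx_state { PFM_CTX_UNLOADED, PFM_CTX_LOADED, PFM_CTX_MASKED };

    static int check_load_allowed(enum pfm_ctx_state state)
    {
            /* A context that is already loaded is now reported as busy
             * (-EBUSY) instead of invalid (-EINVAL), so callers can tell
             * "try again later" apart from "bad request". */
            if (state != PFM_CTX_UNLOADED)
                    return -EBUSY;
            return 0;
    }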
| * [IA64] perfmon & PAL_HALT againStephane Eranian2005-05-032-3/+24
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The pmu_active test is based on the values of PSR.up. THIS IS THE PROBLEM as it does not take into account the lazy restore logic which is as follow (simplified): context switch out: save PMDs clear psr.up release ownership context switch in: if (ctx->last_cpu == smp_processor_id() && ctx->cpu_activation == cpu_activation) { set psr.up return } restore PMD restore PMC ctx->last_cpu = smp_processor_id(); ctx->activation = ++cpu_activation; set psr.up The key here is that on context switch out, we clear psr.up and on context switch in we check if nobody else used the PMU on that processor since last time we came. In that case, we assume the PMD/PMC are ours and we simply reactivate. The Caliper problem is that between the moment we context switch out and the moment we come back, nobody effectively used the PMU BUT the processor went idle. Normally this would have no incidence but PAL_HALT does alter the PMU registers. In default_idle(), the test on psr.up is not strong enough to cover this case and we go into PAL which trashed the PMU resgisters. When we come back we falsely assume that this is our state yet it is corrupted. Very nasty indeed. To avoid the problem it is necessary to forbid going to PAL_HALT as soon as perfmon installs some valid state in the PMU registers. This happens with an application attaches a context to a thread or CPU. It is not enough to check the psr/dcr bits. Hence I propose the attached patch. It adds a callback in process.c to modify the condition to enter PAL on idle. Basically, now it is conditional to pal_halt=1 AND perfmon saying it is okay. Signed-off-by: Tony Luck <tony.luck@intel.com>
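The gating idea, as an illustrative sketch (function and variable names here are approximations, not the exact patch): perfmon clears a flag while it has live PMU state, and the idle loop only enters PAL_HALT when both the boot option and that flag allow it.

    /* Illustrative idle-loop gating; names are approximate. */
    static int pal_halt = 1;            /* "pal_halt" boot option          */
    static int can_do_pal_halt = 1;     /* cleared while perfmon is active */

    void update_pal_halt_status(int allow)
    {
            can_do_pal_halt = allow;    /* called by perfmon on load/unload */
    }

    static void default_idle_sketch(void)
    {
            if (pal_halt && can_do_pal_halt)
                    safe_halt();        /* may enter PAL_HALT and touch PMU regs */
            else
                    cpu_relax();        /* just spin; PMU state stays intact     */
    }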
| * [IA64] MCA recovery improvements | Russ Anderson | 2005-05-03 | 2 | -5/+17
Jack Steiner uncovered some opportunities for improvement in the MCA recovery code.
1) Set bsp to save registers on the kernel stack.
2) Disable interrupts while in the MCA recovery code.
3) Change the way the user process is killed, to avoid a panic in schedule.
Testing shows that these changes make the recovery code much more reliable with the 2.6.12 kernel.
Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] fix ia64 syscall auditing | David Woodhouse | 2005-05-03 | 2 | -2/+5
Attached is a patch against David's audit.17 kernel that adds checks for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and signal handling code paths. The patch enables auditing of system calls set up via fsys_bubble_down, as well as ensuring that audit_syscall_exit() is called on return from sigreturn. Neglecting to check for TIF_SYSCALL_AUDIT at these points results in incorrect information in audit_context, causing frequent system panics when system call auditing is enabled on an ia64 system. I have tested this patch and have seen no problems with it.
[Original patch from Amy Griffis ported to current kernel by David Woodhouse]
From: Amy Griffis <amy.griffis@hp.com>
From: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Chris Wright <chrisw@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] reduce cacheline bouncing in cpu_idle_wait | Zwane Mwaikambo | 2005-05-03 | 1 | -15/+26
Andi noted that during normal runtime cpu_idle_map is bounced around a lot, and occasionally at a higher frequency than the timer interrupt wakeup which we normally exit pm_idle from. So switch to a percpu variable. I didn't move things to the slow path because it would involve adding scheduler code to wake up the idle thread on the cpus we're waiting for.
Signed-off-by: Zwane Mwaikambo <zwane@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
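A minimal sketch of the idea (names illustrative): instead of every idle CPU polling one shared map, each CPU polls a flag in its own per-cpu data, so the hot loop stays within a local cache line.

    #include <linux/percpu.h>

    /* Per-cpu flag: "this CPU should leave its current idle handler". */
    static DEFINE_PER_CPU(unsigned int, idle_needs_refresh);

    static int should_exit_idle(void)
    {
            /* Only this CPU's cache line is read here. */
            return __get_cpu_var(idle_needs_refresh);
    }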
| * [IA64] use common pxm function | Alex Williamson | 2005-05-03 | 1 | -18/+5
This patch simplifies a couple places where we search for _PXM values in ACPI namespace. Thanks,
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] fix typos caught by new assembler | David Mosberger-Tang | 2005-05-03 | 1 | -1/+1
Patch below fixes 3 trivial typos which are caught by the new assembler (v2.169.90). Please apply.
[Note: fix to memcpy that was also part of this patch was separately applied from patches by H.J. and Andreas ... so the delta here only has the other two fixes. -Tony]
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* | Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git | David Woodhouse | 2005-05-03 | 3 | -17/+4
|\|
| * [PATCH] convert that currently tests _NSIG directly to use valid_signal() | Jesper Juhl | 2005-05-01 | 1 | -2/+3
Convert most of the current code that uses _NSIG directly to instead use valid_signal(). This avoids gcc -W warnings and off-by-one errors.
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
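The substitution being made, sketched (valid_signal() lives in <linux/signal.h>; the wrapper function is illustrative):

    #include <linux/errno.h>
    #include <linux/signal.h>

    static int check_sig(unsigned long sig)
    {
            /* Open-coded tests such as "if (sig > _NSIG)" are easy to get
             * off by one; the helper owns the bound check. */
            if (!valid_signal(sig))
                    return -EINVAL;
            return 0;
    }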
| * [PATCH] consolidate sys_shmat | Stephen Rothwell | 2005-05-01 | 2 | -15/+1
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | [PATCH] fix ia64 syscall auditing | Amy Griffis | 2005-04-29 | 2 | -2/+5
Attached is a patch against David's audit.17 kernel that adds checks for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and signal handling code paths. The patch enables auditing of system calls set up via fsys_bubble_down, as well as ensuring that audit_syscall_exit() is called on return from sigreturn. Neglecting to check for TIF_SYSCALL_AUDIT at these points results in incorrect information in audit_context, causing frequent system panics when system call auditing is enabled on an ia64 system.
Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
* [AUDIT] Don't allow ptrace to fool auditing, log arch of audited syscalls. | 2005-04-29 | 1 | -8/+13
We were calling ptrace_notify() after auditing the syscall and arguments, but the debugger could have _changed_ them before the syscall was actually invoked. Reorder the calls to fix that. While we're touching every call to audit_syscall_entry(), we also make it take an extra argument: the architecture of the syscall which was made, because some architectures allow more than one type of syscall. Also add an explicit success/failure flag to audit_syscall_exit(), for the benefit of architectures which return that in a condition register rather than only returning a single register. Change type of syscall return value to 'long' not 'int'.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
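The reordering principle, as a rough sketch (the helpers below are hypothetical stand-ins for the per-arch entry code, not real kernel APIs):

    /* Hypothetical outline of a syscall-entry path after the reordering. */
    struct pt_regs;

    static void let_tracer_rewrite_syscall(struct pt_regs *regs) { /* ptrace stop */ }
    static void record_audit_entry(struct pt_regs *regs, int arch) { /* audit log */ }

    static void syscall_entry_outline(struct pt_regs *regs, int arch)
    {
            /* 1. The debugger may change the syscall number and arguments. */
            let_tracer_rewrite_syscall(regs);

            /* 2. Only now log them, tagged with the syscall ABI in use,
             *    since one kernel may serve more than one ABI. */
            record_audit_entry(regs, arch);
    }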
* [IA64] iosapic.c: typo ... s/spin_unlock_irq/spin_unlock/ | Kenji Kaneshige | 2005-04-25 | 1 | -1/+1
vector sharing patch had a typo ... mismatched spin_lock() with a spin_unlock_irq(). Fix from Kenji Kaneshige.
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] print "siblings" before {physical,core,thread} idTony Luck2005-04-251-1/+1
| | | | | | | | Rohit and Suresh changed their mind about the order to print things in /proc/cpuinfo, but didn't include the change in the version of the patch they sent to me. Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] vector sharing (Large I/O system support) | Kenji Kaneshige | 2005-04-25 | 2 | -89/+285
Current ia64 Linux cannot handle more than 184 interrupt sources because of the lack of vectors. The following patch enables ia64 Linux to handle more than 184 interrupt sources by allowing the same vector number to be shared by multiple IOSAPICs' RTEs. The design of this patch is based on the "Intel(R) Itanium(R) Processor Family Interrupt Architecture Guide". Even if you don't have a large I/O system, you can see the behavior of vector sharing by changing IOSAPIC_LAST_DEVICE_VECTOR to a smaller value.
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] multi-core/multi-thread identification | Suresh Siddha | 2005-04-25 | 2 | -2/+273
Version 3 - rediffed to apply on top of Ashok's hotplug cpu patch. /proc/cpuinfo output in step with x86.
This is an updated MC/MT identification patch based on the previous discussions on list. It adds multi-core and multi-threading detection for IPF:
- Add new core- and threading-related fields in /proc/cpuinfo: physical id, core id, thread id, siblings.
- Set up the cpu_core_map and cpu_sibling_map appropriately.
- Handles CPU hotplug.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Gordon Jin <gordon.jin@intel.com>
Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
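A hedged sketch of how such sibling maps are typically filled once the physical and core ids are known (map and array names here are illustrative; the era used cpu_set() on cpumask_t):

    #include <linux/cpumask.h>

    static cpumask_t core_map_sketch[NR_CPUS];     /* CPUs on the same socket  */
    static cpumask_t sibling_map_sketch[NR_CPUS];  /* threads on the same core */
    static int phys_id[NR_CPUS], core_id[NR_CPUS];

    static void link_cpus(int cpu, int other)
    {
            if (phys_id[cpu] != phys_id[other])
                    return;                         /* different packages */
            cpu_set(other, core_map_sketch[cpu]);   /* same physical package */
            if (core_id[cpu] == core_id[other])
                    cpu_set(other, sibling_map_sketch[cpu]);  /* same core */
    }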
* [IA64] fix syscall-optimization goof | David Mosberger-Tang | 2005-04-25 | 1 | -1/+1
Sadly, I goofed in this syscall-tuning patch:

    ChangeSet 1.1966.1.40 2005/01/22 13:31:05 davidm@hpl.hp.com
    [IA64] Improve ia64_leave_syscall() for McKinley-type cores.
    Optimize ia64_leave_syscall() a bit better for McKinley-type cores.
    The patch looks big, but that's mostly due to renaming r16/r17 to r2/r3.
    Good for a 13 cycle improvement.

The problem is that the size of the physical stacked registers was loaded into the wrong register (r3 instead of r17). Since r17 by coincidence always had the value 1, this had the effect of turning rse_clear_invalid into a no-op. That poses the risk of leaking kernel state back to user-land and is hence not acceptable. The fix below is simple, but unfortunately it costs us about 28 cycles in syscall overhead. ;-( Unfortunately, there isn't much we can do about that since those registers have to be cleared one way or another. --david
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] perfmon: make pfm_sysctl a global, and other cleanup | Stephane Eranian | 2005-04-25 | 2 | -42/+30
- make pfm_sysctl a global, such that it is possible to enable/disable debug printk in sampling formats using PFM_DEBUG.
- remove unused pfm_debug_var variable.
- fix a bug in pfm_handle_work where a BUG_ON() could be triggered. There is a path where pfm_handle_work() can be called with interrupts enabled, i.e., when TIF_NEED_RESCHED is set. The fix corrects the masking and unmasking of interrupts in pfm_handle_work() such that we restore the interrupt mask as it was upon entry.
signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
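The masking pattern the third item restores is the standard save/restore idiom, shown here out of context (the function is illustrative):

    #include <linux/spinlock.h>   /* pulls in local_irq_save()/restore() */

    static void handle_work_sketch(void)
    {
            unsigned long flags;

            /* Remember the caller's interrupt state instead of assuming
             * interrupts were already disabled on entry ... */
            local_irq_save(flags);

            /* ... do the work that needs interrupts masked ... */

            /* ... and leave things exactly as we found them. */
            local_irq_restore(flags);
    }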
* [IA64] speed up syscall path a bit more | David Mosberger-Tang | 2005-04-25 | 1 | -6/+6
Recently I noticed that clearing ar.ssd/ar.csd right before srlz.d is causing significant stalling in the syscall path. The patch below fixes that by moving the register-writes after srlz.d. On a Madison, this drops break-based getpid() from 241 to 226 cycles (-15 cycles).
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Tighten up unw_unwind_to_user check | Keith Owens | 2005-04-25 | 1 | -10/+17
Detect user space by the unwind frame with predicate PRED_USER_STACK set, instead of a user space IP. Tighten up the last ditch check for running off the top of the kernel stack. Based on a suggestion by David Mosberger, reworked to fit the current tree. This survives my stress test which used to break 2.6.9 kernels. Unlike 2.6.11, the stress test now unwinds to the correct point, so gdb can get the user space registers.
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] add missing cpu_relax() in ITC syncing code | David Mosberger-Tang | 2005-04-25 | 1 | -4/+7
Call cpu_relax() in busy-waiting loops of the ITC-syncing code.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
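The idiom being added, in its simplest form (an illustrative wait loop, not the ITC code itself):

    #include <asm/processor.h>    /* cpu_relax() */

    /* Busy-wait politely: cpu_relax() hints to the CPU (and any SMT sibling)
     * that we are spinning, and the volatile pointer keeps the compiler from
     * caching the flag in a register. */
    static void wait_for_flag(volatile int *flag)
    {
            while (!*flag)
                    cpu_relax();
    }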
* [IA64] Fix build errors for !HOTPLUG case. | Ashok Raj | 2005-04-22 | 1 | -3/+7
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] cpu hotplug: return offlined cpus to SAL | Ashok Raj | 2005-04-22 | 4 | -110/+361
This patch is required to support cpu removal for IPF systems. Existing code just fakes the real offline by keeping the CPU running the idle thread, and polling for the bit to re-appear in cpu_state to get out of the idle loop. For cpu-offline to work correctly, we need to pass control of this CPU back to SAL so it can continue in the boot-rendez mode. This gives SAL control to not pick this cpu as the monarch processor for global MCA events, and in addition it does not wait for this cpu to check in with SAL for global MCA events. The handoff is implemented as documented in SAL specification section 3.2.5.1, "OS_BOOT_RENDEZ to SAL return State".
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) | Linus Torvalds | 2005-04-16 | 58 | -0/+35362
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!