1. 12 Mar, 2010 1 commit
  2. 01 Mar, 2010 26 commits
  3. 27 Feb, 2010 2 commits
    • x86, paravirt: Remove kmap_atomic_pte paravirt op. · dad52fc0
      Ian Campbell authored
      Now that both Xen and VMI disable allocations of PTE pages from high
      memory, this paravirt op serves no further purpose.
      
      This effectively reverts ce6234b5 "add kmap_atomic_pte for mapping
      highpte pages".
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      LKML-Reference: <1267204562-11844-3-git-send-email-ian.campbell@citrix.com>
      Acked-by: Alok Kataria <akataria@vmware.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      dad52fc0
    • x86: Enable NMI on all cpus on UV · 78c06176
      Russ Anderson authored
      
      Enable NMI on all cpus in a UV system and add an NMI handler
      that calls dump_stack() on each cpu.
      
      By default on x86 all the cpus except the boot cpu have NMI
      masked off.  This patch enables NMI on all cpus in a UV system
      and adds an NMI handler that calls dump_stack() on each cpu.  This
      way, if a system hangs, we can NMI the machine and get a
      backtrace from all the cpus.
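      
      A minimal sketch of such a handler, written against the die-notifier
      interface of that era; the function and variable names are
      illustrative, and the per-cpu NMI unmasking described above is not
      shown:
      
      #include <linux/kdebug.h>
      #include <linux/notifier.h>
      #include <linux/kernel.h>
      
      /* Dump a backtrace on every cpu that receives the NMI. */
      static int uv_nmi_dump(struct notifier_block *self,
                             unsigned long reason, void *data)
      {
              if (reason != DIE_NMI)
                      return NOTIFY_OK;       /* not an NMI event, ignore */
      
              dump_stack();
              return NOTIFY_STOP;             /* this NMI has been handled */
      }
      
      static struct notifier_block uv_dump_stack_nmi_nb = {
              .notifier_call = uv_nmi_dump,
      };
      
      /* Registered from the platform's nmi init callback. */
      static void uv_register_nmi_notifier(void)
      {
              if (register_die_notifier(&uv_dump_stack_nmi_nb))
                      printk(KERN_WARNING "UV NMI handler failed to register\n");
      }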
      
      Version 2: Use x86_platform driver mechanism for nmi init, per
                 Ingo's suggestion.
      
      Version 3: Clean up Ingo's nits.
      Signed-off-by: Russ Anderson <rja@sgi.com>
      LKML-Reference: <20100226164912.GA24439@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      78c06176
  4. 25 Feb, 2010 6 commits
    • x86, olpc: Use pci subarch init for OLPC · d5d0e88c
      Thomas Gleixner authored
      
      Replace the #ifdef'ed OLPC-specific init functions with a conditional
      x86_init function.  If the function returns 0 we leave pci_arch_init;
      otherwise we continue.
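      
      Roughly, the flow described above looks like the following sketch
      (simplified; the real pci_arch_init() in arch/x86/pci/init.c also
      does early probing, which is elided here):
      
      static __init int pci_arch_init(void)
      {
              /*
               * Give the sub-architecture (OLPC here) a chance to handle
               * PCI initialization itself.  A return value of 0 means it
               * did, so the generic setup below is skipped.
               */
              if (x86_init.pci.arch_init && !x86_init.pci.arch_init())
                      return 0;
      
              /* ... generic x86 PCI access method probing continues ... */
              return 0;
      }
      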
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: Andres Salomon <dilinger@collabora.co.uk>
      LKML-Reference: <43F901BD926A4E43B106BF17856F0755A318CE89@orsmsx508.amr.corp.intel.com>
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d5d0e88c
    • x86, pci: Add arch_init to x86_init abstraction · 4fb6088a
      Thomas Gleixner authored
      
      Add an abstraction function for arch-specific init calls.
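      
      A sketch of the shape of this hook, following the existing x86_init
      pattern; surrounding members are elided and the OLPC override name
      shown is hypothetical:
      
      /* excerpt in the style of <asm/x86_init.h>, illustrative only */
      struct x86_init_pci {
              int (*arch_init)(void); /* return 0 when PCI init was fully handled */
      };
      
      struct x86_init_ops {
              /* ... existing groups (resources, irqs, timers, ...) ... */
              struct x86_init_pci pci;
      };
      
      /* A sub-architecture overrides the default during early setup, e.g.: */
      /*   x86_init.pci.arch_init = pci_olpc_init;   (hypothetical name)    */
      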
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      LKML-Reference: <43F901BD926A4E43B106BF17856F0755A318CE84@orsmsx508.amr.corp.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      4fb6088a
    • kprobes/x86: Support kprobes jump optimization on x86 · c0f7ac3a
      Masami Hiramatsu authored
      
      Introduce x86 arch-specific optimization code, which supports
      both x86-32 and x86-64.
      
      This code also supports a safety check, which decodes the whole of
      the function in which a probe is inserted and checks the following
      conditions before optimization (a sketch of one such check follows
      this list):
       - The optimized instructions which will be replaced by a jump instruction
         don't straddle the function boundary.
       - There is no indirect jump instruction, because it could jump into
         the address range which is replaced by the jump operand.
       - There is no jump/loop instruction which jumps into the address range
         which is replaced by the jump operand.
       - Don't optimize a kprobe if it is in a function into which fixup code
         will jump.
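      
      As a hedged illustration of the indirect-jump condition, the sketch
      below rejects code containing an ff /4 or ff /5 opcode.  The real
      kernel walks the function with its own x86 instruction decoder; the
      pre-decoded offset array and the helper names here are assumptions
      made for brevity.
      
      #include <stdbool.h>
      #include <stddef.h>
      
      /* Opcode 0xff selects the indirect jumps via the ModRM reg field:
       * /4 is jmp near r/m, /5 is jmp far m16:32. */
      static bool is_indirect_jmp(const unsigned char *insn)
      {
              unsigned char reg;
      
              if (insn[0] != 0xff)
                      return false;
              reg = (insn[1] >> 3) & 0x7;
              return reg == 4 || reg == 5;
      }
      
      /* Reject the function if any decoded instruction is an indirect jump. */
      static bool has_indirect_jmp(const unsigned char *code,
                                   const size_t *insn_off, size_t n_insn)
      {
              size_t i;
      
              for (i = 0; i < n_insn; i++)
                      if (is_indirect_jmp(code + insn_off[i]))
                              return true;
              return false;
      }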
      
      This uses text_poke_multibyte(), which doesn't support modifying
      code in NMI/MCE handlers.  However, since kprobes itself doesn't
      support probing NMI/MCE code, this is not a problem.
      
      Changes in v9:
       - Use *_text_reserved() to check whether the probe can be optimized.
       - Verify the jump address range is in the 2G range when preparing the slot.
       - Back up the original code when switching to the optimized buffer, instead
         of when preparing the buffer, because there can be int3s of other probes
         in the preparing phase.
       - Check that the kprobe is disabled in arch_check_optimized_kprobe().
       - Strictly check indirect jump opcodes (ff /4, ff /5).
      
      Changes in v6:
       - Split stop_machine-based jump patching code.
       - Update comments and coding style.
      
      Changes in v5:
       - Introduce stop_machine-based jump replacing.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      LKML-Reference: <20100225133446.6725.78994.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c0f7ac3a
    • x86: Add text_poke_smp for SMP cross modifying code · 3d55cc8a
      Masami Hiramatsu authored
      
      Add a generic text_poke_smp() for SMP which uses stop_machine()
      to synchronize code modification.  This stop_machine() method is
      officially described in section 7.1.3, "Handling Self- and
      Cross-Modifying Code", of Intel's Software Developer's Manual,
      Volume 3A.
      
      Since stop_machine() can't protect code against NMI/MCE, this
      function cannot modify those handlers.  Also, this function is
      basically for modifying a single multibyte instruction; for
      modifying multiple multibyte instructions, we need additional
      special trap & detour code.
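      
      A minimal sketch of this approach, assuming the 2.6.33-era
      interfaces text_poke(), sync_core() and stop_machine(); the
      parameter struct and helper names are illustrative:
      
      #include <linux/stop_machine.h>
      #include <linux/types.h>
      #include <asm/alternative.h>    /* text_poke() */
      #include <asm/processor.h>      /* sync_core() */
      
      struct text_poke_params {
              void *addr;
              const void *opcode;
              size_t len;
      };
      
      /* Runs on one CPU while stop_machine() holds the others in a known state. */
      static int stop_machine_text_poke(void *data)
      {
              struct text_poke_params *tpp = data;
      
              text_poke(tpp->addr, tpp->opcode, tpp->len);
              sync_core();
              return 0;
      }
      
      void *text_poke_smp(void *addr, const void *opcode, size_t len)
      {
              struct text_poke_params tpp = { addr, opcode, len };
      
              /* NULL cpumask: run the callback on any one CPU, stop the rest. */
              stop_machine(stop_machine_text_poke, &tpp, NULL);
              return addr;
      }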
      
      This code originally comes from the stop_machine() version of the
      immediate values work.  Thanks Jason and Mathieu!
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <20100225133438.6725.80273.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3d55cc8a
    • kprobes/x86: Cleanup RELATIVEJUMP_INSTRUCTION to RELATIVEJUMP_OPCODE · d498f763
      Masami Hiramatsu authored
      
      Change the RELATIVEJUMP_INSTRUCTION macro to RELATIVEJUMP_OPCODE
      since it represents just the opcode byte.
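      
      In effect (0xe9 is the opcode byte of jmp rel32; shown here purely
      for illustration):
      
      /* before: the name suggests a full instruction */
      #define RELATIVEJUMP_INSTRUCTION        0xe9
      
      /* after: 0xe9 is only the opcode byte of jmp rel32 */
      #define RELATIVEJUMP_OPCODE             0xe9
      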
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      LKML-Reference: <20100225133349.6725.99302.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d498f763
    • x86, mm: Allow highmem user page tables to be disabled at boot time · 14315592
      Ian Campbell authored
      
      Distros generally (I looked at Debian, RHEL5 and SLES11) seem to
      enable CONFIG_HIGHPTE for any x86 configuration which has highmem
      enabled. This means that the overhead applies even to machines which
      have a fairly modest amount of high memory and which therefore do not
      really benefit from allocating PTEs in high memory but still pay the
      price of the additional mapping operations.
      
      Running kernbench on a 4G box I found that with CONFIG_HIGHPTE=y but
      no actual highptes being allocated there was a reduction in system
      time used from 59.737s to 55.9s.
      
      With CONFIG_HIGHPTE=y and highmem PTEs being allocated:
        Average Optimal load -j 4 Run (std deviation):
        Elapsed Time 175.396 (0.238914)
        User Time 515.983 (5.85019)
        System Time 59.737 (1.26727)
        Percent CPU 263.8 (71.6796)
        Context Switches 39989.7 (4672.64)
        Sleeps 42617.7 (246.307)
      
      With CONFIG_HIGHPTE=y but with no highmem PTEs being allocated:
        Average Optimal load -j 4 Run (std deviation):
        Elapsed Time 174.278 (0.831968)
        User Time 515.659 (6.07012)
        System Time 55.9 (1.07799)
        Percent CPU 263.8 (71.266)
        Context Switches 39929.6 (4485.13)
        Sleeps 42583.7 (373.039)
      
      This patch allows the user to control the allocation of PTEs in
      highmem from the command line ("userpte=nohigh") but retains the
      status quo as the default.
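      
      A sketch of how such a boot option is typically wired up with
      early_param(); the gfp-mask variable name and its default value
      below are assumptions for illustration, not the patch verbatim:
      
      #include <linux/init.h>
      #include <linux/gfp.h>
      #include <linux/string.h>
      
      /* gfp flags used when allocating user page-table pages (assumed name). */
      gfp_t __userpte_alloc_gfp = GFP_KERNEL | __GFP_ZERO | __GFP_HIGHMEM;
      
      static int __init setup_userpte(char *arg)
      {
              if (!arg)
                      return -EINVAL;
      
              /* "userpte=nohigh": keep user page tables in lowmem. */
              if (!strcmp(arg, "nohigh"))
                      __userpte_alloc_gfp &= ~__GFP_HIGHMEM;
              else
                      return -EINVAL;
      
              return 0;
      }
      early_param("userpte", setup_userpte);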
      
      It is possible that some simple heuristic could be developed which
      allows auto-tuning of this option; however, I don't have a
      sufficiently large machine available to perform any particularly
      meaningful experiments.  We could probably handwave up an argument
      for a threshold at 16G of total RAM.
      
      Assuming 768M of lowmem we have 196608 potential lowmem PTE
      pages. Each page can map 2M of RAM in a PAE-enabled configuration,
      meaning a maximum of 384G of RAM could potentially be mapped using
      lowmem PTEs.
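      
      For reference, a quick back-of-the-envelope check of those figures
      (plain user-space arithmetic, assuming 4 KiB pages and 512 entries
      per PAE PTE page):
      
      #include <stdio.h>
      
      int main(void)
      {
              unsigned long long lowmem    = 768ULL << 20;  /* 768 MiB of lowmem            */
              unsigned long long pte_pages = lowmem >> 12;  /* one 4 KiB page per PTE page  */
              unsigned long long per_page  = 512ULL * 4096; /* 512 entries * 4 KiB = 2 MiB  */
              unsigned long long mappable  = pte_pages * per_page;
      
              printf("%llu PTE pages map %llu GiB\n", pte_pages, mappable >> 30);
              /* prints: 196608 PTE pages map 384 GiB */
              return 0;
      }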
      
      Even allowing a generous factor of 10 to account for other required
      lowmem allocations, generous slop to account for page sharing (which
      reduces the total amount of RAM mappable by a given number of PT
      pages) and other inaccuracies in the estimates, it would seem that
      even a 32G machine would not have a particularly pressing need for
      highmem PTEs. I think 32G could be considered to be at the upper bound
      of what might be sensible on a 32-bit machine (although I think in
      practice 64G is still supported).
      
      It seems questionable whether HIGHPTE is even a win for any amount of
      RAM you would sensibly run a 32-bit kernel on rather than going 64-bit.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      LKML-Reference: <1266403090-20162-1-git-send-email-ian.campbell@citrix.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      14315592
  5. 24 Feb, 2010 5 commits