- 07 May, 2007 10 commits
-
-
Christoph Lameter authored
This patch was recently posted to lkml and acked by Pekka. The flag SLAB_MUST_HWCACHE_ALIGN is: 1. never checked by SLAB at all; 2. a duplicate of SLAB_HWCACHE_ALIGN for SLUB; 3. only fulfilling the role of SLAB_HWCACHE_ALIGN for SLOB. The only remaining use is in sparc64 and ppc64, and their use there reflects some earlier role that the slab flag once may have had. If it is specified then SLAB_HWCACHE_ALIGN is also specified. The flag is confusing, inconsistent and has no purpose. Remove it. Acked-by:
Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by:
Christoph Lameter <clameter@sgi.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
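A minimal sketch of what this means for a caller, assuming the 2.6.21-era six-argument kmem_cache_create() signature; the cache name and size are invented for the example:

	#include <linux/slab.h>

	static struct kmem_cache *example_cache;

	static void example_create(void)
	{
		/* Before: SLAB_HWCACHE_ALIGN | SLAB_MUST_HWCACHE_ALIGN.
		 * After:  SLAB_HWCACHE_ALIGN alone, which already requests
		 * cache-line alignment from SLAB, SLUB and SLOB. */
		example_cache = kmem_cache_create("example", 256, 0,
						  SLAB_HWCACHE_ALIGN,
						  NULL, NULL);
	}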
-
David Miller authored
I ported this to sparc64 as per the patch below, tested on UP SunBlade1500 and 24 cpu Niagara T1000. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Christoph Lameter <clameter@sgi.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Andi Kleen <ak@suse.de> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Christoph Lameter authored
If we add a new flag so that we can distinguish between the first page and the tail pages then we can avoid using page->private in the first page. page->private == page for the first page, so there is no real information in there. Freeing up page->private makes the use of compound pages more transparent. They become usable more like real pages. Right now we have to be careful, e.g. if we are going beyond PAGE_SIZE allocations in the slab on i386, because we can then no longer use the private field. This is one of the issues that causes us not to support debugging for page size slabs in SLAB. Having page->private available for SLUB would allow more meta information in the page struct. I can probably avoid the 16 bit ints that I have in there right now. Also, if page->private is available then a compound page may be equipped with buffer heads. This may open the way for filesystems to support larger blocks than page size. We add PageTail as an alias of PageReclaim. Compound pages cannot currently be reclaimed. Because of the alias one needs to check PageCompound first. The RFC for this approach was discussed at http://marc.info/?t=117574302800001&r=1&w=2 [nacc@us.ibm.com: fix hugetlbfs] Signed-off-by:
Christoph Lameter <clameter@sgi.com> Signed-off-by:
Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
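A minimal sketch of the check order called out above (PageCompound before PageTail, since PageTail aliases PageReclaim). The head-page lookup through page->private on tail pages is an assumption for illustration, not something stated in the log:

	/* Sketch: map any page of a compound allocation to its head page. */
	static struct page *head_of(struct page *page)
	{
		if (PageCompound(page) && PageTail(page))
			return (struct page *)page->private;	/* assumed tail->head link */
		return page;	/* head page, or not a compound page at all */
	}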
-
Christoph Lameter authored
PowerPC uses the slab allocator to manage the lowest level of the page table. In high cpu configurations we also use the page struct to split the page table lock. Disallow the selection of SLUB for that case. Signed-off-by:
Christoph Lameter <clameter@sgi.com> Cc: Hugh Dickins <hugh@veritas.com> Cc: Paul Mackerras <paulus@samba.org> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Christoph Lameter authored
This is a new slab allocator which was motivated by the complexity of the existing code in mm/slab.c. It attempts to address a variety of concerns with the existing implementation.
A. Management of object queues. A particular concern was the complex management of the numerous object queues in SLAB. SLUB has no such queues. Instead we dedicate a slab to each allocating CPU and use objects from that slab directly instead of queueing them up.
B. Storage overhead of object queues. SLAB object queues exist per node, per CPU. The alien cache queue even has a queue array that contains a queue for each processor on each node. For very large systems the number of queues and the number of objects that may be caught in those queues grows exponentially. On our systems with 1k nodes/processors we have several gigabytes tied up just for storing references to objects in those queues. This does not include the objects that could be on those queues. One fears that the whole memory of the machine could one day be consumed by those queues.
C. SLAB meta data overhead. SLAB has overhead at the beginning of each slab. This means that data cannot be naturally aligned at the beginning of a slab block. SLUB keeps all meta data in the corresponding page struct. Objects can be naturally aligned in the slab: a 128 byte object, for example, will be aligned at 128 byte boundaries and can fit tightly into a 4k page with no bytes left over. SLAB cannot do this.
D. SLAB has a complex cache reaper. SLUB does not need a cache reaper for UP systems. On SMP systems the per-CPU slab may be pushed back onto the partial list, but that operation is simple and does not require an iteration over a list of objects. SLAB expires per-CPU, shared and alien object queues during cache reaping, which may cause strange hold-offs.
E. SLAB has complex NUMA policy layer support. SLUB pushes NUMA policy handling into the page allocator. This means that allocation is coarser (SLUB does interleave on a page level), but that situation was also present before 2.6.13. SLAB's application of policies to individual slab objects is certainly a performance concern due to the frequent references to memory policies, which may lead a sequence of objects to come from one node after another. SLUB will get a slab full of objects from one node and then switch to the next.
F. Reduction of the size of partial slab lists. SLAB has per-node partial lists. This means that over time a large number of partial slabs may accumulate on those lists. These can only be reused if allocations occur on specific nodes. SLUB has a global pool of partial slabs and will consume slabs from that pool to decrease fragmentation.
G. Tunables. SLAB has sophisticated tuning abilities for each slab cache. One can manipulate the queue sizes in detail. However, filling the queues still requires the use of the spin lock to check out slabs. SLUB has a global parameter (slub_min_order) for tuning. Increasing the minimum slab order can decrease the locking overhead: the bigger the slab order, the fewer movements of pages between per-CPU and partial lists occur and the better SLUB will scale.
H. Slab merging. We often have slab caches with similar parameters. SLUB detects those on boot-up and merges them into the corresponding general caches. This leads to more effective memory use; about 50% of all caches can be eliminated through slab merging. It will also decrease slab fragmentation because partially allocated slabs can be filled up again. Slab merging can be switched off by specifying slub_nomerge on boot-up. Note that merging can expose heretofore unknown bugs in the kernel because corrupted objects may now be placed differently and corrupt differing neighbouring objects. Enable sanity checks to find those.
I. Diagnostics. The current slab diagnostics are difficult to use and require a recompilation of the kernel. SLUB contains debugging code that is always available (but is kept out of the hot code paths). SLUB diagnostics can be enabled via the slub_debug option. Parameters can be specified to select a single slab cache or a group of slab caches for diagnostics. This means that the system runs with the usual performance, and it is much more likely that race conditions can be reproduced.
J. Resiliency. If basic sanity checks are on then SLUB is capable of detecting common error conditions and recovering as well as possible to allow the system to continue.
K. Tracing. Tracing can be enabled via the slub_debug=T,<slabcache> option during boot. SLUB will then log all actions on that slab cache and dump the object contents on free.
L. On-demand DMA cache creation. Generally DMA caches are not needed. If a kmalloc is used with __GFP_DMA then just the single slab cache that is needed is created. For systems that have no ZONE_DMA requirement the support is eliminated completely.
M. Performance increase. Some benchmarks have shown speed improvements on kernbench in the range of 5-10%. The locking overhead of SLUB is based on the underlying base allocation size. If we can reliably allocate larger order pages then it is possible to increase SLUB performance much further. The anti-fragmentation patches may enable further performance increases.
Tested on: i386 UP + SMP, x86_64 UP + SMP + NUMA emulation, IA64 NUMA + simulator.
SLUB boot options:
slub_nomerge		Disable merging of slabs.
slub_min_order=x	Require a minimum order for slab caches. This increases the managed chunk size and therefore reduces meta data and locking overhead.
slub_min_objects=x	Minimum objects per slab. Default is 8.
slub_max_order=x	Avoid generating slabs larger than the order specified.
slub_debug		Enable all diagnostics for all caches.
slub_debug=<options>	Enable selective options for all caches.
slub_debug=<o>,<cache>	Enable selective options for a certain set of caches.
Available debug options:
F	Double free checking, sanity and resiliency.
R	Red zoning.
P	Object / padding poisoning.
U	Track last free / alloc.
T	Trace all allocs / frees (only use for individual slabs).
To use SLUB: apply this patch and then select SLUB as the default slab allocator. [hugh@veritas.com: fix an oops-causing locking error] [akpm@linux-foundation.org: various stupid cleanups and small fixes] Signed-off-by:
Christoph Lameter <clameter@sgi.com> Signed-off-by:
Hugh Dickins <hugh@veritas.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
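As a concrete illustration of the slab merging point above: two caches created with compatible size, alignment and flags may end up backed by the same general cache unless slub_nomerge (or per-cache debugging) is in effect. The cache names and the six-argument kmem_cache_create() signature of this era are assumptions for the sketch:

	#include <linux/slab.h>

	static struct kmem_cache *demo_a, *demo_b;

	static void demo_create(void)
	{
		/* Identical object size, alignment and flags: SLUB can merge
		 * these into one underlying cache, reducing fragmentation.
		 * Booting with slub_nomerge keeps them distinct. */
		demo_a = kmem_cache_create("demo_a", 192, 0,
					   SLAB_HWCACHE_ALIGN, NULL, NULL);
		demo_b = kmem_cache_create("demo_b", 192, 0,
					   SLAB_HWCACHE_ALIGN, NULL, NULL);
	}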
-
Heiko Carstens authored
Architectures that don't support DMA can say so by adding a config NO_DMA to their Kconfig file. This will prevent compilation of some DMA-specific driver code. Also, dma-mapping-broken.h isn't needed anymore on at least s390. This avoids compilation and linking of otherwise dead/broken code. Other architectures that include dma-mapping-broken.h are arm26, h8300, m68k, m68knommu and v850. If these could be converted as well we could get rid of the header file. Signed-off-by:
Heiko Carstens <heiko.carstens@de.ibm.com> Cc: "John W. Linville" <linville@tuxdriver.com> Cc: Kyle McMartin <kyle@parisc-linux.org> Cc: <James.Bottomley@SteelEye.com> Cc: Tejun Heo <htejun@gmail.com> Cc: Jeff Garzik <jeff@garzik.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: <geert@linux-m68k.org> Cc: <zippel@linux-m68k.org> Cc: <spyro@f2s.com> Cc: <uclinux-v850@lsi.nec.co.jp> Cc: <ysato@users.sourceforge.jp> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
David Gibson authored
At present, the serial core always allows setserial in userspace to change the port address, irq and base clock of any serial port. That makes sense for legacy ISA ports, but not for (say) embedded ns16550-compatible serial ports at peculiar addresses. In these cases, the kernel code configuring the ports must know exactly where they are, and their clocking arrangements (which can be unusual on embedded boards). It doesn't make sense for userspace to change these settings. Therefore, this patch defines a UPF_FIXED_PORT flag for the uart_port structure. If this flag is set when the serial port is configured, any attempts to alter the port's type, I/O address, irq or base clock with setserial are ignored. In addition, this patch uses the new flag for on-chip serial ports probed in arch/powerpc/kernel/legacy_serial.c, and for other hard-wired serial ports probed by drivers/serial/of_serial.c. Signed-off-by:
David Gibson <dwg@au1.ibm.com> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
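A hedged sketch of how board code might mark such a hard-wired port; only UPF_FIXED_PORT itself comes from this patch, and the address, IRQ, clock and other field values are invented for the example:

	#include <linux/serial_core.h>

	/* Hypothetical memory-mapped on-chip ns16550 whose settings the
	 * kernel knows exactly and userspace must not change. */
	static struct uart_port onchip_port = {
		.mapbase  = 0xf8002000,		/* assumed example address */
		.irq      = 42,			/* assumed example IRQ */
		.uartclk  = 33333333,		/* board-specific clock */
		.iotype   = UPIO_MEM,
		.regshift = 2,
		.flags    = UPF_BOOT_AUTOCONF | UPF_IOREMAP | UPF_FIXED_PORT,
	};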
-
Thomas Koeller authored
Add support for the integrated serial ports of the MIPS RM9122 processor and its relatives. The patch also does some whitespace cleanup. [akpm@linux-foundation.org: cleanups] Signed-off-by:
Thomas Koeller <thomas.koeller@baslerweb.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Marc St-Jean authored
Serial driver patch for the PMC-Sierra MSP71xx devices. There are three different fixes:
1. Fix for the DesignWare APB THRE errata: in brief, this is a non-standard 16550 in that the THRE interrupt will not re-assert itself simply by disabling and re-enabling the THRI bit in the IER; it is only re-enabled if a character is actually sent out. It appears that the "8250-uart-backup-timer.patch" in the "mm" tree also fixes it, so we have dropped our initial workaround. This patch now needs to be applied on top of that "mm" patch.
2. Fix for Busy Detect on LCR write: the DesignWare APB UART has a feature which causes a new Busy Detect interrupt to be generated if it's busy when the LCR is written. This fix saves the value of the LCR and rewrites it after clearing the interrupt.
3. Workaround for an interrupt/data concurrency issue: the SoC needs to ensure that writes that can cause interrupts to be cleared reach the UART before returning from the ISR. This fix reads a non-destructive register on the UART, so the read transaction completion ensures the previously queued write transaction has also completed. Signed-off-by:
Marc St-Jean <Marc_St-Jean@pmc-sierra.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
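A rough sketch of fix 2 above: the driver caches every value written to the LCR and, when the UART raises a Busy Detect interrupt, clears the condition and rewrites that value. The helper name and the register-stride assumption (32-bit spaced registers) are illustrative, not taken from the patch:

	#include <linux/serial_reg.h>
	#include <linux/io.h>

	static unsigned char saved_lcr;	/* updated on every LCR write by the driver */

	static void handle_busy_detect(void __iomem *membase)
	{
		/* ...read the UART's busy/status register here to clear the
		 * Busy Detect interrupt (the offset is SoC-specific)... */

		/* Rewrite the LCR value whose write triggered the interrupt. */
		writeb(saved_lcr, membase + (UART_LCR << 2));
	}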
-
Linus Torvalds authored
This was broken. It adds complexity, for no good reason. Rather than separate __pa() and __pa_symbol(), we should deprecate __pa_symbol(), and preferably __pa() too - and just use "virt_to_phys()" instead, which is more readable and has nicer semantics. However, right now, just undo the separation, and make __pa_symbol() be the exact same as __pa(). That fixes the bugs this patch introduced, and we can do the fairly obvious cleanups later. Do the new __phys_addr() function (which is now the actual workhorse for the unified __pa()/__pa_symbol()) as a real external function, that way all the potential issues with compile/link-time optimizations of constant symbol addresses go away, and we can also, if we choose to, add more sanity-checking of the argument. Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Vivek Goyal <vgoyal@in.ibm.com> Cc: Andi Kleen <ak@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
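A small sketch of the arrangement described above, with both macros funnelling into the out-of-line __phys_addr() helper; the exact macro bodies are an assumption, the log only states that __pa_symbol() becomes identical to __pa() and that __phys_addr() is a real external function:

	/* Out-of-line workhorse; doing the conversion in a real function
	 * avoids compile/link-time games with constant symbol addresses
	 * and leaves room for sanity checks on the argument. */
	unsigned long __phys_addr(unsigned long virt);

	#define __pa(x)		__phys_addr((unsigned long)(x))
	#define __pa_symbol(x)	__pa(x)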
-
- 06 May, 2007 1 commit
-
-
Russell King authored
utrace removes the ptrace_message field in task_struct. Move our use of this field into a new member in thread_info called "syscall". Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
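A minimal sketch of the new field, assuming it simply records the current system call number; all other thread_info members are omitted here:

	/* Sketch only: the rest of the ARM thread_info layout is elided. */
	struct thread_info {
		int syscall;	/* takes over the role of task_struct.ptrace_message */
	};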
-
- 05 May, 2007 13 commits
-
-
Arnaud Patard authored
linux/mmc/protocol.h header is gone, thus breaking the build of the mach-qt2410.c file. As this header is not used, I'm removing it. The right headers may still be added later if needed. Signed-off-by:
Arnaud Patard <arnaud.patard@rtp-net.org> Signed-off-by:
Ben Dooks <ben-linux@fluff.org> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
__ioremap() took a set of page table flags (specifically the cacheable and bufferable bits) to control the mapping type. However, with the advent of ARMv6, this is far too limited. Replace the page table flags with a memory type index, so that the desired attributes can be selected from the mem_type table. Finally, to prevent silent miscompilation due to the differing arguments, rename the __ioremap() and __ioremap_pfn() functions. Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Add cached device type for ioremap_cached(). Group all device memory types together, and ensure that they all have a "MT_DEVICE" prefix. Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Change the memory types table to define the L1 descriptor bit 4 to be in terms of the ARMv6 definition - execute never. Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
David Brownell authored
Fix oops in omap16xx mpuio suspend/resume code; a field wasn't initialized. Signed-off-by:
David Brownell <dbrownell@users.sourceforge.net> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
David Brownell authored
GPIO and MPUIO wake updates: - Hook MPUIOs into the irq wakeup framework too. This uses a platform device to update irq enables during system sleep states, instead of a sys_device, since the latter is no longer needed for such things. - Also forward enable/disable irq wake requests to the relevant GPIO controller, so the top level IRQ dispatcher can (eventually) handle these wakeup events automatically if more than one GPIO pin needs to be a wakeup event source. - Minor tweak to the 24xx non-wakeup gpio stuff: no need to check such read-only data under the spinlock. This assumes (maybe wrongly?) that only 16xx can do GPIO wakeup; without a 15xx I can't test such stuff. Also this expects the top level IRQ dispatcher to properly handle requests to enable/disable irq wake, which is currently known to be wrong: omap1 saves the flags but ignores them, omap2 doesn't even save it. (Wakeup events are, wrongly, hardwired in the relevant mach-omapX/pm.c file ...) So MPUIO irqs won't yet trigger system wakeup. Signed-off-by:
David Brownell <dbrownell@users.sourceforge.net> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
David Brownell authored
Speedup and shrink GPIO irq handling code, by using a pointer that's available in the irq_chip structure instead of calling the get_gpio_bank() function. On OMAP1 this saves 44 words, most of which were in IRQ critical path methods. Hey, every few instructions help. Signed-off-by:
David Brownell <dbrownell@users.sourceforge.net> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
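A hedged sketch of the pattern described above, assuming the bank pointer is stashed as per-IRQ chip data at setup time and fetched directly in the interrupt-critical methods instead of being recomputed; get_gpio_bank() stands in for the old lookup and struct gpio_bank is left opaque:

	#include <linux/irq.h>

	struct gpio_bank;	/* defined in the GPIO driver */

	/* Setup path: remember which bank each GPIO IRQ belongs to. */
	static void gpio_irq_bind_bank(unsigned int irq, struct gpio_bank *bank)
	{
		set_irq_chip_data(irq, bank);
	}

	/* Hot path: one pointer fetch instead of a get_gpio_bank() lookup. */
	static void gpio_ack_irq(unsigned int irq)
	{
		struct gpio_bank *bank = get_irq_chip_data(irq);

		/* ... acknowledge the interrupt in this bank's registers ... */
		(void)bank;
	}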
-
Syed Mohammed Khasim authored
This patch adds minimal OMAP2430 support to plat-omap files to get the kernel booting on 2430SDP. Signed-off-by:
Syed Mohammed Khasim <x0khasim@ti.com> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
David Brownell authored
More GPIO/IRQ cleanup:
- compile-time removal of much useless code: mpuio support on non-OMAP1; 15xx/730/24xx gpio support on 1610; 15xx/730/16xx gpio support on 24xx; etc.
- remove all BUG() calls, which are always bad news ... replaced some with normal fault reports for that call, others with WARN_ON(1).
- small mpuio bugfix: add missing set_type() method.
Oh, and fix a minor merge issue: inode->u.generic_ip is now gone. Signed-off-by:
David Brownell <dbrownell@users.sourceforge.net> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
David Brownell authored
Add some GPIO debug support: /sys/kernel/debug/omap_gpio dumps the state of all GPIOs that have been claimed, including basic IRQ info if relevant. Tested on 24xx, 16xx. Includes minor bugfixes: recording the IRQ trigger mode (this should probably be a genirq patch), and adding a missing space to the non-wakeup warning. Signed-off-by:
David Brownell <dbrownell@users.sourceforge.net> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
Juha Yrjola authored
Some GPIOs on OMAP2420 do not have wakeup capabilities. If these GPIOs are configured as IRQ sources, spurious interrupts will be generated each time the core domain enters retention. Signed-off-by:
Juha Yrjola <juha.yrjola@solidboot.com> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
Juha Yrjola authored
Enable 24xx GPIO autoidling. Signed-off-by:
Juha Yrjola <juha.yrjola@solidboot.com> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
Michael-Luke Jones authored
This patch adds support for the D-Link DSM-G600 Rev A. This is an ARM XScale IXP4xx system relatively similar to the NSLU2 and NAS-100D already supported by mainline. An important difference is Gigabit Ethernet support using the Via Velocity chipset. This patch is the combined work of Michael Westerhof and Alessandro Zummo, with contributions from Michael-Luke Jones. This version addresses review comments from rmk and Deepak Saxena. Signed-off-by:
Michael-Luke Jones <mlj28@cam.ac.uk> Signed-off-by:
Alessandro Zummo <a.zummo@towertech.it> Signed-off-by:
Michael Westerhof <mwester@dls.net> Signed-off-by:
Deepak Saxena <dsaxena@plexity.net> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 May, 2007 16 commits
-
-
Geert Uytterhoeven authored
net/rxrpc/af-rxrpc.ko needs csum_partial_copy_from_user. Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Finn Thain authored
Fix a crash caused by requests placed in the queue with the completed flag already set. This led to some ADB_SYNC requests returning early and their request structs being popped off the stack while still queued. Stack corruption ensued, or an invalid request callback pointer was invoked, or both. Eliminate macii_retransmit() and its buggy implementation of macii_write(). Have macii_queue_poll() fully initialise the request queues. Fix a bug in macii_queue_poll() where the last_req pointer was not being set. This caused some requests to leave the queue before being completed (and would also corrupt the stack under certain conditions). Fix a race in macii_start() that could set the state machine to "reading" while current_req was null. No longer send poll commands with the ADBREQ_REPLY flag -- doing that caused the replies to be stored in the request buffer, where they were forgotten about. Don't autopoll by continuously sending new Talk commands; get the controller to do that for us. This reduces the ADB interrupt rate on an idle bus to about 5 per second. Only autopoll the devices that were probed. Explicitly clear the interrupt flag when polling. Use disable_irq rather than local_irq_save when polling. Remove excess local_irq_save/restore pairs. Improve bus timeout and service request detection. Remove unused code (last_reply, adb_dir etc) and unneeded code (prefix_len, first_byte etc). Change TIP and TACK to their correct names on this ADB controller (ST_EVEN and ST_ODD). Add some commentary. Add a generous quantity of sanity checks (BUG_ONs). Let m68k Macs use the adb_sync boot param too. Tested on Mac II, Mac IIci, Quadra 650, Quadra 700 etc. Signed-off-by:
Finn Thain <fthain@telegraphics.com.au> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Finn Thain authored
There are no slow IRQs on Macs since Roman Zippel's IRQ reorganisation that went into 2.6.16 and removed mac_irq_list[] and the do_mac_irq_list() dispatcher. (They were implemented in do_mac_irq_list() by lowering the IPL.) Hence there's no more use for mutual exclusion in the Mac interrupt dispatchers. Remove it. Signed-off-by:
Finn Thain <fthain@telegraphics.com.au> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Finn Thain authored
Some Macs lack a slot interrupt enable register. So the existing code makes disabled and unregistered slot IRQ lines outputs set high. This seems to work on quadras, but does not work on genuine VIAs (perhaps the card still succeeds in pulling the line low, or perhaps because this increases the settle time on the port A input, meaning that the CA1 IRQ could fire before the slot line reads active). Because of this, the nubus_active flags were used to mask IRQs, which is actually worse than the problem it tries to solve. Any interrupt masked by nubus_active will remain asserted and prevent further transitions on CA1. And so the nubus gets wedged regardless of hardware (emulated VIA ASIC, real VIA chip or RBV). The best solution to this hardware limitation of genuine VIAs is to disable the umbrella SLOTS IRQ when disabling a slot on those machines. Unfortunately, this means all slot IRQs get disabled when any slot IRQ is disabled. But it is only a problem when there's more than 1 device using nubus interrupts. Another potential problem for genuine VIAs is an unregistered nubus IRQ. Eventually it will be possible to enable the CA1 interrupt by installing its handler only _after_ all nubus drivers have loaded but _before_ the kernel needs them, at which time this last problem can be fixed. For now it can be worked around: - disable MacOS extensions - don't boot MacOS (use the Emile bootloader instead) - get the bootloaders to disable ROM drivers (Penguin does this for video cards already, don't know about Emile) - physically remove unsupported cards Signed-off-by:
Finn Thain <fthain@telegraphics.com.au> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Finn Thain authored
Make sure that there are no slot IRQs asserted before leaving the nubus handler. If there are and we don't then the nubus gets wedged because this prevents a CA1 transition, which means no more nubus IRQs. Make the interrupt dispatch loops terminate sooner. Explicitly initialise the VIA latches to make the code more easily understood. Also some cleanups. Signed-off-by:
Finn Thain <fthain@telegraphics.com.au> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Finn Thain authored
Reverse the last of a monumental brown-paper-bag commit that went into the 2.3 kernel. Signed-off-by:
Finn Thain <fthain@telegraphics.com.au> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Finn Thain authored
Add some more machines that support A/UX interrupt priorities. There are probably others as well, but I've only tested these ones so far. Signed-off-by:
Finn Thain <fthain@telegraphics.com.au> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Sam Creasey authored
Only attempt to initialize the number of interrupts a sun3 actually has... Signed-off-by:
Sam Creasey <sammy@sammy.net> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Roman Zippel authored
Add early parameter support and convert current users to it. Signed-off-by:
Roman Zippel <zippel@linux-m68k.org> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Roman Zippel authored
Reformat various m68k files, so they actually look like Linux sources. Signed-off-by:
Roman Zippel <zippel@linux-m68k.org> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Geert Uytterhoeven authored
Recent cross-compilers are called m68k-linux-gnu-gcc instead of m68k-linux-gcc. Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Michael Schmitz authored
Atari keyboard and mouse support. (Reformatting and Kconfig fixes by Roman Zippel.) Signed-off-by:
Michael Schmitz <schmitz@debian.org> Signed-off-by:
Roman Zippel <zippel@linux-m68k.org> Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Martin Schwidefsky authored
Signed-off-by:
Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Commit c1821c2e introduced the uaccess structure that is used to select the correct set of user copy functions for the different execution modes (standard vs. noexec vs. z9-optimized user copy). The uaccess symbol is exported with EXPORT_SYMBOL_GPL. This breaks all non-GPL modules that use user copy. To make them work again, change the export to EXPORT_SYMBOL. Signed-off-by:
Martin Schwidefsky <schwidefsky@de.ibm.com>
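The fix itself is a one-line change of the export macro. A sketch, assuming a stand-in type for the s390 uaccess ops structure (the real definition lives in the architecture headers):

	#include <linux/module.h>

	/* Illustrative stand-in for the structure that selects between the
	 * standard, noexec and z9-optimized user copy implementations. */
	struct uaccess_ops_example {
		unsigned long (*copy_from_user)(unsigned long n,
						const void *from, void *to);
	};

	struct uaccess_ops_example uaccess_example;

	/* Was EXPORT_SYMBOL_GPL(), which non-GPL modules cannot link against;
	 * relaxed to EXPORT_SYMBOL() so user-copy users work again. */
	EXPORT_SYMBOL(uaccess_example);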
-
Jan Glauber authored
Register the aes-s390 algorithms with the actual supported maximum key length. Signed-off-by:
Jan Glauber <jan.glauber@de.ibm.com> Signed-off-by:
Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Christoph Hellwig authored
And here's a port to s390 of the powerpc patch that gets rid of the notifier chain completely. It's on top of Martin's patch, as that one is in mainline already. Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Martin Schwidefsky <schwidefsky@de.ibm.com>
-