- 26 Dec, 2014 1 commit
-
Varun Chitre authored
-
- 19 Dec, 2014 5 commits
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
- 17 Dec, 2014 8 commits
-
Varun Chitre authored
This is the 3.4.105 stable release
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
Varun Chitre authored
-
- 01 Dec, 2014 26 commits
-
Zefan Li authored
-
Guillaume Nault authored
commit eed4d839b0cdf9d84b0a9bc63de90fd5e1e886fb upstream.

Use the dst_entry held by sk_dst_get() to retrieve the tunnel's PMTU.

The dst_mtu(__sk_dst_get(tunnel->sock)) call was racy. __sk_dst_get() could return NULL if tunnel->sock->sk_dst_cache was reset just before the call, thus making dst_mtu() dereference a NULL pointer:

[ 1937.661598] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
[ 1937.664005] IP: [<ffffffffa049db88>] pppol2tp_connect+0x33d/0x41e [l2tp_ppp]
[ 1937.664005] PGD daf0c067 PUD d9f93067 PMD 0
[ 1937.664005] Oops: 0000 [#1] SMP
[ 1937.664005] Modules linked in: l2tp_ppp l2tp_netlink l2tp_core ip6table_filter ip6_tables iptable_filter ip_tables ebtable_nat ebtables x_tables udp_tunnel pppoe pppox ppp_generic slhc deflate ctr twofish_generic twofish_x86_64_3way xts lrw gf128mul glue_helper twofish_x86_64 twofish_common blowfish_generic blowfish_x86_64 blowfish_common des_generic cbc xcbc rmd160 sha512_generic hmac crypto_null af_key xfrm_algo 8021q garp bridge stp llc tun atmtcp clip atm ext3 mbcache jbd iTCO_wdt coretemp kvm_intel iTCO_vendor_support kvm pcspkr evdev ehci_pci lpc_ich mfd_core i5400_edac edac_core i5k_amb shpchp button processor thermal_sys xfs crc32c_generic libcrc32c dm_mod usbhid sg hid sr_mod sd_mod cdrom crc_t10dif crct10dif_common ata_generic ahci ata_piix tg3 libahci libata uhci_hcd ptp ehci_hcd pps_core usbcore scsi_mod libphy usb_common [last unloaded: l2tp_core]
[ 1937.664005] CPU: 0 PID: 10022 Comm: l2tpstress Tainted: G O 3.17.0-rc1 #1
[ 1937.664005] Hardware name: HP ProLiant DL160 G5, BIOS O12 08/22/2008
[ 1937.664005] task: ffff8800d8fda790 ti: ffff8800c43c4000 task.ti: ffff8800c43c4000
[ 1937.664005] RIP: 0010:[<ffffffffa049db88>] [<ffffffffa049db88>] pppol2tp_connect+0x33d/0x41e [l2tp_ppp]
[ 1937.664005] RSP: 0018:ffff8800c43c7de8 EFLAGS: 00010282
[ 1937.664005] RAX: ffff8800da8a7240 RBX: ffff8800d8c64600 RCX: 000001c325a137b5
[ 1937.664005] RDX: 8c6318c6318c6320 RSI: 000000000000010c RDI: 0000000000000000
[ 1937.664005] RBP: ffff8800c43c7ea8 R08: 0000000000000000 R09: 0000000000000000
[ 1937.664005] R10: ffffffffa048e2c0 R11: ffff8800d8c64600 R12: ffff8800ca7a5000
[ 1937.664005] R13: ffff8800c439bf40 R14: 000000000000000c R15: 0000000000000009
[ 1937.664005] FS: 00007fd7f610f700(0000) GS:ffff88011a600000(0000) knlGS:0000000000000000
[ 1937.664005] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 1937.664005] CR2: 0000000000000020 CR3: 00000000d9d75000 CR4: 00000000000027e0
[ 1937.664005] Stack:
[ 1937.664005] ffffffffa049da80 ffff8800d8fda790 000000000000005b ffff880000000009
[ 1937.664005] ffff8800daf3f200 0000000000000003 ffff8800c43c7e48 ffffffff81109b57
[ 1937.664005] ffffffff81109b0e ffffffff8114c566 0000000000000000 0000000000000000
[ 1937.664005] Call Trace:
[ 1937.664005] [<ffffffffa049da80>] ? pppol2tp_connect+0x235/0x41e [l2tp_ppp]
[ 1937.664005] [<ffffffff81109b57>] ? might_fault+0x9e/0xa5
[ 1937.664005] [<ffffffff81109b0e>] ? might_fault+0x55/0xa5
[ 1937.664005] [<ffffffff8114c566>] ? rcu_read_unlock+0x1c/0x26
[ 1937.664005] [<ffffffff81309196>] SYSC_connect+0x87/0xb1
[ 1937.664005] [<ffffffff813e56f7>] ? sysret_check+0x1b/0x56
[ 1937.664005] [<ffffffff8107590d>] ? trace_hardirqs_on_caller+0x145/0x1a1
[ 1937.664005] [<ffffffff81213dee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[ 1937.664005] [<ffffffff8114c262>] ? spin_lock+0x9/0xb
[ 1937.664005] [<ffffffff813092b4>] SyS_connect+0x9/0xb
[ 1937.664005] [<ffffffff813e56d2>] system_call_fastpath+0x16/0x1b
[ 1937.664005] Code: 10 2a 84 81 e8 65 76 bd e0 65 ff 0c 25 10 bb 00 00 4d 85 ed 74 37 48 8b 85 60 ff ff ff 48 8b 80 88 01 00 00 48 8b b8 10 02 00 00 <48> 8b 47 20 ff 50 20 85 c0 74 0f 83 e8 28 89 83 10 01 00 00 89
[ 1937.664005] RIP [<ffffffffa049db88>] pppol2tp_connect+0x33d/0x41e [l2tp_ppp]
[ 1937.664005] RSP <ffff8800c43c7de8>
[ 1937.664005] CR2: 0000000000000020
[ 1939.559375] ---[ end trace 82d44500f28f8708 ]---

Fixes: f34c4a35d879 ("l2tp: take PMTU from tunnel UDP socket")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: Zefan Li <lizefan@huawei.com>
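For illustration, a minimal kernel-style C sketch of the pattern this fix describes: hold a reference to the dst with sk_dst_get() instead of using the unreferenced pointer from __sk_dst_get(). Field and constant names follow the message and the l2tp_ppp driver; this is a sketch, not the verbatim upstream diff.

	struct dst_entry *dst;

	/* sk_dst_get() returns the socket's dst with a reference held (or
	 * NULL), so the entry cannot be freed while we read its MTU. */
	dst = sk_dst_get(tunnel->sock);
	if (dst) {
		u32 pmtu = dst_mtu(dst);	/* safe: we hold a reference */

		if (pmtu != 0)
			session->mtu = session->mru =
				pmtu - PPPOL2TP_HEADER_OVERHEAD;
		dst_release(dst);		/* drop the reference we took */
	}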
-
Narendra K authored
commit 936597631dd310e220544dc5c6075d924efd39b2 upstream.

The commit 4197aa7b implements 64 bit per ring statistics. But the driver resets the 'total_bytes' and 'total_packets' from RX and TX rings in the RX and TX interrupt handlers to zero. This results in statistics being lost and user space reporting RX and TX statistics as zero. This patch addresses the issue by preventing the resetting of RX and TX ring statistics to zero.

Signed-off-by: Narendra K <narendra_k@dell.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Weng Meiling <wengmeiling.weng@huawei.com>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Benjamin Poirier authored
commit cdb3f4a31b64c3a1c6eef40bc01ebc9594c58a8c upstream.

There are many cases where this feature does not improve performance or even reduces it. For example, here are the results from tests that I've run using 3.12.6 on one Intel Xeon W3565 and one i7 920 connected by ixgbe adapters. The results are from the Xeon, but they're similar on the i7. All numbers report the mean±stddev over 10 runs of 10s.

1) latency tests similar to what is described in "c6e1a0d1 net: Allow no-cache copy from user on transmit". There is no statistically significant difference between tx-nocache-copy on/off.

nic irqs spread out (one queue per cpu)

200x netperf -r 1400,1
tx-nocache-copy off
        692000±1000 tps
        50/90/95/99% latency (us): 275±2/643.8±0.4/799±1/2474.4±0.3
tx-nocache-copy on
        693000±1000 tps
        50/90/95/99% latency (us): 274±1/644.1±0.7/800±2/2474.5±0.7

200x netperf -r 14000,14000
tx-nocache-copy off
        86450±80 tps
        50/90/95/99% latency (us): 334.37±0.02/838±1/2100±20/3990±40
tx-nocache-copy on
        86110±60 tps
        50/90/95/99% latency (us): 334.28±0.01/837±2/2110±20/3990±20

2) single stream throughput tests: tx-nocache-copy leads to higher service demand

                     throughput  cpu0       cpu1         demand
                     (Gb/s)      (Gcycle)   (Gcycle)     (cycle/B)

nic irqs and netperf on cpu0 (1x netperf -T0,0 -t omni -- -d send)

tx-nocache-copy off  9402±5      9.4±0.2                 0.80±0.01
tx-nocache-copy on   9403±3      9.85±0.04               0.838±0.004

nic irqs on cpu0, netperf on cpu1 (1x netperf -T1,1 -t omni -- -d send)

tx-nocache-copy off  9401±5      5.83±0.03  5.0±0.1      0.923±0.007
tx-nocache-copy on   9404±2      5.74±0.03  5.523±0.009  0.958±0.002

As a second example, here are some results from Eric Dumazet with latest net-next. tx-nocache-copy also leads to higher service demand (cpu is Intel(R) Xeon(R) CPU X5660 @ 2.80GHz)

lpq83:~# ./ethtool -K eth0 tx-nocache-copy on
lpq83:~# perf stat ./netperf -H lpq84 -c
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to lpq84.prod.google.com () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB
87380  16384   16384    10.00    9407.44     2.50     -1.00    0.522   -1.000

 Performance counter stats for './netperf -H lpq84 -c':

     4282.648396 task-clock                #    0.423 CPUs utilized
           9,348 context-switches          #    0.002 M/sec
              88 CPU-migrations            #    0.021 K/sec
             355 page-faults               #    0.083 K/sec
  11,812,797,651 cycles                    #    2.758 GHz                     [82.79%]
   9,020,522,817 stalled-cycles-frontend   #   76.36% frontend cycles idle    [82.54%]
   4,579,889,681 stalled-cycles-backend    #   38.77% backend cycles idle     [67.33%]
   6,053,172,792 instructions              #    0.51  insns per cycle
                                           #    1.49  stalled cycles per insn [83.64%]
     597,275,583 branches                  #  139.464 M/sec                   [83.70%]
       8,960,541 branch-misses             #    1.50% of all branches         [83.65%]

    10.128990264 seconds time elapsed

lpq83:~# ./ethtool -K eth0 tx-nocache-copy off
lpq83:~# perf stat ./netperf -H lpq84 -c
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to lpq84.prod.google.com () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB
87380  16384   16384    10.00    9412.45     2.15     -1.00    0.449   -1.000

 Performance counter stats for './netperf -H lpq84 -c':

     2847.375441 task-clock                #    0.281 CPUs utilized
          11,632 context-switches          #    0.004 M/sec
              49 CPU-migrations            #    0.017 K/sec
             354 page-faults               #    0.124 K/sec
   7,646,889,749 cycles                    #    2.686 GHz                     [83.34%]
   6,115,050,032 stalled-cycles-frontend   #   79.97% frontend cycles idle    [83.31%]
   1,726,460,071 stalled-cycles-backend    #   22.58% backend cycles idle     [66.55%]
   2,079,702,453 instructions              #    0.27  insns per cycle
                                           #    2.94  stalled cycles per insn [83.22%]
     363,773,213 branches                  #  127.757 M/sec                   [83.29%]
       4,242,732 branch-misses             #    1.17% of all branches         [83.51%]

    10.128449949 seconds time elapsed

CC: Tom Herbert <therbert@google.com>
Signed-off-by: Benjamin Poirier <bpoirier@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Hannes Frederic Sowa authored
commit 916e4cf46d0204806c062c8c6c4d1f633852c5b6 upstream.

Currently we generate a new fragmentation id on UFO segmentation. It is pretty hairy to identify the correct net namespace and dst there. Especially tunnels use IFF_XMIT_DST_RELEASE and thus have no skb_dst available at all. This causes unreliable or very predictable ipv6 fragmentation id generation during segmentation. Luckily, we have already pregenerated the ip6_frag_id in ip6_ufo_append_data and can use it here.

Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backported to 3.2: adjust filename, indentation]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
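For illustration, a minimal sketch of the idea in the UFO segmentation path, assuming the frag_hdr layout and the skb_shinfo() field named in the message; the local variables (fptr, unfrag_ip6hlen, nexthdr) are illustrative and this is not the verbatim upstream diff:

	struct frag_hdr *fptr;

	fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen);
	fptr->nexthdr = nexthdr;
	fptr->reserved = 0;
	/* Reuse the id pregenerated by ip6_ufo_append_data() instead of
	 * deriving a fresh one from a dst/netns that may not exist here. */
	fptr->identification = skb_shinfo(skb)->ip6_frag_id;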
-
Ben Hutchings authored
Dmitry Semyonov reported that after upgrading from 3.2.54 to 3.2.57 the rtl8192ce driver will crash when its interface is brought up. The oops message shows:

[ 1833.611397] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[ 1833.611455] IP: [<ffffffffa0410c6a>] rtl92ce_update_hal_rate_tbl+0x29/0x4db [rtl8192ce]
...
[ 1833.613326] Call Trace:
[ 1833.613346] [<ffffffffa02ad9c6>] ? rtl92c_dm_watchdog+0xd0b/0xec9 [rtl8192c_common]
[ 1833.613391] [<ffffffff8105b5cf>] ? process_one_work+0x161/0x269
[ 1833.613425] [<ffffffff8105c598>] ? worker_thread+0xc2/0x145
[ 1833.613458] [<ffffffff8105c4d6>] ? manage_workers.isra.25+0x15b/0x15b
[ 1833.613496] [<ffffffff8105f6d9>] ? kthread+0x76/0x7e
[ 1833.613527] [<ffffffff81356b74>] ? kernel_thread_helper+0x4/0x10
[ 1833.613563] [<ffffffff8105f663>] ? kthread_worker_fn+0x139/0x139
[ 1833.613598] [<ffffffff81356b70>] ? gs_change+0x13/0x13

Disassembly of rtl92ce_update_hal_rate_tbl() shows that the 'sta' parameter was null. None of the changes to the rtlwifi family between 3.2.54 and 3.2.57 seem to directly cause this, and reverting commit f78bccd79ba3 ('rtlwifi: rtl8192ce: Fix too long disable of IRQs') doesn't fix it.

rtl92c_dm_watchdog() calls rtl92ce_update_hal_rate_tbl() via rtl92c_dm_refresh_rate_adaptive_mask(), which does not appear in the call trace as it was inlined. That function has been completely removed upstream which may explain why this crash wasn't seen there. I'm not sure that it is sensible to completely remove rtl92c_dm_refresh_rate_adaptive_mask() without making other compensating changes elsewhere, so try to work around this for 3.2 by checking for a null pointer in rtl92c_dm_refresh_rate_adaptive_mask() and then skipping the call to rtl92ce_update_hal_rate_tbl().

References: https://bugs.debian.org/745137
References: https://bugs.debian.org/745462
Reported-by: Dmitry Semyonov <linulin@gmail.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Chaoming Li <chaoming_li@realsil.com.cn>
Cc: Satoshi IWAMOTO <satoshi.iwamoto@nifty.ne.jp>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Marcelo Ricardo Leitner authored
Further tests revealed that moving the garbage collector to a work queue and protecting it with a spinlock may leave the system prone to soft lockups if the bottom half gets very busy. It was reproduced with a set of firewall rules that REJECTed packets. If the NIC bottom-half handler ends up running on the same CPU that is running the garbage collector on a very large cache, the garbage collector will not be able to do its job due to the amount of work needed for handling the REJECTs, and it also won't reschedule.

The fix is to disable bottom halves during the garbage collecting, as it already was in the first place (most calls to it came from softirqs).

Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
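A minimal sketch of the fix described here, assuming the helper named in the next entry (__do_rt_garbage_collect()) and a work-queue entry point; the argument list and function name are placeholders, not the verbatim backport:

	static void rt_garbage_collect_work(struct work_struct *work)
	{
		/* Keep softirqs off on this CPU while scanning the cache, so
		 * a busy NIC bottom half cannot starve the collector (the old
		 * callers implicitly ran in softirq context anyway). */
		local_bh_disable();
		__do_rt_garbage_collect(ip_rt_gc_elasticity, 0);
		local_bh_enable();
	}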
-
Marcelo Ricardo Leitner authored
When rt_intern_hash() has to deal with the neighbour cache overflowing, it triggers the route cache garbage collector in an attempt to free some references on neighbour entries. Such a call cannot be done asynchronously, but it should also not run in parallel with an already-running instance, so that they don't end up fighting over the hash lock entries.

This patch thus blocks parallel executions with spinlocks:
- A call from the worker and a call from rt_intern_hash() are not the same and cannot be merged, thus they will wait for each other on rt_gc_lock.
- Calls to the gc from rt_intern_hash() may happen in parallel, but each must wait for the running one to finish in order to try again.

This deduplication and synchronization is performed by the locking just before calling __do_rt_garbage_collect().

Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Marcelo Ricardo Leitner authored
Currently the route garbage collector gets called by dst_alloc() if there are more entries than the threshold. But it's an expensive call that doesn't really need to be made synchronously at that point. Another issue with the current approach is that it allows the garbage collector to run with the same start parameters on multiple CPUs at once, which is not optimal. A system may even soft lockup if the cache is big enough, as the garbage collectors will be fighting over the hash lock entries.

This patch thus moves the garbage collector to run asynchronously on a work queue, much like how rt_expire_check runs. There is one condition left that allows multiple executions, which is handled by the next patch.

Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
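For illustration, a hedged sketch of the shape such a change takes: declare a work item and have the allocation path merely queue it. Apart from DECLARE_WORK()/schedule_work(), the names and the trigger condition are illustrative, not the verbatim backport:

	static void rt_garbage_collect_work(struct work_struct *work);
	static DECLARE_WORK(rt_gc_work, rt_garbage_collect_work);

	static int rt_garbage_collect(struct dst_ops *ops)
	{
		/* The expensive scan runs later from the work queue; the
		 * allocation path only queues it. schedule_work() is a
		 * no-op if the work item is already pending. */
		schedule_work(&rt_gc_work);
		return 0;
	}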
-
James Bottomley authored
commit d555a2abf3481f81303d835046a5ec2c4fb3ca8e upstream.

We unconditionally execute scsi_eh_get_sense() to make sure all failed commands that should have sense attached, do. However, the routine forgets that some commands, because of the way they fail, will not have any sense code ... we should not bother them with a REQUEST_SENSE command. Fix this by testing to see if we actually got a CHECK_CONDITION return and skip asking for sense if we don't.

Tested-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Zefan Li <lizefan@huawei.com>
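The core of the check, sketched inside the scsi_eh_get_sense() command loop; the surrounding loop is abbreviated and the guard conditions around it simplified, so treat this as a sketch rather than the verbatim diff:

	list_for_each_entry(scmd, work_q, eh_entry) {
		if (SCSI_SENSE_VALID(scmd))
			continue;	/* already has sense data attached */

		/* Commands that did not complete with CHECK CONDITION have
		 * no sense data to fetch, so skip the REQUEST SENSE. */
		if (status_byte(scmd->result) != CHECK_CONDITION)
			continue;

		/* ... issue REQUEST SENSE for scmd as before ... */
	}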
-
Mikulas Patocka authored
commit d49ec52ff6ddcda178fc2476a109cf1bd1fa19ed upstream.

The DM crypt target accesses memory beyond allocated space resulting in a crash on 32 bit x86 systems.

This bug is very old (it dates back to 2.6.25 commit 3a7f6c99 "dm crypt: use async crypto"). However, this bug was masked by the fact that kmalloc rounds the size up to the next power of two. This bug wasn't exposed until 3.17-rc1 commit 298a9fa08a ("dm crypt: use per-bio data"). By switching to using per-bio data there was no longer any padding beyond the end of a dm-crypt allocated memory block.

To minimize allocation overhead dm-crypt puts several structures into one block allocated with kmalloc. The block holds struct ablkcipher_request, cipher-specific scratch pad (crypto_ablkcipher_reqsize(any_tfm(cc))), struct dm_crypt_request and an initialization vector. The variable dmreq_start is set to the offset of struct dm_crypt_request within this memory block. dm-crypt allocates the block with this size: cc->dmreq_start + sizeof(struct dm_crypt_request) + cc->iv_size.

When accessing the initialization vector, dm-crypt uses the function iv_of_dmreq, which performs this calculation: ALIGN((unsigned long)(dmreq + 1), crypto_ablkcipher_alignmask(any_tfm(cc)) + 1). dm-crypt allocated "cc->iv_size" bytes beyond the end of the dm_crypt_request structure. However, when dm-crypt accesses the initialization vector, it takes a pointer to the end of dm_crypt_request, aligns it, and then uses it as the initialization vector. If the end of dm_crypt_request is not aligned on a crypto_ablkcipher_alignmask(any_tfm(cc)) boundary the alignment causes the initialization vector to point beyond the allocated space.

Fix this bug by calculating the variable iv_size_padding and adding it to the allocated size.

Also correct the alignment of dm_crypt_request. struct dm_crypt_request is specific to dm-crypt (it isn't used by the crypto subsystem at all), so it is aligned on __alignof__(struct dm_crypt_request). Also align per_bio_data_size on ARCH_KMALLOC_MINALIGN, so that it is aligned as if the block was allocated with kmalloc.

Reported-by: Krzysztof Kolasa <kkolasa@winsoft.pl>
Tested-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
[lizf: Backported by Mikulas]
Signed-off-by: Zefan Li <lizefan@huawei.com>
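A sketch of the iv_size_padding calculation the message describes; the structure and helper names follow the message, while the local variable names (align_mask, block_size) and exact placement are illustrative, not the verbatim patch:

	unsigned int iv_size_padding;
	unsigned int align_mask = crypto_ablkcipher_alignmask(any_tfm(cc));

	if (align_mask < CRYPTO_MINALIGN) {
		/* kmalloc alignment already satisfies the cipher: pad only
		 * up to the next aligned boundary after dm_crypt_request. */
		iv_size_padding = -(cc->dmreq_start + sizeof(struct dm_crypt_request))
				  & align_mask;
	} else {
		/* The cipher needs more alignment than kmalloc guarantees:
		 * assume the worst case. */
		iv_size_padding = align_mask;
	}

	/* block = request + scratch pad + dm_crypt_request + padding + IV */
	block_size = cc->dmreq_start + sizeof(struct dm_crypt_request)
		     + iv_size_padding + cc->iv_size;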
-
Eric W. Biederman authored
commit a6138db815df5ee542d848318e5dae681590fccd upstream.

Kenton Varda <kenton@sandstorm.io> discovered that by remounting a read-only bind mount read-only in a user namespace the MNT_LOCK_READONLY bit would be cleared, allowing an unprivileged user to remount a read-only mount read-write.

Correct this by replacing the mask of mount flags to preserve with a mask of mount flags that may be changed, and preserve all others. This ensures that any future bugs with this mask and remount will fail in an easy-to-detect way where new mount flags simply won't change.

Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Francis Moreau <francis.moro@gmail.com>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Felipe Balbi authored
commit da64c27d3c93ee9f89956b9de86c4127eb244494 upstream.

LDISCs shouldn't call tty->ops->write() from within ->write_wakeup(). ->write_wakeup() is called with the port lock taken and IRQs disabled; tty->ops->write() will try to acquire the same port lock and we will deadlock.

Acked-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Peter Hurley <peter@hurleysoftware.com>
Reported-by: Huang Shijie <b32955@freescale.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Tested-by: Andreas Bießmann <andreas@biessmann.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[tim.niemeyer@corscience.de: rebased on 3.4.103]
Signed-off-by: Tim Niemeyer <tim.niemeyer@corscience.de>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Pawel Moll authored
commit b3f207855f57b9c8f43a547a801340bb5cbc59e5 upstream.

When running a 32-bit userspace on a 64-bit kernel (eg. i386 application on x86_64 kernel or 32-bit arm userspace on arm64 kernel) some of the perf ioctls must be treated with special care, as they have a pointer size encoded in the command.

For example, PERF_EVENT_IOC_ID in the 32-bit world will be encoded as 0x80042407, but a 64-bit kernel will expect 0x80082407. As a result the ioctl will fail, returning -ENOTTY.

This patch solves the problem by adding code that fixes up the size, as a compat_ioctl file operation.

Reported-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1402671812-9078-1-git-send-email-pawel.moll@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
[lizf: Backported to 3.4 by David Ahern]
Signed-off-by: Zefan Li <lizefan@huawei.com>
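The ABI mismatch is easy to reproduce from userspace, since PERF_EVENT_IOC_ID is defined with a pointer argument (_IOR('$', 7, __u64 *)) and the size field of the command is sizeof(pointer). A small stand-alone program for illustration (it uses uint64_t * in place of the kernel's __u64 *):

	#include <stdio.h>
	#include <stdint.h>
	#include <sys/ioctl.h>

	int main(void)
	{
		/* Same encoding as PERF_EVENT_IOC_ID: _IOR('$', 7, __u64 *).
		 * The size baked into the command is sizeof(uint64_t *). */
		unsigned long cmd = _IOR('$', 7, uint64_t *);

		printf("sizeof(void *) = %zu\n", sizeof(void *));
		printf("ioctl command  = %#lx\n", cmd);
		/* Prints 0x80042407 when built with -m32 and 0x80082407 when
		 * built with -m64, which is exactly the mismatch the
		 * compat_ioctl fixup above papers over. */
		return 0;
	}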
-
Sergio Gelato authored
BugLink: http://bugs.launchpad.net/bugs/1348670

Fix regression introduced in pre-3.14 kernels by cherry-picking aa07c713ecfc0522916f3cd57ac628ea6127c0ec (NFSD: Call ->set_acl with a NULL ACL structure if no entries).

The affected code was removed in 3.14 by commit 4ac7249ea5a0ceef9f8269f63f33cc873c3fac61 (nfsd: use get_acl and ->set_acl). The ->set_acl methods are already able to cope with a NULL argument.

Signed-off-by: Sergio Gelato <Sergio.Gelato@astro.su.se>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Theodore Ts'o authored
commit a9cfcd63e8d206ce4235c355d857c4fbdf0f4587 upstream.

Thanks to Dan Carpenter for extending smatch to find bugs like this. (This was found using a development version of smatch.)

Fixes: 36de928641ee48b2078d3fe9514242aaa2f92013
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
[lizf: Backported to 3.4:
 - s/new.bh/new_bh/
 - drop the change to ext4_cross_rename()]
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Theodore Ts'o authored
commit 36de928641ee48b2078d3fe9514242aaa2f92013 upstream.

If we run into some kind of error, such as ENOMEM, while calling ext4_getblk() or ext4_dx_find_entry(), we need to make sure this error gets propagated up to ext4_find_entry() and then to its callers. This way, transient errors such as ENOMEM can get propagated to the VFS. This is important so that the system calls return the appropriate error, and also so that in the case of ext4_lookup(), we return an error instead of a NULL inode, since that would result in a negative dentry cache entry that sticks around long past the OOM condition which caused the transient ENOMEM error.

Google-Bug-Id: #17142205

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
[lizf: Backported to 3.4:
 - adjust context
 - s/old.bh/old_bh/g
 - s/new.bh/new_bh/g
 - drop the changes to ext4_find_delete_entry() and ext4_cross_rename()
 - add return value check for one more ext4_find_entry() in ext4_rename()]
Signed-off-by: Zefan Li <lizefan@huawei.com>
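A sketch of the calling convention this moves to, seen from the ext4_lookup() side: ext4_find_entry() hands back an ERR_PTR on transient failure and the caller propagates it instead of treating it as "not found". Illustrative fragment only, not the verbatim diff; the 3.4 argument list may differ:

	bh = ext4_find_entry(dir, &dentry->d_name, &de);
	if (IS_ERR(bh))
		return (struct dentry *)bh;	/* e.g. -ENOMEM reaches the VFS */
	if (!bh) {
		/* genuinely not found: continue with the usual
		 * negative-dentry path as before */
	}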
-
Johannes Berg authored
commit bd8c78e78d5011d8111bc2533ee73b13a3bd6c42 upstream.

In testmode and vendor command reply/event SKBs we use the skb cb data to store nl80211 parameters between allocation and sending. This causes the code for CONFIG_NETLINK_MMAP to get confused, because it takes ownership of the skb cb data when the SKB is handed off to netlink, and it doesn't explicitly clear it.

Clear the skb cb explicitly when we're done and before it gets passed to netlink to avoid this issue.

Reported-by: Assaf Azulay <assaf.azulay@intel.com>
Reported-by: David Spinadel <david.spinadel@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <lizefan@huawei.com>
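The core of the change, sketched with the surrounding send path abbreviated; treat it as illustrative rather than the verbatim diff:

	/* ... finish building the testmode/vendor reply or event ... */

	/* nl80211 used skb->cb as scratch space while building the message;
	 * clear it before the netlink core takes ownership of the skb. */
	memset(skb->cb, 0, sizeof(skb->cb));

	/* ... hand the skb to genlmsg_reply()/genlmsg_multicast() ... */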
-
Jens Axboe authored
commit 46f341ffcfb5d8530f7d1e60f3be06cce6661b62 upstream.

Commit 2da78092 changed the locking from a mutex to a spinlock, so we no longer sleep in this context. But there was a leftover might_sleep() in there, which now triggers since we do the final free from an RCU callback. Get rid of it.

Reported-by: Pontus Fuchs <pontus.fuchs@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Josh Triplett authored
commit 361e9dfbaae84b0b246ed18d1ab7c82a1a41b53e upstream.

The buffers sized by CONFIG_LOG_BUF_SHIFT and CONFIG_LOG_CPU_MAX_BUF_SHIFT do not exist if CONFIG_PRINTK=n, so don't ask about their size at all.

Signed-off-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
[lizf: Backported to 3.4: drop the change to CONFIG_LOG_CPU_MAX_BUF_SHIFT as it doesn't exist in 3.4]
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Peter Zijlstra authored
commit 6c72e3501d0d62fc064d3680e5234f3463ec5a86 upstream.

Oleg noticed that a cleanup by Sylvain actually uncovered a bug; by calling perf_event_free_task() when failing sched_fork() we will not yet have done the memset() on ->perf_event_ctxp[] and will therefore try and 'free' the inherited contexts, which are still in use by the parent process. This is bad.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Sylvain 'ythier' Hitier <sylvain.hitier@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Mel Gorman authored
commit d3cb8bf6081b8b7a2dabb1264fe968fd870fa595 upstream.

A migration entry is marked as write if pte_write was true at the time the entry was created. The VMA protections are not double checked when migration entries are being removed, as mprotect marks write-migration-entries as read. It means that potentially we take a spurious fault to mark PTEs write again, but it's straight-forward. However, there is a race between write migrations being marked read and migrations finishing. This potentially allows a PTE to be write that should have been read. Close this race by double checking the VMA permissions using maybe_mkwrite when migration completes.

[torvalds@linux-foundation.org: use maybe_mkwrite]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <lizefan@huawei.com>
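Sketched in the code that re-installs a PTE once a migration entry is removed, the change amounts to re-checking the VMA instead of trusting the write bit recorded at migration time (illustrative fragment, not the verbatim diff):

	pte = pte_mkold(mk_pte(new, vma->vm_page_prot));
	if (is_write_migration_entry(entry))
		pte = maybe_mkwrite(pte, vma);	/* was: pte = pte_mkwrite(pte) */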
-
Xiubo Li authored
commit 6596aa047b624aeec2ea321962cfdecf9953a383 upstream.

Since we cannot make sure that 'params->num_regs' will always be non-zero here, and if it equals zero the kmemdup() will return ZERO_SIZE_PTR, which equals ((void *)16), this patch fixes the problem by doing the zero check before calling kmemdup().

Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <lizefan@huawei.com>
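The guard, sketched; params->num_regs is taken from the message, while the other names around it (params->regs, regs) are illustrative:

	if (!params->num_regs)
		return -EINVAL;		/* nothing to duplicate */

	regs = kmemdup(params->regs, params->num_regs * sizeof(*regs),
		       GFP_KERNEL);
	if (!regs)
		return -ENOMEM;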
-
Robin Murphy authored
commit 5ca918e5e3f9df4634077c06585c42bc6a8d699a upstream.

The alignment fixup incorrectly decodes faulting ARM VLDn/VSTn instructions (where the optional alignment hint is given but incorrect) as LDR/STR, leading to register corruption. Detect these and correctly treat them as unhandled, so that userspace gets the fault it expects.

Reported-by: Simon Hosie <simon.hosie@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Miklos Szeredi authored
commit b928095b0a7cff7fb9fcf4c706348ceb8ab2c295 upstream.

If overwriting an empty directory with rename, then need to drop the extra nlink.

Test prog:

#include <stdio.h>
#include <fcntl.h>
#include <err.h>
#include <sys/stat.h>

int main(void)
{
	const char *test_dir1 = "test-dir1";
	const char *test_dir2 = "test-dir2";
	int res;
	int fd;
	struct stat statbuf;

	res = mkdir(test_dir1, 0777);
	if (res == -1)
		err(1, "mkdir(\"%s\")", test_dir1);

	res = mkdir(test_dir2, 0777);
	if (res == -1)
		err(1, "mkdir(\"%s\")", test_dir2);

	fd = open(test_dir2, O_RDONLY);
	if (fd == -1)
		err(1, "open(\"%s\")", test_dir2);

	res = rename(test_dir1, test_dir2);
	if (res == -1)
		err(1, "rename(\"%s\", \"%s\")", test_dir1, test_dir2);

	res = fstat(fd, &statbuf);
	if (res == -1)
		err(1, "fstat(%i)", fd);

	if (statbuf.st_nlink != 0) {
		fprintf(stderr, "nlink is %lu, should be 0\n", statbuf.st_nlink);
		return 1;
	}

	return 0;
}

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-
Joseph Qi authored
commit 5760a97c7143c208fa3a8f8cad0ed7dd672ebd28 upstream.

There is a deadlock case which was reported by Guozhonghua: https://oss.oracle.com/pipermail/ocfs2-devel/2014-September/010079.html

This case is caused by &res->spinlock and &dlm->master_lock misordering in different threads. It was introduced by commit 8d400b81 ("ocfs2/dlm: Clean up refmap helpers"). Since the lockres is new, it does not require the &res->spinlock. So remove it.

Fixes: 8d400b81 ("ocfs2/dlm: Clean up refmap helpers")
Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: joyce.xue <xuejiufei@huawei.com>
Reported-by: Guozhonghua <guozhonghua@h3c.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Zefan Li <lizefan@huawei.com>
-