ARM/PCI: Move align_resource function pointer to pci_host_bridge structure
Gabriele Paoloni [Wed, 11 Nov 2015 01:12:25 +0000 (09:12 +0800)]
ARM/PCI: Move align_resource function pointer to pci_host_bridge structure

Commit b3a72384fe29 ("ARM/PCI: Replace pci_sys_data->align_resource with
global function pointer") introduced an ARM-specific align_resource()
function pointer.  This is not portable to other arches and doesn't work
for platforms with two different PCIe host bridge controllers.

Move the function pointer to the pci_host_bridge structure so each host
bridge driver can specify its own align_resource() function.
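
As an illustrative sketch (not a verbatim copy of the patch), the hook ends
up living in the generic host bridge structure so that each host driver can
install its own callback:

  struct pci_host_bridge {
          /* ... existing members ... */
          /* optional per-bridge resource alignment hook, may be NULL */
          resource_size_t (*align_resource)(struct pci_dev *dev,
                                            const struct resource *res,
                                            resource_size_t start,
                                            resource_size_t size,
                                            resource_size_t align);
  };

  /* a host bridge driver then sets bridge->align_resource = its_own_hook; */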

Signed-off-by: Gabriele Paoloni <gabriele.paoloni@huawei.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Merge branch 'for-linus' of git://git.kernel.dk/linux-block
Linus Torvalds [Wed, 25 Nov 2015 19:08:35 +0000 (11:08 -0800)]
Merge branch 'for-linus' of git://git.kernel.dk/linux-block

Pull more block layer fixes from Jens Axboe:
 "I wasn't going to send off a new pull before next week, but the blk
  flush fix from Jan from the other day introduced a regression.  It's
  rare enough not to have hit during testing, since it requires both a
  device that rejects the first flush, and bad timing while it does
  that.  But since someone did hit it, let's get the revert into 4.4-rc3
  so we don't have a released rc with that known issue.

  Apart from that revert, three other fixes:

   - From Christoph, a fix for a missing unmap in NVMe request
     preparation.

   - An NVMe fix from Nishanth that fixes data corruption on powerpc.

   - Also from Christoph, fix a list_del() attempt on blk-mq that didn't
     have a matching list_add() at timer start"

* 'for-linus' of git://git.kernel.dk/linux-block:
  Revert "blk-flush: Queue through IO scheduler when flush not required"
  block: fix blk_abort_request for blk-mq drivers
  nvme: add missing unmaps in nvme_queue_rq
  NVMe: default to 4k device page size

ARM: OMAP4+: SMP: use lockless clkdm/pwrdm api in omap4_boot_secondary
Grygorii Strashko [Mon, 16 Nov 2015 17:38:53 +0000 (19:38 +0200)]
ARM: OMAP4+: SMP: use lockless clkdm/pwrdm api in omap4_boot_secondary

OMAP CPU hotplug uses CPU1's clocks and power domains to wake CPU1 up
from low power states (or to turn CPU1 on). This code is also
part of system suspend (disable_nonboot_cpus()).
On the other hand, CPU1's clocks and power domains are used by CPUIdle. All of
the above functionality is mutually exclusive, so the lockless clkdm/pwrdm API
can be used in omap4_boot_secondary().

This fixes the following backtrace on -RT, triggered by
pwrdm_lock/unlock():

BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
 in_atomic(): 1, irqs_disabled(): 0, pid: 118, name: sh
 9 locks held by sh/118:
  #0:  (sb_writers#4){.+.+.+}, at: [<c0144a6c>] vfs_write+0x13c/0x164
  #1:  (&of->mutex){+.+.+.}, at: [<c01b4c70>] kernfs_fop_write+0x48/0x19c
  #2:  (s_active#24){.+.+.+}, at: [<c01b4c78>] kernfs_fop_write+0x50/0x19c
  #3:  (device_hotplug_lock){+.+.+.}, at: [<c03cbff0>] lock_device_hotplug_sysfs+0xc/0x4c
  #4:  (&dev->mutex){......}, at: [<c03cd284>] device_online+0x14/0x88
  #5:  (cpu_add_remove_lock){+.+.+.}, at: [<c003af90>] cpu_up+0x50/0x1a0
  #6:  (cpu_hotplug.lock){++++++}, at: [<c003ae48>] cpu_hotplug_begin+0x0/0xc4
  #7:  (cpu_hotplug.lock#2){+.+.+.}, at: [<c003aec0>] cpu_hotplug_begin+0x78/0xc4
  #8:  (boot_lock){+.+...}, at: [<c002b254>] omap4_boot_secondary+0x1c/0x178
 Preemption disabled at:[<  (null)>]   (null)

 CPU: 0 PID: 118 Comm: sh Not tainted 4.1.12-rt11-01998-gb4a62c3-dirty #137
 Hardware name: Generic DRA74X (Flattened Device Tree)
 [<c0017574>] (unwind_backtrace) from [<c0013be8>] (show_stack+0x10/0x14)
 [<c0013be8>] (show_stack) from [<c05a8670>] (dump_stack+0x80/0x94)
 [<c05a8670>] (dump_stack) from [<c05ad158>] (rt_spin_lock+0x24/0x54)
 [<c05ad158>] (rt_spin_lock) from [<c0030dac>] (clkdm_wakeup+0x10/0x2c)
 [<c0030dac>] (clkdm_wakeup) from [<c002b2c0>] (omap4_boot_secondary+0x88/0x178)
 [<c002b2c0>] (omap4_boot_secondary) from [<c0015d00>] (__cpu_up+0xc4/0x164)
 [<c0015d00>] (__cpu_up) from [<c003b09c>] (cpu_up+0x15c/0x1a0)
 [<c003b09c>] (cpu_up) from [<c03cd2d4>] (device_online+0x64/0x88)
 [<c03cd2d4>] (device_online) from [<c03cd360>] (online_store+0x68/0x74)
 [<c03cd360>] (online_store) from [<c01b4ce0>] (kernfs_fop_write+0xb8/0x19c)
 [<c01b4ce0>] (kernfs_fop_write) from [<c0144124>] (__vfs_write+0x20/0xd8)
 [<c0144124>] (__vfs_write) from [<c01449c0>] (vfs_write+0x90/0x164)
 [<c01449c0>] (vfs_write) from [<c01451e4>] (SyS_write+0x44/0x9c)
 [<c01451e4>] (SyS_write) from [<c0010240>] (ret_fast_syscall+0x0/0x54)
 CPU1: smp_ops.cpu_die() returned, trying to resuscitate

Cc: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Merge branch '81xx' into omap-for-v4.4/fixes
Tony Lindgren [Wed, 25 Nov 2015 18:56:40 +0000 (10:56 -0800)]
Merge branch '81xx' into omap-for-v4.4/fixes

arm: omap2+: add missing HWMOD_NO_IDLEST in 81xx hwmod data
Neil Armstrong [Fri, 13 Nov 2015 16:29:53 +0000 (17:29 +0100)]
arm: omap2+: add missing HWMOD_NO_IDLEST in 81xx hwmod data

Add the missing HWMOD_NO_IDLEST hwmod flag for entries that do not
have omap4 clkctrl values.
The emac0 hwmod flag fixes the davinci_emac driver probe, since the
return value of the pm_resume() call is now checked.

This solves the following boot errors:
[    0.121429] omap_hwmod: l4_ls: _wait_target_ready failed: -16
[    0.121441] omap_hwmod: l4_ls: cannot be enabled for reset (3)
[    0.124342] omap_hwmod: l4_hs: _wait_target_ready failed: -16
[    0.124352] omap_hwmod: l4_hs: cannot be enabled for reset (3)
[    1.967228] omap_hwmod: emac0: _wait_target_ready failed: -16

Cc: Brian Hutchinson <b.hutchman@gmail.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Revert "blk-flush: Queue through IO scheduler when flush not required"
Jens Axboe [Wed, 25 Nov 2015 17:12:54 +0000 (10:12 -0700)]
Revert "blk-flush: Queue through IO scheduler when flush not required"

This reverts commit 1b2ff19e6a957b1ef0f365ad331b608af80e932e.

Jan writes:

--

Thanks for the report! After some investigation I found out we allocate
elevator-specific data in __get_request() only for non-flush requests. And
this is actually required since the flush machinery uses the space in
struct request for something else. Doh. So my patch is just wrong and not
easy to fix since at the time __get_request() is called we are not sure
whether the flush machinery will be used in the end. Jens, please revert
1b2ff19e6a957b1ef0f365ad331b608af80e932e. Thanks!

I'm somewhat surprised that you can reliably hit the race where flushing
gets disabled for the device just while the request is in flight. But I
guess during boot it makes some sense.

--

So let's just revert it; we can fix the queue run manually after the
fact. This race is rare enough that it didn't trigger in testing, since
it requires the specific disable-while-in-flight scenario to trigger.

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds [Wed, 25 Nov 2015 17:01:49 +0000 (09:01 -0800)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
 "Bug fixes for all architectures.  Nothing really stands out"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (21 commits)
  KVM: nVMX: remove incorrect vpid check in nested invvpid emulation
  arm64: kvm: report original PAR_EL1 upon panic
  arm64: kvm: avoid %p in __kvm_hyp_panic
  KVM: arm/arm64: vgic: Trust the LR state for HW IRQs
  KVM: arm/arm64: arch_timer: Preserve physical dist. active state on LR.active
  KVM: arm/arm64: Fix preemptible timer active state crazyness
  arm64: KVM: Add workaround for Cortex-A57 erratum 834220
  arm64: KVM: Fix AArch32 to AArch64 register mapping
  ARM/arm64: KVM: test properly for a PTE's uncachedness
  KVM: s390: fix wrong lookup of VCPUs by array index
  KVM: s390: avoid memory overwrites on emergency signal injection
  KVM: Provide function for VCPU lookup by id
  KVM: s390: fix pfmf intercept handler
  KVM: s390: enable SIMD only when no VCPUs were created
  KVM: x86: request interrupt window when IRQ chip is split
  KVM: x86: set KVM_REQ_EVENT on local interrupt request from user space
  KVM: x86: split kvm_vcpu_ready_for_interrupt_injection out of dm_request_for_irq_injection
  KVM: x86: fix interrupt window handling in split IRQ chip case
  MIPS: KVM: Uninit VCPU in vcpu_create error path
  MIPS: KVM: Fix CACHE immediate offset sign extension
  ...

drm/radeon: make some dpm errors debug only
Alex Deucher [Mon, 23 Nov 2015 21:38:12 +0000 (16:38 -0500)]
drm/radeon: make some dpm errors debug only

"Could not force DPM to low", etc. is usually harmless and
just confuses users.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
arm64: efi: correctly map runtime regions
Mark Rutland [Mon, 23 Nov 2015 11:09:11 +0000 (11:09 +0000)]
arm64: efi: correctly map runtime regions

The kernel may use a page granularity of 4K, 16K, or 64K depending on
configuration.

When mapping EFI runtime regions, we use memrange_efi_to_native to round
the physical base address of a region down to a kernel page boundary,
and round the size up to a kernel page boundary, adding the residue left
over from rounding down the physical base address. We do not round down
the virtual base address.

In __create_mapping we account for the offset of the virtual base from a
granule boundary, adding the residue to the size before rounding the
base down to said granule boundary.

Thus we account for the residue twice, and when the residue is non-zero
this causes __create_mapping to map an additional page at the end of the
region. Depending on the memory map, this page may be in a region we are
not intended/permitted to map, or may clash with a different region that
we wish to map. In typical cases, mapping the next item in the memory
map will overwrite the erroneously created entry, as we sort the memory
map in the stub.

As __create_mapping can cope with base addresses which are not page
aligned, we can instead rely on it to map the region appropriately, and
simplify efi_virtmap_init by removing the unnecessary code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arm64: mm: fix fault_info table xFSC decoding
Mark Rutland [Mon, 23 Nov 2015 15:09:36 +0000 (15:09 +0000)]
arm64: mm: fix fault_info table xFSC decoding

We are missing descriptions for some valid xFSC values in the fault info
table (e.g. "TLB conflict abort"), and have erroneous descriptions for
reserved values (e.g. "asynchronous external abort", "debug event").

This patch adds the missing xFSC values, and removes erroneous decoding
of values reserved by the architecture, as described in ARM DDI 0487A.h.

At the same time, fix the unbalanced brackets in the synchronous
parity error strings in the table.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arm64: fix building without CONFIG_UID16
Arnd Bergmann [Fri, 20 Nov 2015 11:12:21 +0000 (12:12 +0100)]
arm64: fix building without CONFIG_UID16

As reported by Michal Simek, building an ARM64 kernel with CONFIG_UID16
disabled currently fails because the system call table still needs to
reference the individual function entry points that are provided by
kernel/sys_ni.c in this case, and the declarations are hidden inside
of #ifdef CONFIG_UID16:

arch/arm64/include/asm/unistd32.h:57:8: error: 'sys_lchown16' undeclared here (not in a function)
 __SYSCALL(__NR_lchown, sys_lchown16)

I believe this problem only exists on ARM64, because older architectures
tend to not need declarations when their system call table is built
in assembly code, while newer architectures tend to not need UID16
support. ARM64 only uses these system calls for compatibility with
32-bit ARM binaries.

This changes the CONFIG_UID16 check into CONFIG_HAVE_UID16, which is
set unconditionally on ARM64 with CONFIG_COMPAT, so we see the
declarations whenever we need them, but otherwise the behavior is
unchanged.
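
An illustrative sketch of the kind of change involved (the guard around the
declarations in include/linux/syscalls.h, shown as a sketch rather than the
exact hunk):

  /* before: hidden whenever CONFIG_UID16 is disabled */
  #ifdef CONFIG_UID16
  asmlinkage long sys_lchown16(const char __user *filename,
                               old_uid_t user, old_gid_t group);
  #endif

  /* after: visible whenever the architecture can provide the UID16 calls,
   * which arm64 gets together with CONFIG_COMPAT */
  #ifdef CONFIG_HAVE_UID16
  asmlinkage long sys_lchown16(const char __user *filename,
                               old_uid_t user, old_gid_t group);
  #endif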

Fixes: af1839eb4bd4 ("Kconfig: clean up the long arch list for the UID16 config option")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARM: orion5x: Fix legacy get_irqnr_and_base
Nicolas Pitre [Mon, 23 Nov 2015 03:44:19 +0000 (22:44 -0500)]
ARM: orion5x: Fix legacy get_irqnr_and_base

Commit 5be9fc23cd ("ARM: orion5x: fix legacy orion5x IRQ numbers") shifted
IRQ numbers by one but didn't update the get_irqnr_and_base macro
accordingly.  This macro is involved when CONFIG_MULTI_IRQ_HANDLER
is not defined.

[jac: 5be9fc23cd went into v4.2, but was backported to v3.18]

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Fixes: 5be9fc23cd ("ARM: orion5x: fix legacy orion5x IRQ numbers")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
ARM: dove: Fix legacy get_irqnr_and_base
Nicolas Pitre [Mon, 23 Nov 2015 03:40:03 +0000 (22:40 -0500)]
ARM: dove: Fix legacy get_irqnr_and_base

Commit 5d6bed2a9c ("ARM: dove: fix legacy dove IRQ numbers") shifted
IRQ numbers by one but didn't update the get_irqnr_and_base macro
accordingly.  This macro is involved when CONFIG_MULTI_IRQ_HANDLER
is not defined.

[jac: 5d6bed2a9c went in to v4.2, but was backported to v3.18]

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Fixes: 5d6bed2a9c ("ARM: dove: fix legacy dove IRQ numbers")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
KVM: nVMX: remove incorrect vpid check in nested invvpid emulation
Haozhong Zhang [Wed, 25 Nov 2015 09:21:39 +0000 (17:21 +0800)]
KVM: nVMX: remove incorrect vpid check in nested invvpid emulation

This patch removes the vpid check when emulating nested invvpid
instruction of type all-contexts invalidation. The existing code is
incorrect because:
 (1) According to Intel SDM Vol 3, Section "INVVPID - Invalidate
     Translations Based on VPID", invvpid instruction does not check
     vpid in the invvpid descriptor when its type is all-contexts
     invalidation.
 (2) According to the same document, invvpid of type all-contexts
     invalidation does not require an active VMCS, so the
     get_vmcs12() call in the existing code may result in a NULL-pointer
     dereference. In practice, it can crash both KVM itself and L1
     hypervisors that use invvpid (e.g. Xen).

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
btrfs: fix balance range usage filters in 4.4-rc
Holger Hoffstätte [Tue, 17 Nov 2015 11:29:32 +0000 (12:29 +0100)]
btrfs: fix balance range usage filters in 4.4-rc

There's a regression in 4.4-rc since commit bc3094673f22
(btrfs: extend balance filter usage to take minimum and maximum) in that
existing (non-ranged) balance with -dusage=x no longer works; all chunks
are skipped.

After staring at the code for a while and wondering why a non-ranged
balance would even need min and max thresholds (which then were not
set correctly, leading to the bug), I realized that the only problem
was that the filter functions were named wrong, thanks to
patching copypasta. Simply renaming both functions lets the existing
btrfs-progs call balance with -dusage=x, and now the non-ranged filter
function is invoked, properly using only a single chunk limit.

Signed-off-by: Holger Hoffstätte <holger.hoffstaette@googlemail.com>
Fixes: bc3094673f22 ("btrfs: extend balance filter usage to take minimum and maximum")
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
btrfs: qgroup: account shared subtree during snapshot delete
Mark Fasheh [Thu, 5 Nov 2015 22:38:00 +0000 (14:38 -0800)]
btrfs: qgroup: account shared subtree during snapshot delete

Commit 0ed4792 ('btrfs: qgroup: Switch to new extent-oriented qgroup
mechanism.') removed our qgroup accounting during
btrfs_drop_snapshot(). Predictably, this results in qgroup numbers
going bad shortly after a snapshot is removed.

Fix this by adding a dirty extent record when we encounter extents during
our shared subtree walk. This effectively restores the functionality we had
with the original shared subtree walking code in 1152651 (btrfs: qgroup:
account shared subtrees during snapshot delete).

The idea with the original patch (and this one) is that shared subtrees can
get skipped during drop_snapshot. The shared subtree walk then allows us a
chance to visit those extents and add them to the qgroup work for later
processing. This ultimately makes the accounting for drop snapshot work.

The new qgroup code nicely handles all the other extents during the tree
walk via the ref dec/inc functions so we don't have to add actions beyond
what we had originally.

Signed-off-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: Chris Mason <clm@fb.com>
Btrfs: use btrfs_get_fs_root in resolve_indirect_ref
Josef Bacik [Thu, 5 Nov 2015 22:37:58 +0000 (14:37 -0800)]
Btrfs: use btrfs_get_fs_root in resolve_indirect_ref

The backref code will look up the fs_root we're trying to resolve our indirect
refs for; unfortunately we use btrfs_read_fs_root_no_name, which returns -ENOENT
if the ref count is 0.  This isn't helpful for the qgroup stuff with snapshot delete,
as it won't be able to search down the snapshot we are deleting, which will
cause us to miss roots.  So use btrfs_get_fs_root and pass false for check_ref
so we can always get the root we're looking for.  Thanks,
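
In sketch form (illustrative call shapes, assuming btrfs_get_fs_root()'s
(fs_info, key, check_ref) signature; not a verbatim copy of the patch):

  /* before: fails with -ENOENT once the root's refs drop to 0 */
  root = btrfs_read_fs_root_no_name(fs_info, &root_key);

  /* after: check_ref == false, so a root being deleted is still returned */
  root = btrfs_get_fs_root(fs_info, &root_key, false);
  if (IS_ERR(root))
          return PTR_ERR(root);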

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: Chris Mason <clm@fb.com>
btrfs: qgroup: fix quota disable during rescan
Justin Maggard [Fri, 6 Nov 2015 18:36:42 +0000 (10:36 -0800)]
btrfs: qgroup: fix quota disable during rescan

There's a race condition that leads to a NULL pointer dereference if you
disable quotas while a quota rescan is running.  To fix this, we just need
to wait for the quota rescan worker to actually exit before tearing down
the quota structures.

Signed-off-by: Justin Maggard <jmaggard@netgear.com>
Signed-off-by: Chris Mason <clm@fb.com>
Btrfs: fix race between cleaner kthread and space cache writeout
Filipe Manana [Mon, 23 Nov 2015 15:25:16 +0000 (15:25 +0000)]
Btrfs: fix race between cleaner kthread and space cache writeout

When a block group becomes unused and the cleaner kthread is currently
running, we can end up getting the current transaction aborted with error
-ENOENT when we try to commit the transaction, leading to the following
trace:

  [59779.258768] WARNING: CPU: 3 PID: 5990 at fs/btrfs/extent-tree.c:3740 btrfs_write_dirty_block_groups+0x17c/0x214 [btrfs]()
  [59779.272594] BTRFS: Transaction aborted (error -2)
  (...)
  [59779.291137] Call Trace:
  [59779.291621]  [<ffffffff812566f4>] dump_stack+0x4e/0x79
  [59779.292543]  [<ffffffff8104d0a6>] warn_slowpath_common+0x9f/0xb8
  [59779.293435]  [<ffffffffa04cb81f>] ? btrfs_write_dirty_block_groups+0x17c/0x214 [btrfs]
  [59779.295000]  [<ffffffff8104d107>] warn_slowpath_fmt+0x48/0x50
  [59779.296138]  [<ffffffffa04c2721>] ? write_one_cache_group.isra.32+0x77/0x82 [btrfs]
  [59779.297663]  [<ffffffffa04cb81f>] btrfs_write_dirty_block_groups+0x17c/0x214 [btrfs]
  [59779.299141]  [<ffffffffa0549b0d>] commit_cowonly_roots+0x1de/0x261 [btrfs]
  [59779.300359]  [<ffffffffa04dd5b6>] btrfs_commit_transaction+0x4c4/0x99c [btrfs]
  [59779.301805]  [<ffffffffa04b5df4>] btrfs_sync_fs+0x145/0x1ad [btrfs]
  [59779.302893]  [<ffffffff81196634>] sync_filesystem+0x7f/0x93
  (...)
  [59779.318186] ---[ end trace 577e2daff90da33a ]---

The following diagram illustrates a sequence of steps leading to this
problem:

       CPU 1                                             CPU 2

                           <at transaction N>

                                                        adds bg A to list
                                                        fs_info->unused_bgs

                                                        adds bg B to list
                                                        fs_info->unused_bgs

                           <transaction kthread
                            commits transaction N
                            and wakes up the
                            cleaner kthread>

  cleaner kthread
    delete_unused_bgs()

      sees bg A in list
      fs_info->unused_bgs

      btrfs_start_transaction()

                           <transaction N + 1 starts>

      deletes bg A

                                                        update_block_group(bg C)

                                                          --> adds bg C to list
                                                              fs_info->unused_bgs

      deletes bg B

      sees bg C in the list
      fs_info->unused_bgs

      btrfs_remove_chunk(bg C)
        btrfs_remove_block_group(bg C)

          --> checks if the block group
              is in a dirty list, and
              because it isn't now, it
              does nothing

          --> the block group item
              is deleted from the
              extent tree

                                                          --> adds bg C to list
                                                              transaction->dirty_bgs

                                                         some task calls
                                                         btrfs_commit_transaction(t N + 1)
                                                           commit_cowonly_roots()
                                                             btrfs_write_dirty_block_groups()
                                                               --> sees bg C in cur_trans->dirty_bgs
                                                               --> calls write_one_cache_group()
                                                                   which returns -ENOENT because
                                                                   it did not find the block group
                                                                   item in the extent tree
                                                               --> transaction aborted with -ENOENT
                                                                   because write_one_cache_group()
                                                                   returned that error

So fix this by adding a block group to the list of dirty block groups
before adding it to the list of unused block groups.

This happened on a stress test using fsstress plus concurrent calls to
fallocate 20G and truncate (releasing part of the space allocated with
fallocate).

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Btrfs: fix scrub preventing unused block groups from being deleted
Filipe Manana [Thu, 19 Nov 2015 11:45:48 +0000 (11:45 +0000)]
Btrfs: fix scrub preventing unused block groups from being deleted

Currently scrub can race with the cleaner kthread when the latter attempts
to delete an unused block group, and the result is that the cleaner
kthread is prevented from ever deleting the block group later - unless the
block group becomes used and unused again. The following diagram illustrates that
race:

              CPU 1                                 CPU 2

 cleaner kthread
   btrfs_delete_unused_bgs()

     gets block group X from
     fs_info->unused_bgs and
     removes it from that list

                                             scrub_enumerate_chunks()

                                               searches device tree using
                                               its commit root

                                               finds device extent for
                                               block group X

                                               gets block group X from the tree
                                               fs_info->block_group_cache_tree
                                               (via btrfs_lookup_block_group())

                                               sets bg X to RO

     sees the block group is
     already RO and therefore
     doesn't delete it nor adds
     it back to unused list

So fix this by making scrub add the block group again to the list of
unused block groups if the block group is still unused when it finished
scrubbing it and it hasn't been removed already.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Btrfs: fix race between scrub and block group deletion
Filipe Manana [Thu, 19 Nov 2015 10:57:20 +0000 (10:57 +0000)]
Btrfs: fix race between scrub and block group deletion

Scrub can race with the cleaner kthread deleting block groups that are
unused (and with relocation too), leading to a failure with error -EINVAL
that gets returned to user space.

The following diagram illustrates how it happens:

              CPU 1                                 CPU 2

 cleaner kthread
   btrfs_delete_unused_bgs()

     gets block group X from
     fs_info->unused_bgs

     sets block group to RO

       btrfs_remove_chunk(bg X)

         deletes device extents

                                         scrub_enumerate_chunks()

                                           searches device tree using
                                           its commit root

                                           finds device extent for
                                           block group X

                                           gets block group X from the tree
                                           fs_info->block_group_cache_tree
                                           (via btrfs_lookup_block_group())

                                           sets bg X to RO (again)

          btrfs_remove_block_group(bg X)

            deletes block group from
            fs_info->block_group_cache_tree

            removes extent map from
            fs_info->mapping_tree

                                               scrub_chunk(offset X)

                                                 searches fs_info->mapping_tree
                                                 for extent map starting at
                                                 offset X

                                                    --> doesn't find any such
                                                        extent map
                                                    --> returns -EINVAL and scrub
                                                        errors out to userspace
                                                        with -EINVAL

Fix this by treating an extent map lookup failure as an indicator of
block group deletion.
The issue is reproduced by fstests btrfs/071.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
btrfs: fix rcu warning during device replace
David Sterba [Thu, 19 Nov 2015 10:35:17 +0000 (11:35 +0100)]
btrfs: fix rcu warning during device replace

The test btrfs/011 triggers an RCU warning:
===============================
[ INFO: suspicious RCU usage. ]
4.4.0-rc1-default+ #286 Tainted: G        W
-------------------------------
fs/btrfs/volumes.c:1977 suspicious rcu_dereference_check() usage!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 0
4 locks held by btrfs/28786:

0:  (&fs_info->dev_replace.lock_finishing_cancel_unmount){+.+...}, at: [<ffffffffa00bc785>] btrfs_dev_replace_finishing+0x45/0xa00 [btrfs]
1:  (uuid_mutex){+.+.+.}, at: [<ffffffffa00bc84f>] btrfs_dev_replace_finishing+0x10f/0xa00 [btrfs]
2:  (&fs_devs->device_list_mutex){+.+.+.}, at: [<ffffffffa00bc868>] btrfs_dev_replace_finishing+0x128/0xa00 [btrfs]
3:  (&fs_info->chunk_mutex){+.+...}, at: [<ffffffffa00bc87d>] btrfs_dev_replace_finishing+0x13d/0xa00 [btrfs]

stack backtrace:
CPU: 0 PID: 28786 Comm: btrfs Tainted: G        W       4.4.0-rc1-default+ #286
Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS ASNBCPT1.86C.0031.B00.1006301607 06/30/2010
0000000000000001 ffff8800a07dfb48 ffffffff8141d47b 0000000000000001
0000000000000001 0000000000000000 ffff8801464a4f00 ffff8800a07dfb78
ffffffff810cd883 ffff880146eb9400 ffff8800a3698600 ffff8800a33fe220
Call Trace:
[<ffffffff8141d47b>] dump_stack+0x4f/0x74
[<ffffffff810cd883>] lockdep_rcu_suspicious+0x103/0x140
[<ffffffffa0071261>] btrfs_rm_dev_replace_remove_srcdev+0x111/0x130 [btrfs]
[<ffffffff810d354d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff81449536>] ? __percpu_counter_sum+0x66/0x80
[<ffffffffa00bcc15>] btrfs_dev_replace_finishing+0x4d5/0xa00 [btrfs]
[<ffffffffa00bc96e>] ? btrfs_dev_replace_finishing+0x22e/0xa00 [btrfs]
[<ffffffffa00a8795>] ? btrfs_scrub_dev+0x415/0x6d0 [btrfs]
[<ffffffffa003ea69>] ? btrfs_start_transaction+0x9/0x20 [btrfs]
[<ffffffffa00bda79>] btrfs_dev_replace_start+0x339/0x590 [btrfs]
[<ffffffff81196aa5>] ? __might_fault+0x95/0xa0
[<ffffffffa0078638>] btrfs_ioctl_dev_replace+0x118/0x160 [btrfs]
[<ffffffff811409c6>] ? stack_trace_call+0x46/0x70
[<ffffffffa007c914>] ? btrfs_ioctl+0x24/0x1770 [btrfs]
[<ffffffffa007ce43>] btrfs_ioctl+0x553/0x1770 [btrfs]
[<ffffffff811409c6>] ? stack_trace_call+0x46/0x70
[<ffffffff811d6eb1>] ? do_vfs_ioctl+0x21/0x5a0
[<ffffffff811d6f1c>] do_vfs_ioctl+0x8c/0x5a0
[<ffffffff811e3336>] ? __fget_light+0x86/0xb0
[<ffffffff811e3369>] ? __fdget+0x9/0x20
[<ffffffff811d7451>] ? SyS_ioctl+0x21/0x80
[<ffffffff811d7483>] SyS_ioctl+0x53/0x80
[<ffffffff81b1efd7>] entry_SYSCALL_64_fastpath+0x12/0x6f

This is because of unprotected use of rcu_dereference in
btrfs_scratch_superblocks. We can't add rcu locks around the whole
function because we read the superblock.

The fix uses the rcu string buffer directly without the rcu locking.
This is safe as the device will not go away in the meantime; we're
holding the device list mutexes.

Restructuring the code to narrow down the rcu section turned out to be
impossible: we need to call filp_open (through update_dev_time) on the
buffer and this could call kmalloc/__might_sleep. We could call kstrdup
with GFP_ATOMIC but it's not absolutely necessary.

Fixes: 12b1c2637b6e (Btrfs: enhance btrfs_scratch_superblock to scratch all superblocks)
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
btrfs: Continue replace when set_block_ro failed
Zhaolei [Tue, 17 Nov 2015 10:46:17 +0000 (18:46 +0800)]
btrfs: Continue replace when set_block_ro failed

xfstests/011 failed on a node with a small-sized filesystem.
It can be reproduced with the following script:
  DEV_LIST="/dev/vdd /dev/vde"
  DEV_REPLACE="/dev/vdf"

  do_test()
  {
      local mkfs_opt="$1"
      local size="$2"

      dmesg -c >/dev/null
      umount $SCRATCH_MNT &>/dev/null

      echo  mkfs.btrfs -f $mkfs_opt "${DEV_LIST[*]}"
      mkfs.btrfs -f $mkfs_opt "${DEV_LIST[@]}" || return 1
      mount "${DEV_LIST[0]}" $SCRATCH_MNT

      echo -n "Writing big files"
      dd if=/dev/urandom of=$SCRATCH_MNT/t0 bs=1M count=1 >/dev/null 2>&1
      for ((i = 1; i <= size; i++)); do
          echo -n .
          /bin/cp $SCRATCH_MNT/t0 $SCRATCH_MNT/t$i || return 1
      done
      echo

      echo Start replace
      btrfs replace start -Bf "${DEV_LIST[0]}" "$DEV_REPLACE" $SCRATCH_MNT || {
          dmesg
          return 1
      }
      return 0
  }

  # Set size to value near fs size
  # for example, 1897 can trigger this bug in 2.6G device.
  #
  ./do_test "-d raid1 -m raid1" 1897

System will report replace fail with following warning in dmesg:
 [  134.710853] BTRFS: dev_replace from /dev/vdd (devid 1) to /dev/vdf started
 [  135.542390] BTRFS: btrfs_scrub_dev(/dev/vdd, 1, /dev/vdf) failed -28
 [  135.543505] ------------[ cut here ]------------
 [  135.544127] WARNING: CPU: 0 PID: 4080 at fs/btrfs/dev-replace.c:428 btrfs_dev_replace_start+0x398/0x440()
 [  135.545276] Modules linked in:
 [  135.545681] CPU: 0 PID: 4080 Comm: btrfs Not tainted 4.3.0 #256
 [  135.546439] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
 [  135.547798]  ffffffff81c5bfcf ffff88003cbb3d28 ffffffff817fe7b5 0000000000000000
 [  135.548774]  ffff88003cbb3d60 ffffffff810a88f1 ffff88002b030000 00000000ffffffe4
 [  135.549774]  ffff88003c080000 ffff88003c082588 ffff88003c28ab60 ffff88003cbb3d70
 [  135.550758] Call Trace:
 [  135.551086]  [<ffffffff817fe7b5>] dump_stack+0x44/0x55
 [  135.551737]  [<ffffffff810a88f1>] warn_slowpath_common+0x81/0xc0
 [  135.552487]  [<ffffffff810a89e5>] warn_slowpath_null+0x15/0x20
 [  135.553211]  [<ffffffff81448c88>] btrfs_dev_replace_start+0x398/0x440
 [  135.554051]  [<ffffffff81412c3e>] btrfs_ioctl+0x1d2e/0x25c0
 [  135.554722]  [<ffffffff8114c7ba>] ? __audit_syscall_entry+0xaa/0xf0
 [  135.555506]  [<ffffffff8111ab36>] ? current_kernel_time64+0x56/0xa0
 [  135.556304]  [<ffffffff81201e3d>] do_vfs_ioctl+0x30d/0x580
 [  135.557009]  [<ffffffff8114c7ba>] ? __audit_syscall_entry+0xaa/0xf0
 [  135.557855]  [<ffffffff810011d1>] ? do_audit_syscall_entry+0x61/0x70
 [  135.558669]  [<ffffffff8120d1c1>] ? __fget_light+0x61/0x90
 [  135.559374]  [<ffffffff81202124>] SyS_ioctl+0x74/0x80
 [  135.559987]  [<ffffffff81809857>] entry_SYSCALL_64_fastpath+0x12/0x6f
 [  135.560842] ---[ end trace 2a5c1fc3205abbdd ]---

Reason:
 When a large amount of data is written to the fs, all of the free space
 is allocated to data chunks.
 Operations such as scrub need set_block_ro(), and when there is
 only one metadata chunk in the system (or the other metadata chunks
 are all full), the function tries to allocate a new chunk
 and fails because there is no space left on the device.

Fix:
 When set_block_ro failed for a metadata chunk, it is not a problem,
 because scrub_lock pauses commit_transaction at the same time, and
 metadata is always cowed, so the on-the-fly writepages will not
 write data into the same place that scrub/replace is working on.
 Letting the replace continue in this case is therefore fine.

Tested with the above script, xfstests/011, and 100 runs of xfstests/070.

Changelog v1->v2:
1: Add detailed comments in the source and the commit message.
2: Add dmesg details to the commit message.
3: Only let an -ENOSPC return value be passed through.
All suggested by: Filipe Manana <fdmanana@gmail.com>

Suggested-by: Filipe Manana <fdmanana@gmail.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
btrfs: fix clashing number of the enhanced balance usage filter
David Sterba [Mon, 16 Nov 2015 15:50:13 +0000 (16:50 +0100)]
btrfs: fix clashing number of the enhanced balance usage filter

I've accidentally picked an already used number for the enhanced usage
filter represented by BTRFS_BALANCE_ARGS_USAGE_RANGE, clashing with
BTRFS_BALANCE_ARGS_CONVERT. The clash was introduced during the development
phase, so there are no backward compatibility issues.

Reported-by: Holger Hoffstätte <holger.hoffstaette@googlemail.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: bc3094673f22 ("btrfs: extend balance filter usage to take minimum and maximum")
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Btrfs: fix the number of transaction units needed to remove a block group
Filipe Manana [Fri, 13 Nov 2015 23:57:17 +0000 (23:57 +0000)]
Btrfs: fix the number of transaction units needed to remove a block group

We were using only 1 transaction unit when attempting to delete an unused
block group but in reality we need 3 + N units, where N corresponds to the
number of stripes. We were accounting only for the addition of the orphan
item (for the block group's free space cache inode) but we were not
accounting that we need to delete one block group item from the extent
tree, one free space item from the tree of tree roots and N device extent
items from the device tree.

While one unit is not enough, it worked most of the time because for each
single unit we are too pessimistic and assume an entire tree path, with
the highest possible height (8), needs to be COWed with eventual node
splits at every possible level in the tree, so there was usually enough
reserved space for removing all the items and adding the orphan item.

However, after adding the orphan item, writepages() can be called by the VM
subsystem against the btree inode when we are under memory pressure, which
causes writeback to start for the nodes we COWed before; this forces the
operation that removes the free space item to COW again some (or all of) the
same nodes (in the tree of tree roots). Even without writepages() being
called, we could fail with ENOSPC because these items are located in
multiple trees and one of them might have a greater height and require
node/leaf splits at many levels, exhausting all the reserved space before
removing all the items and adding the orphan.

In the kernel 4.0 release, commit 3d84be799194 ("Btrfs: fix BUG_ON in
btrfs_orphan_add() when delete unused block group"), we attempted to fix
a BUG_ON due to ENOSPC when trying to add the orphan item by making the
cleaner kthread reserve one transaction unit before attempting to remove
the block group, but this was not enough. We had a couple user reports
still hitting the same BUG_ON after 4.0, like Stefan Priebe's report on
a 4.2-rc6 kernel for example:

    http://www.spinics.net/lists/linux-btrfs/msg46070.html

So fix this by reserving all the necessary units of metadata.
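
A back-of-the-envelope sketch of the accounting (illustrative names; the
point is reserving 3 + N units instead of 1):

  /*
   * 1 unit for the orphan item (free space cache inode),
   * 1 unit for deleting the block group item from the extent tree,
   * 1 unit for deleting the free space item from the tree of tree roots,
   * plus 1 unit per device extent (stripe) in the device tree.
   */
  unsigned int num_items = 3 + map->num_stripes;  /* map: the chunk's mapping */

  trans = btrfs_start_transaction(root, num_items);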

Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Fixes: 3d84be799194 ("Btrfs: fix BUG_ON in btrfs_orphan_add() when delete unused block group")
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Btrfs: use global reserve when deleting unused block group after ENOSPC
Filipe Manana [Fri, 13 Nov 2015 23:57:16 +0000 (23:57 +0000)]
Btrfs: use global reserve when deleting unused block group after ENOSPC

It's possible to reach a state where the cleaner kthread isn't able to
start a transaction to delete an unused block group due to a lack of free
metadata space and a lack of unallocated device space from which to allocate
a new metadata block group. If this happens, try to use space from
the global block group reserve just like we do for unlink operations, so
that we don't reach a permanent state where starting a transaction for
filesystem operations (file creation, renames, etc) keeps failing with
-ENOSPC. Such an unfortunate state was observed on a machine where over
a dozen unused data block groups existed and the cleaner kthread was
failing to delete them due to ENOSPC error when attempting to start a
transaction, and even running balance with a -dusage=0 filter failed with
ENOSPC as well. Also unmounting and mounting again the filesystem didn't
help. Allowing the cleaner kthread to use the global block reserve to
delete the unused data block groups fixed the problem.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Btrfs: tests: checking for NULL instead of IS_ERR()
Dan Carpenter [Tue, 10 Nov 2015 09:10:03 +0000 (12:10 +0300)]
Btrfs: tests: checking for NULL instead of IS_ERR()

btrfs_alloc_dummy_root() returns an error pointer on failure; it never
returns NULL.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
btrfs: fix signed overflows in btrfs_sync_file
David Sterba [Mon, 9 Nov 2015 10:44:45 +0000 (11:44 +0100)]
btrfs: fix signed overflows in btrfs_sync_file

The calculation of the range length in btrfs_sync_file leads to a signed
overflow. This was caught by the PaX gcc SIZE_OVERFLOW plugin.

https://forums.grsecurity.net/viewtopic.php?f=1&t=4284

The fsync call passes 0 and LLONG_MAX; the range length does not fit into
loff_t and overflows, but the value is converted to u64 so it silently
works as expected.

The minimal fix is a typecast to u64; switching the functions to take
(start, end) instead of (start, len) would be more intrusive.
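
A standalone illustration of the overflow and the minimal fix (not the
filesystem code itself):

  loff_t start = 0, end = LLONG_MAX;

  /* end - start + 1 == LLONG_MAX + 1: overflows the signed type */
  /* loff_t num = end - start + 1; */

  /* casting to u64 keeps the arithmetic unsigned and yields the intended length */
  u64 num = (u64)end - (u64)start + 1;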

Coccinelle script found that there's one more opencoded calculation of
the length.

<smpl>
@@
loff_t start, end;
@@
* end - start
</smpl>

CC: stable@vger.kernel.org
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
arm64: early_alloc: Fix check for allocation failure
Suzuki K. Poulose [Fri, 20 Nov 2015 17:45:40 +0000 (17:45 +0000)]
arm64: early_alloc: Fix check for allocation failure

In early_alloc we check whether memblock_alloc failed by checking
the virtual address of the result, which is never NULL, so the check
never triggers. This patch fixes it to check the actual result for failure.
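
A sketch of the corrected pattern (assuming the usual early_alloc() shape;
illustrative rather than a verbatim copy of the patch):

  static void *early_alloc(unsigned long sz)
  {
          phys_addr_t phys = memblock_alloc(sz, sz);
          void *ptr;

          BUG_ON(!phys);          /* check the allocation itself ... */
          ptr = __va(phys);       /* ... not the translated virtual address */
          memset(ptr, 0, sz);
          return ptr;
  }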

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
rtc: ds1307: fix kernel splat due to wakeup irq handling
Felipe Balbi [Wed, 11 Nov 2015 16:11:01 +0000 (10:11 -0600)]
rtc: ds1307: fix kernel splat due to wakeup irq handling

Since commit 3fffd1283927 ("i2c: allow specifying
separate wakeup interrupt in device tree") we have
automatic wakeup irq support for i2c devices. That
commit missed the fact that rtc-ds1307 had its own
wakeup irq handling and ended up introducing a
kernel splat for at least Beagle x15 boards.

Fix that by reverting the original commit _and_ passing
the correct interrupt names in the DTS so the i2c core can
choose the correct IRQ as the wakeup one.

Now that we have automatic wakeirq support, we can
revert the original commit which did it manually.

Fixes the following warning:

[   10.346582] WARNING: CPU: 1 PID: 263 at linux/drivers/base/power/wakeirq.c:43 dev_pm_attach_wake_irq+0xbc/0xd4()
[   10.359244] rtc-ds1307 2-006f: wake irq already initialized

Cc: Tony Lindgren <tony@atomide.com>
Cc: Nishanth Menon <nm@ti.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
drm/nouveau/volt/pwm/gk104: fix an off-by-one resulting in the voltage not being set
Martin Peres [Thu, 5 Nov 2015 08:07:38 +0000 (09:07 +0100)]
drm/nouveau/volt/pwm/gk104: fix an off-by-one resulting in the voltage not being set

Reported-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Martin Peres <martin.peres@free.fr>
drm/nouveau/nvif: allow userspace access to its own client object
Ben Skeggs [Tue, 24 Nov 2015 05:34:51 +0000 (15:34 +1000)]
drm/nouveau/nvif: allow userspace access to its own client object

Regression from "abi16: implement limited interoperability with
usif/nvif".

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/gr/gf100-: fix oops when calling zbc methods
Ben Skeggs [Wed, 25 Nov 2015 02:39:01 +0000 (12:39 +1000)]
drm/nouveau/gr/gf100-: fix oops when calling zbc methods

Somehow missed these two when removing dodgy void casts during the
rework.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/gr/gf117-: assume no PPC if NV_PGRAPH_GPC_GPM_PD_PES_TPC_ID_MASK is zero
Ben Skeggs [Sun, 22 Nov 2015 19:47:19 +0000 (05:47 +1000)]
drm/nouveau/gr/gf117-: assume no PPC if NV_PGRAPH_GPC_GPM_PD_PES_TPC_ID_MASK is zero

fdo#92761

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/gr/gf117-: read NV_PGRAPH_GPC_GPM_PD_PES_TPC_ID_MASK from correct GPC
Ben Skeggs [Sun, 22 Nov 2015 19:31:51 +0000 (05:31 +1000)]
drm/nouveau/gr/gf117-: read NV_PGRAPH_GPC_GPM_PD_PES_TPC_ID_MASK from correct GPC

Each GPCCS unit was reading the mask from GPC0, which causes problems on
boards where some GPCs are missing PPCs.

Part of the fix for fdo#92761.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/gr/gf100-: split out per-gpc address calculation macro
Ben Skeggs [Sun, 22 Nov 2015 19:24:32 +0000 (05:24 +1000)]
drm/nouveau/gr/gf100-: split out per-gpc address calculation macro

There are a few places where we need to access a GPC register from ucode,
but outside of the falcon's io address space.  To do this we need to
calculate the offset based on which GPC we're executing on.

This used to be done manually, but we've since found a "base" offset
that can be added by the hardware.  To use this, an extra bit needs to
be set in the register address, which is what this macro achieves.

There should be no functional change from this commit.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/bios: return actual size of the buffer retrieved via _ROM
Ben Skeggs [Thu, 19 Nov 2015 03:18:34 +0000 (13:18 +1000)]
drm/nouveau/bios: return actual size of the buffer retrieved via _ROM

Fixes detection of a failed attempt at fetching the entire ROM image
in one shot (a violation of the spec that works a lot of the time).

Tested on a HP Zbook 15 G2.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/instmem: protect instobj list with a spinlock
Ben Skeggs [Tue, 10 Nov 2015 23:48:13 +0000 (09:48 +1000)]
drm/nouveau/instmem: protect instobj list with a spinlock

No locking is required for the traversal of this list, as it only
happens during suspend/resume where nothing else can be executing.

Fixes some of the issues noticed during parallel piglit runs.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/pci: enable c800 magic for some unknown Samsung laptop
Ben Skeggs [Thu, 5 Nov 2015 01:00:29 +0000 (11:00 +1000)]
drm/nouveau/pci: enable c800 magic for some unknown Samsung laptop

fdo#70354 - comment #88.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
drm/nouveau/pci: enable c800 magic for Clevo P157SM
Karol Herbst [Tue, 3 Nov 2015 22:16:04 +0000 (23:16 +0100)]
drm/nouveau/pci: enable c800 magic for Clevo P157SM

This is needed for my GPU.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
KEYS: Fix handling of stored error in a negatively instantiated user key
David Howells [Tue, 24 Nov 2015 21:36:31 +0000 (21:36 +0000)]
KEYS: Fix handling of stored error in a negatively instantiated user key

If a user key gets negatively instantiated, an error code is cached in the
payload area.  A negatively instantiated key may then be positively
instantiated by updating it with valid data.  However, the ->update key
type method must be aware that the error code may be there.

The following may be used to trigger the bug in the user key type:

    keyctl request2 user user "" @u
    keyctl add user user "a" @u

which manifests itself as:

BUG: unable to handle kernel paging request at 00000000ffffff8a
IP: [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280 kernel/rcu/tree.c:3046
PGD 7cc30067 PUD 0
Oops: 0002 [#1] SMP
Modules linked in:
CPU: 3 PID: 2644 Comm: a.out Not tainted 4.3.0+ #49
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff88003ddea700 ti: ffff88003dd88000 task.ti: ffff88003dd88000
RIP: 0010:[<ffffffff810a376f>]  [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280
 [<ffffffff810a376f>] __call_rcu.constprop.76+0x1f/0x280 kernel/rcu/tree.c:3046
RSP: 0018:ffff88003dd8bdb0  EFLAGS: 00010246
RAX: 00000000ffffff82 RBX: 0000000000000000 RCX: 0000000000000001
RDX: ffffffff81e3fe40 RSI: 0000000000000000 RDI: 00000000ffffff82
RBP: ffff88003dd8bde0 R08: ffff88007d2d2da0 R09: 0000000000000000
R10: 0000000000000000 R11: ffff88003e8073c0 R12: 00000000ffffff82
R13: ffff88003dd8be68 R14: ffff88007d027600 R15: ffff88003ddea700
FS:  0000000000b92880(0063) GS:ffff88007fd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000000ffffff8a CR3: 000000007cc5f000 CR4: 00000000000006e0
Stack:
 ffff88003dd8bdf0 ffffffff81160a8a 0000000000000000 00000000ffffff82
 ffff88003dd8be68 ffff88007d027600 ffff88003dd8bdf0 ffffffff810a39e5
 ffff88003dd8be20 ffffffff812a31ab ffff88007d027600 ffff88007d027620
Call Trace:
 [<ffffffff810a39e5>] kfree_call_rcu+0x15/0x20 kernel/rcu/tree.c:3136
 [<ffffffff812a31ab>] user_update+0x8b/0xb0 security/keys/user_defined.c:129
 [<     inline     >] __key_update security/keys/key.c:730
 [<ffffffff8129e5c1>] key_create_or_update+0x291/0x440 security/keys/key.c:908
 [<     inline     >] SYSC_add_key security/keys/keyctl.c:125
 [<ffffffff8129fc21>] SyS_add_key+0x101/0x1e0 security/keys/keyctl.c:60
 [<ffffffff8185f617>] entry_SYSCALL_64_fastpath+0x12/0x6a arch/x86/entry/entry_64.S:185

Note the error code (-ENOKEY) in EDX.

A similar bug can be tripped by:

    keyctl request2 trusted user "" @u
    keyctl add trusted user "a" @u

This should also affect encrypted keys - but that has to be correctly
parameterised or it will fail with EINVAL before getting to the bit that
crashes.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
block: fix blk_abort_request for blk-mq drivers
Christoph Hellwig [Fri, 30 Oct 2015 12:47:04 +0000 (20:47 +0800)]
block: fix blk_abort_request for blk-mq drivers

We only added the request to the request list for the !blk-mq case,
so we should only delete it in that case as well.
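
In sketch form (illustrative; the legacy request path is the only one that
put the request on the queue's timeout list):

  /* only the legacy (!mq) path added the request to the timeout list */
  if (!req->q->mq_ops)
          list_del_init(&req->timeout_list);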

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
nvme: add missing unmaps in nvme_queue_rq
Christoph Hellwig [Fri, 16 Oct 2015 05:58:31 +0000 (07:58 +0200)]
nvme: add missing unmaps in nvme_queue_rq

When we fail various metadata-related operations in nvme_queue_rq we
need to unmap the data SGL.

Cc: stable@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
NVMe: default to 4k device page size
Nishanth Aravamudan [Tue, 24 Nov 2015 16:55:05 +0000 (09:55 -0700)]
NVMe: default to 4k device page size

We received a bug report recently when DDW (64-bit direct DMA on Power)
is not enabled for NVMe devices. In that case, we fall back to 32-bit
DMA via the IOMMU, which is always done via 4K TCEs (Translation Control
Entries).

The NVMe device driver, though, assumes that the DMA alignment for the
PRP entries will match the device's page size, and that the DMA alignment
matches the kernel's page alignment. On Power, the IOMMU page size,
as mentioned above, can be 4K, while the device can have a page size of
8K and the kernel a page size of 64K. This eventually trips the
BUG_ON in nvme_setup_prps(), as we have a 'dma_len' that is a multiple
of 4K but not of 8K (e.g., 0xF000).

In this particular case of page sizes, we clearly want to use the
IOMMU's page size in the driver. And generally, the NVMe driver in this
function should be using the IOMMU's page size for the default device
page size, rather than the kernel's page size. There is not currently an
API to obtain the IOMMU's page size across all architectures and in the
interest of a stop-gap fix to this functional issue, default the NVMe
device page size to 4K, with the intent of adding such an API and
implementation across all architectures in the next merge window.
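
A sketch of the stop-gap (illustrative, assuming a CAP.MPSMIN accessor like
NVME_CAP_MPSMIN() and writing 'cap' for the controller capabilities register;
the idea is starting from a 4K shift rather than PAGE_SHIFT, clamped to what
the controller supports):

  unsigned page_shift = 12;                            /* default to 4K pages */
  unsigned dev_page_min = NVME_CAP_MPSMIN(cap) + 12;   /* controller minimum  */

  if (page_shift < dev_page_min)
          page_shift = dev_page_min;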

With the functionally equivalent v3 of this patch, our hardware test
exerciser survives when using 32-bit DMA; without the patch, the kernel
will BUG within a few minutes.

Signed-off-by: Nishanth Aravamudan <nacc at linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
PCI: hisi: Fix deferred probing
Arnd Bergmann [Tue, 24 Nov 2015 21:38:07 +0000 (15:38 -0600)]
PCI: hisi: Fix deferred probing

The hisi_pcie_probe() function is incorrectly marked as __init, as the
build warns:

  WARNING: drivers/pci/host/built-in.o(.data+0x7780): Section mismatch in reference from the variable hisi_pcie_driver to the function .init.text:hisi_pcie_probe()

If the probe for this device gets deferred past the point where __init
functions are removed, or the device is unbound and then reattached to the
driver, we branch into uninitialized memory, which is bad.

Remove the __init annotation from hisi_pcie_probe() and
hisi_add_pcie_port().

Fixes: 500a1d9a43e0 ("PCI: hisi: Add HiSilicon SoC Hip05 PCIe driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Zhou Wang <wangzhou1@hisilicon.com>
Merge tag 'dm-4.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device...
Linus Torvalds [Tue, 24 Nov 2015 20:53:11 +0000 (12:53 -0800)]
Merge tag 'dm-4.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper fixes from Mike Snitzer:
 "Two fixes for 4.4-rc1's DM ioctl changes that introduced the potential
  for infinite recursion on ioctl (with DM multipath).

  And four stable fixes:

   - A DM thin-provisioning fix to restore 'error_if_no_space' setting
     when a thin-pool is made writable again (after having been out of
     space).

   - A DM thin-provisioning fix to properly advertise discard support
     for thin volumes that are stacked on a thin-pool whose underlying
     data device doesn't support discards.

   - A DM ioctl fix to allow ctrl-c to break out of an ioctl retry loop
     when DM multipath is configured to 'queue_if_no_path'.

   - A DM crypt fix for a possible hang on dm-crypt device removal"

* tag 'dm-4.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm thin: fix regression in advertised discard limits
  dm crypt: fix a possible hang due to race condition on exit
  dm mpath: fix infinite recursion in ioctl when no paths and !queue_if_no_path
  dm: do not reuse dm_blk_ioctl block_device input as local variable
  dm: fix ioctl retry termination with signal
  dm thin: restore requested 'error_if_no_space' setting on OODS to WRITE transition

PCI: designware: Remove incorrect io_base assignment
Stanimir Varbanov [Tue, 24 Nov 2015 09:36:52 +0000 (11:36 +0200)]
PCI: designware: Remove incorrect io_base assignment

"pp->io" is an I/O resource, e.g., "[io 0x0000-0xffff]"; "pp->io_base" is
the CPU physical address of a region where the host bridge converts CPU
memory accesses into PCI I/O transactions.

Corrupting pp->io_base by assigning pp->io->start to it breaks access to
the PCI I/O space, as reported by Kishon.

Remove the invalid assignment.

[bhelgaas: changelog]
Fixes: 0021d22b73d6 ("PCI: designware: Use of_pci_get_host_bridge_resources() to parse DT")
Reported-and-tested-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
pidns: fix NULL dereference in __task_pid_nr_ns()
Eric Dumazet [Tue, 24 Nov 2015 19:39:54 +0000 (11:39 -0800)]
pidns: fix NULL dereference in __task_pid_nr_ns()

I got a crash during a "perf top" session that was caused by a race in
__task_pid_nr_ns():

pid_nr_ns() was inlined, but apparently the compiler chose to read
task->pids[type].pid twice, and the pid->level dereference crashed
because we got a NULL pointer at the second read:

    if (pid && ns->level <= pid->level) { // CRASH

Just use the RCU API properly to solve this race, and stop worrying about
"perf top" crashing hosts :(

get_task_pid() can benefit from the same fix.
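
In sketch form (illustrative of the pattern: read the pid pointer once,
through the RCU API, instead of letting the compiler re-load it):

  pid_t nr = 0;
  struct pid *pid;

  rcu_read_lock();
  pid = rcu_dereference(task->pids[type].pid);  /* single, tagged read */
  if (pid)
          nr = pid_nr_ns(pid, ns);
  rcu_read_unlock();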

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ALSA: hda - Fix noise on Gigabyte Z170X mobo
Takashi Iwai [Tue, 24 Nov 2015 19:02:12 +0000 (20:02 +0100)]
ALSA: hda - Fix noise on Gigabyte Z170X mobo

The Gigabyte Z170X mobo with the ALC1150 codec gets significant noise from
the analog loopback routes even if their inputs are all muted.
Simply kill the aamix to fix it.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=108301
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
selinux: fix bug in conditional rules handling
Stephen Smalley [Mon, 23 Nov 2015 21:07:41 +0000 (16:07 -0500)]
selinux: fix bug in conditional rules handling

commit fa1aa143ac4a ("selinux: extended permissions for ioctls")
introduced a bug into the handling of conditional rules, skipping the
processing entirely when the caller does not provide an extended
permissions (xperms) structure.  Access checks from userspace using
/sys/fs/selinux/access do not include such a structure since that
interface does not presently expose extended permission information.
As a result, conditional rules were being ignored entirely on userspace
access requests, producing denials when access was allowed by
conditional rules in the policy.  Fix the bug by only skipping
computation of extended permissions in this situation, not the entire
conditional rules processing.

Reported-by: Laurent Bigonville <bigon@debian.org>
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
[PM: fixed long lines in patch description]
Cc: stable@vger.kernel.org # 4.3
Signed-off-by: Paul Moore <pmoore@redhat.com>
8 years agoMerge tag 'kvm-arm-for-v4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Tue, 24 Nov 2015 18:34:40 +0000 (19:34 +0100)]
Merge tag 'kvm-arm-for-v4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-master

KVM/ARM Fixes for v4.4-rc3.

Includes some timer fixes, properly unmapping PTEs, an errata fix, and two
tweaks to the EL2 panic code.

8 years agoPCI: Prevent out of bounds access in numa_node override
Mathias Krause [Mon, 9 Nov 2015 19:00:27 +0000 (20:00 +0100)]
PCI: Prevent out of bounds access in numa_node override

Commit 1266963170f5 ("PCI: Prevent out of bounds access in numa_node
override") missed that the user-provided node could also be negative.
Handle this case as well to avoid out-of-bounds accesses to the
node_states[] array.  However, allow the special value -1, i.e.
NUMA_NO_NODE, to be able to set the 'no specific node' configuration.
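The resulting check looks roughly like the following (an illustrative
sketch, not the exact hunk):

    /* Illustrative sketch of the sysfs store-side validation. */
    if (node != NUMA_NO_NODE &&
        (node < 0 || node >= MAX_NUMNODES || !node_online(node)))
        return -EINVAL;  /* reject everything but -1 and valid online nodes */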

Fixes: 1266963170f5 ("PCI: Prevent out of bounds access in numa_node override")
Fixes: 63692df103e9 ("PCI: Allow numa_node override via sysfs")
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Sasha Levin <sasha.levin@oracle.com>
CC: Prarit Bhargava <prarit@redhat.com>
CC: stable@vger.kernel.org # v3.19+
8 years agoMerge branch 'for-linus' of git://git.kernel.dk/linux-block
Linus Torvalds [Tue, 24 Nov 2015 18:26:30 +0000 (10:26 -0800)]
Merge branch 'for-linus' of git://git.kernel.dk/linux-block

Pull block layer fixes from Jens Axboe:
 "A round of fixes/updates for the current series.

  This looks a little bigger than it is, but that's mainly because we
  pushed the lightnvm enabled null_blk change out of the merge window so
  it could be updated a bit.  The rest of the volume is also mostly
  lightnvm.  In particular:

   - Lightnvm.  Various fixes, additions, updates from Matias and
     Javier, as well as from Wenwei Tao.

   - NVMe:
        - Fix for potential arithmetic overflow from Keith.
        - Also from Keith, ensure that we reap pending completions from
          a completion queue before deleting it.  Fixes kernel crashes
          when resetting a device with IO pending.
        - Various little lightnvm related tweaks from Matias.

   - Fixup flushes to go through the IO scheduler, for the cases where a
     flush is not required.  Fixes a case in CFQ where we would be
     idling and not see this request, hence not break the idling.  From
     Jan Kara.

   - Use list_{first,prev,next} in elevator.c for cleaner code.  From
     Gelian Tang.

   - Fix for a warning trigger on btrfs and raid on single queue blk-mq
     devices, where we would flush plug callbacks with preemption
     disabled.  From me.

   - A mac partition validation fix from Kees Cook.

   - Two merge fixes from Ming, marked stable.  A third part is adding a
     new warning so we'll notice this quicker in the future, if we screw
     up the accounting.

   - Cleanup of thread name/creation in mtip32xx from Rasmus Villemoes"

* 'for-linus' of git://git.kernel.dk/linux-block: (32 commits)
  blk-merge: warn if figured out segment number is bigger than nr_phys_segments
  blk-merge: fix blk_bio_segment_split
  block: fix segment split
  blk-mq: fix calling unplug callbacks with preempt disabled
  mac: validate mac_partition is within sector
  mtip32xx: use formatting capability of kthread_create_on_node
  NVMe: reap completion entries when deleting queue
  lightnvm: add free and bad lun info to show luns
  lightnvm: keep track of block counts
  nvme: lightnvm: use admin queues for admin cmds
  lightnvm: missing free on init error
  lightnvm: wrong return value and redundant free
  null_blk: do not del gendisk with lightnvm
  null_blk: use device addressing mode
  null_blk: use ppa_cache pool
  NVMe: Fix possible arithmetic overflow for max segments
  blk-flush: Queue through IO scheduler when flush not required
  null_blk: register as a LightNVM device
  elevator: use list_{first,prev,next}_entry
  lightnvm: cleanup queue before target removal
  ...

8 years agosoc: Mediatek: Enable SCPSYS power domain driver by default
Eddie Huang [Fri, 13 Nov 2015 10:50:35 +0000 (18:50 +0800)]
soc: Mediatek: Enable SCPSYS power domain driver by default

If the Mediatek 8173 SoC is enabled, the power domain driver should also
be enabled; otherwise accessing the clk subsystem registers will fail.

Signed-off-by: Eddie Huang <eddie.huang@mediatek.com>
Acked-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
8 years agodrm/radeon: make rv770_set_sw_state failures non-fatal
Alex Deucher [Mon, 23 Nov 2015 21:43:29 +0000 (16:43 -0500)]
drm/radeon: make rv770_set_sw_state failures non-fatal

On some cards it takes a relatively long time for the change
to take place.  Make a timeout non-fatal.

bug:
https://bugs.freedesktop.org/show_bug.cgi?id=76130

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
8 years agoarm64: kvm: report original PAR_EL1 upon panic
Mark Rutland [Mon, 16 Nov 2015 13:58:29 +0000 (13:58 +0000)]
arm64: kvm: report original PAR_EL1 upon panic

If we call __kvm_hyp_panic while a guest context is active, we call
__restore_sysregs before acquiring the system register values for the
panic, in the process throwing away the PAR_EL1 value at the point of
the panic.

This patch modifies __kvm_hyp_panic to stash the PAR_EL1 value prior to
restoring host register values, enabling us to report the original
values at the point of the panic.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoarm64: kvm: avoid %p in __kvm_hyp_panic
Mark Rutland [Mon, 16 Nov 2015 13:55:51 +0000 (13:55 +0000)]
arm64: kvm: avoid %p in __kvm_hyp_panic

Currently __kvm_hyp_panic uses %p for values which are not pointers,
such as the ESR value. This can confusingly lead to "(null)" being
printed for the value.

Use %x instead, and only use %p for host pointers.
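
A hypothetical illustration of the difference (the values and format strings
are made up, not the actual __kvm_hyp_panic output):

    unsigned long esr = 0;    /* hypothetical syndrome value */

    pr_err("esr=%p\n", (void *)esr);    /* value is hidden: "esr=(null)" */
    pr_err("esr=0x%08lx\n", esr);       /* stays readable: "esr=0x00000000" */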

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoKVM: arm/arm64: vgic: Trust the LR state for HW IRQs
Christoffer Dall [Tue, 24 Nov 2015 15:34:31 +0000 (16:34 +0100)]
KVM: arm/arm64: vgic: Trust the LR state for HW IRQs

We were probing the physical distributor state for the active state of a
HW virtual IRQ, because we had seen evidence that the LR state was not
cleared when the guest deactivated a virtual interrupt.

However, this issue turned out to be a software bug in the GIC, which
was solved by: 84aab5e68c2a5e1e18d81ae8308c3ce25d501b29
(KVM: arm/arm64: arch_timer: Preserve physical dist. active
state on LR.active, 2015-11-24)

Therefore, get rid of the complexities and just look at the LR.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoKVM: arm/arm64: arch_timer: Preserve physical dist. active state on LR.active
Christoffer Dall [Tue, 24 Nov 2015 15:23:05 +0000 (16:23 +0100)]
KVM: arm/arm64: arch_timer: Preserve physical dist. active state on LR.active

We were incorrectly removing the active state from the physical
distributor on the timer interrupt when the timer output level was
deasserted.  We shouldn't be doing this without considering the virtual
interrupt's active state, because the architecture requires that when an
LR has the HW bit set and the pending or active bits set, then the
physical interrupt must also have the corresponding bits set.

This addresses an issue where we have been observing an inconsistency
between the LR state and the physical distributor state where the LR
state was active and the physical distributor was not active, which
shouldn't happen.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoKVM: arm/arm64: Fix preemptible timer active state crazyness
Christoffer Dall [Tue, 24 Nov 2015 09:31:07 +0000 (10:31 +0100)]
KVM: arm/arm64: Fix preemptible timer active state crazyness

We were setting the physical active state on the GIC distributor in a
preemptible section, which could cause us to set the active state on a
different physical CPU from the one we were actually going to run on;
havoc ensues.

Since we are no longer descheduling/scheduling soft timers in the
flush/sync timer functions, simply move the timer flush into a
non-preemptible section.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoarm64: KVM: Add workaround for Cortex-A57 erratum 834220
Marc Zyngier [Mon, 16 Nov 2015 10:28:18 +0000 (10:28 +0000)]
arm64: KVM: Add workaround for Cortex-A57 erratum 834220

Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults
when a Stage 1 permission fault or device alignment fault should
have been reported.

This patch implements the workaround (which is to validate that the
Stage-1 translation actually succeeds) by using code patching.

Cc: stable@vger.kernel.org
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoarm64: KVM: Fix AArch32 to AArch64 register mapping
Marc Zyngier [Mon, 16 Nov 2015 10:28:17 +0000 (10:28 +0000)]
arm64: KVM: Fix AArch32 to AArch64 register mapping

When running a 32bit guest under a 64bit hypervisor, the ARMv8
architecture defines a mapping of the 32bit registers in the 64bit
space. This includes banked registers that are being demultiplexed
over the 64bit ones.

On exceptions caused by an operation involving a 32bit register, the
HW exposes the register number in the ESR_EL2 register. It was so
far understood that SW had to distinguish between AArch32 and AArch64
accesses (based on the current AArch32 mode and register number).

It turns out that I misinterpreted the ARM ARM, and the clue is in
D1.20.1: "For some exceptions, the exception syndrome given in the
ESR_ELx identifies one or more register numbers from the issued
instruction that generated the exception. Where the exception is
taken from an Exception level using AArch32 these register numbers
give the AArch64 view of the register."

Which means that the HW is already giving us the translated version,
and that we shouldn't try to interpret it at all (for example, doing
an MMIO operation from the IRQ mode using the LR register leads to
very unexpected behaviours).

The fix is thus not to perform a call to vcpu_reg32() at all from
vcpu_reg(), and use whatever register number is supplied directly.
The only case we need to find out about the mapping is when we
actively generate a register access, which only occurs when injecting
a fault in a guest.

Cc: stable@vger.kernel.org
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoARM/arm64: KVM: test properly for a PTE's uncachedness
Ard Biesheuvel [Tue, 10 Nov 2015 14:11:20 +0000 (15:11 +0100)]
ARM/arm64: KVM: test properly for a PTE's uncachedness

The open coded tests for checking whether a PTE maps a page as
uncached use a flawed '(pte_val(xxx) & CONST) != CONST' pattern,
which is not guaranteed to work since the type of a mapping is
not a set of mutually exclusive bits.

For HYP mappings, the type is an index into the MAIR table (i.e., the
index itself does not contain any information whatsoever about the
type of the mapping), and for stage-2 mappings it is a bit field where
normal memory and device types are defined as follows:

    #define MT_S2_NORMAL            0xf
    #define MT_S2_DEVICE_nGnRE      0x1

I.e., masking *and* comparing with the latter matches on the former,
and we have been getting lucky merely because the S2 device mappings
also have the PTE_UXN bit set, or we would misidentify memory mappings
as device mappings.
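
A tiny standalone demonstration of the false positive (plain userspace C,
not kernel code):

    #include <stdio.h>

    #define MT_S2_NORMAL        0xf
    #define MT_S2_DEVICE_nGnRE  0x1

    int main(void)
    {
        /* The flawed "is this a device mapping?" test applied to a
         * normal-memory attribute value. */
        if ((MT_S2_NORMAL & MT_S2_DEVICE_nGnRE) == MT_S2_DEVICE_nGnRE)
            printf("normal memory misidentified as device\n");
        return 0;
    }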

Since the unmap_range() code path (which contains one instance of the
flawed test) is used both for HYP mappings and stage-2 mappings, and
considering the difference between the two, it is non-trivial to fix
this by rewriting the tests in place, as it would involve passing
down the type of mapping through all the functions.

However, since HYP mappings and stage-2 mappings both deal with host
physical addresses, we can simply check whether the mapping is backed
by memory that is managed by the host kernel, and only perform the
D-cache maintenance if this is the case.

Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Pavel Fedin <p.fedin@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
8 years agoARM: dts: vfxxx: Fix dspi[01] spi-num-chipselects.
Cory Tusar [Thu, 19 Nov 2015 03:54:39 +0000 (22:54 -0500)]
ARM: dts: vfxxx: Fix dspi[01] spi-num-chipselects.

Per the Vybrid Reference Manual (section 3.8.6.1), dspi0 has 6 chip
select signals associated with it, while dspi1 has only 4.

Signed-off-by: Cory Tusar <cory.tusar@pid1solutions.com>
Acked-by: Stefan Agner <stefan@agner.ch>
Cc: <stable@vger.kernel.org>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
8 years agoALSA: hda - Fix headphone noise after Dell XPS 13 resume back from S3
Hui Wang [Tue, 24 Nov 2015 03:08:18 +0000 (11:08 +0800)]
ALSA: hda - Fix headphone noise after Dell XPS 13 resume back from S3

We have a Dell XPS 13 machine with the alc256 codec; after resuming from
S3, the headphone output is noisy during playback.

By comparing the coeff values before and after S3, we found that
restoring a coeff register helps remove the noise.

BugLink: https://bugs.launchpad.net/bugs/1519168
Cc: Kailang Yang <kailang@realtek.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
8 years agoblk-merge: warn if figured out segment number is bigger than nr_phys_segments
Ming Lei [Tue, 24 Nov 2015 02:35:31 +0000 (10:35 +0800)]
blk-merge: warn if figured out segment number is bigger than nr_phys_segments

We have seen lots of reports of this kind of issue, so add a
warning in blk-merge so that it can be triggered easily, instead of
depending on warnings/bugs from drivers.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
8 years agoblk-merge: fix blk_bio_segment_split
Ming Lei [Tue, 24 Nov 2015 02:35:30 +0000 (10:35 +0800)]
blk-merge: fix blk_bio_segment_split

Commit bdced438acd83a ("block: setup bi_phys_segments after
splitting") introduced the computation of bio->bi_phys_segments
during bio splitting.

Unfortunately neither bio->bi_seg_front_size nor bio->bi_seg_back_size
is computed, so too many physical segments may be obtained
for one request, since both values are used to check whether one
segment can span two bios.

This patch fixes the issue by computing the two variables in
blk_bio_segment_split().

Fixes: bdced438acd83a(block: setup bi_phys_segments after splitting)
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
8 years agoblock: fix segment split
Ming Lei [Tue, 24 Nov 2015 02:35:29 +0000 (10:35 +0800)]
block: fix segment split

Inside blk_bio_segment_split(), the previous bvec pointer (bvprvp)
always points to the iterator's local variable, which is obviously
wrong; fix it by pointing it to the local variable 'bvprv'.

Fixes: 5014c311baa2b(block: fix bogus compiler warnings in blk-merge.c)
Cc: stable@kernel.org #4.3
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
8 years agonfs4: limit callback decoding to received bytes
Benjamin Coddington [Fri, 20 Nov 2015 14:55:30 +0000 (09:55 -0500)]
nfs4: limit callback decoding to received bytes

A truncated cb_compound request will cause the client to decode null or
data from a previous callback for nfs4.1 backchannel case, or uninitialized
data for the nfs4.0 case. This is because the path through
svc_process_common() advances the request's iov_base and decrements iov_len
without adjusting the overall xdr_buf's len field.  That causes
xdr_init_decode() to set up the xdr_stream with an incorrect length in
nfs4_callback_compound().

Fixing this for the nfs4.1 backchannel case first requires setting the
correct iov_len and page_len based on the length of received data in the
same manner as the nfs4.0 case.

Then the request's xdr_buf length can be adjusted for both cases based upon
the remaining iov_len and page_len.

Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agonfs4: start callback_ident at idr 1
Benjamin Coddington [Fri, 20 Nov 2015 14:56:20 +0000 (09:56 -0500)]
nfs4: start callback_ident at idr 1

If clp->cl_cb_ident is zero, then nfs_cb_idr_remove_locked() skips removing
it when the nfs_client is freed.  A decoding or server bug can then find
and try to put that first nfs_client which would lead to a crash.

Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Fixes: d6870312659d ("nfs4client: convert to idr_alloc()")
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agonfs: use sliding delay when LAYOUTGET gets NFS4ERR_DELAY
Jeff Layton [Thu, 19 Nov 2015 19:30:26 +0000 (14:30 -0500)]
nfs: use sliding delay when LAYOUTGET gets NFS4ERR_DELAY

When LAYOUTGET gets NFS4ERR_DELAY, we currently will wait 15s before
retrying the call. That is a _very_ long time, so add a timeout value to
struct nfs4_layoutget and pass nfs4_async_handle_error a pointer to it.
This allows the RPC engine to use a sliding delay window, instead of a
15s delay.

Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agoNFS4: Cleanup FATTR4_WORD0_FS_LOCATIONS after decoding success
Kinglong Mee [Wed, 18 Nov 2015 02:39:26 +0000 (10:39 +0800)]
NFS4: Cleanup FATTR4_WORD0_FS_LOCATIONS after decoding success

Commit 1ca843a2d2 "nfs: Fix GETATTR bitmap verification" has check
the bitmap after decoding success, but decode_attr_fs_locations forgets
cleanup the FATTR4_WORD0_FS_LOCATIONS bits.

decode_getfattr_attrs() now always returns -EIO when it meets FS_LOCATIONS:

ls: cannot access /mnt/referal: Input/output error
ls: cannot access /mnt/replicas: Input/output error
total 32
drwxr-xr-x. 7 root root 8192 Nov 16 20:36 pnfs
??????????? ? ?    ?       ?            ? referal
??????????? ? ?    ?       ?            ? replicas

v2: clear the bit earlier

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agoNFS: Properly set NFS v4.2 NFSDBG_FACILITY
Anna Schumaker [Mon, 16 Nov 2015 19:51:07 +0000 (14:51 -0500)]
NFS: Properly set NFS v4.2 NFSDBG_FACILITY

NFS v4.2 operations can work outside of pNFS, so dprintk() output
shouldn't be placed under NFSDBG_PNFS.

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agonfs: reduce the amount of ifdefs for v4.2 in nfs4file.c
Christoph Hellwig [Fri, 13 Nov 2015 08:38:49 +0000 (09:38 +0100)]
nfs: reduce the amount of ifdefs for v4.2 in nfs4file.c

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agonfs: use btrfs ioctl defintions for clone
Christoph Hellwig [Fri, 13 Nov 2015 08:38:48 +0000 (09:38 +0100)]
nfs: use btrfs ioctl defintions for clone

The NFS CLONE_RANGE definition was wrong and thus never worked.  Fix this
by simply using the btrfs ioctl definition.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agonfs: allow intra-file CLONE
Christoph Hellwig [Fri, 13 Nov 2015 08:38:47 +0000 (09:38 +0100)]
nfs: allow intra-file CLONE

Originally CLONE didn't allow for intra-file clones, but we recently
updated the spec to support this feature which is also supported by
local Linux file systems.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agonfs: offer native ioctls even if CONFIG_COMPAT is set
Christoph Hellwig [Fri, 13 Nov 2015 08:38:46 +0000 (09:38 +0100)]
nfs: offer native ioctls even if CONFIG_COMPAT is set

Without this for example 64-bit binaries on typical amd64 distributions
would not be able to use ioctls on NFS.  For now this only affects clones.
Additionally ->compat_ioctl is defined even for non-compat builds, so
get rid of the pointless ifdef.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agonfs: pass on count for CLONE operations
Christoph Hellwig [Fri, 13 Nov 2015 08:38:45 +0000 (09:38 +0100)]
nfs: pass on count for CLONE operations

Currently we pass uninitialized stack garbage in the count parameter.
The value is usually large enough to clone whole files and thus lets
simple tests pass, but it makes the tests for range clones very unhappy.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
8 years agovfs: Avoid softlockups with sendfile(2)
Jan Kara [Mon, 23 Nov 2015 12:09:51 +0000 (13:09 +0100)]
vfs: Avoid softlockups with sendfile(2)

The following test program from Dmitry can cause softlockups or RCU
stalls as it copies 1GB from tmpfs into an eventfd and we don't have any
scheduling point on that path in the sendfile(2) implementation:

        int r1 = eventfd(0, 0);
        int r2 = memfd_create("", 0);
        unsigned long n = 1<<30;
        fallocate(r2, 0, 0, n);
        sendfile(r1, r2, 0, n);

Add cond_resched() into __splice_from_pipe() to fix the problem.
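
A minimal sketch of the pattern being added (the helpers are hypothetical,
not the actual __splice_from_pipe() body):

    /* Long copy loop: give the scheduler a chance on each iteration. */
    while (have_more_buffers(pipe)) {     /* hypothetical helper */
        process_one_buffer(pipe);         /* hypothetical helper */
        cond_resched();
    }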

CC: Dmitry Vyukov <dvyukov@google.com>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
8 years agovfs: Make sendfile(2) killable even better
Jan Kara [Mon, 23 Nov 2015 12:09:50 +0000 (13:09 +0100)]
vfs: Make sendfile(2) killable even better

Commit 296291cdd162 (mm: make sendfile(2) killable) fixed an issue where
sendfile(2) was doing a lot of tiny writes into a filesystem and thus
was unkillable for a long time. However sendfile(2) can be (mis)used to
issue lots of writes into arbitrary file descriptors such as an eventfd
or similar special file descriptors which never hit the standard filesystem
write path and thus are still unkillable. E.g. the following example
from Dmitry burns CPU for ~16s on my test system without any possibility
of being killed:

        int r1 = eventfd(0, 0);
        int r2 = memfd_create("", 0);
        unsigned long n = 1<<30;
        fallocate(r2, 0, 0, n);
        sendfile(r1, r2, 0, n);

There are actually quite a few tests for pending signals in the sendfile
code; however, when data to write is always available, none of them seems
to trigger. So fix the problem by adding a test for a pending signal into
splice_from_pipe_next(), also before the loop waiting for pipe buffers to
become available. This should fix all the lockup issues with sendfile of
the do-a-ton-of-tiny-writes nature.
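
The added test follows the usual kernel pattern; a sketch of the idea (not
the exact splice_from_pipe_next() hunk):

    /* Abort the splice loop if the task has a signal pending, so a
     * 1GB sendfile(2) into an eventfd can be killed. */
    if (signal_pending(current))
        return -ERESTARTSYS;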

CC: stable@vger.kernel.org
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
8 years agofix sysvfs symlinks
Al Viro [Tue, 24 Nov 2015 02:11:08 +0000 (21:11 -0500)]
fix sysvfs symlinks

The thing got broken back in 2002 - sysvfs does *not* have inline
symlinks; even short ones have bodies stored in the first block
of the file.  sysv_symlink() handles that correctly; unfortunately,
attempting to look up an existing symlink will end up mistaking
it for an inline symlink and interpreting the block number containing
the body as the body itself.

Nobody has noticed until now, which says something about the level
of testing sysvfs gets ;-/

Cc: stable@vger.kernel.org # all of them, not that anyone cared
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
8 years agoimx: thermal: use CPU temperature grade info for thresholds
Tim Harvey [Thu, 19 Nov 2015 14:49:40 +0000 (06:49 -0800)]
imx: thermal: use CPU temperature grade info for thresholds

The IMX6Q/IMX6DL SoCs have a 2-bit temperature grade stored in OTP which
is valid for all IMX6 SoCs (despite the fact that the IMXSDLRM and
IMXSXRM do not document this - this has been proven via tests as well as
verified by Freescale FAE).

Instead of assuming a fixed 85C passive cooling threshold and a 105C
critical threshold, use the thermal grade for these configurations.

We will set the critical trip point to maxT - 5C and the passive trip point
to maxT - 10C.
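
In the millicelsius units the thermal framework uses, that works out to
something like the following (an illustrative sketch; the variable names are
made up):

    /* grade_max_mc is the OTP-derived grade maximum in millicelsius. */
    int crit_trip_mc    = grade_max_mc - 5000;    /* maxT - 5C  */
    int passive_trip_mc = grade_max_mc - 10000;   /* maxT - 10C */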

Cc: Anson Huang <b20788@freescale.com>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Jon Nettleton <jon@solid-run.com>
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
----
v3:
 - rebase against linux-soc-thermal.git
 - added ack's from Shawn and Jon
v2:
 - remove check for IMX6Q and update comments: The OTP values have been tested
   on IMX6SOLO, IMX6DUALLITE, and IMX6SX and Freescale FAE has shared data with
   me that the OTP settings are the same and that the reference manuals will
   reflect this in their next updates.
 - set critical to max - 5C
 - set passive to max - 10C
 - display max temp in info
 - do not allow passive to be set above critical
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
8 years agothermal: fix thermal_zone_bind_cooling_device prototype
Arnd Bergmann [Tue, 17 Nov 2015 16:48:52 +0000 (17:48 +0100)]
thermal: fix thermal_zone_bind_cooling_device prototype

When the prototype for thermal_zone_bind_cooling_device
changed, the static inline wrapper function was left alone,
which in theory can cause build warnings:

I have seen this error in the past:
drivers/thermal/db8500_thermal.c: In function 'db8500_cdev_bind':
drivers/thermal/db8500_thermal.c:78:9: error: too many arguments to function 'thermal_zone_bind_cooling_device'
   ret = thermal_zone_bind_cooling_device(thermal, i, cdev,

While this error no longer shows up, there is no doubt that
the prototype is still wrong, so let's just fix it anyway.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 6cd9e9f629f1 ("thermal: of: fix cooling device weights in device tree")
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
8 years agoRevert "thermal: qcom_spmi: allow compile test"
Arnd Bergmann [Mon, 16 Nov 2015 21:43:50 +0000 (22:43 +0100)]
Revert "thermal: qcom_spmi: allow compile test"

This just caused build errors:

warning: (QCOM_SPMI_TEMP_ALARM) selects REGMAP_SPMI which has unmet direct dependencies (SPMI)
drivers/built-in.o: In function `regmap_spmi_ext_gather_write':
:(.text+0x609b0): undefined reference to `spmi_ext_register_write'
:(.text+0x609f0): undefined reference to `spmi_ext_register_writel'

While it's generally a good idea to allow compile testing, in this
case, it just doesn't work, so reverting the patch that
introduced the compile-test variant seems the most appropriate
solution.

Note that SPMI also has a 'depends on ARCH_QCOM || COMPILE_TEST'
statement, so we should be able to enable SPMI on all architectures
for compile testing already.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: cb7fb4d34202 ("thermal: qcom_spmi: allow compile test")
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
8 years agocpufreq: SCPI: Depend on SCPI clk driver
Punit Agrawal [Wed, 18 Nov 2015 13:52:44 +0000 (13:52 +0000)]
cpufreq: SCPI: Depend on SCPI clk driver

The SCPI clk driver registers the virtual cpufreq device that kicks off
initialisation of the SCPI cpufreq driver. Also, clk_get() will fail for
the cpufreq driver if the SCPI clk driver is missing.

Fix this by making the SCPI cpufreq driver explicitly depend on the SCPI
clk driver.

Fixes: 8def31034d03 (cpufreq: arm_big_little: add SCPI interface driver)
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
8 years agoMerge back earlier cpufreq fixes for v4.4.
Rafael J. Wysocki [Mon, 23 Nov 2015 22:49:57 +0000 (23:49 +0100)]
Merge back earlier cpufreq fixes for v4.4.

8 years agocpufreq: intel_pstate: Fix limits->max_perf rounding error
Prarit Bhargava [Fri, 20 Nov 2015 23:47:57 +0000 (18:47 -0500)]
cpufreq: intel_pstate: Fix limits->max_perf rounding error

A rounding error was found in the calculation of limits->max_perf
in intel_pstate_set_policy(), which is used to calculate the max and min
pstate values in intel_pstate_get_min_max().  In that code,
limits->max_perf is truncated to 2 hex digits such that, for example,
0x169 was incorrectly calculated to 0x16 instead of 0x17.  This resulted in
the pstate being set one level too low.  This patch rounds the value of
limits->max_perf up instead of down so that the correct max pstate can
be reached.

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
8 years agocpufreq: intel_pstate: Fix limits->max_policy_pct rounding error
Prarit Bhargava [Fri, 20 Nov 2015 23:47:56 +0000 (18:47 -0500)]
cpufreq: intel_pstate: Fix limits->max_policy_pct rounding error

I have an Intel (6,63) processor with a "marketing" frequency (from
/proc/cpuinfo) of 2100MHz, and a max turbo frequency of 2600MHz.  I
can execute

cpupower frequency-set -g powersave --min 1200MHz --max 2100MHz

and the max_freq_pct is set to 80.  When adding load to the system I noticed
that the cpu frequency only reached 2000MHz and not 2100MHz as expected.

This is because limits->max_policy_pct is calculated as 2100 * 100 /2600 = 80.7
and is rounded down to 80 when it should be rounded up to 81.  This patch
adds a DIV_ROUND_UP() which will return the correct value.
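
The arithmetic can be checked standalone (DIV_ROUND_UP defined as in the
kernel):

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        int marketing_khz = 2100000, turbo_khz = 2600000;

        /* Plain integer division truncates 80.7 down to 80 ... */
        printf("%d\n", marketing_khz * 100 / turbo_khz);
        /* ... while rounding up yields the intended 81. */
        printf("%d\n", DIV_ROUND_UP(marketing_khz * 100, turbo_khz));
        return 0;
    }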

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
8 years agocpufreq: Always remove sysfs cpuX/cpufreq link on ->remove_dev()
Viresh Kumar [Sat, 21 Nov 2015 03:36:49 +0000 (09:06 +0530)]
cpufreq: Always remove sysfs cpuX/cpufreq link on ->remove_dev()

Subsys interface's ->remove_dev() is called when the cpufreq driver is
unregistering or the CPU is getting physically removed. We keep removing
the cpuX/cpufreq link for all CPUs except the last one, which is a
mistake as all CPUs contain a link now.

Because of this, one CPU from each policy will still contain a link (to
an already removed policyX directory), after the cpufreq driver is
unregistered.

Fix that by removing the link first and only then checking whether the
policy needs to be freed. That will make sure that no links are left out.

Fixes: 96bdda61f58b ("cpufreq: create cpu/cpufreq/policyX directories")
Reported-and-tested-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
8 years agocpufreq: CPPC: Initialize and check CPUFreq CPU co-ord type correctly
Ashwin Chaugule [Thu, 19 Nov 2015 15:40:07 +0000 (10:40 -0500)]
cpufreq: CPPC: Initialize and check CPUFreq CPU co-ord type correctly

The CPU policy struct indicates the co-ordination type
for all CPUs of a common freq domain. Initialize it
correctly using the CPU specific data gathered from
CPPC ACPI lib via acpi_get_psd_map().

The PSD object is optional, so the cpu->shared_type
can also be 0. So instead of assuming any value other
than SW_ANY(0xFD) is unsupported, explicitly check
if shared_type is SW_ALL and then bail.

Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
8 years agoMerge tag 'linux-kselftest-4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Mon, 23 Nov 2015 21:19:27 +0000 (13:19 -0800)]
Merge tag 'linux-kselftest-4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull kselftest fixes from Shuah Khan:
 "This update consists of one minor documentation fix and a fix to an
  existing test"

* tag 'linux-kselftest-4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  selftests/seccomp: Get page size from sysconf
  tools:testing/selftests: fix typo in futex/README

8 years agodm thin: fix regression in advertised discard limits
Mike Snitzer [Mon, 23 Nov 2015 18:44:38 +0000 (13:44 -0500)]
dm thin: fix regression in advertised discard limits

When establishing a thin device's discard limits we cannot rely on the
underlying thin-pool device's discard capabilities (which are inherited
from the thin-pool's underlying data device) given that DM thin devices
must provide discard support even when the thin-pool's underlying data
device doesn't support discards.

Users were exposed to this thin device discard limits regression if
their thin-pool's underlying data device does _not_ support discards.
This regression caused all upper-layers that called the
blkdev_issue_discard() interface to not be able to issue discards to
thin devices (because discard_granularity was 0).  This regression
wasn't caught earlier because the device-mapper-test-suite's extensive
'thin-provisioning' discard tests are only ever performed against
thin-pool's with data devices that support discards.

The fix is to have thin_io_hints() test the pool's 'discard_enabled' feature
rather than inferring whether or not a thin device's discard support
should be enabled by looking at the thin-pool's discard_granularity.

Fixes: 216076705 ("dm thin: disable discard support for thin devices if pool's is disabled")
Reported-by: Mike Gerber <mike@sprachgewalt.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 4.1+
8 years agoARM: dts: keystone: k2l: fix kernel crash when clk_ignore_unused is not in bootargs
Murali Karicheri [Mon, 23 Nov 2015 18:35:25 +0000 (10:35 -0800)]
ARM: dts: keystone: k2l: fix kernel crash when clk_ignore_unused is not in bootargs

Currently the kernel crashes randomly when the K2L EVM is booted without
clk_ignore_unused in the bootargs. This workaround is not needed
on other K2 devices such as K2HK and K2E, and with this fix we can
remove the workaround altogether. The netcp driver on K2L uses linked
RAM on the OSR (On-chip Static RAM) and requires the clock to this
peripheral to be enabled for proper functioning. This is the reason for
the kernel crash. So add the clock node to fix this issue.

While at it, remove the workaround documentation as well.

With the fix applied, clk_summary dump shows the clock to OSR enabled.

cat /sys/kernel/debug/clk/clk_summary
 ------cut--------------
   tcp3d-1                   0            0   399360000          0 0
   tcp3d-0                   0            0   399360000          0 0
   osr                       1            1   399360000          0 0
   fftc-0                    0            0   399360000          0 0
 -----cut----------------
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
8 years agosoc: ti: knav_qmss_queue: Fix linking RAM setup for queue managers
Michal Morawiec [Mon, 23 Nov 2015 18:35:21 +0000 (10:35 -0800)]
soc: ti: knav_qmss_queue: Fix linking RAM setup for queue managers

Configure linking RAM for both queue managers even when only
linking RAM 0 is specified in the device tree.

Currently hwqueue driver configures linking RAM(s) to be used
cooperatively by the QMs (shared mode). Therefore if both
queue managers are used then both must be configured with
exactly the same linking RAM info (base address and size)
independent of the number of linking RAM(s) specified in the
device tree.
For proper operation only one linking RAM is required and in most
cases this can be internal one as long as it is able to handle
the number of descriptors used in the system.
The current driver code, however, skips configuration of the second
queue manager if the second linking RAM is not specified.
If the configuration for QM2 is missing there will be
a crash when it tries to push/pop descriptors from its queues.

Signed-off-by: Michal Morawiec <michal.1.morawiec.ext@nokia.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
8 years agosoc: ti: use request_firmware_direct() as acc firmware is optional
Murali Karicheri [Mon, 19 Oct 2015 18:09:34 +0000 (11:09 -0700)]
soc: ti: use request_firmware_direct() as acc firmware is optional

When the PDSP firmware image is absent from the file system,
kernel boot with ramfs/nfs is stuck for 60 seconds, the
default timeout. request_firmware_direct() is meant to take care of
such optional firmware loading, hence replace the call in the
driver with this API.
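
Since request_firmware_direct() takes the same arguments as
request_firmware(), the change is a drop-in replacement; a hedched sketch
follows (the firmware file name and device pointer are placeholders):

    const struct firmware *fw;
    int ret;

    /* No fallback to the 60-second user-mode helper when the file is
     * missing; "pdsp_acc.bin" is a placeholder name. */
    ret = request_firmware_direct(&fw, "pdsp_acc.bin", dev);
    if (ret)
        return 0;    /* the firmware is optional, continue without it */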

Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
8 years agodrm/amdgpu: move dependency handling out of atomic section v2
Christian König [Thu, 12 Nov 2015 20:10:35 +0000 (21:10 +0100)]
drm/amdgpu: move dependency handling out of atomic section v2

This way the driver isn't limited in the dependency handling callback.

v2: remove extra check in amd_sched_entity_pop_job()

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
8 years agodrm/amdgpu: optimize scheduler fence handling
Christian König [Thu, 5 Nov 2015 11:57:10 +0000 (12:57 +0100)]
drm/amdgpu: optimize scheduler fence handling

We only need to wait for jobs to be scheduled when
the dependency is from the same scheduler.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
8 years agoARC: dw2 unwind: Remove falllback linear search thru FDE entries
Vineet Gupta [Mon, 23 Nov 2015 14:02:51 +0000 (19:32 +0530)]
ARC: dw2 unwind: Remove falllback linear search thru FDE entries

Fixes STAR 9000953410: "perf callgraph profiling causing RCU stalls"

| perf record -g -c 15000 -e cycles /sbin/hackbench
|
| INFO: rcu_preempt self-detected stall on CPU
| 1: (1 GPs behind) idle=609/140000000000002/0 softirq=2914/2915 fqs=603
| Task dump for CPU 1:

The in-kernel dwarf unwinder has a fast binary lookup and a fallback linear
search (which iterates through each of ~11K entries), and the latter takes
2 orders of magnitude longer (~3 million cycles vs. 2000). Routines written
in hand assembler lack dwarf info (as we don't support assembler CFI
pseudo-ops yet), so they fail the unwinder's binary lookup, hit the linear
search, and nevertheless fail in the end.

However the linear search is pointless, as the binary lookup tables are
created from it in the first place. It is impossible for the binary lookup
to fail while the linear search succeeds. It is a pure waste of cycles and
is thus removed by this patch.

This manifested as RCU stalls / NMI watchdog splat when running
hackbench under perf with callgraph profiling. The triggering condition
was a perf counter overflowing in a routine lacking dwarf info (like memset),
leading to the pathetic 3 million cycle unwinder slow path, and by the time
it returned new interrupts were already pending (Timer, IPI) and taken
right away. The original memset didn't make forward progress, the system
kept accruing more interrupts and more unwinder delays in a vicious feedback
loop, ultimately triggering the NMI diagnostic.

Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
8 years agopowerpc/tm: Check for already reclaimed tasks
Michael Neuling [Thu, 19 Nov 2015 04:44:45 +0000 (15:44 +1100)]
powerpc/tm: Check for already reclaimed tasks

Currently we can hit a scenario where we'll tm_reclaim() twice.  This
results in a TM bad thing exception because the second reclaim occurs
when not in suspend mode.

The scenario in which this can happen is the following.  We attempt to
deliver a signal to userspace.  To do this we need obtain the stack
pointer to write the signal context.  To get this stack pointer we
must tm_reclaim() in case we need to use the checkpointed stack
pointer (see get_tm_stackpointer()).  Normally we'd then return
directly to userspace to deliver the signal without going through
__switch_to().

Unfortunately, if at this point we get an error (such as a bad
userspace stack pointer), we need to exit the process.  The exit will
result in a __switch_to().  __switch_to() will attempt to save the
process state which results in another tm_reclaim().  This
tm_reclaim() now causes a TM Bad Thing exception as this state has
already been saved and the processor is no longer in TM suspend mode.
Whee!

This patch checks the state of the MSR to ensure we are TM suspended
before we attempt the tm_reclaim().  If we've already saved the state
away, we should no longer be in TM suspend mode.  This has the
additional advantage of checking for a potential TM Bad Thing
exception.
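
The guard amounts to a single MSR test before the reclaim; an illustrative
sketch (not the exact patch):

    /* If the transactional state has already been reclaimed and saved,
     * the thread is no longer TM suspended; a second tm_reclaim() here
     * would raise a TM Bad Thing exception. */
    if (!MSR_TM_SUSPENDED(mfmsr()))
        return;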

Found using syscall fuzzer.

Fixes: fb09692e71f1 ("powerpc: Add reclaim and recheckpoint functions for context switching transactional memory processes")
Cc: stable@vger.kernel.org # v3.9+
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/tm: Block signal return setting invalid MSR state
Michael Neuling [Thu, 19 Nov 2015 04:44:44 +0000 (15:44 +1100)]
powerpc/tm: Block signal return setting invalid MSR state

Currently we allow both the MSR T and S bits to be set by userspace on
a signal return.  Unfortunately this is a reserved configuration and
will cause a TM Bad Thing exception if attempted (via rfid).

This patch checks for this case in both the 32 and 64 bit signals
code.  If both T and S are set, we mark the context as invalid.

Found using a syscall fuzzer.

Fixes: 2b0a576d15e0 ("powerpc: Add new transactional memory state to the signal context")
Cc: stable@vger.kernel.org # v3.9+
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>