Rudolf Marek <R.Marek@sh.cvut.cz>
Rui Saraiva <rmps@joel.ist.utl.pt>
Sachin P Sant <ssant@in.ibm.com>
+Sarangdhar Joshi <spjoshi@codeaurora.org>
Sam Ravnborg <sam@mars.ravnborg.org>
Santosh Shilimkar <ssantosh@kernel.org>
Santosh Shilimkar <santosh.shilimkar@oracle.org>
Simon Kelley <simon@thekelleys.org.uk>
Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr>
Stephen Hemminger <shemminger@osdl.org>
+Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+Subhash Jadavani <subhashj@codeaurora.org>
Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sumit Semwal <sumit.semwal@ti.com>
Tejun Heo <htejun@gmail.com>
Thomas Graf <tgraf@suug.ch>
+Thomas Pedersen <twp@codeaurora.org>
Tony Luck <tony.luck@intel.com>
Tsuneo Yoshioka <Tsuneo.Yoshioka@f-secure.com>
Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de>
+++ /dev/null
-What: /sys/devices/.../deferred_probe
-Date: August 2016
-Contact: Ben Hutchings <ben.hutchings@codethink.co.uk>
-Description:
- The /sys/devices/.../deferred_probe attribute is
- present for all devices. If a driver detects during
- probing a device that a related device is not yet
- ready, it may defer probing of the first device. The
- kernel will retry probing the first device after any
- other device is successfully probed. This attribute
- reads as 1 if probing of this device is currently
- deferred, or 0 otherwise.
"irq" and "wakeup" names are recognized by I2C core, other names are
left to individual drivers.
+- host-notify
+	device uses the SMBus Host Notify protocol instead of an interrupt line.
+
- multi-master
states that there is another master active on this bus. The OS can use
this information to adapt power management to keep the arbitration awake
used by the device. I2C core will assign "irq" interrupt (or the very first
interrupt if not using interrupt names) as primary interrupt for the slave.
+Alternatively, devices supporting SMBus Host Notify, and connected to
+adapters that support this feature, may use the "host-notify" property. The
+I2C core will create a virtual interrupt for Host Notify and assign it as
+the primary interrupt for the slave.
+
Also, if the device is marked as a wakeup source, the I2C core will set up a
"wakeup" interrupt for the device. If the "wakeup" interrupt name is not
present in the binding, then the primary interrupt will be used as the wakeup
interrupt.
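+
+For illustration only (the compatible string, unit address and interrupt
+specifiers below are hypothetical, not taken from any real binding), a
+client using the interrupt names described above might look like:
+
+	sensor@4a {
+		compatible = "acme,example-sensor";
+		reg = <0x4a>;
+		interrupt-parent = <&gpio1>;
+		interrupts = <5 IRQ_TYPE_LEVEL_LOW>, <6 IRQ_TYPE_LEVEL_LOW>;
+		interrupt-names = "irq", "wakeup";
+		wakeup-source;
+	};
+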
--- /dev/null
+EEPROM / CSR SMBus-slave interface of IDT 89HPESx devices
+
+Required properties:
+ - compatible : should be "<manufacturer>,<type>"
+ Basically there is only one manufacturer: idt, but some
+		compatible devices may be produced in the future. The following
+		devices are supported: 89hpes8nt2, 89hpes12nt3, 89hpes24nt6ag2,
+ 89hpes32nt8ag2, 89hpes32nt8bg2, 89hpes12nt12g2, 89hpes16nt16g2,
+ 89hpes24nt24g2, 89hpes32nt24ag2, 89hpes32nt24bg2;
+ 89hpes12n3, 89hpes12n3a, 89hpes24n3, 89hpes24n3a;
+ 89hpes32h8, 89hpes32h8g2, 89hpes48h12, 89hpes48h12g2,
+ 89hpes48h12ag2, 89hpes16h16, 89hpes22h16, 89hpes22h16g2,
+ 89hpes34h16, 89hpes34h16g2, 89hpes64h16, 89hpes64h16g2,
+ 89hpes64h16ag2;
+ 89hpes12t3g2, 89hpes24t3g2, 89hpes16t4, 89hpes4t4g2,
+ 89hpes10t4g2, 89hpes16t4g2, 89hpes16t4ag2, 89hpes5t5,
+ 89hpes6t5, 89hpes8t5, 89hpes8t5a, 89hpes24t6, 89hpes6t6g2,
+ 89hpes24t6g2, 89hpes16t7, 89hpes32t8, 89hpes32t8g2,
+ 89hpes48t12, 89hpes48t12g2.
+ - reg : I2C address of the IDT 89HPESx device.
+
+Optionally there can be an EEPROM-compatible subnode:
+  - compatible: Five EEPROM devices are supported: 24c32, 24c64, 24c128,
+      24c256 and 24c512, which differ only in size.
+  - reg: Custom address of the EEPROM device (if not specified, the IDT
+    (optional) 89HPESx device will try to communicate with the EEPROM at the
+           default address, 0x50)
+  - read-only : Parameterless property that disables writes to the EEPROM
+    (optional)
+
+Example:
+	idt@74 {
+ compatible = "idt,89hpes32nt8ag2";
+ reg = <0x74>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ eeprom@50 {
+ compatible = "onsemi,24c64";
+ reg = <0x50>;
+ read-only;
+ };
+ };
+
--- /dev/null
+Page fragments
+--------------
+
+A page fragment is an arbitrary-length, arbitrary-offset area of memory
+that resides within a compound page of order 0 or higher. Multiple
+fragments within that page are individually refcounted via the page's
+reference counter.
+
+The page_frag functions, page_frag_alloc and page_frag_free, provide a
+simple allocation framework for page fragments. They are used by the
+network stack and network device drivers to provide a backing region of
+memory for use either as an sk_buff->head or in the "frags" portion of
+skb_shared_info.
+
+In order to make use of the page fragment APIs a backing page fragment
+cache is needed. This provides a central point for the fragment allocation
+and allows multiple calls to make use of a cached page. The advantage to
+doing this is that multiple calls to get_page can be avoided, which can be
+expensive at allocation time. However, due to the nature of this caching it
+is required that any calls to the cache be protected by either a per-cpu
+limitation, or a per-cpu limitation and forcing interrupts to be disabled
+when executing the fragment allocation.
+
+The network stack uses two separate caches per CPU to handle fragment
+allocation. The netdev_alloc_cache is used by callers making use of the
+__netdev_alloc_frag and __netdev_alloc_skb calls. The napi_alloc_cache is
+used by callers of the __napi_alloc_frag and __napi_alloc_skb calls. The
+main difference between these two calls is the context in which they may be
+called. The "netdev" prefixed functions are usable in any context as these
+functions will disable interrupts, while the "napi" prefixed functions are
+only usable within the softirq context.
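+
+As a sketch (illustrative only: it assumes the caller really is in softirq
+context, e.g. a NAPI poll routine, and uses the exported napi_alloc_frag
+and skb_free_frag wrappers), a driver could pull a fragment from the
+per-CPU NAPI cache and later release it:
+
+	void *data = napi_alloc_frag(1024);
+
+	if (data) {
+		/* ... attach the buffer to an skb or DMA-map it ... */
+		skb_free_frag(data);	/* wrapper around page_frag_free */
+	}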
+
+Many network device drivers use a similar methodology for allocating page
+fragments, but the page fragments are cached at the ring or descriptor
+level. In order to enable these cases it is necessary to provide a generic
+way of tearing down a page cache. For this reason __page_frag_cache_drain
+was implemented. It allows for freeing multiple references from a single
+page via a single call. The advantage to doing this is that it allows for
+cleaning up the multiple references that were added to a page in order to
+avoid calling get_page per allocation.
+
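+A minimal sketch of the core API (illustrative only; a real caller must
+provide the per-cpu protection described above, and the statically
+allocated cache below is assumed to start zero-initialized):
+
+	static struct page_frag_cache frag_cache;
+
+	void *buf = page_frag_alloc(&frag_cache, 256, GFP_ATOMIC);
+
+	if (buf) {
+		/* ... use the 256-byte fragment ... */
+		page_frag_free(buf);	/* drop this fragment's page reference */
+	}
+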
+Alexander Duyck, Nov 29, 2016.
Q: Patchwork web based patch tracking system site
T: SCM tree type and location.
Type is one of: git, hg, quilt, stgit, topgit
- B: Bug tracking system location.
S: Status, one of the following:
Supported: Someone is actually paid to look after this.
Maintained: Someone actually looks after it.
S: Supported
F: sound/soc/atmel
-ATMEL DMA DRIVER
-M: Nicolas Ferre <nicolas.ferre@atmel.com>
-L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Supported
-F: drivers/dma/at_hdmac.c
-F: drivers/dma/at_hdmac_regs.h
-F: include/linux/platform_data/dma-atmel.h
-
ATMEL XDMA DRIVER
M: Ludovic Desroches <ludovic.desroches@atmel.com>
L: linux-arm-kernel@lists.infradead.org
RADEON and AMDGPU DRM DRIVERS
M: Alex Deucher <alexander.deucher@amd.com>
M: Christian König <christian.koenig@amd.com>
-L: dri-devel@lists.freedesktop.org
+L: amd-gfx@lists.freedesktop.org
T: git git://people.freedesktop.org/~agd5f/linux
S: Supported
F: drivers/gpu/drm/radeon/
F: arch/x86/include/asm/mshyperv.h
F: arch/x86/include/uapi/asm/hyperv.h
F: arch/x86/kernel/cpu/mshyperv.c
+F: arch/x86/hyperv
F: drivers/hid/hid-hyperv.c
F: drivers/hv/
F: drivers/input/serio/hyperv-keyboard.c
F: drivers/tty/serial/atmel_serial.c
F: include/linux/atmel_serial.h
+MICROCHIP / ATMEL DMA DRIVER
+M: Ludovic Desroches <ludovic.desroches@microchip.com>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L: dmaengine@vger.kernel.org
+S: Supported
+F: drivers/dma/at_hdmac.c
+F: drivers/dma/at_hdmac_regs.h
+F: include/linux/platform_data/dma-atmel.h
+
MICROCHIP / ATMEL ISC DRIVER
M: Songjun Wu <songjun.wu@microchip.com>
L: linux-media@vger.kernel.org
VERSION = 4
PATCHLEVEL = 10
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
NAME = Roaring Lionus
# *DOCUMENTATION*
CLK("da830-mmc.0", NULL, &mmcsd0_clk),
CLK("da830-mmc.1", NULL, &mmcsd1_clk),
CLK("ti-aemif", NULL, &aemif_clk),
- /*
- * The only user of this clock is davinci_nand and it get's it through
- * con_id. The nand node itself is created from within the aemif
- * driver to guarantee that it's probed after the aemif timing
- * parameters are configured. of_dev_auxdata is not accessible from
- * the aemif driver and can't be passed to of_platform_populate(). For
- * that reason we're leaving the dev_id here as NULL.
- */
- CLK(NULL, "aemif", &aemif_nand_clk),
+ CLK("davinci-nand.0", "aemif", &aemif_nand_clk),
CLK("ohci-da8xx", "usb11", &usb11_clk),
CLK("musb-da8xx", "usb20", &usb20_clk),
CLK("spi_davinci.0", NULL, &spi0_clk),
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/irqdomain.h>
+#include <linux/platform_data/ti-aemif.h>
#include <asm/mach/arch.h>
#include "cp_intc.h"
#include <mach/da8xx.h>
+static struct of_dev_auxdata da850_aemif_auxdata_lookup[] = {
+ OF_DEV_AUXDATA("ti,davinci-nand", 0x62000000, "davinci-nand.0", NULL),
+ {}
+};
+
+static struct aemif_platform_data aemif_data = {
+ .dev_lookup = da850_aemif_auxdata_lookup,
+};
+
static struct of_dev_auxdata da850_auxdata_lookup[] __initdata = {
OF_DEV_AUXDATA("ti,davinci-i2c", 0x01c22000, "i2c_davinci.1", NULL),
OF_DEV_AUXDATA("ti,davinci-i2c", 0x01e28000, "i2c_davinci.2", NULL),
OF_DEV_AUXDATA("ti,davinci-dm6467-emac", 0x01e20000, "davinci_emac.1",
NULL),
OF_DEV_AUXDATA("ti,da830-mcasp-audio", 0x01d00000, "davinci-mcasp.0", NULL),
- OF_DEV_AUXDATA("ti,da850-aemif", 0x68000000, "ti-aemif", NULL),
+ OF_DEV_AUXDATA("ti,da850-aemif", 0x68000000, "ti-aemif", &aemif_data),
OF_DEV_AUXDATA("ti,da850-tilcdc", 0x01e13000, "da8xx_lcdc.0", NULL),
OF_DEV_AUXDATA("ti,da830-ohci", 0x01e25000, "ohci-da8xx", NULL),
OF_DEV_AUXDATA("ti,da830-musb", 0x01e00000, "musb-da8xx", NULL),
/*
* Pseudo-ops for PC-relative adr/ldr/str <reg>, <symbol> where
- * <symbol> is within the range +/- 4 GB of the PC.
+ * <symbol> is within the range +/- 4 GB of the PC when running
+ * in core kernel context. In module context, a movz/movk sequence
+ * is used, since modules may be loaded far away from the kernel
+ * when KASLR is in effect.
*/
/*
* @dst: destination register (64 bit wide)
* @sym: name of the symbol
- * @tmp: optional scratch register to be used if <dst> == sp, which
- * is not allowed in an adrp instruction
*/
- .macro adr_l, dst, sym, tmp=
- .ifb \tmp
+ .macro adr_l, dst, sym
+#ifndef MODULE
adrp \dst, \sym
add \dst, \dst, :lo12:\sym
- .else
- adrp \tmp, \sym
- add \dst, \tmp, :lo12:\sym
- .endif
+#else
+ movz \dst, #:abs_g3:\sym
+ movk \dst, #:abs_g2_nc:\sym
+ movk \dst, #:abs_g1_nc:\sym
+ movk \dst, #:abs_g0_nc:\sym
+#endif
.endm
/*
* the address
*/
.macro ldr_l, dst, sym, tmp=
+#ifndef MODULE
.ifb \tmp
adrp \dst, \sym
ldr \dst, [\dst, :lo12:\sym]
adrp \tmp, \sym
ldr \dst, [\tmp, :lo12:\sym]
.endif
+#else
+ .ifb \tmp
+ adr_l \dst, \sym
+ ldr \dst, [\dst]
+ .else
+ adr_l \tmp, \sym
+ ldr \dst, [\tmp]
+ .endif
+#endif
.endm
/*
* while <src> needs to be preserved.
*/
.macro str_l, src, sym, tmp
+#ifndef MODULE
adrp \tmp, \sym
str \src, [\tmp, :lo12:\sym]
+#else
+ adr_l \tmp, \sym
+ str \src, [\tmp]
+#endif
.endm
/*
ncontig = find_num_contig(vma->vm_mm, addr, cpte,
*cpte, &pgsize);
for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize) {
- changed = ptep_set_access_flags(vma, addr, cpte,
+ changed |= ptep_set_access_flags(vma, addr, cpte,
pfn_pte(pfn,
hugeprot),
dirty);
# Xen paravirtualization support
obj-$(CONFIG_XEN) += xen/
+# Hyper-V paravirtualization support
+obj-$(CONFIG_HYPERVISOR_GUEST) += hyperv/
+
# lguest paravirtualization support
obj-$(CONFIG_LGUEST_GUEST) += lguest/
#include <linux/types.h>
#include "ctype.h"
+#include "string.h"
int memcmp(const void *s1, const void *s2, size_t len)
{
#define memset(d,c,l) __builtin_memset(d,c,l)
#define memcmp __builtin_memcmp
+extern int strcmp(const char *str1, const char *str2);
+extern int strncmp(const char *cs, const char *ct, size_t count);
+extern size_t strlen(const char *s);
+extern char *strstr(const char *s1, const char *s2);
+extern size_t strnlen(const char *s, size_t maxlen);
+extern unsigned int atou(const char *s);
+extern unsigned long long simple_strtoull(const char *cp, char **endp,
+ unsigned int base);
+
#endif /* BOOT_STRING_H */
const char *basename;
struct simd_skcipher_alg *simd;
} aesni_simd_skciphers2[] = {
-#if IS_ENABLED(CONFIG_CRYPTO_PCBC)
+#if (defined(MODULE) && IS_ENABLED(CONFIG_CRYPTO_PCBC)) || \
+ IS_BUILTIN(CONFIG_CRYPTO_PCBC)
{
.algname = "pcbc(aes)",
.drvname = "pcbc-aes-aesni",
jmp __switch_to
END(__switch_to_asm)
-/*
- * The unwinder expects the last frame on the stack to always be at the same
- * offset from the end of the page, which allows it to validate the stack.
- * Calling schedule_tail() directly would break that convention because its an
- * asmlinkage function so its argument has to be pushed on the stack. This
- * wrapper creates a proper "end of stack" frame header before the call.
- */
-ENTRY(schedule_tail_wrapper)
- FRAME_BEGIN
-
- pushl %eax
- call schedule_tail
- popl %eax
-
- FRAME_END
- ret
-ENDPROC(schedule_tail_wrapper)
/*
* A newly forked process directly context switches into this address.
*
* edi: kernel thread arg
*/
ENTRY(ret_from_fork)
- call schedule_tail_wrapper
+ FRAME_BEGIN /* help unwinder find end of stack */
+
+ /*
+ * schedule_tail() is asmlinkage so we have to put its 'prev' argument
+ * on the stack.
+ */
+ pushl %eax
+ call schedule_tail
+ popl %eax
testl %ebx, %ebx
jnz 1f /* kernel threads are uncommon */
2:
/* When we fork, we trace the syscall return in the child, too. */
- movl %esp, %eax
+ leal FRAME_OFFSET(%esp), %eax
call syscall_return_slowpath
+ FRAME_END
jmp restore_all
/* kernel thread */
#include <asm/smap.h>
#include <asm/pgtable_types.h>
#include <asm/export.h>
+#include <asm/frame.h>
#include <linux/err.h>
.code64
* r12: kernel thread arg
*/
ENTRY(ret_from_fork)
+ FRAME_BEGIN /* help unwinder find end of stack */
movq %rax, %rdi
- call schedule_tail /* rdi: 'prev' task parameter */
+ call schedule_tail /* rdi: 'prev' task parameter */
- testq %rbx, %rbx /* from kernel_thread? */
- jnz 1f /* kernel threads are uncommon */
+ testq %rbx, %rbx /* from kernel_thread? */
+ jnz 1f /* kernel threads are uncommon */
2:
- movq %rsp, %rdi
+ leaq FRAME_OFFSET(%rsp),%rdi /* pt_regs pointer */
call syscall_return_slowpath /* returns with IRQs disabled */
TRACE_IRQS_ON /* user mode is traced as IRQS on */
SWAPGS
+ FRAME_END
jmp restore_regs_and_iret
1:
if (event->attr.precise_ip > precise)
return -EOPNOTSUPP;
+
+	/* There's no sense in having PEBS for non-sampling events: */
+ if (!is_sampling_event(event))
+ return -EINVAL;
}
/*
* check that PEBS LBR correction does not conflict with
x86_pmu.num_counters, INTEL_PMC_MAX_GENERIC);
x86_pmu.num_counters = INTEL_PMC_MAX_GENERIC;
}
- x86_pmu.intel_ctrl = (1 << x86_pmu.num_counters) - 1;
+ x86_pmu.intel_ctrl = (1ULL << x86_pmu.num_counters) - 1;
if (x86_pmu.num_counters_fixed > INTEL_PMC_MAX_FIXED) {
WARN(1, KERN_ERR "hw perf events fixed %d > max(%d), clipping!",
.stop = cstate_pmu_event_stop,
.read = cstate_pmu_event_update,
.capabilities = PERF_PMU_CAP_NO_INTERRUPT,
+ .module = THIS_MODULE,
};
static struct pmu cstate_pkg_pmu = {
.stop = cstate_pmu_event_stop,
.read = cstate_pmu_event_update,
.capabilities = PERF_PMU_CAP_NO_INTERRUPT,
+ .module = THIS_MODULE,
};
static const struct cstate_model nhm_cstates __initconst = {
continue;
/* log dropped samples number */
- if (error[bit])
+ if (error[bit]) {
perf_log_lost_samples(event, error[bit]);
+ if (perf_event_account_interrupt(event))
+ x86_pmu_stop(event, 0);
+ }
+
if (counts[bit]) {
__intel_pmu_pebs_event(event, iregs, base,
top, bit, counts[bit]);
rapl_pmus->pmu.start = rapl_pmu_event_start;
rapl_pmus->pmu.stop = rapl_pmu_event_stop;
rapl_pmus->pmu.read = rapl_pmu_event_read;
+ rapl_pmus->pmu.module = THIS_MODULE;
return 0;
}
.start = uncore_pmu_event_start,
.stop = uncore_pmu_event_stop,
.read = uncore_pmu_event_read,
+ .module = THIS_MODULE,
};
} else {
pmu->pmu = *pmu->type->pmu;
void hswep_uncore_cpu_init(void)
{
- int pkg = topology_phys_to_logical_pkg(0);
+ int pkg = boot_cpu_data.logical_proc_id;
if (hswep_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
hswep_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
--- /dev/null
+obj-y := hv_init.o
--- /dev/null
+/*
+ * X86 specific Hyper-V initialization code.
+ *
+ * Copyright (C) 2016, Microsoft, Inc.
+ *
+ * Author : K. Y. Srinivasan <kys@microsoft.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for more
+ * details.
+ *
+ */
+
+#include <linux/types.h>
+#include <asm/hypervisor.h>
+#include <asm/hyperv.h>
+#include <asm/mshyperv.h>
+#include <linux/version.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <linux/clockchips.h>
+
+
+#ifdef CONFIG_X86_64
+
+static struct ms_hyperv_tsc_page *tsc_pg;
+
+static u64 read_hv_clock_tsc(struct clocksource *arg)
+{
+ u64 current_tick;
+
+ if (tsc_pg->tsc_sequence != 0) {
+ /*
+ * Use the tsc page to compute the value.
+ */
+
+ while (1) {
+ u64 tmp;
+ u32 sequence = tsc_pg->tsc_sequence;
+ u64 cur_tsc;
+ u64 scale = tsc_pg->tsc_scale;
+ s64 offset = tsc_pg->tsc_offset;
+
+ rdtscll(cur_tsc);
+			/* current_tick = ((cur_tsc * scale) >> 64) + offset */
+ asm("mulq %3"
+ : "=d" (current_tick), "=a" (tmp)
+ : "a" (cur_tsc), "r" (scale));
+
+ current_tick += offset;
+ if (tsc_pg->tsc_sequence == sequence)
+ return current_tick;
+
+ if (tsc_pg->tsc_sequence != 0)
+ continue;
+ /*
+			 * Fall back to the MSR method.
+ */
+ break;
+ }
+ }
+ rdmsrl(HV_X64_MSR_TIME_REF_COUNT, current_tick);
+ return current_tick;
+}
+
+static struct clocksource hyperv_cs_tsc = {
+ .name = "hyperv_clocksource_tsc_page",
+ .rating = 400,
+ .read = read_hv_clock_tsc,
+ .mask = CLOCKSOURCE_MASK(64),
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+#endif
+
+static u64 read_hv_clock_msr(struct clocksource *arg)
+{
+ u64 current_tick;
+ /*
+ * Read the partition counter to get the current tick count. This count
+ * is set to 0 when the partition is created and is incremented in
+ * 100 nanosecond units.
+ */
+ rdmsrl(HV_X64_MSR_TIME_REF_COUNT, current_tick);
+ return current_tick;
+}
+
+static struct clocksource hyperv_cs_msr = {
+ .name = "hyperv_clocksource_msr",
+ .rating = 400,
+ .read = read_hv_clock_msr,
+ .mask = CLOCKSOURCE_MASK(64),
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+
+static void *hypercall_pg;
+/*
+ * This function is to be invoked early in the boot sequence after the
+ * hypervisor has been detected.
+ *
+ * 1. Set up the hypercall page.
+ * 2. Register the Hyper-V specific clocksource.
+ */
+void hyperv_init(void)
+{
+ u64 guest_id;
+ union hv_x64_msr_hypercall_contents hypercall_msr;
+
+ if (x86_hyper != &x86_hyper_ms_hyperv)
+ return;
+
+ /*
+	 * Set up the hypercall page and enable hypercalls.
+ * 1. Register the guest ID
+ * 2. Enable the hypercall and register the hypercall page
+ */
+ guest_id = generate_guest_id(0, LINUX_VERSION_CODE, 0);
+ wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id);
+
+ hypercall_pg = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_EXEC);
+ if (hypercall_pg == NULL) {
+ wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
+ return;
+ }
+
+ rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+ hypercall_msr.enable = 1;
+ hypercall_msr.guest_physical_address = vmalloc_to_pfn(hypercall_pg);
+ wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+
+ /*
+ * Register Hyper-V specific clocksource.
+ */
+#ifdef CONFIG_X86_64
+ if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) {
+ union hv_x64_msr_hypercall_contents tsc_msr;
+
+ tsc_pg = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL);
+ if (!tsc_pg) {
+ clocksource_register_hz(&hyperv_cs_msr, NSEC_PER_SEC/100);
+ return;
+ }
+
+ rdmsrl(HV_X64_MSR_REFERENCE_TSC, tsc_msr.as_uint64);
+
+ tsc_msr.enable = 1;
+ tsc_msr.guest_physical_address = vmalloc_to_pfn(tsc_pg);
+
+ wrmsrl(HV_X64_MSR_REFERENCE_TSC, tsc_msr.as_uint64);
+ clocksource_register_hz(&hyperv_cs_tsc, NSEC_PER_SEC/100);
+ return;
+ }
+#endif
+ /*
+ * For 32 bit guests just use the MSR based mechanism for reading
+ * the partition counter.
+ */
+
+ if (ms_hyperv.features & HV_X64_MSR_TIME_REF_COUNT_AVAILABLE)
+ clocksource_register_hz(&hyperv_cs_msr, NSEC_PER_SEC/100);
+}
+
+/*
+ * hv_do_hypercall - Invoke the specified hypercall
+ */
+u64 hv_do_hypercall(u64 control, void *input, void *output)
+{
+ u64 input_address = (input) ? virt_to_phys(input) : 0;
+ u64 output_address = (output) ? virt_to_phys(output) : 0;
+#ifdef CONFIG_X86_64
+ u64 hv_status = 0;
+
+ if (!hypercall_pg)
+ return (u64)ULLONG_MAX;
+
+ __asm__ __volatile__("mov %0, %%r8" : : "r" (output_address) : "r8");
+ __asm__ __volatile__("call *%3" : "=a" (hv_status) :
+ "c" (control), "d" (input_address),
+ "m" (hypercall_pg));
+
+ return hv_status;
+
+#else
+
+ u32 control_hi = control >> 32;
+ u32 control_lo = control & 0xFFFFFFFF;
+ u32 hv_status_hi = 1;
+ u32 hv_status_lo = 1;
+ u32 input_address_hi = input_address >> 32;
+ u32 input_address_lo = input_address & 0xFFFFFFFF;
+ u32 output_address_hi = output_address >> 32;
+ u32 output_address_lo = output_address & 0xFFFFFFFF;
+
+ if (!hypercall_pg)
+ return (u64)ULLONG_MAX;
+
+ __asm__ __volatile__ ("call *%8" : "=d"(hv_status_hi),
+ "=a"(hv_status_lo) : "d" (control_hi),
+ "a" (control_lo), "b" (input_address_hi),
+ "c" (input_address_lo), "D"(output_address_hi),
+ "S"(output_address_lo), "m" (hypercall_pg));
+
+ return hv_status_lo | ((u64)hv_status_hi << 32);
+#endif /* !x86_64 */
+}
+EXPORT_SYMBOL_GPL(hv_do_hypercall);
+
+void hyperv_report_panic(struct pt_regs *regs)
+{
+ static bool panic_reported;
+
+ /*
+ * We prefer to report panic on 'die' chain as we have proper
+ * registers to report, but if we miss it (e.g. on BUG()) we need
+ * to report it on 'panic'.
+ */
+ if (panic_reported)
+ return;
+ panic_reported = true;
+
+ wrmsrl(HV_X64_MSR_CRASH_P0, regs->ip);
+ wrmsrl(HV_X64_MSR_CRASH_P1, regs->ax);
+ wrmsrl(HV_X64_MSR_CRASH_P2, regs->bx);
+ wrmsrl(HV_X64_MSR_CRASH_P3, regs->cx);
+ wrmsrl(HV_X64_MSR_CRASH_P4, regs->dx);
+
+ /*
+ * Let Hyper-V know there is crash data available
+ */
+ wrmsrl(HV_X64_MSR_CRASH_CTL, HV_CRASH_CTL_CRASH_NOTIFY);
+}
+EXPORT_SYMBOL_GPL(hyperv_report_panic);
+
+bool hv_is_hypercall_page_setup(void)
+{
+ union hv_x64_msr_hypercall_contents hypercall_msr;
+
+	/* Check if the hypercall page is set up */
+ hypercall_msr.as_uint64 = 0;
+ rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+
+ if (!hypercall_msr.enable)
+ return false;
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(hv_is_hypercall_page_setup);
#define INTEL_FAM6_ATOM_SILVERMONT2 0x4D /* Avaton/Rangely */
#define INTEL_FAM6_ATOM_AIRMONT 0x4C /* CherryTrail / Braswell */
#define INTEL_FAM6_ATOM_MERRIFIELD 0x4A /* Tangier */
-#define INTEL_FAM6_ATOM_MOOREFIELD 0x5A /* Annidale */
+#define INTEL_FAM6_ATOM_MOOREFIELD 0x5A /* Anniedale */
#define INTEL_FAM6_ATOM_GOLDMONT 0x5C
#define INTEL_FAM6_ATOM_DENVERTON 0x5F /* Goldmont Microserver */
#define exttable_size(et) ((et)->count * EXT_SIGNATURE_SIZE + EXT_HEADER_SIZE)
+static inline u32 intel_get_microcode_revision(void)
+{
+ u32 rev, dummy;
+
+ native_wrmsrl(MSR_IA32_UCODE_REV, 0);
+
+ /* As documented in the SDM: Do a CPUID 1 here */
+ native_cpuid_eax(1);
+
+ /* get the current revision from MSR 0x8B */
+ native_rdmsr(MSR_IA32_UCODE_REV, dummy, rev);
+
+ return rev;
+}
+
#ifdef CONFIG_MICROCODE_INTEL
extern void __init load_ucode_intel_bsp(void);
extern void load_ucode_intel_ap(void);
#include <linux/interrupt.h>
#include <asm/hyperv.h>
+/*
+ * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
+ * is set by CPUID(HVCPUID_VERSION_FEATURES).
+ */
+enum hv_cpuid_function {
+ HVCPUID_VERSION_FEATURES = 0x00000001,
+ HVCPUID_VENDOR_MAXFUNCTION = 0x40000000,
+ HVCPUID_INTERFACE = 0x40000001,
+
+ /*
+ * The remaining functions depend on the value of
+ * HVCPUID_INTERFACE
+ */
+ HVCPUID_VERSION = 0x40000002,
+ HVCPUID_FEATURES = 0x40000003,
+ HVCPUID_ENLIGHTENMENT_INFO = 0x40000004,
+ HVCPUID_IMPLEMENTATION_LIMITS = 0x40000005,
+};
+
struct ms_hyperv_info {
u32 features;
u32 misc_features;
extern struct ms_hyperv_info ms_hyperv;
+/*
+ * Declare the MSR used to set up pages for communicating with the hypervisor.
+ */
+union hv_x64_msr_hypercall_contents {
+ u64 as_uint64;
+ struct {
+ u64 enable:1;
+ u64 reserved:11;
+ u64 guest_physical_address:52;
+ };
+};
+
+/*
+ * TSC page layout.
+ */
+
+struct ms_hyperv_tsc_page {
+ volatile u32 tsc_sequence;
+ u32 reserved1;
+ volatile u64 tsc_scale;
+ volatile s64 tsc_offset;
+ u64 reserved2[509];
+};
+
+/*
+ * The guest OS needs to register the guest ID with the hypervisor.
+ * The guest ID is a 64 bit entity and the structure of this ID is
+ * specified in the Hyper-V specification:
+ *
+ * msdn.microsoft.com/en-us/library/windows/hardware/ff542653%28v=vs.85%29.aspx
+ *
+ * While the current guideline does not specify how Linux guest ID(s)
+ * need to be generated, our plan is to publish the guidelines for
+ * Linux and other guest operating systems that currently are hosted
+ * on Hyper-V. The implementation here conforms to these as-yet
+ * unpublished guidelines.
+ *
+ *
+ * Bit(s)
+ * 63 - Indicates if the OS is Open Source or not; 1 is Open Source
+ * 62:56 - Os Type; Linux is 0x100
+ * 55:48 - Distro specific identification
+ * 47:16 - Linux kernel version number
+ * 15:0 - Distro specific identification
+ *
+ *
+ */
+
+#define HV_LINUX_VENDOR_ID 0x8800
+
+/*
+ * Generate the guest ID based on the guideline described above.
+ */
+
+static inline __u64 generate_guest_id(__u64 d_info1, __u64 kernel_version,
+ __u64 d_info2)
+{
+ __u64 guest_id = 0;
+
+ guest_id = (((__u64)HV_LINUX_VENDOR_ID) << 56);
+ guest_id |= (d_info1 << 48);
+ guest_id |= (kernel_version << 16);
+ guest_id |= d_info2;
+
+ return guest_id;
+}
+
+
+/* Free the message slot and signal end-of-message if required */
+static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
+{
+ /*
+	 * On crash we're reading some other CPU's message page and we need
+	 * to be careful: this other CPU may already have cleared the header
+	 * and the host may already have delivered some other message there.
+	 * If we blindly write msg->header.message_type we're going
+ * to lose it. We can still lose a message of the same type but
+ * we count on the fact that there can only be one
+ * CHANNELMSG_UNLOAD_RESPONSE and we don't care about other messages
+ * on crash.
+ */
+ if (cmpxchg(&msg->header.message_type, old_msg_type,
+ HVMSG_NONE) != old_msg_type)
+ return;
+
+ /*
+	 * Make sure the write to MessageType (i.e. set to
+	 * HVMSG_NONE) happens before we read the
+	 * MessagePending and EOMing. Otherwise, the EOMing
+	 * will not deliver any more messages since there is
+	 * no empty slot.
+ */
+ mb();
+
+ if (msg->header.message_flags.msg_pending) {
+ /*
+ * This will cause message queue rescan to
+ * possibly deliver another msg from the
+ * hypervisor
+ */
+ wrmsrl(HV_X64_MSR_EOM, 0);
+ }
+}
+
+#define hv_get_current_tick(tick) rdmsrl(HV_X64_MSR_TIME_REF_COUNT, tick)
+#define hv_init_timer(timer, tick) wrmsrl(timer, tick)
+#define hv_init_timer_config(config, val) wrmsrl(config, val)
+
+#define hv_get_simp(val) rdmsrl(HV_X64_MSR_SIMP, val)
+#define hv_set_simp(val) wrmsrl(HV_X64_MSR_SIMP, val)
+
+#define hv_get_siefp(val) rdmsrl(HV_X64_MSR_SIEFP, val)
+#define hv_set_siefp(val) wrmsrl(HV_X64_MSR_SIEFP, val)
+
+#define hv_get_synic_state(val) rdmsrl(HV_X64_MSR_SCONTROL, val)
+#define hv_set_synic_state(val) wrmsrl(HV_X64_MSR_SCONTROL, val)
+
+#define hv_get_vp_index(index) rdmsrl(HV_X64_MSR_VP_INDEX, index)
+
+#define hv_get_synint_state(int_num, val) rdmsrl(int_num, val)
+#define hv_set_synint_state(int_num, val) wrmsrl(int_num, val)
+
void hyperv_callback_vector(void);
#ifdef CONFIG_TRACING
#define trace_hyperv_callback_vector hyperv_callback_vector
void hv_remove_kexec_handler(void);
void hv_setup_crash_handler(void (*handler)(struct pt_regs *regs));
void hv_remove_crash_handler(void);
+
+#if IS_ENABLED(CONFIG_HYPERV)
+void hyperv_init(void);
+void hyperv_report_panic(struct pt_regs *regs);
+bool hv_is_hypercall_page_setup(void);
+#endif
#endif
: "memory");
}
+#define native_cpuid_reg(reg) \
+static inline unsigned int native_cpuid_##reg(unsigned int op) \
+{ \
+ unsigned int eax = op, ebx, ecx = 0, edx; \
+ \
+ native_cpuid(&eax, &ebx, &ecx, &edx); \
+ \
+ return reg; \
+}
+
+/*
+ * Native CPUID functions returning a single datum.
+ */
+native_cpuid_reg(eax)
+native_cpuid_reg(ebx)
+native_cpuid_reg(ecx)
+native_cpuid_reg(edx)
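+
+/*
+ * Illustrative use (an assumption for this document, not part of the
+ * change itself): CPUID leaf 1 returns the family/model/stepping word
+ * in EAX, so it can be read with:
+ *
+ *	unsigned int fms = native_cpuid_eax(1);
+ */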
+
static inline void load_cr3(pgd_t *pgdir)
{
write_cr3(__pa(pgdir));
if (task == current)
return __builtin_frame_address(0);
- return (unsigned long *)((struct inactive_task_frame *)task->thread.sp)->bp;
+ return &((struct inactive_task_frame *)task->thread.sp)->bp;
}
#else
static inline unsigned long *
asmlinkage void ret_from_fork(void);
-/* data that is pointed to by thread.sp */
+/*
+ * This is the structure pointed to by thread.sp for an inactive task. The
+ * order of the fields must match the code in __switch_to_asm().
+ */
struct inactive_task_frame {
#ifdef CONFIG_X86_64
unsigned long r15;
unsigned long di;
#endif
unsigned long bx;
+
+ /*
+ * These two fields must be together. They form a stack frame header,
+ * needed by get_frame_pointer().
+ */
unsigned long bp;
unsigned long ret_addr;
};
*/
#define HV_X64_MSR_STAT_PAGES_AVAILABLE (1 << 8)
+/* Crash MSR available */
+#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10)
+
/*
* Feature identification: EBX indicates which flags were specified at
* partition creation. The format is the same as the partition creation
*/
#define HV_X64_RELAXED_TIMING_RECOMMENDED (1 << 5)
+/*
+ * Crash notification flag.
+ */
+#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
+
/* MSR used to identify the guest OS. */
#define HV_X64_MSR_GUEST_OS_ID 0x40000000
/* get information required for multi-node processors */
if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
- u32 eax, ebx, ecx, edx;
- cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
- node_id = ecx & 7;
-
- /* get compute unit information */
- smp_num_siblings = ((ebx >> 8) & 3) + 1;
- c->x86_max_cores /= smp_num_siblings;
- c->cpu_core_id = ebx & 0xff;
+ node_id = cpuid_ecx(0x8000001e) & 7;
/*
* We may have multiple LLCs if L3 caches exist, so check if we
{
int bit;
- if (get_option(&arg, &bit) && bit < NCAPINTS*32)
+ if (get_option(&arg, &bit) && bit >= 0 && bit < NCAPINTS * 32)
setup_clear_cpu_cap(bit);
else
return 0;
#include <asm/bugs.h>
#include <asm/cpu.h>
#include <asm/intel-family.h>
+#include <asm/microcode_intel.h>
#ifdef CONFIG_X86_64
#include <linux/topology.h>
(c->x86 == 0x6 && c->x86_model >= 0x0e))
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
- if (c->x86 >= 6 && !cpu_has(c, X86_FEATURE_IA64)) {
- unsigned lower_word;
-
- wrmsr(MSR_IA32_UCODE_REV, 0, 0);
- /* Required by the SDM */
- sync_core();
- rdmsr(MSR_IA32_UCODE_REV, lower_word, c->microcode);
- }
+ if (c->x86 >= 6 && !cpu_has(c, X86_FEATURE_IA64))
+ c->microcode = intel_get_microcode_revision();
/*
* Atom erratum AAE44/AAF40/AAG38/AAH41:
{
struct ucode_patch *p;
- p = kzalloc(size, GFP_KERNEL);
+ p = kzalloc(sizeof(struct ucode_patch), GFP_KERNEL);
if (!p)
return ERR_PTR(-ENOMEM);
return patch;
}
-static void cpuid_1(void)
-{
- /*
- * According to the Intel SDM, Volume 3, 9.11.7:
- *
- * CPUID returns a value in a model specific register in
- * addition to its usual register return values. The
- * semantics of CPUID cause it to deposit an update ID value
- * in the 64-bit model-specific register at address 08BH
- * (IA32_BIOS_SIGN_ID). If no update is present in the
- * processor, the value in the MSR remains unmodified.
- *
- * Use native_cpuid -- this code runs very early and we don't
- * want to mess with paravirt.
- */
- unsigned int eax = 1, ebx, ecx = 0, edx;
-
- native_cpuid(&eax, &ebx, &ecx, &edx);
-}
-
static int collect_cpu_info_early(struct ucode_cpu_info *uci)
{
unsigned int val[2];
native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
csig.pf = 1 << ((val[1] >> 18) & 7);
}
- native_wrmsrl(MSR_IA32_UCODE_REV, 0);
-
- /* As documented in the SDM: Do a CPUID 1 here */
- cpuid_1();
- /* get the current revision from MSR 0x8B */
- native_rdmsr(MSR_IA32_UCODE_REV, val[0], val[1]);
-
- csig.rev = val[1];
+ csig.rev = intel_get_microcode_revision();
uci->cpu_sig = csig;
uci->valid = 1;
static int apply_microcode_early(struct ucode_cpu_info *uci, bool early)
{
struct microcode_intel *mc;
- unsigned int val[2];
+ u32 rev;
mc = uci->mc;
if (!mc)
/* write microcode via MSR 0x79 */
native_wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
- native_wrmsrl(MSR_IA32_UCODE_REV, 0);
- /* As documented in the SDM: Do a CPUID 1 here */
- cpuid_1();
-
- /* get the current revision from MSR 0x8B */
- native_rdmsr(MSR_IA32_UCODE_REV, val[0], val[1]);
- if (val[1] != mc->hdr.rev)
+ rev = intel_get_microcode_revision();
+ if (rev != mc->hdr.rev)
return -1;
#ifdef CONFIG_X86_64
/* Flush global tlb. This is precaution. */
flush_tlb_early();
#endif
- uci->cpu_sig.rev = val[1];
+ uci->cpu_sig.rev = rev;
if (early)
print_ucode(uci);
struct microcode_intel *mc;
struct ucode_cpu_info *uci;
struct cpuinfo_x86 *c;
- unsigned int val[2];
static int prev_rev;
+ u32 rev;
/* We should bind the task to the CPU */
if (WARN_ON(raw_smp_processor_id() != cpu))
/* write microcode via MSR 0x79 */
wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
- wrmsrl(MSR_IA32_UCODE_REV, 0);
-
- /* As documented in the SDM: Do a CPUID 1 here */
- cpuid_1();
- /* get the current revision from MSR 0x8B */
- rdmsr(MSR_IA32_UCODE_REV, val[0], val[1]);
+ rev = intel_get_microcode_revision();
- if (val[1] != mc->hdr.rev) {
+ if (rev != mc->hdr.rev) {
pr_err("CPU%d update to revision 0x%x failed\n",
cpu, mc->hdr.rev);
return -1;
}
- if (val[1] != prev_rev) {
+ if (rev != prev_rev) {
pr_info("updated to revision 0x%x, date = %04x-%02x-%02x\n",
- val[1],
+ rev,
mc->hdr.date & 0xffff,
mc->hdr.date >> 24,
(mc->hdr.date >> 16) & 0xff);
- prev_rev = val[1];
+ prev_rev = rev;
}
c = &cpu_data(cpu);
- uci->cpu_sig.rev = val[1];
- c->microcode = val[1];
+ uci->cpu_sig.rev = rev;
+ c->microcode = rev;
return 0;
}
u8 *ucode_ptr = data, *new_mc = NULL, *mc = NULL;
int new_rev = uci->cpu_sig.rev;
unsigned int leftover = size;
- unsigned int curr_mc_size = 0;
+ unsigned int curr_mc_size = 0, new_mc_size = 0;
unsigned int csig, cpf;
while (leftover) {
vfree(new_mc);
new_rev = mc_header.rev;
new_mc = mc;
+ new_mc_size = mc_size;
mc = NULL; /* trigger new vmalloc */
}
* permanent memory. So it will be loaded early when a CPU is hot added
* or resumes.
*/
- save_mc_for_early(new_mc, curr_mc_size);
+ save_mc_for_early(new_mc, new_mc_size);
pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n",
cpu, new_rev, uci->cpu_sig.rev);
return 0;
}
-static u64 read_hv_clock(struct clocksource *arg)
-{
- u64 current_tick;
- /*
- * Read the partition counter to get the current tick count. This count
- * is set to 0 when the partition is created and is incremented in
- * 100 nanosecond units.
- */
- rdmsrl(HV_X64_MSR_TIME_REF_COUNT, current_tick);
- return current_tick;
-}
-
-static struct clocksource hyperv_cs = {
- .name = "hyperv_clocksource",
- .rating = 400, /* use this when running on Hyperv*/
- .read = read_hv_clock,
- .mask = CLOCKSOURCE_MASK(64),
- .flags = CLOCK_SOURCE_IS_CONTINUOUS,
-};
-
static unsigned char hv_get_nmi_reason(void)
{
return 0;
static void __init ms_hyperv_init_platform(void)
{
+ int hv_host_info_eax;
+ int hv_host_info_ebx;
+ int hv_host_info_ecx;
+ int hv_host_info_edx;
+
/*
* Extract the features and hints
*/
pr_info("HyperV: features 0x%x, hints 0x%x\n",
ms_hyperv.features, ms_hyperv.hints);
+ /*
+ * Extract host information.
+ */
+ if (cpuid_eax(HVCPUID_VENDOR_MAXFUNCTION) >= HVCPUID_VERSION) {
+ hv_host_info_eax = cpuid_eax(HVCPUID_VERSION);
+ hv_host_info_ebx = cpuid_ebx(HVCPUID_VERSION);
+ hv_host_info_ecx = cpuid_ecx(HVCPUID_VERSION);
+ hv_host_info_edx = cpuid_edx(HVCPUID_VERSION);
+
+ pr_info("Hyper-V Host Build:%d-%d.%d-%d-%d.%d\n",
+ hv_host_info_eax, hv_host_info_ebx >> 16,
+ hv_host_info_ebx & 0xFFFF, hv_host_info_ecx,
+ hv_host_info_edx >> 24, hv_host_info_edx & 0xFFFFFF);
+ }
+
#ifdef CONFIG_X86_LOCAL_APIC
if (ms_hyperv.features & HV_X64_MSR_APIC_FREQUENCY_AVAILABLE) {
/*
"hv_nmi_unknown");
#endif
- if (ms_hyperv.features & HV_X64_MSR_TIME_REF_COUNT_AVAILABLE)
- clocksource_register_hz(&hyperv_cs, NSEC_PER_SEC/100);
-
#ifdef CONFIG_X86_IO_APIC
no_timer_check = 1;
#endif
*/
if (efi_enabled(EFI_BOOT))
x86_platform.get_nmi_reason = hv_get_nmi_reason;
+
+#if IS_ENABLED(CONFIG_HYPERV)
+ /*
+	 * Set up the hook to get control after APIC initialization.
+ */
+ x86_platform.apic_post_init = hyperv_init;
+#endif
}
const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
crystal_khz = 24000; /* 24.0 MHz */
break;
case INTEL_FAM6_SKYLAKE_X:
+ case INTEL_FAM6_ATOM_DENVERTON:
crystal_khz = 25000; /* 25.0 MHz */
break;
case INTEL_FAM6_ATOM_GOLDMONT:
#define FRAME_HEADER_SIZE (sizeof(long) * 2)
+/*
+ * This disables KASAN checking when reading a value from another task's stack,
+ * since the other task could be running on another CPU and could have poisoned
+ * the stack in the meantime.
+ */
+#define READ_ONCE_TASK_STACK(task, x) \
+({ \
+ unsigned long val; \
+ if (task == current) \
+ val = READ_ONCE(x); \
+ else \
+ val = READ_ONCE_NOCHECK(x); \
+ val; \
+})
+
static void unwind_dump(struct unwind_state *state, unsigned long *sp)
{
static bool dumped_before = false;
if (state->regs && user_mode(state->regs))
return 0;
- addr = ftrace_graph_ret_addr(state->task, &state->graph_idx, *addr_p,
+ addr = READ_ONCE_TASK_STACK(state->task, *addr_p);
+ addr = ftrace_graph_ret_addr(state->task, &state->graph_idx, addr,
addr_p);
return __kernel_text_address(addr) ? addr : 0;
if (state->regs)
next_bp = (unsigned long *)state->regs->bp;
else
- next_bp = (unsigned long *)*state->bp;
		next_bp = (unsigned long *)READ_ONCE_TASK_STACK(state->task, *state->bp);
/* is the next frame pointer an encoded pointer to pt_regs? */
regs = decode_frame_pointer(next_bp);
return true;
bad_address:
+ /*
+ * When unwinding a non-current task, the task might actually be
+ * running on another CPU, in which case it could be modifying its
+ * stack while we're reading it. This is generally not a problem and
+ * can be ignored as long as the caller understands that unwinding
+ * another task will not always succeed.
+ */
+ if (state->task != current)
+ goto the_end;
+
if (state->regs) {
printk_deferred_once(KERN_WARNING
"WARNING: kernel stack regs at %p in %s:%d has bad 'bp' value %p\n",
return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception);
}
+static int segmented_write_std(struct x86_emulate_ctxt *ctxt,
+ struct segmented_address addr,
+ void *data,
+ unsigned int size)
+{
+ int rc;
+ ulong linear;
+
+ rc = linearize(ctxt, addr, size, true, &linear);
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+ return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception);
+}
+
/*
* Prefetch the remaining bytes of the instruction without crossing page
* boundary if they are not in fetch_cache yet.
&ctxt->exception);
}
-/* Does not support long mode */
static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
u16 selector, int seg, u8 cpl,
enum x86_transfer_type transfer,
rpl = selector & 3;
- /* NULL selector is not valid for TR, CS and SS (except for long mode) */
- if ((seg == VCPU_SREG_CS
- || (seg == VCPU_SREG_SS
- && (ctxt->mode != X86EMUL_MODE_PROT64 || rpl != cpl))
- || seg == VCPU_SREG_TR)
- && null_selector)
- goto exception;
-
/* TR should be in GDT only */
if (seg == VCPU_SREG_TR && (selector & (1 << 2)))
goto exception;
- if (null_selector) /* for NULL selector skip all following checks */
+ /* NULL selector is not valid for TR, CS and (except for long mode) SS */
+ if (null_selector) {
+ if (seg == VCPU_SREG_CS || seg == VCPU_SREG_TR)
+ goto exception;
+
+ if (seg == VCPU_SREG_SS) {
+ if (ctxt->mode != X86EMUL_MODE_PROT64 || rpl != cpl)
+ goto exception;
+
+ /*
+ * ctxt->ops->set_segment expects the CPL to be in
+ * SS.DPL, so fake an expand-up 32-bit data segment.
+ */
+ seg_desc.type = 3;
+ seg_desc.p = 1;
+ seg_desc.s = 1;
+ seg_desc.dpl = cpl;
+ seg_desc.d = 1;
+ seg_desc.g = 1;
+ }
+
+ /* Skip all following checks */
goto load;
+ }
ret = read_segment_descriptor(ctxt, selector, &seg_desc, &desc_addr);
if (ret != X86EMUL_CONTINUE)
u16 selector, int seg)
{
u8 cpl = ctxt->ops->cpl(ctxt);
+
+ /*
+ * None of MOV, POP and LSS can load a NULL selector in CPL=3, but
+ * they can load it at CPL<3 (Intel's manual says only LSS can,
+ * but it's wrong).
+ *
+ * However, the Intel manual says that putting IST=1/DPL=3 in
+ * an interrupt gate will result in SS=3 (the AMD manual instead
+ * says it doesn't), so allow SS=3 in __load_segment_descriptor
+ * and only forbid it here.
+ */
+ if (seg == VCPU_SREG_SS && selector == 3 &&
+ ctxt->mode == X86EMUL_MODE_PROT64)
+ return emulate_exception(ctxt, GP_VECTOR, 0, true);
+
return __load_segment_descriptor(ctxt, selector, seg, cpl,
X86_TRANSFER_NONE, NULL);
}
}
/* Disable writeback. */
ctxt->dst.type = OP_NONE;
- return segmented_write(ctxt, ctxt->dst.addr.mem,
- &desc_ptr, 2 + ctxt->op_bytes);
+ return segmented_write_std(ctxt, ctxt->dst.addr.mem,
+ &desc_ptr, 2 + ctxt->op_bytes);
}
static int em_sgdt(struct x86_emulate_ctxt *ctxt)
else
size = offsetof(struct fxregs_state, xmm_space[0]);
- return segmented_write(ctxt, ctxt->memop.addr.mem, &fx_state, size);
+ return segmented_write_std(ctxt, ctxt->memop.addr.mem, &fx_state, size);
}
static int fxrstor_fixup(struct x86_emulate_ctxt *ctxt,
if (rc != X86EMUL_CONTINUE)
return rc;
- rc = segmented_read(ctxt, ctxt->memop.addr.mem, &fx_state, 512);
+ rc = segmented_read_std(ctxt, ctxt->memop.addr.mem, &fx_state, 512);
if (rc != X86EMUL_CONTINUE)
return rc;
jump_label_rate_limit(&apic_hw_disabled, HZ);
jump_label_rate_limit(&apic_sw_disabled, HZ);
}
+
+void kvm_lapic_exit(void)
+{
+ static_key_deferred_flush(&apic_hw_disabled);
+ static_key_deferred_flush(&apic_sw_disabled);
+}
int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data);
void kvm_lapic_init(void);
+void kvm_lapic_exit(void);
#define VEC_POS(v) ((v) & (32 - 1))
#define REG_POS(v) (((v) >> 5) << 4)
switch (cap->cap) {
case KVM_CAP_HYPERV_SYNIC:
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return -EINVAL;
return kvm_hv_activate_synic(vcpu);
default:
return -EINVAL;
void kvm_arch_exit(void)
{
+ kvm_lapic_exit();
perf_unregister_guest_info_callbacks(&kvm_guest_cbs);
if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
* We were not able to extract an address from the instruction,
* probably because there was something invalid in it.
*/
- if (info->si_addr == (void *)-1) {
+ if (info->si_addr == (void __user *)-1) {
err = -EINVAL;
goto err_out;
}
set_memory_ro((unsigned long)header, header->pages);
prog->bpf_func = (void *)image;
prog->jited = 1;
+ } else {
+ prog = orig_prog;
}
out_addrs:
return 0;
}
+#define OVERFLOW_ADDR_SHIFT (64 - EFI_PAGE_SHIFT)
+#define OVERFLOW_ADDR_MASK (U64_MAX << OVERFLOW_ADDR_SHIFT)
+#define U64_HIGH_BIT (~(U64_MAX >> 1))
+
+static bool __init efi_memmap_entry_valid(const efi_memory_desc_t *md, int i)
+{
+ u64 end = (md->num_pages << EFI_PAGE_SHIFT) + md->phys_addr - 1;
+ u64 end_hi = 0;
+ char buf[64];
+
+ if (md->num_pages == 0) {
+ end = 0;
+ } else if (md->num_pages > EFI_PAGES_MAX ||
+ EFI_PAGES_MAX - md->num_pages <
+ (md->phys_addr >> EFI_PAGE_SHIFT)) {
+ end_hi = (md->num_pages & OVERFLOW_ADDR_MASK)
+ >> OVERFLOW_ADDR_SHIFT;
+
+ if ((md->phys_addr & U64_HIGH_BIT) && !(end & U64_HIGH_BIT))
+ end_hi += 1;
+ } else {
+ return true;
+ }
+
+ pr_warn_once(FW_BUG "Invalid EFI memory map entries:\n");
+
+ if (end_hi) {
+ pr_warn("mem%02u: %s range=[0x%016llx-0x%llx%016llx] (invalid)\n",
+ i, efi_md_typeattr_format(buf, sizeof(buf), md),
+ md->phys_addr, end_hi, end);
+ } else {
+ pr_warn("mem%02u: %s range=[0x%016llx-0x%016llx] (invalid)\n",
+ i, efi_md_typeattr_format(buf, sizeof(buf), md),
+ md->phys_addr, end);
+ }
+ return false;
+}
+
+static void __init efi_clean_memmap(void)
+{
+ efi_memory_desc_t *out = efi.memmap.map;
+ const efi_memory_desc_t *in = out;
+ const efi_memory_desc_t *end = efi.memmap.map_end;
+ int i, n_removal;
+
+ for (i = n_removal = 0; in < end; i++) {
+ if (efi_memmap_entry_valid(in, i)) {
+ if (out != in)
+ memcpy(out, in, efi.memmap.desc_size);
+ out = (void *)out + efi.memmap.desc_size;
+ } else {
+ n_removal++;
+ }
+ in = (void *)in + efi.memmap.desc_size;
+ }
+
+ if (n_removal > 0) {
+ u64 size = efi.memmap.nr_map - n_removal;
+
+ pr_warn("Removing %d invalid memory map entries.\n", n_removal);
+ efi_memmap_install(efi.memmap.phys_map, size);
+ }
+}
+
void __init efi_print_memmap(void)
{
efi_memory_desc_t *md;
}
}
+ efi_clean_memmap();
+
if (efi_enabled(EFI_DBG))
efi_print_memmap();
}
new_size = efi.memmap.desc_size * num_entries;
- new_phys = memblock_alloc(new_size, 0);
+ new_phys = efi_memmap_alloc(num_entries);
if (!new_phys) {
pr_err("Could not allocate boot services memmap\n");
return;
}
new_size = efi.memmap.desc_size * num_entries;
- new_phys = memblock_alloc(new_size, 0);
+ new_phys = efi_memmap_alloc(num_entries);
if (!new_phys) {
pr_err("Failed to allocate new EFI memmap\n");
return;
obj-$(subst m,y,$(CONFIG_GPIO_INTEL_PMIC)) += platform_pmic_gpio.o
obj-$(subst m,y,$(CONFIG_INTEL_MFLD_THERMAL)) += platform_msic_thermal.o
# SPI Devices
-obj-$(subst m,y,$(CONFIG_SPI_SPIDEV)) += platform_spidev.o
+obj-$(subst m,y,$(CONFIG_SPI_SPIDEV)) += platform_mrfld_spidev.o
# I2C Devices
obj-$(subst m,y,$(CONFIG_SENSORS_EMC1403)) += platform_emc1403.o
obj-$(subst m,y,$(CONFIG_SENSORS_LIS3LV02D)) += platform_lis331.o
* of the License.
*/
+#include <linux/err.h>
#include <linux/init.h>
#include <linux/sfi.h>
#include <linux/spi/pxa2xx_spi.h>
{
struct spi_board_info *spi_info = info;
+ if (intel_mid_identify_cpu() != INTEL_MID_CPU_CHIP_TANGIER)
+ return ERR_PTR(-ENODEV);
+
spi_info->mode = SPI_MODE_0;
spi_info->controller_data = &spidev_spi_chip;
if ((sector | nr_sects) & bs_mask)
return -EINVAL;
- if (discard) {
- ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask,
- BLKDEV_DISCARD_ZERO, biop);
- if (ret == 0 || (ret && ret != -EOPNOTSUPP))
- goto out;
- }
-
ret = __blkdev_issue_write_zeroes(bdev, sector, nr_sects, gfp_mask,
biop);
if (ret == 0 || (ret && ret != -EOPNOTSUPP))
struct bio *bio = NULL;
struct blk_plug plug;
+ if (discard) {
+ if (!blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask,
+ BLKDEV_DISCARD_ZERO))
+ return 0;
+ }
+
blk_start_plug(&plug);
ret = __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask,
&bio, discard);
static inline sector_t blk_zone_start(struct request_queue *q,
sector_t sector)
{
- sector_t zone_mask = blk_queue_zone_size(q) - 1;
+ sector_t zone_mask = blk_queue_zone_sectors(q) - 1;
return sector & ~zone_mask;
}
return -EINVAL;
/* Check alignment (handle eventual smaller last zone) */
- zone_sectors = blk_queue_zone_size(q);
+ zone_sectors = blk_queue_zone_sectors(q);
if (sector & (zone_sectors - 1))
return -EINVAL;
struct block_device *bdev,
sector_t from, sector_t size)
{
- unsigned int zone_size = bdev_zone_size(bdev);
+ unsigned int zone_sectors = bdev_zone_sectors(bdev);
/*
* If this function is called, then the disk is a zoned block device
* regular block devices (no zone operation) and their zone size will
* be reported as 0. Allow this case.
*/
- if (!zone_size)
+ if (!zone_sectors)
return true;
/*
* use it. Check the zone size too: it should be a power of 2 number
* of sectors.
*/
- if (WARN_ON_ONCE(!is_power_of_2(zone_size))) {
+ if (WARN_ON_ONCE(!is_power_of_2(zone_sectors))) {
u32 rem;
- div_u64_rem(from, zone_size, &rem);
+ div_u64_rem(from, zone_sectors, &rem);
if (rem)
return false;
if ((from + size) < get_capacity(disk)) {
- div_u64_rem(size, zone_size, &rem);
+ div_u64_rem(size, zone_sectors, &rem);
if (rem)
return false;
}
} else {
- if (from & (zone_size - 1))
+ if (from & (zone_sectors - 1))
return false;
if ((from + size) < get_capacity(disk) &&
- (size & (zone_size - 1)))
+ (size & (zone_sectors - 1)))
return false;
}
tristate "Holtek Ht16K33 LED controller with keyscan"
depends on FB && OF && I2C && INPUT
select FB_SYS_FOPS
- select FB_CFB_FILLRECT
- select FB_CFB_COPYAREA
- select FB_CFB_IMAGEBLIT
+ select FB_SYS_FILLRECT
+ select FB_SYS_COPYAREA
+ select FB_SYS_IMAGEBLIT
select INPUT_MATRIXKMAP
select FB_BACKLIGHT
help
extern struct kset *devices_kset;
extern void devices_kset_move_last(struct device *dev);
-extern struct device_attribute dev_attr_deferred_probe;
-
#if defined(CONFIG_MODULES) && defined(CONFIG_SYSFS)
extern void module_add_driver(struct module *mod, struct device_driver *drv);
extern void module_remove_driver(struct device_driver *drv);
goto err_remove_dev_groups;
}
- error = device_create_file(dev, &dev_attr_deferred_probe);
- if (error)
- goto err_remove_online;
-
return 0;
- err_remove_online:
- device_remove_file(dev, &dev_attr_online);
err_remove_dev_groups:
device_remove_groups(dev, dev->groups);
err_remove_type_groups:
struct class *class = dev->class;
const struct device_type *type = dev->type;
- device_remove_file(dev, &dev_attr_deferred_probe);
device_remove_file(dev, &dev_attr_online);
device_remove_groups(dev, dev->groups);
static LIST_HEAD(deferred_probe_active_list);
static atomic_t deferred_trigger_count = ATOMIC_INIT(0);
-static ssize_t deferred_probe_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- bool value;
-
- mutex_lock(&deferred_probe_mutex);
- value = !list_empty(&dev->p->deferred_probe);
- mutex_unlock(&deferred_probe_mutex);
-
- return sprintf(buf, "%d\n", value);
-}
-DEVICE_ATTR_RO(deferred_probe);
-
/*
 * In some cases, like suspend to RAM or hibernation, it might be reasonable
* to prohibit probing of devices as it could be unsafe.
return -ENOMEM;
for (i = 0; i < nbds_max; i++) {
+ struct request_queue *q;
struct gendisk *disk = alloc_disk(1 << part_shift);
if (!disk)
goto out;
* every gendisk to have its very own request_queue struct.
* These structs are big so we dynamically allocate them.
*/
- disk->queue = blk_mq_init_queue(&nbd_dev[i].tag_set);
- if (!disk->queue) {
+ q = blk_mq_init_queue(&nbd_dev[i].tag_set);
+ if (IS_ERR(q)) {
blk_mq_free_tag_set(&nbd_dev[i].tag_set);
put_disk(disk);
goto out;
}
+ disk->queue = q;
/*
* Tell the block layer that we are not a rotational device
struct virtio_blk_outhdr out_hdr;
struct virtio_scsi_inhdr in_hdr;
u8 status;
+ u8 sense[SCSI_SENSE_BUFFERSIZE];
struct scatterlist sg[];
};
}
if (type == cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_SCSI_CMD)) {
- sg_init_one(&sense, vbr->req->sense, SCSI_SENSE_BUFFERSIZE);
+ memcpy(vbr->sense, vbr->req->sense, SCSI_SENSE_BUFFERSIZE);
+ sg_init_one(&sense, vbr->sense, SCSI_SENSE_BUFFERSIZE);
sgs[num_out + num_in++] = &sense;
sg_init_one(&inhdr, &vbr->in_hdr, sizeof(vbr->in_hdr));
sgs[num_out + num_in++] = &inhdr;
if (err)
goto out_put_disk;
- q = vblk->disk->queue = blk_mq_init_queue(&vblk->tag_set);
+ q = blk_mq_init_queue(&vblk->tag_set);
if (IS_ERR(q)) {
err = -ENOMEM;
goto out_free_tags;
}
+ vblk->disk->queue = q;
q->queuedata = vblk;
#include <linux/genhd.h>
#include <linux/highmem.h>
#include <linux/slab.h>
+#include <linux/backing-dev.h>
#include <linux/string.h>
#include <linux/vmalloc.h>
#include <linux/err.h>
return bvec->bv_len != PAGE_SIZE;
}
+static void zram_revalidate_disk(struct zram *zram)
+{
+ revalidate_disk(zram->disk);
+	/* revalidate_disk() resets BDI_CAP_STABLE_WRITES, so set it again */
+ zram->disk->queue->backing_dev_info.capabilities |=
+ BDI_CAP_STABLE_WRITES;
+}
+
/*
* Check if request is within bounds and aligned on zram logical blocks.
*/
zram->comp = comp;
zram->disksize = disksize;
set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
+ zram_revalidate_disk(zram);
up_write(&zram->init_lock);
- /*
- * Revalidate disk out of the init_lock to avoid lockdep splat.
- * It's okay because disk's capacity is protected by init_lock
- * so that revalidate_disk always sees up-to-date capacity.
- */
- revalidate_disk(zram->disk);
-
return len;
out_destroy_comp:
/* Make sure all the pending I/O are finished */
fsync_bdev(bdev);
zram_reset_device(zram);
- revalidate_disk(zram->disk);
+ zram_revalidate_disk(zram);
bdput(bdev);
mutex_lock(&bdev->bd_mutex);
controlling the behavior of this hardware.
config DEVPORT
- bool
+ bool "/dev/port character device"
depends on ISA || PCI
default y
+ help
+ Say Y here if you want to support the /dev/port device. The /dev/port
+ device is similar to /dev/mem, but for I/O ports.
source "drivers/s390/char/Kconfig"
#include <linux/kthread.h>
#include <linux/delay.h>
-
-/*
- * The apm_bios device is one of the misc char devices.
- * This is its minor number.
- */
-#define APM_MINOR_DEV 134
-
/*
* One option can be changed at boot time as follows:
* apm=on/off enable/disable APM
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/module.h>
-#include <linux/miscdevice.h>
#include <linux/delay.h>
#include <linux/bcd.h>
#include <linux/mutex.h>
char *kbuf; /* k-addr because vread() takes vmlist_lock rwlock */
int err = 0;
- if (!pfn_valid(PFN_DOWN(p)))
- return -EIO;
-
read = 0;
if (p < (unsigned long) high_memory) {
low_count = count;
* by the kernel or data corruption may occur
*/
kbuf = xlate_dev_kmem_ptr((void *)p);
+ if (!virt_addr_valid(kbuf))
+ return -ENXIO;
if (copy_to_user(buf, kbuf, sz))
return -EFAULT;
* corruption may occur.
*/
ptr = xlate_dev_kmem_ptr((void *)p);
+ if (!virt_addr_valid(ptr))
+ return -ENXIO;
copied = copy_from_user(ptr, buf, sz);
if (copied) {
char *kbuf; /* k-addr because vwrite() takes vmlist_lock rwlock */
int err = 0;
- if (!pfn_valid(PFN_DOWN(p)))
- return -EIO;
-
if (p < (unsigned long) high_memory) {
unsigned long to_write = min_t(unsigned long, count,
(unsigned long)high_memory - p);
}
static struct miscdevice mmtimer_miscdev = {
- SGI_MMTIMER,
- MMTIMER_NAME,
- &mmtimer_fops
+ .minor = SGI_MMTIMER,
+ .name = MMTIMER_NAME,
+ .fops = &mmtimer_fops
};
static struct timespec sgi_clock_offset;
struct pardevice *pdev = NULL;
char *name;
struct pardev_cb ppdev_cb;
+ int rc = 0;
name = kasprintf(GFP_KERNEL, CHRDEV "%x", minor);
if (name == NULL)
port = parport_find_number(minor);
if (!port) {
pr_warn("%s: no associated port!\n", name);
- kfree(name);
- return -ENXIO;
+ rc = -ENXIO;
+ goto err;
}
memset(&ppdev_cb, 0, sizeof(ppdev_cb));
ppdev_cb.private = pp;
pdev = parport_register_dev_model(port, name, &ppdev_cb, minor);
parport_put_port(port);
- kfree(name);
if (!pdev) {
pr_warn("%s: failed to register device!\n", name);
- return -ENXIO;
+ rc = -ENXIO;
+ goto err;
}
pp->pdev = pdev;
dev_dbg(&pdev->dev, "registered pardevice\n");
- return 0;
+err:
+ kfree(name);
+ return rc;
}
static enum ieee1284_phase init_phase(int mode)
select DW_DMAC_CORE
help
Support the Synopsys DesignWare AHB DMA controller on the
- platfroms that enumerate it as a PCI device. For example,
+ platforms that enumerate it as a PCI device. For example,
Intel Medfield has integrated this GPDMA controller.
#define PCI_DEVICE_ID_INTEL_IOAT_BDX8 0x6f2e
#define PCI_DEVICE_ID_INTEL_IOAT_BDX9 0x6f2f
+#define PCI_DEVICE_ID_INTEL_IOAT_SKX 0x2021
+
#define IOAT_VER_1_2 0x12 /* Version 1.2 */
#define IOAT_VER_2_0 0x20 /* Version 2.0 */
#define IOAT_VER_3_0 0x30 /* Version 3.0 */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX8) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX9) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SKX) },
+
/* I/OAT v3.3 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD1) },
}
}
+static inline bool is_skx_ioat(struct pci_dev *pdev)
+{
+ return (pdev->device == PCI_DEVICE_ID_INTEL_IOAT_SKX) ? true : false;
+}
+
static bool is_xeon_cb32(struct pci_dev *pdev)
{
return is_jf_ioat(pdev) || is_snb_ioat(pdev) || is_ivb_ioat(pdev) ||
- is_hsw_ioat(pdev) || is_bdx_ioat(pdev);
+ is_hsw_ioat(pdev) || is_bdx_ioat(pdev) || is_skx_ioat(pdev);
}
bool is_bwd_ioat(struct pci_dev *pdev)
/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
ioat_chan->completion =
dma_pool_zalloc(ioat_chan->ioat_dma->completion_pool,
- GFP_KERNEL, &ioat_chan->completion_dma);
+ GFP_NOWAIT, &ioat_chan->completion_dma);
if (!ioat_chan->completion)
return -ENOMEM;
ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_HIGH);
order = IOAT_MAX_ORDER;
- ring = ioat_alloc_ring(c, order, GFP_KERNEL);
+ ring = ioat_alloc_ring(c, order, GFP_NOWAIT);
if (!ring)
return -ENOMEM;
device->version = readb(device->reg_base + IOAT_VER_OFFSET);
if (device->version >= IOAT_VER_3_0) {
+ if (is_skx_ioat(pdev))
+ device->version = IOAT_VER_3_2;
err = ioat3_dma_probe(device, ioat_dca_enabled);
if (device->version >= IOAT_VER_3_3)
d->ccr |= CCR_DST_AMODE_POSTINC;
if (port_window) {
d->ccr |= CCR_SRC_AMODE_DBLIDX;
- d->ei = 1;
- /*
- * One frame covers the port_window and by configure
- * the source frame index to be -1 * (port_window - 1)
- * we instruct the sDMA that after a frame is processed
- * it should move back to the start of the window.
- */
- d->fi = -(port_window_bytes - 1);
if (port_window_bytes >= 64)
- d->csdp = CSDP_SRC_BURST_64 | CSDP_SRC_PACKED;
+ d->csdp |= CSDP_SRC_BURST_64;
else if (port_window_bytes >= 32)
- d->csdp = CSDP_SRC_BURST_32 | CSDP_SRC_PACKED;
+ d->csdp |= CSDP_SRC_BURST_32;
else if (port_window_bytes >= 16)
- d->csdp = CSDP_SRC_BURST_16 | CSDP_SRC_PACKED;
+ d->csdp |= CSDP_SRC_BURST_16;
+
} else {
d->ccr |= CCR_SRC_AMODE_CONSTANT;
}
d->ccr |= CCR_SRC_AMODE_POSTINC;
if (port_window) {
d->ccr |= CCR_DST_AMODE_DBLIDX;
+ d->ei = 1;
+ /*
+ * One frame covers the port_window and by configuring
+ * the destination frame index to be -1 * (port_window_bytes - 1)
+ * we instruct the sDMA that after a frame is processed
+ * it should move back to the start of the window.
+ */
+ d->fi = -(port_window_bytes - 1);
if (port_window_bytes >= 64)
- d->csdp = CSDP_DST_BURST_64 | CSDP_DST_PACKED;
+ d->csdp |= CSDP_DST_BURST_64;
else if (port_window_bytes >= 32)
- d->csdp = CSDP_DST_BURST_32 | CSDP_DST_PACKED;
+ d->csdp |= CSDP_DST_BURST_32;
else if (port_window_bytes >= 16)
- d->csdp = CSDP_DST_BURST_16 | CSDP_DST_PACKED;
+ d->csdp |= CSDP_DST_BURST_16;
} else {
d->ccr |= CCR_DST_AMODE_CONSTANT;
}
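The negative frame index above is what rewinds the port-side address after every frame. A plain-C analogue of the access pattern it requests, assuming one byte per element (illustrative only, not driver code):
/* Address cycles through the window: what ei = 1, fi = -(win - 1) achieve. */
static unsigned int port_offset(unsigned int n, unsigned int port_window_bytes)
{
	return n % port_window_bytes;	/* wrap back to the window start */
}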
osg->addr = sg_dma_address(sgent);
osg->en = en;
osg->fn = sg_dma_len(sgent) / frame_bytes;
- if (port_window && dir == DMA_MEM_TO_DEV) {
+ if (port_window && dir == DMA_DEV_TO_MEM) {
osg->ei = 1;
/*
* One frame covers the port_window and by configuring
struct omap_dmadev *od;
struct resource *res;
int rc, i, irq;
+ u32 lch_count;
od = devm_kzalloc(&pdev->dev, sizeof(*od), GFP_KERNEL);
if (!od)
spin_lock_init(&od->lock);
spin_lock_init(&od->irq_lock);
- if (!pdev->dev.of_node) {
- od->dma_requests = od->plat->dma_attr->lch_count;
- if (unlikely(!od->dma_requests))
- od->dma_requests = OMAP_SDMA_REQUESTS;
- } else if (of_property_read_u32(pdev->dev.of_node, "dma-requests",
- &od->dma_requests)) {
+ /* Number of DMA requests */
+ od->dma_requests = OMAP_SDMA_REQUESTS;
+ if (pdev->dev.of_node && of_property_read_u32(pdev->dev.of_node,
+ "dma-requests",
+ &od->dma_requests)) {
dev_info(&pdev->dev,
"Missing dma-requests property, using %u.\n",
OMAP_SDMA_REQUESTS);
- od->dma_requests = OMAP_SDMA_REQUESTS;
}
- od->lch_map = devm_kcalloc(&pdev->dev, od->dma_requests,
- sizeof(*od->lch_map), GFP_KERNEL);
+ /* Number of available logical channels */
+ if (!pdev->dev.of_node) {
+ lch_count = od->plat->dma_attr->lch_count;
+ if (unlikely(!lch_count))
+ lch_count = OMAP_SDMA_CHANNELS;
+ } else if (of_property_read_u32(pdev->dev.of_node, "dma-channels",
+ &lch_count)) {
+ dev_info(&pdev->dev,
+ "Missing dma-channels property, using %u.\n",
+ OMAP_SDMA_CHANNELS);
+ lch_count = OMAP_SDMA_CHANNELS;
+ }
+
+ od->lch_map = devm_kcalloc(&pdev->dev, lch_count, sizeof(*od->lch_map),
+ GFP_KERNEL);
if (!od->lch_map)
return -ENOMEM;
/* for cyclic capability */
bool cyclic;
+
+ /* for runtime pm tracking */
+ bool active;
};
struct pl330_dmac {
_stop(pch->thread);
spin_unlock(&pch->thread->dmac->lock);
power_down = true;
+ pch->active = false;
} else {
/* Make sure the PL330 Channel thread is active */
spin_lock(&pch->thread->dmac->lock);
desc->status = PREP;
list_move_tail(&desc->node, &pch->work_list);
if (power_down) {
+ pch->active = true;
spin_lock(&pch->thread->dmac->lock);
_start(pch->thread);
spin_unlock(&pch->thread->dmac->lock);
unsigned long flags;
struct pl330_dmac *pl330 = pch->dmac;
LIST_HEAD(list);
+ bool power_down = false;
pm_runtime_get_sync(pl330->ddma.dev);
spin_lock_irqsave(&pch->lock, flags);
pch->thread->req[0].desc = NULL;
pch->thread->req[1].desc = NULL;
pch->thread->req_running = -1;
+ power_down = pch->active;
+ pch->active = false;
/* Mark all desc done */
list_for_each_entry(desc, &pch->submitted_list, node) {
list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
spin_unlock_irqrestore(&pch->lock, flags);
pm_runtime_mark_last_busy(pl330->ddma.dev);
+ if (power_down)
+ pm_runtime_put_autosuspend(pl330->ddma.dev);
pm_runtime_put_autosuspend(pl330->ddma.dev);
return 0;
* updated on work_list emptiness status.
*/
WARN_ON(list_empty(&pch->submitted_list));
+ pch->active = true;
pm_runtime_get_sync(pch->dmac->ddma.dev);
}
list_splice_tail_init(&pch->submitted_list, &pch->work_list);
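Taken together, the new active flag balances the runtime-PM references across the three paths touched above. A sketch of the pairing, read from these hunks:
/*
 * issue_pending():  pch->active = true;  pm_runtime_get_sync(dev);
 * completion path:  pch->active = false; before the autosuspend power-down
 * terminate_all():  if the channel was still active, drop that extra
 *                   reference with pm_runtime_put_autosuspend(dev) in
 *                   addition to the one terminate_all itself took.
 */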
{
struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
struct rcar_dmac *dmac = to_rcar_dmac(chan->device);
+ struct rcar_dmac_chan_map *map = &rchan->map;
struct rcar_dmac_desc_page *page, *_page;
struct rcar_dmac_desc *desc;
LIST_HEAD(list);
free_page((unsigned long)page);
}
+ /* Remove slave mapping if present. */
+ if (map->slave.xfer_size) {
+ dma_unmap_resource(chan->device->dev, map->addr,
+ map->slave.xfer_size, map->dir, 0);
+ map->slave.xfer_size = 0;
+ }
+
pm_runtime_put(chan->device->dev);
}
struct virt_dma_desc *vdesc;
enum dma_status status;
unsigned long flags;
- u32 residue;
+ u32 residue = 0;
status = dma_cookie_status(c, cookie, state);
if ((status == DMA_COMPLETE) || (!state))
spin_lock_irqsave(&chan->vchan.lock, flags);
vdesc = vchan_find_desc(&chan->vchan, cookie);
- if (cookie == chan->desc->vdesc.tx.cookie) {
+ if (chan->desc && cookie == chan->desc->vdesc.tx.cookie)
residue = stm32_dma_desc_residue(chan, chan->desc,
chan->next_sg);
- } else if (vdesc) {
+ else if (vdesc)
residue = stm32_dma_desc_residue(chan,
to_stm32_dma_desc(vdesc), 0);
- } else {
- residue = 0;
- }
-
dma_set_residue(state, residue);
spin_unlock_irqrestore(&chan->vchan.lock, flags);
struct stm32_dma_chan *chan;
struct dma_chan *c;
- if (dma_spec->args_count < 3)
+ if (dma_spec->args_count < 4)
return NULL;
cfg.channel_id = dma_spec->args[0];
cfg.request_line = dma_spec->args[1];
cfg.stream_config = dma_spec->args[2];
- cfg.threshold = 0;
+ cfg.threshold = dma_spec->args[3];
if ((cfg.channel_id >= STM32_DMA_MAX_CHANNELS) || (cfg.request_line >=
STM32_DMA_MAX_REQUEST_ID))
return NULL;
- if (dma_spec->args_count > 3)
- cfg.threshold = dma_spec->args[3];
-
chan = &dmadev->chan[cfg.channel_id];
c = dma_get_slave_channel(&chan->vchan.chan);
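Before this change the fourth cell was optional and threshold defaulted to 0; now every slave specifier must carry four cells. Assuming the cell order used by this xlate callback, a specifier would look like:
/* dmas = <&dma channel_id request_line stream_config threshold>; */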
match = of_match_node(ti_am335x_master_match, dma_node);
if (!match) {
dev_err(&pdev->dev, "DMA master is not supported\n");
+ of_node_put(dma_node);
return -EINVAL;
}
match = of_match_node(ti_dra7_master_match, dma_node);
if (!match) {
dev_err(&pdev->dev, "DMA master is not supported\n");
+ of_node_put(dma_node);
return -EINVAL;
}
}
/* allocate memory for new EFI memmap */
- new_memmap_phy = memblock_alloc(efi.memmap.desc_size * new_nr_map,
- PAGE_SIZE);
+ new_memmap_phy = efi_memmap_alloc(new_nr_map);
if (!new_memmap_phy)
return;
unsigned long get_dram_base(efi_system_table_t *sys_table_arg);
-efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
- unsigned long orig_fdt_size,
- void *fdt, int new_fdt_size, char *cmdline_ptr,
- u64 initrd_addr, u64 initrd_size,
- efi_memory_desc_t *memory_map,
- unsigned long map_size, unsigned long desc_size,
- u32 desc_ver);
-
efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
void *handle,
unsigned long *new_fdt_addr,
#include "efistub.h"
-efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
- unsigned long orig_fdt_size,
- void *fdt, int new_fdt_size, char *cmdline_ptr,
- u64 initrd_addr, u64 initrd_size,
- efi_memory_desc_t *memory_map,
- unsigned long map_size, unsigned long desc_size,
- u32 desc_ver)
+static efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
+ unsigned long orig_fdt_size,
+ void *fdt, int new_fdt_size, char *cmdline_ptr,
+ u64 initrd_addr, u64 initrd_size)
{
int node, num_rsv;
int status;
if (status)
goto fdt_set_fail;
- fdt_val64 = cpu_to_fdt64((u64)(unsigned long)memory_map);
+ fdt_val64 = U64_MAX; /* placeholder */
status = fdt_setprop(fdt, node, "linux,uefi-mmap-start",
&fdt_val64, sizeof(fdt_val64));
if (status)
goto fdt_set_fail;
- fdt_val32 = cpu_to_fdt32(map_size);
+ fdt_val32 = U32_MAX; /* placeholder */
status = fdt_setprop(fdt, node, "linux,uefi-mmap-size",
&fdt_val32, sizeof(fdt_val32));
if (status)
goto fdt_set_fail;
- fdt_val32 = cpu_to_fdt32(desc_size);
status = fdt_setprop(fdt, node, "linux,uefi-mmap-desc-size",
&fdt_val32, sizeof(fdt_val32));
if (status)
goto fdt_set_fail;
- fdt_val32 = cpu_to_fdt32(desc_ver);
status = fdt_setprop(fdt, node, "linux,uefi-mmap-desc-ver",
&fdt_val32, sizeof(fdt_val32));
if (status)
return EFI_LOAD_ERROR;
}
+static efi_status_t update_fdt_memmap(void *fdt, struct efi_boot_memmap *map)
+{
+ int node = fdt_path_offset(fdt, "/chosen");
+ u64 fdt_val64;
+ u32 fdt_val32;
+ int err;
+
+ if (node < 0)
+ return EFI_LOAD_ERROR;
+
+ fdt_val64 = cpu_to_fdt64((unsigned long)*map->map);
+ err = fdt_setprop_inplace(fdt, node, "linux,uefi-mmap-start",
+ &fdt_val64, sizeof(fdt_val64));
+ if (err)
+ return EFI_LOAD_ERROR;
+
+ fdt_val32 = cpu_to_fdt32(*map->map_size);
+ err = fdt_setprop_inplace(fdt, node, "linux,uefi-mmap-size",
+ &fdt_val32, sizeof(fdt_val32));
+ if (err)
+ return EFI_LOAD_ERROR;
+
+ fdt_val32 = cpu_to_fdt32(*map->desc_size);
+ err = fdt_setprop_inplace(fdt, node, "linux,uefi-mmap-desc-size",
+ &fdt_val32, sizeof(fdt_val32));
+ if (err)
+ return EFI_LOAD_ERROR;
+
+ fdt_val32 = cpu_to_fdt32(*map->desc_ver);
+ err = fdt_setprop_inplace(fdt, node, "linux,uefi-mmap-desc-ver",
+ &fdt_val32, sizeof(fdt_val32));
+ if (err)
+ return EFI_LOAD_ERROR;
+
+ return EFI_SUCCESS;
+}
+
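The split between update_fdt() and update_fdt_memmap() exists because the final memory map is only known after ExitBootServices(), when allocation is no longer possible; the properties are therefore created with placeholder values and patched in place afterwards. The resulting call order, simplified from the hunks in this section:
/*
 * update_fdt(...)              -> creates the mmap properties as placeholders
 * efi_exit_boot_services(...)  -> finalizes the memory map
 * update_fdt_memmap(fdt, &map) -> fdt_setprop_inplace() fixes them up
 */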
#ifndef EFI_FDT_ALIGN
#define EFI_FDT_ALIGN EFI_PAGE_SIZE
#endif
goto fail;
}
- /*
- * Now that we have done our final memory allocation (and free)
- * we can get the memory map key needed for
- * exit_boot_services().
- */
- status = efi_get_memory_map(sys_table, &map);
- if (status != EFI_SUCCESS)
- goto fail_free_new_fdt;
-
status = update_fdt(sys_table,
(void *)fdt_addr, fdt_size,
(void *)*new_fdt_addr, new_fdt_size,
- cmdline_ptr, initrd_addr, initrd_size,
- memory_map, map_size, desc_size, desc_ver);
+ cmdline_ptr, initrd_addr, initrd_size);
/* Succeeding the first time is the expected case. */
if (status == EFI_SUCCESS)
/*
* We need to allocate more space for the new
* device tree, so free existing buffer that is
- * too small. Also free memory map, as we will need
- * to get new one that reflects the free/alloc we do
- * on the device tree buffer.
+ * too small.
*/
efi_free(sys_table, new_fdt_size, *new_fdt_addr);
- sys_table->boottime->free_pool(memory_map);
new_fdt_size += EFI_PAGE_SIZE;
} else {
pr_efi_err(sys_table, "Unable to construct new device tree.\n");
- goto fail_free_mmap;
+ goto fail_free_new_fdt;
}
}
- sys_table->boottime->free_pool(memory_map);
priv.runtime_map = runtime_map;
priv.runtime_entry_count = &runtime_entry_count;
status = efi_exit_boot_services(sys_table, handle, &map, &priv,
if (status == EFI_SUCCESS) {
efi_set_virtual_address_map_t *svam;
+ status = update_fdt_memmap((void *)*new_fdt_addr, &map);
+ if (status != EFI_SUCCESS) {
+ /*
+ * The kernel won't get far without the memory map, but
+ * may still be able to print something meaningful, so
+ * return success here.
+ */
+ return EFI_SUCCESS;
+ }
+
/* Install the new virtual address map */
svam = sys_table->runtime->set_virtual_address_map;
status = svam(runtime_entry_count * desc_size, desc_size,
pr_efi_err(sys_table, "Exit boot services failed.\n");
-fail_free_mmap:
- sys_table->boottime->free_pool(memory_map);
-
fail_free_new_fdt:
efi_free(sys_table, new_fdt_size, *new_fdt_addr);
#include <linux/efi.h>
#include <linux/io.h>
#include <asm/early_ioremap.h>
+#include <linux/memblock.h>
+#include <linux/slab.h>
+
+static phys_addr_t __init __efi_memmap_alloc_early(unsigned long size)
+{
+ return memblock_alloc(size, 0);
+}
+
+static phys_addr_t __init __efi_memmap_alloc_late(unsigned long size)
+{
+ /* get_order() rounds up, so slightly more than 'size' may be allocated */
+ unsigned int order = get_order(size);
+ struct page *p = alloc_pages(GFP_KERNEL, order);
+
+ if (!p)
+ return 0;
+
+ return PFN_PHYS(page_to_pfn(p));
+}
+
+/**
+ * efi_memmap_alloc - Allocate memory for the EFI memory map
+ * @num_entries: Number of entries in the allocated map.
+ *
+ * Depending on whether mm_init() has already been invoked or not,
+ * either memblock or "normal" page allocation is used.
+ *
+ * Returns the physical address of the allocated memory map on
+ * success, zero on failure.
+ */
+phys_addr_t __init efi_memmap_alloc(unsigned int num_entries)
+{
+ unsigned long size = num_entries * efi.memmap.desc_size;
+
+ if (slab_is_available())
+ return __efi_memmap_alloc_late(size);
+
+ return __efi_memmap_alloc_early(size);
+}
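Usage mirrors the call site patched at the start of the EFI hunks above; a zero return signals failure from either allocator:
phys_addr_t new_memmap_phy = efi_memmap_alloc(new_nr_map);

if (!new_memmap_phy)
	return;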
/**
* __efi_memmap_init - Common code for mapping the EFI memory map
return 0;
}
-static int __init mxs_gpio_init_gc(struct mxs_gpio_port *port, int irq_base)
+static int mxs_gpio_init_gc(struct mxs_gpio_port *port, int irq_base)
{
struct irq_chip_generic *gc;
struct irq_chip_type *ct;
/* FIXME: should the legacy sysfs handling be moved to gpio_device? */
gpiochip_sysfs_unregister(gdev);
+ gpiochip_free_hogs(chip);
/* Numb the device, cancelling all outstanding operations */
gdev->chip = NULL;
gpiochip_irqchip_remove(chip);
acpi_gpiochip_remove(chip);
gpiochip_remove_pin_ranges(chip);
- gpiochip_free_hogs(chip);
of_gpiochip_remove(chip);
/*
* We accept no more calls into the driver from this point, so
else if (type == CGS_UCODE_ID_SMU_SK)
strcpy(fw_name, "amdgpu/polaris10_smc_sk.bin");
break;
+ case CHIP_POLARIS12:
+ strcpy(fw_name, "amdgpu/polaris12_smc.bin");
+ break;
default:
DRM_ERROR("SMC firmware not supported\n");
return -EINVAL;
"STONEY",
"POLARIS10",
"POLARIS11",
+ "POLARIS12",
"LAST",
};
case CHIP_FIJI:
case CHIP_POLARIS11:
case CHIP_POLARIS10:
+ case CHIP_POLARIS12:
case CHIP_CARRIZO:
case CHIP_STONEY:
if (adev->asic_type == CHIP_CARRIZO || adev->asic_type == CHIP_STONEY)
{0x1002, 0x67CA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
{0x1002, 0x67CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
{0x1002, 0x67CF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ /* Polaris12 */
+ {0x1002, 0x6980, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x6981, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x6985, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x6986, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x6987, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x699F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
{0, 0, 0}
};
switch (adev->asic_type) {
case CHIP_POLARIS11:
case CHIP_POLARIS10:
+ case CHIP_POLARIS12:
case CHIP_TONGA:
case CHIP_FIJI:
case CHIP_TOPAZ:
#define FIRMWARE_STONEY "amdgpu/stoney_uvd.bin"
#define FIRMWARE_POLARIS10 "amdgpu/polaris10_uvd.bin"
#define FIRMWARE_POLARIS11 "amdgpu/polaris11_uvd.bin"
+#define FIRMWARE_POLARIS12 "amdgpu/polaris12_uvd.bin"
/**
* amdgpu_uvd_cs_ctx - Command submission parser context
MODULE_FIRMWARE(FIRMWARE_STONEY);
MODULE_FIRMWARE(FIRMWARE_POLARIS10);
MODULE_FIRMWARE(FIRMWARE_POLARIS11);
+MODULE_FIRMWARE(FIRMWARE_POLARIS12);
static void amdgpu_uvd_idle_work_handler(struct work_struct *work);
case CHIP_POLARIS11:
fw_name = FIRMWARE_POLARIS11;
break;
+ case CHIP_POLARIS12:
+ fw_name = FIRMWARE_POLARIS12;
+ break;
default:
return -EINVAL;
}
#define FIRMWARE_STONEY "amdgpu/stoney_vce.bin"
#define FIRMWARE_POLARIS10 "amdgpu/polaris10_vce.bin"
#define FIRMWARE_POLARIS11 "amdgpu/polaris11_vce.bin"
+#define FIRMWARE_POLARIS12 "amdgpu/polaris12_vce.bin"
#ifdef CONFIG_DRM_AMDGPU_CIK
MODULE_FIRMWARE(FIRMWARE_BONAIRE);
MODULE_FIRMWARE(FIRMWARE_STONEY);
MODULE_FIRMWARE(FIRMWARE_POLARIS10);
MODULE_FIRMWARE(FIRMWARE_POLARIS11);
+MODULE_FIRMWARE(FIRMWARE_POLARIS12);
static void amdgpu_vce_idle_work_handler(struct work_struct *work);
case CHIP_POLARIS11:
fw_name = FIRMWARE_POLARIS11;
break;
+ case CHIP_POLARIS12:
+ fw_name = FIRMWARE_POLARIS12;
+ break;
default:
return -EINVAL;
(const u32)ARRAY_SIZE(stoney_golden_settings_a11));
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
amdgpu_program_register_sequence(adev,
polaris11_golden_settings_a11,
(const u32)ARRAY_SIZE(polaris11_golden_settings_a11));
num_crtc = 6;
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
num_crtc = 5;
break;
default:
adev->mode_info.audio.num_pins = 8;
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
adev->mode_info.audio.num_pins = 6;
break;
default:
int pll;
if ((adev->asic_type == CHIP_POLARIS10) ||
- (adev->asic_type == CHIP_POLARIS11)) {
+ (adev->asic_type == CHIP_POLARIS11) ||
+ (adev->asic_type == CHIP_POLARIS12)) {
struct amdgpu_encoder *amdgpu_encoder =
to_amdgpu_encoder(amdgpu_crtc->encoder);
struct amdgpu_encoder_atom_dig *dig = amdgpu_encoder->enc_priv;
return -EINVAL;
if ((adev->asic_type == CHIP_POLARIS10) ||
- (adev->asic_type == CHIP_POLARIS11)) {
+ (adev->asic_type == CHIP_POLARIS11) ||
+ (adev->asic_type == CHIP_POLARIS12)) {
struct amdgpu_encoder *amdgpu_encoder =
to_amdgpu_encoder(amdgpu_crtc->encoder);
int encoder_mode =
adev->mode_info.num_dig = 6;
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
adev->mode_info.num_hpd = 5;
adev->mode_info.num_dig = 5;
break;
amdgpu_atombios_crtc_powergate_init(adev);
amdgpu_atombios_encoder_init_dig(adev);
if ((adev->asic_type == CHIP_POLARIS10) ||
- (adev->asic_type == CHIP_POLARIS11)) {
+ (adev->asic_type == CHIP_POLARIS11) ||
+ (adev->asic_type == CHIP_POLARIS12)) {
amdgpu_atombios_crtc_set_dce_clock(adev, adev->clock.default_dispclk,
DCE_CLOCK_TYPE_DISPCLK, ATOM_GCK_DFS);
amdgpu_atombios_crtc_set_dce_clock(adev, 0,
MODULE_FIRMWARE("amdgpu/polaris10_mec2.bin");
MODULE_FIRMWARE("amdgpu/polaris10_rlc.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_ce.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_pfp.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_me.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_mec.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_mec2.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_rlc.bin");
+
static const struct amdgpu_gds_reg_offset amdgpu_gds_reg_offset[] =
{
{mmGDS_VMID0_BASE, mmGDS_VMID0_SIZE, mmGDS_GWS_VMID0, mmGDS_OA_VMID0},
(const u32)ARRAY_SIZE(tonga_golden_common_all));
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
amdgpu_program_register_sequence(adev,
golden_settings_polaris11_a11,
(const u32)ARRAY_SIZE(golden_settings_polaris11_a11));
case CHIP_POLARIS10:
chip_name = "polaris10";
break;
+ case CHIP_POLARIS12:
+ chip_name = "polaris12";
+ break;
case CHIP_STONEY:
chip_name = "stoney";
break;
gb_addr_config = TONGA_GB_ADDR_CONFIG_GOLDEN;
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
ret = amdgpu_atombios_get_gfx_info(adev);
if (ret)
return ret;
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
modearray[0] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
PIPE_CONFIG(ADDR_SURF_P4_16x16) |
TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
*rconf1 |= 0x0;
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
*rconf |= RB_MAP_PKR0(2) | RB_XSEL2(1) | SE_MAP(2) |
SE_XSEL(1) | SE_YSEL(1);
*rconf1 |= 0x0;
cz_enable_cp_power_gating(adev, true);
else
cz_enable_cp_power_gating(adev, false);
- } else if (adev->asic_type == CHIP_POLARIS11) {
+ } else if ((adev->asic_type == CHIP_POLARIS11) ||
+ (adev->asic_type == CHIP_POLARIS12)) {
gfx_v8_0_init_csb(adev);
gfx_v8_0_init_save_restore_list(adev);
gfx_v8_0_enable_save_restore_machine(adev);
RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK);
WREG32(mmRLC_CGCG_CGLS_CTRL, tmp);
if (adev->asic_type == CHIP_POLARIS11 ||
- adev->asic_type == CHIP_POLARIS10) {
+ adev->asic_type == CHIP_POLARIS10 ||
+ adev->asic_type == CHIP_POLARIS12) {
tmp = RREG32(mmRLC_CGCG_CGLS_CTRL_3D);
tmp &= ~0x3;
WREG32(mmRLC_CGCG_CGLS_CTRL_3D, tmp);
amdgpu_ring_write(ring, 0x0000002A);
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
amdgpu_ring_write(ring, 0x16000012);
amdgpu_ring_write(ring, 0x00000000);
break;
(adev->asic_type == CHIP_FIJI) ||
(adev->asic_type == CHIP_STONEY) ||
(adev->asic_type == CHIP_POLARIS11) ||
- (adev->asic_type == CHIP_POLARIS10)) {
+ (adev->asic_type == CHIP_POLARIS10) ||
+ (adev->asic_type == CHIP_POLARIS12)) {
WREG32(mmCP_MEC_DOORBELL_RANGE_LOWER,
AMDGPU_DOORBELL_KIQ << 2);
WREG32(mmCP_MEC_DOORBELL_RANGE_UPPER,
mqd->cp_hqd_persistent_state = tmp;
if (adev->asic_type == CHIP_STONEY ||
adev->asic_type == CHIP_POLARIS11 ||
- adev->asic_type == CHIP_POLARIS10) {
+ adev->asic_type == CHIP_POLARIS10 ||
+ adev->asic_type == CHIP_POLARIS12) {
tmp = RREG32(mmCP_ME1_PIPE3_INT_CNTL);
tmp = REG_SET_FIELD(tmp, CP_ME1_PIPE3_INT_CNTL, GENERIC2_INT_ENABLE, 1);
WREG32(mmCP_ME1_PIPE3_INT_CNTL, tmp);
static void gfx_v8_0_enable_gfx_static_mg_power_gating(struct amdgpu_device *adev,
bool enable)
{
- if (adev->asic_type == CHIP_POLARIS11)
+ if ((adev->asic_type == CHIP_POLARIS11) ||
+ (adev->asic_type == CHIP_POLARIS12))
/* Send msg to SMU via Powerplay */
amdgpu_set_powergating_state(adev,
AMD_IP_BLOCK_TYPE_SMC,
gfx_v8_0_enable_gfx_dynamic_mg_power_gating(adev, false);
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
if ((adev->pg_flags & AMD_PG_SUPPORT_GFX_SMG) && enable)
gfx_v8_0_enable_gfx_static_mg_power_gating(adev, true);
else
MODULE_FIRMWARE("amdgpu/tonga_mc.bin");
MODULE_FIRMWARE("amdgpu/polaris11_mc.bin");
MODULE_FIRMWARE("amdgpu/polaris10_mc.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_mc.bin");
static const u32 golden_settings_tonga_a11[] =
{
(const u32)ARRAY_SIZE(golden_settings_tonga_a11));
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
amdgpu_program_register_sequence(adev,
golden_settings_polaris11_a11,
(const u32)ARRAY_SIZE(golden_settings_polaris11_a11));
case CHIP_POLARIS10:
chip_name = "polaris10";
break;
+ case CHIP_POLARIS12:
+ chip_name = "polaris12";
+ break;
case CHIP_FIJI:
case CHIP_CARRIZO:
case CHIP_STONEY:
MODULE_FIRMWARE("amdgpu/polaris10_sdma1.bin");
MODULE_FIRMWARE("amdgpu/polaris11_sdma.bin");
MODULE_FIRMWARE("amdgpu/polaris11_sdma1.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_sdma.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_sdma1.bin");
static const u32 sdma_offsets[SDMA_MAX_INSTANCE] =
(const u32)ARRAY_SIZE(golden_settings_tonga_a11));
break;
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
amdgpu_program_register_sequence(adev,
golden_settings_polaris11_a11,
(const u32)ARRAY_SIZE(golden_settings_polaris11_a11));
case CHIP_POLARIS10:
chip_name = "polaris10";
break;
+ case CHIP_POLARIS12:
+ chip_name = "polaris12";
+ break;
case CHIP_CARRIZO:
chip_name = "carrizo";
break;
#define BIOS_SCRATCH_4 0x5cd
MODULE_FIRMWARE("radeon/tahiti_smc.bin");
-MODULE_FIRMWARE("radeon/tahiti_k_smc.bin");
MODULE_FIRMWARE("radeon/pitcairn_smc.bin");
MODULE_FIRMWARE("radeon/pitcairn_k_smc.bin");
MODULE_FIRMWARE("radeon/verde_smc.bin");
(adev->pdev->device == 0x6817) ||
(adev->pdev->device == 0x6806))
max_mclk = 120000;
- } else if (adev->asic_type == CHIP_VERDE) {
- if ((adev->pdev->revision == 0x81) ||
- (adev->pdev->revision == 0x83) ||
- (adev->pdev->revision == 0x87) ||
- (adev->pdev->device == 0x6820) ||
- (adev->pdev->device == 0x6821) ||
- (adev->pdev->device == 0x6822) ||
- (adev->pdev->device == 0x6823) ||
- (adev->pdev->device == 0x682A) ||
- (adev->pdev->device == 0x682B)) {
- max_sclk = 75000;
- max_mclk = 80000;
- }
} else if (adev->asic_type == CHIP_OLAND) {
if ((adev->pdev->revision == 0xC7) ||
(adev->pdev->revision == 0x80) ||
chip_name = "tahiti";
break;
case CHIP_PITCAIRN:
- if ((adev->pdev->revision == 0x81) ||
- (adev->pdev->device == 0x6810) ||
- (adev->pdev->device == 0x6811) ||
- (adev->pdev->device == 0x6816) ||
- (adev->pdev->device == 0x6817) ||
- (adev->pdev->device == 0x6806))
+ if ((adev->pdev->revision == 0x81) &&
+ ((adev->pdev->device == 0x6810) ||
+ (adev->pdev->device == 0x6811)))
chip_name = "pitcairn_k";
else
chip_name = "pitcairn";
break;
case CHIP_VERDE:
- if ((adev->pdev->revision == 0x81) ||
- (adev->pdev->revision == 0x83) ||
- (adev->pdev->revision == 0x87) ||
- (adev->pdev->device == 0x6820) ||
- (adev->pdev->device == 0x6821) ||
- (adev->pdev->device == 0x6822) ||
- (adev->pdev->device == 0x6823) ||
- (adev->pdev->device == 0x682A) ||
- (adev->pdev->device == 0x682B))
+ if (((adev->pdev->device == 0x6820) &&
+ ((adev->pdev->revision == 0x81) ||
+ (adev->pdev->revision == 0x83))) ||
+ ((adev->pdev->device == 0x6821) &&
+ ((adev->pdev->revision == 0x83) ||
+ (adev->pdev->revision == 0x87))) ||
+ ((adev->pdev->revision == 0x87) &&
+ ((adev->pdev->device == 0x6823) ||
+ (adev->pdev->device == 0x682b))))
chip_name = "verde_k";
else
chip_name = "verde";
break;
case CHIP_OLAND:
- if ((adev->pdev->revision == 0xC7) ||
- (adev->pdev->revision == 0x80) ||
- (adev->pdev->revision == 0x81) ||
- (adev->pdev->revision == 0x83) ||
- (adev->pdev->revision == 0x87) ||
- (adev->pdev->device == 0x6604) ||
- (adev->pdev->device == 0x6605))
+ if (((adev->pdev->revision == 0x81) &&
+ ((adev->pdev->device == 0x6600) ||
+ (adev->pdev->device == 0x6604) ||
+ (adev->pdev->device == 0x6605) ||
+ (adev->pdev->device == 0x6610))) ||
+ ((adev->pdev->revision == 0x83) &&
+ (adev->pdev->device == 0x6610)))
chip_name = "oland_k";
else
chip_name = "oland";
break;
case CHIP_HAINAN:
- if ((adev->pdev->revision == 0x81) ||
- (adev->pdev->revision == 0x83) ||
- (adev->pdev->revision == 0xC3) ||
- (adev->pdev->device == 0x6664) ||
- (adev->pdev->device == 0x6665) ||
- (adev->pdev->device == 0x6667))
+ if (((adev->pdev->revision == 0x81) &&
+ (adev->pdev->device == 0x6660)) ||
+ ((adev->pdev->revision == 0x83) &&
+ ((adev->pdev->device == 0x6660) ||
+ (adev->pdev->device == 0x6663) ||
+ (adev->pdev->device == 0x6665) ||
+ (adev->pdev->device == 0x6667))) ||
+ ((adev->pdev->revision == 0xc3) &&
+ (adev->pdev->device == 0x6665)))
chip_name = "hainan_k";
else
chip_name = "hainan";
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
- static int curstate = -1;
if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG))
return 0;
- if (curstate == state)
- return 0;
-
- curstate = state;
if (enable) {
/* wait for STATUS to clear */
if (uvd_v5_0_wait_for_idle(handle))
{
u32 tmp;
- /* Fiji, Stoney, Polaris10, Polaris11 are single pipe */
+ /* Fiji, Stoney, Polaris10, Polaris11, Polaris12 are single pipe */
if ((adev->asic_type == CHIP_FIJI) ||
(adev->asic_type == CHIP_STONEY) ||
(adev->asic_type == CHIP_POLARIS10) ||
- (adev->asic_type == CHIP_POLARIS11))
+ (adev->asic_type == CHIP_POLARIS11) ||
+ (adev->asic_type == CHIP_POLARIS12))
return AMDGPU_VCE_HARVEST_VCE1;
/* Tonga and CZ are dual or single pipe */
MODULE_FIRMWARE("amdgpu/polaris10_smc_sk.bin");
MODULE_FIRMWARE("amdgpu/polaris11_smc.bin");
MODULE_FIRMWARE("amdgpu/polaris11_smc_sk.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_smc.bin");
/*
* Indirect registers accessor
break;
case CHIP_POLARIS11:
case CHIP_POLARIS10:
+ case CHIP_POLARIS12:
default:
break;
}
case CHIP_TONGA:
case CHIP_POLARIS11:
case CHIP_POLARIS10:
+ case CHIP_POLARIS12:
case CHIP_CARRIZO:
case CHIP_STONEY:
asic_register_table = cz_allowed_read_registers;
adev->pg_flags = 0;
adev->external_rev_id = adev->rev_id + 0x50;
break;
+ case CHIP_POLARIS12:
+ adev->cg_flags = AMD_CG_SUPPORT_UVD_MGCG;
+ adev->pg_flags = 0;
+ adev->external_rev_id = adev->rev_id + 0x64;
+ break;
case CHIP_CARRIZO:
adev->cg_flags = AMD_CG_SUPPORT_UVD_MGCG |
AMD_CG_SUPPORT_GFX_MGCG |
case CHIP_TONGA:
case CHIP_POLARIS10:
case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
vi_common_set_clockgating_state_by_smu(adev, state);
default:
break;
break;
case CHIP_POLARIS11:
case CHIP_POLARIS10:
+ case CHIP_POLARIS12:
amdgpu_ip_block_add(adev, &vi_common_ip_block);
amdgpu_ip_block_add(adev, &gmc_v8_1_ip_block);
amdgpu_ip_block_add(adev, &tonga_ih_ip_block);
#ifndef __AMD_SHARED_H__
#define __AMD_SHARED_H__
-#define AMD_MAX_USEC_TIMEOUT 100000 /* 100 ms */
+#define AMD_MAX_USEC_TIMEOUT 200000 /* 200 ms */
/*
* Supported ASIC types
CHIP_STONEY,
CHIP_POLARIS10,
CHIP_POLARIS11,
+ CHIP_POLARIS12,
CHIP_LAST,
};
break;
case CHIP_POLARIS11:
case CHIP_POLARIS10:
+ case CHIP_POLARIS12:
polaris_set_asic_special_caps(hwmgr);
hwmgr->feature_mask &= ~(PP_UVD_HANDSHAKE_MASK);
break;
phm_cap_set(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_TablelessHardwareInterface);
- if (hwmgr->chip_id == CHIP_POLARIS11)
+ if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id == CHIP_POLARIS12))
phm_cap_set(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_SPLLShutdownSupport);
return 0;
PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
result = smu7_program_pt_config_registers(hwmgr, DIDTConfig_Polaris10);
PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
- } else if (hwmgr->chip_id == CHIP_POLARIS11) {
+ } else if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id == CHIP_POLARIS12)) {
result = smu7_program_pt_config_registers(hwmgr, GCCACConfig_Polaris11);
PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
result = smu7_program_pt_config_registers(hwmgr, DIDTConfig_Polaris11);
break;
case CHIP_POLARIS11:
case CHIP_POLARIS10:
+ case CHIP_POLARIS12:
polaris10_smum_init(smumgr);
break;
default:
if (!nonblock) {
ret = drm_atomic_helper_wait_for_fences(dev, state, true);
- if (ret)
+ if (ret) {
+ drm_atomic_helper_cleanup_planes(dev, state);
return ret;
+ }
}
/*
struct drm_crtc_state *crtc_state;
struct drm_rect clip = { 0, };
+ if (!state->crtc)
+ return 0;
+
crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc);
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
* - TV Panel encoding via ENCT
*/
+/* HHI Registers */
+#define HHI_VDAC_CNTL0 0x2F4 /* 0xbd offset in data sheet */
+#define HHI_VDAC_CNTL1 0x2F8 /* 0xbe offset in data sheet */
+#define HHI_HDMI_PHY_CNTL0 0x3a0 /* 0xe8 offset in data sheet */
+
struct meson_cvbs_enci_mode meson_cvbs_enci_pal = {
.mode_tag = MESON_VENC_MODE_CVBS_PAL,
.hso_begin = 3,
void meson_venc_init(struct meson_drm *priv)
{
+ /* Disable CVBS VDAC */
+ regmap_write(priv->hhi, HHI_VDAC_CNTL0, 0);
+ regmap_write(priv->hhi, HHI_VDAC_CNTL1, 8);
+
+ /* Power down the DACs */
+ writel_relaxed(0xff, priv->io_base + _REG(VENC_VDAC_SETTING));
+
+ /* Disable HDMI PHY */
+ regmap_write(priv->hhi, HHI_HDMI_PHY_CNTL0, 0);
+
+ /* Disable HDMI */
+ writel_bits_relaxed(0x3, 0,
+ priv->io_base + _REG(VPU_HDMI_SETTING));
+
/* Disable all encoders */
writel_relaxed(0, priv->io_base + _REG(ENCI_VIDEO_EN));
writel_relaxed(0, priv->io_base + _REG(ENCP_VIDEO_EN));
/* Disable CVBS VDAC */
regmap_write(priv->hhi, HHI_VDAC_CNTL0, 0);
- regmap_write(priv->hhi, HHI_VDAC_CNTL1, 0);
+ regmap_write(priv->hhi, HHI_VDAC_CNTL1, 8);
}
static void meson_venc_cvbs_encoder_enable(struct drm_encoder *encoder)
void adreno_flush(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
- uint32_t wptr = get_wptr(gpu->rb);
+ uint32_t wptr;
+
+ /*
+ * Mask the wptr value that we calculate to fit in the HW range. This is
+ * to account for the possibility that the last command fit exactly into
+ * the ringbuffer and rb->next hasn't wrapped to zero yet.
+ */
+ wptr = get_wptr(gpu->rb) & ((gpu->rb->size / 4) - 1);
/* ensure writes to ringbuffer have hit system memory: */
mb();
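Masking is a cheap modulo only when the ring size in dwords is a power of two, which is exactly what the is_power_of_2() check added to msm_ringbuffer_new() later in this section enforces. A minimal sketch:
/* Valid only when ring_dwords is a power of two. */
static inline uint32_t ring_wrap(uint32_t wptr, uint32_t ring_dwords)
{
	return wptr & (ring_dwords - 1);	/* equivalent to wptr % ring_dwords */
}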
pagefault_disable();
}
- if (submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) {
+ if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) ||
+ !(submit_bo.flags & MSM_SUBMIT_BO_FLAGS)) {
DRM_ERROR("invalid flags: %x\n", submit_bo.flags);
ret = -EINVAL;
goto out_unlock;
{
uint32_t i, last_offset = 0;
uint32_t *ptr;
- int ret;
+ int ret = 0;
if (offset % 4) {
DRM_ERROR("non-aligned cmdstream buffer: %u\n", offset);
ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc));
if (ret)
- return -EFAULT;
+ goto out;
if (submit_reloc.submit_offset % 4) {
DRM_ERROR("non-aligned reloc offset: %u\n",
submit_reloc.submit_offset);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
}
/* offset in dwords: */
if ((off >= (obj->base.size / 4)) ||
(off < last_offset)) {
DRM_ERROR("invalid offset %u at reloc %u\n", off, i);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
}
ret = submit_bo(submit, submit_reloc.reloc_idx, NULL, &iova, &valid);
if (ret)
- return ret;
+ goto out;
if (valid)
continue;
last_offset = off;
}
+out:
msm_gem_put_vaddr_locked(&obj->base);
- return 0;
+ return ret;
}
static void submit_cleanup(struct msm_gem_submit *submit)
struct msm_ringbuffer *ring;
int ret;
- size = ALIGN(size, 4); /* size should be dword aligned */
+ if (WARN_ON(!is_power_of_2(size)))
+ return ERR_PTR(-EINVAL);
ring = kzalloc(sizeof(*ring), GFP_KERNEL);
if (!ring) {
MODULE_FIRMWARE("radeon/tahiti_mc.bin");
MODULE_FIRMWARE("radeon/tahiti_rlc.bin");
MODULE_FIRMWARE("radeon/tahiti_smc.bin");
-MODULE_FIRMWARE("radeon/tahiti_k_smc.bin");
MODULE_FIRMWARE("radeon/PITCAIRN_pfp.bin");
MODULE_FIRMWARE("radeon/PITCAIRN_me.bin");
switch (rdev->family) {
case CHIP_TAHITI:
chip_name = "TAHITI";
- /* XXX: figure out which Tahitis need the new ucode */
- if (0)
- new_smc = true;
new_chip_name = "tahiti";
pfp_req_size = SI_PFP_UCODE_SIZE * 4;
me_req_size = SI_PM4_UCODE_SIZE * 4;
break;
case CHIP_PITCAIRN:
chip_name = "PITCAIRN";
- if ((rdev->pdev->revision == 0x81) ||
- (rdev->pdev->device == 0x6810) ||
- (rdev->pdev->device == 0x6811) ||
- (rdev->pdev->device == 0x6816) ||
- (rdev->pdev->device == 0x6817) ||
- (rdev->pdev->device == 0x6806))
+ if ((rdev->pdev->revision == 0x81) &&
+ ((rdev->pdev->device == 0x6810) ||
+ (rdev->pdev->device == 0x6811)))
new_smc = true;
new_chip_name = "pitcairn";
pfp_req_size = SI_PFP_UCODE_SIZE * 4;
break;
case CHIP_VERDE:
chip_name = "VERDE";
- if ((rdev->pdev->revision == 0x81) ||
- (rdev->pdev->revision == 0x83) ||
- (rdev->pdev->revision == 0x87) ||
- (rdev->pdev->device == 0x6820) ||
- (rdev->pdev->device == 0x6821) ||
- (rdev->pdev->device == 0x6822) ||
- (rdev->pdev->device == 0x6823) ||
- (rdev->pdev->device == 0x682A) ||
- (rdev->pdev->device == 0x682B))
+ if (((rdev->pdev->device == 0x6820) &&
+ ((rdev->pdev->revision == 0x81) ||
+ (rdev->pdev->revision == 0x83))) ||
+ ((rdev->pdev->device == 0x6821) &&
+ ((rdev->pdev->revision == 0x83) ||
+ (rdev->pdev->revision == 0x87))) ||
+ ((rdev->pdev->revision == 0x87) &&
+ ((rdev->pdev->device == 0x6823) ||
+ (rdev->pdev->device == 0x682b))))
new_smc = true;
new_chip_name = "verde";
pfp_req_size = SI_PFP_UCODE_SIZE * 4;
break;
case CHIP_OLAND:
chip_name = "OLAND";
- if ((rdev->pdev->revision == 0xC7) ||
- (rdev->pdev->revision == 0x80) ||
- (rdev->pdev->revision == 0x81) ||
- (rdev->pdev->revision == 0x83) ||
- (rdev->pdev->revision == 0x87) ||
- (rdev->pdev->device == 0x6604) ||
- (rdev->pdev->device == 0x6605))
+ if (((rdev->pdev->revision == 0x81) &&
+ ((rdev->pdev->device == 0x6600) ||
+ (rdev->pdev->device == 0x6604) ||
+ (rdev->pdev->device == 0x6605) ||
+ (rdev->pdev->device == 0x6610))) ||
+ ((rdev->pdev->revision == 0x83) &&
+ (rdev->pdev->device == 0x6610)))
new_smc = true;
new_chip_name = "oland";
pfp_req_size = SI_PFP_UCODE_SIZE * 4;
break;
case CHIP_HAINAN:
chip_name = "HAINAN";
- if ((rdev->pdev->revision == 0x81) ||
- (rdev->pdev->revision == 0x83) ||
- (rdev->pdev->revision == 0xC3) ||
- (rdev->pdev->device == 0x6664) ||
- (rdev->pdev->device == 0x6665) ||
- (rdev->pdev->device == 0x6667))
+ if (((rdev->pdev->revision == 0x81) &&
+ (rdev->pdev->device == 0x6660)) ||
+ ((rdev->pdev->revision == 0x83) &&
+ ((rdev->pdev->device == 0x6660) ||
+ (rdev->pdev->device == 0x6663) ||
+ (rdev->pdev->device == 0x6665) ||
+ (rdev->pdev->device == 0x6667))) ||
+ ((rdev->pdev->revision == 0xc3) &&
+ (rdev->pdev->device == 0x6665)))
new_smc = true;
new_chip_name = "hainan";
pfp_req_size = SI_PFP_UCODE_SIZE * 4;
(rdev->pdev->device == 0x6817) ||
(rdev->pdev->device == 0x6806))
max_mclk = 120000;
- } else if (rdev->family == CHIP_VERDE) {
- if ((rdev->pdev->revision == 0x81) ||
- (rdev->pdev->revision == 0x83) ||
- (rdev->pdev->revision == 0x87) ||
- (rdev->pdev->device == 0x6820) ||
- (rdev->pdev->device == 0x6821) ||
- (rdev->pdev->device == 0x6822) ||
- (rdev->pdev->device == 0x6823) ||
- (rdev->pdev->device == 0x682A) ||
- (rdev->pdev->device == 0x682B)) {
- max_sclk = 75000;
- max_mclk = 80000;
- }
} else if (rdev->family == CHIP_OLAND) {
if ((rdev->pdev->revision == 0xC7) ||
(rdev->pdev->revision == 0x80) ||
struct tilcdc_crtc *tilcdc_crtc = to_tilcdc_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct tilcdc_drm_private *priv = dev->dev_private;
- uint32_t stat;
+ uint32_t stat, reg;
stat = tilcdc_read_irqstatus(dev);
tilcdc_clear_irqstatus(dev, stat);
dev_err_ratelimited(dev->dev, "%s(0x%08x): Sync lost",
__func__, stat);
tilcdc_crtc->frame_intact = false;
- if (tilcdc_crtc->sync_lost_count++ >
- SYNC_LOST_COUNT_LIMIT) {
- dev_err(dev->dev, "%s(0x%08x): Sync lost flood detected, recovering", __func__, stat);
- queue_work(system_wq, &tilcdc_crtc->recover_work);
- if (priv->rev == 1)
+ if (priv->rev == 1) {
+ reg = tilcdc_read(dev, LCDC_RASTER_CTRL_REG);
+ if (reg & LCDC_RASTER_ENABLE) {
tilcdc_clear(dev, LCDC_RASTER_CTRL_REG,
- LCDC_V1_SYNC_LOST_INT_ENA);
- else
+ LCDC_RASTER_ENABLE);
+ tilcdc_set(dev, LCDC_RASTER_CTRL_REG,
+ LCDC_RASTER_ENABLE);
+ }
+ } else {
+ if (tilcdc_crtc->sync_lost_count++ >
+ SYNC_LOST_COUNT_LIMIT) {
+ dev_err(dev->dev,
+ "%s(0x%08x): Sync lost flood detected, recovering",
+ __func__, stat);
+ queue_work(system_wq,
+ &tilcdc_crtc->recover_work);
tilcdc_write(dev, LCDC_INT_ENABLE_CLR_REG,
LCDC_SYNC_LOST);
- tilcdc_crtc->sync_lost_count = 0;
+ tilcdc_crtc->sync_lost_count = 0;
+ }
}
}
{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0002) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0003) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PETZL, USB_DEVICE_ID_PETZL_HEADLAMP) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PHILIPS, USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) },
#if IS_ENABLED(CONFIG_MOUSE_SYNAPTICS_USB)
if (!(quirks & CP_RDESC_SWAPPED_MIN_MAX))
return rdesc;
+ if (*rsize < 4)
+ return rdesc;
+
for (i = 0; i < *rsize - 4; i++)
if (rdesc[i] == 0x29 && rdesc[i + 2] == 0x19) {
rdesc[i] = 0x19;
#define USB_VENDOR_ID_PETALYNX 0x18b1
#define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE 0x0037
+#define USB_VENDOR_ID_PETZL 0x2122
+#define USB_DEVICE_ID_PETZL_HEADLAMP 0x1234
+
#define USB_VENDOR_ID_PHILIPS 0x0471
#define USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE 0x0617
if (ret)
goto out_unlock;
+ /*
+ * The HID over I2C specification states that if a DEVICE needs time
+ * after the PWR_ON request, it should utilise CLOCK stretching.
+ * However, it has been observed that the Windows driver provides a
+ * 1ms sleep between the PWR_ON and RESET requests and that some devices
+ * rely on this.
+ */
+ usleep_range(1000, 5000);
+
i2c_hid_dbg(ihid, "resetting...\n");
ret = i2c_hid_command(client, &hid_reset_cmd, NULL, 0);
}
init_completion(&open_info->waitevent);
+ open_info->waiting_channel = newchannel;
open_msg = (struct vmbus_channel_open_channel *)open_info->msg;
open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL;
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
ret = vmbus_post_msg(open_msg,
- sizeof(struct vmbus_channel_open_channel));
+ sizeof(struct vmbus_channel_open_channel), true);
if (ret != 0) {
err = ret;
list_del(&open_info->msglistentry);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+ if (newchannel->rescind) {
+ err = -ENODEV;
+ goto error_free_gpadl;
+ }
+
if (open_info->response.open_result.status) {
err = -EAGAIN;
goto error_free_gpadl;
conn_msg.guest_endpoint_id = *shv_guest_servie_id;
conn_msg.host_service_id = *shv_host_servie_id;
- return vmbus_post_msg(&conn_msg, sizeof(conn_msg));
+ return vmbus_post_msg(&conn_msg, sizeof(conn_msg), true);
}
EXPORT_SYMBOL_GPL(vmbus_send_tl_connect_request);
return ret;
init_completion(&msginfo->waitevent);
+ msginfo->waiting_channel = channel;
gpadlmsg = (struct vmbus_channel_gpadl_header *)msginfo->msg;
gpadlmsg->header.msgtype = CHANNELMSG_GPADL_HEADER;
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
ret = vmbus_post_msg(gpadlmsg, msginfo->msgsize -
- sizeof(*msginfo));
+ sizeof(*msginfo), true);
if (ret != 0)
goto cleanup;
gpadl_body->gpadl = next_gpadl_handle;
ret = vmbus_post_msg(gpadl_body,
- submsginfo->msgsize -
- sizeof(*submsginfo));
+ submsginfo->msgsize - sizeof(*submsginfo),
+ true);
if (ret != 0)
goto cleanup;
}
wait_for_completion(&msginfo->waitevent);
+ if (channel->rescind) {
+ ret = -ENODEV;
+ goto cleanup;
+ }
+
/* At this point, we received the gpadl created msg */
*gpadl_handle = gpadlmsg->gpadl;
return -ENOMEM;
init_completion(&info->waitevent);
+ info->waiting_channel = channel;
msg = (struct vmbus_channel_gpadl_teardown *)info->msg;
list_add_tail(&info->msglistentry,
&vmbus_connection.chn_msg_list);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
- ret = vmbus_post_msg(msg,
- sizeof(struct vmbus_channel_gpadl_teardown));
+ ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_gpadl_teardown),
+ true);
if (ret)
goto post_msg_err;
wait_for_completion(&info->waitevent);
+ if (channel->rescind) {
+ ret = -ENODEV;
+ goto post_msg_err;
+ }
+
post_msg_err:
spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
list_del(&info->msglistentry);
msg->header.msgtype = CHANNELMSG_CLOSECHANNEL;
msg->child_relid = channel->offermsg.child_relid;
- ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_close_channel));
+ ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_close_channel),
+ true);
if (ret) {
pr_err("Close failed: close post msg return is %d\n", ret);
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/hyperv.h>
+#include <asm/mshyperv.h>
#include "hyperv_vmbus.h"
{ HV_RDV_GUID },
};
+/*
+ * The rescinded channel may be blocked waiting for a response from the host;
+ * take care of that.
+ */
+static void vmbus_rescind_cleanup(struct vmbus_channel *channel)
+{
+ struct vmbus_channel_msginfo *msginfo;
+ unsigned long flags;
+
+ spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+
+ list_for_each_entry(msginfo, &vmbus_connection.chn_msg_list,
+ msglistentry) {
+
+ if (msginfo->waiting_channel == channel) {
+ complete(&msginfo->waitevent);
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+}
+
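This helper pairs with the rescind checks added after each wait_for_completion() in the channel hunks above:
/*
 * waiter:   wait_for_completion(&info->waitevent);
 *           if (channel->rescind) { ret = -ENODEV; goto ...; }
 * rescind:  vmbus_rescind_cleanup(channel) completes the waitevent of
 *           any message still waiting on that channel, unblocking it.
 */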
static bool is_unsupported_vmbus_devs(const uuid_le *guid)
{
int i;
memset(&msg, 0, sizeof(struct vmbus_channel_relid_released));
msg.child_relid = relid;
msg.header.msgtype = CHANNELMSG_RELID_RELEASED;
- vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released));
+ vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released),
+ true);
}
void hv_event_tasklet_disable(struct vmbus_channel *channel)
init_completion(&vmbus_connection.unload_event);
memset(&hdr, 0, sizeof(struct vmbus_channel_message_header));
hdr.msgtype = CHANNELMSG_UNLOAD;
- vmbus_post_msg(&hdr, sizeof(struct vmbus_channel_message_header));
+ vmbus_post_msg(&hdr, sizeof(struct vmbus_channel_message_header),
+ !crash);
/*
* vmbus_initiate_unload() is also called on crash and the crash can be
channel->rescind = true;
spin_unlock_irqrestore(&channel->lock, flags);
+ vmbus_rescind_cleanup(channel);
+
if (channel->device_obj) {
if (channel->chn_rescind_callback) {
channel->chn_rescind_callback(channel);
msg->msgtype = CHANNELMSG_REQUESTOFFERS;
- ret = vmbus_post_msg(msg,
- sizeof(struct vmbus_channel_message_header));
+ ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_message_header),
+ true);
if (ret != 0) {
pr_err("Unable to request offers - %d\n", ret);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
ret = vmbus_post_msg(msg,
- sizeof(struct vmbus_channel_initiate_contact));
+ sizeof(struct vmbus_channel_initiate_contact),
+ true);
if (ret != 0) {
spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
list_del(&msginfo->msglistentry);
goto cleanup;
vmbus_proto_version = version;
- pr_info("Hyper-V Host Build:%d-%d.%d-%d-%d.%d; Vmbus version:%d.%d\n",
- host_info_eax, host_info_ebx >> 16,
- host_info_ebx & 0xFFFF, host_info_ecx,
- host_info_edx >> 24, host_info_edx & 0xFFFFFF,
- version >> 16, version & 0xFFFF);
+ pr_info("Vmbus version:%d.%d\n",
+ version >> 16, version & 0xFFFF);
kfree(msginfo);
return 0;
/*
* vmbus_post_msg - Send a msg on the vmbus's message connection
*/
-int vmbus_post_msg(void *buffer, size_t buflen)
+int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep)
{
union hv_connection_id conn_id;
int ret = 0;
* insufficient resources. Retry the operation a couple of
* times before giving up.
*/
- while (retries < 20) {
+ while (retries < 100) {
ret = hv_post_message(conn_id, 1, buffer, buflen);
switch (ret) {
}
retries++;
- udelay(usec);
- if (usec < 2048)
+ if (can_sleep && usec > 1000)
+ msleep(usec / 1000);
+ else if (usec < MAX_UDELAY_MS * 1000)
+ udelay(usec);
+ else
+ mdelay(usec / 1000);
+
+ if (usec < 256000)
usec *= 2;
}
return ret;
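The retry loop now backs off exponentially and may sleep when the caller allows it. Assuming the initial delay is 1 usec (its initialization is outside this hunk), the doubling stops at the first value of 256 ms or more, so the worst case across 100 retries is roughly half a second of ramp-up plus ~82 retries near the 262 ms ceiling. A self-contained sketch of that arithmetic:
/* Worked example: total worst-case backoff, assuming usec starts at 1. */
static unsigned int total_backoff_usec(void)
{
	unsigned int usec = 1, total = 0;
	int retries;

	for (retries = 0; retries < 100; retries++) {
		total += usec;
		if (usec < 256000)
			usec *= 2;	/* same cap as the loop above */
	}
	return total;	/* 21,757,951 usec, i.e. just under 22 seconds */
}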
/* The one and only */
struct hv_context hv_context = {
.synic_initialized = false,
- .hypercall_page = NULL,
};
#define HV_TIMER_FREQUENCY (10 * 1000 * 1000) /* 100ns period */
#define HV_MAX_MAX_DELTA_TICKS 0xffffffff
#define HV_MIN_DELTA_TICKS 1
-/*
- * query_hypervisor_info - Get version info of the windows hypervisor
- */
-unsigned int host_info_eax;
-unsigned int host_info_ebx;
-unsigned int host_info_ecx;
-unsigned int host_info_edx;
-
-static int query_hypervisor_info(void)
-{
- unsigned int eax;
- unsigned int ebx;
- unsigned int ecx;
- unsigned int edx;
- unsigned int max_leaf;
- unsigned int op;
-
- /*
- * Its assumed that this is called after confirming that Viridian
- * is present. Query id and revision.
- */
- eax = 0;
- ebx = 0;
- ecx = 0;
- edx = 0;
- op = HVCPUID_VENDOR_MAXFUNCTION;
- cpuid(op, &eax, &ebx, &ecx, &edx);
-
- max_leaf = eax;
-
- if (max_leaf >= HVCPUID_VERSION) {
- eax = 0;
- ebx = 0;
- ecx = 0;
- edx = 0;
- op = HVCPUID_VERSION;
- cpuid(op, &eax, &ebx, &ecx, &edx);
- host_info_eax = eax;
- host_info_ebx = ebx;
- host_info_ecx = ecx;
- host_info_edx = edx;
- }
- return max_leaf;
-}
-
-/*
- * hv_do_hypercall- Invoke the specified hypercall
- */
-u64 hv_do_hypercall(u64 control, void *input, void *output)
-{
- u64 input_address = (input) ? virt_to_phys(input) : 0;
- u64 output_address = (output) ? virt_to_phys(output) : 0;
- void *hypercall_page = hv_context.hypercall_page;
-#ifdef CONFIG_X86_64
- u64 hv_status = 0;
-
- if (!hypercall_page)
- return (u64)ULLONG_MAX;
-
- __asm__ __volatile__("mov %0, %%r8" : : "r" (output_address) : "r8");
- __asm__ __volatile__("call *%3" : "=a" (hv_status) :
- "c" (control), "d" (input_address),
- "m" (hypercall_page));
-
- return hv_status;
-
-#else
-
- u32 control_hi = control >> 32;
- u32 control_lo = control & 0xFFFFFFFF;
- u32 hv_status_hi = 1;
- u32 hv_status_lo = 1;
- u32 input_address_hi = input_address >> 32;
- u32 input_address_lo = input_address & 0xFFFFFFFF;
- u32 output_address_hi = output_address >> 32;
- u32 output_address_lo = output_address & 0xFFFFFFFF;
-
- if (!hypercall_page)
- return (u64)ULLONG_MAX;
-
- __asm__ __volatile__ ("call *%8" : "=d"(hv_status_hi),
- "=a"(hv_status_lo) : "d" (control_hi),
- "a" (control_lo), "b" (input_address_hi),
- "c" (input_address_lo), "D"(output_address_hi),
- "S"(output_address_lo), "m" (hypercall_page));
-
- return hv_status_lo | ((u64)hv_status_hi << 32);
-#endif /* !x86_64 */
-}
-EXPORT_SYMBOL_GPL(hv_do_hypercall);
-
-#ifdef CONFIG_X86_64
-static u64 read_hv_clock_tsc(struct clocksource *arg)
-{
- u64 current_tick;
- struct ms_hyperv_tsc_page *tsc_pg = hv_context.tsc_page;
-
- if (tsc_pg->tsc_sequence != 0) {
- /*
- * Use the tsc page to compute the value.
- */
-
- while (1) {
- u64 tmp;
- u32 sequence = tsc_pg->tsc_sequence;
- u64 cur_tsc;
- u64 scale = tsc_pg->tsc_scale;
- s64 offset = tsc_pg->tsc_offset;
-
- rdtscll(cur_tsc);
- /* current_tick = ((cur_tsc *scale) >> 64) + offset */
- asm("mulq %3"
- : "=d" (current_tick), "=a" (tmp)
- : "a" (cur_tsc), "r" (scale));
-
- current_tick += offset;
- if (tsc_pg->tsc_sequence == sequence)
- return current_tick;
-
- if (tsc_pg->tsc_sequence != 0)
- continue;
- /*
- * Fallback using MSR method.
- */
- break;
- }
- }
- rdmsrl(HV_X64_MSR_TIME_REF_COUNT, current_tick);
- return current_tick;
-}
-
-static struct clocksource hyperv_cs_tsc = {
- .name = "hyperv_clocksource_tsc_page",
- .rating = 425,
- .read = read_hv_clock_tsc,
- .mask = CLOCKSOURCE_MASK(64),
- .flags = CLOCK_SOURCE_IS_CONTINUOUS,
-};
-#endif
-
-
/*
* hv_init - Main initialization routine.
*
*/
int hv_init(void)
{
- int max_leaf;
- union hv_x64_msr_hypercall_contents hypercall_msr;
- void *virtaddr = NULL;
memset(hv_context.synic_event_page, 0, sizeof(void *) * NR_CPUS);
memset(hv_context.synic_message_page, 0,
memset(hv_context.clk_evt, 0,
sizeof(void *) * NR_CPUS);
- max_leaf = query_hypervisor_info();
-
- /*
- * Write our OS ID.
- */
- hv_context.guestid = generate_guest_id(0, LINUX_VERSION_CODE, 0);
- wrmsrl(HV_X64_MSR_GUEST_OS_ID, hv_context.guestid);
-
- /* See if the hypercall page is already set */
- rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
-
- virtaddr = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_EXEC);
-
- if (!virtaddr)
- goto cleanup;
-
- hypercall_msr.enable = 1;
-
- hypercall_msr.guest_physical_address = vmalloc_to_pfn(virtaddr);
- wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
-
- /* Confirm that hypercall page did get setup. */
- hypercall_msr.as_uint64 = 0;
- rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
-
- if (!hypercall_msr.enable)
- goto cleanup;
-
- hv_context.hypercall_page = virtaddr;
-
-#ifdef CONFIG_X86_64
- if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) {
- union hv_x64_msr_hypercall_contents tsc_msr;
- void *va_tsc;
-
- va_tsc = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL);
- if (!va_tsc)
- goto cleanup;
- hv_context.tsc_page = va_tsc;
+ if (!hv_is_hypercall_page_setup())
+ return -ENOTSUPP;
- rdmsrl(HV_X64_MSR_REFERENCE_TSC, tsc_msr.as_uint64);
-
- tsc_msr.enable = 1;
- tsc_msr.guest_physical_address = vmalloc_to_pfn(va_tsc);
-
- wrmsrl(HV_X64_MSR_REFERENCE_TSC, tsc_msr.as_uint64);
- clocksource_register_hz(&hyperv_cs_tsc, NSEC_PER_SEC/100);
- }
-#endif
return 0;
-
-cleanup:
- if (virtaddr) {
- if (hypercall_msr.enable) {
- hypercall_msr.as_uint64 = 0;
- wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
- }
-
- vfree(virtaddr);
- }
-
- return -ENOTSUPP;
}
/*
*/
void hv_cleanup(bool crash)
{
- union hv_x64_msr_hypercall_contents hypercall_msr;
-
- /* Reset our OS id */
- wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
- if (hv_context.hypercall_page) {
- hypercall_msr.as_uint64 = 0;
- wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
- if (!crash)
- vfree(hv_context.hypercall_page);
- hv_context.hypercall_page = NULL;
- }
-
-#ifdef CONFIG_X86_64
- /*
- * Cleanup the TSC page based CS.
- */
- if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) {
- /*
- * Crash can happen in an interrupt context and unregistering
- * a clocksource is impossible and redundant in this case.
- */
- if (!oops_in_progress) {
- clocksource_change_rating(&hyperv_cs_tsc, 10);
- clocksource_unregister(&hyperv_cs_tsc);
- }
-
- hypercall_msr.as_uint64 = 0;
- wrmsrl(HV_X64_MSR_REFERENCE_TSC, hypercall_msr.as_uint64);
- if (!crash)
- vfree(hv_context.tsc_page);
- hv_context.tsc_page = NULL;
- }
-#endif
}
/*
WARN_ON(!clockevent_state_oneshot(evt));
- rdmsrl(HV_X64_MSR_TIME_REF_COUNT, current_tick);
+ hv_get_current_tick(current_tick);
current_tick += delta;
- wrmsrl(HV_X64_MSR_STIMER0_COUNT, current_tick);
+ hv_init_timer(HV_X64_MSR_STIMER0_COUNT, current_tick);
return 0;
}
static int hv_ce_shutdown(struct clock_event_device *evt)
{
- wrmsrl(HV_X64_MSR_STIMER0_COUNT, 0);
- wrmsrl(HV_X64_MSR_STIMER0_CONFIG, 0);
+ hv_init_timer(HV_X64_MSR_STIMER0_COUNT, 0);
+ hv_init_timer_config(HV_X64_MSR_STIMER0_CONFIG, 0);
return 0;
}
timer_cfg.enable = 1;
timer_cfg.auto_enable = 1;
timer_cfg.sintx = VMBUS_MESSAGE_SINT;
- wrmsrl(HV_X64_MSR_STIMER0_CONFIG, timer_cfg.as_uint64);
+ hv_init_timer_config(HV_X64_MSR_STIMER0_CONFIG, timer_cfg.as_uint64);
return 0;
}
goto err;
}
- for_each_online_cpu(cpu) {
+ for_each_present_cpu(cpu) {
hv_context.event_dpc[cpu] = kmalloc(size, GFP_ATOMIC);
if (hv_context.event_dpc[cpu] == NULL) {
pr_err("Unable to allocate event dpc\n");
pr_err("Unable to allocate post msg page\n");
goto err;
}
+
+ INIT_LIST_HEAD(&hv_context.percpu_list[cpu]);
}
return 0;
int cpu;
kfree(hv_context.hv_numa_map);
- for_each_online_cpu(cpu)
+ for_each_present_cpu(cpu)
hv_synic_free_cpu(cpu);
}
* retrieve the initialized message and event pages. Otherwise, we create and
* initialize the message and event pages.
*/
-void hv_synic_init(void *arg)
+int hv_synic_init(unsigned int cpu)
{
- u64 version;
union hv_synic_simp simp;
union hv_synic_siefp siefp;
union hv_synic_sint shared_sint;
union hv_synic_scontrol sctrl;
u64 vp_index;
- int cpu = smp_processor_id();
-
- if (!hv_context.hypercall_page)
- return;
-
- /* Check the version */
- rdmsrl(HV_X64_MSR_SVERSION, version);
-
/* Setup the Synic's message page */
- rdmsrl(HV_X64_MSR_SIMP, simp.as_uint64);
+ hv_get_simp(simp.as_uint64);
simp.simp_enabled = 1;
simp.base_simp_gpa = virt_to_phys(hv_context.synic_message_page[cpu])
>> PAGE_SHIFT;
- wrmsrl(HV_X64_MSR_SIMP, simp.as_uint64);
+ hv_set_simp(simp.as_uint64);
/* Setup the Synic's event page */
- rdmsrl(HV_X64_MSR_SIEFP, siefp.as_uint64);
+ hv_get_siefp(siefp.as_uint64);
siefp.siefp_enabled = 1;
siefp.base_siefp_gpa = virt_to_phys(hv_context.synic_event_page[cpu])
>> PAGE_SHIFT;
- wrmsrl(HV_X64_MSR_SIEFP, siefp.as_uint64);
+ hv_set_siefp(siefp.as_uint64);
/* Setup the shared SINT. */
- rdmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+ hv_get_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT,
+ shared_sint.as_uint64);
shared_sint.as_uint64 = 0;
shared_sint.vector = HYPERVISOR_CALLBACK_VECTOR;
shared_sint.masked = false;
shared_sint.auto_eoi = true;
- wrmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+ hv_set_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT,
+ shared_sint.as_uint64);
/* Enable the global synic bit */
- rdmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
+ hv_get_synic_state(sctrl.as_uint64);
sctrl.enable = 1;
- wrmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
+ hv_set_synic_state(sctrl.as_uint64);
hv_context.synic_initialized = true;
* of cpuid and Linux' notion of cpuid.
* This array will be indexed using Linux cpuid.
*/
- rdmsrl(HV_X64_MSR_VP_INDEX, vp_index);
+ hv_get_vp_index(vp_index);
hv_context.vp_index[cpu] = (u32)vp_index;
- INIT_LIST_HEAD(&hv_context.percpu_list[cpu]);
-
/*
* Register the per-cpu clockevent source.
*/
HV_TIMER_FREQUENCY,
HV_MIN_DELTA_TICKS,
HV_MAX_MAX_DELTA_TICKS);
- return;
+ return 0;
}
/*
/*
* hv_synic_cleanup - Cleanup routine for hv_synic_init().
*/
-void hv_synic_cleanup(void *arg)
+int hv_synic_cleanup(unsigned int cpu)
{
union hv_synic_sint shared_sint;
union hv_synic_simp simp;
union hv_synic_siefp siefp;
union hv_synic_scontrol sctrl;
- int cpu = smp_processor_id();
+ struct vmbus_channel *channel, *sc;
+ bool channel_found = false;
+ unsigned long flags;
if (!hv_context.synic_initialized)
- return;
+ return -EFAULT;
+
+ /*
+ * Search for channels which are bound to the CPU we're about to
+ * clean up. In case we find one and vmbus is still connected we need to
+ * fail, this will effectively prevent CPU offlining. There is no way
+ * we can re-bind channels to different CPUs for now.
+ */
+ mutex_lock(&vmbus_connection.channel_mutex);
+ list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
+ if (channel->target_cpu == cpu) {
+ channel_found = true;
+ break;
+ }
+ spin_lock_irqsave(&channel->lock, flags);
+ list_for_each_entry(sc, &channel->sc_list, sc_list) {
+ if (sc->target_cpu == cpu) {
+ channel_found = true;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&channel->lock, flags);
+ if (channel_found)
+ break;
+ }
+ mutex_unlock(&vmbus_connection.channel_mutex);
+
+ if (channel_found && vmbus_connection.conn_state == CONNECTED)
+ return -EBUSY;
/* Turn off clockevent device */
if (ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE) {
hv_ce_shutdown(hv_context.clk_evt[cpu]);
}
- rdmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+ hv_get_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT,
+ shared_sint.as_uint64);
shared_sint.masked = 1;
/* Need to correctly cleanup in the case of SMP!!! */
/* Disable the interrupt */
- wrmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+ hv_set_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT,
+ shared_sint.as_uint64);
- rdmsrl(HV_X64_MSR_SIMP, simp.as_uint64);
+ hv_get_simp(simp.as_uint64);
simp.simp_enabled = 0;
simp.base_simp_gpa = 0;
- wrmsrl(HV_X64_MSR_SIMP, simp.as_uint64);
+ hv_set_simp(simp.as_uint64);
- rdmsrl(HV_X64_MSR_SIEFP, siefp.as_uint64);
+ hv_get_siefp(siefp.as_uint64);
siefp.siefp_enabled = 0;
siefp.base_siefp_gpa = 0;
- wrmsrl(HV_X64_MSR_SIEFP, siefp.as_uint64);
+ hv_set_siefp(siefp.as_uint64);
/* Disable the global synic bit */
- rdmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
+ hv_get_synic_state(sctrl.as_uint64);
sctrl.enable = 0;
- wrmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
+ hv_set_synic_state(sctrl.as_uint64);
+
+ return 0;
}
static const char fcopy_devname[] = "vmbus/hv_fcopy";
static u8 *recv_buffer;
static struct hvutil_transport *hvt;
+static struct completion release_event;
/*
* This state maintains the version number registered by the daemon.
*/
if (cancel_delayed_work_sync(&fcopy_timeout_work))
fcopy_respond_to_host(HV_E_FAIL);
+ complete(&release_event);
}
int hv_fcopy_init(struct hv_util_service *srv)
recv_buffer = srv->recv_buffer;
fcopy_transaction.recv_channel = srv->channel;
+ init_completion(&release_event);
/*
* When this driver loads, the user level daemon that
* processes the host requests may not yet be running.
fcopy_transaction.state = HVUTIL_DEVICE_DYING;
cancel_delayed_work_sync(&fcopy_timeout_work);
hvutil_transport_destroy(hvt);
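+	/*
+	 * Wait for the daemon to release the device (signalled via
+	 * release_event from the reset callback) so the module state is
+	 * not torn down while the transport is still in use.
+	 */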
+ wait_for_completion(&release_event);
}
static const char kvp_devname[] = "vmbus/hv_kvp";
static u8 *recv_buffer;
static struct hvutil_transport *hvt;
+static struct completion release_event;
/*
* Register the kernel component with the user-level daemon.
* As part of this registration, pass the LIC version number.
if (cancel_delayed_work_sync(&kvp_timeout_work))
kvp_respond_to_host(NULL, HV_E_FAIL);
kvp_transaction.state = HVUTIL_DEVICE_INIT;
+ complete(&release_event);
}
int
recv_buffer = srv->recv_buffer;
kvp_transaction.recv_channel = srv->channel;
+ init_completion(&release_event);
/*
* When this driver loads, the user level daemon that
* processes the host requests may not yet be running.
cancel_delayed_work_sync(&kvp_timeout_work);
cancel_work_sync(&kvp_sendkey_work);
hvutil_transport_destroy(hvt);
+ wait_for_completion(&release_event);
}
static const char vss_devname[] = "vmbus/hv_vss";
static __u8 *recv_buffer;
static struct hvutil_transport *hvt;
+static struct completion release_event;
static void vss_timeout_func(struct work_struct *dummy);
static void vss_handle_request(struct work_struct *dummy);
if (cancel_delayed_work_sync(&vss_timeout_work))
vss_respond_to_host(HV_E_FAIL);
vss_transaction.state = HVUTIL_DEVICE_INIT;
+ complete(&release_event);
}
int
hv_vss_init(struct hv_util_service *srv)
{
+ init_completion(&release_event);
if (vmbus_proto_version < VERSION_WIN8_1) {
pr_warn("Integration service 'Backup (volume snapshot)'"
" not supported on this host version.\n");
cancel_delayed_work_sync(&vss_timeout_work);
cancel_work_sync(&vss_handle_request_work);
hvutil_transport_destroy(hvt);
+ wait_for_completion(&release_event);
}
#include <linux/sysctl.h>
#include <linux/reboot.h>
#include <linux/hyperv.h>
+#include <asm/mshyperv.h>
#include "hyperv_vmbus.h"
*/
u64 current_tick;
- rdmsrl(HV_X64_MSR_TIME_REF_COUNT, current_tick);
+ hv_get_current_tick(current_tick);
newtime += (current_tick - wrk->ref_time);
}
host_tns = (newtime - WLTIMEDELTA) * 100;
*/
#define HV_UTIL_NEGO_TIMEOUT 55
-/*
- * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
- * is set by CPUID(HVCPUID_VERSION_FEATURES).
- */
-enum hv_cpuid_function {
- HVCPUID_VERSION_FEATURES = 0x00000001,
- HVCPUID_VENDOR_MAXFUNCTION = 0x40000000,
- HVCPUID_INTERFACE = 0x40000001,
-
- /*
- * The remaining functions depend on the value of
- * HVCPUID_INTERFACE
- */
- HVCPUID_VERSION = 0x40000002,
- HVCPUID_FEATURES = 0x40000003,
- HVCPUID_ENLIGHTENMENT_INFO = 0x40000004,
- HVCPUID_IMPLEMENTATION_LIMITS = 0x40000005,
-};
-
-#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE 0x400
-#define HV_X64_MSR_CRASH_P0 0x40000100
-#define HV_X64_MSR_CRASH_P1 0x40000101
-#define HV_X64_MSR_CRASH_P2 0x40000102
-#define HV_X64_MSR_CRASH_P3 0x40000103
-#define HV_X64_MSR_CRASH_P4 0x40000104
-#define HV_X64_MSR_CRASH_CTL 0x40000105
-#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
-/* Define version of the synthetic interrupt controller. */
-#define HV_SYNIC_VERSION (1)
-
-#define HV_ANY_VP (0xFFFFFFFF)
-
-/* Define synthetic interrupt controller flag constants. */
-#define HV_EVENT_FLAGS_COUNT (256 * 8)
#define HV_EVENT_FLAGS_BYTE_COUNT (256)
#define HV_EVENT_FLAGS_DWORD_COUNT (256 / sizeof(u32))
-/* Define invalid partition identifier. */
-#define HV_PARTITION_ID_INVALID ((u64)0x0)
-
-/* Define port type. */
-enum hv_port_type {
- HVPORT_MSG = 1,
- HVPORT_EVENT = 2,
- HVPORT_MONITOR = 3
-};
-
-/* Define port information structure. */
-struct hv_port_info {
- enum hv_port_type port_type;
- u32 padding;
- union {
- struct {
- u32 target_sint;
- u32 target_vp;
- u64 rsvdz;
- } message_port_info;
- struct {
- u32 target_sint;
- u32 target_vp;
- u16 base_flag_number;
- u16 flag_count;
- u32 rsvdz;
- } event_port_info;
- struct {
- u64 monitor_address;
- u64 rsvdz;
- } monitor_port_info;
- };
-};
-
-struct hv_connection_info {
- enum hv_port_type port_type;
- u32 padding;
- union {
- struct {
- u64 rsvdz;
- } message_connection_info;
- struct {
- u64 rsvdz;
- } event_connection_info;
- struct {
- u64 monitor_address;
- } monitor_connection_info;
- };
-};
-
/*
* Timer configuration register.
*/
};
};
-/* Define the number of message buffers associated with each port. */
-#define HV_PORT_MESSAGE_BUFFER_COUNT (16)
/* Define the synthetic interrupt controller event flags format. */
union hv_synic_event_flags {
u32 flags32[HV_EVENT_FLAGS_DWORD_COUNT];
};
-/* Define the synthetic interrupt flags page layout. */
-struct hv_synic_event_flags_page {
- union hv_synic_event_flags sintevent_flags[HV_SYNIC_SINT_COUNT];
-};
-
/* Define SynIC control register. */
union hv_synic_scontrol {
u64 as_uint64;
u8 rsvdz4[1984];
};
+#define HV_HYPERCALL_PARAM_ALIGN sizeof(u64)
+
/* Definition of the hv_post_message hypercall input structure. */
struct hv_input_post_message {
union hv_connection_id connectionid;
u64 payload[HV_MESSAGE_PAYLOAD_QWORD_COUNT];
};
-/*
- * Versioning definitions used for guests reporting themselves to the
- * hypervisor, and visa versa.
- */
-
-/* Version info reported by guest OS's */
-enum hv_guest_os_vendor {
- HVGUESTOS_VENDOR_MICROSOFT = 0x0001
-};
-
-enum hv_guest_os_microsoft_ids {
- HVGUESTOS_MICROSOFT_UNDEFINED = 0x00,
- HVGUESTOS_MICROSOFT_MSDOS = 0x01,
- HVGUESTOS_MICROSOFT_WINDOWS3X = 0x02,
- HVGUESTOS_MICROSOFT_WINDOWS9X = 0x03,
- HVGUESTOS_MICROSOFT_WINDOWSNT = 0x04,
- HVGUESTOS_MICROSOFT_WINDOWSCE = 0x05
-};
-
-/*
- * Declare the MSR used to identify the guest OS.
- */
-#define HV_X64_MSR_GUEST_OS_ID 0x40000000
-
-union hv_x64_msr_guest_os_id_contents {
- u64 as_uint64;
- struct {
- u64 build_number:16;
- u64 service_version:8; /* Service Pack, etc. */
- u64 minor_version:8;
- u64 major_version:8;
- u64 os_id:8; /* enum hv_guest_os_microsoft_ids (if Vendor=MS) */
- u64 vendor_id:16; /* enum hv_guest_os_vendor */
- };
-};
-
-/*
- * Declare the MSR used to setup pages used to communicate with the hypervisor.
- */
-#define HV_X64_MSR_HYPERCALL 0x40000001
-
-union hv_x64_msr_hypercall_contents {
- u64 as_uint64;
- struct {
- u64 enable:1;
- u64 reserved:11;
- u64 guest_physical_address:52;
- };
-};
-
enum {
VMBUS_MESSAGE_CONNECTION_ID = 1,
VMBUS_MESSAGE_SINT = 2,
};
-/* #defines */
-
-#define HV_PRESENT_BIT 0x80000000
-
-/*
- * The guest OS needs to register the guest ID with the hypervisor.
- * The guest ID is a 64 bit entity and the structure of this ID is
- * specified in the Hyper-V specification:
- *
- * http://msdn.microsoft.com/en-us/library/windows/hardware/ff542653%28v=vs.85%29.aspx
- *
- * While the current guideline does not specify how Linux guest ID(s)
- * need to be generated, our plan is to publish the guidelines for
- * Linux and other guest operating systems that currently are hosted
- * on Hyper-V. The implementation here conforms to this yet
- * unpublished guidelines.
- *
- *
- * Bit(s)
- * 63 - Indicates if the OS is Open Source or not; 1 is Open Source
- * 62:56 - Os Type; Linux is 0x100
- * 55:48 - Distro specific identification
- * 47:16 - Linux kernel version number
- * 15:0 - Distro specific identification
- *
- *
- */
-
-#define HV_LINUX_VENDOR_ID 0x8100
-
-/*
- * Generate the guest ID based on the guideline described above.
- */
-
-static inline __u64 generate_guest_id(__u8 d_info1, __u32 kernel_version,
- __u16 d_info2)
-{
- __u64 guest_id = 0;
-
- guest_id = (((__u64)HV_LINUX_VENDOR_ID) << 48);
- guest_id |= (((__u64)(d_info1)) << 48);
- guest_id |= (((__u64)(kernel_version)) << 16);
- guest_id |= ((__u64)(d_info2));
-
- return guest_id;
-}
-
-
-#define HV_CPU_POWER_MANAGEMENT (1 << 0)
-#define HV_RECOMMENDATIONS_MAX 4
-
-#define HV_X64_MAX 5
-#define HV_CAPS_MAX 8
-
-
-#define HV_HYPERCALL_PARAM_ALIGN sizeof(u64)
-
-
-/* Service definitions */
-
-#define HV_SERVICE_PARENT_PORT (0)
-#define HV_SERVICE_PARENT_CONNECTION (0)
-
-#define HV_SERVICE_CONNECT_RESPONSE_SUCCESS (0)
-#define HV_SERVICE_CONNECT_RESPONSE_INVALID_PARAMETER (1)
-#define HV_SERVICE_CONNECT_RESPONSE_UNKNOWN_SERVICE (2)
-#define HV_SERVICE_CONNECT_RESPONSE_CONNECTION_REJECTED (3)
-
-#define HV_SERVICE_CONNECT_REQUEST_MESSAGE_ID (1)
-#define HV_SERVICE_CONNECT_RESPONSE_MESSAGE_ID (2)
-#define HV_SERVICE_DISCONNECT_REQUEST_MESSAGE_ID (3)
-#define HV_SERVICE_DISCONNECT_RESPONSE_MESSAGE_ID (4)
-#define HV_SERVICE_MAX_MESSAGE_ID (4)
-
-#define HV_SERVICE_PROTOCOL_VERSION (0x0010)
-#define HV_CONNECT_PAYLOAD_BYTE_COUNT 64
-
-/* #define VMBUS_REVISION_NUMBER 6 */
-
-/* Our local vmbus's port and connection id. Anything >0 is fine */
-/* #define VMBUS_PORT_ID 11 */
-
-/* 628180B8-308D-4c5e-B7DB-1BEB62E62EF4 */
-static const uuid_le VMBUS_SERVICE_ID = {
- .b = {
- 0xb8, 0x80, 0x81, 0x62, 0x8d, 0x30, 0x5e, 0x4c,
- 0xb7, 0xdb, 0x1b, 0xeb, 0x62, 0xe6, 0x2e, 0xf4
- },
-};
-
-
-
struct hv_context {
/* We only support running on top of Hyper-V
* So at this point this really can only contain the Hyper-V ID
*/
u64 guestid;
- void *hypercall_page;
void *tsc_page;
bool synic_initialized;
extern struct hv_context hv_context;
-struct ms_hyperv_tsc_page {
- volatile u32 tsc_sequence;
- u32 reserved1;
- volatile u64 tsc_scale;
- volatile s64 tsc_offset;
- u64 reserved2[509];
-};
-
struct hv_ring_buffer_debug_info {
u32 current_interrupt_mask;
u32 current_read_index;
extern void hv_synic_free(void);
-extern void hv_synic_init(void *irqarg);
+extern int hv_synic_init(unsigned int cpu);
-extern void hv_synic_cleanup(void *arg);
+extern int hv_synic_cleanup(unsigned int cpu);
extern void hv_synic_clockevents_cleanup(void);
-/*
- * Host version information.
- */
-extern unsigned int host_info_eax;
-extern unsigned int host_info_ebx;
-extern unsigned int host_info_ecx;
-extern unsigned int host_info_edx;
-
/* Interface */
extern struct vmbus_channel_message_table_entry
channel_message_table[CHANNELMSG_COUNT];
-/* Free the message slot and signal end-of-message if required */
-static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
-{
- /*
- * On crash we're reading some other CPU's message page and we need
- * to be careful: this other CPU may already had cleared the header
- * and the host may already had delivered some other message there.
- * In case we blindly write msg->header.message_type we're going
- * to lose it. We can still lose a message of the same type but
- * we count on the fact that there can only be one
- * CHANNELMSG_UNLOAD_RESPONSE and we don't care about other messages
- * on crash.
- */
- if (cmpxchg(&msg->header.message_type, old_msg_type,
- HVMSG_NONE) != old_msg_type)
- return;
-
- /*
- * Make sure the write to MessageType (ie set to
- * HVMSG_NONE) happens before we read the
- * MessagePending and EOMing. Otherwise, the EOMing
- * will not deliver any more messages since there is
- * no empty slot
- */
- mb();
-
- if (msg->header.message_flags.msg_pending) {
- /*
- * This will cause message queue rescan to
- * possibly deliver another msg from the
- * hypervisor
- */
- wrmsrl(HV_X64_MSR_EOM, 0);
- }
-}
/* General vmbus interface */
int vmbus_device_register(struct hv_device *child_device_obj);
void vmbus_device_unregister(struct hv_device *device_obj);
-/* static void */
-/* VmbusChildDeviceDestroy( */
-/* struct hv_device *); */
-
struct vmbus_channel *relid2channel(u32 relid);
void vmbus_free_channels(void);
int vmbus_connect(void);
void vmbus_disconnect(void);
-int vmbus_post_msg(void *buffer, size_t buflen);
+int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep);
void vmbus_on_event(unsigned long data);
void vmbus_on_msg_dpc(unsigned long data);
unsigned long flags = 0;
struct hv_ring_buffer_info *outring_info = &channel->outbound;
+ if (channel->rescind)
+ return -ENODEV;
+
for (i = 0; i < kv_count; i++)
totalbytes_towrite += kv_list[i].iov_len;
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
hv_signal_on_write(old_write, channel, kick_q);
+
+ if (channel->rescind)
+ return -ENODEV;
+
return 0;
}
static struct completion probe_event;
-
-static void hyperv_report_panic(struct pt_regs *regs)
-{
- static bool panic_reported;
-
- /*
- * We prefer to report panic on 'die' chain as we have proper
- * registers to report, but if we miss it (e.g. on BUG()) we need
- * to report it on 'panic'.
- */
- if (panic_reported)
- return;
- panic_reported = true;
-
- wrmsrl(HV_X64_MSR_CRASH_P0, regs->ip);
- wrmsrl(HV_X64_MSR_CRASH_P1, regs->ax);
- wrmsrl(HV_X64_MSR_CRASH_P2, regs->bx);
- wrmsrl(HV_X64_MSR_CRASH_P3, regs->cx);
- wrmsrl(HV_X64_MSR_CRASH_P4, regs->dx);
-
- /*
- * Let Hyper-V know there is crash data available
- */
- wrmsrl(HV_X64_MSR_CRASH_CTL, HV_CRASH_CTL_CRASH_NOTIFY);
-}
+static int hyperv_cpuhp_online;
static int hyperv_panic_event(struct notifier_block *nb, unsigned long val,
void *args)
* Initialize the per-cpu interrupt state and
* connect to the host.
*/
- on_each_cpu(hv_synic_init, NULL, 1);
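+	/*
+	 * CPUHP_AP_ONLINE_DYN: cpuhp_setup_state() returns the dynamically
+	 * allocated hotplug state number (>= 0) on success; keep it in
+	 * hyperv_cpuhp_online so the state can later be removed with
+	 * cpuhp_remove_state().
+	 */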
+ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/hyperv:online",
+ hv_synic_init, hv_synic_cleanup);
+ if (ret < 0)
+ goto err_alloc;
+ hyperv_cpuhp_online = ret;
+
ret = vmbus_connect();
if (ret)
goto err_connect;
- if (vmbus_proto_version > VERSION_WIN7)
- cpu_hotplug_disable();
-
/*
* Only register if the crash MSRs are available
*/
return 0;
err_connect:
- on_each_cpu(hv_synic_cleanup, NULL, 1);
+ cpuhp_remove_state(hyperv_cpuhp_online);
err_alloc:
hv_synic_free();
hv_remove_vmbus_irq();
static void hv_kexec_handler(void)
{
- int cpu;
-
hv_synic_clockevents_cleanup();
vmbus_initiate_unload(false);
- for_each_online_cpu(cpu)
- smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
+ vmbus_connection.conn_state = DISCONNECTED;
+ /* Make sure conn_state is set as hv_synic_cleanup checks for it */
+ mb();
+ cpuhp_remove_state(hyperv_cpuhp_online);
hv_cleanup(false);
};
* doing the cleanup for current CPU only. This should be sufficient
* for kdump.
*/
- hv_synic_cleanup(NULL);
+ vmbus_connection.conn_state = DISCONNECTED;
+ hv_synic_cleanup(smp_processor_id());
hv_cleanup(true);
};
hv_cleanup(false);
for_each_online_cpu(cpu) {
tasklet_kill(hv_context.event_dpc[cpu]);
- smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
}
+ cpuhp_remove_state(hyperv_cpuhp_online);
hv_synic_free();
acpi_bus_unregister_driver(&vmbus_acpi_driver);
- if (vmbus_proto_version > VERSION_WIN7)
- cpu_hotplug_enable();
}
if (!drvdata || !drvdata->csdev)
return;
- stm_disable(drvdata->csdev, NULL);
+ coresight_disable(drvdata->csdev);
}
static phys_addr_t
u8 command, int size, union i2c_smbus_data *data)
{
struct i2c_piix4_adapdata *adapdata = i2c_get_adapdata(adap);
+ unsigned short piix4_smba = adapdata->smba;
+ int retries = MAX_TIMEOUT;
+ int smbslvcnt;
u8 smba_en_lo;
u8 port;
int retval;
+ /* Request the SMBUS semaphore, avoid conflicts with the IMC */
+ smbslvcnt = inb_p(SMBSLVCNT);
+ do {
+ outb_p(smbslvcnt | 0x10, SMBSLVCNT);
+
+ /* Check the semaphore status */
+ smbslvcnt = inb_p(SMBSLVCNT);
+ if (smbslvcnt & 0x10)
+ break;
+
+ usleep_range(1000, 2000);
+ } while (--retries);
+ /* SMBus is still owned by the IMC, we give up */
+ if (!retries)
+ return -EBUSY;
+
mutex_lock(&piix4_mutex_sb800);
outb_p(piix4_port_sel_sb800, SB800_PIIX4_SMB_IDX);
mutex_unlock(&piix4_mutex_sb800);
+ /* Release the semaphore */
+ outb_p(smbslvcnt | 0x20, SMBSLVCNT);
+
return retval;
}
if (!client->irq) {
int irq = -ENOENT;
- if (dev->of_node) {
+ if (client->flags & I2C_CLIENT_HOST_NOTIFY) {
+ dev_dbg(dev, "Using Host Notify IRQ\n");
+ irq = i2c_smbus_host_notify_to_irq(client);
+ } else if (dev->of_node) {
irq = of_irq_get_byname(dev->of_node, "irq");
if (irq == -EINVAL || irq == -ENODATA)
irq = of_irq_get(dev->of_node, 0);
}
if (irq == -EPROBE_DEFER)
return irq;
- /*
- * ACPI and OF did not find any useful IRQ, try to see
- * if Host Notify can be used.
- */
- if (irq < 0) {
- dev_dbg(dev, "Using Host Notify IRQ\n");
- irq = i2c_smbus_host_notify_to_irq(client);
- }
+
if (irq < 0)
irq = 0;
if (i2c_check_addr_validity(addr, info.flags)) {
dev_err(&adap->dev, "of_i2c: invalid addr=%x on %s\n",
- info.addr, node->full_name);
+ addr, node->full_name);
return ERR_PTR(-EINVAL);
}
info.of_node = of_node_get(node);
info.archdata = &dev_ad;
+ if (of_property_read_bool(node, "host-notify"))
+ info.flags |= I2C_CLIENT_HOST_NOTIFY;
+
if (of_get_property(node, "wakeup-source", NULL))
info.flags |= I2C_CLIENT_WAKE;
int ret;
if (!client || !slave_cb) {
- WARN(1, "insufficent data\n");
+ WARN(1, "insufficient data\n");
return -EINVAL;
}
unsigned long arg)
{
struct i2c_smbus_ioctl_data data_arg;
- union i2c_smbus_data temp;
+ union i2c_smbus_data temp = {};
int datasize, res;
if (copy_from_user(&data_arg,
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mm.h>
-#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/poll.h>
#include <linux/init.h>
input_dev->name = xpad->name;
input_dev->phys = xpad->phys;
usb_to_input_id(xpad->udev, &input_dev->id);
+
+ if (xpad->xtype == XTYPE_XBOX360W) {
+ /* x360w controllers and the receiver have different ids */
+ input_dev->id.product = 0x02a1;
+ }
+
input_dev->dev.parent = &xpad->intf->dev;
input_set_drvdata(input_dev, xpad);
MODULE_DEVICE_TABLE(i2c, adxl34x_id);
-#ifdef CONFIG_OF
static const struct of_device_id adxl34x_of_id[] = {
/*
* The ADXL346 is backward-compatible with the ADXL345. Differences are
};
MODULE_DEVICE_TABLE(of, adxl34x_of_id);
-#endif
static struct i2c_driver adxl34x_driver = {
.driver = {
.name = "adxl34x",
.pm = &adxl34x_i2c_pm,
- .of_match_table = of_match_ptr(adxl34x_of_id),
+ .of_match_table = adxl34x_of_id,
},
.probe = adxl34x_i2c_probe,
.remove = adxl34x_i2c_remove,
(_b[1] & 0x7F) \
)
-#define SS4_TS_Y_V2(_b) (s8)( \
+#define SS4_TS_Y_V2(_b) -(s8)( \
((_b[3] & 0x01) << 7) | \
(_b[2] & 0x7F) \
)
* after soft reset, we should wait for 1 ms
* before the device becomes operational
*/
-#define SOFT_RESET_DELAY_MS 3
+#define SOFT_RESET_DELAY_US 3000
/* and after hard reset, we should wait for max 500ms */
#define HARD_RESET_DELAY_MS 500
if (ret) {
dev_err(&client->dev, "Unable to reset device\n");
} else {
- msleep(SOFT_RESET_DELAY_MS);
+ usleep_range(SOFT_RESET_DELAY_US, SOFT_RESET_DELAY_US + 100);
ret = synaptics_i2c_config(client);
if (ret)
dev_err(&client->dev, "Unable to config device\n");
config RMI4_F03
bool "RMI4 Function 03 (PS2 Guest)"
- depends on RMI4_CORE && SERIO
+ depends on RMI4_CORE
+ depends on SERIO=y || RMI4_CORE=SERIO
help
Say Y here if you want to add support for RMI4 function 03.
DMI_MATCH(DMI_PRODUCT_VERSION, "Rev 1"),
},
},
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "C15B"),
+ },
+ },
{ }
};
case QUEUE_HEADER_NORMAL:
report_count = ts->buf[FW_HDR_COUNT];
- if (report_count > 3) {
+ if (report_count == 0 || report_count > 3) {
dev_err(&client->dev,
- "too large report count: %*ph\n",
+ "bad report count: %*ph\n",
HEADER_SIZE, ts->buf);
break;
}
int is_new);
struct md_cluster_info;
+/* change UNSUPPORTED_MDDEV_FLAGS for each array type if new flag is added */
enum mddev_flags {
MD_ARRAY_FIRST_USE, /* First use of array, needs initialization */
MD_CLOSING, /* If set, we are closing the array, do not open
{
return mddev->cluster_info && mddev->bitmap_info.nodes > 1;
}
+
+/*
+ * Clear mddev_flags that the array personality does not support; each
+ * personality defines its own UNSUPPORTED_MDDEV_FLAGS mask.
+ */
+static inline void mddev_clear_unsupported_flags(struct mddev *mddev,
+ unsigned long unsupported_flags)
+{
+ mddev->flags &= ~unsupported_flags;
+}
#endif /* _MD_MD_H */
#include "raid0.h"
#include "raid5.h"
+#define UNSUPPORTED_MDDEV_FLAGS \
+ ((1L << MD_HAS_JOURNAL) | \
+ (1L << MD_JOURNAL_CLEAN) | \
+ (1L << MD_FAILFAST_SUPPORTED))
+
static int raid0_congested(struct mddev *mddev, int bits)
{
struct r0conf *conf = mddev->private;
mddev->delta_disks = -1;
/* make sure it will be not marked as dirty */
mddev->recovery_cp = MaxSector;
- clear_bit(MD_HAS_JOURNAL, &mddev->flags);
- clear_bit(MD_JOURNAL_CLEAN, &mddev->flags);
+ mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS);
create_strip_zones(mddev, &priv_conf);
mddev->degraded = 0;
/* make sure it will be not marked as dirty */
mddev->recovery_cp = MaxSector;
- clear_bit(MD_FAILFAST_SUPPORTED, &mddev->flags);
+ mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS);
create_strip_zones(mddev, &priv_conf);
return priv_conf;
mddev->raid_disks = 1;
/* make sure it will be not marked as dirty */
mddev->recovery_cp = MaxSector;
- clear_bit(MD_FAILFAST_SUPPORTED, &mddev->flags);
+ mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS);
create_strip_zones(mddev, &priv_conf);
return priv_conf;
#include "raid1.h"
#include "bitmap.h"
+#define UNSUPPORTED_MDDEV_FLAGS \
+ ((1L << MD_HAS_JOURNAL) | \
+ (1L << MD_JOURNAL_CLEAN))
+
/*
* Number of guaranteed r1bios in case of extreme VM load:
*/
kfree(plug);
}
-static void raid1_make_request(struct mddev *mddev, struct bio * bio)
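+/*
+ * raid1_read_request() - the READ half of the old raid1_make_request():
+ * pick a mirror via read_balance() and issue a (possibly trimmed) clone
+ * of the original bio to it, allocating further r1_bios if bad blocks
+ * force a partial read.
+ */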
+static void raid1_read_request(struct mddev *mddev, struct bio *bio,
+ struct r1bio *r1_bio)
{
struct r1conf *conf = mddev->private;
struct raid1_info *mirror;
- struct r1bio *r1_bio;
struct bio *read_bio;
+ struct bitmap *bitmap = mddev->bitmap;
+ const int op = bio_op(bio);
+ const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
+ int sectors_handled;
+ int max_sectors;
+ int rdisk;
+
+ wait_barrier(conf, bio);
+
+read_again:
+ rdisk = read_balance(conf, r1_bio, &max_sectors);
+
+ if (rdisk < 0) {
+ /* couldn't find anywhere to read from */
+ raid_end_bio_io(r1_bio);
+ return;
+ }
+ mirror = conf->mirrors + rdisk;
+
+ if (test_bit(WriteMostly, &mirror->rdev->flags) &&
+ bitmap) {
+ /*
+ * Reading from a write-mostly device must take care not to
+ * over-take any writes that are 'behind'
+ */
+ raid1_log(mddev, "wait behind writes");
+ wait_event(bitmap->behind_wait,
+ atomic_read(&bitmap->behind_writes) == 0);
+ }
+ r1_bio->read_disk = rdisk;
+ r1_bio->start_next_window = 0;
+
+ read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev);
+ bio_trim(read_bio, r1_bio->sector - bio->bi_iter.bi_sector,
+ max_sectors);
+
+ r1_bio->bios[rdisk] = read_bio;
+
+ read_bio->bi_iter.bi_sector = r1_bio->sector +
+ mirror->rdev->data_offset;
+ read_bio->bi_bdev = mirror->rdev->bdev;
+ read_bio->bi_end_io = raid1_end_read_request;
+ bio_set_op_attrs(read_bio, op, do_sync);
+ if (test_bit(FailFast, &mirror->rdev->flags) &&
+ test_bit(R1BIO_FailFast, &r1_bio->state))
+ read_bio->bi_opf |= MD_FAILFAST;
+ read_bio->bi_private = r1_bio;
+
+ if (mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(read_bio->bi_bdev),
+ read_bio, disk_devt(mddev->gendisk),
+ r1_bio->sector);
+
+ if (max_sectors < r1_bio->sectors) {
+ /*
+ * could not read all from this device, so we will need another
+ * r1_bio.
+ */
+ sectors_handled = (r1_bio->sector + max_sectors
+ - bio->bi_iter.bi_sector);
+ r1_bio->sectors = max_sectors;
+ spin_lock_irq(&conf->device_lock);
+ if (bio->bi_phys_segments == 0)
+ bio->bi_phys_segments = 2;
+ else
+ bio->bi_phys_segments++;
+ spin_unlock_irq(&conf->device_lock);
+
+ /*
+ * Cannot call generic_make_request directly as that will be
+ * queued in __make_request and subsequent mempool_alloc might
+ * block waiting for it. So hand bio over to raid1d.
+ */
+ reschedule_retry(r1_bio);
+
+ r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
+
+ r1_bio->master_bio = bio;
+ r1_bio->sectors = bio_sectors(bio) - sectors_handled;
+ r1_bio->state = 0;
+ r1_bio->mddev = mddev;
+ r1_bio->sector = bio->bi_iter.bi_sector + sectors_handled;
+ goto read_again;
+ } else
+ generic_make_request(read_bio);
+}
+
+static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ struct r1bio *r1_bio)
+{
+ struct r1conf *conf = mddev->private;
int i, disks;
- struct bitmap *bitmap;
+ struct bitmap *bitmap = mddev->bitmap;
unsigned long flags;
const int op = bio_op(bio);
- const int rw = bio_data_dir(bio);
const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
const unsigned long do_flush_fua = (bio->bi_opf &
(REQ_PREFLUSH | REQ_FUA));
md_write_start(mddev, bio); /* wait on superblock update early */
- if (bio_data_dir(bio) == WRITE &&
- ((bio_end_sector(bio) > mddev->suspend_lo &&
+ if ((bio_end_sector(bio) > mddev->suspend_lo &&
bio->bi_iter.bi_sector < mddev->suspend_hi) ||
(mddev_is_clustered(mddev) &&
md_cluster_ops->area_resyncing(mddev, WRITE,
- bio->bi_iter.bi_sector, bio_end_sector(bio))))) {
- /* As the suspend_* range is controlled by
- * userspace, we want an interruptible
- * wait.
+ bio->bi_iter.bi_sector, bio_end_sector(bio)))) {
+
+ /*
+ * As the suspend_* range is controlled by userspace, we want
+ * an interruptible wait.
*/
DEFINE_WAIT(w);
for (;;) {
bio->bi_iter.bi_sector >= mddev->suspend_hi ||
(mddev_is_clustered(mddev) &&
!md_cluster_ops->area_resyncing(mddev, WRITE,
- bio->bi_iter.bi_sector, bio_end_sector(bio))))
+ bio->bi_iter.bi_sector,
+ bio_end_sector(bio))))
break;
schedule();
}
finish_wait(&conf->wait_barrier, &w);
}
-
start_next_window = wait_barrier(conf, bio);
- bitmap = mddev->bitmap;
-
- /*
- * make_request() can abort the operation when read-ahead is being
- * used and no empty request is available.
- *
- */
- r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
-
- r1_bio->master_bio = bio;
- r1_bio->sectors = bio_sectors(bio);
- r1_bio->state = 0;
- r1_bio->mddev = mddev;
- r1_bio->sector = bio->bi_iter.bi_sector;
-
- /* We might need to issue multiple reads to different
- * devices if there are bad blocks around, so we keep
- * track of the number of reads in bio->bi_phys_segments.
- * If this is 0, there is only one r1_bio and no locking
- * will be needed when requests complete. If it is
- * non-zero, then it is the number of not-completed requests.
- */
- bio->bi_phys_segments = 0;
- bio_clear_flag(bio, BIO_SEG_VALID);
-
- if (rw == READ) {
- /*
- * read balancing logic:
- */
- int rdisk;
-
-read_again:
- rdisk = read_balance(conf, r1_bio, &max_sectors);
-
- if (rdisk < 0) {
- /* couldn't find anywhere to read from */
- raid_end_bio_io(r1_bio);
- return;
- }
- mirror = conf->mirrors + rdisk;
-
- if (test_bit(WriteMostly, &mirror->rdev->flags) &&
- bitmap) {
- /* Reading from a write-mostly device must
- * take care not to over-take any writes
- * that are 'behind'
- */
- raid1_log(mddev, "wait behind writes");
- wait_event(bitmap->behind_wait,
- atomic_read(&bitmap->behind_writes) == 0);
- }
- r1_bio->read_disk = rdisk;
- r1_bio->start_next_window = 0;
-
- read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev);
- bio_trim(read_bio, r1_bio->sector - bio->bi_iter.bi_sector,
- max_sectors);
-
- r1_bio->bios[rdisk] = read_bio;
-
- read_bio->bi_iter.bi_sector = r1_bio->sector +
- mirror->rdev->data_offset;
- read_bio->bi_bdev = mirror->rdev->bdev;
- read_bio->bi_end_io = raid1_end_read_request;
- bio_set_op_attrs(read_bio, op, do_sync);
- if (test_bit(FailFast, &mirror->rdev->flags) &&
- test_bit(R1BIO_FailFast, &r1_bio->state))
- read_bio->bi_opf |= MD_FAILFAST;
- read_bio->bi_private = r1_bio;
-
- if (mddev->gendisk)
- trace_block_bio_remap(bdev_get_queue(read_bio->bi_bdev),
- read_bio, disk_devt(mddev->gendisk),
- r1_bio->sector);
-
- if (max_sectors < r1_bio->sectors) {
- /* could not read all from this device, so we will
- * need another r1_bio.
- */
-
- sectors_handled = (r1_bio->sector + max_sectors
- - bio->bi_iter.bi_sector);
- r1_bio->sectors = max_sectors;
- spin_lock_irq(&conf->device_lock);
- if (bio->bi_phys_segments == 0)
- bio->bi_phys_segments = 2;
- else
- bio->bi_phys_segments++;
- spin_unlock_irq(&conf->device_lock);
- /* Cannot call generic_make_request directly
- * as that will be queued in __make_request
- * and subsequent mempool_alloc might block waiting
- * for it. So hand bio over to raid1d.
- */
- reschedule_retry(r1_bio);
-
- r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
-
- r1_bio->master_bio = bio;
- r1_bio->sectors = bio_sectors(bio) - sectors_handled;
- r1_bio->state = 0;
- r1_bio->mddev = mddev;
- r1_bio->sector = bio->bi_iter.bi_sector +
- sectors_handled;
- goto read_again;
- } else
- generic_make_request(read_bio);
- return;
- }
-
- /*
- * WRITE:
- */
if (conf->pending_count >= max_queued_requests) {
md_wakeup_thread(mddev->thread);
raid1_log(mddev, "wait queued");
int bad_sectors;
int is_bad;
- is_bad = is_badblock(rdev, r1_bio->sector,
- max_sectors,
+ is_bad = is_badblock(rdev, r1_bio->sector, max_sectors,
&first_bad, &bad_sectors);
if (is_bad < 0) {
/* mustn't write here until the bad block is
continue;
mbio = bio_clone_mddev(bio, GFP_NOIO, mddev);
- bio_trim(mbio, r1_bio->sector - bio->bi_iter.bi_sector, max_sectors);
+ bio_trim(mbio, r1_bio->sector - bio->bi_iter.bi_sector,
+ max_sectors);
if (first_clone) {
/* do behind I/O ?
wake_up(&conf->wait_barrier);
}
+static void raid1_make_request(struct mddev *mddev, struct bio *bio)
+{
+ struct r1conf *conf = mddev->private;
+ struct r1bio *r1_bio;
+
+ /*
+ * make_request() can abort the operation when read-ahead is being
+	 * used and no empty request is available.
+ */
+ r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
+
+ r1_bio->master_bio = bio;
+ r1_bio->sectors = bio_sectors(bio);
+ r1_bio->state = 0;
+ r1_bio->mddev = mddev;
+ r1_bio->sector = bio->bi_iter.bi_sector;
+
+ /*
+ * We might need to issue multiple reads to different devices if there
+ * are bad blocks around, so we keep track of the number of reads in
+ * bio->bi_phys_segments. If this is 0, there is only one r1_bio and
+ * no locking will be needed when requests complete. If it is
+ * non-zero, then it is the number of not-completed requests.
+ */
+ bio->bi_phys_segments = 0;
+ bio_clear_flag(bio, BIO_SEG_VALID);
+
+ if (bio_data_dir(bio) == READ)
+ raid1_read_request(mddev, bio, r1_bio);
+ else
+ raid1_write_request(mddev, bio, r1_bio);
+}
+
static void raid1_status(struct seq_file *seq, struct mddev *mddev)
{
struct r1conf *conf = mddev->private;
if (!IS_ERR(conf)) {
/* Array must appear to be quiesced */
conf->array_frozen = 1;
- clear_bit(MD_HAS_JOURNAL, &mddev->flags);
- clear_bit(MD_JOURNAL_CLEAN, &mddev->flags);
+ mddev_clear_unsupported_flags(mddev,
+ UNSUPPORTED_MDDEV_FLAGS);
}
return conf;
}
kfree(plug);
}
-static void __make_request(struct mddev *mddev, struct bio *bio)
+static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ struct r10bio *r10_bio)
{
struct r10conf *conf = mddev->private;
- struct r10bio *r10_bio;
struct bio *read_bio;
+ const int op = bio_op(bio);
+ const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
+ int sectors_handled;
+ int max_sectors;
+ sector_t sectors;
+ struct md_rdev *rdev;
+ int slot;
+
+ /*
+ * Register the new request and wait if the reconstruction
+ * thread has put up a bar for new requests.
+ * Continue immediately if no resync is active currently.
+ */
+ wait_barrier(conf);
+
+ sectors = bio_sectors(bio);
+ while (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
+ bio->bi_iter.bi_sector < conf->reshape_progress &&
+ bio->bi_iter.bi_sector + sectors > conf->reshape_progress) {
+ /*
+ * IO spans the reshape position. Need to wait for reshape to
+ * pass
+ */
+ raid10_log(conf->mddev, "wait reshape");
+ allow_barrier(conf);
+ wait_event(conf->wait_barrier,
+ conf->reshape_progress <= bio->bi_iter.bi_sector ||
+ conf->reshape_progress >= bio->bi_iter.bi_sector +
+ sectors);
+ wait_barrier(conf);
+ }
+
+read_again:
+ rdev = read_balance(conf, r10_bio, &max_sectors);
+ if (!rdev) {
+ raid_end_bio_io(r10_bio);
+ return;
+ }
+ slot = r10_bio->read_slot;
+
+ read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev);
+ bio_trim(read_bio, r10_bio->sector - bio->bi_iter.bi_sector,
+ max_sectors);
+
+ r10_bio->devs[slot].bio = read_bio;
+ r10_bio->devs[slot].rdev = rdev;
+
+ read_bio->bi_iter.bi_sector = r10_bio->devs[slot].addr +
+ choose_data_offset(r10_bio, rdev);
+ read_bio->bi_bdev = rdev->bdev;
+ read_bio->bi_end_io = raid10_end_read_request;
+ bio_set_op_attrs(read_bio, op, do_sync);
+ if (test_bit(FailFast, &rdev->flags) &&
+ test_bit(R10BIO_FailFast, &r10_bio->state))
+ read_bio->bi_opf |= MD_FAILFAST;
+ read_bio->bi_private = r10_bio;
+
+ if (mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(read_bio->bi_bdev),
+ read_bio, disk_devt(mddev->gendisk),
+ r10_bio->sector);
+ if (max_sectors < r10_bio->sectors) {
+ /*
+ * Could not read all from this device, so we will need another
+ * r10_bio.
+ */
+ sectors_handled = (r10_bio->sector + max_sectors
+ - bio->bi_iter.bi_sector);
+ r10_bio->sectors = max_sectors;
+ spin_lock_irq(&conf->device_lock);
+ if (bio->bi_phys_segments == 0)
+ bio->bi_phys_segments = 2;
+ else
+ bio->bi_phys_segments++;
+ spin_unlock_irq(&conf->device_lock);
+ /*
+ * Cannot call generic_make_request directly as that will be
+ * queued in __generic_make_request and subsequent
+	 * mempool_alloc might block waiting for it. So hand the bio over
+	 * to raid10d.
+ */
+ reschedule_retry(r10_bio);
+
+ r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);
+
+ r10_bio->master_bio = bio;
+ r10_bio->sectors = bio_sectors(bio) - sectors_handled;
+ r10_bio->state = 0;
+ r10_bio->mddev = mddev;
+ r10_bio->sector = bio->bi_iter.bi_sector + sectors_handled;
+ goto read_again;
+ } else
+ generic_make_request(read_bio);
+ return;
+}
+
+static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ struct r10bio *r10_bio)
+{
+ struct r10conf *conf = mddev->private;
int i;
const int op = bio_op(bio);
- const int rw = bio_data_dir(bio);
const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
const unsigned long do_fua = (bio->bi_opf & REQ_FUA);
unsigned long flags;
struct md_rdev *blocked_rdev;
struct blk_plug_cb *cb;
struct raid10_plug_cb *plug = NULL;
+ sector_t sectors;
int sectors_handled;
int max_sectors;
- int sectors;
md_write_start(mddev, bio);
while (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
bio->bi_iter.bi_sector < conf->reshape_progress &&
bio->bi_iter.bi_sector + sectors > conf->reshape_progress) {
- /* IO spans the reshape position. Need to wait for
- * reshape to pass
+ /*
+ * IO spans the reshape position. Need to wait for reshape to
+ * pass
*/
raid10_log(conf->mddev, "wait reshape");
allow_barrier(conf);
sectors);
wait_barrier(conf);
}
+
if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
- bio_data_dir(bio) == WRITE &&
(mddev->reshape_backwards
? (bio->bi_iter.bi_sector < conf->reshape_safe &&
bio->bi_iter.bi_sector + sectors > conf->reshape_progress)
conf->reshape_safe = mddev->reshape_position;
}
- r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);
-
- r10_bio->master_bio = bio;
- r10_bio->sectors = sectors;
-
- r10_bio->mddev = mddev;
- r10_bio->sector = bio->bi_iter.bi_sector;
- r10_bio->state = 0;
-
- /* We might need to issue multiple reads to different
- * devices if there are bad blocks around, so we keep
- * track of the number of reads in bio->bi_phys_segments.
- * If this is 0, there is only one r10_bio and no locking
- * will be needed when the request completes. If it is
- * non-zero, then it is the number of not-completed requests.
- */
- bio->bi_phys_segments = 0;
- bio_clear_flag(bio, BIO_SEG_VALID);
-
- if (rw == READ) {
- /*
- * read balancing logic:
- */
- struct md_rdev *rdev;
- int slot;
-
-read_again:
- rdev = read_balance(conf, r10_bio, &max_sectors);
- if (!rdev) {
- raid_end_bio_io(r10_bio);
- return;
- }
- slot = r10_bio->read_slot;
-
- read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev);
- bio_trim(read_bio, r10_bio->sector - bio->bi_iter.bi_sector,
- max_sectors);
-
- r10_bio->devs[slot].bio = read_bio;
- r10_bio->devs[slot].rdev = rdev;
-
- read_bio->bi_iter.bi_sector = r10_bio->devs[slot].addr +
- choose_data_offset(r10_bio, rdev);
- read_bio->bi_bdev = rdev->bdev;
- read_bio->bi_end_io = raid10_end_read_request;
- bio_set_op_attrs(read_bio, op, do_sync);
- if (test_bit(FailFast, &rdev->flags) &&
- test_bit(R10BIO_FailFast, &r10_bio->state))
- read_bio->bi_opf |= MD_FAILFAST;
- read_bio->bi_private = r10_bio;
-
- if (mddev->gendisk)
- trace_block_bio_remap(bdev_get_queue(read_bio->bi_bdev),
- read_bio, disk_devt(mddev->gendisk),
- r10_bio->sector);
- if (max_sectors < r10_bio->sectors) {
- /* Could not read all from this device, so we will
- * need another r10_bio.
- */
- sectors_handled = (r10_bio->sector + max_sectors
- - bio->bi_iter.bi_sector);
- r10_bio->sectors = max_sectors;
- spin_lock_irq(&conf->device_lock);
- if (bio->bi_phys_segments == 0)
- bio->bi_phys_segments = 2;
- else
- bio->bi_phys_segments++;
- spin_unlock_irq(&conf->device_lock);
- /* Cannot call generic_make_request directly
- * as that will be queued in __generic_make_request
- * and subsequent mempool_alloc might block
- * waiting for it. so hand bio over to raid10d.
- */
- reschedule_retry(r10_bio);
-
- r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);
-
- r10_bio->master_bio = bio;
- r10_bio->sectors = bio_sectors(bio) - sectors_handled;
- r10_bio->state = 0;
- r10_bio->mddev = mddev;
- r10_bio->sector = bio->bi_iter.bi_sector +
- sectors_handled;
- goto read_again;
- } else
- generic_make_request(read_bio);
- return;
- }
-
- /*
- * WRITE:
- */
if (conf->pending_count >= max_queued_requests) {
md_wakeup_thread(mddev->thread);
raid10_log(mddev, "wait queued");
int bad_sectors;
int is_bad;
- is_bad = is_badblock(rdev, dev_sector,
- max_sectors,
+ is_bad = is_badblock(rdev, dev_sector, max_sectors,
&first_bad, &bad_sectors);
if (is_bad < 0) {
/* Mustn't write here until the bad block
r10_bio->devs[i].bio = mbio;
mbio->bi_iter.bi_sector = (r10_bio->devs[i].addr+
- choose_data_offset(r10_bio,
- rdev));
+ choose_data_offset(r10_bio, rdev));
mbio->bi_bdev = rdev->bdev;
mbio->bi_end_io = raid10_end_write_request;
bio_set_op_attrs(mbio, op, do_sync | do_fua);
r10_bio->devs[i].repl_bio = mbio;
mbio->bi_iter.bi_sector = (r10_bio->devs[i].addr +
- choose_data_offset(
- r10_bio, rdev));
+ choose_data_offset(r10_bio, rdev));
mbio->bi_bdev = rdev->bdev;
mbio->bi_end_io = raid10_end_write_request;
bio_set_op_attrs(mbio, op, do_sync | do_fua);
one_write_done(r10_bio);
}
+static void __make_request(struct mddev *mddev, struct bio *bio)
+{
+ struct r10conf *conf = mddev->private;
+ struct r10bio *r10_bio;
+
+ r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);
+
+ r10_bio->master_bio = bio;
+ r10_bio->sectors = bio_sectors(bio);
+
+ r10_bio->mddev = mddev;
+ r10_bio->sector = bio->bi_iter.bi_sector;
+ r10_bio->state = 0;
+
+ /*
+ * We might need to issue multiple reads to different devices if there
+ * are bad blocks around, so we keep track of the number of reads in
+ * bio->bi_phys_segments. If this is 0, there is only one r10_bio and
+ * no locking will be needed when the request completes. If it is
+ * non-zero, then it is the number of not-completed requests.
+ */
+ bio->bi_phys_segments = 0;
+ bio_clear_flag(bio, BIO_SEG_VALID);
+
+ if (bio_data_dir(bio) == READ)
+ raid10_read_request(mddev, bio, r10_bio);
+ else
+ raid10_write_request(mddev, bio, r10_bio);
+}
+
static void raid10_make_request(struct mddev *mddev, struct bio *bio)
{
struct r10conf *conf = mddev->private;
static struct stripe_head *
r5c_recovery_alloc_stripe(struct r5conf *conf,
- sector_t stripe_sect,
- sector_t log_start)
+ sector_t stripe_sect)
{
struct stripe_head *sh;
return NULL; /* no more stripe available */
r5l_recovery_reset_stripe(sh);
- sh->log_start = log_start;
return sh;
}
stripe_sect);
if (!sh) {
- sh = r5c_recovery_alloc_stripe(conf, stripe_sect, ctx->pos);
+ sh = r5c_recovery_alloc_stripe(conf, stripe_sect);
/*
* cannot get stripe from raid5_get_active_stripe
* try replay some stripes
r5c_recovery_replay_stripes(
cached_stripe_list, ctx);
sh = r5c_recovery_alloc_stripe(
- conf, stripe_sect, ctx->pos);
+ conf, stripe_sect);
}
if (!sh) {
pr_debug("md/raid:%s: Increasing stripe cache size to %d to recovery data on journal.\n",
conf->min_nr_stripes * 2);
raid5_set_cache_size(mddev,
conf->min_nr_stripes * 2);
- sh = r5c_recovery_alloc_stripe(
- conf, stripe_sect, ctx->pos);
+ sh = r5c_recovery_alloc_stripe(conf,
+ stripe_sect);
}
if (!sh) {
pr_err("md/raid:%s: Cannot get enough stripes due to memory pressure. Recovery failed.\n",
if (!test_bit(STRIPE_R5C_CACHING, &sh->state) &&
test_bit(R5_Wantwrite, &sh->dev[sh->pd_idx].flags)) {
r5l_recovery_replay_one_stripe(conf, sh, ctx);
- sh->log_start = ctx->pos;
list_move_tail(&sh->lru, cached_stripe_list);
}
r5l_recovery_load_data(log, sh, ctx, payload,
set_bit(R5_UPTODATE, &dev->flags);
}
}
- list_add_tail(&sh->r5c, &log->stripe_in_journal_list);
- atomic_inc(&log->stripe_in_journal_count);
}
/*
struct stripe_head *sh, *next;
struct mddev *mddev = log->rdev->mddev;
struct page *page;
+ sector_t next_checkpoint = MaxSector;
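+	/* log_start of the last stripe rewritten below becomes the next checkpoint */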
page = alloc_page(GFP_KERNEL);
if (!page) {
return -ENOMEM;
}
+ WARN_ON(list_empty(&ctx->cached_list));
+
list_for_each_entry_safe(sh, next, &ctx->cached_list, lru) {
struct r5l_meta_block *mb;
int i;
sync_page_io(log->rdev, ctx->pos, PAGE_SIZE, page,
REQ_OP_WRITE, REQ_FUA, false);
sh->log_start = ctx->pos;
+ list_add_tail(&sh->r5c, &log->stripe_in_journal_list);
+ atomic_inc(&log->stripe_in_journal_count);
ctx->pos = write_pos;
ctx->seq += 1;
-
+ next_checkpoint = sh->log_start;
list_del_init(&sh->lru);
raid5_release_stripe(sh);
}
+ log->next_checkpoint = next_checkpoint;
__free_page(page);
return 0;
}
struct r5l_recovery_ctx ctx;
int ret;
sector_t pos;
- struct stripe_head *sh;
ctx.pos = log->last_checkpoint;
ctx.seq = log->last_cp_seq;
log->next_checkpoint = ctx.pos;
r5l_log_write_empty_meta_block(log, ctx.pos, ctx.seq++);
ctx.pos = r5l_ring_add(log, ctx.pos, BLOCK_SECTORS);
- } else {
- sh = list_last_entry(&ctx.cached_list, struct stripe_head, lru);
- log->next_checkpoint = sh->log_start;
}
if ((ctx.data_only_stripes == 0) && (ctx.data_parity_stripes == 0))
pr_debug("md/raid:%s: starting from clean shutdown\n",
mdname(mddev));
else {
- pr_debug("md/raid:%s: recoverying %d data-only stripes and %d data-parity stripes\n",
+ pr_debug("md/raid:%s: recovering %d data-only stripes and %d data-parity stripes\n",
mdname(mddev), ctx.data_only_stripes,
ctx.data_parity_stripes);
if (do_wakeup)
wake_up(&conf->wait_for_overlap);
- if (conf->log->r5c_journal_mode == R5C_JOURNAL_MODE_WRITE_THROUGH)
- return;
-
spin_lock_irq(&conf->log->stripe_in_journal_lock);
list_del_init(&sh->r5c);
spin_unlock_irq(&conf->log->stripe_in_journal_lock);
spin_lock_init(&log->stripe_in_journal_lock);
atomic_set(&log->stripe_in_journal_count, 0);
+ rcu_assign_pointer(conf->log, log);
+
if (r5l_load_log(log))
goto error;
- rcu_assign_pointer(conf->log, log);
set_bit(MD_HAS_JOURNAL, &conf->mddev->flags);
return 0;
error:
+ rcu_assign_pointer(conf->log, NULL);
md_unregister_thread(&log->reclaim_thread);
reclaim_thread:
mempool_destroy(log->meta_pool);
#include "raid0.h"
#include "bitmap.h"
+#define UNSUPPORTED_MDDEV_FLAGS (1L << MD_FAILFAST_SUPPORTED)
+
#define cpu_to_group(cpu) cpu_to_node(cpu)
#define ANY_GROUP NUMA_NO_NODE
mddev->new_chunk_sectors = chunksect;
ret = setup_conf(mddev);
- if (!IS_ERR_VALUE(ret))
- clear_bit(MD_FAILFAST_SUPPORTED, &mddev->flags);
+ if (!IS_ERR(ret))
+ mddev_clear_unsupported_flags(mddev,
+ UNSUPPORTED_MDDEV_FLAGS);
return ret;
}
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
+#include <linux/platform_data/ti-aemif.h>
#define TA_SHIFT 2
#define RHOLD_SHIFT 4
struct device_node *np = dev->of_node;
struct device_node *child_np;
struct aemif_device *aemif;
+ struct aemif_platform_data *pdata;
+ struct of_dev_auxdata *dev_lookup;
if (np == NULL)
return 0;
if (!aemif)
return -ENOMEM;
+ pdata = dev_get_platdata(&pdev->dev);
+ dev_lookup = pdata ? pdata->dev_lookup : NULL;
+
platform_set_drvdata(pdev, aemif);
aemif->clk = devm_clk_get(dev, NULL);
* parameters are set.
*/
for_each_available_child_of_node(np, child_np) {
- ret = of_platform_populate(child_np, NULL, NULL, dev);
+ ret = of_platform_populate(child_np, NULL, dev_lookup, dev);
if (ret < 0)
goto error;
}
If unsure, say N.
+config EEPROM_IDT_89HPESX
+	tristate "IDT 89HPESx PCIe-switches EEPROM / CSR support"
+ depends on I2C && SYSFS
+ help
+ Enable this driver to get read/write access to EEPROM / CSRs
+	  over the IDT PCIe-switch i2c-slave interface.
+
+ This driver can also be built as a module. If so, the module
+ will be called idt_89hpesx.
+
endmenu
obj-$(CONFIG_EEPROM_93CX6) += eeprom_93cx6.o
obj-$(CONFIG_EEPROM_93XX46) += eeprom_93xx46.o
obj-$(CONFIG_EEPROM_DIGSY_MTC_CFG) += digsy_mtc_eeprom.o
+obj-$(CONFIG_EEPROM_IDT_89HPESX) += idt_89hpesx.o
--- /dev/null
+/*
+ * This file is provided under a GPLv2 license. When using or
+ * redistributing this file, you may do so under that license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, it can be found <http://www.gnu.org/licenses/>.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * IDT PCIe-switch NTB Linux driver
+ *
+ * Contact Information:
+ * Serge Semin <fancer.lancer@gmail.com>, <Sergey.Semin@t-platforms.ru>
+ */
+/*
+ * NOTE on the IDT 89HPESx SMBus-slave interface driver
+ * This driver is primarily developed to provide access to the EEPROM device
+ * of IDT PCIe-switches. IDT provides a simple SMBus interface to perform
+ * IO operations from/to EEPROM, which is located on the private (so-called
+ * Master) SMBus of the switches. Using that interface the driver creates a
+ * simple binary sysfs-file in the device directory:
+ * /sys/bus/i2c/devices/<bus>-<devaddr>/eeprom
+ * If the read-only flag is specified in the dts-node of the device
+ * description, user-space applications won't be able to write to the
+ * EEPROM sysfs-node.
+ * Additionally the IDT 89HPESx SMBus interface has the ability to read/write
+ * data of the device CSRs. This driver exposes a debugfs-file to perform
+ * simple IO operations using that ability for basic debug purposes. In
+ * particular, the following file is created in the dedicated
+ * debugfs-directory:
+ * /sys/kernel/debug/idt_csr/
+ * The format of the debugfs-node is:
+ * $ cat /sys/kernel/debug/idt_csr/<bus>-<devaddr>/<devname>;
+ * <CSR address>:<CSR value>
+ * So reading the content of the file gives the current CSR address and its
+ * value. If a user-space application wishes to change the current CSR
+ * address, it can just write a proper value to the sysfs-file:
+ * $ echo "<CSR address>" > /sys/kernel/debug/idt_csr/<bus>-<devaddr>/<devname>
+ * If it wants to change the CSR value as well, the format of the write
+ * operation is:
+ * $ echo "<CSR address>:<CSR value>" > \
+ *        /sys/kernel/debug/idt_csr/<bus>-<devaddr>/<devname>;
+ * The CSR address and value can be in hexadecimal, decimal or octal format.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/sysfs.h>
+#include <linux/debugfs.h>
+#include <linux/mod_devicetable.h>
+#include <linux/of.h>
+#include <linux/i2c.h>
+#include <linux/pci_ids.h>
+#include <linux/delay.h>
+
+#define IDT_NAME "89hpesx"
+#define IDT_89HPESX_DESC "IDT 89HPESx SMBus-slave interface driver"
+#define IDT_89HPESX_VER "1.0"
+
+MODULE_DESCRIPTION(IDT_89HPESX_DESC);
+MODULE_VERSION(IDT_89HPESX_VER);
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("T-platforms");
+
+/*
+ * csr_dbgdir - CSR read/write operations Debugfs directory
+ */
+static struct dentry *csr_dbgdir;
+
+/*
+ * struct idt_89hpesx_dev - IDT 89HPESx device data structure
+ * @eesize: Size of EEPROM in bytes (calculated from "idt,eecompatible")
+ * @eero: EEPROM Read-only flag
+ * @eeaddr: EEPROM custom address
+ *
+ * @inieecmd: Initial cmd value for EEPROM read/write operations
+ * @inicsrcmd: Initial cmd value for CSR read/write operations
+ * @iniccode:	Initial command code value for IO-operations
+ *
+ * @csr: CSR address to perform read operation
+ *
+ * @smb_write: SMBus write method
+ * @smb_read: SMBus read method
+ * @smb_mtx: SMBus mutex
+ *
+ * @client: i2c client used to perform IO operations
+ *
+ * @ee_file: EEPROM read/write sysfs-file
+ * @csr_dir:	CSR debugfs-directory of the device
+ * @csr_file:	CSR read/write debugfs-node
+ */
+struct idt_smb_seq;
+struct idt_89hpesx_dev {
+ u32 eesize;
+ bool eero;
+ u8 eeaddr;
+
+ u8 inieecmd;
+ u8 inicsrcmd;
+ u8 iniccode;
+
+ u16 csr;
+
+ int (*smb_write)(struct idt_89hpesx_dev *, const struct idt_smb_seq *);
+ int (*smb_read)(struct idt_89hpesx_dev *, struct idt_smb_seq *);
+ struct mutex smb_mtx;
+
+ struct i2c_client *client;
+
+ struct bin_attribute *ee_file;
+ struct dentry *csr_dir;
+ struct dentry *csr_file;
+};
+
+/*
+ * struct idt_smb_seq - sequence of data to be read/written from/to IDT 89HPESx
+ * @ccode: SMBus command code
+ * @bytecnt: Byte count of operation
+ * @data:	Data to be written
+ */
+struct idt_smb_seq {
+ u8 ccode;
+ u8 bytecnt;
+ u8 *data;
+};
+
+/*
+ * struct idt_eeprom_seq - sequence of data to be read/written from/to EEPROM
+ * @cmd: Transaction CMD
+ * @eeaddr: EEPROM custom address
+ * @memaddr: Internal memory address of EEPROM
+ * @data: Data to be written at the memory address
+ */
+struct idt_eeprom_seq {
+ u8 cmd;
+ u8 eeaddr;
+ u16 memaddr;
+ u8 data;
+} __packed;
+
+/*
+ * struct idt_csr_seq - sequence of data to be read/written from/to CSR
+ * @cmd: Transaction CMD
+ * @csraddr: Internal IDT device CSR address
+ * @data: Data to be read/written from/to the CSR address
+ */
+struct idt_csr_seq {
+ u8 cmd;
+ u16 csraddr;
+ u32 data;
+} __packed;
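+
+/*
+ * Note: idt_eeprom_seq and idt_csr_seq are __packed because they are
+ * transferred as raw byte sequences over SMBus, so no compiler padding
+ * can be tolerated in their layout.
+ */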
+
+/*
+ * SMBus command code macros
+ * @CCODE_END: Indicates the end of transaction
+ * @CCODE_START: Indicates the start of transaction
+ * @CCODE_CSR: CSR read/write transaction
+ * @CCODE_EEPROM: EEPROM read/write transaction
+ * @CCODE_BYTE: Supplied data has BYTE length
+ * @CCODE_WORD: Supplied data has WORD length
+ * @CCODE_BLOCK:	Supplied data has variable length, passed in the bytecnt
+ *			byte immediately following the CCODE byte
+ */
+#define CCODE_END ((u8)0x01)
+#define CCODE_START ((u8)0x02)
+#define CCODE_CSR ((u8)0x00)
+#define CCODE_EEPROM ((u8)0x04)
+#define CCODE_BYTE ((u8)0x00)
+#define CCODE_WORD ((u8)0x20)
+#define CCODE_BLOCK ((u8)0x40)
+#define CCODE_PEC ((u8)0x80)
+
+/*
+ * EEPROM command macros
+ * @EEPROM_OP_WRITE: EEPROM write operation
+ * @EEPROM_OP_READ: EEPROM read operation
+ * @EEPROM_USA: Use specified address of EEPROM
+ * @EEPROM_NAERR: EEPROM device is not ready to respond
+ * @EEPROM_LAERR: EEPROM arbitration loss error
+ * @EEPROM_MSS: EEPROM misplace start & stop bits error
+ * @EEPROM_WR_CNT: Bytes count to perform write operation
+ * @EEPROM_WRRD_CNT: Bytes count to write before reading
+ * @EEPROM_RD_CNT: Bytes count to perform read operation
+ * @EEPROM_DEF_SIZE:	Fallback size of EEPROM
+ * @EEPROM_DEF_ADDR:	Default EEPROM address
+ * @EEPROM_TOUT:	Timeout before retrying a read operation if EEPROM is busy
+ */
+#define EEPROM_OP_WRITE ((u8)0x00)
+#define EEPROM_OP_READ ((u8)0x01)
+#define EEPROM_USA ((u8)0x02)
+#define EEPROM_NAERR ((u8)0x08)
+#define EEPROM_LAERR ((u8)0x10)
+#define EEPROM_MSS ((u8)0x20)
+#define EEPROM_WR_CNT ((u8)5)
+#define EEPROM_WRRD_CNT ((u8)4)
+#define EEPROM_RD_CNT ((u8)5)
+#define EEPROM_DEF_SIZE ((u16)4096)
+#define EEPROM_DEF_ADDR ((u8)0x50)
+#define EEPROM_TOUT (100)
+
+/*
+ * CSR command macros
+ * @CSR_DWE: Enable all four bytes of the operation
+ * @CSR_OP_WRITE: CSR write operation
+ * @CSR_OP_READ: CSR read operation
+ * @CSR_RERR: Read operation error
+ * @CSR_WERR: Write operation error
+ * @CSR_WR_CNT: Bytes count to perform write operation
+ * @CSR_WRRD_CNT: Bytes count to write before reading
+ * @CSR_RD_CNT: Bytes count to perform read operation
+ * @CSR_MAX: Maximum CSR address
+ * @CSR_DEF: Default CSR address
+ * @CSR_REAL_ADDR: CSR real unshifted address
+ */
+#define CSR_DWE ((u8)0x0F)
+#define CSR_OP_WRITE ((u8)0x00)
+#define CSR_OP_READ ((u8)0x10)
+#define CSR_RERR ((u8)0x40)
+#define CSR_WERR ((u8)0x80)
+#define CSR_WR_CNT ((u8)7)
+#define CSR_WRRD_CNT ((u8)3)
+#define CSR_RD_CNT ((u8)7)
+#define CSR_MAX ((u32)0x3FFFF)
+#define CSR_DEF ((u16)0x0000)
+#define CSR_REAL_ADDR(val) ((unsigned int)val << 2)
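+/* CSR addresses are dword-granular: CSR_REAL_ADDR(0x1) yields byte address 0x4 */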
+
+/*
+ * IDT 89HPESx basic register
+ * @IDT_VIDDID_CSR: PCIe VID and DID of IDT 89HPESx
+ * @IDT_VID_MASK: Mask of VID
+ */
+#define IDT_VIDDID_CSR ((u32)0x0000)
+#define IDT_VID_MASK ((u32)0xFFFF)
+
+/*
+ * The IDT 89HPESx can send a NACK when a new command is sent before the
+ * previous one has finished executing. In this case the driver retries
+ * the operation a number of times.
+ * @RETRY_CNT:		Number of retries before giving up and failing
+ * @idt_smb_safe:	Generate a retry loop around the corresponding SMBus method
+ */
+#define RETRY_CNT (128)
+#define idt_smb_safe(ops, args...) ({ \
+ int __retry = RETRY_CNT; \
+ s32 __sts; \
+ do { \
+ __sts = i2c_smbus_ ## ops ## _data(args); \
+ } while (__retry-- && __sts < 0); \
+ __sts; \
+})
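+
+/*
+ * For example, a single retried byte read is just
+ *
+ *	sts = idt_smb_safe(read_byte, pdev->client, ccode);
+ *
+ * which keeps calling i2c_smbus_read_byte_data() until it either
+ * succeeds or the RETRY_CNT retry budget is exhausted.
+ */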
+
+/*===========================================================================
+ * i2c bus level IO-operations
+ *===========================================================================
+ */
+
+/*
+ * idt_smb_write_byte() - SMBus write method when I2C_SMBUS_BYTE_DATA operation
+ * is only available
+ * @pdev: Pointer to the driver data
+ * @seq: Sequence of data to be written
+ */
+static int idt_smb_write_byte(struct idt_89hpesx_dev *pdev,
+ const struct idt_smb_seq *seq)
+{
+ s32 sts;
+ u8 ccode;
+ int idx;
+
+ /* Loop over the supplied data sending byte one-by-one */
+ for (idx = 0; idx < seq->bytecnt; idx++) {
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BYTE;
+ if (idx == 0)
+ ccode |= CCODE_START;
+ if (idx == seq->bytecnt - 1)
+ ccode |= CCODE_END;
+
+ /* Send data to the device */
+ sts = idt_smb_safe(write_byte, pdev->client, ccode,
+ seq->data[idx]);
+ if (sts != 0)
+ return (int)sts;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_smb_read_byte() - SMBus read method when I2C_SMBUS_BYTE_DATA operation
+ * is only available
+ * @pdev: Pointer to the driver data
+ * @seq: Buffer to read data to
+ */
+static int idt_smb_read_byte(struct idt_89hpesx_dev *pdev,
+ struct idt_smb_seq *seq)
+{
+ s32 sts;
+ u8 ccode;
+ int idx;
+
+ /* Loop over the supplied buffer receiving byte one-by-one */
+ for (idx = 0; idx < seq->bytecnt; idx++) {
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BYTE;
+ if (idx == 0)
+ ccode |= CCODE_START;
+ if (idx == seq->bytecnt - 1)
+ ccode |= CCODE_END;
+
+ /* Read data from the device */
+ sts = idt_smb_safe(read_byte, pdev->client, ccode);
+ if (sts < 0)
+ return (int)sts;
+
+ seq->data[idx] = (u8)sts;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_smb_write_word() - SMBus write method when I2C_SMBUS_BYTE_DATA and
+ * I2C_FUNC_SMBUS_WORD_DATA operations are available
+ * @pdev: Pointer to the driver data
+ * @seq: Sequence of data to be written
+ */
+static int idt_smb_write_word(struct idt_89hpesx_dev *pdev,
+ const struct idt_smb_seq *seq)
+{
+ s32 sts;
+ u8 ccode;
+ int idx, evencnt;
+
+ /* Calculate the even count of data to send */
+ evencnt = seq->bytecnt - (seq->bytecnt % 2);
+
+ /* Loop over the supplied data sending two bytes at a time */
+ for (idx = 0; idx < evencnt; idx += 2) {
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_WORD;
+ if (idx == 0)
+ ccode |= CCODE_START;
+ if (idx == evencnt - 2)
+ ccode |= CCODE_END;
+
+ /* Send word data to the device */
+ sts = idt_smb_safe(write_word, pdev->client, ccode,
+ *(u16 *)&seq->data[idx]);
+ if (sts != 0)
+ return (int)sts;
+ }
+
+ /* If there is an odd number of bytes then send just the last byte */
+ if (seq->bytecnt != evencnt) {
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BYTE | CCODE_END;
+ if (idx == 0)
+ ccode |= CCODE_START;
+
+ /* Send byte data to the device */
+ sts = idt_smb_safe(write_byte, pdev->client, ccode,
+ seq->data[idx]);
+ if (sts != 0)
+ return (int)sts;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_smb_read_word() - SMBus read method used when the I2C_SMBUS_BYTE_DATA
+ * and I2C_FUNC_SMBUS_WORD_DATA operations are available
+ * @pdev: Pointer to the driver data
+ * @seq: Buffer to read data to
+ */
+static int idt_smb_read_word(struct idt_89hpesx_dev *pdev,
+ struct idt_smb_seq *seq)
+{
+ s32 sts;
+ u8 ccode;
+ int idx, evencnt;
+
+ /* Calculate the even count of data to receive */
+ evencnt = seq->bytecnt - (seq->bytecnt % 2);
+
+ /* Loop over the supplied data reading two bytes at a time */
+ for (idx = 0; idx < evencnt; idx += 2) {
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_WORD;
+ if (idx == 0)
+ ccode |= CCODE_START;
+ if (idx == evencnt - 2)
+ ccode |= CCODE_END;
+
+ /* Read word data from the device */
+ sts = idt_smb_safe(read_word, pdev->client, ccode);
+ if (sts < 0)
+ return (int)sts;
+
+ *(u16 *)&seq->data[idx] = (u16)sts;
+ }
+
+ /* If there is an odd number of bytes then receive just the last byte */
+ if (seq->bytecnt != evencnt) {
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BYTE | CCODE_END;
+ if (idx == 0)
+ ccode |= CCODE_START;
+
+ /* Read last data byte from the device */
+ sts = idt_smb_safe(read_byte, pdev->client, ccode);
+ if (sts < 0)
+ return (int)sts;
+
+ seq->data[idx] = (u8)sts;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_smb_write_block() - SMBus write method when I2C_SMBUS_BLOCK_DATA
+ * operation is available
+ * @pdev: Pointer to the driver data
+ * @seq: Sequence of data to be written
+ */
+static int idt_smb_write_block(struct idt_89hpesx_dev *pdev,
+ const struct idt_smb_seq *seq)
+{
+ u8 ccode;
+
+ /* Return an error if too much data is passed to send */
+ if (seq->bytecnt > I2C_SMBUS_BLOCK_MAX)
+ return -EINVAL;
+
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BLOCK | CCODE_START | CCODE_END;
+
+ /* Send block of data to the device */
+ return idt_smb_safe(write_block, pdev->client, ccode, seq->bytecnt,
+ seq->data);
+}
+
+/*
+ * idt_smb_read_block() - SMBus read method when I2C_SMBUS_BLOCK_DATA
+ * operation is available
+ * @pdev: Pointer to the driver data
+ * @seq: Buffer to read data to
+ */
+static int idt_smb_read_block(struct idt_89hpesx_dev *pdev,
+ struct idt_smb_seq *seq)
+{
+ s32 sts;
+ u8 ccode;
+
+ /* Return an error if too much data is requested to be read */
+ if (seq->bytecnt > I2C_SMBUS_BLOCK_MAX)
+ return -EINVAL;
+
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BLOCK | CCODE_START | CCODE_END;
+
+ /* Read block of data from the device */
+ sts = idt_smb_safe(read_block, pdev->client, ccode, seq->data);
+ if (sts != seq->bytecnt)
+ return (sts < 0 ? sts : -ENODATA);
+
+ return 0;
+}
+
+/*
+ * idt_smb_write_i2c_block() - SMBus write method when I2C_SMBUS_I2C_BLOCK_DATA
+ * operation is available
+ * @pdev: Pointer to the driver data
+ * @seq: Sequence of data to be written
+ *
+ * NOTE It's the usual SMBus block write operation, except that the actual
+ * data length is sent as the first byte of data
+ */
+static int idt_smb_write_i2c_block(struct idt_89hpesx_dev *pdev,
+ const struct idt_smb_seq *seq)
+{
+ u8 ccode, buf[I2C_SMBUS_BLOCK_MAX + 1];
+
+ /* Return an error if too much data is passed to send */
+ if (seq->bytecnt > I2C_SMBUS_BLOCK_MAX)
+ return -EINVAL;
+
+ /* Collect the data to send. The length byte must be added prior to the data */
+ buf[0] = seq->bytecnt;
+ memcpy(&buf[1], seq->data, seq->bytecnt);
+
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BLOCK | CCODE_START | CCODE_END;
+
+ /* Send length and block of data to the device */
+ return idt_smb_safe(write_i2c_block, pdev->client, ccode,
+ seq->bytecnt + 1, buf);
+}
+
+/*
+ * idt_smb_read_i2c_block() - SMBus read method when I2C_SMBUS_I2C_BLOCK_DATA
+ * operation is available
+ * @pdev: Pointer to the driver data
+ * @seq: Buffer to read data to
+ *
+ * NOTE It's the usual SMBus block read operation, except that the actual
+ * data length is retrieved as the first byte of data
+ */
+static int idt_smb_read_i2c_block(struct idt_89hpesx_dev *pdev,
+ struct idt_smb_seq *seq)
+{
+ u8 ccode, buf[I2C_SMBUS_BLOCK_MAX + 1];
+ s32 sts;
+
+ /* Return an error if too much data is requested to be read */
+ if (seq->bytecnt > I2C_SMBUS_BLOCK_MAX)
+ return -EINVAL;
+
+ /* Collect the command code byte */
+ ccode = seq->ccode | CCODE_BLOCK | CCODE_START | CCODE_END;
+
+ /* Read length and block of data from the device */
+ sts = idt_smb_safe(read_i2c_block, pdev->client, ccode,
+ seq->bytecnt + 1, buf);
+ if (sts != seq->bytecnt + 1)
+ return (sts < 0 ? sts : -ENODATA);
+ if (buf[0] != seq->bytecnt)
+ return -ENODATA;
+
+ /* Copy retrieved data to the output data buffer */
+ memcpy(seq->data, &buf[1], seq->bytecnt);
+
+ return 0;
+}
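To make the framing difference between the two block methods concrete, a sketch with a hypothetical four-byte payload: the SMBus block write lets the adapter transfer the length byte itself, while the I2C block variant carries it explicitly as buf[0].

	u8 payload[4] = { 0x01, 0x02, 0x03, 0x04 };	/* hypothetical data */
	/* idt_smb_write_block():     bytecnt = 4, wire data: 01 02 03 04  */
	/* idt_smb_write_i2c_block(): bytecnt + 1 = 5, buf: 04 01 02 03 04 */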
+
+/*===========================================================================
+ * EEPROM IO-operations
+ *===========================================================================
+ */
+
+/*
+ * idt_eeprom_read_byte() - read just one byte from EEPROM
+ * @pdev: Pointer to the driver data
+ * @memaddr: Start EEPROM memory address
+ * @data: Pointer to the byte read from EEPROM
+ */
+static int idt_eeprom_read_byte(struct idt_89hpesx_dev *pdev, u16 memaddr,
+ u8 *data)
+{
+ struct device *dev = &pdev->client->dev;
+ struct idt_eeprom_seq eeseq;
+ struct idt_smb_seq smbseq;
+ int ret, retry;
+
+ /* Initialize SMBus sequence fields */
+ smbseq.ccode = pdev->iniccode | CCODE_EEPROM;
+ smbseq.data = (u8 *)&eeseq;
+
+ /*
+ * Sometimes the EEPROM may respond with NACK if it's busy with a previous
+ * operation, so we need to retry the read cycle a few times
+ */
+ retry = RETRY_CNT;
+ do {
+ /* Send EEPROM memory address to read data from */
+ smbseq.bytecnt = EEPROM_WRRD_CNT;
+ eeseq.cmd = pdev->inieecmd | EEPROM_OP_READ;
+ eeseq.eeaddr = pdev->eeaddr;
+ eeseq.memaddr = cpu_to_le16(memaddr);
+ ret = pdev->smb_write(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev, "Failed to init eeprom addr 0x%02hhx",
+ memaddr);
+ break;
+ }
+
+ /* Perform read operation */
+ smbseq.bytecnt = EEPROM_RD_CNT;
+ ret = pdev->smb_read(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev, "Failed to read eeprom data 0x%02hhx",
+ memaddr);
+ break;
+ }
+
+ /* Restart read operation if the device is busy */
+ if (retry && (eeseq.cmd & EEPROM_NAERR)) {
+ dev_dbg(dev, "EEPROM busy, retry reading after %d ms",
+ EEPROM_TOUT);
+ msleep(EEPROM_TOUT);
+ continue;
+ }
+
+ /* Check whether IDT successfully read data from EEPROM */
+ if (eeseq.cmd & (EEPROM_NAERR | EEPROM_LAERR | EEPROM_MSS)) {
+ dev_err(dev,
+ "Communication with eeprom failed, cmd 0x%hhx",
+ eeseq.cmd);
+ ret = -EREMOTEIO;
+ break;
+ }
+
+ /* Save retrieved data and exit the loop */
+ *data = eeseq.data;
+ break;
+ } while (retry--);
+
+ /* Return the status of operation */
+ return ret;
+}
+
+/*
+ * idt_eeprom_write() - EEPROM write operation
+ * @pdev: Pointer to the driver data
+ * @memaddr: Start EEPROM memory address
+ * @len: Length of data to be written
+ * @data: Data to be written to EEPROM
+ */
+static int idt_eeprom_write(struct idt_89hpesx_dev *pdev, u16 memaddr, u16 len,
+ const u8 *data)
+{
+ struct device *dev = &pdev->client->dev;
+ struct idt_eeprom_seq eeseq;
+ struct idt_smb_seq smbseq;
+ int ret;
+ u16 idx;
+
+ /* Initialize SMBus sequence fields */
+ smbseq.ccode = pdev->iniccode | CCODE_EEPROM;
+ smbseq.data = (u8 *)&eeseq;
+
+ /* Send data byte-by-byte, checking if it is successfully written */
+ for (idx = 0; idx < len; idx++, memaddr++) {
+ /* Lock IDT SMBus device */
+ mutex_lock(&pdev->smb_mtx);
+
+ /* Perform write operation */
+ smbseq.bytecnt = EEPROM_WR_CNT;
+ eeseq.cmd = pdev->inieecmd | EEPROM_OP_WRITE;
+ eeseq.eeaddr = pdev->eeaddr;
+ eeseq.memaddr = cpu_to_le16(memaddr);
+ eeseq.data = data[idx];
+ ret = pdev->smb_write(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev,
+ "Failed to write 0x%04hx:0x%02hhx to eeprom",
+ memaddr, data[idx]);
+ goto err_mutex_unlock;
+ }
+
+ /*
+ * Check whether the data is successfully written by reading
+ * from the same EEPROM memory address.
+ */
+ eeseq.data = ~data[idx];
+ ret = idt_eeprom_read_byte(pdev, memaddr, &eeseq.data);
+ if (ret != 0)
+ goto err_mutex_unlock;
+
+ /* Check whether the read byte is the same as written one */
+ if (eeseq.data != data[idx]) {
+ dev_err(dev, "Values don't match 0x%02hhx != 0x%02hhx",
+ eeseq.data, data[idx]);
+ ret = -EREMOTEIO;
+ goto err_mutex_unlock;
+ }
+
+ /* Unlock IDT SMBus device */
+err_mutex_unlock:
+ mutex_unlock(&pdev->smb_mtx);
+ if (ret != 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_eeprom_read() - EEPROM read operation
+ * @pdev: Pointer to the driver data
+ * @memaddr: Start EEPROM memory address
+ * @len: Length of data to read
+ * @buf: Buffer to read data to
+ */
+static int idt_eeprom_read(struct idt_89hpesx_dev *pdev, u16 memaddr, u16 len,
+ u8 *buf)
+{
+ int ret;
+ u16 idx;
+
+ /* Read data byte-by-byte; retries are handled inside idt_eeprom_read_byte() */
+ for (idx = 0; idx < len; idx++, memaddr++) {
+ /* Lock IDT SMBus device */
+ mutex_lock(&pdev->smb_mtx);
+
+ /* Just read the byte to the buffer */
+ ret = idt_eeprom_read_byte(pdev, memaddr, &buf[idx]);
+
+ /* Unlock IDT SMBus device */
+ mutex_unlock(&pdev->smb_mtx);
+
+ /* Return error if read operation failed */
+ if (ret != 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+/*===========================================================================
+ * CSR IO-operations
+ *===========================================================================
+ */
+
+/*
+ * idt_csr_write() - CSR write operation
+ * @pdev: Pointer to the driver data
+ * @csraddr: CSR address (without the two LS bits)
+ * @data: Data to be written to CSR
+ */
+static int idt_csr_write(struct idt_89hpesx_dev *pdev, u16 csraddr,
+ const u32 data)
+{
+ struct device *dev = &pdev->client->dev;
+ struct idt_csr_seq csrseq;
+ struct idt_smb_seq smbseq;
+ int ret;
+
+ /* Initialize SMBus sequence fields */
+ smbseq.ccode = pdev->iniccode | CCODE_CSR;
+ smbseq.data = (u8 *)&csrseq;
+
+ /* Lock IDT SMBus device */
+ mutex_lock(&pdev->smb_mtx);
+
+ /* Perform write operation */
+ smbseq.bytecnt = CSR_WR_CNT;
+ csrseq.cmd = pdev->inicsrcmd | CSR_OP_WRITE;
+ csrseq.csraddr = cpu_to_le16(csraddr);
+ csrseq.data = cpu_to_le32(data);
+ ret = pdev->smb_write(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev, "Failed to write 0x%04x: 0x%04x to csr",
+ CSR_REAL_ADDR(csraddr), data);
+ goto err_mutex_unlock;
+ }
+
+ /* Send CSR address to read data from */
+ smbseq.bytecnt = CSR_WRRD_CNT;
+ csrseq.cmd = pdev->inicsrcmd | CSR_OP_READ;
+ ret = pdev->smb_write(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev, "Failed to init csr address 0x%04x",
+ CSR_REAL_ADDR(csraddr));
+ goto err_mutex_unlock;
+ }
+
+ /* Perform read operation */
+ smbseq.bytecnt = CSR_RD_CNT;
+ ret = pdev->smb_read(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev, "Failed to read csr 0x%04x",
+ CSR_REAL_ADDR(csraddr));
+ goto err_mutex_unlock;
+ }
+
+ /* Check whether IDT successfully retrieved CSR data */
+ if (csrseq.cmd & (CSR_RERR | CSR_WERR)) {
+ dev_err(dev, "IDT failed to perform CSR r/w");
+ ret = -EREMOTEIO;
+ goto err_mutex_unlock;
+ }
+
+ /* Unlock IDT SMBus device */
+err_mutex_unlock:
+ mutex_unlock(&pdev->smb_mtx);
+
+ return ret;
+}
+
+/*
+ * idt_csr_read() - CSR read operation
+ * @pdev: Pointer to the driver data
+ * @csraddr: CSR address (without the two LS bits)
+ * @data: Pointer to the data read from CSR
+ */
+static int idt_csr_read(struct idt_89hpesx_dev *pdev, u16 csraddr, u32 *data)
+{
+ struct device *dev = &pdev->client->dev;
+ struct idt_csr_seq csrseq;
+ struct idt_smb_seq smbseq;
+ int ret;
+
+ /* Initialize SMBus sequence fields */
+ smbseq.ccode = pdev->iniccode | CCODE_CSR;
+ smbseq.data = (u8 *)&csrseq;
+
+ /* Lock IDT SMBus device */
+ mutex_lock(&pdev->smb_mtx);
+
+ /* Send CSR register address before reading it */
+ smbseq.bytecnt = CSR_WRRD_CNT;
+ csrseq.cmd = pdev->inicsrcmd | CSR_OP_READ;
+ csrseq.csraddr = cpu_to_le16(csraddr);
+ ret = pdev->smb_write(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev, "Failed to init csr address 0x%04x",
+ CSR_REAL_ADDR(csraddr));
+ goto err_mutex_unlock;
+ }
+
+ /* Perform read operation */
+ smbseq.bytecnt = CSR_RD_CNT;
+ ret = pdev->smb_read(pdev, &smbseq);
+ if (ret != 0) {
+ dev_err(dev, "Failed to read csr 0x%04hx",
+ CSR_REAL_ADDR(csraddr));
+ goto err_mutex_unlock;
+ }
+
+ /* Check whether IDT successfully retrieved CSR data */
+ if (csrseq.cmd & (CSR_RERR | CSR_WERR)) {
+ dev_err(dev, "IDT failed to perform CSR r/w");
+ ret = -EREMOTEIO;
+ goto err_mutex_unlock;
+ }
+
+ /* Save data retrieved from IDT */
+ *data = le32_to_cpu(csrseq.data);
+
+ /* Unlock IDT SMBus device */
+err_mutex_unlock:
+ mutex_unlock(&pdev->smb_mtx);
+
+ return ret;
+}
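A worked example of the CSR address packing used by both helpers above: addresses are stored shifted right by two, then reconstructed with CSR_REAL_ADDR().

	/* real DWORD-aligned address 0x0001C:
	 *   stored:        pdev->csr = 0x0001C >> 2 = 0x7
	 *   reconstructed: CSR_REAL_ADDR(0x7) = 0x7 << 2 = 0x0001C
	 */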
+
+/*===========================================================================
+ * Sysfs/debugfs-nodes IO-operations
+ *===========================================================================
+ */
+
+/*
+ * eeprom_write() - EEPROM sysfs-node write callback
+ * @filp: Pointer to the file system node
+ * @kobj: Pointer to the kernel object related to the sysfs-node
+ * @attr: Attributes of the file
+ * @buf: Buffer with data to be written
+ * @off: Offset within EEPROM to write the data to
+ * @count: Number of bytes to write
+ */
+static ssize_t eeprom_write(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr,
+ char *buf, loff_t off, size_t count)
+{
+ struct idt_89hpesx_dev *pdev;
+ int ret;
+
+ /* Retrieve driver data */
+ pdev = dev_get_drvdata(kobj_to_dev(kobj));
+
+ /* Perform EEPROM write operation */
+ ret = idt_eeprom_write(pdev, (u16)off, (u16)count, (u8 *)buf);
+ return (ret != 0 ? ret : count);
+}
+
+/*
+ * eeprom_read() - EEPROM sysfs-node read callback
+ * @filp: Pointer to the file system node
+ * @kobj: Pointer to the kernel object related to the sysfs-node
+ * @attr: Attributes of the file
+ * @buf: Buffer to read the data into
+ * @off: Offset within EEPROM to read the data from
+ * @count: Number of bytes to read
+ */
+static ssize_t eeprom_read(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr,
+ char *buf, loff_t off, size_t count)
+{
+ struct idt_89hpesx_dev *pdev;
+ int ret;
+
+ /* Retrieve driver data */
+ pdev = dev_get_drvdata(kobj_to_dev(kobj));
+
+ /* Perform EEPROM read operation */
+ ret = idt_eeprom_read(pdev, (u16)off, (u16)count, (u8 *)buf);
+ return (ret != 0 ? ret : count);
+}
+
+/*
+ * idt_dbgfs_csr_write() - CSR debugfs-node write callback
+ * @filep: Pointer to the file system file descriptor
+ * @ubuf: User-space buffer to read data from
+ * @count: Size of the buffer
+ * @offp: Offset within the file
+ *
+ * It accepts either "0x<reg addr>:0x<value>" for saving register address
+ * and writing value to specified DWORD register or "0x<reg addr>" for
+ * just saving register address in order to perform next read operation.
+ *
+ * WARNING No spaces are allowed. The incoming string must be strictly
+ * formatted as "<reg addr>:<value>". The register address must be aligned
+ * to 4 bytes (one DWORD).
+ */
+static ssize_t idt_dbgfs_csr_write(struct file *filep, const char __user *ubuf,
+ size_t count, loff_t *offp)
+{
+ struct idt_89hpesx_dev *pdev = filep->private_data;
+ char *colon_ch, *csraddr_str, *csrval_str;
+ int ret, csraddr_len, csrval_len;
+ u32 csraddr, csrval;
+ char *buf;
+
+ /* Copy data from User-space */
+ buf = kmalloc(count + 1, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ ret = simple_write_to_buffer(buf, count, offp, ubuf, count);
+ if (ret < 0)
+ goto free_buf;
+ buf[count] = 0;
+
+ /* Find position of colon in the buffer */
+ colon_ch = strnchr(buf, count, ':');
+
+ /*
+ * If a colon is present then the new CSR value should be parsed as
+ * well, so allocate a buffer for the CSR address substring.
+ * If no colon is found, the string must contain just the register
+ * address with no new CSR value
+ */
+ if (colon_ch != NULL) {
+ csraddr_len = colon_ch - buf;
+ csraddr_str = kmalloc(csraddr_len + 1, GFP_KERNEL);
+ if (csraddr_str == NULL) {
+ ret = -ENOMEM;
+ goto free_buf;
+ }
+ /* Copy the register address to the substring buffer */
+ strncpy(csraddr_str, buf, csraddr_len);
+ csraddr_str[csraddr_len] = '\0';
+ /* Register value must follow the colon */
+ csrval_str = colon_ch + 1;
+ csrval_len = strnlen(csrval_str, count - csraddr_len);
+ } else /* if (colon_ch == NULL) */ {
+ csraddr_str = (char *)buf; /* Just to silence a compiler warning */
+ csraddr_len = strnlen(csraddr_str, count);
+ csrval_str = NULL;
+ csrval_len = 0;
+ }
+
+ /* Convert CSR address to u32 value */
+ ret = kstrtou32(csraddr_str, 0, &csraddr);
+ if (ret != 0)
+ goto free_csraddr_str;
+
+ /* Check whether passed register address is valid */
+ if (csraddr > CSR_MAX || !IS_ALIGNED(csraddr, SZ_4)) {
+ ret = -EINVAL;
+ goto free_csraddr_str;
+ }
+
+ /* Shift the register address right by two so as to get a u16 address */
+ pdev->csr = (csraddr >> 2);
+
+ /* Parse new CSR value and send it to IDT, if colon has been found */
+ if (colon_ch != NULL) {
+ ret = kstrtou32(csrval_str, 0, &csrval);
+ if (ret != 0)
+ goto free_csraddr_str;
+
+ ret = idt_csr_write(pdev, pdev->csr, csrval);
+ if (ret != 0)
+ goto free_csraddr_str;
+ }
+
+ /* Free memory only if colon has been found */
+free_csraddr_str:
+ if (colon_ch != NULL)
+ kfree(csraddr_str);
+
+ /* Free buffer allocated for data retrieved from User-space */
+free_buf:
+ kfree(buf);
+
+ return (ret != 0 ? ret : count);
+}
+
+/*
+ * idt_dbgfs_csr_read() - CSR debugfs-node read callback
+ * @filep: Pointer to the file system file descriptor
+ * @buf: Buffer to write data to
+ * @count: Size of the buffer
+ * @offp: Offset within the file
+ *
+ * It just prints the pair "0x<reg addr>:0x<value>" to passed buffer.
+ */
+#define CSRBUF_SIZE ((size_t)32)
+static ssize_t idt_dbgfs_csr_read(struct file *filep, char __user *ubuf,
+ size_t count, loff_t *offp)
+{
+ struct idt_89hpesx_dev *pdev = filep->private_data;
+ u32 csraddr, csrval;
+ char buf[CSRBUF_SIZE];
+ int ret, size;
+
+ /* Perform CSR read operation */
+ ret = idt_csr_read(pdev, pdev->csr, &csrval);
+ if (ret != 0)
+ return ret;
+
+ /* Shift the register address left by two so as to get the real address */
+ csraddr = ((u32)pdev->csr << 2);
+
+ /* Print the "0x<reg addr>:0x<value>" to buffer */
+ size = snprintf(buf, CSRBUF_SIZE, "0x%05x:0x%08x\n",
+ (unsigned int)csraddr, (unsigned int)csrval);
+
+ /* Copy data to User-space */
+ return simple_read_from_buffer(ubuf, count, offp, buf, size);
+}
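A hedged user-space sketch of the two callbacks above; the debugfs path is an assumption built from idt_create_dbgfs_files() below (directory "idt_csr/<bus>-<addr>", file named after the i2c client), and the device name and register value are hypothetical.

	# write 0xdeadbeef to the DWORD register at 0x00004, then read it back
	echo 0x00004:0xdeadbeef > /sys/kernel/debug/idt_csr/0-0074/89hpes32nt8ag2
	cat /sys/kernel/debug/idt_csr/0-0074/89hpes32nt8ag2
	0x00004:0xdeadbeef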
+
+/*
+ * eeprom_attribute - EEPROM sysfs-node attributes
+ *
+ * NOTE The size will be changed in compliance with the OF node. The EEPROM
+ * attribute will also be read-only if the corresponding flag is specified
+ * in the OF node.
+ */
+static BIN_ATTR_RW(eeprom, EEPROM_DEF_SIZE);
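Similarly, a hypothetical sketch of reading the resulting EEPROM node through sysfs, assuming the device is bound on i2c bus 0 at address 0x74:

	# dump the first 16 bytes of the EEPROM image
	dd if=/sys/bus/i2c/devices/0-0074/eeprom bs=1 count=16 2>/dev/null | xxd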
+
+/*
+ * csr_dbgfs_ops - CSR debugfs-node read/write operations
+ */
+static const struct file_operations csr_dbgfs_ops = {
+ .owner = THIS_MODULE,
+ .open = simple_open,
+ .write = idt_dbgfs_csr_write,
+ .read = idt_dbgfs_csr_read
+};
+
+/*===========================================================================
+ * Driver init/deinit methods
+ *===========================================================================
+ */
+
+/*
+ * idt_set_defval() - disable EEPROM access by default
+ * @pdev: Pointer to the driver data
+ */
+static void idt_set_defval(struct idt_89hpesx_dev *pdev)
+{
+ /* If OF info is missing then use the following default values */
+ pdev->eesize = 0;
+ pdev->eero = true;
+ pdev->inieecmd = 0;
+ pdev->eeaddr = 0;
+}
+
+#ifdef CONFIG_OF
+static const struct i2c_device_id ee_ids[];
+/*
+ * idt_ee_match_id() - check whether the node belongs to compatible EEPROMs
+ */
+static const struct i2c_device_id *idt_ee_match_id(struct device_node *node)
+{
+ const struct i2c_device_id *id = ee_ids;
+ char devname[I2C_NAME_SIZE];
+
+ /* Retrieve the device name without manufacturer name */
+ if (of_modalias_node(node, devname, sizeof(devname)))
+ return NULL;
+
+ /* Search the ID table for the device name */
+ while (id->name[0]) {
+ if (strcmp(devname, id->name) == 0)
+ return id;
+ id++;
+ }
+ return NULL;
+}
+
+/*
+ * idt_get_ofdata() - get IDT i2c-device parameters from device tree
+ * @pdev: Pointer to the driver data
+ */
+static void idt_get_ofdata(struct idt_89hpesx_dev *pdev)
+{
+ const struct device_node *node = pdev->client->dev.of_node;
+ struct device *dev = &pdev->client->dev;
+
+ /* Read dts node parameters */
+ if (node) {
+ const struct i2c_device_id *ee_id = NULL;
+ struct device_node *child;
+ const __be32 *addr_be;
+ int len;
+
+ /* Walk through all child nodes looking for a compatible one */
+ for_each_available_child_of_node(node, child) {
+ ee_id = idt_ee_match_id(child);
+ if (IS_ERR_OR_NULL(ee_id)) {
+ dev_warn(dev, "Skip unsupported child node %s",
+ child->full_name);
+ continue;
+ } else
+ break;
+ }
+
+ /* If there is no child EEPROM device then disable EEPROM access */
+ if (!ee_id) {
+ idt_set_defval(pdev);
+ return;
+ }
+
+ /* Retrieve EEPROM size */
+ pdev->eesize = (u32)ee_id->driver_data;
+
+ /* Get custom EEPROM address from 'reg' attribute */
+ addr_be = of_get_property(child, "reg", &len);
+ if (!addr_be || (len < sizeof(*addr_be))) {
+ dev_warn(dev, "No reg on %s, use default address %d",
+ child->full_name, EEPROM_DEF_ADDR);
+ pdev->inieecmd = 0;
+ pdev->eeaddr = EEPROM_DEF_ADDR << 1;
+ } else {
+ pdev->inieecmd = EEPROM_USA;
+ pdev->eeaddr = be32_to_cpup(addr_be) << 1;
+ }
+
+ /* Check EEPROM 'read-only' flag */
+ if (of_get_property(child, "read-only", NULL))
+ pdev->eero = true;
+ else /* if (!of_get_property(child, "read-only", NULL)) */
+ pdev->eero = false;
+
+ dev_dbg(dev, "EEPROM of %u bytes found by %hhu",
+ pdev->eesize, pdev->eeaddr);
+ } else {
+ dev_warn(dev, "No dts node, EEPROM access disabled");
+ idt_set_defval(pdev);
+ }
+}
+#else
+static void idt_get_ofdata(struct idt_89hpesx_dev *pdev)
+{
+ struct device *dev = &pdev->client->dev;
+
+ dev_warn(dev, "OF table is unsupported, EEPROM access disabled");
+
+ /* Nothing we can do, just set the default values */
+ idt_set_defval(pdev);
+}
+#endif /* CONFIG_OF */
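For reference, a minimal device-tree sketch of what idt_get_ofdata() parses; the compatible string, unit addresses, and EEPROM type below are assumptions for illustration only.

	idt@74 {
		compatible = "idt,89hpes32nt8ag2";
		reg = <0x74>;

		eeprom@50 {
			compatible = "24c64";
			reg = <0x50>;
			read-only;
		};
	};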
+
+/*
+ * idt_create_pdev() - create and init data structure of the driver
+ * @client: i2c client of IDT PCIe-switch device
+ */
+static struct idt_89hpesx_dev *idt_create_pdev(struct i2c_client *client)
+{
+ struct idt_89hpesx_dev *pdev;
+
+ /* Allocate memory for driver data */
+ pdev = devm_kmalloc(&client->dev, sizeof(struct idt_89hpesx_dev),
+ GFP_KERNEL);
+ if (pdev == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ /* Initialize basic fields of the data */
+ pdev->client = client;
+ i2c_set_clientdata(client, pdev);
+
+ /* Read OF nodes information */
+ idt_get_ofdata(pdev);
+
+ /* Initialize basic CSR CMD field - use full DWORD-sized r/w ops */
+ pdev->inicsrcmd = CSR_DWE;
+ pdev->csr = CSR_DEF;
+
+ /* Enable Packet Error Checking if it's supported by adapter */
+ if (i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_PEC)) {
+ pdev->iniccode = CCODE_PEC;
+ client->flags |= I2C_CLIENT_PEC;
+ } else /* PEC is unsupported */ {
+ pdev->iniccode = 0;
+ }
+
+ return pdev;
+}
+
+/*
+ * idt_free_pdev() - free data structure of the driver
+ * @pdev: Pointer to the driver data
+ */
+static void idt_free_pdev(struct idt_89hpesx_dev *pdev)
+{
+ /* Clear driver data from device private field */
+ i2c_set_clientdata(pdev->client, NULL);
+
+ /* Just free memory allocated for data */
+ devm_kfree(&pdev->client->dev, pdev);
+}
+
+/*
+ * idt_set_smbus_ops() - set supported SMBus operations
+ * @pdev: Pointer to the driver data
+ * Return status of smbus check operations
+ */
+static int idt_set_smbus_ops(struct idt_89hpesx_dev *pdev)
+{
+ struct i2c_adapter *adapter = pdev->client->adapter;
+ struct device *dev = &pdev->client->dev;
+
+ /* Check i2c adapter read functionality */
+ if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_READ_BLOCK_DATA)) {
+ pdev->smb_read = idt_smb_read_block;
+ dev_dbg(dev, "SMBus block-read op chosen");
+ } else if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_READ_I2C_BLOCK)) {
+ pdev->smb_read = idt_smb_read_i2c_block;
+ dev_dbg(dev, "SMBus i2c-block-read op chosen");
+ } else if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_READ_WORD_DATA) &&
+ i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_READ_BYTE_DATA)) {
+ pdev->smb_read = idt_smb_read_word;
+ dev_warn(dev, "Use slow word/byte SMBus read ops");
+ } else if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_READ_BYTE_DATA)) {
+ pdev->smb_read = idt_smb_read_byte;
+ dev_warn(dev, "Use slow byte SMBus read op");
+ } else /* no supported smbus read operations */ {
+ dev_err(dev, "No supported SMBus read op");
+ return -EPFNOSUPPORT;
+ }
+
+ /* Check i2c adapter write functionality */
+ if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_WRITE_BLOCK_DATA)) {
+ pdev->smb_write = idt_smb_write_block;
+ dev_dbg(dev, "SMBus block-write op chosen");
+ } else if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_WRITE_I2C_BLOCK)) {
+ pdev->smb_write = idt_smb_write_i2c_block;
+ dev_dbg(dev, "SMBus i2c-block-write op chosen");
+ } else if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_WRITE_WORD_DATA) &&
+ i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_WRITE_BYTE_DATA)) {
+ pdev->smb_write = idt_smb_write_word;
+ dev_warn(dev, "Use slow word/byte SMBus write op");
+ } else if (i2c_check_functionality(adapter,
+ I2C_FUNC_SMBUS_WRITE_BYTE_DATA)) {
+ pdev->smb_write = idt_smb_write_byte;
+ dev_warn(dev, "Use slow byte SMBus write op");
+ } else /* no supported smbus write operations */ {
+ dev_err(dev, "No supported SMBus write op");
+ return -EPFNOSUPPORT;
+ }
+
+ /* Initialize IDT SMBus slave interface mutex */
+ mutex_init(&pdev->smb_mtx);
+
+ return 0;
+}
+
+/*
+ * idt_check_dev() - check whether it's really IDT 89HPESx device
+ * @pdev: Pointer to the driver data
+ * Return status of i2c adapter check operation
+ */
+static int idt_check_dev(struct idt_89hpesx_dev *pdev)
+{
+ struct device *dev = &pdev->client->dev;
+ u32 viddid;
+ int ret;
+
+ /* Read VID and DID directly from IDT memory space */
+ ret = idt_csr_read(pdev, IDT_VIDDID_CSR, &viddid);
+ if (ret != 0) {
+ dev_err(dev, "Failed to read VID/DID");
+ return ret;
+ }
+
+ /* Check whether it's an IDT device */
+ if ((viddid & IDT_VID_MASK) != PCI_VENDOR_ID_IDT) {
+ dev_err(dev, "Got unsupported VID/DID: 0x%08x", viddid);
+ return -ENODEV;
+ }
+
+ dev_info(dev, "Found IDT 89HPES device VID:0x%04x, DID:0x%04x",
+ (viddid & IDT_VID_MASK), (viddid >> 16));
+
+ return 0;
+}
+
+/*
+ * idt_create_sysfs_files() - create sysfs attribute files
+ * @pdev: Pointer to the driver data
+ * Return status of operation
+ */
+static int idt_create_sysfs_files(struct idt_89hpesx_dev *pdev)
+{
+ struct device *dev = &pdev->client->dev;
+ int ret;
+
+ /* Don't do anything if EEPROM isn't accessible */
+ if (pdev->eesize == 0) {
+ dev_dbg(dev, "Skip creating sysfs-files");
+ return 0;
+ }
+
+ /* Allocate memory for attribute file */
+ pdev->ee_file = devm_kmalloc(dev, sizeof(*pdev->ee_file), GFP_KERNEL);
+ if (!pdev->ee_file)
+ return -ENOMEM;
+
+ /* Copy the declared EEPROM attr structure to change some of its fields */
+ memcpy(pdev->ee_file, &bin_attr_eeprom, sizeof(*pdev->ee_file));
+
+ /* In case of read-only EEPROM get rid of write ability */
+ if (pdev->eero) {
+ pdev->ee_file->attr.mode &= ~0200;
+ pdev->ee_file->write = NULL;
+ }
+ /* Create EEPROM sysfs file */
+ pdev->ee_file->size = pdev->eesize;
+ ret = sysfs_create_bin_file(&dev->kobj, pdev->ee_file);
+ if (ret != 0) {
+ dev_err(dev, "Failed to create EEPROM sysfs-node");
+ return ret;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_remove_sysfs_files() - remove sysfs attribute files
+ * @pdev: Pointer to the driver data
+ */
+static void idt_remove_sysfs_files(struct idt_89hpesx_dev *pdev)
+{
+ struct device *dev = &pdev->client->dev;
+
+ /* Don't do anything if EEPROM wasn't accessible */
+ if (pdev->eesize == 0)
+ return;
+
+ /* Remove EEPROM sysfs file; its bin_attribute memory is devm-managed */
+ sysfs_remove_bin_file(&dev->kobj, pdev->ee_file);
+}
+
+/*
+ * idt_create_dbgfs_files() - create debugfs files
+ * @pdev: Pointer to the driver data
+ */
+#define CSRNAME_LEN ((size_t)32)
+static void idt_create_dbgfs_files(struct idt_89hpesx_dev *pdev)
+{
+ struct i2c_client *cli = pdev->client;
+ char fname[CSRNAME_LEN];
+
+ /* Create Debugfs directory for CSR file */
+ snprintf(fname, CSRNAME_LEN, "%d-%04hx", cli->adapter->nr, cli->addr);
+ pdev->csr_dir = debugfs_create_dir(fname, csr_dbgdir);
+
+ /* Create Debugfs file for CSR read/write operations */
+ pdev->csr_file = debugfs_create_file(cli->name, 0600,
+ pdev->csr_dir, pdev, &csr_dbgfs_ops);
+}
+
+/*
+ * idt_remove_dbgfs_files() - remove debugfs files
+ * @pdev: Pointer to the driver data
+ */
+static void idt_remove_dbgfs_files(struct idt_89hpesx_dev *pdev)
+{
+ /* Remove the CSR directory and its debugfs node */
+ debugfs_remove_recursive(pdev->csr_dir);
+}
+
+/*
+ * idt_probe() - IDT 89HPESx driver probe() callback method
+ */
+static int idt_probe(struct i2c_client *client, const struct i2c_device_id *id)
+{
+ struct idt_89hpesx_dev *pdev;
+ int ret;
+
+ /* Create driver data */
+ pdev = idt_create_pdev(client);
+ if (IS_ERR(pdev))
+ return PTR_ERR(pdev);
+
+ /* Set SMBus operations */
+ ret = idt_set_smbus_ops(pdev);
+ if (ret != 0)
+ goto err_free_pdev;
+
+ /* Check whether it is truly an IDT 89HPESx device */
+ ret = idt_check_dev(pdev);
+ if (ret != 0)
+ goto err_free_pdev;
+
+ /* Create sysfs files */
+ ret = idt_create_sysfs_files(pdev);
+ if (ret != 0)
+ goto err_free_pdev;
+
+ /* Create debugfs files */
+ idt_create_dbgfs_files(pdev);
+
+ return 0;
+
+err_free_pdev:
+ idt_free_pdev(pdev);
+
+ return ret;
+}
+
+/*
+ * idt_remove() - IDT 89HPESx driver remove() callback method
+ */
+static int idt_remove(struct i2c_client *client)
+{
+ struct idt_89hpesx_dev *pdev = i2c_get_clientdata(client);
+
+ /* Remove debugfs files first */
+ idt_remove_dbgfs_files(pdev);
+
+ /* Remove sysfs files */
+ idt_remove_sysfs_files(pdev);
+
+ /* Discard driver data structure */
+ idt_free_pdev(pdev);
+
+ return 0;
+}
+
+/*
+ * ee_ids - array of supported EEPROMs
+ */
+static const struct i2c_device_id ee_ids[] = {
+ { "24c32", 4096},
+ { "24c64", 8192},
+ { "24c128", 16384},
+ { "24c256", 32768},
+ { "24c512", 65536},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, ee_ids);
+
+/*
+ * idt_ids - supported IDT 89HPESx devices
+ */
+static const struct i2c_device_id idt_ids[] = {
+ { "89hpes8nt2", 0 },
+ { "89hpes12nt3", 0 },
+
+ { "89hpes24nt6ag2", 0 },
+ { "89hpes32nt8ag2", 0 },
+ { "89hpes32nt8bg2", 0 },
+ { "89hpes12nt12g2", 0 },
+ { "89hpes16nt16g2", 0 },
+ { "89hpes24nt24g2", 0 },
+ { "89hpes32nt24ag2", 0 },
+ { "89hpes32nt24bg2", 0 },
+
+ { "89hpes12n3", 0 },
+ { "89hpes12n3a", 0 },
+ { "89hpes24n3", 0 },
+ { "89hpes24n3a", 0 },
+
+ { "89hpes32h8", 0 },
+ { "89hpes32h8g2", 0 },
+ { "89hpes48h12", 0 },
+ { "89hpes48h12g2", 0 },
+ { "89hpes48h12ag2", 0 },
+ { "89hpes16h16", 0 },
+ { "89hpes22h16", 0 },
+ { "89hpes22h16g2", 0 },
+ { "89hpes34h16", 0 },
+ { "89hpes34h16g2", 0 },
+ { "89hpes64h16", 0 },
+ { "89hpes64h16g2", 0 },
+ { "89hpes64h16ag2", 0 },
+
+ /* { "89hpes3t3", 0 }, // No SMBus-slave iface */
+ { "89hpes12t3g2", 0 },
+ { "89hpes24t3g2", 0 },
+ /* { "89hpes4t4", 0 }, // No SMBus-slave iface */
+ { "89hpes16t4", 0 },
+ { "89hpes4t4g2", 0 },
+ { "89hpes10t4g2", 0 },
+ { "89hpes16t4g2", 0 },
+ { "89hpes16t4ag2", 0 },
+ { "89hpes5t5", 0 },
+ { "89hpes6t5", 0 },
+ { "89hpes8t5", 0 },
+ { "89hpes8t5a", 0 },
+ { "89hpes24t6", 0 },
+ { "89hpes6t6g2", 0 },
+ { "89hpes24t6g2", 0 },
+ { "89hpes16t7", 0 },
+ { "89hpes32t8", 0 },
+ { "89hpes32t8g2", 0 },
+ { "89hpes48t12", 0 },
+ { "89hpes48t12g2", 0 },
+ { /* END OF LIST */ }
+};
+MODULE_DEVICE_TABLE(i2c, idt_ids);
+
+/*
+ * idt_driver - IDT 89HPESx driver structure
+ */
+static struct i2c_driver idt_driver = {
+ .driver = {
+ .name = IDT_NAME,
+ .owner = THIS_MODULE,
+ },
+ .probe = idt_probe,
+ .remove = idt_remove,
+ .id_table = idt_ids,
+};
+
+/*
+ * idt_init() - IDT 89HPESx driver module init method
+ */
+static int __init idt_init(void)
+{
+ /* Create Debugfs directory first */
+ if (debugfs_initialized())
+ csr_dbgdir = debugfs_create_dir("idt_csr", NULL);
+
+ /* Add new i2c-device driver */
+ return i2c_add_driver(&idt_driver);
+}
+module_init(idt_init);
+
+/*
+ * idt_exit() - IDT 89HPESx driver module exit method
+ */
+static void __exit idt_exit(void)
+{
+ /* Discard debugfs directory and all files if any */
+ debugfs_remove_recursive(csr_dbgdir);
+
+ /* Unregister i2c-device driver */
+ i2c_del_driver(&idt_driver);
+}
+module_exit(idt_exit);
static struct pci_error_handlers genwqe_err_handler = {
.error_detected = genwqe_err_error_detected,
.mmio_enabled = genwqe_err_result_none,
- .link_reset = genwqe_err_result_none,
.slot_reset = genwqe_err_slot_reset,
.resume = genwqe_err_resume,
};
(void) recursive_loop(recur_count);
}
+static noinline void __lkdtm_CORRUPT_STACK(void *stack)
+{
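+ /* The 64-byte memset below deliberately overruns the caller's 8-byte
+ * buffer; keeping it out of line (noinline) stops the compiler from
+ * seeing the overflow and optimizing it away.
+ */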
+ memset(stack, 'a', 64);
+}
+
noinline void lkdtm_CORRUPT_STACK(void)
{
/* Use default char array length that triggers stack protection. */
char data[8];
+ __lkdtm_CORRUPT_STACK(&data);
- memset((void *)data, 'a', 64);
pr_info("Corrupted stack with '%16s'...\n", data);
}
{
int ret;
+ if (!cldev->bus->hbm_f_os_supported)
+ return;
+
ret = mei_cldev_enable(cldev);
if (ret)
return;
* @list: An instance of our list structure
* @cl: host client
*/
-void mei_io_list_flush(struct mei_cl_cb *list, struct mei_cl *cl)
+static inline void mei_io_list_flush(struct mei_cl_cb *list, struct mei_cl *cl)
{
__mei_io_list_flush(list, cl, false);
}
*
* @cl: host client
*/
-void mei_cl_set_disconnected(struct mei_cl *cl)
+static void mei_cl_set_disconnected(struct mei_cl *cl)
{
struct mei_device *dev = cl->dev;
{
INIT_LIST_HEAD(&list->list);
}
-void mei_io_list_flush(struct mei_cl_cb *list, struct mei_cl *cl);
/*
* MEI Host Client Functions
}
int mei_cl_disconnect(struct mei_cl *cl);
-void mei_cl_set_disconnected(struct mei_cl *cl);
int mei_cl_irq_disconnect(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
int mei_cl_connect(struct mei_cl *cl, struct mei_me_client *me_cl,
dev->hbm_f_ev_supported);
pos += scnprintf(buf + pos, bufsz - pos, "\tFA: %01d\n",
dev->hbm_f_fa_supported);
+ pos += scnprintf(buf + pos, bufsz - pos, "\tOS: %01d\n",
+ dev->hbm_f_os_supported);
}
pos += scnprintf(buf + pos, bufsz - pos, "pg: %s, %s\n",
/* Fixed Address Client Support */
if (dev->version.major_version >= HBM_MAJOR_VERSION_FA)
dev->hbm_f_fa_supported = 1;
+
+ /* OS ver message Support */
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_OS)
+ dev->hbm_f_os_supported = 1;
}
/**
#define HBM_MINOR_VERSION_FA 0
#define HBM_MAJOR_VERSION_FA 2
+/*
+ * MEI version with OS ver message support
+ */
+#define HBM_MINOR_VERSION_OS 0
+#define HBM_MAJOR_VERSION_OS 2
+
/* Host bus message command opcode */
#define MEI_HBM_CMD_OP_MSK 0x7f
/* Host bus message command RESPONSE */
* @hbm_f_ev_supported : hbm feature event notification
* @hbm_f_fa_supported : hbm feature fixed address client
* @hbm_f_ie_supported : hbm feature immediate reply to enum request
+ * @hbm_f_os_supported : hbm feature OS ver message support
*
* @me_clients_rwsem: rw lock over me_clients list
* @me_clients : list of FW clients
unsigned int hbm_f_ev_supported:1;
unsigned int hbm_f_fa_supported:1;
unsigned int hbm_f_ie_supported:1;
+ unsigned int hbm_f_os_supported:1;
struct rw_semaphore me_clients_rwsem;
struct list_head me_clients;
if (ret) {
dev_err(vop_dev(vdev), "%s %d err %d\n",
__func__, __LINE__, ret);
- kfree(vdev);
return ret;
}
block->label = devm_kstrdup(sram->dev,
label, GFP_KERNEL);
- if (!block->label)
+ if (!block->label) {
+ ret = -ENOMEM;
goto err_chunks;
+ }
dev_dbg(sram->dev, "found %sblock '%s' 0x%x-0x%x\n",
block->export ? "exported " : "", block->label,
*/
static int ipddp_create(struct ipddp_route *new_rt)
{
- struct ipddp_route *rt = kmalloc(sizeof(*rt), GFP_KERNEL);
+ struct ipddp_route *rt = kzalloc(sizeof(*rt), GFP_KERNEL);
if (rt == NULL)
return -ENOMEM;
if (addr == BRCM_PSEUDO_PHY_ADDR && priv->indir_phy_mask & BIT(addr))
return bcm_sf2_sw_indir_rw(priv, 1, addr, regnum, 0);
else
- return mdiobus_read(priv->master_mii_bus, addr, regnum);
+ return mdiobus_read_nested(priv->master_mii_bus, addr, regnum);
}
static int bcm_sf2_sw_mdio_write(struct mii_bus *bus, int addr, int regnum,
if (addr == BRCM_PSEUDO_PHY_ADDR && priv->indir_phy_mask & BIT(addr))
bcm_sf2_sw_indir_rw(priv, 0, addr, regnum, val);
else
- mdiobus_write(priv->master_mii_bus, addr, regnum, val);
+ mdiobus_write_nested(priv->master_mii_bus, addr, regnum, val);
return 0;
}
const char *reg_names[BCM_SF2_REGS_NUM] = BCM_SF2_REGS_NAME;
struct device_node *dn = pdev->dev.of_node;
struct b53_platform_data *pdata;
+ struct dsa_switch_ops *ops;
struct bcm_sf2_priv *priv;
struct b53_device *dev;
struct dsa_switch *ds;
if (!priv)
return -ENOMEM;
+ ops = devm_kzalloc(&pdev->dev, sizeof(*ops), GFP_KERNEL);
+ if (!ops)
+ return -ENOMEM;
+
dev = b53_switch_alloc(&pdev->dev, &bcm_sf2_io_ops, priv);
if (!dev)
return -ENOMEM;
ds = dev->ds;
/* Override the parts that are non-standard wrt. normal b53 devices */
+ memcpy(ops, ds->ops, sizeof(*ops));
+ ds->ops = ops;
ds->ops->get_tag_protocol = bcm_sf2_sw_get_tag_protocol;
ds->ops->setup = bcm_sf2_sw_setup;
ds->ops->get_phy_flags = bcm_sf2_sw_get_phy_flags;
}
}
+isr_done:
/* If there is not a separate AN irq, handle it here */
if (pdata->dev_irq == pdata->an_irq)
pdata->phy_if.an_isr(irq, pdata);
if (pdata->vdata->i2c_support && (pdata->dev_irq == pdata->i2c_irq))
pdata->i2c_if.i2c_isr(irq, pdata);
-isr_done:
return IRQ_HANDLED;
}
tg3_mem_rx_release(tp);
tg3_mem_tx_release(tp);
+ /* Protect tg3_get_stats64() from reading freed tp->hw_stats. */
+ tg3_full_lock(tp, 0);
if (tp->hw_stats) {
dma_free_coherent(&tp->pdev->dev, sizeof(struct tg3_hw_stats),
tp->hw_stats, tp->stats_mapping);
tp->hw_stats = NULL;
}
+ tg3_full_unlock(tp);
}
/*
/* Check if mac has already been added as part of uc-list */
for (i = 0; i < adapter->uc_macs; i++) {
- if (ether_addr_equal((u8 *)&adapter->uc_list[i * ETH_ALEN],
- mac)) {
+ if (ether_addr_equal(adapter->uc_list[i].mac, mac)) {
/* mac already added, skip addition */
adapter->pmac_id[0] = adapter->pmac_id[i + 1];
return 0;
static int be_uc_mac_add(struct be_adapter *adapter, int uc_idx)
{
- if (ether_addr_equal((u8 *)&adapter->uc_list[uc_idx * ETH_ALEN],
- adapter->dev_mac)) {
+ if (ether_addr_equal(adapter->uc_list[uc_idx].mac, adapter->dev_mac)) {
adapter->pmac_id[uc_idx + 1] = adapter->pmac_id[0];
return 0;
}
- return be_cmd_pmac_add(adapter,
- (u8 *)&adapter->uc_list[uc_idx * ETH_ALEN],
+ return be_cmd_pmac_add(adapter, adapter->uc_list[uc_idx].mac,
adapter->if_handle,
&adapter->pmac_id[uc_idx + 1], 0);
}
}
if (adapter->update_uc_list) {
- i = 1; /* First slot is claimed by the Primary MAC */
-
/* cache the uc-list in adapter array */
+ i = 0;
netdev_for_each_uc_addr(ha, netdev) {
ether_addr_copy(adapter->uc_list[i].mac, ha->addr);
i++;
PAGE_SIZE,
DMA_FROM_DEVICE,
DMA_ATTR_SKIP_CPU_SYNC);
- __page_frag_drain(buffer_info->page, 0,
- buffer_info->pagecnt_bias);
+ __page_frag_cache_drain(buffer_info->page,
+ buffer_info->pagecnt_bias);
buffer_info->page = NULL;
}
dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
PAGE_SIZE, DMA_FROM_DEVICE,
DMA_ATTR_SKIP_CPU_SYNC);
- __page_frag_drain(page, 0, rx_buffer->pagecnt_bias);
+ __page_frag_cache_drain(page, rx_buffer->pagecnt_bias);
}
/* clear contents of rx_buffer */
if (priv->tx_ring_num[TX_XDP] &&
!mlx4_en_check_xdp_mtu(dev, new_mtu))
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
dev->mtu = new_mtu;
static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
{
- struct mlx5_core_dev *mdev = priv->mdev;
- struct mlx5_eswitch *esw = mdev->priv.eswitch;
-
mlx5e_vxlan_cleanup(priv);
- if (MLX5_CAP_GEN(mdev, vport_group_manager))
- mlx5_eswitch_unregister_vport_rep(esw, 0);
-
if (priv->xdp_prog)
bpf_prog_put(priv->xdp_prog);
}
static void mlx5e_nic_disable(struct mlx5e_priv *priv)
{
+ struct mlx5_core_dev *mdev = priv->mdev;
+ struct mlx5_eswitch *esw = mdev->priv.eswitch;
+
queue_work(priv->wq, &priv->set_rx_mode_work);
+ if (MLX5_CAP_GEN(mdev, vport_group_manager))
+ mlx5_eswitch_unregister_vport_rep(esw, 0);
mlx5e_disable_async_events(priv);
- mlx5_lag_remove(priv->mdev);
+ mlx5_lag_remove(mdev);
}
static const struct mlx5e_profile mlx5e_nic_profile = {
switch (am->tune_state) {
case MLX5E_AM_PARKING_ON_TOP:
case MLX5E_AM_PARKING_TIRED:
- WARN_ONCE(true, "mlx5e_am_on_top: PARKING\n");
return true;
case MLX5E_AM_GOING_RIGHT:
return (am->steps_left > 1) && (am->steps_right == 1);
switch (am->tune_state) {
case MLX5E_AM_PARKING_ON_TOP:
case MLX5E_AM_PARKING_TIRED:
- WARN_ONCE(true, "mlx5e_am_turn: PARKING\n");
break;
case MLX5E_AM_GOING_RIGHT:
am->tune_state = MLX5E_AM_GOING_LEFT;
switch (am->tune_state) {
case MLX5E_AM_PARKING_ON_TOP:
case MLX5E_AM_PARKING_TIRED:
- WARN_ONCE(true, "mlx5e_am_step: PARKING\n");
break;
case MLX5E_AM_GOING_RIGHT:
if (am->profile_ix == (MLX5E_PARAMS_AM_NUM_PROFILES - 1))
u32 delta_us = ktime_us_delta(end->time, start->time);
unsigned int npkts = end->pkt_ctr - start->pkt_ctr;
- if (!delta_us) {
- WARN_ONCE(true, "mlx5e_am_calc_stats: delta_us=0\n");
+ if (!delta_us)
return;
- }
curr_stats->ppms = (npkts * USEC_PER_MSEC) / delta_us;
curr_stats->epms = (MLX5E_AM_NEVENTS * USEC_PER_MSEC) / delta_us;
}
}
+/* We also get here when setting the rule in the FW has failed, etc. It means
+ * that the flow rule itself might not exist, but some offloading related to
+ * the actions should be cleaned.
+ */
static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct mlx5_fc *counter = NULL;
- counter = mlx5_flow_rule_counter(flow->rule);
-
- mlx5_del_flow_rules(flow->rule);
+ if (!IS_ERR(flow->rule)) {
+ counter = mlx5_flow_rule_counter(flow->rule);
+ mlx5_del_flow_rules(flow->rule);
+ mlx5_fc_destroy(priv->mdev, counter);
+ }
if (esw && esw->mode == SRIOV_OFFLOADS) {
mlx5_eswitch_del_vlan_action(esw, flow->attr);
mlx5e_detach_encap(priv, flow);
}
- mlx5_fc_destroy(priv->mdev, counter);
-
if (!mlx5e_tc_num_filters(priv) && (priv->fs.tc.t)) {
mlx5_destroy_flow_table(priv->fs.tc.t);
priv->fs.tc.t = NULL;
void *headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
outer_headers);
+ struct flow_dissector_key_control *enc_control =
+ skb_flow_dissector_target(f->dissector,
+ FLOW_DISSECTOR_KEY_ENC_CONTROL,
+ f->key);
+
if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
struct flow_dissector_key_ports *key =
skb_flow_dissector_target(f->dissector,
/* Full udp dst port must be given */
if (memchr_inv(&mask->dst, 0xff, sizeof(mask->dst)))
- return -EOPNOTSUPP;
-
- /* udp src port isn't supported */
- if (memchr_inv(&mask->src, 0, sizeof(mask->src)))
- return -EOPNOTSUPP;
+ goto vxlan_match_offload_err;
if (mlx5e_vxlan_lookup_port(priv, be16_to_cpu(key->dst)) &&
MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap))
parse_vxlan_attr(spec, f);
- else
+ else {
+ netdev_warn(priv->netdev,
+ "%d isn't an offloaded vxlan udp dport\n", be16_to_cpu(key->dst));
return -EOPNOTSUPP;
+ }
MLX5_SET(fte_match_set_lyr_2_4, headers_c,
udp_dport, ntohs(mask->dst));
MLX5_SET(fte_match_set_lyr_2_4, headers_v,
udp_dport, ntohs(key->dst));
+ MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+ udp_sport, ntohs(mask->src));
+ MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+ udp_sport, ntohs(key->src));
} else { /* udp dst port must be given */
- return -EOPNOTSUPP;
+vxlan_match_offload_err:
+ netdev_warn(priv->netdev,
+ "IP tunnel decap offload supported only for vxlan, must set UDP dport\n");
+ return -EOPNOTSUPP;
}
- if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
+ if (enc_control->addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
struct flow_dissector_key_ipv4_addrs *key =
skb_flow_dissector_target(f->dissector,
FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
MLX5_SET(fte_match_set_lyr_2_4, headers_v,
dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
ntohl(key->dst));
- }
- MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
- MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IP);
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
+ MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IP);
+ }
/* Enforce DMAC when offloading incoming tunneled flows.
* Flow counters require a match on the DMAC.
if (parse_tunnel_attr(priv, spec, f))
return -EOPNOTSUPP;
break;
+ case FLOW_DISSECTOR_KEY_IPV6_ADDRS:
+ netdev_warn(priv->netdev,
+ "IPv6 tunnel decap offload isn't supported\n");
default:
return -EOPNOTSUPP;
}
MLX5_SET(fte_match_set_lyr_2_4, headers_c, frag, 1);
MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
key->flags & FLOW_DIS_IS_FRAGMENT);
+
+ /* the HW doesn't need L3 inline to match on frag=no */
+ if (key->flags & FLOW_DIS_IS_FRAGMENT)
+ *min_inline = MLX5_INLINE_MODE_IP;
}
}
#if IS_ENABLED(CONFIG_INET)
rt = ip_route_output_key(dev_net(mirred_dev), fl4);
- if (IS_ERR(rt)) {
- pr_warn("%s: no route to %pI4\n", __func__, &fl4->daddr);
- return -EOPNOTSUPP;
- }
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
#else
return -EOPNOTSUPP;
#endif
if (!switchdev_port_same_parent_id(priv->netdev, rt->dst.dev)) {
- pr_warn("%s: Can't offload the flow, netdevices aren't on the same HW e-switch\n",
- __func__);
+ pr_warn("%s: can't offload, devices not on same HW e-switch\n", __func__);
ip_rt_put(rt);
return -EOPNOTSUPP;
}
struct net_device **out_dev)
{
int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
+ struct neighbour *n = NULL;
struct flowi4 fl4 = {};
- struct neighbour *n;
char *encap_header;
int encap_size;
- __be32 saddr;
- int ttl;
+ __be32 saddr = 0;
+ int ttl = 0;
int err;
encap_header = kzalloc(max_encap_size, GFP_KERNEL);
e->out_dev = *out_dev;
if (!(n->nud_state & NUD_VALID)) {
- err = -ENOTSUPP;
+ pr_warn("%s: can't offload, neighbour to %pI4 invalid\n", __func__, &fl4.daddr);
+ err = -EOPNOTSUPP;
goto out;
}
err = mlx5_encap_alloc(priv->mdev, e->tunnel_type,
encap_size, encap_header, &e->encap_id);
out:
+ if (err && n)
+ neigh_release(n);
kfree(encap_header);
return err;
}
int tunnel_type;
int err;
- /* udp dst port must be given */
+ /* udp dst port must be set */
if (!memchr_inv(&key->tp_dst, 0, sizeof(key->tp_dst)))
+ goto vxlan_encap_offload_err;
+
+ /* setting udp src port isn't supported */
+ if (memchr_inv(&key->tp_src, 0, sizeof(key->tp_src))) {
+vxlan_encap_offload_err:
+ netdev_warn(priv->netdev,
+ "must set udp dst port and not set udp src port\n");
return -EOPNOTSUPP;
+ }
if (mlx5e_vxlan_lookup_port(priv, be16_to_cpu(key->tp_dst)) &&
MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap)) {
info.tun_id = tunnel_id_to_key32(key->tun_id);
tunnel_type = MLX5_HEADER_TYPE_VXLAN;
} else {
+ netdev_warn(priv->netdev,
+ "%d isn't an offloaded vxlan udp dport\n", be16_to_cpu(key->tp_dst));
return -EOPNOTSUPP;
}
case AF_INET:
info.daddr = key->u.ipv4.dst;
break;
+ case AF_INET6:
+ netdev_warn(priv->netdev,
+ "IPv6 tunnel encap offload isn't supported\n");
default:
return -EOPNOTSUPP;
}
if (IS_ERR(flow->rule)) {
err = PTR_ERR(flow->rule);
- goto err_free;
+ goto err_del_rule;
}
err = rhashtable_insert_fast(&tc->ht, &flow->node,
goto out;
err_del_rule:
- mlx5_del_flow_rules(flow->rule);
+ mlx5e_tc_del_flow(priv, flow);
err_free:
kfree(flow);
{
int err = 0;
- mlx5_drain_health_wq(dev);
+ if (cleanup)
+ mlx5_drain_health_wq(dev);
mutex_lock(&dev->intf_state_mutex);
if (test_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state)) {
mlx5_enter_error_state(dev);
mlx5_unload_one(dev, priv, false);
- /* In case of kernel call save the pci state */
+ /* In case of kernel call save the pci state and drain the health wq */
if (state) {
pci_save_state(pdev);
+ mlx5_drain_health_wq(dev);
mlx5_pci_disable_device(dev);
}
enum rtl_rx_desc_bit {
/* Rx private */
PID1 = (1 << 18), /* Protocol ID bit 1/2 */
- PID0 = (1 << 17), /* Protocol ID bit 2/2 */
+ PID0 = (1 << 17), /* Protocol ID bit 0/2 */
#define RxProtoUDP (PID1)
#define RxProtoTCP (PID0)
.rpadir_value = 2 << 16,
.no_trimd = 1,
.no_ade = 1,
+ .hw_crc = 1,
.tsu = 1,
.select_mii = 1,
.shift_rd0 = 1,
.ecsr_value = ECSR_ICD | ECSR_MPD,
.ecsipr_value = ECSIPR_LCHNGIP | ECSIPR_ICDIP | ECSIPR_MPDIP,
- .eesipr_value = DMAC_M_RFRMER | DMAC_M_ECI | 0x003fffff,
+ .eesipr_value = DMAC_M_RFRMER | DMAC_M_ECI | 0x003f07ff,
.tx_check = EESR_TC1 | EESR_FTC,
.eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT |
.ecsr_value = ECSR_ICD | ECSR_MPD,
.ecsipr_value = ECSIPR_LCHNGIP | ECSIPR_ICDIP | ECSIPR_MPDIP,
- .eesipr_value = DMAC_M_RFRMER | DMAC_M_ECI | 0x003fffff,
+ .eesipr_value = DMAC_M_RFRMER | DMAC_M_ECI | 0x003f07ff,
.tx_check = EESR_TC1 | EESR_FTC,
.eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT |
ndev->max_mtu = JUMBO_LEN;
else
ndev->max_mtu = SKB_MAX_HEAD(NET_SKB_PAD + NET_IP_ALIGN);
- if (priv->plat->maxmtu < ndev->max_mtu)
+ /* Do not overwrite ndev->max_mtu when plat->maxmtu > ndev->max_mtu,
+ * and ignore plat->maxmtu < ndev->min_mtu, which is an invalid value.
+ */
+ if ((priv->plat->maxmtu < ndev->max_mtu) &&
+ (priv->plat->maxmtu >= ndev->min_mtu))
ndev->max_mtu = priv->plat->maxmtu;
+ else if (priv->plat->maxmtu < ndev->min_mtu)
+ netdev_warn(priv->dev,
+ "%s: warning: maxmtu having invalid value (%d)\n",
+ __func__, priv->plat->maxmtu);
if (flow_ctrl)
priv->flow_ctrl = FLOW_AUTO; /* RX/TX pause on */
/* Set default value for unicast filter entries */
plat->unicast_filter_entries = 1;
+
+ /* Set the maxmtu to a default of JUMBO_LEN */
+ plat->maxmtu = JUMBO_LEN;
}
static int quark_default_data(struct plat_stmmacenet_data *plat,
/* Set default value for unicast filter entries */
plat->unicast_filter_entries = 1;
+ /* Set the maxmtu to a default of JUMBO_LEN */
+ plat->maxmtu = JUMBO_LEN;
+
return 0;
}
config MESON_GXL_PHY
tristate "Amlogic Meson GXL Internal PHY"
+ depends on ARCH_MESON || COMPILE_TEST
---help---
Currently has a driver for the Amlogic Meson GXL Internal PHY
#define MII_DP83867_MICR 0x12
#define MII_DP83867_ISR 0x13
#define DP83867_CTRL 0x1f
+#define DP83867_CFG3 0x1e
/* Extended Registers */
#define DP83867_RGMIICTL 0x0032
micr_status |=
(MII_DP83867_MICR_AN_ERR_INT_EN |
MII_DP83867_MICR_SPEED_CHNG_INT_EN |
+ MII_DP83867_MICR_AUTONEG_COMP_INT_EN |
+ MII_DP83867_MICR_LINK_STS_CHNG_INT_EN |
MII_DP83867_MICR_DUP_MODE_CHNG_INT_EN |
MII_DP83867_MICR_SLEEP_MODE_CHNG_INT_EN);
}
}
+ /* Enable Interrupt output INT_OE in CFG3 register */
+ if (phy_interrupt_is_valid(phydev)) {
+ val = phy_read(phydev, DP83867_CFG3);
+ val |= BIT(7);
+ phy_write(phydev, DP83867_CFG3, val);
+ }
+
return 0;
}
int err;
/* Check the fiber mode first */
- if (phydev->supported & SUPPORTED_FIBRE) {
+ if (phydev->supported & SUPPORTED_FIBRE &&
+ phydev->interface != PHY_INTERFACE_MODE_SGMII) {
err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_FIBER);
if (err < 0)
goto error;
if (old_link != phydev->link)
phydev->state = PHY_CHANGELINK;
}
+ /*
+ * Failsafe: check that nobody set phydev->link=0 between two
+ * poll cycles, otherwise we won't leave RUNNING state as long
+ * as link remains down.
+ */
+ if (!phydev->link && phydev->state == PHY_RUNNING) {
+ phydev->state = PHY_CHANGELINK;
+ phydev_err(phydev, "no link in PHY_RUNNING\n");
+ }
break;
case PHY_CHANGELINK:
err = phy_read_status(phydev);
return false;
}
-static int rtl8152_suspend(struct usb_interface *intf, pm_message_t message)
+static int rtl8152_runtime_suspend(struct r8152 *tp)
{
- struct r8152 *tp = usb_get_intfdata(intf);
struct net_device *netdev = tp->netdev;
int ret = 0;
- mutex_lock(&tp->control);
+ if (netif_running(netdev) && test_bit(WORK_ENABLE, &tp->flags)) {
+ u32 rcr = 0;
- if (PMSG_IS_AUTO(message)) {
- if (netif_running(netdev) && delay_autosuspend(tp)) {
+ if (delay_autosuspend(tp)) {
ret = -EBUSY;
goto out1;
}
- set_bit(SELECTIVE_SUSPEND, &tp->flags);
- } else {
- netif_device_detach(netdev);
+ if (netif_carrier_ok(netdev)) {
+ u32 ocp_data;
+
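+ /* Stop accepting packets and gate the RX path; if the RX FIFO
+ * still holds data, restore the receive configuration and abort
+ * the suspend.
+ */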
+ rcr = ocp_read_dword(tp, MCU_TYPE_PLA, PLA_RCR);
+ ocp_data = rcr & ~RCR_ACPT_ALL;
+ ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, ocp_data);
+ rxdy_gated_en(tp, true);
+ ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA,
+ PLA_OOB_CTRL);
+ if (!(ocp_data & RXFIFO_EMPTY)) {
+ rxdy_gated_en(tp, false);
+ ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, rcr);
+ ret = -EBUSY;
+ goto out1;
+ }
+ }
+
+ clear_bit(WORK_ENABLE, &tp->flags);
+ usb_kill_urb(tp->intr_urb);
+
+ tp->rtl_ops.autosuspend_en(tp, true);
+
+ if (netif_carrier_ok(netdev)) {
+ napi_disable(&tp->napi);
+ rtl_stop_rx(tp);
+ rxdy_gated_en(tp, false);
+ ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, rcr);
+ napi_enable(&tp->napi);
+ }
}
+ set_bit(SELECTIVE_SUSPEND, &tp->flags);
+
+out1:
+ return ret;
+}
+
+static int rtl8152_system_suspend(struct r8152 *tp)
+{
+ struct net_device *netdev = tp->netdev;
+ int ret = 0;
+
+ netif_device_detach(netdev);
+
if (netif_running(netdev) && test_bit(WORK_ENABLE, &tp->flags)) {
clear_bit(WORK_ENABLE, &tp->flags);
usb_kill_urb(tp->intr_urb);
napi_disable(&tp->napi);
- if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
- rtl_stop_rx(tp);
- tp->rtl_ops.autosuspend_en(tp, true);
- } else {
- cancel_delayed_work_sync(&tp->schedule);
- tp->rtl_ops.down(tp);
- }
+ cancel_delayed_work_sync(&tp->schedule);
+ tp->rtl_ops.down(tp);
napi_enable(&tp->napi);
}
-out1:
+
+ return ret;
+}
+
+static int rtl8152_suspend(struct usb_interface *intf, pm_message_t message)
+{
+ struct r8152 *tp = usb_get_intfdata(intf);
+ int ret;
+
+ mutex_lock(&tp->control);
+
+ if (PMSG_IS_AUTO(message))
+ ret = rtl8152_runtime_suspend(tp);
+ else
+ ret = rtl8152_system_suspend(tp);
+
mutex_unlock(&tp->control);
return ret;
.flowi4_iif = LOOPBACK_IFINDEX,
.flowi4_tos = RT_TOS(ip4h->tos),
.flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF,
+ .flowi4_proto = ip4h->protocol,
.daddr = ip4h->daddr,
+ .saddr = ip4h->saddr,
};
struct net *net = dev_net(vrf_dev);
struct rtable *rt;
return -EINVAL;
vrf->tb_id = nla_get_u32(data[IFLA_VRF_TABLE]);
+ if (vrf->tb_id == RT_TABLE_UNSPEC)
+ return -EINVAL;
dev->priv_flags |= IFF_L3MDEV_MASTER;
/********************************************************************/
int orinoco_mic_init(struct orinoco_private *priv)
{
- priv->tx_tfm_mic = crypto_alloc_ahash("michael_mic", 0,
+ priv->tx_tfm_mic = crypto_alloc_shash("michael_mic", 0,
CRYPTO_ALG_ASYNC);
if (IS_ERR(priv->tx_tfm_mic)) {
printk(KERN_DEBUG "orinoco_mic_init: could not allocate "
return -ENOMEM;
}
- priv->rx_tfm_mic = crypto_alloc_ahash("michael_mic", 0,
+ priv->rx_tfm_mic = crypto_alloc_shash("michael_mic", 0,
CRYPTO_ALG_ASYNC);
if (IS_ERR(priv->rx_tfm_mic)) {
printk(KERN_DEBUG "orinoco_mic_init: could not allocate "
void orinoco_mic_free(struct orinoco_private *priv)
{
if (priv->tx_tfm_mic)
- crypto_free_ahash(priv->tx_tfm_mic);
+ crypto_free_shash(priv->tx_tfm_mic);
if (priv->rx_tfm_mic)
- crypto_free_ahash(priv->rx_tfm_mic);
+ crypto_free_shash(priv->rx_tfm_mic);
}
-int orinoco_mic(struct crypto_ahash *tfm_michael, u8 *key,
+int orinoco_mic(struct crypto_shash *tfm_michael, u8 *key,
u8 *da, u8 *sa, u8 priority,
u8 *data, size_t data_len, u8 *mic)
{
- AHASH_REQUEST_ON_STACK(req, tfm_michael);
- struct scatterlist sg[2];
+ SHASH_DESC_ON_STACK(desc, tfm_michael);
u8 hdr[ETH_HLEN + 2]; /* size of header + padding */
int err;
hdr[ETH_ALEN * 2 + 2] = 0;
hdr[ETH_ALEN * 2 + 3] = 0;
- /* Use scatter gather to MIC header and data in one go */
- sg_init_table(sg, 2);
- sg_set_buf(&sg[0], hdr, sizeof(hdr));
- sg_set_buf(&sg[1], data, data_len);
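+ /* Feed the header and payload to the MIC through the synchronous
+ * shash API, replacing the old scatterlist-based ahash digest.
+ */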
+ desc->tfm = tfm_michael;
+ desc->flags = 0;
- if (crypto_ahash_setkey(tfm_michael, key, MIC_KEYLEN))
- return -1;
+ err = crypto_shash_setkey(tfm_michael, key, MIC_KEYLEN);
+ if (err)
+ return err;
+
+ err = crypto_shash_init(desc);
+ if (err)
+ return err;
+
+ err = crypto_shash_update(desc, hdr, sizeof(hdr));
+ if (err)
+ return err;
+
+ err = crypto_shash_update(desc, data, data_len);
+ if (err)
+ return err;
+
+ err = crypto_shash_final(desc, mic);
+ shash_desc_zero(desc);
- ahash_request_set_tfm(req, tfm_michael);
- ahash_request_set_callback(req, 0, NULL, NULL);
- ahash_request_set_crypt(req, sg, mic, data_len + sizeof(hdr));
- err = crypto_ahash_digest(req);
- ahash_request_zero(req);
return err;
}
#define _ORINOCO_MIC_H_
#include <linux/types.h>
+#include <crypto/hash.h>
#define MICHAEL_MIC_LEN 8
int orinoco_mic_init(struct orinoco_private *priv);
void orinoco_mic_free(struct orinoco_private *priv);
-int orinoco_mic(struct crypto_ahash *tfm_michael, u8 *key,
+int orinoco_mic(struct crypto_shash *tfm_michael, u8 *key,
u8 *da, u8 *sa, u8 priority,
u8 *data, size_t data_len, u8 *mic);
u8 *wpa_ie;
int wpa_ie_len;
- struct crypto_ahash *rx_tfm_mic;
- struct crypto_ahash *tx_tfm_mic;
+ struct crypto_shash *rx_tfm_mic;
+ struct crypto_shash *tx_tfm_mic;
unsigned int wpa_enabled:1;
unsigned int tkip_cm_active:1;
return -ENOMEM;
}
rtlpriv = hw->priv;
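+ /* Set the ieee80211_hw backpointer before the private data is used elsewhere. */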
+ rtlpriv->hw = hw;
rtlpriv->usb_data = kzalloc(RTL_USB_MAX_RX_COUNT * sizeof(u32),
GFP_KERNEL);
if (!rtlpriv->usb_data)
if (ret)
return ret;
- /* Checking for ctrl->tagset is a trick to avoid sleeping on module
- * load, since we only need the quirk on reset_controller. Notice
- * that the HGST device needs this delay only in firmware activation
- * procedure; unfortunately we have no (easy) way to verify this.
- */
- if ((ctrl->quirks & NVME_QUIRK_DELAY_BEFORE_CHK_RDY) && ctrl->tagset)
+ if (ctrl->quirks & NVME_QUIRK_DELAY_BEFORE_CHK_RDY)
msleep(NVME_QUIRK_DELAY_AMOUNT);
return nvme_wait_ready(ctrl, cap, false);
struct nvme_fc_fcp_op *op)
{
struct nvmefc_fcp_req *freq = &op->fcp_req;
- u32 map_len = nvme_map_len(rq);
enum dma_data_direction dir;
int ret;
freq->sg_cnt = 0;
- if (!map_len)
+ if (!blk_rq_payload_bytes(rq))
return 0;
freq->sg_table.sgl = freq->first_sgl;
if (ret)
return ret;
- data_len = nvme_map_len(rq);
+ data_len = blk_rq_payload_bytes(rq);
if (data_len)
io_dir = ((rq_data_dir(rq) == WRITE) ?
NVMEFC_FCP_WRITE : NVMEFC_FCP_READ);
return (sector >> (ns->lba_shift - 9));
}
-static inline unsigned nvme_map_len(struct request *rq)
-{
- if (req_op(rq) == REQ_OP_DISCARD)
- return sizeof(struct nvme_dsm_range);
- else
- return blk_rq_bytes(rq);
-}
-
static inline void nvme_cleanup_cmd(struct request *req)
{
if (req->rq_flags & RQF_SPECIAL_PAYLOAD) {
return (__le64 **)(iod->sg + blk_rq_nr_phys_segments(req));
}
-static int nvme_init_iod(struct request *rq, unsigned size,
- struct nvme_dev *dev)
+static int nvme_init_iod(struct request *rq, struct nvme_dev *dev)
{
struct nvme_iod *iod = blk_mq_rq_to_pdu(rq);
int nseg = blk_rq_nr_phys_segments(rq);
+ unsigned int size = blk_rq_payload_bytes(rq);
if (nseg > NVME_INT_PAGES || size > NVME_INT_BYTES(dev)) {
iod->sg = kmalloc(nvme_iod_alloc_size(dev, size, nseg), GFP_ATOMIC);
}
#endif
-static bool nvme_setup_prps(struct nvme_dev *dev, struct request *req,
- int total_len)
+static bool nvme_setup_prps(struct nvme_dev *dev, struct request *req)
{
struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
struct dma_pool *pool;
- int length = total_len;
+ int length = blk_rq_payload_bytes(req);
struct scatterlist *sg = iod->sg;
int dma_len = sg_dma_len(sg);
u64 dma_addr = sg_dma_address(sg);
}
static int nvme_map_data(struct nvme_dev *dev, struct request *req,
- unsigned size, struct nvme_command *cmnd)
+ struct nvme_command *cmnd)
{
struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
struct request_queue *q = req->q;
DMA_ATTR_NO_WARN))
goto out;
- if (!nvme_setup_prps(dev, req, size))
+ if (!nvme_setup_prps(dev, req))
goto out_unmap;
ret = BLK_MQ_RQ_QUEUE_ERROR;
struct nvme_dev *dev = nvmeq->dev;
struct request *req = bd->rq;
struct nvme_command cmnd;
- unsigned map_len;
int ret = BLK_MQ_RQ_QUEUE_OK;
/*
if (ret != BLK_MQ_RQ_QUEUE_OK)
return ret;
- map_len = nvme_map_len(req);
- ret = nvme_init_iod(req, map_len, dev);
+ ret = nvme_init_iod(req, dev);
if (ret != BLK_MQ_RQ_QUEUE_OK)
goto out_free_cmd;
if (blk_rq_nr_phys_segments(req))
- ret = nvme_map_data(dev, req, map_len, &cmnd);
+ ret = nvme_map_data(dev, req, &cmnd);
if (ret != BLK_MQ_RQ_QUEUE_OK)
goto out_cleanup_iod;
}
static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
- struct request *rq, unsigned int map_len,
- struct nvme_command *c)
+ struct request *rq, struct nvme_command *c)
{
struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
struct nvme_rdma_device *dev = queue->device;
}
if (count == 1) {
- if (rq_data_dir(rq) == WRITE &&
- map_len <= nvme_rdma_inline_data_size(queue) &&
- nvme_rdma_queue_idx(queue))
+ if (rq_data_dir(rq) == WRITE && nvme_rdma_queue_idx(queue) &&
+ blk_rq_payload_bytes(rq) <=
+ nvme_rdma_inline_data_size(queue))
return nvme_rdma_map_sg_inline(queue, req, c);
if (dev->pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY)
struct request *rq)
{
if (unlikely(!test_bit(NVME_RDMA_Q_LIVE, &queue->flags))) {
- struct nvme_command *cmd = (struct nvme_command *)rq->cmd;
+ struct nvme_command *cmd = nvme_req(rq)->cmd;
if (rq->cmd_type != REQ_TYPE_DRV_PRIV ||
cmd->common.opcode != nvme_fabrics_command ||
struct nvme_command *c = sqe->data;
bool flush = false;
struct ib_device *dev;
- unsigned int map_len;
int ret;
WARN_ON_ONCE(rq->tag < 0);
blk_mq_start_request(rq);
- map_len = nvme_map_len(rq);
- ret = nvme_rdma_map_data(queue, rq, map_len, c);
+ ret = nvme_rdma_map_data(queue, rq, c);
if (ret < 0) {
dev_err(queue->ctrl->ctrl.device,
"Failed to map data (%d)\n", ret);
goto unwind_vring_allocations;
}
- /* track the rvdevs list reference */
- kref_get(&rvdev->refcount);
-
list_add_tail(&rvdev->node, &rproc->rvdevs);
rproc_add_subdev(rproc, &rvdev->subdev,
/*
* Create a copy of the resource table. When a virtio device starts
* and calls vring_new_virtqueue() the address of the allocated vring
- * will be stored in the table_ptr. Before the device is started,
- * table_ptr will be copied into device memory.
+ * will be stored in the cached_table. Before the device is started,
+ * cached_table will be copied into device memory.
*/
- rproc->table_ptr = kmemdup(table, tablesz, GFP_KERNEL);
- if (!rproc->table_ptr)
+ rproc->cached_table = kmemdup(table, tablesz, GFP_KERNEL);
+ if (!rproc->cached_table)
goto clean_up;
+ rproc->table_ptr = rproc->cached_table;
+
/* reset max_notifyid */
rproc->max_notifyid = -1;
}
/*
- * The starting device has been given the rproc->table_ptr as the
+ * The starting device has been given the rproc->cached_table as the
* resource table. The address of the vring along with the other
- * allocated resources (carveouts etc) is stored in table_ptr.
+ * allocated resources (carveouts etc) is stored in cached_table.
* In order to pass this information to the remote device we must copy
* this information to device memory. We also update the table_ptr so
* that any subsequent changes will be applied to the loaded version.
*/
loaded_table = rproc_find_loaded_rsc_table(rproc, fw);
- if (loaded_table)
- memcpy(loaded_table, rproc->table_ptr, tablesz);
+ if (loaded_table) {
+ memcpy(loaded_table, rproc->cached_table, tablesz);
+ rproc->table_ptr = loaded_table;
+ }
/* power up the remote processor */
ret = rproc->ops->start(rproc);
clean_up_resources:
rproc_resource_cleanup(rproc);
clean_up:
- kfree(rproc->table_ptr);
+ kfree(rproc->cached_table);
+ rproc->cached_table = NULL;
rproc->table_ptr = NULL;
rproc_disable_iommu(rproc);
rproc_disable_iommu(rproc);
/* Free the copy of the resource table */
- kfree(rproc->table_ptr);
+ kfree(rproc->cached_table);
+ rproc->cached_table = NULL;
rproc->table_ptr = NULL;
/* if in crash state, unlock crash handler */
struct device *dev = &rpdev->dev;
int ret;
- dev_set_name(&rpdev->dev, "%s:%s",
- dev_name(dev->parent), rpdev->id.name);
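+ /* Include src/dst addresses so identically named channels get unique device names. */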
+ dev_set_name(&rpdev->dev, "%s.%s.%d.%d", dev_name(dev->parent),
+ rpdev->id.name, rpdev->src, rpdev->dst);
rpdev->dev.bus = &rpmsg_bus;
rpdev->dev.release = rpmsg_release_device;
u32 bfi_image_cb_size, bfi_image_ct_size, bfi_image_ct2_size;
u32 *bfi_image_cb, *bfi_image_ct, *bfi_image_ct2;
-#define BFAD_FW_FILE_CB "cbfw-3.2.3.0.bin"
-#define BFAD_FW_FILE_CT "ctfw-3.2.3.0.bin"
-#define BFAD_FW_FILE_CT2 "ct2fw-3.2.3.0.bin"
+#define BFAD_FW_FILE_CB "cbfw-3.2.5.1.bin"
+#define BFAD_FW_FILE_CT "ctfw-3.2.5.1.bin"
+#define BFAD_FW_FILE_CT2 "ct2fw-3.2.5.1.bin"
static u32 *bfad_load_fwimg(struct pci_dev *pdev);
static void bfad_free_fwimg(void);
#ifdef BFA_DRIVER_VERSION
#define BFAD_DRIVER_VERSION BFA_DRIVER_VERSION
#else
-#define BFAD_DRIVER_VERSION "3.2.25.0"
+#define BFAD_DRIVER_VERSION "3.2.25.1"
#endif
#define BFAD_PROTO_NAME FCPI_NAME
struct completion *remove_wait; /* device remove thread blocks */
atomic_t in_flight; /* io counter */
+ bool internal_reset_inprogress;
u32 _reserved; /* fill hole */
unsigned long state_flags; /* protected by host lock */
enum fnic_state state;
unsigned long wait_host_tmo;
struct Scsi_Host *shost = sc->device->host;
struct fc_lport *lp = shost_priv(shost);
+ struct fnic *fnic = lport_priv(lp);
+ unsigned long flags;
+
+ spin_lock_irqsave(&fnic->fnic_lock, flags);
+ if (!fnic->internal_reset_inprogress) {
+ fnic->internal_reset_inprogress = true;
+ } else {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+ "host reset in progress skipping another host reset\n");
+ return SUCCESS;
+ }
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
/*
* If fnic_reset is successful, wait for fabric login to complete
}
}
+ spin_lock_irqsave(&fnic->fnic_lock, flags);
+ fnic->internal_reset_inprogress = false;
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
return ret;
}
1, 1);
if (rc) {
pr_err("srp_transfer_data() failed: %d\n", rc);
- return -EAGAIN;
+ return -EIO;
}
/*
* We now tell TCM to add this WRITE CDB directly into the TCM storage
config QEDI
tristate "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver Support"
- depends on PCI && SCSI
+ depends on PCI && SCSI && UIO
depends on QED
select SCSI_ISCSI_ATTRS
select QED_LL2
count = blk_rq_map_sg(req->q, req, sdb->table.sgl);
BUG_ON(count > sdb->table.nents);
sdb->table.nents = count;
- sdb->length = blk_rq_bytes(req);
+ sdb->length = blk_rq_payload_bytes(req);
return BLKPREP_OK;
}
* request queue.
*/
if (q->mq_ops) {
- blk_mq_stop_hw_queues(q);
+ blk_mq_quiesce_queue(q);
} else {
spin_lock_irqsave(q->queue_lock, flags);
blk_stop_queue(q);
struct bio *bio = rq->bio;
sector_t sector = blk_rq_pos(rq);
unsigned int nr_sectors = blk_rq_sectors(rq);
- unsigned int nr_bytes = blk_rq_bytes(rq);
int ret;
if (sdkp->device->no_write_same)
cmd->transfersize = sdp->sector_size;
cmd->allowed = SD_MAX_RETRIES;
-
- /*
- * For WRITE_SAME the data transferred in the DATA IN buffer is
- * different from the amount of data actually written to the target.
- *
- * We set up __data_len to the amount of data transferred from the
- * DATA IN buffer so that blk_rq_map_sg set up the proper S/G list
- * to transfer a single sector of data first, but then reset it to
- * the amount of data to be written right after so that the I/O path
- * knows how much to actually write.
- */
- rq->__data_len = sdp->sector_size;
- ret = scsi_init_io(cmd);
- rq->__data_len = nr_bytes;
- return ret;
+ return scsi_init_io(cmd);
}
static int sd_setup_flush_cmnd(struct scsi_cmnd *cmd)
if (!pool) {
SNIC_HOST_ERR(shost, "dflt sgl pool creation failed\n");
+ ret = -ENOMEM;
goto err_free_res;
}
if (!pool) {
SNIC_HOST_ERR(shost, "max sgl pool creation failed\n");
+ ret = -ENOMEM;
goto err_free_dflt_sgl_pool;
}
if (!pool) {
SNIC_HOST_ERR(shost, "snic tmreq info pool creation failed.\n");
+ ret = -ENOMEM;
goto err_free_max_sgl_pool;
}
case TCM_LOGICAL_BLOCK_APP_TAG_CHECK_FAILED:
case TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED:
case TCM_COPY_TARGET_DEVICE_NOT_REACHABLE:
+ case TCM_TOO_MANY_TARGET_DESCS:
+ case TCM_UNSUPPORTED_TARGET_DESC_TYPE_CODE:
+ case TCM_TOO_MANY_SEGMENT_DESCS:
+ case TCM_UNSUPPORTED_SEGMENT_DESC_TYPE_CODE:
break;
case TCM_OUT_OF_RESOURCES:
sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
.key = ILLEGAL_REQUEST,
.asc = 0x26, /* INVALID FIELD IN PARAMETER LIST */
},
+ [TCM_TOO_MANY_TARGET_DESCS] = {
+ .key = ILLEGAL_REQUEST,
+ .asc = 0x26,
+ .ascq = 0x06, /* TOO MANY TARGET DESCRIPTORS */
+ },
+ [TCM_UNSUPPORTED_TARGET_DESC_TYPE_CODE] = {
+ .key = ILLEGAL_REQUEST,
+ .asc = 0x26,
+ .ascq = 0x07, /* UNSUPPORTED TARGET DESCRIPTOR TYPE CODE */
+ },
+ [TCM_TOO_MANY_SEGMENT_DESCS] = {
+ .key = ILLEGAL_REQUEST,
+ .asc = 0x26,
+ .ascq = 0x08, /* TOO MANY SEGMENT DESCRIPTORS */
+ },
+ [TCM_UNSUPPORTED_SEGMENT_DESC_TYPE_CODE] = {
+ .key = ILLEGAL_REQUEST,
+ .asc = 0x26,
+ .ascq = 0x09, /* UNSUPPORTED SEGMENT DESCRIPTOR TYPE CODE */
+ },
[TCM_PARAMETER_LIST_LENGTH_ERROR] = {
.key = ILLEGAL_REQUEST,
.asc = 0x1a, /* PARAMETER LIST LENGTH ERROR */
return 0;
}
-static int target_xcopy_locate_se_dev_e4(struct se_cmd *se_cmd, struct xcopy_op *xop,
- bool src)
+static int target_xcopy_locate_se_dev_e4(const unsigned char *dev_wwn,
+ struct se_device **found_dev)
{
struct se_device *se_dev;
- unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN], *dev_wwn;
+ unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
int rc;
- if (src)
- dev_wwn = &xop->dst_tid_wwn[0];
- else
- dev_wwn = &xop->src_tid_wwn[0];
-
mutex_lock(&g_device_mutex);
list_for_each_entry(se_dev, &g_device_list, g_dev_node) {
if (rc != 0)
continue;
- if (src) {
- xop->dst_dev = se_dev;
- pr_debug("XCOPY 0xe4: Setting xop->dst_dev: %p from located"
- " se_dev\n", xop->dst_dev);
- } else {
- xop->src_dev = se_dev;
- pr_debug("XCOPY 0xe4: Setting xop->src_dev: %p from located"
- " se_dev\n", xop->src_dev);
- }
+ *found_dev = se_dev;
+ pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev);
rc = target_depend_item(&se_dev->dev_group.cg_item);
if (rc != 0) {
}
static int target_xcopy_parse_tiddesc_e4(struct se_cmd *se_cmd, struct xcopy_op *xop,
- unsigned char *p, bool src)
+ unsigned char *p, unsigned short cscd_index)
{
unsigned char *desc = p;
unsigned short ript;
return -EINVAL;
}
- if (src) {
+ if (cscd_index != xop->stdi && cscd_index != xop->dtdi) {
+ pr_debug("XCOPY 0xe4: ignoring CSCD entry %d - neither src nor "
+ "dest\n", cscd_index);
+ return 0;
+ }
+
+ if (cscd_index == xop->stdi) {
memcpy(&xop->src_tid_wwn[0], &desc[8], XCOPY_NAA_IEEE_REGEX_LEN);
/*
* Determine if the source designator matches the local device
pr_debug("XCOPY 0xe4: Set xop->src_dev %p from source"
" received xop\n", xop->src_dev);
}
- } else {
+ }
+
+ if (cscd_index == xop->dtdi) {
memcpy(&xop->dst_tid_wwn[0], &desc[8], XCOPY_NAA_IEEE_REGEX_LEN);
/*
- * Determine if the destination designator matches the local device
+ * Determine if the destination designator matches the local
+ * device. If @cscd_index corresponds to both source (stdi) and
+ * destination (dtdi), or dtdi comes after stdi, then
+ * XCOL_DEST_RECV_OP wins.
*/
if (!memcmp(&xop->local_dev_wwn[0], &xop->dst_tid_wwn[0],
XCOPY_NAA_IEEE_REGEX_LEN)) {
{
struct se_device *local_dev = se_cmd->se_dev;
unsigned char *desc = p;
- int offset = tdll % XCOPY_TARGET_DESC_LEN, rc, ret = 0;
+ int offset = tdll % XCOPY_TARGET_DESC_LEN, rc;
+ unsigned short cscd_index = 0;
unsigned short start = 0;
- bool src = true;
*sense_ret = TCM_INVALID_PARAMETER_LIST;
if (offset != 0) {
pr_err("XCOPY target descriptor list length is not"
" multiple of %d\n", XCOPY_TARGET_DESC_LEN);
+ *sense_ret = TCM_UNSUPPORTED_TARGET_DESC_TYPE_CODE;
return -EINVAL;
}
- if (tdll > 64) {
+ if (tdll > RCR_OP_MAX_TARGET_DESC_COUNT * XCOPY_TARGET_DESC_LEN) {
pr_err("XCOPY target descriptor supports a maximum"
" two src/dest descriptors, tdll: %hu too large..\n", tdll);
+ /* spc4r37 6.4.3.4 CSCD DESCRIPTOR LIST LENGTH field */
+ *sense_ret = TCM_TOO_MANY_TARGET_DESCS;
return -EINVAL;
}
/*
while (start < tdll) {
/*
- * Check target descriptor identification with 0xE4 type with
- * use VPD 0x83 WWPN matching ..
+ * Check target descriptor identification with 0xE4 type, and
+ * compare the current index with the CSCD descriptor IDs in
+ * the segment descriptor. Use VPD 0x83 WWPN matching ..
*/
switch (desc[0]) {
case 0xe4:
rc = target_xcopy_parse_tiddesc_e4(se_cmd, xop,
- &desc[0], src);
+ &desc[0], cscd_index);
if (rc != 0)
goto out;
- /*
- * Assume target descriptors are in source -> destination order..
- */
- if (src)
- src = false;
- else
- src = true;
start += XCOPY_TARGET_DESC_LEN;
desc += XCOPY_TARGET_DESC_LEN;
- ret++;
+ cscd_index++;
break;
default:
pr_err("XCOPY unsupported descriptor type code:"
" 0x%02x\n", desc[0]);
+ *sense_ret = TCM_UNSUPPORTED_TARGET_DESC_TYPE_CODE;
goto out;
}
}
- if (xop->op_origin == XCOL_SOURCE_RECV_OP)
- rc = target_xcopy_locate_se_dev_e4(se_cmd, xop, true);
- else
- rc = target_xcopy_locate_se_dev_e4(se_cmd, xop, false);
+ switch (xop->op_origin) {
+ case XCOL_SOURCE_RECV_OP:
+ rc = target_xcopy_locate_se_dev_e4(xop->dst_tid_wwn,
+ &xop->dst_dev);
+ break;
+ case XCOL_DEST_RECV_OP:
+ rc = target_xcopy_locate_se_dev_e4(xop->src_tid_wwn,
+ &xop->src_dev);
+ break;
+ default:
+ pr_err("XCOPY CSCD descriptor IDs not found in CSCD list - "
+ "stdi: %hu dtdi: %hu\n", xop->stdi, xop->dtdi);
+ rc = -EINVAL;
+ break;
+ }
/*
* If a matching IEEE NAA 0x83 descriptor for the requested device
* is not located on this node, return COPY_ABORTED with ASQ/ASQC
pr_debug("XCOPY TGT desc: Dest dev: %p NAA IEEE WWN: 0x%16phN\n",
xop->dst_dev, &xop->dst_tid_wwn[0]);
- return ret;
+ return cscd_index;
out:
return -EINVAL;
xop->stdi = get_unaligned_be16(&desc[4]);
xop->dtdi = get_unaligned_be16(&desc[6]);
+
+ if (xop->stdi > XCOPY_CSCD_DESC_ID_LIST_OFF_MAX ||
+ xop->dtdi > XCOPY_CSCD_DESC_ID_LIST_OFF_MAX) {
+ pr_err("XCOPY segment desc 0x02: unsupported CSCD ID > 0x%x; stdi: %hu dtdi: %hu\n",
+ XCOPY_CSCD_DESC_ID_LIST_OFF_MAX, xop->stdi, xop->dtdi);
+ return -EINVAL;
+ }
+
pr_debug("XCOPY seg desc 0x02: desc_len: %hu stdi: %hu dtdi: %hu, DC: %d\n",
desc_len, xop->stdi, xop->dtdi, dc);
static int target_xcopy_parse_segment_descriptors(struct se_cmd *se_cmd,
struct xcopy_op *xop, unsigned char *p,
- unsigned int sdll)
+ unsigned int sdll, sense_reason_t *sense_ret)
{
unsigned char *desc = p;
unsigned int start = 0;
int offset = sdll % XCOPY_SEGMENT_DESC_LEN, rc, ret = 0;
+ *sense_ret = TCM_INVALID_PARAMETER_LIST;
+
if (offset != 0) {
pr_err("XCOPY segment descriptor list length is not"
" multiple of %d\n", XCOPY_SEGMENT_DESC_LEN);
+ *sense_ret = TCM_UNSUPPORTED_SEGMENT_DESC_TYPE_CODE;
+ return -EINVAL;
+ }
+ if (sdll > RCR_OP_MAX_SG_DESC_COUNT * XCOPY_SEGMENT_DESC_LEN) {
+ pr_err("XCOPY supports %u segment descriptor(s), sdll: %u too"
+ " large..\n", RCR_OP_MAX_SG_DESC_COUNT, sdll);
+ /* spc4r37 6.4.3.5 SEGMENT DESCRIPTOR LIST LENGTH field */
+ *sense_ret = TCM_TOO_MANY_SEGMENT_DESCS;
return -EINVAL;
}
default:
pr_err("XCOPY unsupported segment descriptor"
"type: 0x%02x\n", desc[0]);
+ *sense_ret = TCM_UNSUPPORTED_SEGMENT_DESC_TYPE_CODE;
goto out;
}
}
return TCM_UNSUPPORTED_SCSI_OPCODE;
}
+ if (se_cmd->data_length == 0) {
+ target_complete_cmd(se_cmd, SAM_STAT_GOOD);
+ return TCM_NO_SENSE;
+ }
+ if (se_cmd->data_length < XCOPY_HDR_LEN) {
+ pr_err("XCOPY parameter truncation: length %u < hdr_len %u\n",
+ se_cmd->data_length, XCOPY_HDR_LEN);
+ return TCM_PARAMETER_LIST_LENGTH_ERROR;
+ }
+
xop = kzalloc(sizeof(struct xcopy_op), GFP_KERNEL);
if (!xop) {
pr_err("Unable to allocate xcopy_op\n");
*/
tdll = get_unaligned_be16(&p[2]);
sdll = get_unaligned_be32(&p[8]);
+ if (tdll + sdll > RCR_OP_MAX_DESC_LIST_LEN) {
+ pr_err("XCOPY descriptor list length %u exceeds maximum %u\n",
+ tdll + sdll, RCR_OP_MAX_DESC_LIST_LEN);
+ ret = TCM_PARAMETER_LIST_LENGTH_ERROR;
+ goto out;
+ }
inline_dl = get_unaligned_be32(&p[12]);
if (inline_dl != 0) {
goto out;
}
+ if (se_cmd->data_length < (XCOPY_HDR_LEN + tdll + sdll + inline_dl)) {
+ pr_err("XCOPY parameter truncation: data length %u too small "
+ "for tdll: %hu sdll: %u inline_dl: %u\n",
+ se_cmd->data_length, tdll, sdll, inline_dl);
+ ret = TCM_PARAMETER_LIST_LENGTH_ERROR;
+ goto out;
+ }
+
pr_debug("Processing XCOPY with list_id: 0x%02x list_id_usage: 0x%02x"
" tdll: %hu sdll: %u inline_dl: %u\n", list_id, list_id_usage,
tdll, sdll, inline_dl);
+ /*
+ * Skip over the target descriptors until the segment descriptors
+ * have been parsed - their CSCD IDs are needed to determine src and dest.
+ */
+ seg_desc = &p[16] + tdll;
+
+ rc = target_xcopy_parse_segment_descriptors(se_cmd, xop, seg_desc,
+ sdll, &ret);
+ if (rc <= 0)
+ goto out;
+
+ pr_debug("XCOPY: Processed %d segment descriptors, length: %u\n", rc,
+ rc * XCOPY_SEGMENT_DESC_LEN);
+
rc = target_xcopy_parse_target_descriptors(se_cmd, xop, &p[16], tdll, &ret);
if (rc <= 0)
goto out;
pr_debug("XCOPY: Processed %d target descriptors, length: %u\n", rc,
rc * XCOPY_TARGET_DESC_LEN);
- seg_desc = &p[16];
- seg_desc += (rc * XCOPY_TARGET_DESC_LEN);
-
- rc = target_xcopy_parse_segment_descriptors(se_cmd, xop, seg_desc, sdll);
- if (rc <= 0) {
- xcopy_pt_undepend_remotedev(xop);
- goto out;
- }
transport_kunmap_data_sg(se_cmd);
- pr_debug("XCOPY: Processed %d segment descriptors, length: %u\n", rc,
- rc * XCOPY_SEGMENT_DESC_LEN);
INIT_WORK(&xop->xop_work, target_xcopy_do_work);
queue_work(xcopy_wq, &xop->xop_work);
return TCM_NO_SENSE;
#include <target/target_core_base.h>
+#define XCOPY_HDR_LEN 16
#define XCOPY_TARGET_DESC_LEN 32
#define XCOPY_SEGMENT_DESC_LEN 28
#define XCOPY_NAA_IEEE_REGEX_LEN 16
#define XCOPY_MAX_SECTORS 1024
+/*
+ * SPC4r37 6.4.6.1
+ * Table 150 — CSCD descriptor ID values
+ */
+#define XCOPY_CSCD_DESC_ID_LIST_OFF_MAX 0x07FF
+
enum xcopy_origin_list {
XCOL_SOURCE_RECV_OP = 0x01,
XCOL_DEST_RECV_OP = 0x02,
.device = uart_console_device,
.setup = univ8250_console_setup,
.match = univ8250_console_match,
- .flags = CON_PRINTBUFFER | CON_ANYTIME | CON_CONSDEV,
+ .flags = CON_PRINTBUFFER | CON_ANYTIME,
.index = -1,
.data = &serial8250_reg,
};
static void serial8250_io_resume(struct pci_dev *dev)
{
struct serial_private *priv = pci_get_drvdata(dev);
- const struct pciserial_board *board;
+ struct serial_private *new;
if (!priv)
return;
- board = priv->board;
- kfree(priv);
- priv = pciserial_init_ports(dev, board);
-
- if (!IS_ERR(priv)) {
- pci_set_drvdata(dev, priv);
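+ /* Replace drvdata and free the old state only if re-initialization succeeds. */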
+ new = pciserial_init_ports(dev, priv->board);
+ if (!IS_ERR(new)) {
+ pci_set_drvdata(dev, new);
+ kfree(priv);
}
}
* Enable previously disabled RX interrupts.
*/
if (!(p->port.rs485.flags & SER_RS485_RX_DURING_TX)) {
- serial8250_clear_fifos(p);
+ serial8250_clear_and_reinit_fifos(p);
p->ier |= UART_IER_RLSI | UART_IER_RDI;
serial_port_out(&p->port, UART_IER, p->ier);
/* disable PDC transmit */
atmel_uart_writel(port, ATMEL_PDC_PTCR, ATMEL_PDC_TXTDIS);
}
+
+ /*
+ * Disable the transmitter.
+ * This is mandatory when DMA is used, otherwise the remainder of
+ * the DMA buffer would be transmitted anyway.
+ */
+ atmel_uart_writel(port, ATMEL_US_CR, ATMEL_US_TXDIS);
+
/* Disable interrupts */
atmel_uart_writel(port, ATMEL_US_IDR, atmel_port->tx_done_mask);
/* Enable interrupts */
atmel_uart_writel(port, ATMEL_US_IER, atmel_port->tx_done_mask);
+
+ /* re-enable the transmitter */
+ atmel_uart_writel(port, ATMEL_US_CR, ATMEL_US_TXEN);
}
/*
*/
if (!uart_circ_empty(xmit))
atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+ else if ((port->rs485.flags & SER_RS485_ENABLED) &&
+ !(port->rs485.flags & SER_RS485_RX_DURING_TX)) {
+ /* DMA done, stop TX, start RX for RS485 */
+ atmel_start_rx(port);
+ }
spin_unlock_irqrestore(&port->lock, flags);
}
desc->callback = atmel_complete_tx_dma;
desc->callback_param = atmel_port;
atmel_port->cookie_tx = dmaengine_submit(desc);
-
- } else {
- if (port->rs485.flags & SER_RS485_ENABLED) {
- /* DMA done, stop TX, start RX for RS485 */
- atmel_start_rx(port);
- }
}
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
{
.flags = INPUT_DEVICE_ID_MATCH_EVBIT |
INPUT_DEVICE_ID_MATCH_KEYBIT,
- .evbit = { BIT_MASK(EV_KEY) },
- .keybit = { BIT_MASK(KEY_LEFTALT) },
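+ /* Index with BIT_WORD() so each mask lands in the correct array word. */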
+ .evbit = { [BIT_WORD(EV_KEY)] = BIT_MASK(EV_KEY) },
+ .keybit = { [BIT_WORD(KEY_LEFTALT)] = BIT_MASK(KEY_LEFTALT) },
},
{ },
};
spin_lock_irqsave(&xhci->lock, flags);
ep->stop_cmds_pending--;
- if (xhci->xhc_state & XHCI_STATE_REMOVING) {
- spin_unlock_irqrestore(&xhci->lock, flags);
- return;
- }
- if (xhci->xhc_state & XHCI_STATE_DYING) {
- xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
- "Stop EP timer ran, but another timer marked "
- "xHCI as DYING, exiting.");
- spin_unlock_irqrestore(&xhci->lock, flags);
- return;
- }
if (!(ep->stop_cmds_pending == 0 && (ep->ep_state & EP_HALT_PENDING))) {
xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
"Stop EP timer ran, but no command pending, "
xhci_urb_free_priv(urb_priv);
return ret;
}
- if ((xhci->xhc_state & XHCI_STATE_DYING) ||
- (xhci->xhc_state & XHCI_STATE_HALTED)) {
- xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
- "Ep 0x%x: URB %p to be canceled on "
- "non-responsive xHCI host.",
- urb->ep->desc.bEndpointAddress, urb);
- /* Let the stop endpoint command watchdog timer (which set this
- * state) finish cleaning up the endpoint TD lists. We must
- * have caught it in the middle of dropping a lock and giving
- * back an URB.
- */
- goto done;
- }
ep_index = xhci_get_endpoint_index(&urb->ep->desc);
ep = &xhci->devs[urb->dev->slot_id]->eps[ep_index];
unsigned i;
seq_printf(s, "MUSB (M)HDRC Register Dump\n");
+ pm_runtime_get_sync(musb->controller);
for (i = 0; i < ARRAY_SIZE(musb_regmap); i++) {
switch (musb_regmap[i].size) {
}
}
+ pm_runtime_mark_last_busy(musb->controller);
+ pm_runtime_put_autosuspend(musb->controller);
return 0;
}
struct musb *musb = s->private;
unsigned test;
+ pm_runtime_get_sync(musb->controller);
test = musb_readb(musb->mregs, MUSB_TESTMODE);
+ pm_runtime_mark_last_busy(musb->controller);
+ pm_runtime_put_autosuspend(musb->controller);
if (test & MUSB_TEST_FORCE_HOST)
seq_printf(s, "force host\n");
u8 test;
char buf[18];
+ pm_runtime_get_sync(musb->controller);
test = musb_readb(musb->mregs, MUSB_TESTMODE);
if (test) {
dev_err(musb->controller, "Error: test mode is already set. "
"Please do USB Bus Reset to start a new test.\n");
- return count;
+ goto ret;
}
memset(buf, 0x00, sizeof(buf));
musb_writeb(musb->mregs, MUSB_TESTMODE, test);
+ret:
+ pm_runtime_mark_last_busy(musb->controller);
+ pm_runtime_put_autosuspend(musb->controller);
return count;
}
switch (musb->xceiv->otg->state) {
case OTG_STATE_A_HOST:
case OTG_STATE_A_WAIT_BCON:
+ pm_runtime_get_sync(musb->controller);
+
reg = musb_readb(musb->mregs, MUSB_DEVCTL);
connect = reg & MUSB_DEVCTL_SESSION ? 1 : 0;
+
+ pm_runtime_mark_last_busy(musb->controller);
+ pm_runtime_put_autosuspend(musb->controller);
break;
default:
connect = -1;
if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
+ pm_runtime_get_sync(musb->controller);
if (!strncmp(buf, "0", 1)) {
switch (musb->xceiv->otg->state) {
case OTG_STATE_A_HOST:
}
}
+ pm_runtime_mark_last_busy(musb->controller);
+ pm_runtime_put_autosuspend(musb->controller);
return count;
}
unsigned baud_rate; /* set baud rate */
u8 line_control; /* set line control value RTS/DTR */
u8 line_status; /* active status of modem control inputs */
+ u8 lcr;
};
static void ch341_set_termios(struct tty_struct *tty,
r = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), request,
USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
value, index, NULL, 0, DEFAULT_TIMEOUT);
+ if (r < 0)
+ dev_err(&dev->dev, "failed to send control message: %d\n", r);
return r;
}
r = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), request,
USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
value, index, buf, bufsize, DEFAULT_TIMEOUT);
- return r;
+ if (r < (int)bufsize) {
+ if (r >= 0) {
+ dev_err(&dev->dev,
+ "short control message received (%d < %u)\n",
+ r, bufsize);
+ r = -EIO;
+ }
+
+ dev_err(&dev->dev, "failed to receive control message: %d\n",
+ r);
+ return r;
+ }
+
+ return 0;
}
-static int ch341_init_set_baudrate(struct usb_device *dev,
- struct ch341_private *priv, unsigned ctrl)
+static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ struct ch341_private *priv, u8 lcr)
{
short a;
int r;
factor = 0x10000 - factor;
a = (factor & 0xff00) | divisor;
- /* 0x9c is "enable SFR_UART Control register and timer" */
- r = ch341_control_out(dev, CH341_REQ_SERIAL_INIT,
- 0x9c | (ctrl << 8), a | 0x80);
+ /*
+ * CH341A buffers data until a full endpoint-size packet (32 bytes)
+ * has been received unless bit 7 is set.
+ */
+ a |= BIT(7);
+
+ r = ch341_control_out(dev, CH341_REQ_WRITE_REG, 0x1312, a);
+ if (r)
+ return r;
+
+ r = ch341_control_out(dev, CH341_REQ_WRITE_REG, 0x2518, lcr);
+
return r;
}
static int ch341_get_status(struct usb_device *dev, struct ch341_private *priv)
{
+ const unsigned int size = 2;
char *buffer;
int r;
- const unsigned size = 8;
unsigned long flags;
buffer = kmalloc(size, GFP_KERNEL);
if (r < 0)
goto out;
- /* setup the private status if available */
- if (r == 2) {
- r = 0;
- spin_lock_irqsave(&priv->lock, flags);
- priv->line_status = (~(*buffer)) & CH341_BITS_MODEM_STAT;
- spin_unlock_irqrestore(&priv->lock, flags);
- } else
- r = -EPROTO;
+ spin_lock_irqsave(&priv->lock, flags);
+ priv->line_status = (~(*buffer)) & CH341_BITS_MODEM_STAT;
+ spin_unlock_irqrestore(&priv->lock, flags);
out: kfree(buffer);
return r;
static int ch341_configure(struct usb_device *dev, struct ch341_private *priv)
{
+ const unsigned int size = 2;
char *buffer;
int r;
- const unsigned size = 8;
buffer = kmalloc(size, GFP_KERNEL);
if (!buffer)
if (r < 0)
goto out;
- r = ch341_init_set_baudrate(dev, priv, 0);
+ r = ch341_set_baudrate_lcr(dev, priv, priv->lcr);
if (r < 0)
goto out;
spin_lock_init(&priv->lock);
priv->baud_rate = DEFAULT_BAUD_RATE;
- priv->line_control = CH341_BIT_RTS | CH341_BIT_DTR;
r = ch341_configure(port->serial->dev, priv);
if (r < 0)
r = ch341_configure(serial->dev, priv);
if (r)
- goto out;
+ return r;
if (tty)
ch341_set_termios(tty, port, NULL);
if (r) {
dev_err(&port->dev, "%s - failed to submit interrupt urb: %d\n",
__func__, r);
- goto out;
+ return r;
}
r = usb_serial_generic_open(tty, port);
+ if (r)
+ goto err_kill_interrupt_urb;
+
+ return 0;
+
+err_kill_interrupt_urb:
+ usb_kill_urb(port->interrupt_in_urb);
-out: return r;
+ return r;
}
/* Old_termios contains the original termios settings and
baud_rate = tty_get_baud_rate(tty);
- priv->baud_rate = baud_rate;
ctrl = CH341_LCR_ENABLE_RX | CH341_LCR_ENABLE_TX;
switch (C_CSIZE(tty)) {
ctrl |= CH341_LCR_STOP_BITS_2;
if (baud_rate) {
- spin_lock_irqsave(&priv->lock, flags);
- priv->line_control |= (CH341_BIT_DTR | CH341_BIT_RTS);
- spin_unlock_irqrestore(&priv->lock, flags);
- r = ch341_init_set_baudrate(port->serial->dev, priv, ctrl);
+ priv->baud_rate = baud_rate;
+
+ r = ch341_set_baudrate_lcr(port->serial->dev, priv, ctrl);
if (r < 0 && old_termios) {
priv->baud_rate = tty_termios_baud_rate(old_termios);
tty_termios_copy_hw(&tty->termios, old_termios);
+ } else if (r == 0) {
+ priv->lcr = ctrl;
}
- } else {
- spin_lock_irqsave(&priv->lock, flags);
- priv->line_control &= ~(CH341_BIT_DTR | CH341_BIT_RTS);
- spin_unlock_irqrestore(&priv->lock, flags);
}
- ch341_set_handshake(port->serial->dev, priv->line_control);
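+ /* Drop DTR/RTS when transitioning to B0; reassert them when leaving B0. */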
+ spin_lock_irqsave(&priv->lock, flags);
+ if (C_BAUD(tty) == B0)
+ priv->line_control &= ~(CH341_BIT_DTR | CH341_BIT_RTS);
+ else if (old_termios && (old_termios->c_cflag & CBAUD) == B0)
+ priv->line_control |= (CH341_BIT_DTR | CH341_BIT_RTS);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ ch341_set_handshake(port->serial->dev, priv->line_control);
}
static void ch341_break_ctl(struct tty_struct *tty, int break_state)
static int ch341_reset_resume(struct usb_serial *serial)
{
- struct ch341_private *priv;
-
- priv = usb_get_serial_port_data(serial->port[0]);
+ struct usb_serial_port *port = serial->port[0];
+ struct ch341_private *priv = usb_get_serial_port_data(port);
+ int ret;
/* reconfigure ch341 serial port after bus-reset */
ch341_configure(serial->dev, priv);
- return 0;
+ if (tty_port_initialized(&port->port)) {
+ ret = usb_submit_urb(port->interrupt_in_urb, GFP_NOIO);
+ if (ret) {
+ dev_err(&port->dev, "failed to submit interrupt urb: %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ return usb_serial_generic_resume(serial);
}
static struct usb_serial_driver ch341_device = {
status_buf, KLSI_STATUSBUF_LEN,
10000
);
- if (rc < 0)
- dev_err(&port->dev, "Reading line status failed (error = %d)\n",
- rc);
- else {
+ if (rc != KLSI_STATUSBUF_LEN) {
+ dev_err(&port->dev, "reading line status failed: %d\n", rc);
+ if (rc >= 0)
+ rc = -EIO;
+ } else {
status = get_unaligned_le16(status_buf);
dev_info(&port->serial->dev->dev, "read status %x %x\n",
struct scatterlist sg[4], sg_dst;
void *dst_buf;
size_t dst_size;
- const u8 bzero[16] = { 0 };
u8 iv[crypto_skcipher_ivsize(tfm_cbc)];
size_t zero_padding;
sg_set_buf(&sg[1], &scratch->b1, sizeof(scratch->b1));
sg_set_buf(&sg[2], b, blen);
/* 0 if well behaved :) */
- sg_set_buf(&sg[3], bzero, zero_padding);
+ sg_set_page(&sg[3], ZERO_PAGE(0), zero_padding, 0);
sg_init_one(&sg_dst, dst_buf, dst_size);
skcipher_request_set_tfm(req, tfm_cbc);
#include <linux/uaccess.h>
#include <linux/vfio.h>
#include <linux/workqueue.h>
-#include <linux/pid_namespace.h>
#include <linux/mdev.h>
#include <linux/notifier.h>
unsigned long *pfn_base, bool do_accounting)
{
unsigned long limit;
- bool lock_cap = ns_capable(task_active_pid_ns(dma->task)->user_ns,
- CAP_IPC_LOCK);
+ bool lock_cap = has_capability(dma->task, CAP_IPC_LOCK);
struct mm_struct *mm;
int ret;
bool rsvd;
vme_bound = ioread32(bridge->base + CA91CX42_VSI_BD[i]);
pci_offset = ioread32(bridge->base + CA91CX42_VSI_TO[i]);
- *pci_base = (dma_addr_t)vme_base + pci_offset;
+ *pci_base = (dma_addr_t)*vme_base + pci_offset;
*size = (unsigned long long)((vme_bound - *vme_base) + granularity);
*enabled = 0;
* Tell lockdep we inherited freeze protection from submission
* thread.
*/
- __sb_writers_acquired(file_inode(file)->i_sb, SB_FREEZE_WRITE);
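+ /* Only regular files take freeze protection at submission time. */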
+ if (S_ISREG(file_inode(file)->i_mode))
+ __sb_writers_acquired(file_inode(file)->i_sb, SB_FREEZE_WRITE);
file_end_write(file);
}
* by telling it the lock got released so that it doesn't
* complain about held lock when we return to userspace.
*/
- __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
+ if (S_ISREG(file_inode(file)->i_mode))
+ __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
}
kfree(iovec);
return ret;
goto end_coredump;
}
}
+ dump_truncate(cprm);
if (!elf_core_write_extra_data(cprm))
goto end_coredump;
unsigned long flags;
while (1) {
+ void *wtag;
+
spin_lock_irqsave(lock, flags);
if (list_empty(list))
break;
spin_unlock_irqrestore(lock, flags);
/*
- * we don't want to call the ordered free functions
- * with the lock held though
+ * We don't want to call the ordered free functions with the
+ * lock held though. Save the work as tag for the trace event,
+ * because the callback could free the structure.
*/
+ wtag = work;
work->ordered_free(work);
- trace_btrfs_all_work_done(work);
+ trace_btrfs_all_work_done(wq->fs_info, wtag);
}
spin_unlock_irqrestore(lock, flags);
}
static void normal_work_helper(struct btrfs_work *work)
{
struct __btrfs_workqueue *wq;
+ void *wtag;
int need_order = 0;
/*
if (work->ordered_func)
need_order = 1;
wq = work->wq;
+ /* Safe for tracepoints in case work gets freed by the callback */
+ wtag = work;
trace_btrfs_work_sched(work);
thresh_exec_hook(wq);
run_ordered_work(wq);
}
if (!need_order)
- trace_btrfs_all_work_done(work);
+ trace_btrfs_all_work_done(wq->fs_info, wtag);
}
void btrfs_init_work(struct btrfs_work *work, btrfs_work_func_t uniq_func,
if (ref && ref->seq &&
btrfs_check_delayed_seq(fs_info, delayed_refs, ref->seq)) {
spin_unlock(&locked_ref->lock);
- btrfs_delayed_ref_unlock(locked_ref);
spin_lock(&delayed_refs->lock);
locked_ref->processing = 0;
delayed_refs->num_heads_ready++;
spin_unlock(&delayed_refs->lock);
+ btrfs_delayed_ref_unlock(locked_ref);
locked_ref = NULL;
cond_resched();
count++;
*/
if (must_insert_reserved)
locked_ref->must_insert_reserved = 1;
+ spin_lock(&delayed_refs->lock);
locked_ref->processing = 0;
+ delayed_refs->num_heads_ready++;
+ spin_unlock(&delayed_refs->lock);
btrfs_debug(fs_info,
"run_delayed_extent_op returned %d",
ret);
spin_unlock(&cluster->refill_lock);
- down_read(&used_bg->data_rwsem);
+ /* We should only have one level of nesting. */
+ down_read_nested(&used_bg->data_rwsem, SINGLE_DEPTH_NESTING);
spin_lock(&cluster->refill_lock);
if (used_bg == cluster->block_group)
write_unlock(&em_tree->lock);
out:
- trace_btrfs_get_extent(root, em);
+ trace_btrfs_get_extent(root, inode, em);
btrfs_free_path(path);
if (trans) {
* within our reservation, otherwise we need to adjust our inode
* counter appropriately.
*/
- if (dio_data->outstanding_extents) {
+ if (dio_data->outstanding_extents >= num_extents) {
dio_data->outstanding_extents -= num_extents;
} else {
+ /*
+ * If dio write length has been split due to no large enough
+ * contiguous space, we need to compensate our inode counter
+ * appropriately.
+ */
+ u64 num_needed = num_extents - dio_data->outstanding_extents;
+
spin_lock(&BTRFS_I(inode)->lock);
- BTRFS_I(inode)->outstanding_extents += num_extents;
+ BTRFS_I(inode)->outstanding_extents += num_needed;
spin_unlock(&BTRFS_I(inode)->lock);
}
}
*/
#define LOG_INODE_ALL 0
#define LOG_INODE_EXISTS 1
+#define LOG_OTHER_INODE 2
/*
* directory trouble cases
if (S_ISDIR(inode->i_mode) ||
(!test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
&BTRFS_I(inode)->runtime_flags) &&
- inode_only == LOG_INODE_EXISTS))
+ inode_only >= LOG_INODE_EXISTS))
max_key.type = BTRFS_XATTR_ITEM_KEY;
else
max_key.type = (u8)-1;
return ret;
}
- mutex_lock(&BTRFS_I(inode)->log_mutex);
+ if (inode_only == LOG_OTHER_INODE) {
+ inode_only = LOG_INODE_EXISTS;
+ mutex_lock_nested(&BTRFS_I(inode)->log_mutex,
+ SINGLE_DEPTH_NESTING);
+ } else {
+ mutex_lock(&BTRFS_I(inode)->log_mutex);
+ }
/*
* a brute force approach to making sure we get the most uptodate
* unpin it.
*/
err = btrfs_log_inode(trans, root, other_inode,
- LOG_INODE_EXISTS,
+ LOG_OTHER_INODE,
0, LLONG_MAX, ctx);
iput(other_inode);
if (err)
out:
btrfs_free_path(path);
- if (ret)
- btrfs_warn(fs_info, "btrfs_uuid_tree_iterate failed %d", ret);
- return 0;
+ return ret;
}
dout(" head snapc %p has %d dirty pages\n",
snapc, ci->i_wrbuffer_ref_head);
if (truncate_size)
- *truncate_size = capsnap->truncate_size;
+ *truncate_size = ci->i_truncate_size;
if (truncate_seq)
- *truncate_seq = capsnap->truncate_seq;
+ *truncate_seq = ci->i_truncate_seq;
}
spin_unlock(&ci->i_ceph_lock);
return snapc;
dout("do_request mdsmap err %d\n", err);
goto finish;
}
+ if (mdsc->mdsmap->m_epoch == 0) {
+ dout("do_request no mdsmap, waiting for map\n");
+ list_add(&req->r_wait, &mdsc->waiting_for_map);
+ goto finish;
+ }
if (!(mdsc->fsc->mount_options->flags &
CEPH_MOUNT_OPT_MOUNTWAIT) &&
!ceph_mdsmap_is_cluster_available(mdsc->mdsmap)) {
return mod ? dump_skip(cprm, align - mod) : 1;
}
EXPORT_SYMBOL(dump_align);
+
+/*
+ * Ensure that the file size is big enough to contain the current file
+ * position. This prevents gdb from complaining about a truncated file
+ * if the last "write" to the file was dump_skip.
+ */
+void dump_truncate(struct coredump_params *cprm)
+{
+ struct file *file = cprm->file;
+ loff_t offset;
+
+ if (file->f_op->llseek && file->f_op->llseek != no_llseek) {
+ offset = file->f_op->llseek(file, 0, SEEK_CUR);
+ if (i_size_read(file->f_mapping->host) < offset)
+ do_truncate(file->f_path.dentry, offset, 0, file);
+ }
+}
+EXPORT_SYMBOL(dump_truncate);
pgoff_t index, unsigned long pfn)
{
struct vm_area_struct *vma;
- pte_t *ptep;
- pte_t pte;
+ pte_t pte, *ptep = NULL;
+ pmd_t *pmdp = NULL;
spinlock_t *ptl;
bool changed;
address = pgoff_address(index, vma);
changed = false;
- if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
+ if (follow_pte_pmd(vma->vm_mm, address, &ptep, &pmdp, &ptl))
continue;
- if (pfn != pte_pfn(*ptep))
- goto unlock;
- if (!pte_dirty(*ptep) && !pte_write(*ptep))
- goto unlock;
- flush_cache_page(vma, address, pfn);
- pte = ptep_clear_flush(vma, address, ptep);
- pte = pte_wrprotect(pte);
- pte = pte_mkclean(pte);
- set_pte_at(vma->vm_mm, address, ptep, pte);
- changed = true;
-unlock:
- pte_unmap_unlock(ptep, ptl);
+ if (pmdp) {
+#ifdef CONFIG_FS_DAX_PMD
+ pmd_t pmd;
+
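+ /* Mirror the PTE case below: write-protect and clean the huge PMD. */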
+ if (pfn != pmd_pfn(*pmdp))
+ goto unlock_pmd;
+ if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
+ goto unlock_pmd;
+
+ flush_cache_page(vma, address, pfn);
+ pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+ pmd = pmd_wrprotect(pmd);
+ pmd = pmd_mkclean(pmd);
+ set_pmd_at(vma->vm_mm, address, pmdp, pmd);
+ changed = true;
+unlock_pmd:
+ spin_unlock(ptl);
+#endif
+ } else {
+ if (pfn != pte_pfn(*ptep))
+ goto unlock_pte;
+ if (!pte_dirty(*ptep) && !pte_write(*ptep))
+ goto unlock_pte;
+
+ flush_cache_page(vma, address, pfn);
+ pte = ptep_clear_flush(vma, address, ptep);
+ pte = pte_wrprotect(pte);
+ pte = pte_mkclean(pte);
+ set_pte_at(vma->vm_mm, address, ptep, pte);
+ changed = true;
+unlock_pte:
+ pte_unmap_unlock(ptep, ptl);
+ }
if (changed)
mmu_notifier_invalidate_page(vma->vm_mm, address);
}
spin_lock(&dentry->d_lock);
if (!d_unlinked(dentry)) {
- dentry->d_flags |= DCACHE_MOUNTED;
- ret = 0;
+ ret = -EBUSY;
+ if (!d_mountpoint(dentry)) {
+ dentry->d_flags |= DCACHE_MOUNTED;
+ ret = 0;
+ }
}
spin_unlock(&dentry->d_lock);
out:
struct buffer_head *map_bh)
{
const unsigned blkbits = sdio->blkbits;
+ const unsigned i_blkbits = blkbits + sdio->blkfactor;
int ret = 0;
while (sdio->block_in_file < sdio->final_block_in_request) {
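+ /* b_size is in bytes; count aliases in fs-sized blocks, not dio blocks. */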
clean_bdev_aliases(
map_bh->b_bdev,
map_bh->b_blocknr,
- map_bh->b_size >> blkbits);
+ map_bh->b_size >> i_blkbits);
}
if (!sdio->blkfactor)
}
sector = SECTOR_FROM_BLOCK(blkstart);
- if (sector & (bdev_zone_size(bdev) - 1) ||
- nr_sects != bdev_zone_size(bdev)) {
+ if (sector & (bdev_zone_sectors(bdev) - 1) ||
+ nr_sects != bdev_zone_sectors(bdev)) {
f2fs_msg(sbi->sb, KERN_INFO,
"(%d) %s: Unaligned discard attempted (block %x + %x)",
devi, sbi->s_ndevs ? FDEV(devi).path: "",
return 0;
if (sbi->blocks_per_blkz && sbi->blocks_per_blkz !=
- SECTOR_TO_BLOCK(bdev_zone_size(bdev)))
+ SECTOR_TO_BLOCK(bdev_zone_sectors(bdev)))
return -EINVAL;
- sbi->blocks_per_blkz = SECTOR_TO_BLOCK(bdev_zone_size(bdev));
+ sbi->blocks_per_blkz = SECTOR_TO_BLOCK(bdev_zone_sectors(bdev));
if (sbi->log_blocks_per_blkz && sbi->log_blocks_per_blkz !=
__ilog2_u32(sbi->blocks_per_blkz))
return -EINVAL;
sbi->log_blocks_per_blkz = __ilog2_u32(sbi->blocks_per_blkz);
FDEV(devi).nr_blkz = SECTOR_TO_BLOCK(nr_sectors) >>
sbi->log_blocks_per_blkz;
- if (nr_sectors & (bdev_zone_size(bdev) - 1))
+ if (nr_sectors & (bdev_zone_sectors(bdev) - 1))
FDEV(devi).nr_blkz++;
FDEV(devi).blkz_type = kmalloc(FDEV(devi).nr_blkz, GFP_KERNEL);
struct inode *root;
struct qstr d_name = QSTR_INIT(name, strlen(name));
- s = sget(fs_type, NULL, set_anon_super, MS_NOUSER, NULL);
+ s = sget_userns(fs_type, NULL, set_anon_super, MS_KERNMOUNT|MS_NOUSER,
+ &init_user_ns, NULL);
if (IS_ERR(s))
return ERR_CAST(s);
return NULL;
}
-static struct mountpoint *new_mountpoint(struct dentry *dentry)
+static struct mountpoint *get_mountpoint(struct dentry *dentry)
{
- struct hlist_head *chain = mp_hash(dentry);
- struct mountpoint *mp;
+ struct mountpoint *mp, *new = NULL;
int ret;
- mp = kmalloc(sizeof(struct mountpoint), GFP_KERNEL);
- if (!mp)
+ if (d_mountpoint(dentry)) {
+mountpoint:
+ read_seqlock_excl(&mount_lock);
+ mp = lookup_mountpoint(dentry);
+ read_sequnlock_excl(&mount_lock);
+ if (mp)
+ goto done;
+ }
+
+ if (!new)
+ new = kmalloc(sizeof(struct mountpoint), GFP_KERNEL);
+ if (!new)
return ERR_PTR(-ENOMEM);
+
+ /* Exactly one process may set d_mounted */
ret = d_set_mounted(dentry);
- if (ret) {
- kfree(mp);
- return ERR_PTR(ret);
- }
- mp->m_dentry = dentry;
- mp->m_count = 1;
- hlist_add_head(&mp->m_hash, chain);
- INIT_HLIST_HEAD(&mp->m_list);
+ /* Someone else set d_mounted? */
+ if (ret == -EBUSY)
+ goto mountpoint;
+
+ /* The dentry is not available as a mountpoint? */
+ mp = ERR_PTR(ret);
+ if (ret)
+ goto done;
+
+ /* Add the new mountpoint to the hash table */
+ read_seqlock_excl(&mount_lock);
+ new->m_dentry = dentry;
+ new->m_count = 1;
+ hlist_add_head(&new->m_hash, mp_hash(dentry));
+ INIT_HLIST_HEAD(&new->m_list);
+ read_sequnlock_excl(&mount_lock);
+
+ mp = new;
+ new = NULL;
+done:
+ kfree(new);
return mp;
}
struct mount *mnt;
namespace_lock();
+ lock_mount_hash();
mp = lookup_mountpoint(dentry);
if (IS_ERR_OR_NULL(mp))
goto out_unlock;
- lock_mount_hash();
event++;
while (!hlist_empty(&mp->m_list)) {
mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
}
else umount_tree(mnt, UMOUNT_CONNECTED);
}
- unlock_mount_hash();
put_mountpoint(mp);
out_unlock:
+ unlock_mount_hash();
namespace_unlock();
}
namespace_lock();
mnt = lookup_mnt(path);
if (likely(!mnt)) {
- struct mountpoint *mp = lookup_mountpoint(dentry);
- if (!mp)
- mp = new_mountpoint(dentry);
+ struct mountpoint *mp = get_mountpoint(dentry);
if (IS_ERR(mp)) {
namespace_unlock();
inode_unlock(dentry->d_inode);
static void unlock_mount(struct mountpoint *where)
{
struct dentry *dentry = where->m_dentry;
+
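+ /* put_mountpoint() needs mount_lock to protect the mountpoint hash. */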
+ read_seqlock_excl(&mount_lock);
put_mountpoint(where);
+ read_sequnlock_excl(&mount_lock);
+
namespace_unlock();
inode_unlock(dentry->d_inode);
}
touch_mnt_namespace(current->nsproxy->mnt_ns);
/* A moved mount should not expire automatically */
list_del_init(&new_mnt->mnt_expire);
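+ /* Drop the old root's mountpoint while mount_lock is still held. */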
+ put_mountpoint(root_mp);
unlock_mount_hash();
chroot_fs_refs(&root, &new);
- put_mountpoint(root_mp);
error = 0;
out4:
unlock_mount(old_mp);
mlog(ML_BASTS, "lockres %s, level %d => %d\n", lockres->l_name,
lockres->l_level, new_level);
+ /*
+ * On DLM_LKF_VALBLK, fsdlm behaves differently from o2cb. It always
+ * expects DLM_LKF_VALBLK to be set if the LKB has an LVB, so that
+ * we can recover correctly from node failure. Otherwise, we may get
+ * an invalid LVB in the LKB without DLM_SBF_VALNOTVALID being set.
+ */
+ if (!ocfs2_is_o2cb_active() &&
+ lockres->l_ops->flags & LOCK_TYPE_USES_LVB)
+ lvb = 1;
+
if (lvb)
dlm_flags |= DLM_LKF_VALBLK;
*/
static struct ocfs2_stack_plugin *active_stack;
+inline int ocfs2_is_o2cb_active(void)
+{
+ return !strcmp(active_stack->sp_name, OCFS2_STACK_PLUGIN_O2CB);
+}
+EXPORT_SYMBOL_GPL(ocfs2_is_o2cb_active);
+
static struct ocfs2_stack_plugin *ocfs2_stack_lookup(const char *name)
{
struct ocfs2_stack_plugin *p;
int ocfs2_stack_glue_register(struct ocfs2_stack_plugin *plugin);
void ocfs2_stack_glue_unregister(struct ocfs2_stack_plugin *plugin);
+/* In ocfs2_downconvert_lock(), we need to know which stack we are using */
+int ocfs2_is_o2cb_active(void);
+
extern struct kset *ocfs2_kset;
#endif /* STACKGLUE_H */
int error;
if (type == ACL_TYPE_ACCESS) {
- error = posix_acl_equiv_mode(acl, &inode->i_mode);
- if (error < 0)
- return 0;
- if (error == 0)
- acl = NULL;
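+ /* posix_acl_update_mode() clears the ACL when it merely duplicates the mode. */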
+ error = posix_acl_update_mode(inode,
+ &inode->i_mode, &acl);
+ if (error)
+ return error;
}
inode->i_ctime = current_time(inode);
ctl_dir = container_of(head, struct ctl_dir, header);
if (!dir_emit_dots(file, ctx))
- return 0;
+ goto out;
pos = 2;
break;
}
}
+out:
sysctl_head_finish(head);
return 0;
}
xfs_alloc_set_aside(
struct xfs_mount *mp)
{
- unsigned int blocks;
-
- blocks = 4 + (mp->m_sb.sb_agcount * XFS_ALLOC_AGFL_RESERVE);
- return blocks;
+ return mp->m_sb.sb_agcount * (XFS_ALLOC_AGFL_RESERVE + 4);
}
/*
return;
ASSERT(rlen >= args->minlen && rlen <= args->maxlen);
ASSERT(rlen % args->prod == args->mod);
+ ASSERT(args->pag->pagf_freeblks + args->pag->pagf_flcount >=
+ rlen + args->minleft);
args->len = rlen;
}
-/*
- * Fix up length if there is too little space left in the a.g.
- * Return 1 if ok, 0 if too little, should give up.
- */
-STATIC int
-xfs_alloc_fix_minleft(
- xfs_alloc_arg_t *args) /* allocation argument structure */
-{
- xfs_agf_t *agf; /* a.g. freelist header */
- int diff; /* free space difference */
-
- if (args->minleft == 0)
- return 1;
- agf = XFS_BUF_TO_AGF(args->agbp);
- diff = be32_to_cpu(agf->agf_freeblks)
- - args->len - args->minleft;
- if (diff >= 0)
- return 1;
- args->len += diff; /* shrink the allocated space */
- /* casts to (int) catch length underflows */
- if ((int)args->len >= (int)args->minlen)
- return 1;
- args->agbno = NULLAGBLOCK;
- return 0;
-}
-
/*
* Update the two btrees, logically removing from freespace the extent
* starting at rbno, rlen blocks. The extent is contained within the
xfs_alloc_arg_t *args) /* argument structure for allocation */
{
int error=0;
- xfs_extlen_t reservation;
- xfs_extlen_t oldmax;
ASSERT(args->minlen > 0);
ASSERT(args->maxlen > 0);
ASSERT(args->mod < args->prod);
ASSERT(args->alignment > 0);
- /*
- * Clamp maxlen to the amount of free space minus any reservations
- * that have been made.
- */
- oldmax = args->maxlen;
- reservation = xfs_ag_resv_needed(args->pag, args->resv);
- if (args->maxlen > args->pag->pagf_freeblks - reservation)
- args->maxlen = args->pag->pagf_freeblks - reservation;
- if (args->maxlen == 0) {
- args->agbno = NULLAGBLOCK;
- args->maxlen = oldmax;
- return 0;
- }
-
/*
* Branch to correct routine based on the type.
*/
/* NOTREACHED */
}
- args->maxlen = oldmax;
-
if (error || args->agbno == NULLAGBLOCK)
return error;
args->len = XFS_AGBLOCK_MIN(tend, args->agbno + args->maxlen)
- args->agbno;
xfs_alloc_fix_len(args);
- if (!xfs_alloc_fix_minleft(args))
- goto not_found;
-
ASSERT(args->agbno + args->len <= tend);
/*
XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
ASSERT(ltbno + ltlen <= be32_to_cpu(XFS_BUF_TO_AGF(args->agbp)->agf_length));
args->len = blen;
- if (!xfs_alloc_fix_minleft(args)) {
- xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
- trace_xfs_alloc_near_nominleft(args);
- return 0;
- }
- blen = args->len;
+
/*
* We are allocating starting at bnew for blen blocks.
*/
*/
args->len = XFS_EXTLEN_MIN(ltlena, args->maxlen);
xfs_alloc_fix_len(args);
- if (!xfs_alloc_fix_minleft(args)) {
- trace_xfs_alloc_near_nominleft(args);
- xfs_btree_del_cursor(bno_cur_lt, XFS_BTREE_NOERROR);
- xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
- return 0;
- }
rlen = args->len;
(void)xfs_alloc_compute_diff(args->agbno, rlen, args->alignment,
args->datatype, ltbnoa, ltlena, <new);
}
xfs_alloc_fix_len(args);
- if (!xfs_alloc_fix_minleft(args))
- goto out_nominleft;
rlen = args->len;
XFS_WANT_CORRUPTED_GOTO(args->mp, rlen <= flen, error0);
/*
int flags)
{
struct xfs_perag *pag = args->pag;
- xfs_extlen_t longest;
+ xfs_extlen_t alloc_len, longest;
xfs_extlen_t reservation; /* blocks that are still reserved */
int available;
reservation = xfs_ag_resv_needed(pag, args->resv);
/* do we have enough contiguous free space for the allocation? */
+ alloc_len = args->minlen + (args->alignment - 1) + args->minalignslop;
longest = xfs_alloc_longest_free_extent(args->mp, pag, min_free,
reservation);
- if ((args->minlen + args->alignment + args->minalignslop - 1) > longest)
+ if (longest < alloc_len)
return false;
/* do we have enough free space remaining for the allocation? */
available = (int)(pag->pagf_freeblks + pag->pagf_flcount -
- reservation - min_free - args->total);
- if (available < (int)args->minleft || available <= 0)
+ reservation - min_free - args->minleft);
+ if (available < (int)max(args->total, alloc_len))
return false;
+ /*
+ * Clamp maxlen to the amount of free space available for the actual
+ * extent allocation.
+ */
+ if (available < (int)args->maxlen && !(flags & XFS_ALLOC_FLAG_CHECK)) {
+ args->maxlen = available;
+ ASSERT(args->maxlen > 0);
+ ASSERT(args->maxlen >= args->minlen);
+ }
+
return true;
}
}
need = xfs_alloc_min_freelist(mp, pag);
- if (!xfs_alloc_space_available(args, need, flags))
+ if (!xfs_alloc_space_available(args, need, flags |
+ XFS_ALLOC_FLAG_CHECK))
goto out_agbp_relse;
/*
xfs_agblock_t agsize; /* allocation group size */
int error;
int flags; /* XFS_ALLOC_FLAG_... locking flags */
- xfs_extlen_t minleft;/* minimum left value, temp copy */
xfs_mount_t *mp; /* mount structure pointer */
xfs_agnumber_t sagno; /* starting allocation group number */
xfs_alloctype_t type; /* input allocation type */
int bump_rotor = 0;
- int no_min = 0;
xfs_agnumber_t rotorstep = xfs_rotorstep; /* inode32 agf stepper */
mp = args->mp;
trace_xfs_alloc_vextent_badargs(args);
return 0;
}
- minleft = args->minleft;
switch (type) {
case XFS_ALLOCTYPE_THIS_AG:
*/
args->agno = XFS_FSB_TO_AGNO(mp, args->fsbno);
args->pag = xfs_perag_get(mp, args->agno);
- args->minleft = 0;
error = xfs_alloc_fix_freelist(args, 0);
- args->minleft = minleft;
if (error) {
trace_xfs_alloc_vextent_nofix(args);
goto error0;
*/
for (;;) {
args->pag = xfs_perag_get(mp, args->agno);
- if (no_min) args->minleft = 0;
error = xfs_alloc_fix_freelist(args, flags);
- args->minleft = minleft;
if (error) {
trace_xfs_alloc_vextent_nofix(args);
goto error0;
* or switch to non-trylock mode.
*/
if (args->agno == sagno) {
- if (no_min == 1) {
+ if (flags == 0) {
args->agbno = NULLAGBLOCK;
trace_xfs_alloc_vextent_allfailed(args);
break;
}
- if (flags == 0) {
- no_min = 1;
- } else {
- flags = 0;
- if (type == XFS_ALLOCTYPE_START_BNO) {
- args->agbno = XFS_FSB_TO_AGBNO(mp,
- args->fsbno);
- args->type = XFS_ALLOCTYPE_NEAR_BNO;
- }
+
+ flags = 0;
+ if (type == XFS_ALLOCTYPE_START_BNO) {
+ args->agbno = XFS_FSB_TO_AGBNO(mp,
+ args->fsbno);
+ args->type = XFS_ALLOCTYPE_NEAR_BNO;
}
}
xfs_perag_put(args->pag);
#define XFS_ALLOC_FLAG_FREEING 0x00000002 /* indicate caller is freeing extents*/
#define XFS_ALLOC_FLAG_NORMAP 0x00000004 /* don't modify the rmapbt */
#define XFS_ALLOC_FLAG_NOSHRINK 0x00000008 /* don't shrink the freelist */
-
+#define XFS_ALLOC_FLAG_CHECK 0x00000010 /* test only, don't modify args */
/*
* Argument structure for xfs_alloc routines.
args.fsbno = 0;
args.type = XFS_ALLOCTYPE_FIRST_AG;
args.total = ap->minlen;
- args.minleft = 0;
if ((error = xfs_alloc_vextent(&args)))
return error;
ap->dfops->dop_low = true;
if (error)
return error;
- if (bma->dfops->dop_low)
- bma->minleft = 0;
if (bma->cur)
bma->cur->bc_private.b.firstblock = *bma->firstblock;
if (bma->blkno == NULLFSBLOCK)
if (args.fsbno == NULLFSBLOCK && args.minleft) {
/*
* Could not find an AG with enough free space to satisfy
- * a full btree split. Try again without minleft and if
+ * a full btree split. Try again and if
* successful activate the lowspace algorithm.
*/
args.fsbno = 0;
args.type = XFS_ALLOCTYPE_FIRST_AG;
- args.minleft = 0;
error = xfs_alloc_vextent(&args);
if (error)
goto error0;
* block_invalidatepage() can send pages that are still marked dirty
* but otherwise have invalidated buffers.
*
- * We've historically freed buffers on the latter. Instead, quietly
- * filter out all dirty pages to avoid spurious buffer state warnings.
- * This can likely be removed once shrink_active_list() is fixed.
+ * We want to release the latter to avoid unnecessary buildup of the
+ * LRU, skip the former, and warn if we've left any lingering
+ * delalloc/unwritten buffers on clean pages. That is: skip pages with
+ * delalloc or unwritten buffers, warning if such a page is not dirty,
+ * and otherwise try to release the buffers.
*/
- if (PageDirty(page))
- return 0;
-
xfs_count_page_state(page, &delalloc, &unwritten);
- if (WARN_ON_ONCE(delalloc))
+ if (delalloc) {
+ WARN_ON_ONCE(!PageDirty(page));
return 0;
- if (WARN_ON_ONCE(unwritten))
+ }
+ if (unwritten) {
+ WARN_ON_ONCE(!PageDirty(page));
return 0;
+ }
return try_to_free_buffers(page);
}
xfs_mount_t *mp,
uint flags)
{
- int error;
-
trace_xfs_log_force(mp, 0, _RET_IP_);
- error = _xfs_log_force(mp, flags, NULL);
- if (error)
- xfs_warn(mp, "%s: error %d returned.", __func__, error);
+ _xfs_log_force(mp, flags, NULL);
}
/*
xfs_lsn_t lsn,
uint flags)
{
- int error;
-
trace_xfs_log_force(mp, lsn, _RET_IP_);
- error = _xfs_log_force_lsn(mp, lsn, flags, NULL);
- if (error)
- xfs_warn(mp, "%s: error %d returned.", __func__, error);
+ _xfs_log_force_lsn(mp, lsn, flags, NULL);
}
/*
}
}
-static inline unsigned int blk_queue_zone_size(struct request_queue *q)
+static inline unsigned int blk_queue_zone_sectors(struct request_queue *q)
{
return blk_queue_is_zoned(q) ? q->limits.chunk_sectors : 0;
}
return blk_rq_cur_bytes(rq) >> 9;
}
+/*
+ * Some commands like WRITE SAME have a payload or data transfer size which
+ * is different from the size of the request. Any driver that supports such
+ * commands using the RQF_SPECIAL_PAYLOAD flag needs to use this helper to
+ * calculate the data transfer size.
+ */
+static inline unsigned int blk_rq_payload_bytes(struct request *rq)
+{
+ if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
+ return rq->special_vec.bv_len;
+ return blk_rq_bytes(rq);
+}
+
static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
int op)
{
return false;
}
-static inline unsigned int bdev_zone_size(struct block_device *bdev)
+static inline unsigned int bdev_zone_sectors(struct block_device *bdev)
{
struct request_queue *q = bdev_get_queue(bdev);
if (q)
- return blk_queue_zone_size(q);
+ return blk_queue_zone_sectors(q);
return 0;
}
extern int dump_skip(struct coredump_params *cprm, size_t nr);
extern int dump_emit(struct coredump_params *cprm, const void *addr, int nr);
extern int dump_align(struct coredump_params *cprm, int align);
+extern void dump_truncate(struct coredump_params *cprm);
#ifdef CONFIG_COREDUMP
extern void do_coredump(const siginfo_t *siginfo);
#else
#define EFI_PAGE_SHIFT 12
#define EFI_PAGE_SIZE (1UL << EFI_PAGE_SHIFT)
+#define EFI_PAGES_MAX (U64_MAX >> EFI_PAGE_SHIFT)
typedef struct {
u32 type;
#endif
extern void __iomem *efi_lookup_mapped_addr(u64 phys_addr);
+extern phys_addr_t __init efi_memmap_alloc(unsigned int num_entries);
extern int __init efi_memmap_init_early(struct efi_memory_map_data *data);
extern int __init efi_memmap_init_late(phys_addr_t addr, unsigned long size);
extern void __init efi_memmap_unmap(void);
#define ___GFP_ACCOUNT 0x100000u
#define ___GFP_NOTRACK 0x200000u
#define ___GFP_DIRECT_RECLAIM 0x400000u
-#define ___GFP_OTHER_NODE 0x800000u
-#define ___GFP_WRITE 0x1000000u
-#define ___GFP_KSWAPD_RECLAIM 0x2000000u
+#define ___GFP_WRITE 0x800000u
+#define ___GFP_KSWAPD_RECLAIM 0x1000000u
/* If the above are modified, __GFP_BITS_SHIFT may need updating */
/*
* __GFP_NOTRACK_FALSE_POSITIVE is an alias of __GFP_NOTRACK. It's a means of
* distinguishing in the source between false positives and allocations that
* cannot be supported (e.g. page tables).
- *
- * __GFP_OTHER_NODE is for allocations that are on a remote node but that
- * should not be accounted for as a remote allocation in vmstat. A
- * typical user would be khugepaged collapsing a huge page on a remote
- * node.
*/
#define __GFP_COLD ((__force gfp_t)___GFP_COLD)
#define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
#define __GFP_NOTRACK ((__force gfp_t)___GFP_NOTRACK)
#define __GFP_NOTRACK_FALSE_POSITIVE (__GFP_NOTRACK)
-#define __GFP_OTHER_NODE ((__force gfp_t)___GFP_OTHER_NODE)
/* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT 26
+#define __GFP_BITS_SHIFT 25
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
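A quick arithmetic check of the renumbering, worked out from the definitions above:

/*
 * The highest remaining bit is ___GFP_KSWAPD_RECLAIM = 0x1000000 = 1 << 24,
 * so the reduced __GFP_BITS_SHIFT of 25 still covers every flag:
 * __GFP_BITS_MASK = (1 << 25) - 1 = 0x1ffffff.
 */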
/*
extern void free_hot_cold_page_list(struct list_head *list, bool cold);
struct page_frag_cache;
-extern void __page_frag_drain(struct page *page, unsigned int order,
- unsigned int count);
-extern void *__alloc_page_frag(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask);
-extern void __free_page_frag(void *addr);
+extern void __page_frag_cache_drain(struct page *page, unsigned int count);
+extern void *page_frag_alloc(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask);
+extern void page_frag_free(void *addr);
#define __free_page(page) __free_pages((page), 0)
#define free_page(addr) free_pages((addr), 0)
/* Synchronize the request/response if needed */
struct completion waitevent;
+ struct vmbus_channel *waiting_channel;
union {
struct vmbus_channel_version_supported version_supported;
struct vmbus_channel_open_result open_result;
#define I2C_CLIENT_TEN 0x10 /* we have a ten bit chip address */
/* Must equal I2C_M_TEN below */
#define I2C_CLIENT_SLAVE 0x20 /* we are the slave */
+#define I2C_CLIENT_HOST_NOTIFY 0x40 /* We want to use I2C host notify */
#define I2C_CLIENT_WAKE 0x80 /* for board_info; true iff can wake */
#define I2C_CLIENT_SCCB 0x9000 /* Use Omnivision SCCB protocol */
/* Must match I2C_M_STOP|IGNORE_NAK */
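A sketch of how platform code might opt a client into Host Notify through these flags (the device name and address below are made up for illustration):

#include <linux/i2c.h>

/* Hypothetical board registration requesting SMBus Host Notify. */
static const struct i2c_board_info example_board_info = {
	I2C_BOARD_INFO("example-dev", 0x2c),
	.flags = I2C_CLIENT_HOST_NOTIFY,
};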
#ifdef HAVE_JUMP_LABEL
extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
+extern void static_key_deferred_flush(struct static_key_deferred *key);
extern void
jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
STATIC_KEY_CHECK_USE();
static_key_slow_dec(&key->key);
}
+static inline void static_key_deferred_flush(struct static_key_deferred *key)
+{
+ STATIC_KEY_CHECK_USE();
+}
static inline void
jump_label_rate_limit(struct static_key_deferred *key,
unsigned long rl)
*/
struct mem_cgroup_per_node {
struct lruvec lruvec;
- unsigned long lru_size[NR_LRU_LISTS];
+ unsigned long lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
struct mem_cgroup_reclaim_iter iter[DEF_PRIORITY + 1];
int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
- int nr_pages);
+ int zid, int nr_pages);
unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
int nid, unsigned int lru_mask);
unsigned long mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru)
{
struct mem_cgroup_per_node *mz;
+ unsigned long nr_pages = 0;
+ int zid;
mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
- return mz->lru_size[lru];
+ for (zid = 0; zid < MAX_NR_ZONES; zid++)
+ nr_pages += mz->lru_zone_size[zid][lru];
+ return nr_pages;
+}
+
+static inline
+unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
+ enum lru_list lru, int zone_idx)
+{
+ struct mem_cgroup_per_node *mz;
+
+ mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+ return mz->lru_zone_size[zone_idx][lru];
}
void mem_cgroup_handle_over_high(void);
{
return 0;
}
+static inline
+unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
+ enum lru_list lru, int zone_idx)
+{
+ return 0;
+}
static inline unsigned long
mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
/*#define ADB_MOUSE_MINOR 10 FIXME OBSOLETE */
#define WATCHDOG_MINOR 130 /* Watchdog timer */
#define TEMP_MINOR 131 /* Temperature Sensor */
+#define APM_MINOR_DEV 134
#define RTC_MINOR 135
#define EFI_RTC_MINOR 136 /* EFI Time services */
#define VHCI_MINOR 137
struct vm_area_struct *vma);
void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen, int even_cows);
-int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
- spinlock_t **ptlp);
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
int follow_pfn(struct vm_area_struct *vma, unsigned long address,
unsigned long *pfn);
int follow_phys(struct vm_area_struct *vma, unsigned long address,
{
__update_lru_size(lruvec, lru, zid, nr_pages);
#ifdef CONFIG_MEMCG
- mem_cgroup_update_lru_size(lruvec, lru, nr_pages);
+ mem_cgroup_update_lru_size(lruvec, lru, zid, nr_pages);
#endif
}
return NAPI_GRO_CB(skb)->frag0_len < hlen;
}
+static inline void skb_gro_frag0_invalidate(struct sk_buff *skb)
+{
+ NAPI_GRO_CB(skb)->frag0 = NULL;
+ NAPI_GRO_CB(skb)->frag0_len = 0;
+}
+
static inline void *skb_gro_header_slow(struct sk_buff *skb, unsigned int hlen,
unsigned int offset)
{
if (!pskb_may_pull(skb, hlen))
return NULL;
- NAPI_GRO_CB(skb)->frag0 = NULL;
- NAPI_GRO_CB(skb)->frag0_len = 0;
+ skb_gro_frag0_invalidate(skb);
return skb->data + offset;
}
extern void perf_event_disable_local(struct perf_event *event);
extern void perf_event_disable_inatomic(struct perf_event *event);
extern void perf_event_task_tick(void);
+extern int perf_event_account_interrupt(struct perf_event *event);
#else /* !CONFIG_PERF_EVENTS: */
static inline void *
perf_aux_output_begin(struct perf_output_handle *handle,
--- /dev/null
+/*
+ * TI DaVinci AEMIF platform glue.
+ *
+ * Copyright (C) 2017 BayLibre SAS
+ *
+ * Author:
+ * Bartosz Golaszewski <bgolaszewski@baylibre.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __TI_DAVINCI_AEMIF_DATA_H__
+#define __TI_DAVINCI_AEMIF_DATA_H__
+
+#include <linux/of_platform.h>
+
+struct aemif_platform_data {
+ struct of_dev_auxdata *dev_lookup;
+};
+
+#endif /* __TI_DAVINCI_AEMIF_DATA_H__ */
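A sketch of how board-glue code might populate the new platform data (the compatible string, base address, and device name are illustrative, not taken from the patch):

/* Hypothetical auxdata table handed to the AEMIF bus driver. */
static struct of_dev_auxdata board_aemif_auxdata_lookup[] = {
	OF_DEV_AUXDATA("ti,davinci-nand", 0x62000000, "davinci-nand.0", NULL),
	{ /* sentinel */ },
};

static struct aemif_platform_data board_aemif_pdata = {
	.dev_lookup = board_aemif_auxdata_lookup,
};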
* @crash_comp: completion used to sync crash handler and the rproc reload
* @recovery_disabled: flag that state if recovery was disabled
* @max_notifyid: largest allocated notify id.
- * @table_ptr: our copy of the resource table
+ * @table_ptr: pointer to the resource table in effect
+ * @cached_table: copy of the resource table
* @has_iommu: flag to indicate if remote processor is behind an MMU
*/
struct rproc {
bool recovery_disabled;
int max_notifyid;
struct resource_table *table_ptr;
+ struct resource_table *cached_table;
bool has_iommu;
bool auto_boot;
};
#define SIGNAL_UNKILLABLE 0x00000040 /* for init: ignore fatal signals */
+#define SIGNAL_STOP_MASK (SIGNAL_CLD_MASK | SIGNAL_STOP_STOPPED | \
+ SIGNAL_STOP_CONTINUED)
+
+static inline void signal_set_stop_flags(struct signal_struct *sig,
+ unsigned int flags)
+{
+ WARN_ON(sig->flags & (SIGNAL_GROUP_EXIT|SIGNAL_GROUP_COREDUMP));
+ sig->flags = (sig->flags & ~SIGNAL_STOP_MASK) | flags;
+}
+
/* If true, all threads except ->group_exit_task have pending SIGKILL */
static inline int signal_group_exit(const struct signal_struct *sig)
{
static inline void skb_free_frag(void *addr)
{
- __free_page_frag(addr);
+ page_frag_free(addr);
}
void *napi_alloc_frag(unsigned int fragsz);
* (PAGE_SIZE*2). Larger requests are passed to the page allocator.
*/
#define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT)
+#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1)
#ifndef KMALLOC_SHIFT_LOW
#define KMALLOC_SHIFT_LOW 3
#endif
* be allocated from the same page.
*/
#define KMALLOC_SHIFT_HIGH PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX 30
+#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1)
#ifndef KMALLOC_SHIFT_LOW
#define KMALLOC_SHIFT_LOW 3
#endif
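Both branches now express the cap in terms of the page allocator's limit; assuming the usual slab.h derivation and common x86-64 defaults, the resulting byte limit works out as follows:

/*
 * slab.h derives the byte limit from the shift:
 *   KMALLOC_MAX_SIZE = 1UL << KMALLOC_SHIFT_MAX
 * With MAX_ORDER = 11 and PAGE_SHIFT = 12:
 *   KMALLOC_SHIFT_MAX = 11 + 12 - 1 = 22, i.e. 4 MiB,
 * matching the largest page-allocator block of 1 << (MAX_ORDER - 1) pages.
 */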
SWP_FILE = (1 << 7), /* set after swap_activate success */
SWP_AREA_DISCARD = (1 << 8), /* single-time swap area discards */
SWP_PAGE_DISCARD = (1 << 9), /* freed swap page-cluster discards */
+ SWP_STABLE_WRITES = (1 << 10), /* no overwrite PG_writeback pages */
/* add others here before... */
- SWP_SCANNING = (1 << 10), /* refcount in scan_swap_map */
+ SWP_SCANNING = (1 << 11), /* refcount in scan_swap_map */
};
#define SWAP_CLUSTER_MAX 32UL
#ifndef _LINUX_TIMERFD_H
#define _LINUX_TIMERFD_H
-/* For O_CLOEXEC and O_NONBLOCK */
-#include <linux/fcntl.h>
-
-/* For _IO helpers */
-#include <linux/ioctl.h>
-
-/*
- * CAREFUL: Check include/asm-generic/fcntl.h when defining
- * new flags, since they might collide with O_* ones. We want
- * to re-use O_* flags that couldn't possibly have a meaning
- * from eventfd, in order to leave a free define-space for
- * shared O_* flags.
- */
-#define TFD_TIMER_ABSTIME (1 << 0)
-#define TFD_TIMER_CANCEL_ON_SET (1 << 1)
-#define TFD_CLOEXEC O_CLOEXEC
-#define TFD_NONBLOCK O_NONBLOCK
+#include <uapi/linux/timerfd.h>
#define TFD_SHARED_FCNTL_FLAGS (TFD_CLOEXEC | TFD_NONBLOCK)
/* Flags for timerfd_create. */
/* Flags for timerfd_settime. */
#define TFD_SETTIME_FLAGS (TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET)
-#define TFD_IOC_SET_TICKS _IOW('T', 0, u64)
-
#endif /* _LINUX_TIMERFD_H */
};
struct vme_driver {
- struct list_head node;
const char *name;
int (*match)(struct vme_dev *);
int (*probe)(struct vme_dev *);
HDMI_AC97,
HDMI_SPDIF,
} fmt;
- int bit_clk_inv:1;
- int frame_clk_inv:1;
- int bit_clk_master:1;
- int frame_clk_master:1;
+ unsigned int bit_clk_inv:1;
+ unsigned int frame_clk_inv:1;
+ unsigned int bit_clk_master:1;
+ unsigned int frame_clk_master:1;
};
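The switch from int to unsigned int matters because a one-bit signed bitfield can only hold 0 and -1 on the usual ABIs, so equality tests against 1 silently fail. A minimal illustration:

struct flags_demo {
	int		bad:1;	/* reads back as 0 or -1 */
	unsigned int	good:1;	/* reads back as 0 or 1 */
};

static struct flags_demo demo = { .bad = 1, .good = 1 };
/* demo.bad == 1 is false (demo.bad is -1); demo.good == 1 is true. */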
/*
unsigned int suspended:1; /* is in suspend PM state */
struct list_head list;
+ struct list_head card_aux_list; /* for auxiliary bound components */
struct list_head card_list;
struct snd_soc_dai_driver *dai_drv;
*/
struct snd_soc_aux_dev *aux_dev;
int num_aux_devs;
+ struct list_head aux_comp_list;
const struct snd_kcontrol_new *controls;
int num_controls;
INIT_LIST_HEAD(&card->widgets);
INIT_LIST_HEAD(&card->paths);
INIT_LIST_HEAD(&card->dapm_list);
+ INIT_LIST_HEAD(&card->aux_comp_list);
INIT_LIST_HEAD(&card->component_dev_list);
}
TCM_LOGICAL_BLOCK_APP_TAG_CHECK_FAILED = R(0x16),
TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED = R(0x17),
TCM_COPY_TARGET_DEVICE_NOT_REACHABLE = R(0x18),
+ TCM_TOO_MANY_TARGET_DESCS = R(0x19),
+ TCM_UNSUPPORTED_TARGET_DESC_TYPE_CODE = R(0x1a),
+ TCM_TOO_MANY_SEGMENT_DESCS = R(0x1b),
+ TCM_UNSUPPORTED_SEGMENT_DESC_TYPE_CODE = R(0x1c),
#undef R
};
BTRFS_I(inode)->root->root_key.objectid;
),
- TP_printk_btrfs("root = %llu(%s), gen = %llu, ino = %lu, blocks = %llu, "
- "disk_i_size = %llu, last_trans = %llu, logged_trans = %llu",
+ TP_printk_btrfs("root=%llu(%s) gen=%llu ino=%lu blocks=%llu "
+ "disk_i_size=%llu last_trans=%llu logged_trans=%llu",
show_root_type(__entry->root_objectid),
(unsigned long long)__entry->generation,
(unsigned long)__entry->ino,
TRACE_EVENT_CONDITION(btrfs_get_extent,
- TP_PROTO(struct btrfs_root *root, struct extent_map *map),
+ TP_PROTO(struct btrfs_root *root, struct inode *inode,
+ struct extent_map *map),
- TP_ARGS(root, map),
+ TP_ARGS(root, inode, map),
TP_CONDITION(map),
TP_STRUCT__entry_btrfs(
__field( u64, root_objectid )
+ __field( u64, ino )
__field( u64, start )
__field( u64, len )
__field( u64, orig_start )
TP_fast_assign_btrfs(root->fs_info,
__entry->root_objectid = root->root_key.objectid;
- __entry->start = map->start;
+ __entry->ino = btrfs_ino(inode);
+ __entry->start = map->start;
__entry->len = map->len;
__entry->orig_start = map->orig_start;
__entry->block_start = map->block_start;
__entry->compress_type = map->compress_type;
),
- TP_printk_btrfs("root = %llu(%s), start = %llu, len = %llu, "
- "orig_start = %llu, block_start = %llu(%s), "
- "block_len = %llu, flags = %s, refs = %u, "
- "compress_type = %u",
+ TP_printk_btrfs("root=%llu(%s) ino=%llu start=%llu len=%llu "
+ "orig_start=%llu block_start=%llu(%s) "
+ "block_len=%llu flags=%s refs=%u "
+ "compress_type=%u",
show_root_type(__entry->root_objectid),
+ (unsigned long long)__entry->ino,
(unsigned long long)__entry->start,
(unsigned long long)__entry->len,
(unsigned long long)__entry->orig_start,
__field( int, compress_type )
__field( int, refs )
__field( u64, root_objectid )
+ __field( u64, truncated_len )
),
TP_fast_assign_btrfs(btrfs_sb(inode->i_sb),
__entry->refs = atomic_read(&ordered->refs);
__entry->root_objectid =
BTRFS_I(inode)->root->root_key.objectid;
+ __entry->truncated_len = ordered->truncated_len;
),
- TP_printk_btrfs("root = %llu(%s), ino = %llu, file_offset = %llu, "
- "start = %llu, len = %llu, disk_len = %llu, "
- "bytes_left = %llu, flags = %s, compress_type = %d, "
- "refs = %d",
+ TP_printk_btrfs("root=%llu(%s) ino=%llu file_offset=%llu "
+ "start=%llu len=%llu disk_len=%llu "
+ "truncated_len=%llu "
+ "bytes_left=%llu flags=%s compress_type=%d "
+ "refs=%d",
show_root_type(__entry->root_objectid),
(unsigned long long)__entry->ino,
(unsigned long long)__entry->file_offset,
(unsigned long long)__entry->start,
(unsigned long long)__entry->len,
(unsigned long long)__entry->disk_len,
+ (unsigned long long)__entry->truncated_len,
(unsigned long long)__entry->bytes_left,
show_ordered_flags(__entry->flags),
__entry->compress_type, __entry->refs)
BTRFS_I(inode)->root->root_key.objectid;
),
- TP_printk_btrfs("root = %llu(%s), ino = %lu, page_index = %lu, "
- "nr_to_write = %ld, pages_skipped = %ld, range_start = %llu, "
- "range_end = %llu, for_kupdate = %d, "
- "for_reclaim = %d, range_cyclic = %d, writeback_index = %lu",
+ TP_printk_btrfs("root=%llu(%s) ino=%lu page_index=%lu "
+ "nr_to_write=%ld pages_skipped=%ld range_start=%llu "
+ "range_end=%llu for_kupdate=%d "
+ "for_reclaim=%d range_cyclic=%d writeback_index=%lu",
show_root_type(__entry->root_objectid),
(unsigned long)__entry->ino, __entry->index,
__entry->nr_to_write, __entry->pages_skipped,
BTRFS_I(page->mapping->host)->root->root_key.objectid;
),
- TP_printk_btrfs("root = %llu(%s), ino = %lu, page_index = %lu, start = %llu, "
- "end = %llu, uptodate = %d",
+ TP_printk_btrfs("root=%llu(%s) ino=%lu page_index=%lu start=%llu "
+ "end=%llu uptodate=%d",
show_root_type(__entry->root_objectid),
(unsigned long)__entry->ino, (unsigned long)__entry->index,
(unsigned long long)__entry->start,
BTRFS_I(inode)->root->root_key.objectid;
),
- TP_printk_btrfs("root = %llu(%s), ino = %ld, parent = %ld, datasync = %d",
+ TP_printk_btrfs("root=%llu(%s) ino=%ld parent=%ld datasync=%d",
show_root_type(__entry->root_objectid),
(unsigned long)__entry->ino, (unsigned long)__entry->parent,
__entry->datasync)
__entry->create = create;
),
- TP_printk("%pU: block_group offset = %llu, size = %llu, "
- "flags = %llu(%s), bytes_used = %llu, bytes_super = %llu, "
- "create = %d", __entry->fsid,
+ TP_printk("%pU: block_group offset=%llu size=%llu "
+ "flags=%llu(%s) bytes_used=%llu bytes_super=%llu "
+ "create=%d", __entry->fsid,
(unsigned long long)__entry->offset,
(unsigned long long)__entry->size,
(unsigned long long)__entry->flags,
__entry->seq = ref->seq;
),
- TP_printk_btrfs("bytenr = %llu, num_bytes = %llu, action = %s, "
- "parent = %llu(%s), ref_root = %llu(%s), level = %d, "
- "type = %s, seq = %llu",
+ TP_printk_btrfs("bytenr=%llu num_bytes=%llu action=%s "
+ "parent=%llu(%s) ref_root=%llu(%s) level=%d "
+ "type=%s seq=%llu",
(unsigned long long)__entry->bytenr,
(unsigned long long)__entry->num_bytes,
show_ref_action(__entry->action),
__entry->seq = ref->seq;
),
- TP_printk_btrfs("bytenr = %llu, num_bytes = %llu, action = %s, "
- "parent = %llu(%s), ref_root = %llu(%s), owner = %llu, "
- "offset = %llu, type = %s, seq = %llu",
+ TP_printk_btrfs("bytenr=%llu num_bytes=%llu action=%s "
+ "parent=%llu(%s) ref_root=%llu(%s) owner=%llu "
+ "offset=%llu type=%s seq=%llu",
(unsigned long long)__entry->bytenr,
(unsigned long long)__entry->num_bytes,
show_ref_action(__entry->action),
__entry->is_data = head_ref->is_data;
),
- TP_printk_btrfs("bytenr = %llu, num_bytes = %llu, action = %s, is_data = %d",
+ TP_printk_btrfs("bytenr=%llu num_bytes=%llu action=%s is_data=%d",
(unsigned long long)__entry->bytenr,
(unsigned long long)__entry->num_bytes,
show_ref_action(__entry->action),
__entry->root_objectid = fs_info->chunk_root->root_key.objectid;
),
- TP_printk_btrfs("root = %llu(%s), offset = %llu, size = %llu, "
- "num_stripes = %d, sub_stripes = %d, type = %s",
+ TP_printk_btrfs("root=%llu(%s) offset=%llu size=%llu "
+ "num_stripes=%d sub_stripes=%d type=%s",
show_root_type(__entry->root_objectid),
(unsigned long long)__entry->offset,
(unsigned long long)__entry->size,
__entry->cow_level = btrfs_header_level(cow);
),
- TP_printk_btrfs("root = %llu(%s), refs = %d, orig_buf = %llu "
- "(orig_level = %d), cow_buf = %llu (cow_level = %d)",
+ TP_printk_btrfs("root=%llu(%s) refs=%d orig_buf=%llu "
+ "(orig_level=%d) cow_buf=%llu (cow_level=%d)",
show_root_type(__entry->root_objectid),
__entry->refs,
(unsigned long long)__entry->buf_start,
__assign_str(reason, reason)
),
- TP_printk("%pU: %s: flush = %d(%s), flags = %llu(%s), bytes = %llu",
+ TP_printk("%pU: %s: flush=%d(%s) flags=%llu(%s) bytes=%llu",
__entry->fsid, __get_str(reason), __entry->flush,
show_flush_action(__entry->flush),
(unsigned long long)__entry->flags,
__entry->ret = ret;
),
- TP_printk("%pU: state = %d(%s), flags = %llu(%s), num_bytes = %llu, "
- "orig_bytes = %llu, ret = %d", __entry->fsid, __entry->state,
+ TP_printk("%pU: state=%d(%s) flags=%llu(%s) num_bytes=%llu "
+ "orig_bytes=%llu ret=%d", __entry->fsid, __entry->state,
show_flush_state(__entry->state),
(unsigned long long)__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
__entry->len = len;
),
- TP_printk_btrfs("root = %llu(%s), start = %llu, len = %llu",
+ TP_printk_btrfs("root=%llu(%s) start=%llu len=%llu",
show_root_type(BTRFS_EXTENT_TREE_OBJECTID),
(unsigned long long)__entry->start,
(unsigned long long)__entry->len)
__entry->data = data;
),
- TP_printk_btrfs("root = %Lu(%s), len = %Lu, empty_size = %Lu, flags = %Lu(%s)",
+ TP_printk_btrfs("root=%Lu(%s) len=%Lu empty_size=%Lu flags=%Lu(%s)",
show_root_type(BTRFS_EXTENT_TREE_OBJECTID),
__entry->num_bytes, __entry->empty_size, __entry->data,
__print_flags((unsigned long)__entry->data, "|",
__entry->len = len;
),
- TP_printk_btrfs("root = %Lu(%s), block_group = %Lu, flags = %Lu(%s), "
- "start = %Lu, len = %Lu",
+ TP_printk_btrfs("root=%Lu(%s) block_group=%Lu flags=%Lu(%s) "
+ "start=%Lu len=%Lu",
show_root_type(BTRFS_EXTENT_TREE_OBJECTID),
__entry->bg_objectid,
__entry->flags, __print_flags((unsigned long)__entry->flags,
__entry->min_bytes = min_bytes;
),
- TP_printk_btrfs("block_group = %Lu, flags = %Lu(%s), start = %Lu, len = %Lu,"
- " empty_size = %Lu, min_bytes = %Lu", __entry->bg_objectid,
+ TP_printk_btrfs("block_group=%Lu flags=%Lu(%s) start=%Lu len=%Lu "
+ "empty_size=%Lu min_bytes=%Lu", __entry->bg_objectid,
__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
BTRFS_GROUP_FLAGS), __entry->start,
__entry->bg_objectid = block_group->key.objectid;
),
- TP_printk_btrfs("block_group = %Lu", __entry->bg_objectid)
+ TP_printk_btrfs("block_group=%Lu", __entry->bg_objectid)
);
TRACE_EVENT(btrfs_setup_cluster,
__entry->bitmap = bitmap;
),
- TP_printk_btrfs("block_group = %Lu, flags = %Lu(%s), window_start = %Lu, "
- "size = %Lu, max_size = %Lu, bitmap = %d",
+ TP_printk_btrfs("block_group=%Lu flags=%Lu(%s) window_start=%Lu "
+ "size=%Lu max_size=%Lu bitmap=%d",
__entry->bg_objectid,
__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
__entry->ip = IP
),
- TP_printk("state=%p; mask = %s; caller = %pS", __entry->state,
+ TP_printk("state=%p mask=%s caller=%pS", __entry->state,
show_gfp_flags(__entry->mask), (void *)__entry->ip)
);
__entry->ip = IP
),
- TP_printk(" state=%p; caller = %pS", __entry->state,
+ TP_printk("state=%p caller=%pS", __entry->state,
(void *)__entry->ip)
);
__entry->normal_work = &work->normal_work;
),
- TP_printk_btrfs("work=%p (normal_work=%p), wq=%p, func=%pf, ordered_func=%p,"
- " ordered_free=%p",
+ TP_printk_btrfs("work=%p (normal_work=%p) wq=%p func=%pf ordered_func=%p "
+ "ordered_free=%p",
__entry->work, __entry->normal_work, __entry->wq,
__entry->func, __entry->ordered_func, __entry->ordered_free)
);
-/* For situiations that the work is freed */
+/*
+ * For situations when the work is freed, we pass fs_info and a tag that
+ * matches the address of the work structure so it can be paired with the
+ * scheduling event.
+ */
DECLARE_EVENT_CLASS(btrfs__work__done,
- TP_PROTO(struct btrfs_work *work),
+ TP_PROTO(struct btrfs_fs_info *fs_info, void *wtag),
- TP_ARGS(work),
+ TP_ARGS(fs_info, wtag),
TP_STRUCT__entry_btrfs(
- __field( void *, work )
+ __field( void *, wtag )
),
- TP_fast_assign_btrfs(btrfs_work_owner(work),
- __entry->work = work;
+ TP_fast_assign_btrfs(fs_info,
+ __entry->wtag = wtag;
),
- TP_printk_btrfs("work->%p", __entry->work)
+ TP_printk_btrfs("work->%p", __entry->wtag)
);
DEFINE_EVENT(btrfs__work, btrfs_work_queued,
DEFINE_EVENT(btrfs__work__done, btrfs_all_work_done,
- TP_PROTO(struct btrfs_work *work),
+ TP_PROTO(struct btrfs_fs_info *fs_info, void *wtag),
- TP_ARGS(work)
+ TP_ARGS(fs_info, wtag)
);
DEFINE_EVENT(btrfs__work, btrfs_ordered_sched,
__entry->high = high;
),
- TP_printk_btrfs("name=%s%s, wq=%p", __get_str(name),
+ TP_printk_btrfs("name=%s%s wq=%p", __get_str(name),
__print_flags(__entry->high, "",
{(WQ_HIGHPRI), "-high"}),
__entry->wq)
__entry->free_reserved = free_reserved;
),
- TP_printk_btrfs("rootid=%llu, ino=%lu, free_reserved=%llu",
+ TP_printk_btrfs("rootid=%llu ino=%lu free_reserved=%llu",
__entry->rootid, __entry->ino, __entry->free_reserved)
);
__entry->op = op;
),
- TP_printk_btrfs("root=%llu, ino=%lu, start=%llu, len=%llu, reserved=%llu, op=%s",
+ TP_printk_btrfs("root=%llu ino=%lu start=%llu len=%llu reserved=%llu op=%s",
__entry->rootid, __entry->ino, __entry->start, __entry->len,
__entry->reserved,
__print_flags((unsigned long)__entry->op, "",
__entry->reserved = reserved;
),
- TP_printk_btrfs("root=%llu, reserved=%llu, op=free",
+ TP_printk_btrfs("root=%llu reserved=%llu op=free",
__entry->ref_root, __entry->reserved)
);
__entry->num_bytes = rec->num_bytes;
),
- TP_printk_btrfs("bytenr = %llu, num_bytes = %llu",
+ TP_printk_btrfs("bytenr=%llu num_bytes=%llu",
(unsigned long long)__entry->bytenr,
(unsigned long long)__entry->num_bytes)
);
__entry->nr_new_roots = nr_new_roots;
),
- TP_printk_btrfs("bytenr = %llu, num_bytes = %llu, nr_old_roots = %llu, "
- "nr_new_roots = %llu",
+ TP_printk_btrfs("bytenr=%llu num_bytes=%llu nr_old_roots=%llu "
+ "nr_new_roots=%llu",
__entry->bytenr,
__entry->num_bytes,
__entry->nr_old_roots,
__entry->cur_new_count = cur_new_count;
),
- TP_printk_btrfs("qgid = %llu, cur_old_count = %llu, cur_new_count = %llu",
+ TP_printk_btrfs("qgid=%llu cur_old_count=%llu cur_new_count=%llu",
__entry->qgid,
__entry->cur_old_count,
__entry->cur_new_count)
{(unsigned long)__GFP_WRITE, "__GFP_WRITE"}, \
{(unsigned long)__GFP_RECLAIM, "__GFP_RECLAIM"}, \
{(unsigned long)__GFP_DIRECT_RECLAIM, "__GFP_DIRECT_RECLAIM"},\
- {(unsigned long)__GFP_KSWAPD_RECLAIM, "__GFP_KSWAPD_RECLAIM"},\
- {(unsigned long)__GFP_OTHER_NODE, "__GFP_OTHER_NODE"} \
+ {(unsigned long)__GFP_KSWAPD_RECLAIM, "__GFP_KSWAPD_RECLAIM"}\
#define show_gfp_flags(flags) \
(flags) ? __print_flags(flags, "|", \
header-y += termios.h
header-y += thermal.h
header-y += time.h
+header-y += timerfd.h
header-y += times.h
header-y += timex.h
header-y += tiocl.h
--- /dev/null
+/*
+ * include/linux/timerfd.h
+ *
+ * Copyright (C) 2007 Davide Libenzi <davidel@xmailserver.org>
+ *
+ */
+
+#ifndef _UAPI_LINUX_TIMERFD_H
+#define _UAPI_LINUX_TIMERFD_H
+
+#include <linux/types.h>
+
+/* For O_CLOEXEC and O_NONBLOCK */
+#include <linux/fcntl.h>
+
+/* For _IO helpers */
+#include <linux/ioctl.h>
+
+/*
+ * CAREFUL: Check include/asm-generic/fcntl.h when defining
+ * new flags, since they might collide with O_* ones. We want
+ * to re-use O_* flags that couldn't possibly have a meaning
+ * from eventfd, in order to leave a free define-space for
+ * shared O_* flags.
+ *
+ * Also make sure to update the masks in include/linux/timerfd.h
+ * when adding new flags.
+ */
+#define TFD_TIMER_ABSTIME (1 << 0)
+#define TFD_TIMER_CANCEL_ON_SET (1 << 1)
+#define TFD_CLOEXEC O_CLOEXEC
+#define TFD_NONBLOCK O_NONBLOCK
+
+#define TFD_IOC_SET_TICKS _IOW('T', 0, __u64)
+
+#endif /* _UAPI_LINUX_TIMERFD_H */
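For reference, these flags map directly onto the familiar userspace API. A minimal usage sketch (error handling omitted for brevity):

#include <sys/timerfd.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	struct itimerspec its = { .it_value = { .tv_sec = 1 } };
	uint64_t expirations;
	int fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);

	timerfd_settime(fd, 0, &its, NULL);	/* relative one-second timer */
	read(fd, &expirations, sizeof(expirations));	/* blocks until expiry */
	close(fd);
	return 0;
}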
Say N.
+config SOCK_CGROUP_DATA
+ bool
+ default n
+
endif # CGROUPS
config CHECKPOINT_RESTORE
help
See tools/perf/design.txt for details
+config PC104
+ bool "PC/104 support"
+ help
+ Expose PC/104 form factor device drivers and options available for
+ selection and configuration. Enable this option if your target
+ machine has a PC/104 bus.
+
menu "Kernel Performance Events And Counters"
config PERF_EVENTS
}
rcu_read_lock();
- sem_lock(sma, sops, nsops);
+ locknum = sem_lock(sma, sops, nsops);
if (!ipc_valid_object(&sma->sem_perm))
goto out_unlock_free;
attr->value_size == 0 || attr->map_flags)
return ERR_PTR(-EINVAL);
- if (attr->value_size >= 1 << (KMALLOC_SHIFT_MAX - 1))
+ if (attr->value_size > KMALLOC_MAX_SIZE)
/* if value_size is bigger, the user space won't be able to
* access the elements.
*/
*/
goto free_htab;
- if (htab->map.value_size >= (1 << (KMALLOC_SHIFT_MAX - 1)) -
+ if (htab->map.value_size >= KMALLOC_MAX_SIZE -
MAX_BPF_STACK - sizeof(struct htab_elem))
/* if value_size is bigger, the user space won't be able to
* access the elements via bpf syscall. This check also makes
{
return has_ns_capability(t, &init_user_ns, cap);
}
+EXPORT_SYMBOL(has_capability);
/**
* has_ns_capability_noaudit - Does a task have a capability (unaudited)
struct perf_event_context *ctx = event->ctx;
struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
struct perf_event_context *task_ctx = cpuctx->task_ctx;
- bool activate = true;
+ bool reprogram = true;
int ret = 0;
raw_spin_lock(&cpuctx->ctx.lock);
raw_spin_lock(&ctx->lock);
task_ctx = ctx;
- /* If we're on the wrong CPU, try again */
- if (task_cpu(ctx->task) != smp_processor_id()) {
- ret = -ESRCH;
- goto unlock;
- }
+ reprogram = (ctx->task == current);
/*
- * If we're on the right CPU, see if the task we target is
- * current, if not we don't have to activate the ctx, a future
- * context switch will do that for us.
+ * If the task is running, it must be running on this CPU,
+ * otherwise we cannot reprogram things.
+ *
+ * If it's not running, we don't care; ctx->lock will
+ * serialize against it becoming runnable.
*/
- if (ctx->task != current)
- activate = false;
- else
- WARN_ON_ONCE(cpuctx->task_ctx && cpuctx->task_ctx != ctx);
+ if (task_curr(ctx->task) && !reprogram) {
+ ret = -ESRCH;
+ goto unlock;
+ }
+ WARN_ON_ONCE(reprogram && cpuctx->task_ctx && cpuctx->task_ctx != ctx);
} else if (task_ctx) {
raw_spin_lock(&task_ctx->lock);
}
- if (activate) {
+ if (reprogram) {
ctx_sched_out(ctx, cpuctx, EVENT_TIME);
add_event_to_ctx(event, ctx);
ctx_resched(cpuctx, task_ctx);
/*
* Installing events is tricky because we cannot rely on ctx->is_active
* to be set in case this is the nr_events 0 -> 1 transition.
+ *
+ * Instead we use task_curr(), which tells us if the task is running.
+ * However, since we use task_curr() outside of rq::lock, we can race
+ * against the actual state. This means the result can be wrong.
+ *
+ * If we get a false positive, we retry; this is harmless.
+ *
+ * If we get a false negative, things are complicated. If we are after
+ * perf_event_context_sched_in() ctx::lock will serialize us, and the
+ * value must be correct. If we're before, it doesn't matter since
+ * perf_event_context_sched_in() will program the counter.
+ *
+ * However, this hinges on the remote context switch having observed
+ * our task->perf_event_ctxp[] store, such that it will in fact take
+ * ctx::lock in perf_event_context_sched_in().
+ *
+ * We do this by task_function_call(), if the IPI fails to hit the task
+ * we know any future context switch of the task must see the
+ * perf_event_ctxp[] store.
*/
-again:
+
/*
- * Cannot use task_function_call() because we need to run on the task's
- * CPU regardless of whether its current or not.
+ * This smp_mb() orders the task->perf_event_ctxp[] store with the
+ * task_cpu() load, such that if the IPI then does not find the task
+ * running, a future context switch of that task must observe the
+ * store.
*/
- if (!cpu_function_call(task_cpu(task), __perf_install_in_context, event))
+ smp_mb();
+again:
+ if (!task_function_call(task, __perf_install_in_context, event))
return;
raw_spin_lock_irq(&ctx->lock);
raw_spin_unlock_irq(&ctx->lock);
return;
}
- raw_spin_unlock_irq(&ctx->lock);
/*
- * Since !ctx->is_active doesn't mean anything, we must IPI
- * unconditionally.
+ * If the task is not running, ctx->lock will prevent it from becoming
+ * so, and we can safely install the event.
*/
- goto again;
+ if (task_curr(task)) {
+ raw_spin_unlock_irq(&ctx->lock);
+ goto again;
+ }
+ add_event_to_ctx(event, ctx);
+ raw_spin_unlock_irq(&ctx->lock);
}
/*
perf_output_end(&handle);
}
-/*
- * Generic event overflow handling, sampling.
- */
-
-static int __perf_event_overflow(struct perf_event *event,
- int throttle, struct perf_sample_data *data,
- struct pt_regs *regs)
+static int
+__perf_event_account_interrupt(struct perf_event *event, int throttle)
{
- int events = atomic_read(&event->event_limit);
struct hw_perf_event *hwc = &event->hw;
- u64 seq;
int ret = 0;
-
- /*
- * Non-sampling counters might still use the PMI to fold short
- * hardware counters, ignore those.
- */
- if (unlikely(!is_sampling_event(event)))
- return 0;
+ u64 seq;
seq = __this_cpu_read(perf_throttled_seq);
if (seq != hwc->interrupts_seq) {
perf_adjust_period(event, delta, hwc->last_period, true);
}
+ return ret;
+}
+
+int perf_event_account_interrupt(struct perf_event *event)
+{
+ return __perf_event_account_interrupt(event, 1);
+}
+
+/*
+ * Generic event overflow handling, sampling.
+ */
+
+static int __perf_event_overflow(struct perf_event *event,
+ int throttle, struct perf_sample_data *data,
+ struct pt_regs *regs)
+{
+ int events = atomic_read(&event->event_limit);
+ int ret = 0;
+
+ /*
+ * Non-sampling counters might still use the PMI to fold short
+ * hardware counters, ignore those.
+ */
+ if (unlikely(!is_sampling_event(event)))
+ return 0;
+
+ ret = __perf_event_account_interrupt(event, throttle);
+
/*
* XXX event_limit might not quite work as expected on inherited
* events
return 0;
}
+/*
+ * Variation on perf_event_ctx_lock_nested(), except we take two context
+ * mutexes.
+ */
+static struct perf_event_context *
+__perf_event_ctx_lock_double(struct perf_event *group_leader,
+ struct perf_event_context *ctx)
+{
+ struct perf_event_context *gctx;
+
+again:
+ rcu_read_lock();
+ gctx = READ_ONCE(group_leader->ctx);
+ if (!atomic_inc_not_zero(&gctx->refcount)) {
+ rcu_read_unlock();
+ goto again;
+ }
+ rcu_read_unlock();
+
+ mutex_lock_double(&gctx->mutex, &ctx->mutex);
+
+ if (group_leader->ctx != gctx) {
+ mutex_unlock(&ctx->mutex);
+ mutex_unlock(&gctx->mutex);
+ put_ctx(gctx);
+ goto again;
+ }
+
+ return gctx;
+}
+
/**
* sys_perf_event_open - open a performance event, associate it to a task/cpu
*
}
if (move_group) {
- gctx = group_leader->ctx;
- mutex_lock_double(&gctx->mutex, &ctx->mutex);
+ gctx = __perf_event_ctx_lock_double(group_leader, ctx);
+
if (gctx->task == TASK_TOMBSTONE) {
err = -ESRCH;
goto err_locked;
}
+
+ /*
+ * Check if we raced against another sys_perf_event_open() call
+ * moving the software group underneath us.
+ */
+ if (!(group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) {
+ /*
+ * If someone moved the group out from under us, check
+ * if this new event wound up on the same ctx, if so
+ * it's the regular !move_group case; otherwise fail.
+ */
+ if (gctx != ctx) {
+ err = -EINVAL;
+ goto err_locked;
+ } else {
+ perf_event_ctx_unlock(group_leader, gctx);
+ move_group = 0;
+ }
+ }
} else {
mutex_lock(&ctx->mutex);
}
perf_unpin_context(ctx);
if (move_group)
- mutex_unlock(&gctx->mutex);
+ perf_event_ctx_unlock(group_leader, gctx);
mutex_unlock(&ctx->mutex);
if (task) {
err_locked:
if (move_group)
- mutex_unlock(&gctx->mutex);
+ perf_event_ctx_unlock(group_leader, gctx);
mutex_unlock(&ctx->mutex);
/* err_file: */
fput(event_file);
}
EXPORT_SYMBOL_GPL(static_key_slow_dec_deferred);
+void static_key_deferred_flush(struct static_key_deferred *key)
+{
+ STATIC_KEY_CHECK_USE();
+ flush_delayed_work(&key->work);
+}
+EXPORT_SYMBOL_GPL(static_key_deferred_flush);
+
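A sketch of the intended caller pattern (the module and key names are hypothetical): teardown paths must flush the delayed-decrement work before the key's storage can go away, otherwise the work item could run after the memory is freed.

static struct static_key_deferred mymod_key;

static void mymod_teardown(void)
{
	static_key_slow_dec_deferred(&mymod_key);
	static_key_deferred_flush(&mymod_key);	/* wait for any pending decrement */
}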
void jump_label_rate_limit(struct static_key_deferred *key,
unsigned long rl)
{
/* pages are dead and unused, undo the arch mapping */
align_start = res->start & ~(SECTION_SIZE - 1);
align_size = ALIGN(resource_size(res), SECTION_SIZE);
+ mem_hotplug_begin();
arch_remove_memory(align_start, align_size);
+ mem_hotplug_done();
untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
pgmap_radix_release(res);
dev_WARN_ONCE(dev, pgmap->altmap && pgmap->altmap->alloc,
if (error)
goto err_pfn_remap;
+ mem_hotplug_begin();
error = arch_add_memory(nid, align_start, align_size, true);
+ mem_hotplug_done();
if (error)
goto err_add_memory;
static void delayed_free_pidns(struct rcu_head *p)
{
- kmem_cache_free(pid_ns_cachep,
- container_of(p, struct pid_namespace, rcu));
+ struct pid_namespace *ns = container_of(p, struct pid_namespace, rcu);
+
+ dec_pid_namespaces(ns->ucounts);
+ put_user_ns(ns->user_ns);
+
+ kmem_cache_free(pid_ns_cachep, ns);
}
static void destroy_pid_namespace(struct pid_namespace *ns)
ns_free_inum(&ns->ns);
for (i = 0; i < PIDMAP_ENTRIES; i++)
kfree(ns->pidmap[i].page);
- dec_pid_namespaces(ns->ucounts);
- put_user_ns(ns->user_ns);
call_rcu(&ns->rcu, delayed_free_pidns);
}
* fresh group stop. Read comment in do_signal_stop() for details.
*/
if (!sig->group_stop_count && !(sig->flags & SIGNAL_STOP_STOPPED)) {
- sig->flags = SIGNAL_STOP_STOPPED;
+ signal_set_stop_flags(sig, SIGNAL_STOP_STOPPED);
return true;
}
return false;
* will take ->siglock, notice SIGNAL_CLD_MASK, and
* notify its parent. See get_signal_to_deliver().
*/
- signal->flags = why | SIGNAL_STOP_CONTINUED;
+ signal_set_stop_flags(signal, why | SIGNAL_STOP_CONTINUED);
signal->group_stop_count = 0;
signal->group_exit_code = 0;
}
tick = expires;
/* Skip reprogram of event if its not changed */
- if (ts->tick_stopped && (expires == dev->next_event))
+ if (ts->tick_stopped && (expires == ts->next_tick))
goto out;
/*
trace_tick_stop(1, TICK_DEP_MASK_NONE);
}
+ ts->next_tick = tick;
+
/*
* If the expiration time == KTIME_MAX, then we simply stop
* the tick timer.
else
tick_program_event(tick, 1);
out:
- /* Update the estimated sleep length */
+ /*
+ * Update the estimated sleep length until the next timer
+ * (not only the tick).
+ */
ts->sleep_length = ktime_sub(dev->next_event, now);
return tick;
}
* timer is modified for nohz sleeps. This is necessary
* to resume the tick timer operation in the timeline
* when the CPU returns from nohz sleep.
+ * @next_tick: Next tick to be fired when in dynticks mode.
* @tick_stopped: Indicator that the idle tick has been stopped
* @idle_jiffies: jiffies at the entry to idle for idle time accounting
* @idle_calls: Total number of idle calls
unsigned long check_clocks;
enum tick_nohz_mode nohz_mode;
ktime_t last_tick;
+ ktime_t next_tick;
int inidle;
int tick_stopped;
unsigned long idle_jiffies;
config DEBUG_INFO_SPLIT
bool "Produce split debuginfo in .dwo files"
- depends on DEBUG_INFO
+ depends on DEBUG_INFO && !FRV
help
Generate debug info into separate .dwo files. This significantly
reduces the build directory size for builds with DEBUG_INFO,
}
EXPORT_SYMBOL(iov_iter_copy_from_user_atomic);
+static inline void pipe_truncate(struct iov_iter *i)
+{
+ struct pipe_inode_info *pipe = i->pipe;
+ if (pipe->nrbufs) {
+ size_t off = i->iov_offset;
+ int idx = i->idx;
+ int nrbufs = (idx - pipe->curbuf) & (pipe->buffers - 1);
+ if (off) {
+ pipe->bufs[idx].len = off - pipe->bufs[idx].offset;
+ idx = next_idx(idx, pipe);
+ nrbufs++;
+ }
+ while (pipe->nrbufs > nrbufs) {
+ pipe_buf_release(pipe, &pipe->bufs[idx]);
+ idx = next_idx(idx, pipe);
+ pipe->nrbufs--;
+ }
+ }
+}
+
static void pipe_advance(struct iov_iter *i, size_t size)
{
struct pipe_inode_info *pipe = i->pipe;
- struct pipe_buffer *buf;
- int idx = i->idx;
- size_t off = i->iov_offset, orig_sz;
-
if (unlikely(i->count < size))
size = i->count;
- orig_sz = size;
-
if (size) {
+ struct pipe_buffer *buf;
+ size_t off = i->iov_offset, left = size;
+ int idx = i->idx;
if (off) /* make it relative to the beginning of buffer */
- size += off - pipe->bufs[idx].offset;
+ left += off - pipe->bufs[idx].offset;
while (1) {
buf = &pipe->bufs[idx];
- if (size <= buf->len)
+ if (left <= buf->len)
break;
- size -= buf->len;
+ left -= buf->len;
idx = next_idx(idx, pipe);
}
- buf->len = size;
i->idx = idx;
- off = i->iov_offset = buf->offset + size;
- }
- if (off)
- idx = next_idx(idx, pipe);
- if (pipe->nrbufs) {
- int unused = (pipe->curbuf + pipe->nrbufs) & (pipe->buffers - 1);
- /* [curbuf,unused) is in use. Free [idx,unused) */
- while (idx != unused) {
- pipe_buf_release(pipe, &pipe->bufs[idx]);
- idx = next_idx(idx, pipe);
- pipe->nrbufs--;
- }
+ i->iov_offset = buf->offset + left;
}
- i->count -= orig_sz;
+ i->count -= size;
+ /* ... and discard everything past that point */
+ pipe_truncate(i);
}
void iov_iter_advance(struct iov_iter *i, size_t size)
size_t count)
{
BUG_ON(direction != ITER_PIPE);
+ WARN_ON(pipe->nrbufs == pipe->buffers);
i->type = direction;
i->pipe = pipe;
i->idx = (pipe->curbuf + pipe->nrbufs) & (pipe->buffers - 1);
dax_radix_locked_entry(0, RADIX_DAX_EMPTY));
/* Wakeup waiters for exceptional entry lock */
dax_wake_mapping_entry_waiter(mapping, page->index, p,
- false);
+ true);
}
}
__radix_tree_replace(&mapping->page_tree, node, slot, page,
{
pmd_t entry;
unsigned long haddr;
+ bool write = vmf->flags & FAULT_FLAG_WRITE;
vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
goto unlock;
entry = pmd_mkyoung(orig_pmd);
+ if (write)
+ entry = pmd_mkdirty(entry);
haddr = vmf->address & HPAGE_PMD_MASK;
- if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry,
- vmf->flags & FAULT_FLAG_WRITE))
+ if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry, write))
update_mmu_cache_pmd(vmf->vma, vmf->address, vmf->pmd);
unlock:
}
for (i = 0; i < HPAGE_PMD_NR; i++) {
- pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE |
- __GFP_OTHER_NODE, vma,
+ pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE, vma,
vmf->address, page_to_nid(page));
if (unlikely(!pages[i] ||
mem_cgroup_try_charge(pages[i], vma->vm_mm,
}
/*
- * When releasing a hugetlb pool reservation, any surplus pages that were
- * allocated to satisfy the reservation must be explicitly freed if they were
- * never used.
- * Called with hugetlb_lock held.
+ * This routine has two main purposes:
+ * 1) Decrement the reservation count (resv_huge_pages) by the value passed
+ * in unused_resv_pages. This corresponds to the prior adjustments made
+ * to the associated reservation map.
+ * 2) Free any unused surplus pages that may have been allocated to satisfy
+ * the reservation. As many as unused_resv_pages may be freed.
+ *
+ * Called with hugetlb_lock held. However, the lock could be dropped (and
+ * reacquired) during calls to cond_resched_lock. Whenever dropping the lock,
+ * we must make sure nobody else can claim pages we are in the process of
+ * freeing. Do this by ensuring resv_huge_pages is always greater than the
+ * number of huge pages we plan to free when dropping the lock.
*/
static void return_unused_surplus_pages(struct hstate *h,
unsigned long unused_resv_pages)
{
unsigned long nr_pages;
- /* Uncommit the reservation */
- h->resv_huge_pages -= unused_resv_pages;
-
/* Cannot return gigantic pages currently */
if (hstate_is_gigantic(h))
- return;
+ goto out;
+ /*
+ * Part (or even all) of the reservation could have been backed
+ * by pre-allocated pages. Only free surplus pages.
+ */
nr_pages = min(unused_resv_pages, h->surplus_huge_pages);
/*
* when the nodes with surplus pages have no free pages.
 * free_pool_huge_page() will balance the freed pages across the
* on-line nodes with memory and will handle the hstate accounting.
+ *
+ * Note that we decrement resv_huge_pages as we free the pages. If
+ * we drop the lock, resv_huge_pages will still be sufficiently large
+ * to cover subsequent pages we may free.
*/
while (nr_pages--) {
+ h->resv_huge_pages--;
+ unused_resv_pages--;
if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1))
- break;
+ goto out;
cond_resched_lock(&hugetlb_lock);
}
+
+out:
+ /* Fully uncommit the reservation */
+ h->resv_huge_pages -= unused_resv_pages;
}
VM_BUG_ON(address & ~HPAGE_PMD_MASK);
/* Only allocate from the target node */
- gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_OTHER_NODE | __GFP_THISNODE;
+ gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
/*
* Before allocating the hugepage, release the mmap_sem read lock.
struct vm_area_struct *vma;
unsigned long addr;
pmd_t *pmd, _pmd;
- bool deposited = false;
i_mmap_lock_write(mapping);
vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);
/* assume page table is clear */
_pmd = pmdp_collapse_flush(vma, addr, pmd);
- /*
- * now deposit the pgtable for arch that need it
- * otherwise free it.
- */
- if (arch_needs_pgtable_deposit()) {
- /*
- * The deposit should be visibile only after
- * collapse is seen by others.
- */
- smp_wmb();
- pgtable_trans_huge_deposit(vma->vm_mm, pmd,
- pmd_pgtable(_pmd));
- deposited = true;
- }
spin_unlock(ptl);
up_write(&vma->vm_mm->mmap_sem);
- if (!deposited) {
- atomic_long_dec(&vma->vm_mm->nr_ptes);
- pte_free(vma->vm_mm, pmd_pgtable(_pmd));
- }
+ atomic_long_dec(&vma->vm_mm->nr_ptes);
+ pte_free(vma->vm_mm, pmd_pgtable(_pmd));
}
}
i_mmap_unlock_write(mapping);
VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
/* Only allocate from the target node */
- gfp = alloc_hugepage_khugepaged_gfpmask() |
- __GFP_OTHER_NODE | __GFP_THISNODE;
+ gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
new_page = khugepaged_alloc_page(hpage, gfp, node);
if (!new_page) {
unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
int nid, unsigned int lru_mask)
{
+ struct lruvec *lruvec = mem_cgroup_lruvec(NODE_DATA(nid), memcg);
unsigned long nr = 0;
- struct mem_cgroup_per_node *mz;
enum lru_list lru;
VM_BUG_ON((unsigned)nid >= nr_node_ids);
for_each_lru(lru) {
if (!(BIT(lru) & lru_mask))
continue;
- mz = mem_cgroup_nodeinfo(memcg, nid);
- nr += mz->lru_size[lru];
+ nr += mem_cgroup_get_lru_size(lruvec, lru);
}
return nr;
}
* mem_cgroup_update_lru_size - account for adding or removing an lru page
* @lruvec: mem_cgroup per zone lru vector
* @lru: index of lru list the page is sitting on
+ * @zid: zone id of the accounted pages
* @nr_pages: positive when adding or negative when removing
*
* This function must be called under lru_lock, just before a page is added
* so as to allow it to check that lru_size 0 is consistent with list_empty).
*/
void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
- int nr_pages)
+ int zid, int nr_pages)
{
struct mem_cgroup_per_node *mz;
unsigned long *lru_size;
long size;
- bool empty;
if (mem_cgroup_disabled())
return;
mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
- lru_size = mz->lru_size + lru;
- empty = list_empty(lruvec->lists + lru);
+ lru_size = &mz->lru_zone_size[zid][lru];
if (nr_pages < 0)
*lru_size += nr_pages;
size = *lru_size;
- if (WARN_ONCE(size < 0 || empty != !size,
- "%s(%p, %d, %d): lru_size %ld but %sempty\n",
- __func__, lruvec, lru, nr_pages, size, empty ? "" : "not ")) {
+ if (WARN_ONCE(size < 0,
+ "%s(%p, %d, %d): lru_size %ld\n",
+ __func__, lruvec, lru, nr_pages, size)) {
VM_BUG_ON(1);
*lru_size = 0;
}
}
#endif /* __PAGETABLE_PMD_FOLDED */
-static int __follow_pte(struct mm_struct *mm, unsigned long address,
- pte_t **ptepp, spinlock_t **ptlp)
+static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
{
pgd_t *pgd;
pud_t *pud;
pmd = pmd_offset(pud, address);
VM_BUG_ON(pmd_trans_huge(*pmd));
- if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
- goto out;
- /* We cannot handle huge page PFN maps. Luckily they don't exist. */
- if (pmd_huge(*pmd))
+ if (pmd_huge(*pmd)) {
+ if (!pmdpp)
+ goto out;
+
+ *ptlp = pmd_lock(mm, pmd);
+ if (pmd_huge(*pmd)) {
+ *pmdpp = pmd;
+ return 0;
+ }
+ spin_unlock(*ptlp);
+ }
+
+ if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
goto out;
ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
return -EINVAL;
}
-int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
- spinlock_t **ptlp)
+static inline int follow_pte(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, spinlock_t **ptlp)
+{
+ int res;
+
+ /* (void) is needed to make gcc happy */
+ (void) __cond_lock(*ptlp,
+ !(res = __follow_pte_pmd(mm, address, ptepp, NULL,
+ ptlp)));
+ return res;
+}
+
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
{
int res;
/* (void) is needed to make gcc happy */
(void) __cond_lock(*ptlp,
- !(res = __follow_pte(mm, address, ptepp, ptlp)));
+ !(res = __follow_pte_pmd(mm, address, ptepp, pmdpp,
+ ptlp)));
return res;
}
+EXPORT_SYMBOL(follow_pte_pmd);
/**
* follow_pfn - look up PFN at a user virtual address
#endif
for (page = start_page; page <= end_page;) {
- /* Make sure we are not inadvertently changing nodes */
- VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
-
if (!pfn_valid_within(page_to_pfn(page))) {
page++;
continue;
}
+ /* Make sure we are not inadvertently changing nodes */
+ VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
+
if (!PageBuddy(page)) {
page++;
continue;
* Update NUMA hit/miss statistics
*
* Must be called with interrupts disabled.
- *
- * When __GFP_OTHER_NODE is set assume the node of the preferred
- * zone is the local node. This is useful for daemons who allocate
- * memory on behalf of other processes.
*/
-static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
- gfp_t flags)
+static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
{
#ifdef CONFIG_NUMA
- int local_nid = numa_node_id();
enum zone_stat_item local_stat = NUMA_LOCAL;
- if (unlikely(flags & __GFP_OTHER_NODE)) {
+ if (z->node != numa_node_id())
local_stat = NUMA_OTHER;
- local_nid = preferred_zone->node;
- }
- if (z->node == local_nid) {
+ if (z->node == preferred_zone->node)
__inc_zone_state(z, NUMA_HIT);
- __inc_zone_state(z, local_stat);
- } else {
+ else {
__inc_zone_state(z, NUMA_MISS);
__inc_zone_state(preferred_zone, NUMA_FOREIGN);
}
+ __inc_zone_state(z, local_stat);
#endif
}
}
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
- zone_statistics(preferred_zone, zone, gfp_flags);
+ zone_statistics(preferred_zone, zone);
local_irq_restore(flags);
VM_BUG_ON_PAGE(bad_range(zone, page), page);
* drivers to provide a backing region of memory for use as either an
* sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
*/
-static struct page *__page_frag_refill(struct page_frag_cache *nc,
- gfp_t gfp_mask)
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
{
struct page *page = NULL;
gfp_t gfp = gfp_mask;
return page;
}
-void __page_frag_drain(struct page *page, unsigned int order,
- unsigned int count)
+void __page_frag_cache_drain(struct page *page, unsigned int count)
{
VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
if (page_ref_sub_and_test(page, count)) {
+ unsigned int order = compound_order(page);
+
if (order == 0)
free_hot_cold_page(page, false);
else
__free_pages_ok(page, order);
}
}
-EXPORT_SYMBOL(__page_frag_drain);
+EXPORT_SYMBOL(__page_frag_cache_drain);
-void *__alloc_page_frag(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask)
+void *page_frag_alloc(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask)
{
unsigned int size = PAGE_SIZE;
struct page *page;
if (unlikely(!nc->va)) {
refill:
- page = __page_frag_refill(nc, gfp_mask);
+ page = __page_frag_cache_refill(nc, gfp_mask);
if (!page)
return NULL;
return nc->va + offset;
}
-EXPORT_SYMBOL(__alloc_page_frag);
+EXPORT_SYMBOL(page_frag_alloc);
/*
* Frees a page fragment allocated out of either a compound or order 0 page.
*/
-void __free_page_frag(void *addr)
+void page_frag_free(void *addr)
{
struct page *page = virt_to_head_page(addr);
if (unlikely(put_page_testzero(page)))
__free_pages_ok(page, compound_order(page));
}
-EXPORT_SYMBOL(__free_page_frag);
+EXPORT_SYMBOL(page_frag_free);
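Under the new names the pairing is self-evident. A minimal caller sketch (the cache, size, and helper names are illustrative):

#include <linux/gfp.h>

static struct page_frag_cache my_frag_cache;

static void *my_alloc_small_buf(void)
{
	return page_frag_alloc(&my_frag_cache, 256, GFP_ATOMIC);
}

static void my_free_small_buf(void *buf)
{
	page_frag_free(buf);
}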
static void *make_alloc_exact(unsigned long addr, unsigned int order,
size_t size)
unsigned int pos;
unsigned int *list;
unsigned int count;
- unsigned int rand;
};
struct rnd_state rnd_state;
};
} else {
state->list = cachep->random_seq;
state->count = count;
- state->pos = 0;
- state->rand = rand;
+ state->pos = rand % count;
ret = true;
}
return ret;
/* Get the next entry on the list and randomize it using a random shift */
static freelist_idx_t next_random_slot(union freelist_init_state *state)
{
- return (state->list[state->pos++] + state->rand) % state->count;
+ if (state->pos >= state->count)
+ state->pos = 0;
+ return state->list[state->pos++];
}
/* Swap two freelist entries */
count = page_trans_huge_mapcount(page, total_mapcount);
if (count <= 1 && PageSwapCache(page)) {
count += page_swapcount(page);
- if (count == 1 && !PageWriteback(page)) {
+ if (count != 1)
+ goto out;
+ if (!PageWriteback(page)) {
delete_from_swap_cache(page);
SetPageDirty(page);
+ } else {
+ swp_entry_t entry;
+ struct swap_info_struct *p;
+
+ entry.val = page_private(page);
+ p = swap_info_get(entry);
+ if (p->flags & SWP_STABLE_WRITES) {
+ spin_unlock(&p->lock);
+ return false;
+ }
+ spin_unlock(&p->lock);
}
}
+out:
return count <= 1;
}
error = -ENOMEM;
goto bad_swap;
}
+
+ if (bdi_cap_stable_pages_required(inode_to_bdi(inode)))
+ p->flags |= SWP_STABLE_WRITES;
+
if (p->bdev && blk_queue_nonrot(bdev_get_queue(p->bdev))) {
int cpu;
return node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
}
+unsigned long lruvec_zone_lru_size(struct lruvec *lruvec, enum lru_list lru,
+ int zone_idx)
+{
+ if (!mem_cgroup_disabled())
+ return mem_cgroup_get_zone_lru_size(lruvec, lru, zone_idx);
+
+ return zone_page_state(&lruvec_pgdat(lruvec)->node_zones[zone_idx],
+ NR_ZONE_LRU_BASE + lru);
+}
+
/*
* Add a shrinker callback to be called from the vm.
*/
 * be complete before mem_cgroup_update_lru_size due to a sanity check.
*/
static __always_inline void update_lru_sizes(struct lruvec *lruvec,
- enum lru_list lru, unsigned long *nr_zone_taken,
- unsigned long nr_taken)
+ enum lru_list lru, unsigned long *nr_zone_taken)
{
int zid;
continue;
__update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]);
- }
-
#ifdef CONFIG_MEMCG
- mem_cgroup_update_lru_size(lruvec, lru, -nr_taken);
+ mem_cgroup_update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]);
#endif
+ }
+
}
/*
*nr_scanned = scan;
trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, nr_to_scan, scan,
nr_taken, mode, is_file_lru(lru));
- update_lru_sizes(lruvec, lru, nr_zone_taken, nr_taken);
+ update_lru_sizes(lruvec, lru, nr_zone_taken);
return nr_taken;
}
if (!managed_zone(zone))
continue;
- inactive_zone = zone_page_state(zone,
- NR_ZONE_LRU_BASE + (file * LRU_FILE));
- active_zone = zone_page_state(zone,
- NR_ZONE_LRU_BASE + (file * LRU_FILE) + LRU_ACTIVE);
+ inactive_zone = lruvec_zone_lru_size(lruvec, file * LRU_FILE, zid);
+ active_zone = lruvec_zone_lru_size(lruvec, (file * LRU_FILE) + LRU_ACTIVE, zid);
inactive -= min(inactive, inactive_zone);
active -= min(active, active_zone);
config HWBM
bool
-config SOCK_CGROUP_DATA
- bool
- default n
-
config CGROUP_NET_PRIO
bool "Network priority cgroup"
depends on CGROUPS
br_nf_hook_thresh(NF_BR_PRE_ROUTING,
net, sk, skb, skb->dev,
NULL,
- br_nf_pre_routing_finish);
+ br_nf_pre_routing_finish_bridge);
return 0;
}
ether_addr_copy(eth_hdr(skb)->h_dest, dev->dev_addr);
pinfo->nr_frags &&
!PageHighMem(skb_frag_page(frag0))) {
NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
- NAPI_GRO_CB(skb)->frag0_len = skb_frag_size(frag0);
+ NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
+ skb_frag_size(frag0),
+ skb->end - skb->tail);
}
}
* The function will try to retrieve a be32 entity at
* offset poff
*/
-__be16 skb_flow_get_be16(const struct sk_buff *skb, int poff, void *data,
- int hlen)
+static __be16 skb_flow_get_be16(const struct sk_buff *skb, int poff,
+ void *data, int hlen)
{
__be16 *u, _u;
local_irq_save(flags);
nc = this_cpu_ptr(&netdev_alloc_cache);
- data = __alloc_page_frag(nc, fragsz, gfp_mask);
+ data = page_frag_alloc(nc, fragsz, gfp_mask);
local_irq_restore(flags);
return data;
}
{
struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
- return __alloc_page_frag(&nc->page, fragsz, gfp_mask);
+ return page_frag_alloc(&nc->page, fragsz, gfp_mask);
}
void *napi_alloc_frag(unsigned int fragsz)
local_irq_save(flags);
nc = this_cpu_ptr(&netdev_alloc_cache);
- data = __alloc_page_frag(nc, len, gfp_mask);
+ data = page_frag_alloc(nc, len, gfp_mask);
pfmemalloc = nc->pfmemalloc;
local_irq_restore(flags);
if (sk_memalloc_socks())
gfp_mask |= __GFP_MEMALLOC;
- data = __alloc_page_frag(&nc->page, len, gfp_mask);
+ data = page_frag_alloc(&nc->page, len, gfp_mask);
if (unlikely(!data))
return NULL;
"sk_lock-AF_RXRPC" , "sk_lock-AF_ISDN" , "sk_lock-AF_PHONET" ,
"sk_lock-AF_IEEE802154", "sk_lock-AF_CAIF" , "sk_lock-AF_ALG" ,
"sk_lock-AF_NFC" , "sk_lock-AF_VSOCK" , "sk_lock-AF_KCM" ,
- "sk_lock-AF_MAX"
+ "sk_lock-AF_QIPCRTR", "sk_lock-AF_MAX"
};
static const char *const af_family_slock_key_strings[AF_MAX+1] = {
"slock-AF_UNSPEC", "slock-AF_UNIX" , "slock-AF_INET" ,
"slock-AF_RXRPC" , "slock-AF_ISDN" , "slock-AF_PHONET" ,
"slock-AF_IEEE802154", "slock-AF_CAIF" , "slock-AF_ALG" ,
"slock-AF_NFC" , "slock-AF_VSOCK" ,"slock-AF_KCM" ,
- "slock-AF_MAX"
+ "slock-AF_QIPCRTR", "slock-AF_MAX"
};
static const char *const af_family_clock_key_strings[AF_MAX+1] = {
"clock-AF_UNSPEC", "clock-AF_UNIX" , "clock-AF_INET" ,
"clock-AF_RXRPC" , "clock-AF_ISDN" , "clock-AF_PHONET" ,
"clock-AF_IEEE802154", "clock-AF_CAIF" , "clock-AF_ALG" ,
"clock-AF_NFC" , "clock-AF_VSOCK" , "clock-AF_KCM" ,
- "clock-AF_MAX"
+ "clock-AF_QIPCRTR", "clock-AF_MAX"
};
/*
return err;
}
- err = dsa_cpu_port_ethtool_setup(dst->ds[0]);
- if (err)
- return err;
+ if (dst->ds[0]) {
+ err = dsa_cpu_port_ethtool_setup(dst->ds[0]);
+ if (err)
+ return err;
+ }
/* If we use a tagging format that doesn't have an ethertype
* field, make sure that all packets from this point on get
dsa_ds_unapply(dst, ds);
}
- dsa_cpu_port_ethtool_restore(dst->ds[0]);
+ if (dst->ds[0])
+ dsa_cpu_port_ethtool_restore(dst->ds[0]);
pr_info("DSA: tree %d unapplied\n", dst->tree);
dst->applied = false;
void fib_select_path(struct net *net, struct fib_result *res,
struct flowi4 *fl4, int mp_hash)
{
+ bool oif_check;
+
+ oif_check = (fl4->flowi4_oif == 0 ||
+ fl4->flowi4_flags & FLOWI_FLAG_SKIP_NH_OIF);
+
#ifdef CONFIG_IP_ROUTE_MULTIPATH
- if (res->fi->fib_nhs > 1 && fl4->flowi4_oif == 0) {
+ if (res->fi->fib_nhs > 1 && oif_check) {
if (mp_hash < 0)
mp_hash = get_hash_from_flowi4(fl4) >> 1;
#endif
if (!res->prefixlen &&
res->table->tb_num_default > 1 &&
- res->type == RTN_UNICAST && !fl4->flowi4_oif)
+ res->type == RTN_UNICAST && oif_check)
fib_select_default(fl4, res);
if (!fl4->saddr)
rcu_read_lock_bh();
c = __clusterip_config_find(net, clusterip);
if (c) {
- if (unlikely(!atomic_inc_not_zero(&c->refcount)))
+ if (!c->pde || unlikely(!atomic_inc_not_zero(&c->refcount)))
c = NULL;
else if (entry)
atomic_inc(&c->entries);
static struct clusterip_config *
clusterip_config_init(const struct ipt_clusterip_tgt_info *i, __be32 ip,
- struct net_device *dev)
+ struct net_device *dev)
{
+ struct net *net = dev_net(dev);
struct clusterip_config *c;
- struct clusterip_net *cn = net_generic(dev_net(dev), clusterip_net_id);
+ struct clusterip_net *cn = net_generic(net, clusterip_net_id);
c = kzalloc(sizeof(*c), GFP_ATOMIC);
if (!c)
- return NULL;
+ return ERR_PTR(-ENOMEM);
c->dev = dev;
c->clusterip = ip;
atomic_set(&c->refcount, 1);
atomic_set(&c->entries, 1);
+ spin_lock_bh(&cn->lock);
+ if (__clusterip_config_find(net, ip)) {
+ spin_unlock_bh(&cn->lock);
+ kfree(c);
+
+ return ERR_PTR(-EBUSY);
+ }
+
+ list_add_rcu(&c->list, &cn->configs);
+ spin_unlock_bh(&cn->lock);
+
#ifdef CONFIG_PROC_FS
{
char buffer[16];
cn->procdir,
&clusterip_proc_fops, c);
if (!c->pde) {
+ spin_lock_bh(&cn->lock);
+ list_del_rcu(&c->list);
+ spin_unlock_bh(&cn->lock);
kfree(c);
- return NULL;
+
+ return ERR_PTR(-ENOMEM);
}
}
#endif
- spin_lock_bh(&cn->lock);
- list_add_rcu(&c->list, &cn->configs);
- spin_unlock_bh(&cn->lock);
-
return c;
}
config = clusterip_config_init(cipinfo,
e->ip.dst.s_addr, dev);
- if (!config) {
+ if (IS_ERR(config)) {
dev_put(dev);
- return -ENOMEM;
+ return PTR_ERR(config);
}
dev_mc_add(config->dev, config->clustermac);
}
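The hunks above convert clusterip_config_init() from returning NULL to
returning ERR_PTR() codes, so checkentry can propagate the precise error
(-EBUSY for a racing duplicate, -ENOMEM for allocation failures) instead
of a blanket -ENOMEM. A minimal sketch of that convention, assuming a
hypothetical foo_create() constructor:

	#include <linux/err.h>
	#include <linux/slab.h>

	/* Encode an errno in the pointer itself instead of returning
	 * NULL, so callers can tell -EBUSY apart from -ENOMEM.
	 */
	static struct foo *foo_create(gfp_t gfp)
	{
		struct foo *f = kzalloc(sizeof(*f), gfp);

		if (!f)
			return ERR_PTR(-ENOMEM);
		return f;
	}

	/* Caller side, mirroring the checkentry hunk above: */
	f = foo_create(GFP_ATOMIC);
	if (IS_ERR(f))
		return PTR_ERR(f);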
.data = &init_net.ipv4.sysctl_tcp_notsent_lowat,
.maxlen = sizeof(unsigned int),
.mode = 0644,
- .proc_handler = proc_dointvec,
+ .proc_handler = proc_douintvec,
},
{
.procname = "tcp_tw_reuse",
return ret;
}
-EXPORT_SYMBOL_GPL(tcp_peer_is_proven);
void tcp_fetch_timewait_stamp(struct sock *sk, struct dst_entry *dst)
{
ops = rcu_dereference(inet6_offloads[proto]);
if (!ops || !ops->callbacks.gro_receive) {
__pskb_pull(skb, skb_gro_offset(skb));
+ skb_gro_frag0_invalidate(skb);
proto = ipv6_gso_pull_exthdrs(skb, proto);
skb_gro_pull(skb, -skb_transport_offset(skb));
skb_reset_transport_header(skb);
struct vti6_net *ip6n = net_generic(net, vti6_net_id);
int err;
+ dev->rtnl_link_ops = &vti6_link_ops;
err = register_netdevice(dev);
if (err < 0)
goto out;
strcpy(t->parms.name, dev->name);
- dev->rtnl_link_ops = &vti6_link_ops;
dev_hold(dev);
vti6_tnl_link(ip6n, t);
struct fib6_node *fn;
/* Get the "current" route for this destination and
- * check if the redirect has come from approriate router.
+ * check if the redirect has come from appropriate router.
*
* RFC 4861 specifies that redirects should only be
* accepted if they come from the nexthop to the target.
old MTU is the lowest MTU in the path, update the route PMTU
to reflect the increase. In this case if the other nodes' MTU
   also have the lowest MTU, a TOO BIG MESSAGE will lead to
- PMTU discouvery.
+ PMTU discovery.
*/
if (rt->dst.dev == arg->dev &&
dst_metric_raw(&rt->dst, RTAX_MTU) &&
{
struct sock *sk = sock->sk;
struct iucv_sock *iucv = iucv_sk(sk);
- size_t headroom, linear;
+ size_t headroom = 0;
+ size_t linear;
struct sk_buff *skb;
struct iucv_message txmsg = {0};
struct cmsghdr *cmsg;
* this is fine for SOCK_SEQPACKET (unless we want to support
* segmented records using the MSG_EOR flag), but
* for SOCK_STREAM we might want to improve it in future */
- headroom = (iucv->transport == AF_IUCV_TRANS_HIPER)
- ? sizeof(struct af_iucv_trans_hdr) + ETH_HLEN : 0;
- if (headroom + len < PAGE_SIZE) {
+ if (iucv->transport == AF_IUCV_TRANS_HIPER) {
+ headroom = sizeof(struct af_iucv_trans_hdr) + ETH_HLEN;
linear = len;
} else {
- /* In nonlinear "classic" iucv skb,
- * reserve space for iucv_array
- */
- if (iucv->transport != AF_IUCV_TRANS_HIPER)
- headroom += sizeof(struct iucv_array) *
- (MAX_SKB_FRAGS + 1);
- linear = PAGE_SIZE - headroom;
+ if (len < PAGE_SIZE) {
+ linear = len;
+ } else {
+ /* In nonlinear "classic" iucv skb,
+ * reserve space for iucv_array
+ */
+ headroom = sizeof(struct iucv_array) *
+ (MAX_SKB_FRAGS + 1);
+ linear = PAGE_SIZE - headroom;
+ }
}
skb = sock_alloc_send_pskb(sk, headroom + linear, len - linear,
noblock, &err, 0);
* is called on error from nf_tables_newrule().
*/
expr = nft_expr_first(rule);
- while (expr->ops && expr != nft_expr_last(rule)) {
+ while (expr != nft_expr_last(rule) && expr->ops) {
nf_tables_expr_destroy(ctx, expr);
expr = nft_expr_next(expr);
}
return 0;
}
+static int nft_payload_csum_inet(struct sk_buff *skb, const u32 *src,
+ __wsum fsum, __wsum tsum, int csum_offset)
+{
+ __sum16 sum;
+
+ if (skb_copy_bits(skb, csum_offset, &sum, sizeof(sum)) < 0)
+ return -1;
+
+ nft_csum_replace(&sum, fsum, tsum);
+ if (!skb_make_writable(skb, csum_offset + sizeof(sum)) ||
+ skb_store_bits(skb, csum_offset, &sum, sizeof(sum)) < 0)
+ return -1;
+
+ return 0;
+}
+
static void nft_payload_set_eval(const struct nft_expr *expr,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
const u32 *src = ®s->data[priv->sreg];
int offset, csum_offset;
__wsum fsum, tsum;
- __sum16 sum;
switch (priv->base) {
case NFT_PAYLOAD_LL_HEADER:
csum_offset = offset + priv->csum_offset;
offset += priv->offset;
- if (priv->csum_type == NFT_PAYLOAD_CSUM_INET &&
+ if ((priv->csum_type == NFT_PAYLOAD_CSUM_INET || priv->csum_flags) &&
(priv->base != NFT_PAYLOAD_TRANSPORT_HEADER ||
skb->ip_summed != CHECKSUM_PARTIAL)) {
- if (skb_copy_bits(skb, csum_offset, &sum, sizeof(sum)) < 0)
- goto err;
-
fsum = skb_checksum(skb, offset, priv->len, 0);
tsum = csum_partial(src, priv->len, 0);
- nft_csum_replace(&sum, fsum, tsum);
- if (!skb_make_writable(skb, csum_offset + sizeof(sum)) ||
- skb_store_bits(skb, csum_offset, &sum, sizeof(sum)) < 0)
+ if (priv->csum_type == NFT_PAYLOAD_CSUM_INET &&
+ nft_payload_csum_inet(skb, src, fsum, tsum, csum_offset))
goto err;
if (priv->csum_flags &&
if (priv->queues_total > 1) {
if (priv->flags & NFT_QUEUE_FLAG_CPU_FANOUT) {
- int cpu = smp_processor_id();
+ int cpu = raw_smp_processor_id();
queue = priv->queuenum + cpu % priv->queues_total;
} else {
static int nft_quota_do_dump(struct sk_buff *skb, struct nft_quota *priv,
bool reset)
{
+ u64 consumed, consumed_cap;
u32 flags = priv->flags;
- u64 consumed;
-
- if (reset) {
- consumed = atomic64_xchg(&priv->consumed, 0);
- if (test_and_clear_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
- flags |= NFT_QUOTA_F_DEPLETED;
- } else {
- consumed = atomic64_read(&priv->consumed);
- }
	/* Since we unconditionally increment consumed quota for each packet
* that we see, don't go over the quota boundary in what we send to
* userspace.
*/
- if (consumed > priv->quota)
- consumed = priv->quota;
+ consumed = atomic64_read(&priv->consumed);
+ if (consumed >= priv->quota) {
+ consumed_cap = priv->quota;
+ flags |= NFT_QUOTA_F_DEPLETED;
+ } else {
+ consumed_cap = consumed;
+ }
if (nla_put_be64(skb, NFTA_QUOTA_BYTES, cpu_to_be64(priv->quota),
NFTA_QUOTA_PAD) ||
- nla_put_be64(skb, NFTA_QUOTA_CONSUMED, cpu_to_be64(consumed),
+ nla_put_be64(skb, NFTA_QUOTA_CONSUMED, cpu_to_be64(consumed_cap),
NFTA_QUOTA_PAD) ||
nla_put_be32(skb, NFTA_QUOTA_FLAGS, htonl(flags)))
goto nla_put_failure;
+
+ if (reset) {
+ atomic64_sub(consumed, &priv->consumed);
+ clear_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags);
+ }
return 0;
nla_put_failure:
printk(KERN_INFO "NetLabel: Initializing\n");
printk(KERN_INFO "NetLabel: domain hash size = %u\n",
(1 << NETLBL_DOMHSH_BITSIZE));
- printk(KERN_INFO "NetLabel: protocols ="
- " UNLABELED"
- " CIPSOv4"
- "\n");
+ printk(KERN_INFO "NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO\n");
ret_val = netlbl_domhsh_init(NETLBL_DOMHSH_BITSIZE);
if (ret_val != 0)
const int pkt_len = 20;
struct qrtr_hdr *hdr;
struct sk_buff *skb;
- u32 *buf;
+ __le32 *buf;
skb = alloc_skb(QRTR_HDR_SIZE + pkt_len, GFP_KERNEL);
if (!skb)
hdr->dst_node_id = cpu_to_le32(dst_node);
hdr->dst_port_id = cpu_to_le32(QRTR_PORT_CTRL);
- buf = (u32 *)skb_put(skb, pkt_len);
+ buf = (__le32 *)skb_put(skb, pkt_len);
memset(buf, 0, pkt_len);
buf[0] = cpu_to_le32(QRTR_TYPE_RESUME_TX);
buf[1] = cpu_to_le32(src_node);
(new_transport->state == SCTP_PF)))
new_transport = asoc->peer.active_path;
if (new_transport->state == SCTP_UNCONFIRMED) {
- WARN_ONCE(1, "Atempt to send packet on unconfirmed path.");
+ WARN_ONCE(1, "Attempt to send packet on unconfirmed path.");
sctp_chunk_fail(chunk, 0);
sctp_chunk_free(chunk);
continue;
return used;
}
-int sockfs_setattr(struct dentry *dentry, struct iattr *iattr)
+static int sockfs_setattr(struct dentry *dentry, struct iattr *iattr)
{
int err = simple_setattr(dentry, iattr);
list_for_each_entry_rcu(rdev, &cfg80211_rdev_list, list) {
bool schedule_destroy_work = false;
- bool schedule_scan_stop = false;
struct cfg80211_sched_scan_request *sched_scan_req =
rcu_dereference(rdev->sched_scan_req);
if (sched_scan_req && notify->portid &&
- sched_scan_req->owner_nlportid == notify->portid)
- schedule_scan_stop = true;
+ sched_scan_req->owner_nlportid == notify->portid) {
+ sched_scan_req->owner_nlportid = 0;
+
+ if (rdev->ops->sched_scan_stop &&
+ rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_SCHED_SCAN)
+ schedule_work(&rdev->sched_scan_stop_wk);
+ }
list_for_each_entry_rcu(wdev, &rdev->wiphy.wdev_list, list) {
cfg80211_mlme_unregister_socket(wdev, notify->portid);
spin_unlock(&rdev->destroy_list_lock);
schedule_work(&rdev->destroy_work);
}
- } else if (schedule_scan_stop) {
- sched_scan_req->owner_nlportid = 0;
-
- if (rdev->ops->sched_scan_stop &&
- rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_SCHED_SCAN)
- schedule_work(&rdev->sched_scan_stop_wk);
}
}
#include <unistd.h>
#include <string.h>
#include <errno.h>
-#include <net/ethernet.h>
+#include <linux/if_ether.h>
#include <net/if.h>
#include <linux/if_packet.h>
#include <arpa/inet.h>
#include <string.h>
#include <fcntl.h>
#include <poll.h>
-#include <sys/ioctl.h>
#include <linux/perf_event.h>
#include <linux/bpf.h>
#include <errno.h>
{
unsigned int size = 0;
struct mdev_state *mdev_state;
- int bar_index;
+ u32 bar_index;
if (!mdev)
return -EINVAL;
if (!mdev_state)
return -EINVAL;
- mutex_lock(&mdev_state->ops_lock);
bar_index = region_info->index;
+ if (bar_index >= VFIO_PCI_NUM_REGIONS)
+ return -EINVAL;
+
+ mutex_lock(&mdev_state->ops_lock);
switch (bar_index) {
case VFIO_PCI_CONFIG_REGION_INDEX:
memcpy(&mdev_state->dev_info, &info, sizeof(info));
- return copy_to_user((void __user *)arg, &info, minsz);
+ if (copy_to_user((void __user *)arg, &info, minsz))
+ return -EFAULT;
+
+ return 0;
}
case VFIO_DEVICE_GET_REGION_INFO:
{
if (ret)
return ret;
- return copy_to_user((void __user *)arg, &info, minsz);
+ if (copy_to_user((void __user *)arg, &info, minsz))
+ return -EFAULT;
+
+ return 0;
}
case VFIO_DEVICE_GET_IRQ_INFO:
if (ret)
return ret;
- if (info.count == -1)
- return -EINVAL;
+ if (copy_to_user((void __user *)arg, &info, minsz))
+ return -EFAULT;
- return copy_to_user((void __user *)arg, &info, minsz);
+ return 0;
}
case VFIO_DEVICE_SET_IRQS:
{
"""Find Kconfig symbols that are referenced but not defined."""
-# (c) 2014-2016 Valentin Rothberg <valentinrothberg@gmail.com>
+# (c) 2014-2017 Valentin Rothberg <valentinrothberg@gmail.com>
# (c) 2014 Stefan Hengelein <stefan.hengelein@fau.de>
#
# Licensed under the terms of the GNU GPL License version 2
"""Return a list of max. ten Kconfig symbols that are string-similar to
@symbol."""
if defined:
- return sorted(difflib.get_close_matches(symbol, set(defined), 10))
+ return difflib.get_close_matches(symbol, set(defined), 10)
pool = Pool(cpu_count(), init_worker)
kfiles = []
for res in pool.map(parse_kconfig_files, arglist):
defined.extend(res[0])
- return sorted(difflib.get_close_matches(symbol, set(defined), 10))
+ return difflib.get_close_matches(symbol, set(defined), 10)
def get_files():
nau8825_xtalk_backup(nau8825);
/* Config IIS as master to output signal by codec */
regmap_update_bits(nau8825->regmap, NAU8825_REG_I2S_PCM_CTRL2,
- NAU8825_I2S_MS_MASK | NAU8825_I2S_DRV_MASK |
+ NAU8825_I2S_MS_MASK | NAU8825_I2S_LRC_DIV_MASK |
NAU8825_I2S_BLK_DIV_MASK, NAU8825_I2S_MS_MASTER |
- (0x2 << NAU8825_I2S_DRV_SFT) | 0x1);
+ (0x2 << NAU8825_I2S_LRC_DIV_SFT) | 0x1);
/* Ramp up headphone volume to 0dB to get better performance and
* avoid pop noise in headphone.
*/
NAU8825_IRQ_RMS_EN, NAU8825_IRQ_RMS_EN);
/* Recover default value for IIS */
regmap_update_bits(nau8825->regmap, NAU8825_REG_I2S_PCM_CTRL2,
- NAU8825_I2S_MS_MASK | NAU8825_I2S_DRV_MASK |
+ NAU8825_I2S_MS_MASK | NAU8825_I2S_LRC_DIV_MASK |
NAU8825_I2S_BLK_DIV_MASK, NAU8825_I2S_MS_SLAVE);
/* Restore value of specific register for cross talk */
nau8825_xtalk_restore(nau8825);
NAU8825_FLL_INTEGER_MASK, fll_param->fll_int);
/* FLL pre-scaler */
regmap_update_bits(nau8825->regmap, NAU8825_REG_FLL4,
- NAU8825_FLL_REF_DIV_MASK, fll_param->clk_ref_div);
+ NAU8825_FLL_REF_DIV_MASK,
+ fll_param->clk_ref_div << NAU8825_FLL_REF_DIV_SFT);
/* select divided VCO input */
regmap_update_bits(nau8825->regmap, NAU8825_REG_FLL5,
NAU8825_FLL_CLK_SW_MASK, NAU8825_FLL_CLK_SW_REF);
#define NAU8825_FLL_CLK_SRC_FS (0x3 << NAU8825_FLL_CLK_SRC_SFT)
/* FLL4 (0x07) */
-#define NAU8825_FLL_REF_DIV_MASK (0x3 << 10)
+#define NAU8825_FLL_REF_DIV_SFT 10
+#define NAU8825_FLL_REF_DIV_MASK (0x3 << NAU8825_FLL_REF_DIV_SFT)
/* FLL5 (0x08) */
#define NAU8825_FLL_PDB_DAC_EN (0x1 << 15)
/* I2S_PCM_CTRL2 (0x1d) */
#define NAU8825_I2S_TRISTATE (1 << 15) /* 0 - normal mode, 1 - Hi-Z output */
-#define NAU8825_I2S_DRV_SFT 12
-#define NAU8825_I2S_DRV_MASK (0x3 << NAU8825_I2S_DRV_SFT)
+#define NAU8825_I2S_LRC_DIV_SFT 12
+#define NAU8825_I2S_LRC_DIV_MASK (0x3 << NAU8825_I2S_LRC_DIV_SFT)
#define NAU8825_I2S_MS_SFT 3
#define NAU8825_I2S_MS_MASK (1 << NAU8825_I2S_MS_SFT)
#define NAU8825_I2S_MS_MASTER (1 << NAU8825_I2S_MS_SFT)
}
}
+ regmap_update_bits(rt5645->regmap, RT5645_ADDA_CLK1,
+ RT5645_I2S_PD1_MASK, RT5645_I2S_PD1_2);
+
if (rt5645->pdata.jd_invert) {
regmap_update_bits(rt5645->regmap, RT5645_IRQ_CTRL2,
RT5645_JD_1_1_MASK, RT5645_JD_1_1_INV);
{ 108, 0x00 }, { 109, 0x00 },
};
+static bool aic3x_volatile_reg(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case AIC3X_RESET:
+ return true;
+ default:
+ return false;
+ }
+}
+
static const struct regmap_config aic3x_regmap = {
.reg_bits = 8,
.val_bits = 8,
.max_register = DAC_ICC_ADJ,
.reg_defaults = aic3x_reg,
.num_reg_defaults = ARRAY_SIZE(aic3x_reg),
+
+ .volatile_reg = aic3x_volatile_reg,
+
.cache_type = REGCACHE_RBTREE,
};
const struct wmfw_region *region;
const struct wm_adsp_region *mem;
const char *region_name;
- char *file, *text;
+ char *file, *text = NULL;
struct wm_adsp_buf *buf;
unsigned int reg;
int regions = 0;
regions, le32_to_cpu(region->len), offset,
region_name);
+ if ((pos + le32_to_cpu(region->len) + sizeof(*region)) >
+ firmware->size) {
+ adsp_err(dsp,
+ "%s.%d: %s region len %d bytes exceeds file length %zu\n",
+ file, regions, region_name,
+ le32_to_cpu(region->len), firmware->size);
+ ret = -EINVAL;
+ goto out_fw;
+ }
+
if (text) {
memcpy(text, region->data, le32_to_cpu(region->len));
adsp_info(dsp, "%s: %s\n", file, text);
kfree(text);
+ text = NULL;
}
if (reg) {
regmap_async_complete(regmap);
wm_adsp_buf_free(&buf_list);
release_firmware(firmware);
+ kfree(text);
out:
kfree(file);
}
if (reg) {
+ if ((pos + le32_to_cpu(blk->len) + sizeof(*blk)) >
+ firmware->size) {
+ adsp_err(dsp,
+ "%s.%d: %s region len %d bytes exceeds file length %zu\n",
+ file, blocks, region_name,
+ le32_to_cpu(blk->len),
+ firmware->size);
+ ret = -EINVAL;
+ goto out_fw;
+ }
+
buf = wm_adsp_buf_alloc(blk->data,
le32_to_cpu(blk->len),
&buf_list);
}
if (!pdata) {
- ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
- if (ret == -EPROBE_DEFER) {
- dev_err(&pdev->dev,
- "failed to register PCM, deferring probe\n");
- return ret;
- } else if (ret) {
- dev_err(&pdev->dev,
- "Could not register DMA PCM: %d\n"
- "falling back to PIO mode\n", ret);
+ if (irq >= 0) {
ret = dw_pcm_register(pdev);
- if (ret) {
- dev_err(&pdev->dev,
- "Could not register PIO PCM: %d\n",
+ dev->use_pio = true;
+ } else {
+ ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL,
+ 0);
+ dev->use_pio = false;
+ }
+
+ if (ret) {
+ dev_err(&pdev->dev, "could not register pcm: %d\n",
ret);
- goto err_clk_disable;
- }
+ goto err_clk_disable;
}
}
* @dbg_stats: Debugging statistics
*
* @soc: SoC specific data
+ *
+ * @fifo_watermark: the FIFO watermark setting. Notifies DMA when
+ * there are @fifo_watermark or fewer words in TX fifo or
+ * @fifo_watermark or more empty words in RX fifo.
+ * @dma_maxburst: max number of words to transfer in one go. So far,
+ * this is always the same as fifo_watermark.
*/
struct fsl_ssi_private {
struct regmap *regs;
const struct fsl_ssi_soc_data *soc;
struct device *dev;
+
+ u32 fifo_watermark;
+ u32 dma_maxburst;
};
/*
regmap_write(regs, CCSR_SSI_SRCR, srcr);
regmap_write(regs, CCSR_SSI_SCR, scr);
- /*
- * Set the watermark for transmit FIFI 0 and receive FIFO 0. We don't
- * use FIFO 1. We program the transmit water to signal a DMA transfer
- * if there are only two (or fewer) elements left in the FIFO. Two
- * elements equals one frame (left channel, right channel). This value,
- * however, depends on the depth of the transmit buffer.
- *
- * We set the watermark on the same level as the DMA burstsize. For
- * fiq it is probably better to use the biggest possible watermark
- * size.
- */
- if (ssi_private->use_dma)
- wm = ssi_private->fifo_depth - 2;
- else
- wm = ssi_private->fifo_depth;
+ wm = ssi_private->fifo_watermark;
regmap_write(regs, CCSR_SSI_SFCSR,
CCSR_SSI_SFCSR_TFWM0(wm) | CCSR_SSI_SFCSR_RFWM0(wm) |
dev_dbg(&pdev->dev, "could not get baud clock: %ld\n",
PTR_ERR(ssi_private->baudclk));
- /*
- * We have burstsize be "fifo_depth - 2" to match the SSI
- * watermark setting in fsl_ssi_startup().
- */
- ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
- ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
+ ssi_private->dma_params_tx.maxburst = ssi_private->dma_maxburst;
+ ssi_private->dma_params_rx.maxburst = ssi_private->dma_maxburst;
ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0;
ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0;
/* Older 8610 DTs didn't have the fifo-depth property */
ssi_private->fifo_depth = 8;
+ /*
+ * Set the watermark for transmit FIFO 0 and receive FIFO 0. We don't
+	 * use FIFO 1 but set the watermark appropriately nonetheless.
+	 * We program the transmit watermark to signal a DMA transfer
+ * if there are N elements left in the FIFO. For chips with 15-deep
+ * FIFOs, set watermark to 8. This allows the SSI to operate at a
+ * high data rate without channel slipping. Behavior is unchanged
+ * for the older chips with a fifo depth of only 8. A value of 4
+ * might be appropriate for the older chips, but is left at
+	 * fifo_depth-2 until somebody has a chance to test.
+ *
+ * We set the watermark on the same level as the DMA burstsize. For
+ * fiq it is probably better to use the biggest possible watermark
+ * size.
+ */
+ switch (ssi_private->fifo_depth) {
+ case 15:
+ /*
+		 * Two samples are not enough when running at high data
+		 * rates (like 48 kHz @ 16 bits/channel, 16 channels).
+		 * Eight seems to split things evenly and leaves enough
+		 * time for the DMA to fill the FIFO before it over- or
+		 * under-runs.
+ */
+ ssi_private->fifo_watermark = 8;
+ ssi_private->dma_maxburst = 8;
+ break;
+ case 8:
+ default:
+ /*
+		 * Maintain the old behavior for older chips; it is kept
+		 * unchanged for lack of an older board to test with.
+		 * A smaller value would likely leave more headroom in
+		 * the FIFO and may be worth trying once it can be tested.
+ */
+ ssi_private->fifo_watermark = ssi_private->fifo_depth - 2;
+ ssi_private->dma_maxburst = ssi_private->fifo_depth - 2;
+ break;
+ }
+
dev_set_drvdata(&pdev->dev, ssi_private);
if (ssi_private->soc->imx) {
* for Jack detection and button press
*/
ret = snd_soc_dai_set_sysclk(codec_dai, RT5640_SCLK_S_RCCLK,
- 0,
+ 48000 * 512,
SND_SOC_CLOCK_IN);
if (!ret) {
if ((byt_rt5640_quirk & BYT_RT5640_MCLK_EN) && priv->mclk)
if ((byt_rt5640_quirk & BYT_RT5640_MCLK_EN) && (is_valleyview())) {
priv->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3");
if (IS_ERR(priv->mclk)) {
+ ret_val = PTR_ERR(priv->mclk);
+
dev_err(&pdev->dev,
- "Failed to get MCLK from pmc_plt_clk_3: %ld\n",
- PTR_ERR(priv->mclk));
- return PTR_ERR(priv->mclk);
+ "Failed to get MCLK from pmc_plt_clk_3: %d\n",
+ ret_val);
+
+ /*
+			 * Fall back to bit clock usage for -ENOENT (clock not
+			 * available, likely due to missing dependencies); bail
+ * for all other errors, including -EPROBE_DEFER
+ */
+ if (ret_val != -ENOENT)
+ return ret_val;
+ byt_rt5640_quirk &= ~BYT_RT5640_MCLK_EN;
}
}
snd_pcm_set_sync(substream);
mconfig = skl_tplg_fe_get_cpr_module(dai, substream->stream);
+ if (!mconfig)
+ return -EINVAL;
+
skl_tplg_d0i3_get(skl, mconfig->d0i3_caps);
return 0;
void skl_sst_dsp_cleanup(struct device *dev, struct skl_sst *ctx)
{
+
+ if (ctx->dsp->fw)
+ release_firmware(ctx->dsp->fw);
skl_clear_module_table(ctx->dsp);
skl_freeup_uuid_list(ctx);
skl_ipc_free(&ctx->ipc);
return -ENOMEM;
ret = snd_ctl_add(card, kctrl);
- if (ret < 0) {
- snd_ctl_free_one(kctrl);
+ if (ret < 0)
return ret;
- }
cfg->update = update;
cfg->card = card;
component->init = aux_dev->init;
component->auxiliary = 1;
+ list_add(&component->card_aux_list, &card->aux_comp_list);
return 0;
static int soc_probe_aux_devices(struct snd_soc_card *card)
{
- struct snd_soc_component *comp;
+ struct snd_soc_component *comp, *tmp;
int order;
int ret;
for (order = SND_SOC_COMP_ORDER_FIRST; order <= SND_SOC_COMP_ORDER_LAST;
order++) {
- list_for_each_entry(comp, &card->component_dev_list, card_list) {
- if (!comp->auxiliary)
- continue;
-
+ list_for_each_entry_safe(comp, tmp, &card->aux_comp_list,
+ card_aux_list) {
if (comp->driver->probe_order == order) {
ret = soc_probe_component(card, comp);
if (ret < 0) {
comp->name, ret);
return ret;
}
+ list_del(&comp->card_aux_list);
}
}
}
break;
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_SUSPEND:
- case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
fe->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
break;
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PAUSED;
+ break;
}
out:
== SND_SOC_TPLG_TYPE_MIXER)
kfree(kcontrol->tlv.p);
- snd_ctl_remove(card, kcontrol);
-
/* Private value is used as struct soc_mixer_control
* for volume mixers or soc_bytes_ext for bytes
* controls.
*/
kfree((void *)kcontrol->private_value);
+ snd_ctl_remove(card, kcontrol);
}
kfree(w->kcontrol_news);
}
case USB_ID(0x045E, 0x076F): /* MS Lifecam HD-6000 */
case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+ case USB_ID(0x047F, 0x02F7): /* Plantronics BT-600 */
case USB_ID(0x047F, 0x0415): /* Plantronics BT-300 */
case USB_ID(0x047F, 0xAA05): /* Plantronics DA45 */
case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
else
err = get_arg(p, opt, flags, (const char **)opt->value);
+ if (opt->set)
+ *(bool *)opt->set = true;
+
/* PARSE_OPT_NOEMPTY: Allow NULL but disallow empty string. */
if (opt->flags & PARSE_OPT_NOEMPTY) {
const char *val = *(const char **)opt->value;
{ .type = OPTION_STRING, .short_name = (s), .long_name = (l), \
.value = check_vtype(v, const char **), (a), .help = (h), \
.flags = PARSE_OPT_OPTARG, .defval = (intptr_t)(d) }
+#define OPT_STRING_OPTARG_SET(s, l, v, os, a, h, d) \
+ { .type = OPTION_STRING, .short_name = (s), .long_name = (l), \
+ .value = check_vtype(v, const char **), (a), .help = (h), \
+ .flags = PARSE_OPT_OPTARG, .defval = (intptr_t)(d), \
+ .set = check_vtype(os, bool *)}
#define OPT_STRING_NOEMPTY(s, l, v, a, h) { .type = OPTION_STRING, .short_name = (s), .long_name = (l), .value = check_vtype(v, const char **), (a), .help = (h), .flags = PARSE_OPT_NOEMPTY}
#define OPT_DATE(s, l, v, h) \
{ .type = OPTION_CALLBACK, .short_name = (s), .long_name = (l), .value = (v), .argh = "time", .help = (h), .callback = parse_opt_approxidate_cb }
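The new OPT_STRING_OPTARG_SET() behaves like OPT_STRING_OPTARG but also
latches a caller-supplied bool once the option is parsed (the opt->set
handling added above). A sketch of a caller; the names here are
illustrative, not the actual builtin-record code:

	static const char *output_str;
	static bool output_set;

	static struct option options[] = {
		OPT_STRING_OPTARG_SET('S', "switch-output", &output_str,
				      &output_set, "signal",
				      "switch output on SIGUSR2", "signal"),
		OPT_END()
	};

After parse_options(), output_set tells the caller whether the user passed
the option at all, even when its optional argument was omitted and
output_str only holds the default.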
trace_seq_printf(s, "%lld ", val);
if (pevent_get_field_val(s, event, "prev_prio", record, &val, 0) == 0)
- trace_seq_printf(s, "[%lld] ", val);
+ trace_seq_printf(s, "[%d] ", (int) val);
if (pevent_get_field_val(s, event, "prev_state", record, &val, 0) == 0)
write_state(s, val);
trace_seq_printf(s, "%lld", val);
if (pevent_get_field_val(s, event, "next_prio", record, &val, 0) == 0)
- trace_seq_printf(s, " [%lld]", val);
+ trace_seq_printf(s, " [%d]", (int) val);
return 0;
}
particular perf.data snapshot should be kept or not.
Implies --timestamp-filename, --no-buildid and --no-buildid-cache.
+The reason for the latter two is to reduce the data file switching
+overhead. You can still switch them on with:
+
+ --switch-output --no-no-buildid --no-no-buildid-cache
--dry-run::
Parse options then exit. --dry-run can be used to detect errors in cmdline
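
For instance (illustrative invocations; the event and workload are
placeholders):

    perf record --switch-output --no-no-buildid --no-no-buildid-cache -a

    perf record --dry-run -e cycles -- sleep 1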
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
$(INSTALL) tests/attr/* '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'
-install-bin: install-tools install-tests
+install-bin: install-tools install-tests install-traceevent-plugins
-install: install-bin try-install-man install-traceevent-plugins
+install: install-bin try-install-man
install-python_ext:
$(PYTHON_WORD) util/setup.py --quiet install --root='/$(DESTDIR_SQ)'
{ "__GFP_RECLAIM", "R" },
{ "__GFP_DIRECT_RECLAIM", "DR" },
{ "__GFP_KSWAPD_RECLAIM", "KR" },
- { "__GFP_OTHER_NODE", "ON" },
};
static size_t max_gfp_len;
* perf_evlist__prepare_workload, etc instead of fork+exec'in 'perf record',
* using pipes, etc.
*/
-struct option __record_options[] = {
+static struct option __record_options[] = {
OPT_CALLBACK('e', "event", &record.evlist, "event",
"event selector. use 'perf list' to list available events",
parse_events_option),
* overhead. Still generate buildid if they are required
* explicitly using
*
- * perf record --signal-trigger --no-no-buildid \
+ * perf record --switch-output --no-no-buildid \
* --no-no-buildid-cache
*
	 * The following code is equivalent to:
u64 skipped_samples;
const char *time_str;
struct perf_time_interval ptime;
+ struct perf_time_interval hist_time;
};
/* per thread run time data */
timehist_print_sample(sched, sample, &al, thread, t);
out:
+ if (sched->hist_time.start == 0 && t >= ptime->start)
+ sched->hist_time.start = t;
+ if (ptime->end == 0 || t <= ptime->end)
+ sched->hist_time.end = t;
+
if (tr) {
/* time of this sched_switch event becomes last time task seen */
tr->last_time = sample->time;
struct thread *t;
struct thread_runtime *r;
int i;
+ u64 hist_time = sched->hist_time.end - sched->hist_time.start;
memset(&totals, 0, sizeof(totals));
totals.sched_count += r->run_stats.n;
printf(" CPU %2d idle for ", i);
print_sched_time(r->total_run_time, 6);
- printf(" msec\n");
+ printf(" msec (%6.2f%%)\n", 100.0 * r->total_run_time / hist_time);
} else
printf(" CPU %2d idle entire time window\n", i);
}
printf("\n"
" Total number of unique tasks: %" PRIu64 "\n"
- "Total number of context switches: %" PRIu64 "\n"
- " Total run time (msec): ",
+ "Total number of context switches: %" PRIu64 "\n",
totals.task_count, totals.sched_count);
+ printf(" Total run time (msec): ");
print_sched_time(totals.total_run_time, 2);
printf("\n");
+
+ printf(" Total scheduling time (msec): ");
+ print_sched_time(hist_time, 2);
+ printf(" (x %d)\n", sched->max_cpu);
}
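As a worked example of the new percentage column: with an analyzed window
of hist_time = 2000 msec, a CPU that was idle for 500 msec is reported as
100.0 * 500 / 2000 = 25.00%. The "(x %d)" suffix records how many CPUs
the total scheduling time spans.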
typedef int (*sched_handler)(struct perf_tool *tool,
/* A file path -- this is an offline module */
if (module && strchr(module, '/'))
- return machine__findnew_module_map(host_machine, 0, module);
+ return dso__new_map(module);
if (!module)
module = "kernel";
if (strncmp(pos->dso->short_name + 1, module,
pos->dso->short_name_len - 2) == 0 &&
module[pos->dso->short_name_len - 2] == '\0') {
+ map__get(pos);
return pos;
}
}
return kernel_get_module_map(target);
}
-static void put_target_map(struct map *map, bool user)
-{
- if (map && user) {
- /* Only the user map needs to be released */
- map__put(map);
- }
-}
-
-
static int convert_exec_to_group(const char *exec, char **result)
{
char *ptr1, *ptr2, *exec_copy;
return true;
}
-/*
- * NOTE:
- * '.gnu.linkonce.this_module' section of kernel module elf directly
- * maps to 'struct module' from linux/module.h. This section contains
- * actual module name which will be used by kernel after loading it.
- * But, we cannot use 'struct module' here since linux/module.h is not
- * exposed to user-space. Offset of 'name' has remained same from long
- * time, so hardcoding it here.
- */
-#ifdef __LP64__
-#define MOD_NAME_OFFSET 24
-#else
-#define MOD_NAME_OFFSET 12
-#endif
-
/*
 * @module can be a module name or a module file path. In case of a path,
 * inspect the ELF and find out the actual module name.
Elf_Data *data;
Elf_Scn *sec;
char *mod_name = NULL;
+ int name_offset;
fd = open(module, O_RDONLY);
if (fd < 0)
if (!data || !data->d_buf)
goto ret_err;
- mod_name = strdup((char *)data->d_buf + MOD_NAME_OFFSET);
+ /*
+ * NOTE:
+ * '.gnu.linkonce.this_module' section of kernel module elf directly
+ * maps to 'struct module' from linux/module.h. This section contains
+ * actual module name which will be used by kernel after loading it.
+ * But, we cannot use 'struct module' here since linux/module.h is not
+ * exposed to user-space. Offset of 'name' has remained same from long
+ * time, so hardcoding it here.
+ */
+ if (ehdr.e_ident[EI_CLASS] == ELFCLASS32)
+ name_offset = 12;
+ else /* expect ELFCLASS64 by default */
+ name_offset = 24;
+
+ mod_name = strdup((char *)data->d_buf + name_offset);
ret_err:
elf_end(elf);
}
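The 12- and 24-byte offsets above come from the historical start of
struct module, which is not exposed to user space, hence the hardcoding.
A sketch of the assumed layout:

	/* Assumed start of 'struct module' (linux/module.h):
	 *
	 *	enum module_state state;	 4 bytes
	 *	struct list_head list;		 two pointers
	 *	char name[MODULE_NAME_LEN];
	 *
	 * ELFCLASS64: 4 + 4 (padding to 8-byte pointer alignment) + 16 = 24
	 * ELFCLASS32: 4 + 2 * 4 = 12
	 */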
out:
- put_target_map(map, uprobes);
+ map__put(map);
return ret;
}
return ret ? : -ENOENT;
}
+/*
+ * Rename DWARF symbols to ELF symbols -- gcc sometimes optimizes functions
+ * and generates new symbols with suffixes such as .constprop.N or .isra.N.
+ * Since those symbols are not recorded in DWARF, we have to find the
+ * correct generated symbols from the offline ELF binary.
+ * For an online kernel or uprobes we don't need this because those are
+ * rebased on _text or are already section-relative addresses.
+ */
+static int
+post_process_offline_probe_trace_events(struct probe_trace_event *tevs,
+ int ntevs, const char *pathname)
+{
+ struct symbol *sym;
+ struct map *map;
+ unsigned long stext = 0;
+ u64 addr;
+ int i;
+
+ /* Prepare a map for offline binary */
+ map = dso__new_map(pathname);
+ if (!map || get_text_start_address(pathname, &stext) < 0) {
+ pr_warning("Failed to get ELF symbols for %s\n", pathname);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < ntevs; i++) {
+ addr = tevs[i].point.address + tevs[i].point.offset - stext;
+ sym = map__find_symbol(map, addr);
+ if (!sym)
+ continue;
+ if (!strcmp(sym->name, tevs[i].point.symbol))
+ continue;
+ /* If we have no realname, use symbol for it */
+ if (!tevs[i].point.realname)
+ tevs[i].point.realname = tevs[i].point.symbol;
+ else
+ free(tevs[i].point.symbol);
+ tevs[i].point.symbol = strdup(sym->name);
+ tevs[i].point.offset = addr - sym->start;
+ }
+ map__put(map);
+
+ return 0;
+}
+
static int add_exec_to_probe_trace_events(struct probe_trace_event *tevs,
int ntevs, const char *exec)
{
/* Skip post process if the target is an offline kernel */
if (symbol_conf.ignore_vmlinux_buildid)
- return 0;
+ return post_process_offline_probe_trace_events(tevs, ntevs,
+ symbol_conf.vmlinux_name);
reloc_sym = kernel_get_ref_reloc_sym();
if (!reloc_sym) {
}
out:
- put_target_map(map, pev->uprobes);
+ map__put(map);
free(syms);
return ret;
return ret;
/* Get a symbol map */
- if (user)
- map = dso__new_map(target);
- else
- map = kernel_get_module_map(target);
+ map = get_target_map(target, user);
if (!map) {
pr_err("Failed to get a map for %s\n", (target) ? : "kernel");
return -EINVAL;
}
end:
- if (user) {
- map__put(map);
- }
+ map__put(map);
exit_probe_symbol_maps();
return ret;
break;
} else {
int n = namesz + descsz;
+
+ if (n > (int)sizeof(bf)) {
+ n = sizeof(bf);
+ pr_debug("%s: truncating reading of build id in sysfs file %s: n_namesz=%u, n_descsz=%u.\n",
+ __func__, filename, nhdr.n_namesz, nhdr.n_descsz);
+ }
if (read(fd, bf, n) != n)
break;
}
done;
@# Ask all targets to emit their test scripts
- echo "#!/bin/bash" > $(ALL_SCRIPT)
+ echo "#!/bin/sh" > $(ALL_SCRIPT)
echo "cd \$$(dirname \$$0)" >> $(ALL_SCRIPT)
echo "ROOT=\$$PWD" >> $(ALL_SCRIPT)
-#!/bin/bash
+#!/bin/sh
SRC_TREE=../../../../
-#!/bin/bash
+#!/bin/sh
echo "--------------------"
echo "running socket test"
tracing_off();
close_test_fds();
- printf("test %2d PASSED (itertation %d)\n", test_nr, iteration_nr);
+ printf("test %2d PASSED (iteration %d)\n", test_nr, iteration_nr);
dprintf1("======================\n\n");
}
iteration_nr++;
mutex_lock(&lock);
list_for_each_entry(tmp, &consumers, node) {
- if (tmp->token == consumer->token) {
+ if (tmp->token == consumer->token || tmp == consumer) {
mutex_unlock(&lock);
module_put(THIS_MODULE);
return -EBUSY;
mutex_lock(&lock);
list_for_each_entry(tmp, &consumers, node) {
- if (tmp->token != consumer->token)
+ if (tmp != consumer)
continue;
list_for_each_entry(producer, &producers, node) {