James Bottomley <jejb@mulgrave.(none)>
James Bottomley <jejb@titanic.il.steeleye.com>
James E Wilson <wilson@specifix.com>
+James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
+James Hogan <jhogan@kernel.org> <james@albanarts.com>
James Ketrenos <jketreno@io.(none)>
Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com>
<javier@osg.samsung.com> <javier.martinez@collabora.co.uk>
What: /sys/power/pm_trace_dev_match
Date: October 2010
-Contact: James Hogan <james@albanarts.com>
+Contact: James Hogan <jhogan@kernel.org>
Description:
The /sys/power/pm_trace_dev_match file contains the name of the
device associated with the last PM event point saved in the RTC
Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory. The limitation was common to both ST and
MT wq albeit less severe on MT. Each wq maintained its own separate
-worker pool. A MT wq could provide only one execution context per CPU
-while a ST wq one for the whole system. Work items had to compete for
+worker pool. An MT wq could provide only one execution context per CPU
+while an ST wq provided one for the whole system. Work items had to compete for
those very limited execution contexts leading to various problems
including proneness to deadlocks around the single execution context.
``alloc_workqueue()`` allocates a wq. The original
``create_*workqueue()`` functions are deprecated and scheduled for
-removal. ``alloc_workqueue()`` takes three arguments - @``name``,
+removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
``@flags`` and ``@max_active``. ``@name`` is the name of the wq and
also used as the name of the rescuer thread if there is one.
served by worker threads with elevated nice level.
Note that normal and highpri worker-pools don't interact with
- each other. Each maintain its separate pool of workers and
+ each other. Each maintains its separate pool of workers and
implements concurrency management among its workers.
``WQ_CPU_INTENSIVE``
time thus achieving the same ordering property as ST wq.
In the current implementation the above configuration only guarantees
-ST behavior within a given NUMA node. Instead alloc_ordered_queue should
-be used to achieve system wide ST behavior.
+ST behavior within a given NUMA node. Instead, ``alloc_ordered_workqueue()`` should
+be used to achieve system-wide ST behavior.
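For illustration, a minimal sketch of both allocation styles (the ``my_*``
names are invented for this example); the ordered wq executes at most one
work item at a time, system wide::

  #include <linux/workqueue.h>

  static void my_work_fn(struct work_struct *work)
  {
          /* runs in process context on a worker of the selected pool */
  }
  static DECLARE_WORK(my_work, my_work_fn);

  static int my_setup(void)
  {
          struct workqueue_struct *wq, *ordered_wq;

          /* up to 16 in-flight work items, served by the highpri pool */
          wq = alloc_workqueue("my_wq", WQ_HIGHPRI, 16);
          if (!wq)
                  return -ENOMEM;

          /* system-wide ST behavior: one work item at a time */
          ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);
          if (!ordered_wq) {
                  destroy_workqueue(wq);
                  return -ENOMEM;
          }

          queue_work(ordered_wq, &my_work);
          return 0;
  }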
Example Execution Scenarios
index.txt - File index, Mailing list and Links (this document)
-intel-pstate.txt - Intel pstate cpufreq driver specific file.
-
pcc-cpufreq.txt - PCC cpufreq driver specific file.
(wrong raid10_copies/raid10_format sequence)
1.11.1 Add raid4/5/6 journal write-back support via journal_mode option
1.12.1 fix for MD deadlock between mddev_suspend() and md_write_start() available
+1.13.0 Fix dev_health status at end of "recover" (was 'a', now 'A')
Required properties:
- compatible : should contain one of these
+ "brcm,brcmstb-ddr-phy-v71.1"
+ "brcm,brcmstb-ddr-phy-v72.0"
"brcm,brcmstb-ddr-phy-v225.1"
"brcm,brcmstb-ddr-phy-v240.1"
"brcm,brcmstb-ddr-phy-v240.2"
Power-Down (SRPD), among other things.
Required properties:
-- compatible : should contain "brcm,brcmstb-memc-ddr"
+- compatible : should contain one of these
+ "brcm,brcmstb-memc-ddr-rev-b.2.2"
+ "brcm,brcmstb-memc-ddr"
- reg : the MEMC DDR register range
Example:
- compatible : should contain two values. First value must be one from the following list:
- "samsung,exynos3250-pmu" - for Exynos3250 SoC,
- "samsung,exynos4210-pmu" - for Exynos4210 SoC,
- - "samsung,exynos4212-pmu" - for Exynos4212 SoC,
- "samsung,exynos4412-pmu" - for Exynos4412 SoC,
- "samsung,exynos5250-pmu" - for Exynos5250 SoC,
- "samsung,exynos5260-pmu" - for Exynos5260 SoC.
compatible = "st,stm32h743-rcc", "st,stm32-rcc";
reg = <0x58024400 0x400>;
#reset-cells = <1>;
- #clock-cells = <2>;
+ #clock-cells = <1>;
clocks = <&clk_hse>, <&clk_lse>, <&clk_i2s_ckin>;
st,syscfg = <&pwrcfg>;
* Core, iface, and bus clocks required for "qcom,scm"
- clock-names: Must contain "core" for the core clock, "iface" for the interface
clock and "bus" for the bus clock per the requirements of the compatible.
+- qcom,dload-mode: phandle to the TCSR hardware block and offset of the
+ download mode control register (optional)
Example for MSM8916:
compatible : Must be "ams,as3645a".
reg : The I2C address of the device. Typically 0x30.
+#address-cells : 1
+#size-cells : 0
-Required properties of the "flash" child node
-=============================================
+Required properties of the flash child node (0)
+===============================================
+reg: 0
flash-timeout-us: Flash timeout in microseconds. The value must be in
the range [100000, 850000] and divisible by 50000.
flash-max-microamp: Maximum flash current in microamperes. Has to be
and divisible by 50000.
-Optional properties of the "flash" child node
-=============================================
+Optional properties of the flash child node
+===========================================
label : The label of the flash LED.
-Required properties of the "indicator" child node
-=================================================
+Required properties of the indicator child node (1)
+===================================================
+reg: 1
led-max-microamp: Maximum indicator current. The allowed values are
2500, 5000, 7500 and 10000.
-Optional properties of the "indicator" child node
-=================================================
+Optional properties of the indicator child node
+===============================================
label : The label of the indicator LED.
Example
=======
as3645a@30 {
+ #address-cells = <1>;
+ #size-cells = <0>;
reg = <0x30>;
compatible = "ams,as3645a";
- flash {
+ flash@0 {
+ reg = <0x0>;
flash-timeout-us = <150000>;
flash-max-microamp = <320000>;
led-max-microamp = <60000>;
ams,input-max-microamp = <1750000>;
label = "as3645a:flash";
};
- indicator {
+ indicator@1 {
+ reg = <0x1>;
led-max-microamp = <10000>;
label = "as3645a:indicator";
};
--- /dev/null
+DDR PHY Front End (DPFE) for Broadcom STB
+=========================================
+
+DPFE and the DPFE firmware provide an interface for the host CPU to
+communicate with the DCPU, which resides inside the DDR PHY.
+
+There are three memory regions for interacting with the DCPU. These are
+specified in a single reg property.
+
+Required properties:
+ - compatible: must be "brcm,bcm7271-dpfe-cpu", "brcm,bcm7268-dpfe-cpu"
+ or "brcm,dpfe-cpu"
+ - reg: must reference three register ranges
+ - start address and length of the DCPU register space
+ - start address and length of the DCPU data memory space
+ - start address and length of the DCPU instruction memory space
+ - reg-names: must contain "dpfe-cpu", "dpfe-dmem", and "dpfe-imem";
+ they must be in the same order as the register declarations
+
+Example:
+ dpfe_cpu0: dpfe-cpu@f1132000 {
+ compatible = "brcm,bcm7271-dpfe-cpu", "brcm,dpfe-cpu";
+ reg = <0xf1132000 0x180
+ 0xf1134000 0x1000
+ 0xf1138000 0x4000>;
+ reg-names = "dpfe-cpu", "dpfe-dmem", "dpfe-imem";
+ };
The experimental -viper variants are for running Linux on the 3384's
BMIPS4355 cable modem CPU instead of the BMIPS5000 application processor.
+
+Power management
+----------------
+
+For power management (particularly, S2/S3/S5 system suspend), the following SoC
+components are needed:
+
+= Always-On control block (AON CTRL)
+
+This hardware provides control registers for the "always-on" (even in low-power
+modes) hardware, such as the Power Management State Machine (PMSM).
+
+Required properties:
+- compatible : should be one of
+ "brcm,bcm7425-aon-ctrl"
+ "brcm,bcm7429-aon-ctrl"
+ "brcm,bcm7435-aon-ctrl" and
+ "brcm,brcmstb-aon-ctrl"
+- reg : the register start and length for the AON CTRL block
+
+Example:
+
+syscon@410000 {
+ compatible = "brcm,bcm7425-aon-ctrl", "brcm,brcmstb-aon-ctrl";
+ reg = <0x410000 0x400>;
+};
+
+= Memory controllers
+
+A Broadcom STB SoC typically has a number of independent memory controllers,
+each of which may have several associated hardware blocks, which are versioned
+independently (control registers, DDR PHYs, etc.). One might consider
+describing these controllers as a parent "memory controllers" block, which
+contains N sub-nodes (one for each controller in the system), each of which is
+associated with a number of hardware register resources (e.g., its PHY).
+
+== MEMC (MEMory Controller)
+
+Represents a single memory controller instance.
+
+Required properties:
+- compatible : should contain "brcm,brcmstb-memc" and "simple-bus"
+- ranges : should contain the child address in the parent address
+ space, must be 0 here, and the register start and length of
+ the entire memory controller (including all sub nodes: DDR PHY,
+ arbiter, etc.)
+- #address-cells : must be 1
+- #size-cells : must be 1
+
+Example:
+
+ memory-controller@0 {
+ compatible = "brcm,brcmstb-memc", "simple-bus";
+ ranges = <0x0 0x0 0xa000>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ memc-arb@1000 {
+ ...
+ };
+
+ memc-ddr@2000 {
+ ...
+ };
+
+ ddr-phy@6000 {
+ ...
+ };
+ };
+
+Should contain subnodes for any of the following relevant hardware resources:
+
+== DDR PHY control
+
+Control registers for this memory controller's DDR PHY.
+
+Required properties:
+- compatible : should contain one of these
+ "brcm,brcmstb-ddr-phy-v64.5"
+ "brcm,brcmstb-ddr-phy"
+
+- reg : the DDR PHY register range and length
+
+Example:
+
+ ddr-phy@6000 {
+ compatible = "brcm,brcmstb-ddr-phy-v64.5";
+ reg = <0x6000 0xc8>;
+ };
+
+== DDR memory controller sequencer
+
+Control registers for this memory controller's DDR memory sequencer.
+
+Required properties:
+- compatible : should contain one of these
+ "brcm,bcm7425-memc-ddr"
+ "brcm,bcm7429-memc-ddr"
+ "brcm,bcm7435-memc-ddr" and
+ "brcm,brcmstb-memc-ddr"
+
+- reg : the DDR sequencer register range and length
+
+Example:
+
+ memc-ddr@2000 {
+ compatible = "brcm,bcm7425-memc-ddr", "brcm,brcmstb-memc-ddr";
+ reg = <0x2000 0x300>;
+ };
+
+== MEMC Arbiter
+
+The memory controller arbiter is responsible for memory client allocation
+(bandwidth, priorities etc.) and needs to have its contents restored during
+deep sleep states (S3).
+
+Required properties:
+
+- compatible : should contain one of these
+ "brcm,brcmstb-memc-arb-v10.0.0.0"
+ "brcm,brcmstb-memc-arb"
+
+- reg : the DDR Arbiter register range and length
+
+Example:
+
+ memc-arb@1000 {
+ compatible = "brcm,brcmstb-memc-arb-v10.0.0.0";
+ reg = <0x1000 0x248>;
+ };
+
+== Timers
+
+The Broadcom STB chips contain a timer block with several general purpose
+timers that can be used.
+
+Required properties:
+
+- compatible : should contain one of:
+ "brcm,bcm7425-timers"
+ "brcm,bcm7429-timers"
+ "brcm,bcm7435-timers and
+ "brcm,brcmstb-timers"
+- reg : the timers register range
+- interrupts : the interrupt line for this timer block
+
+Example:
+
+ timers: timer@4067c0 {
+ compatible = "brcm,bcm7425-timers", "brcm,brcmstb-timers";
+ reg = <0x4067c0 0x40>;
+ interrupts = <&periph_intc 19>;
+ };
- clocks:
Array of clocks required for SDHC.
- Require at least input clock for Xenon IP core.
+ At least the input clock for the Xenon IP core is required. For Armada
+ AP806 and CP110, the AXI clock is also mandatory.
- clock-names:
Array of names corresponding to clocks property.
The input clock for Xenon IP core should be named as "core".
+ The input clock for the AXI bus must be named "axi".
- reg:
* For "marvell,armada-3700-sdhci", two register areas.
compatible = "marvell,armada-ap806-sdhci";
reg = <0xaa0000 0x1000>;
interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>
- clocks = <&emmc_clk>;
- clock-names = "core";
+ clocks = <&emmc_clk>,<&axi_clk>;
+ clock-names = "core", "axi";
bus-width = <4>;
marvell,xenon-phy-slow-mode;
marvell,xenon-tun-count = <11>;
interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>
vqmmc-supply = <&sd_vqmmc_regulator>;
vmmc-supply = <&sd_vmmc_regulator>;
- clocks = <&sdclk>;
- clock-names = "core";
+ clocks = <&sdclk>, <&axi_clk>;
+ clock-names = "core", "axi";
bus-width = <4>;
marvell,xenon-tun-count = <9>;
};
- main controller clock (for both armada-375-pp2 and armada-7k-pp2)
- GOP clock (for both armada-375-pp2 and armada-7k-pp2)
- MG clock (only for armada-7k-pp2)
-- clock-names: names of used clocks, must be "pp_clk", "gop_clk" and
- "mg_clk" (the latter only for armada-7k-pp2).
+ - AXI clock (only for armada-7k-pp2)
+- clock-names: names of used clocks, must be "pp_clk", "gop_clk", "mg_clk"
+ and "axi_clk" (the 2 latter only for armada-7k-pp2).
The ethernet ports are represented by subnodes. At least one port is
required.
cpm_ethernet: ethernet@0 {
compatible = "marvell,armada-7k-pp22";
reg = <0x0 0x100000>, <0x129000 0xb000>;
- clocks = <&cpm_syscon0 1 3>, <&cpm_syscon0 1 9>, <&cpm_syscon0 1 5>;
- clock-names = "pp_clk", "gop_clk", "gp_clk";
+ clocks = <&cpm_syscon0 1 3>, <&cpm_syscon0 1 9>,
+ <&cpm_syscon0 1 5>, <&cpm_syscon0 1 18>;
+ clock-names = "pp_clk", "gop_clk", "gp_clk", "axi_clk";
eth0: eth0 {
interrupts = <ICU_GRP_NSR 39 IRQ_TYPE_LEVEL_HIGH>,
Required properties:
- compatible: should be "rockchip,<name>-gmac"
+ "rockchip,rk3128-gmac": found on RK312x SoCs
"rockchip,rk3228-gmac": found on RK322x SoCs
"rockchip,rk3288-gmac": found on RK3288 SoCs
"rockchip,rk3328-gmac": found on RK3328 SoCs
--- /dev/null
+Amlogic Meson Power Controller
+==============================
+
+The Amlogic Meson SoCs embed an internal power domain controller.
+
+VPU Power Domain
+----------------
+
+The Video Processing Unit power domain is controlled by this power controller,
+but the domain requires some external resources to follow the correct power
+sequencing.
+The bindings must follow the generic power domain bindings as described in
+the file power_domain.txt.
+
+Device Tree Bindings:
+---------------------
+
+Required properties:
+- compatible: should be "amlogic,meson-gx-pwrc-vpu" for the Meson GX SoCs
+- #power-domain-cells: should be 0
+- amlogic,hhi-sysctrl: phandle to the HHI sysctrl node
+- resets: phandles to the reset lines needed for this power domain sequence
+ as described in ../reset/reset.txt
+- clocks: from common clock binding: handle to VPU and VAPB clocks
+- clock-names: from common clock binding: must contain "vpu", "vapb"
+ corresponding to the entries in the clocks property.
+
+Parent node should have the following properties :
+- compatible: "amlogic,meson-gx-ao-sysctrl", "syscon", "simple-mfd"
+- reg: base address and size of the AO system control register space.
+
+Example:
+--------
+
+ao_sysctrl: sys-ctrl@0 {
+ compatible = "amlogic,meson-gx-ao-sysctrl", "syscon", "simple-mfd";
+ reg = <0x0 0x0 0x0 0x100>;
+
+ pwrc_vpu: power-controller-vpu {
+ compatible = "amlogic,meson-gx-pwrc-vpu";
+ #power-domain-cells = <0>;
+ amlogic,hhi-sysctrl = <&sysctrl>;
+ resets = <&reset RESET_VIU>,
+ <&reset RESET_VENC>,
+ <&reset RESET_VCBUS>,
+ <&reset RESET_BT656>,
+ <&reset RESET_DVIN_RESET>,
+ <&reset RESET_RDMA>,
+ <&reset RESET_VENCI>,
+ <&reset RESET_VENCP>,
+ <&reset RESET_VDAC>,
+ <&reset RESET_VDI6>,
+ <&reset RESET_VENCL>,
+ <&reset RESET_VID_LOCK>;
+ clocks = <&clkc CLKID_VPU>,
+ <&clkc CLKID_VAPB>;
+ clock-names = "vpu", "vapb";
+ };
+};
+
+
- "renesas,r8a7794-sysc" (R-Car E2)
- "renesas,r8a7795-sysc" (R-Car H3)
- "renesas,r8a7796-sysc" (R-Car M3-W)
+ - "renesas,r8a77970-sysc" (R-Car V3M)
- "renesas,r8a77995-sysc" (R-Car D3)
- reg: Address start and address range for the device.
- #power-domain-cells: Must be 1.
--- /dev/null
+Qualcomm Remote File System Memory binding
+
+This binding describes the Qualcomm remote filesystem memory: the shared
+memory region used by remote processors to access block device data using
+the Remote Filesystem protocol.
+
+- compatible:
+ Usage: required
+ Value type: <stringlist>
+ Definition: must be:
+ "qcom,rmtfs-mem"
+
+- reg:
+ Usage: required for static allocation
+ Value type: <prop-encoded-array>
+ Definition: must specify base address and size of the memory region,
+ as described in reserved-memory.txt
+
+- size:
+ Usage: required for dynamic allocation
+ Value type: <prop-encoded-array>
+ Definition: must specify a size of the memory region, as described in
+ reserved-memory.txt
+
+- qcom,client-id:
+ Usage: required
+ Value type: <u32>
+ Definition: identifier of the client to use this region for buffers.
+
+- qcom,vmid:
+ Usage: optional
+ Value type: <u32>
+ Definition: vmid of the remote processor, to set up memory protection.
+
+= EXAMPLE
+The following example shows the remote filesystem memory setup for APQ8016,
+with the rmtfs region for the Hexagon DSP (id #1) located at 0x86700000.
+
+ reserved-memory {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+
+ rmtfs@86700000 {
+ compatible = "qcom,rmtfs-mem";
+ reg = <0x0 0x86700000 0x0 0xe0000>;
+ no-map;
+
+ qcom,client-id = <1>;
+ };
+ };
- "renesas,r8a7794-rst" (R-Car E2)
- "renesas,r8a7795-rst" (R-Car H3)
- "renesas,r8a7796-rst" (R-Car M3-W)
+ - "renesas,r8a77970-rst" (R-Car V3M)
- "renesas,r8a77995-rst" (R-Car D3)
- reg: Address start and address range for the device.
--- /dev/null
+Binding for the AXS10x reset controller
+
+This binding describes the custom IP block on ARC AXS10x boards which allows
+control of the reset signals of selected peripherals, for example the DW GMAC.
+This block is controlled via a memory-mapped register (AKA CREG) which
+represents up to 32 reset lines.
+
+As of today only the following lines are used:
+ - DW GMAC - line 5
+
+This binding uses the common reset binding[1].
+
+[1] Documentation/devicetree/bindings/reset/reset.txt
+
+Required properties:
+- compatible: should be "snps,axs10x-reset".
+- reg: should always contain the address/length pair of the CREG reset
+ bits register.
+- #reset-cells: from common reset binding; should always be set to 1.
+
+Example:
+ reset: reset-controller@11220 {
+ compatible = "snps,axs10x-reset";
+ #reset-cells = <1>;
+ reg = <0x11220 0x4>;
+ };
+
+Specifying reset lines connected to IP modules:
+ ethernet@.... {
+ ....
+ resets = <&reset 5>;
+ ....
+ };
--- /dev/null
+Binding for the Synopsys HSDK reset controller
+
+This binding uses the common reset binding[1].
+
+[1] Documentation/devicetree/bindings/reset/reset.txt
+
+Required properties:
+- compatible: should be "snps,hsdk-reset".
+- reg: should always contain two address/length pairs: the first for the
+ reset configuration register and the second for the corresponding SW reset
+ and status bits register.
+- #reset-cells: from common reset binding; should always be set to 1.
+
+Example:
+ reset: reset@880 {
+ compatible = "snps,hsdk-reset";
+ #reset-cells = <1>;
+ reg = <0x8A0 0x4>, <0xFF0 0x4>;
+ };
+
+Specifying reset lines connected to IP modules:
+ ethernet@.... {
+ ....
+ resets = <&reset HSDK_V1_ETH_RESET>;
+ ....
+ };
+
+The reset line indices can be found in <dt-bindings/reset/snps,hsdk-reset.h>.
"socionext,uniphier-pxs2-reset" - for PXs2/LD6b SoC
"socionext,uniphier-ld11-reset" - for LD11 SoC
"socionext,uniphier-ld20-reset" - for LD20 SoC
+ "socionext,uniphier-pxs3-reset" - for PXs3 SoC
- #reset-cells: should be 1.
Example:
"socionext,uniphier-ld11-mio-reset" - for LD11 SoC (MIO)
"socionext,uniphier-ld11-sd-reset" - for LD11 SoC (SD)
"socionext,uniphier-ld20-sd-reset" - for LD20 SoC
+ "socionext,uniphier-pxs3-sd-reset" - for PXs3 SoC
- #reset-cells: should be 1.
Example:
"socionext,uniphier-pxs2-peri-reset" - for PXs2/LD6b SoC
"socionext,uniphier-ld11-peri-reset" - for LD11 SoC
"socionext,uniphier-ld20-peri-reset" - for LD20 SoC
+ "socionext,uniphier-pxs3-peri-reset" - for PXs3 SoC
- #reset-cells: should be 1.
Example:
the firmware event log
- linux,sml-size : size of the memory allocated for the firmware event log
+Optional properties:
+
+- powered-while-suspended: present when the TPM is left powered on between
+ suspend and resume (makes the suspend/resume
+ callbacks do nothing).
+
Example (for OpenPower Systems with Nuvoton TPM 2.0 on I2C)
----------------------------------------------------------
- "renesas,hscif-r8a7795" for R8A7795 (R-Car H3) HSCIF compatible UART.
- "renesas,scif-r8a7796" for R8A7796 (R-Car M3-W) SCIF compatible UART.
- "renesas,hscif-r8a7796" for R8A7796 (R-Car M3-W) HSCIF compatible UART.
+ - "renesas,scif-r8a77970" for R8A77970 (R-Car V3M) SCIF compatible UART.
+ - "renesas,hscif-r8a77970" for R8A77970 (R-Car V3M) HSCIF compatible UART.
- "renesas,scif-r8a77995" for R8A77995 (R-Car D3) SCIF compatible UART.
- "renesas,hscif-r8a77995" for R8A77995 (R-Car D3) HSCIF compatible UART.
- "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART.
BMan Private Memory Node
BMan requires a contiguous range of physical memory used for the backing store
-for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as a
-node under the /reserved-memory node
+for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as
+a node under the /reserved-memory node.
The BMan FBPR memory node must be named "bman-fbpr"
- compatible
Usage: required
Value type: <stringlist>
- Definition: Must inclide "fsl,bman-fbpr"
+ Definition: PPC platforms: Must include "fsl,bman-fbpr"
+ ARM platforms: Must include "shared-dma-pool"
+ as well as the "no-map" property
The following constraints are relevant to the FBPR private memory:
- The size must be 2^(size + 1), with size = 11..33. That is 4 KiB to
ranges;
bman_fbpr: bman-fbpr {
- compatible = "fsl,bman-fbpr";
- alloc-ranges = <0 0 0x10 0>;
+ compatible = "shared-mem-pool";
size = <0 0x1000000>;
alignment = <0 0x1000000>;
+ no-map;
};
};
Value type: <prop-encoded-array>
Definition: Reference input clock. Its frequency is half of the
platform clock
+- memory-region
+ Usage: Required for ARM
+ Value type: <phandle array>
+ Definition: List of phandles referencing the QMan private memory
+ nodes (described below). The qman-fqd node must be listed
+ first, followed by the qman-pfdr node. Only used on ARM.
Devices connected to a QMan instance via Direct Connect Portals (DCP) must link
to the respective QMan instance
QMan requires two contiguous ranges of physical memory used for the backing store
for QMan Frame Queue Descriptor (FQD) and Packed Frame Descriptor Record (PFDR).
-This memory is reserved/allocated as a nodes under the /reserved-memory node
+This memory is reserved/allocated as nodes under the /reserved-memory node.
+
+For additional details about reserved memory regions see reserved-memory.txt
The QMan FQD memory node must be named "qman-fqd"
- compatible
Usage: required
Value type: <stringlist>
- Definition: Must inclide "fsl,qman-fqd"
+ Definition: PPC platforms: Must include "fsl,qman-fqd"
+ ARM platforms: Must include "shared-dma-pool"
+ as well as the "no-map" property
The QMan PFDR memory node must be named "qman-pfdr"
- compatible
Usage: required
Value type: <stringlist>
- Definition: Must inclide "fsl,qman-pfdr"
+ Definition: PPC platforms: Must include "fsl,qman-pfdr"
+ ARM platforms: Must include "shared-dma-pool"
+ as well as the "no-map" property
The following constraints are relevant to the FQD and PFDR private memory:
- The size must be 2^(size + 1), with size = 11..29. That is 4 KiB to
ranges;
qman_fqd: qman-fqd {
- compatible = "fsl,qman-fqd";
- alloc-ranges = <0 0 0x10 0>;
+ compatible = "shared-dma-pool";
size = <0 0x400000>;
alignment = <0 0x400000>;
+ no-map;
};
qman_pfdr: qman-pfdr {
- compatible = "fsl,qman-pfdr";
- alloc-ranges = <0 0 0x10 0>;
+ compatible = "shared-dma-pool";
size = <0 0x2000000>;
alignment = <0 0x2000000>;
+ no-map;
};
};
Required properties in pwrap device node.
- compatible:
"mediatek,mt2701-pwrap" for MT2701/7623 SoCs
+ "mediatek,mt7622-pwrap" for MT7622 SoCs
"mediatek,mt8135-pwrap" for MT8135 SoCs
"mediatek,mt8173-pwrap" for MT8173 SoCs
- interrupts: IRQ for pwrap in SOC
- clocks: Must contain an entry for each entry in clock-names.
Optional properties:
-- pmic: Mediatek PMIC MFD is the child device of pwrap
+- pmic: either the MediaTek PMIC MFD as the child device of pwrap
 See the following for child node definitions:
 Documentation/devicetree/bindings/mfd/mt6397.txt
+ or a regulator-only device (such as the MT6380) as the child device of pwrap.
+ See the following for definitions of such devices:
+ Documentation/devicetree/bindings/regulator/mt6380-regulator.txt
Example:
pwrap: pwrap@1000f000 {
--- /dev/null
+NVIDIA Tegra186 BPMP thermal sensor
+
+In Tegra186, the BPMP (Boot and Power Management Processor) implements an
+interface that is used to read system temperatures, including CPU cluster
+and GPU temperatures. This binding describes the thermal sensor that is
+exposed by BPMP.
+
+The BPMP thermal node must be located directly inside the main BPMP node. See
+../firmware/nvidia,tegra186-bpmp.txt for details of the BPMP binding.
+
+This node represents a thermal sensor. See thermal.txt for details of the
+core thermal binding.
+
+Required properties:
+- compatible:
+ Array of strings.
+ One of:
+ - "nvidia,tegra186-bpmp-thermal".
+- #thermal-sensor-cells: Cell for sensor index.
+ Single-cell integer.
+ Must be <1>.
+
+Example:
+
+bpmp {
+ ...
+
+ bpmp_thermal: thermal {
+ compatible = "nvidia,tegra186-bpmp-thermal";
+ #thermal-sensor-cells = <1>;
+ };
+};
This isn't an exhaustive list, but you should add new prefixes to it before
using them to avoid name-space collisions.
-abcn Abracon Corporation
abilis Abilis Systems
+abracon Abracon Corporation
actions Actions Semiconductor Co., Ltd.
active-semi Active-Semi International Inc
ad Avionic Design GmbH
Support for power domains is provided through the :c:member:`pm_domain` field of
|struct device|. This field is a pointer to an object of type
-|struct dev_pm_domain|, defined in :file:`include/linux/pm.h``, providing a set
+|struct dev_pm_domain|, defined in :file:`include/linux/pm.h`, providing a set
of power management callbacks analogous to the subsystem-level and device driver
callbacks that are executed for the given device during all power transitions,
instead of the respective subsystem-level callbacks. Specifically, if a
};
Device drivers can export attributes via their sysfs directories.
-Drivers can declare attributes using a DRIVER_ATTR macro that works
-identically to the DEVICE_ATTR macro.
+Drivers can declare attributes using the DRIVER_ATTR_RW and DRIVER_ATTR_RO
+macros, which work identically to the DEVICE_ATTR_RW and DEVICE_ATTR_RO
+macros.
Example:
-DRIVER_ATTR(debug,0644,show_debug,store_debug);
+DRIVER_ATTR_RW(debug);
This is equivalent to declaring:
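struct driver_attribute driver_attr_debug;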
Jeff Layton (many, many fixes, as well as great work on the cifs Kerberos code)
Scott Lovenberg
Pavel Shilovsky (for great work adding SMB2 support, and various SMB3 features)
+Aurelien Aptel (for DFS SMB3 work and some key bug fixes)
+Ronnie Sahlberg (for SMB3 xattr work and bug fixes)
+Shirish Pargaonkar (for many ACL patches over the years)
+Sachin Prabhu (many bug fixes, including for reconnect, copy offload and security)
+
Test case and Bug Report contributors
-------------------------------------
-The CIFS VFS support for Linux supports many advanced network filesystem
-features such as hierarchical dfs like namespace, hardlinks, locking and more.
+This module supports the SMB3 family of advanced network protocols (as well
+as older dialects, originally called "CIFS" or SMB1).
+
+The CIFS VFS module for Linux supports many advanced network filesystem
+features such as a hierarchical DFS-like namespace, hardlinks, locking and more.
It was designed to comply with the SNIA CIFS Technical Reference (which
supersedes the 1992 X/Open SMB Standard) as well as to perform best practice
practical interoperability with Windows 2000, Windows XP, Samba and equivalent
servers. This code was developed in participation with the Protocol Freedom
-Information Foundation.
+Information Foundation. CIFS, and now SMB3, have become a de facto
+standard for interoperability among Macs, Windows and major NAS appliances.
Please see
http://protocolfreedom.org/ and
For questions or bug reports please contact:
sfrench@samba.org (sfrench@us.ibm.com)
+See the project page at: https://wiki.samba.org/index.php/LinuxCIFS_utils
+
Build instructions:
==================
-For Linux 2.4:
-1) Get the kernel source (e.g.from http://www.kernel.org)
-and download the cifs vfs source (see the project page
-at http://us1.samba.org/samba/Linux_CIFS_client.html)
-and change directory into the top of the kernel directory
-then patch the kernel (e.g. "patch -p1 < cifs_24.patch")
-to add the cifs vfs to your kernel configure options if
-it has not already been added (e.g. current SuSE and UL
-users do not need to apply the cifs_24.patch since the cifs vfs is
-already in the kernel configure menu) and then
-mkdir linux/fs/cifs and then copy the current cifs vfs files from
-the cifs download to your kernel build directory e.g.
-
- cp <cifs_download_dir>/fs/cifs/* to <kernel_download_dir>/fs/cifs
-
-2) make menuconfig (or make xconfig)
-3) select cifs from within the network filesystem choices
-4) save and exit
-5) make dep
-6) make modules (or "make" if CIFS VFS not to be built as a module)
-
-For Linux 2.6:
+For Linux:
1) Download the kernel (e.g. from http://www.kernel.org)
and change directory into the top of the kernel directory tree
(e.g. /usr/src/linux-2.5.73)
If you do not have the utility mount.cifs (in the Samba 3.0 source tree and on
the CIFS VFS web site) copy it to the same directory in which mount.smbfs and
similar files reside (usually /sbin). Although the helper software is not
-required, mount.cifs is recommended. Eventually the Samba 3.0 utility program
-"net" may also be helpful since it may someday provide easier mount syntax for
-users who are used to Windows e.g.
- net use <mount point> <UNC name or cifs URL>
+required, mount.cifs is recommended. Most distros include a "cifs-utils"
+package that provides this utility, so installing it is recommended.
+
Note that running the Winbind pam/nss module (logon service) on all of your
Linux clients is useful in mapping Uids and Gids consistently across the
domain to the proper network user. The mount.cifs mount helper can be
-trivially built from Samba 3.0 or later source e.g. by executing:
-
- gcc samba/source/client/mount.cifs.c -o mount.cifs
+found at cifs-utils.git on git.samba.org
If cifs is built as a module, then the size and number of network buffers
and maximum number of simultaneous requests to one server can be configured.
on kernel/fs/cifs/cifs.ko the list of configuration changes that can be made
at module initialization time (by running insmod cifs.ko) can be seen.
+Recommendations
+===============
+To improve security, the SMB2.1 dialect or later (usually SMB3) is now the
+default. To use old dialects (e.g. to mount Windows XP) use "vers=1.0"
+on mount (or vers=2.0 for Windows Vista). Note that CIFS (vers=1.0) is
+much older and less secure than the default dialect SMB3, which includes
+many advanced security features such as downgrade attack detection,
+encrypted shares, and stronger signing and authentication algorithms.
+There are additional mount options that may be helpful for SMB3 to get
+improved POSIX behavior (NB: use vers=3.0 to force only SMB3, never 2.1):
+ "mfsymlinks" and "cifsacl" and "idsfromsid"
+
Allowing User Mounts
====================
Permitting users to mount and unmount over directories they own is possible
by default. This can be changed, as with nfs and other filesystems,
by simply specifying "nosuid" among the mount options. For user mounts
though to be able to pass the suid flag to mount requires rebuilding
-mount.cifs with the following flag:
-
- gcc samba/source/client/mount.cifs.c -DCIFS_ALLOW_USR_SUID -o mount.cifs
+mount.cifs with the following flag: CIFS_ALLOW_USR_SUID
There is a corresponding manual page for cifs mounting in the Samba 3.0 and
later source tree in docs/manpages/mount.cifs.8
Use instructions:
================
Once the CIFS VFS support is built into the kernel or installed as a module
-(cifs.o), you can use mount syntax like the following to access Samba or Windows
-servers:
+(cifs.ko), you can use mount syntax like the following to access Samba or
+Mac or Windows servers:
- mount -t cifs //9.53.216.11/e$ /mnt -o user=myname,pass=mypassword
+ mount -t cifs //9.53.216.11/e$ /mnt -o username=myname,password=mypassword
Before -o the option -v may be specified to make the mount.cifs
mount helper display the mount steps more verbosely.
After -o the following commonly used cifs vfs specific options
are supported:
- user=<username>
- pass=<password>
+ username=<username>
+ password=<password>
domain=<domain name>
Other cifs mount options are described below. Use of TCP names (in addition to
filenames (ie those which contain valid Linux characters, which normally
would be forbidden for Windows/CIFS semantics) as long as the server is
configured for Unix Extensions (and the client has not disabled
-/proc/fs/cifs/LinuxExtensionsEnabled).
-
+/proc/fs/cifs/LinuxExtensionsEnabled). In addition the mount option
+"mapposix" can be used on CIFS (vers=1.0) to force the mapping of
+illegal Windows/NTFS/SMB characters to a remap range (this mount parm
+is the default for SMB3). This remap ("mapposix") range is also
+compatible with Mac (and "Services for Mac" on some older Windows).
CIFS VFS Mount Options
======================
A partial list of the supported mount options follows:
- user The user name to use when trying to establish
+ username The user name to use when trying to establish
the CIFS session.
password The user password. If the mount helper is
installed, the user will be prompted for password
-Version 2.03 August 1, 2014
+Version 2.04 September 13, 2017
A Partial List of Missing Features
==================================
The following is a partial list of the known problems and missing features:
a) SMB3 (and SMB3.02) missing optional features:
- - RDMA
+ - RDMA (started)
- multichannel (started)
- directory leases (improved metadata caching)
- T10 copy offload (copy chunk is only mechanism supported)
- - encrypted shares
b) improved sparse file support
c) Directory entry caching relies on a 1 second timer, rather than
-using FindNotify or equivalent. - (started)
+using Directory Leases
d) quota support (needs minor kernel change since quota calls
to make it to network filesystems or deviceless filesystems)
-e) improve support for very old servers (OS/2 and Win9x for example)
-Including support for changing the time remotely (utimes command).
+e) Better optimize open to reduce redundant opens (using reference
+counts more) and to improve use of compounding in SMB3 to reduce the
+number of roundtrips.
-f) hook lower into the sockets api (as NFS/SunRPC does) to avoid the
-extra copy in/out of the socket buffers in some cases.
-
-g) Better optimize open (and pathbased setfilesize) to reduce the
-oplock breaks coming from windows srv. Piggyback identical file
-opens on top of each other by incrementing reference count rather
-than resending (helps reduce server resource utilization and avoid
-spurious oplock breaks).
-
-h) Add support for storing symlink info to Windows servers
-in the Extended Attribute format their SFU clients would recognize.
-
-i) Finish inotify support so kde and gnome file list windows
+f) Finish inotify support so kde and gnome file list windows
will autorefresh (partially complete by Asser). Needs minor kernel
vfs change to support removing D_NOTIFY on a file.
-j) Add GUI tool to configure /proc/fs/cifs settings and for display of
+g) Add GUI tool to configure /proc/fs/cifs settings and for display of
the CIFS statistics (started)
-k) implement support for security and trusted categories of xattrs
+h) implement support for security and trusted categories of xattrs
(requires minor protocol extension) to enable better support for SELINUX
-l) Implement O_DIRECT flag on open (already supported on mount)
+i) Implement O_DIRECT flag on open (already supported on mount)
-m) Create UID mapping facility so server UIDs can be mapped on a per
+j) Create UID mapping facility so server UIDs can be mapped on a per
mount or a per server basis to client UIDs or nobody if no mapping
-exists. This is helpful when Unix extensions are negotiated to
-allow better permission checking when UIDs differ on the server
-and client. Add new protocol request to the CIFS protocol
-standard for asking the server for the corresponding name of a
-particular uid.
+exists. Also better integration with winbind for resolving SID owners.
+
+k) Add tools to take advantage of more smb3 specific ioctls and features
+
+l) encrypted file support
+
+m) improved stats gathering, tools (perhaps integration with nfsometer?)
-n) DOS attrs - returned as pseudo-xattr in Samba format (check VFAT and NTFS for this too)
+n) allow setting more NTFS/SMB3 file attributes remotely (currently limited to compressed
+file attribute via chflags) and improve user space tools for managing and
+viewing them.
-o) mount check for unmatched uids
+o) mount helper GUI (to simplify the various configuration options on mount)
-p) Add support for new vfs entry point for fallocate
+p) autonegotiation of dialects (offering more than one dialect ie SMB3.02,
+SMB3, SMB2.1 not just SMB3).
-q) Add tools to take advantage of cifs/smb3 specific ioctls and features
-such as "CopyChunk" (fast server side file copy)
+q) Allow mount.cifs to be more verbose in reporting dialect
+or unsupported feature errors.
-r) encrypted file support
+r) updating cifs documentation and user guide.
-s) improved stats gathering, tools (perhaps integration with nfsometer?)
+s) Addressing bugs found by running a broader set of xfstests in the standard
+file system xfstest suite.
-t) allow setting more NTFS/SMB3 file attributes remotely (currently limited to compressed
-file attribute via chflags)
+t) split cifs and smb3 support into separate modules so legacy (and less
+secure) CIFS dialect can be disabled in environments that don't need it
+and simplify the code.
-u) mount helper GUI (to simplify the various configuration options on mount)
+u) Finish up SMB3.1.1 dialect support
+v) POSIX Extensions for SMB3.1.1
KNOWN BUGS
==========
- This is the client VFS module for the Common Internet File System
- (CIFS) protocol which is the successor to the Server Message Block
+ This is the client VFS module for the SMB3 NAS protocol as well as
+ older dialects such as the Common Internet File System (CIFS)
+ protocol, which was the successor to the Server Message Block
(SMB) protocol, the native file sharing mechanism for most early
PC operating systems. New and improved versions of CIFS are now
called SMB2 and SMB3. These dialects are also supported by the
CIFS VFS module. CIFS is fully supported by network
- file servers such as Windows 2000, 2003, 2008 and 2012
+ file servers such as Windows 2000, 2003, 2008, 2012 and 2016
as well by Samba (which provides excellent CIFS
- server support for Linux and many other operating systems), so
+ server support for Linux and many other operating systems), Apple
+ systems, as well as most Network Attached Storage vendors, so
this network filesystem client can mount to a wide variety of
servers.
The intent of this module is to provide the most advanced network
- file system function for CIFS compliant servers, including better
- POSIX compliance, secure per-user session establishment, high
- performance safe distributed caching (oplock), optional packet
+ file system function for SMB3 compliant servers, including advanced
+ security features, excellent parallelized high performance i/o, better
+ POSIX compliance, secure per-user session establishment, encryption,
+ high performance safe distributed caching (leases/oplocks), optional packet
signing, large files, Unicode support and other internationalization
improvements. Since both Samba server and this filesystem client support
- the CIFS Unix extensions, the combination can provide a reasonable
- alternative to NFSv4 for fileserving in some Linux to Linux environments,
- not just in Linux to Windows environments.
+ the CIFS Unix extensions (and in the future SMB3 POSIX extensions),
+ the combination can provide a reasonable alternative to other network and
+ cluster file systems for fileserving in some Linux to Linux environments,
+ not just in Linux to Windows (or Linux to Mac) environments.
This filesystem has a mount utility (mount.cifs) that can be obtained from
beneath or above the path of another overlay lower layer path.
Using an upper layer path and/or a workdir path that are already used by
-another overlay mount is not allowed and will fail with EBUSY. Using
+another overlay mount is not allowed and may fail with EBUSY. Using
partially overlapping paths is not allowed but will not fail with EBUSY.
+If files are accessed from two overlayfs mounts which share or overlap the
+upper layer and/or workdir path, the behavior of the overlay is undefined,
+though it will not result in a crash or deadlock.
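For illustration, a minimal sketch of a single overlay mount via mount(2);
the paths here are hypothetical:

  #include <sys/mount.h>

  /* Each overlay mount should use its own upperdir and workdir; reusing
   * the ones below for a second overlay mount may fail with EBUSY as
   * described above. */
  int mount_overlay(void)
  {
          return mount("overlay", "/merged", "overlay", 0,
                       "lowerdir=/lower,upperdir=/upper,workdir=/work");
  }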
Mounting an overlay using an upper layer path, where the upper layer path
was previously used by another mounted overlay in combination with a
Declaring:
-DRIVER_ATTR(_name, _mode, _show, _store)
+DRIVER_ATTR_RO(_name)
+DRIVER_ATTR_RW(_name)
Creation/Removal:
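int driver_create_file(struct device_driver *drv,
                       const struct driver_attribute *attr);
void driver_remove_file(struct device_driver *drv,
                        const struct driver_attribute *attr);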
* Intel Gemini Lake (SOC)
* Intel Cannon Lake-H (PCH)
* Intel Cannon Lake-LP (PCH)
+ * Intel Cedar Fork (PCH)
Datasheets: Publicly available at the Intel website
On Intel Patsburg and later chipsets, both the normal host SMBus controller
jeq #14, good /* __NR_rt_sigprocmask */
jeq #13, good /* __NR_rt_sigaction */
jeq #35, good /* __NR_nanosleep */
- bad: ret #0 /* SECCOMP_RET_KILL */
+ bad: ret #0 /* SECCOMP_RET_KILL_THREAD */
good: ret #0x7fff0000 /* SECCOMP_RET_ALLOW */
The above example code can be placed into a file (here called "foo"), and
2: Enable DAD, and disable IPv6 operation if MAC-based duplicate
link-local address has been found.
+ DAD operation and mode on a given interface will be selected according
+ to the maximum value of conf/{all,interface}/accept_dad.
+
force_tllao - BOOLEAN
Enable sending the target link-layer address option even when
responding to a unicast neighbor solicitation.
optimistic_dad - BOOLEAN
Whether to perform Optimistic Duplicate Address Detection (RFC 4429).
- 0: disabled (default)
- 1: enabled
+ 0: disabled (default)
+ 1: enabled
+
+ Optimistic Duplicate Address Detection for the interface will be enabled
+ if at least one of conf/{all,interface}/optimistic_dad is set to 1,
+ it will be disabled otherwise.
use_optimistic - BOOLEAN
If enabled, do not classify optimistic addresses as deprecated during
source address selection. Preferred addresses will still be chosen
before optimistic addresses, subject to other ranking in the source
address selection algorithm.
- 0: disabled (default)
- 1: enabled
+ 0: disabled (default)
+ 1: enabled
+
+ This will be enabled if at least one of
+ conf/{all,interface}/use_optimistic is set to 1, disabled otherwise.
stable_secret - IPv6 address
This IPv6 address will be used as a secret to generate IPv6
with SR-IOV or soft switches, such as OVS, are possible.
- User-space tools
-
- user space |
- +-------------------------------------------------------------------+
- kernel | Netlink
- |
- +--------------+-------------------------------+
- | Network stack |
- | (Linux) |
- | |
- +----------------------------------------------+
+ User-space tools
+
+ user space |
+ +-------------------------------------------------------------------+
+ kernel | Netlink
+ |
+ +--------------+-------------------------------+
+ | Network stack |
+ | (Linux) |
+ | |
+ +----------------------------------------------+
sw1p2 sw1p4 sw1p6
- sw1p1 + sw1p3 + sw1p5 + eth1
- + | + | + | +
- | | | | | | |
- +--+----+----+----+-+--+----+---+ +-----+-----+
- | Switch driver | | mgmt |
- | (this document) | | driver |
- | | | |
- +--------------+----------------+ +-----------+
- |
- kernel | HW bus (eg PCI)
- +-------------------------------------------------------------------+
- hardware |
- +--------------+---+------------+
- | Switch device (sw1) |
- | +----+ +--------+
- | | v offloaded data path | mgmt port
- | | | |
- +--|----|----+----+----+----+---+
- | | | | | |
- + + + + + +
- p1 p2 p3 p4 p5 p6
-
- front-panel ports
+ sw1p1 + sw1p3 + sw1p5 + eth1
+ + | + | + | +
+ | | | | | | |
+ +--+----+----+----+----+----+---+ +-----+-----+
+ | Switch driver | | mgmt |
+ | (this document) | | driver |
+ | | | |
+ +--------------+----------------+ +-----------+
+ |
+ kernel | HW bus (eg PCI)
+ +-------------------------------------------------------------------+
+ hardware |
+ +--------------+----------------+
+ | Switch device (sw1) |
+ | +----+ +--------+
+ | | v offloaded data path | mgmt port
+ | | | |
+ +--|----|----+----+----+----+---+
+ | | | | | |
+ + + + + + +
+ p1 p2 p3 p4 p5 p6
+
+ front-panel ports
Fig 1.
- reboot-cmd [ SPARC only ]
- rtsig-max
- rtsig-nr
+- seccomp/ ==> Documentation/userspace-api/seccomp_filter.rst
- sem
- sem_next_id [ sysv ipc ]
- sg-big-buff [ generic SCSI device (sg) ]
A seccomp filter may return any of the following values. If multiple
filters exist, the return value for the evaluation of a given system
call will always use the highest precedent value. (For example,
-``SECCOMP_RET_KILL`` will always take precedence.)
+``SECCOMP_RET_KILL_PROCESS`` will always take precedence.)
In precedence order, they are:
-``SECCOMP_RET_KILL``:
+``SECCOMP_RET_KILL_PROCESS``:
+ Results in the entire process exiting immediately without executing
+ the system call. The exit status of the task (``status & 0x7f``)
+ will be ``SIGSYS``, not ``SIGKILL``.
+
+``SECCOMP_RET_KILL_THREAD``:
Results in the task exiting immediately without executing the
system call. The exit status of the task (``status & 0x7f``) will
be ``SIGSYS``, not ``SIGKILL``.
allow use of ptrace, even of other sandboxed processes, without
extreme care; ptracers can use this mechanism to escape.)
+``SECCOMP_RET_LOG``:
+ Results in the system call being executed after it is logged. This
+ should be used by application developers to learn which syscalls their
+ application needs without having to iterate through multiple test and
+ development cycles to build the list.
+
+ This action will only be logged if "log" is present in the
+ actions_logged sysctl string.
+
``SECCOMP_RET_ALLOW``:
Results in the system call being executed.
and a more generic example of a higher level macro interface for BPF
program generation.
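For a concrete sketch (not taken from the samples; the syscall number 39
assumes x86-64 ``__NR_getpid``), a filter that logs one syscall and allows
everything else can be installed as follows::

  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <stddef.h>
  #include <sys/prctl.h>

  static int install_filter(void)
  {
          struct sock_filter filter[] = {
                  /* load the syscall number */
                  BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                           offsetof(struct seccomp_data, nr)),
                  /* log getpid(), allow everything else */
                  BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 39, 0, 1),
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_LOG),
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
          };
          struct sock_fprog prog = {
                  .len = sizeof(filter) / sizeof(filter[0]),
                  .filter = filter,
          };

          /* required unless the caller has CAP_SYS_ADMIN */
          if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                  return -1;
          return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
  }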
+Sysctls
+=======
+Seccomp's sysctl files can be found in the ``/proc/sys/kernel/seccomp/``
+directory. Here's a description of each file in that directory:
+
+``actions_avail``:
+ A read-only ordered list of seccomp return values (refer to the
+ ``SECCOMP_RET_*`` macros above) in string form. The ordering, from
+ left-to-right, is the least permissive return value to the most
+ permissive return value.
+
+ The list represents the set of seccomp return values supported
+ by the kernel. A userspace program may use this list to
+ determine whether the actions found in ``seccomp.h`` when the
+ program was built differ from the set of actions actually
+ supported in the current running kernel.
+
+``actions_logged``:
+ A read-write ordered list of seccomp return values (refer to the
+ ``SECCOMP_RET_*`` macros above) that are allowed to be logged. Writes
+ to the file do not need to be in ordered form but reads from the file
+ will be ordered in the same way as the actions_avail sysctl.
+
+ It is important to note that the value of ``actions_logged`` does not
+ prevent certain actions from being logged when the audit subsystem is
+ configured to audit a task. If the action is not found in
+ ``actions_logged`` list, the decision on whether to audit the
+ action for that task is ultimately left to the audit subsystem
+ for all seccomp return values other than ``SECCOMP_RET_ALLOW``.
+
+ The ``allow`` string is not accepted in the ``actions_logged`` sysctl
+ as it is not possible to log ``SECCOMP_RET_ALLOW`` actions. Attempting
+ to write ``allow`` to the sysctl will result in an EINVAL being
+ returned.
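For instance, a minimal userspace sketch of that check (only the read is
shown; the comparison against the program's built-in table is omitted)::

  #include <stdio.h>

  int main(void)
  {
          char buf[256];
          FILE *f = fopen("/proc/sys/kernel/seccomp/actions_avail", "r");

          if (!f)
                  return 1;
          if (fgets(buf, sizeof(buf), f))
                  printf("kernel supports: %s", buf);
          fclose(f);
          return 0;
  }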
Adding architecture support
===========================
S: Maintained
F: drivers/acpi/arm64
+ACPI PMIC DRIVERS
+M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
+M: Len Brown <lenb@kernel.org>
+R: Andy Shevchenko <andy@infradead.org>
+R: Mika Westerberg <mika.westerberg@linux.intel.com>
+L: linux-acpi@vger.kernel.org
+Q: https://patchwork.kernel.org/project/linux-acpi/list/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+B: https://bugzilla.kernel.org
+S: Supported
+F: drivers/acpi/pmic/
+
ACPI THERMAL DRIVER
M: Zhang Rui <rui.zhang@intel.com>
L: linux-acpi@vger.kernel.org
W: http://www.linux4sam.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/nferre/linux-at91.git
S: Supported
+N: at91
+N: atmel
F: arch/arm/mach-at91/
F: include/soc/at91/
F: arch/arm/boot/dts/at91*.dts
F: arch/arm/boot/dts/sama*.dtsi
F: arch/arm/include/debug/at91.S
F: drivers/memory/atmel*
+F: drivers/watchdog/sama5d4_wdt.c
+X: drivers/input/touchscreen/atmel_mxt_ts.c
+X: drivers/net/wireless/atmel/
ARM/CALXEDA HIGHBANK ARCHITECTURE
M: Rob Herring <robh@kernel.org>
F: drivers/i2c/busses/i2c-zx2967.c
F: drivers/mmc/host/dw_mmc-zx.*
F: drivers/pinctrl/zte/
-F: drivers/reset/reset-zx2967.c
F: drivers/soc/zte/
F: drivers/thermal/zx2967_thermal.c
F: drivers/watchdog/zx2967_wdt.c
F: drivers/scsi/bnx2i/
BROADCOM BNX2X 10 GIGABIT ETHERNET DRIVER
-M: Yuval Mintz <Yuval.Mintz@cavium.com>
M: Ariel Elior <ariel.elior@cavium.com>
M: everest-linux-l2@cavium.com
L: netdev@vger.kernel.org
S: Maintained
F: drivers/mtd/nand/brcmnand/
+BROADCOM STB DPFE DRIVER
+M: Markus Mayer <mmayer@broadcom.com>
+M: bcm-kernel-feedback-list@broadcom.com
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: Documentation/devicetree/bindings/memory-controllers/brcm,dpfe-cpu.txt
+F: drivers/memory/brcmstb_dpfe.c
+
BROADCOM SYSTEMPORT ETHERNET DRIVER
M: Florian Fainelli <f.fainelli@gmail.com>
L: netdev@vger.kernel.org
F: drivers/iommu/exynos-iommu.c
EZchip NPS platform support
-M: Noam Camus <noamc@ezchip.com>
+M: Elad Kanfi <eladkan@mellanox.com>
+M: Vineet Gupta <vgupta@synopsys.com>
S: Supported
F: arch/arc/plat-eznps
F: arch/arc/boot/dts/eznps.dts
M: Stefan Schmidt <stefan@osg.samsung.com>
L: linux-wpan@vger.kernel.org
W: http://wpan.cakelab.org/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth.git
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan-next.git
S: Maintained
F: net/ieee802154/
F: net/mac802154/
F: drivers/auxdisplay/img-ascii-lcd.c
IMGTEC IR DECODER DRIVER
-M: James Hogan <james.hogan@imgtec.com>
+M: James Hogan <jhogan@kernel.org>
S: Maintained
F: drivers/media/rc/img-ir/
F: arch/arm64/kvm/
KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
-M: James Hogan <james.hogan@imgtec.com>
+M: James Hogan <jhogan@kernel.org>
L: linux-mips@linux-mips.org
S: Supported
F: arch/mips/include/uapi/asm/kvm*
S: Orphan
F: drivers/net/wireless/marvell/libertas/
+MARVELL MACCHIATOBIN SUPPORT
+M: Russell King <rmk@armlinux.org.uk>
+L: linux-arm-kernel@lists.infradead.org
+S: Maintained
+F: arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts
+
MARVELL MV643XX ETHERNET DRIVER
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/media/rc/mtk-cir.c
+MEDIATEK PMIC LED DRIVER
+M: Sean Wang <sean.wang@mediatek.com>
+S: Maintained
+F: drivers/leds/leds-mt6323.c
+F: Documentation/devicetree/bindings/leds/leds-mt6323.txt
+
MEDIATEK ETHERNET DRIVER
M: Felix Fietkau <nbd@openwrt.org>
M: John Crispin <john@phrozen.org>
T: git git://linuxtv.org/media_tree.git
METAG ARCHITECTURE
-M: James Hogan <james.hogan@imgtec.com>
+M: James Hogan <jhogan@kernel.org>
L: linux-metag@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag.git
S: Odd Fixes
M: Josef Bacik <jbacik@fb.com>
S: Maintained
L: linux-block@vger.kernel.org
-L: nbd-general@lists.sourceforge.net
+L: nbd@other.debian.org
F: Documentation/blockdev/nbd.txt
F: drivers/block/nbd.c
F: include/uapi/linux/nbd.h
F: drivers/scsi/qedi/
QLOGIC QL4xxx ETHERNET DRIVER
-M: Yuval Mintz <Yuval.Mintz@cavium.com>
M: Ariel Elior <Ariel.Elior@cavium.com>
M: everest-linux-l2@cavium.com
L: netdev@vger.kernel.org
F: arch/arc/boot/dts/ax*
F: Documentation/devicetree/bindings/arc/axs10*
+SYNOPSYS AXS10x RESET CONTROLLER DRIVER
+M: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
+S: Supported
+F: drivers/reset/reset-axs10x.c
+F: Documentation/devicetree/bindings/reset/snps,axs10x-reset.txt
+
SYNOPSYS DESIGNWARE DMAC DRIVER
M: Viresh Kumar <vireshk@kernel.org>
M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
SYNOPSYS HSDK RESET CONTROLLER DRIVER
M: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
S: Supported
-F: drivers/reset/reset-hsdk-v1.c
-F: include/dt-bindings/reset/snps,hsdk-v1-reset.h
-F: Documentation/devicetree/bindings/reset/snps,hsdk-v1-reset.txt
+F: drivers/reset/reset-hsdk.c
+F: include/dt-bindings/reset/snps,hsdk-reset.h
+F: Documentation/devicetree/bindings/reset/snps,hsdk-reset.txt
SYSTEM CONFIGURATION (SYSCON)
M: Lee Jones <lee.jones@linaro.org>
VERSION = 4
PATCHLEVEL = 14
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc4
NAME = Fearless Coyote
# *DOCUMENTATION*
PHONY += kselftest
kselftest:
- $(Q)$(MAKE) -C tools/testing/selftests run_tests
+ $(Q)$(MAKE) -C $(srctree)/tools/testing/selftests run_tests
PHONY += kselftest-clean
kselftest-clean:
- $(Q)$(MAKE) -C tools/testing/selftests clean
+ $(Q)$(MAKE) -C $(srctree)/tools/testing/selftests clean
PHONY += kselftest-merge
kselftest-merge:
*/
#include <linux/mm_types.h>
+#include <linux/sched.h>
#include <asm/machvec.h>
#include <asm/compiler.h>
select GENERIC_SMP_IDLE_THREAD
select HAVE_ARCH_KGDB
select HAVE_ARCH_TRACEHOOK
- select HAVE_FUTEX_CMPXCHG
+ select HAVE_FUTEX_CMPXCHG if FUTEX
select HAVE_IOREMAP_PROT
select HAVE_KPROBES
select HAVE_KRETPROBES
# published by the Free Software Foundation.
#
-UTS_MACHINE := arc
-
ifeq ($(CROSS_COMPILE),)
ifndef CONFIG_CPU_BIG_ENDIAN
CROSS_COMPILE := arc-linux-
mmcclk: mmcclk {
compatible = "fixed-clock";
- clock-frequency = <50000000>;
+ /*
+ * DW sdio controller has external ciu clock divider
+ * controlled via register in SDIO IP. It divides
+ * sdio_ref_clk (which comes from CGU) by 16 by
+ * default. So the default mmcclk clock (which comes
+ * to sdk_in) is 25000000 Hz.
+ */
+ clock-frequency = <25000000>;
#clock-cells = <0>;
};
/dts-v1/;
#include <dt-bindings/net/ti-dp83867.h>
+#include <dt-bindings/reset/snps,hsdk-reset.h>
/ {
model = "snps,hsdk";
};
};
- core_clk: core-clk {
+ input_clk: input-clk {
#clock-cells = <0>;
compatible = "fixed-clock";
- clock-frequency = <500000000>;
+ clock-frequency = <33333333>;
};
cpu_intc: cpu-interrupt-controller {
ranges = <0x00000000 0xf0000000 0x10000000>;
+ cgu_rst: reset-controller@8a0 {
+ compatible = "snps,hsdk-reset";
+ #reset-cells = <1>;
+ reg = <0x8A0 0x4>, <0xFF0 0x4>;
+ };
+
+ core_clk: core-clk@0 {
+ compatible = "snps,hsdk-core-pll-clock";
+ reg = <0x00 0x10>, <0x14B8 0x4>;
+ #clock-cells = <0>;
+ clocks = <&input_clk>;
+ };
+
serial: serial@5000 {
compatible = "snps,dw-apb-uart";
reg = <0x5000 0x100>;
mmcclk_ciu: mmcclk-ciu {
compatible = "fixed-clock";
- clock-frequency = <100000000>;
+ /*
+ * DW sdio controller has external ciu clock divider
+ * controlled via register in SDIO IP. Due to its
+ * unexpected default value (it should divide by 1
+ * but it divides by 8) the SDIO IP uses the wrong
+ * clock and works unstably (see STAR 9001204800).
+ * So add temporary fix and change clock frequency
+ * from 100000000 to 12500000 Hz until we fix dw sdio
+ * driver itself.
+ */
+ clock-frequency = <12500000>;
#clock-cells = <0>;
};
clocks = <&gmacclk>;
clock-names = "stmmaceth";
phy-handle = <&phy0>;
+ resets = <&cgu_rst HSDK_ETH_RESET>;
+ reset-names = "stmmaceth";
mdio {
#address-cells = <1>;
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_NFS_FS=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_DW=y
# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_RESET_HSDK=y
CONFIG_EXT3_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_SHIRQ=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_SHIRQ=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
/* Auxiliary registers */
#define AUX_IDENTITY 4
+#define AUX_EXEC_CTRL 8
#define AUX_INTR_VEC_BASE 0x25
#define AUX_VOL 0x5e
#endif
};
-struct bcr_isa {
+struct bcr_isa_arcv2 {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int div_rem:4, pad2:4, ldd:1, unalign:1, atomic:1, be:1,
- pad1:11, atomic1:1, ver:8;
+ pad1:12, ver:8;
#else
- unsigned int ver:8, atomic1:1, pad1:11, be:1, atomic:1, unalign:1,
+ unsigned int ver:8, pad1:12, be:1, atomic:1, unalign:1,
ldd:1, pad2:4, div_rem:4;
#endif
};
struct cpuinfo_arc_mmu mmu;
struct cpuinfo_arc_bpu bpu;
struct bcr_identity core;
- struct bcr_isa isa;
+ struct bcr_isa_arcv2 isa;
const char *details, *name;
unsigned int vec_base;
struct cpuinfo_arc_ccm iccm, dccm;
struct {
unsigned int swap:1, norm:1, minmax:1, barrel:1, crc:1, swape:1, pad1:2,
- fpu_sp:1, fpu_dp:1, pad2:6,
+ fpu_sp:1, fpu_dp:1, dual_iss_enb:1, dual_iss_exist:1, pad2:4,
debug:1, ap:1, smart:1, rtt:1, pad3:4,
timer0:1, timer1:1, rtc:1, gfrc:1, pad4:4;
} extn;
#endif
-#define copy_segments(tsk, mm) do { } while (0)
-#define release_segments(mm) do { } while (0)
-
#define KSTK_EIP(tsk) (task_pt_regs(tsk)->ret)
#define KSTK_ESP(tsk) (task_pt_regs(tsk)->sp)
{ 0x51, "R2.0" },
{ 0x52, "R2.1" },
{ 0x53, "R3.0" },
+ { 0x54, "R4.0" },
#endif
{ 0x00, NULL }
};
#else
{ 0x40, "ARC EM" },
{ 0x50, "ARC HS38" },
+ { 0x54, "ARC HS48" },
#endif
{ 0x00, "Unknown" }
};
struct bcr_generic bcr;
struct cpuinfo_arc *cpu = &cpuinfo_arc700[smp_processor_id()];
const struct id_to_str *tbl;
+ struct bcr_isa_arcv2 isa;
FIX_PTR(cpu);
READ_BCR(AUX_IDENTITY, cpu->core);
- READ_BCR(ARC_REG_ISA_CFG_BCR, cpu->isa);
for (tbl = &arc_cpu_rel[0]; tbl->id != 0; tbl++) {
if (cpu->core.family == tbl->id) {
}
for (tbl = &arc_cpu_nm[0]; tbl->id != 0; tbl++) {
- if ((cpu->core.family & 0xF0) == tbl->id)
+ if ((cpu->core.family & 0xF4) == tbl->id)
break;
}
cpu->name = tbl->str;
cpu->bpu.full = bpu.ft;
cpu->bpu.num_cache = 256 << bpu.bce;
cpu->bpu.num_pred = 2048 << bpu.pte;
+
+ if (cpu->core.family >= 0x54) {
+ unsigned int exec_ctrl;
+
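+		/* EXEC_CTRL bit 0 reports whether dual-issue is currently enabled */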
+ READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
+ cpu->extn.dual_iss_exist = 1;
+ cpu->extn.dual_iss_enb = exec_ctrl & 1;
+ }
}
READ_BCR(ARC_REG_AP_BCR, bcr);
cpu->extn.debug = cpu->extn.ap | cpu->extn.smart | cpu->extn.rtt;
+ READ_BCR(ARC_REG_ISA_CFG_BCR, isa);
+
/* some hacks for lack of feature BCR info in old ARC700 cores */
if (is_isa_arcompact()) {
- if (!cpu->isa.ver) /* ISA BCR absent, use Kconfig info */
+ if (!isa.ver) /* ISA BCR absent, use Kconfig info */
cpu->isa.atomic = IS_ENABLED(CONFIG_ARC_HAS_LLSC);
- else
- cpu->isa.atomic = cpu->isa.atomic1;
+ else {
+ /* ARC700_BUILD only has 2 bits of isa info */
+ struct bcr_generic bcr = *(struct bcr_generic *)&isa;
+ cpu->isa.atomic = bcr.info & 1;
+ }
cpu->isa.be = IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
/* there's no direct way to distinguish 750 vs. 770 */
if (unlikely(cpu->core.family < 0x34 || cpu->mmu.ver < 3))
cpu->name = "ARC750";
+ } else {
+ cpu->isa = isa;
}
}
"\nIDENTITY\t: ARCVER [%#02x] ARCNUM [%#02x] CHIPID [%#4x]\n",
core->family, core->cpu_id, core->chip_id);
- n += scnprintf(buf + n, len - n, "processor [%d]\t: %s %s (%s ISA) %s\n",
+ n += scnprintf(buf + n, len - n, "processor [%d]\t: %s %s (%s ISA) %s%s%s\n",
cpu_id, cpu->name, cpu->details,
is_isa_arcompact() ? "ARCompact" : "ARCv2",
- IS_AVAIL1(cpu->isa.be, "[Big-Endian]"));
+ IS_AVAIL1(cpu->isa.be, "[Big-Endian]"),
+ IS_AVAIL3(cpu->extn.dual_iss_exist, cpu->extn.dual_iss_enb, " Dual-Issue"));
n += scnprintf(buf + n, len - n, "Timers\t\t: %s%s%s%s%s%s\nISA Extn\t: ",
IS_AVAIL1(cpu->extn.timer0, "Timer0 "),
axs10x_enable_gpio_intc_wire();
+ /*
+ * Reset the ethernet IP core.
+ * TODO: get rid of this quirk once an axs10x reset driver (or a
+ * simple reset driver) is available upstream.
+ */
+ iowrite32((1 << 5), (void __iomem *) CREG_MB_SW_RESET);
+
scnprintf(mb, 32, "MainBoard v%d", mb_rev);
axs10x_print_board_ver(CREG_MB_VER, mb);
}
#
menuconfig ARC_SOC_HSDK
- bool "ARC HS Development Kit SOC"
+ bool "ARC HS Development Kit SOC"
+ select CLK_HSDK
#define CREG_PAE (CREG_BASE + 0x180)
#define CREG_PAE_UPDATE (CREG_BASE + 0x194)
+#define CREG_CORE_IF_CLK_DIV (CREG_BASE + 0x4B8)
+#define CREG_CORE_IF_CLK_DIV_2 0x1
+#define CGU_BASE ARC_PERIPHERAL_BASE
+#define CGU_PLL_STATUS (ARC_PERIPHERAL_BASE + 0x4)
+#define CGU_PLL_CTRL (ARC_PERIPHERAL_BASE + 0x0)
+#define CGU_PLL_STATUS_LOCK BIT(0)
+#define CGU_PLL_STATUS_ERR BIT(1)
+#define CGU_PLL_CTRL_1GHZ 0x3A10
+#define HSDK_PLL_LOCK_TIMEOUT 500
+
+#define HSDK_PLL_LOCKED() \
+ !!(ioread32((void __iomem *) CGU_PLL_STATUS) & CGU_PLL_STATUS_LOCK)
+
+#define HSDK_PLL_ERR() \
+ !!(ioread32((void __iomem *) CGU_PLL_STATUS) & CGU_PLL_STATUS_ERR)
+
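+/*
+ * Both helpers re-read CGU_PLL_STATUS on each invocation, so they can
+ * be polled directly in the lock-wait loop below.
+ */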
+static void __init hsdk_set_cpu_freq_1ghz(void)
+{
+ u32 timeout = HSDK_PLL_LOCK_TIMEOUT;
+
+ /*
+ * As we are setting a cpu clock that exceeds 500MHz, the divider
+ * for the interface clock must be programmed to div-by-2.
+ */
+ iowrite32(CREG_CORE_IF_CLK_DIV_2, (void __iomem *) CREG_CORE_IF_CLK_DIV);
+
+ /* Set cpu clock to 1GHz */
+ iowrite32(CGU_PLL_CTRL_1GHZ, (void __iomem *) CGU_PLL_CTRL);
+
+ while (!HSDK_PLL_LOCKED() && timeout--)
+ cpu_relax();
+
+ if (!HSDK_PLL_LOCKED() || HSDK_PLL_ERR())
+ pr_err("Failed to setup CPU frequency to 1GHz!");
+}
+
static void __init hsdk_init_early(void)
{
/*
/* Really apply settings made above */
writel(1, (void __iomem *) CREG_PAE_UPDATE);
+
+ /*
+ * Set up the CPU frequency to 1GHz.
+ * TODO: remove this once a smart hsdk pll driver is introduced.
+ */
+ hsdk_set_cpu_freq_1ghz();
}
static const char *hsdk_compat[] __initconst = {
phy1 = &usb1_phy;
ethernet0 = &cpsw_emac0;
ethernet1 = &cpsw_emac1;
+ spi0 = &spi0;
+ spi1 = &spi1;
};
cpus {
pinctrl-0 = <&cpsw_default>;
pinctrl-1 = <&cpsw_sleep>;
status = "okay";
+ slaves = <1>;
};
&davinci_mdio {
phy-mode = "rmii";
};
-&cpsw_emac1 {
- phy_id = <&davinci_mdio>, <1>;
- phy-mode = "rmii";
-};
-
&phy_sel {
rmii-clock-ext;
};
usb1: ohci@00400000 {
num-ports = <3>;
- atmel,vbus-gpio = <&pioA PIN_PA10 GPIO_ACTIVE_HIGH>;
+ atmel,vbus-gpio = <0 /* &pioA PIN_PD20 GPIO_ACTIVE_HIGH */
+ &pioA PIN_PA27 GPIO_ACTIVE_HIGH
+ 0
+ >;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usb_default>;
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_mikrobus2_uart>;
atmel,use-dma-rx;
- atmel-use-dma-tx;
+ atmel,use-dma-tx;
status = "okay";
};
uart4: serial@fc00c000 {
atmel,use-dma-rx;
atmel,use-dma-tx;
- pinctrl-name = "default";
+ pinctrl-names = "default";
pinctrl-0 = <&pinctrl_mikrobus1_uart>;
status = "okay";
};
};
pinctrl_led_gpio_default: led_gpio_default {
- pinmux = <PIN_PA27__GPIO>,
+ pinmux = <PIN_PA10__GPIO>,
<PIN_PB1__GPIO>,
<PIN_PA31__GPIO>;
bias-pull-up;
};
pinctrl_usb_default: usb_default {
- pinmux = <PIN_PA10__GPIO>,
+ pinmux = <PIN_PA27__GPIO>,
<PIN_PD19__GPIO>;
bias-disable;
};
red {
label = "red";
- gpios = <&pioA PIN_PA27 GPIO_ACTIVE_LOW>;
+ gpios = <&pioA PIN_PA10 GPIO_ACTIVE_HIGH>;
};
green {
label = "green";
- gpios = <&pioA PIN_PB1 GPIO_ACTIVE_LOW>;
+ gpios = <&pioA PIN_PB1 GPIO_ACTIVE_HIGH>;
};
blue {
label = "blue";
- gpios = <&pioA PIN_PA31 GPIO_ACTIVE_LOW>;
+ gpios = <&pioA PIN_PA31 GPIO_ACTIVE_HIGH>;
linux,default-trigger = "heartbeat";
};
};
compatible = "ti,da850-evm", "ti,da850";
model = "DA850/AM1808/OMAP-L138 EVM";
+ aliases {
+ serial0 = &serial0;
+ serial1 = &serial1;
+ serial2 = &serial2;
+ ethernet0 = ð0;
+ };
+
soc@1c00000 {
pmx_core: pinmux@14120 {
status = "okay";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
ti,bit-shift = <24>;
reg = <0x1868>;
+ assigned-clocks = <&mcasp3_ahclkx_mux>;
+ assigned-clock-parents = <&abe_24m_fclk>;
};
mcasp3_aux_gfclk_mux: mcasp3_aux_gfclk_mux@1868 {
io-channel-names = "temp", "bsi", "vbat";
};
- rear_camera: camera@0 {
- compatible = "linux,camera";
-
- module {
- model = "TCM8341MD";
- sensor = <&cam1>;
- };
- };
-
pwm9: dmtimer-pwm {
compatible = "ti,omap-dmtimer-pwm";
#pwm-cells = <3>;
clock-lanes = <1>;
data-lanes = <0>;
lane-polarity = <0 0>;
- clock-inv = <0>;
/* Select strobe = <1> for back camera, <0> for front camera */
strobe = <1>;
- crc = <0>;
};
};
};
clock-frequency = <400000>;
as3645a@30 {
+ #address-cells = <1>;
+ #size-cells = <0>;
reg = <0x30>;
compatible = "ams,as3645a";
- flash {
+ flash@0 {
+ reg = <0x0>;
flash-timeout-us = <150000>;
flash-max-microamp = <320000>;
led-max-microamp = <60000>;
- peak-current-limit = <1750000>;
+ ams,input-max-microamp = <1750000>;
};
- indicator {
+ indicator@1 {
+ reg = <0x1>;
led-max-microamp = <10000>;
};
};
/dts-v1/;
#include "stm32f429.dtsi"
+#include "stm32f429-pinctrl.dtsi"
#include <dt-bindings/input/input.h>
#include <dt-bindings/gpio/gpio.h>
stmpe1600: stmpe1600@42 {
compatible = "st,stmpe1600";
reg = <0x42>;
- irq-gpio = <&gpioi 8 0>;
- irq-trigger = <3>;
interrupts = <8 3>;
- interrupt-parent = <&exti>;
+ interrupt-parent = <&gpioi>;
interrupt-controller;
wakeup-source;
--- /dev/null
+/*
+ * Copyright 2017 - Alexandre Torgue <alexandre.torgue@st.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <dt-bindings/pinctrl/stm32f429-pinfunc.h>
+#include <dt-bindings/mfd/stm32f4-rcc.h>
+
+/ {
+ soc {
+ pinctrl: pin-controller {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x40020000 0x3000>;
+ interrupt-parent = <&exti>;
+ st,syscfg = <&syscfg 0x8>;
+ pins-are-numbered;
+
+ gpioa: gpio@40020000 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x0 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOA)>;
+ st,bank-name = "GPIOA";
+ };
+
+ gpiob: gpio@40020400 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x400 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOB)>;
+ st,bank-name = "GPIOB";
+ };
+
+ gpioc: gpio@40020800 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x800 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOC)>;
+ st,bank-name = "GPIOC";
+ };
+
+ gpiod: gpio@40020c00 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0xc00 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOD)>;
+ st,bank-name = "GPIOD";
+ };
+
+ gpioe: gpio@40021000 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x1000 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOE)>;
+ st,bank-name = "GPIOE";
+ };
+
+ gpiof: gpio@40021400 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x1400 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOF)>;
+ st,bank-name = "GPIOF";
+ };
+
+ gpiog: gpio@40021800 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x1800 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOG)>;
+ st,bank-name = "GPIOG";
+ };
+
+ gpioh: gpio@40021c00 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x1c00 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOH)>;
+ st,bank-name = "GPIOH";
+ };
+
+ gpioi: gpio@40022000 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x2000 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOI)>;
+ st,bank-name = "GPIOI";
+ };
+
+ gpioj: gpio@40022400 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x2400 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOJ)>;
+ st,bank-name = "GPIOJ";
+ };
+
+ gpiok: gpio@40022800 {
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x2800 0x400>;
+ clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOK)>;
+ st,bank-name = "GPIOK";
+ };
+
+ usart1_pins_a: usart1@0 {
+ pins1 {
+ pinmux = <STM32F429_PA9_FUNC_USART1_TX>;
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+ pinmux = <STM32F429_PA10_FUNC_USART1_RX>;
+ bias-disable;
+ };
+ };
+
+ usart3_pins_a: usart3@0 {
+ pins1 {
+ pinmux = <STM32F429_PB10_FUNC_USART3_TX>;
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+ pinmux = <STM32F429_PB11_FUNC_USART3_RX>;
+ bias-disable;
+ };
+ };
+
+ usbotg_fs_pins_a: usbotg_fs@0 {
+ pins {
+ pinmux = <STM32F429_PA10_FUNC_OTG_FS_ID>,
+ <STM32F429_PA11_FUNC_OTG_FS_DM>,
+ <STM32F429_PA12_FUNC_OTG_FS_DP>;
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <2>;
+ };
+ };
+
+ usbotg_fs_pins_b: usbotg_fs@1 {
+ pins {
+ pinmux = <STM32F429_PB12_FUNC_OTG_HS_ID>,
+ <STM32F429_PB14_FUNC_OTG_HS_DM>,
+ <STM32F429_PB15_FUNC_OTG_HS_DP>;
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <2>;
+ };
+ };
+
+ usbotg_hs_pins_a: usbotg_hs@0 {
+ pins {
+ pinmux = <STM32F429_PH4_FUNC_OTG_HS_ULPI_NXT>,
+ <STM32F429_PI11_FUNC_OTG_HS_ULPI_DIR>,
+ <STM32F429_PC0_FUNC_OTG_HS_ULPI_STP>,
+ <STM32F429_PA5_FUNC_OTG_HS_ULPI_CK>,
+ <STM32F429_PA3_FUNC_OTG_HS_ULPI_D0>,
+ <STM32F429_PB0_FUNC_OTG_HS_ULPI_D1>,
+ <STM32F429_PB1_FUNC_OTG_HS_ULPI_D2>,
+ <STM32F429_PB10_FUNC_OTG_HS_ULPI_D3>,
+ <STM32F429_PB11_FUNC_OTG_HS_ULPI_D4>,
+ <STM32F429_PB12_FUNC_OTG_HS_ULPI_D5>,
+ <STM32F429_PB13_FUNC_OTG_HS_ULPI_D6>,
+ <STM32F429_PB5_FUNC_OTG_HS_ULPI_D7>;
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <2>;
+ };
+ };
+
+ ethernet_mii: mii@0 {
+ pins {
+ pinmux = <STM32F429_PG13_FUNC_ETH_MII_TXD0_ETH_RMII_TXD0>,
+ <STM32F429_PG14_FUNC_ETH_MII_TXD1_ETH_RMII_TXD1>,
+ <STM32F429_PC2_FUNC_ETH_MII_TXD2>,
+ <STM32F429_PB8_FUNC_ETH_MII_TXD3>,
+ <STM32F429_PC3_FUNC_ETH_MII_TX_CLK>,
+ <STM32F429_PG11_FUNC_ETH_MII_TX_EN_ETH_RMII_TX_EN>,
+ <STM32F429_PA2_FUNC_ETH_MDIO>,
+ <STM32F429_PC1_FUNC_ETH_MDC>,
+ <STM32F429_PA1_FUNC_ETH_MII_RX_CLK_ETH_RMII_REF_CLK>,
+ <STM32F429_PA7_FUNC_ETH_MII_RX_DV_ETH_RMII_CRS_DV>,
+ <STM32F429_PC4_FUNC_ETH_MII_RXD0_ETH_RMII_RXD0>,
+ <STM32F429_PC5_FUNC_ETH_MII_RXD1_ETH_RMII_RXD1>,
+ <STM32F429_PH6_FUNC_ETH_MII_RXD2>,
+ <STM32F429_PH7_FUNC_ETH_MII_RXD3>;
+ slew-rate = <2>;
+ };
+ };
+
+ adc3_in8_pin: adc@200 {
+ pins {
+ pinmux = <STM32F429_PF10_FUNC_ANALOG>;
+ };
+ };
+
+ pwm1_pins: pwm@1 {
+ pins {
+ pinmux = <STM32F429_PA8_FUNC_TIM1_CH1>,
+ <STM32F429_PB13_FUNC_TIM1_CH1N>,
+ <STM32F429_PB12_FUNC_TIM1_BKIN>;
+ };
+ };
+
+ pwm3_pins: pwm@3 {
+ pins {
+ pinmux = <STM32F429_PB4_FUNC_TIM3_CH1>,
+ <STM32F429_PB5_FUNC_TIM3_CH2>;
+ };
+ };
+
+ i2c1_pins: i2c1@0 {
+ pins {
+ pinmux = <STM32F429_PB9_FUNC_I2C1_SDA>,
+ <STM32F429_PB6_FUNC_I2C1_SCL>;
+ bias-disable;
+ drive-open-drain;
+ slew-rate = <3>;
+ };
+ };
+
+ ltdc_pins: ltdc@0 {
+ pins {
+ pinmux = <STM32F429_PI12_FUNC_LCD_HSYNC>,
+ <STM32F429_PI13_FUNC_LCD_VSYNC>,
+ <STM32F429_PI14_FUNC_LCD_CLK>,
+ <STM32F429_PI15_FUNC_LCD_R0>,
+ <STM32F429_PJ0_FUNC_LCD_R1>,
+ <STM32F429_PJ1_FUNC_LCD_R2>,
+ <STM32F429_PJ2_FUNC_LCD_R3>,
+ <STM32F429_PJ3_FUNC_LCD_R4>,
+ <STM32F429_PJ4_FUNC_LCD_R5>,
+ <STM32F429_PJ5_FUNC_LCD_R6>,
+ <STM32F429_PJ6_FUNC_LCD_R7>,
+ <STM32F429_PJ7_FUNC_LCD_G0>,
+ <STM32F429_PJ8_FUNC_LCD_G1>,
+ <STM32F429_PJ9_FUNC_LCD_G2>,
+ <STM32F429_PJ10_FUNC_LCD_G3>,
+ <STM32F429_PJ11_FUNC_LCD_G4>,
+ <STM32F429_PJ12_FUNC_LCD_B0>,
+ <STM32F429_PJ13_FUNC_LCD_B1>,
+ <STM32F429_PJ14_FUNC_LCD_B2>,
+ <STM32F429_PJ15_FUNC_LCD_B3>,
+ <STM32F429_PK0_FUNC_LCD_G5>,
+ <STM32F429_PK1_FUNC_LCD_G6>,
+ <STM32F429_PK2_FUNC_LCD_G7>,
+ <STM32F429_PK3_FUNC_LCD_B4>,
+ <STM32F429_PK4_FUNC_LCD_B5>,
+ <STM32F429_PK5_FUNC_LCD_B6>,
+ <STM32F429_PK6_FUNC_LCD_B7>,
+ <STM32F429_PK7_FUNC_LCD_DE>;
+ slew-rate = <2>;
+ };
+ };
+
+ dcmi_pins: dcmi@0 {
+ pins {
+ pinmux = <STM32F429_PA4_FUNC_DCMI_HSYNC>,
+ <STM32F429_PB7_FUNC_DCMI_VSYNC>,
+ <STM32F429_PA6_FUNC_DCMI_PIXCLK>,
+ <STM32F429_PC6_FUNC_DCMI_D0>,
+ <STM32F429_PC7_FUNC_DCMI_D1>,
+ <STM32F429_PC8_FUNC_DCMI_D2>,
+ <STM32F429_PC9_FUNC_DCMI_D3>,
+ <STM32F429_PC11_FUNC_DCMI_D4>,
+ <STM32F429_PD3_FUNC_DCMI_D5>,
+ <STM32F429_PB8_FUNC_DCMI_D6>,
+ <STM32F429_PE6_FUNC_DCMI_D7>,
+ <STM32F429_PC10_FUNC_DCMI_D8>,
+ <STM32F429_PC12_FUNC_DCMI_D9>,
+ <STM32F429_PD6_FUNC_DCMI_D10>,
+ <STM32F429_PD2_FUNC_DCMI_D11>;
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <3>;
+ };
+ };
+ };
+ };
+};
/dts-v1/;
#include "stm32f429.dtsi"
+#include "stm32f429-pinctrl.dtsi"
#include <dt-bindings/input/input.h>
/ {
--- /dev/null
+/*
+ * Copyright 2017 - Alexandre Torgue <alexandre.torgue@st.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "stm32f4-pinctrl.dtsi"
+
+/ {
+ soc {
+ pinctrl: pin-controller {
+ compatible = "st,stm32f429-pinctrl";
+
+ gpioa: gpio@40020000 {
+ gpio-ranges = <&pinctrl 0 0 16>;
+ };
+
+ gpiob: gpio@40020400 {
+ gpio-ranges = <&pinctrl 0 16 16>;
+ };
+
+ gpioc: gpio@40020800 {
+ gpio-ranges = <&pinctrl 0 32 16>;
+ };
+
+ gpiod: gpio@40020c00 {
+ gpio-ranges = <&pinctrl 0 48 16>;
+ };
+
+ gpioe: gpio@40021000 {
+ gpio-ranges = <&pinctrl 0 64 16>;
+ };
+
+ gpiof: gpio@40021400 {
+ gpio-ranges = <&pinctrl 0 80 16>;
+ };
+
+ gpiog: gpio@40021800 {
+ gpio-ranges = <&pinctrl 0 96 16>;
+ };
+
+ gpioh: gpio@40021c00 {
+ gpio-ranges = <&pinctrl 0 112 16>;
+ };
+
+ gpioi: gpio@40022000 {
+ gpio-ranges = <&pinctrl 0 128 16>;
+ };
+
+ gpioj: gpio@40022400 {
+ gpio-ranges = <&pinctrl 0 144 16>;
+ };
+
+ gpiok: gpio@40022800 {
+ gpio-ranges = <&pinctrl 0 160 8>;
+ };
+ };
+ };
+};
#include "skeleton.dtsi"
#include "armv7-m.dtsi"
-#include <dt-bindings/pinctrl/stm32f429-pinfunc.h>
#include <dt-bindings/clock/stm32fx-clock.h>
#include <dt-bindings/mfd/stm32f4-rcc.h>
status = "disabled";
};
- pinctrl: pin-controller {
- #address-cells = <1>;
- #size-cells = <1>;
- compatible = "st,stm32f429-pinctrl";
- ranges = <0 0x40020000 0x3000>;
- interrupt-parent = <&exti>;
- st,syscfg = <&syscfg 0x8>;
- pins-are-numbered;
-
- gpioa: gpio@40020000 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x0 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOA)>;
- st,bank-name = "GPIOA";
- };
-
- gpiob: gpio@40020400 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x400 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOB)>;
- st,bank-name = "GPIOB";
- };
-
- gpioc: gpio@40020800 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x800 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOC)>;
- st,bank-name = "GPIOC";
- };
-
- gpiod: gpio@40020c00 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0xc00 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOD)>;
- st,bank-name = "GPIOD";
- };
-
- gpioe: gpio@40021000 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x1000 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOE)>;
- st,bank-name = "GPIOE";
- };
-
- gpiof: gpio@40021400 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x1400 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOF)>;
- st,bank-name = "GPIOF";
- };
-
- gpiog: gpio@40021800 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x1800 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOG)>;
- st,bank-name = "GPIOG";
- };
-
- gpioh: gpio@40021c00 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x1c00 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOH)>;
- st,bank-name = "GPIOH";
- };
-
- gpioi: gpio@40022000 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x2000 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOI)>;
- st,bank-name = "GPIOI";
- };
-
- gpioj: gpio@40022400 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x2400 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOJ)>;
- st,bank-name = "GPIOJ";
- };
-
- gpiok: gpio@40022800 {
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0x2800 0x400>;
- clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOK)>;
- st,bank-name = "GPIOK";
- };
-
- usart1_pins_a: usart1@0 {
- pins1 {
- pinmux = <STM32F429_PA9_FUNC_USART1_TX>;
- bias-disable;
- drive-push-pull;
- slew-rate = <0>;
- };
- pins2 {
- pinmux = <STM32F429_PA10_FUNC_USART1_RX>;
- bias-disable;
- };
- };
-
- usart3_pins_a: usart3@0 {
- pins1 {
- pinmux = <STM32F429_PB10_FUNC_USART3_TX>;
- bias-disable;
- drive-push-pull;
- slew-rate = <0>;
- };
- pins2 {
- pinmux = <STM32F429_PB11_FUNC_USART3_RX>;
- bias-disable;
- };
- };
-
- usbotg_fs_pins_a: usbotg_fs@0 {
- pins {
- pinmux = <STM32F429_PA10_FUNC_OTG_FS_ID>,
- <STM32F429_PA11_FUNC_OTG_FS_DM>,
- <STM32F429_PA12_FUNC_OTG_FS_DP>;
- bias-disable;
- drive-push-pull;
- slew-rate = <2>;
- };
- };
-
- usbotg_fs_pins_b: usbotg_fs@1 {
- pins {
- pinmux = <STM32F429_PB12_FUNC_OTG_HS_ID>,
- <STM32F429_PB14_FUNC_OTG_HS_DM>,
- <STM32F429_PB15_FUNC_OTG_HS_DP>;
- bias-disable;
- drive-push-pull;
- slew-rate = <2>;
- };
- };
-
- usbotg_hs_pins_a: usbotg_hs@0 {
- pins {
- pinmux = <STM32F429_PH4_FUNC_OTG_HS_ULPI_NXT>,
- <STM32F429_PI11_FUNC_OTG_HS_ULPI_DIR>,
- <STM32F429_PC0_FUNC_OTG_HS_ULPI_STP>,
- <STM32F429_PA5_FUNC_OTG_HS_ULPI_CK>,
- <STM32F429_PA3_FUNC_OTG_HS_ULPI_D0>,
- <STM32F429_PB0_FUNC_OTG_HS_ULPI_D1>,
- <STM32F429_PB1_FUNC_OTG_HS_ULPI_D2>,
- <STM32F429_PB10_FUNC_OTG_HS_ULPI_D3>,
- <STM32F429_PB11_FUNC_OTG_HS_ULPI_D4>,
- <STM32F429_PB12_FUNC_OTG_HS_ULPI_D5>,
- <STM32F429_PB13_FUNC_OTG_HS_ULPI_D6>,
- <STM32F429_PB5_FUNC_OTG_HS_ULPI_D7>;
- bias-disable;
- drive-push-pull;
- slew-rate = <2>;
- };
- };
-
- ethernet_mii: mii@0 {
- pins {
- pinmux = <STM32F429_PG13_FUNC_ETH_MII_TXD0_ETH_RMII_TXD0>,
- <STM32F429_PG14_FUNC_ETH_MII_TXD1_ETH_RMII_TXD1>,
- <STM32F429_PC2_FUNC_ETH_MII_TXD2>,
- <STM32F429_PB8_FUNC_ETH_MII_TXD3>,
- <STM32F429_PC3_FUNC_ETH_MII_TX_CLK>,
- <STM32F429_PG11_FUNC_ETH_MII_TX_EN_ETH_RMII_TX_EN>,
- <STM32F429_PA2_FUNC_ETH_MDIO>,
- <STM32F429_PC1_FUNC_ETH_MDC>,
- <STM32F429_PA1_FUNC_ETH_MII_RX_CLK_ETH_RMII_REF_CLK>,
- <STM32F429_PA7_FUNC_ETH_MII_RX_DV_ETH_RMII_CRS_DV>,
- <STM32F429_PC4_FUNC_ETH_MII_RXD0_ETH_RMII_RXD0>,
- <STM32F429_PC5_FUNC_ETH_MII_RXD1_ETH_RMII_RXD1>,
- <STM32F429_PH6_FUNC_ETH_MII_RXD2>,
- <STM32F429_PH7_FUNC_ETH_MII_RXD3>;
- slew-rate = <2>;
- };
- };
-
- adc3_in8_pin: adc@200 {
- pins {
- pinmux = <STM32F429_PF10_FUNC_ANALOG>;
- };
- };
-
- pwm1_pins: pwm@1 {
- pins {
- pinmux = <STM32F429_PA8_FUNC_TIM1_CH1>,
- <STM32F429_PB13_FUNC_TIM1_CH1N>,
- <STM32F429_PB12_FUNC_TIM1_BKIN>;
- };
- };
-
- pwm3_pins: pwm@3 {
- pins {
- pinmux = <STM32F429_PB4_FUNC_TIM3_CH1>,
- <STM32F429_PB5_FUNC_TIM3_CH2>;
- };
- };
-
- i2c1_pins: i2c1@0 {
- pins {
- pinmux = <STM32F429_PB9_FUNC_I2C1_SDA>,
- <STM32F429_PB6_FUNC_I2C1_SCL>;
- bias-disable;
- drive-open-drain;
- slew-rate = <3>;
- };
- };
-
- ltdc_pins: ltdc@0 {
- pins {
- pinmux = <STM32F429_PI12_FUNC_LCD_HSYNC>,
- <STM32F429_PI13_FUNC_LCD_VSYNC>,
- <STM32F429_PI14_FUNC_LCD_CLK>,
- <STM32F429_PI15_FUNC_LCD_R0>,
- <STM32F429_PJ0_FUNC_LCD_R1>,
- <STM32F429_PJ1_FUNC_LCD_R2>,
- <STM32F429_PJ2_FUNC_LCD_R3>,
- <STM32F429_PJ3_FUNC_LCD_R4>,
- <STM32F429_PJ4_FUNC_LCD_R5>,
- <STM32F429_PJ5_FUNC_LCD_R6>,
- <STM32F429_PJ6_FUNC_LCD_R7>,
- <STM32F429_PJ7_FUNC_LCD_G0>,
- <STM32F429_PJ8_FUNC_LCD_G1>,
- <STM32F429_PJ9_FUNC_LCD_G2>,
- <STM32F429_PJ10_FUNC_LCD_G3>,
- <STM32F429_PJ11_FUNC_LCD_G4>,
- <STM32F429_PJ12_FUNC_LCD_B0>,
- <STM32F429_PJ13_FUNC_LCD_B1>,
- <STM32F429_PJ14_FUNC_LCD_B2>,
- <STM32F429_PJ15_FUNC_LCD_B3>,
- <STM32F429_PK0_FUNC_LCD_G5>,
- <STM32F429_PK1_FUNC_LCD_G6>,
- <STM32F429_PK2_FUNC_LCD_G7>,
- <STM32F429_PK3_FUNC_LCD_B4>,
- <STM32F429_PK4_FUNC_LCD_B5>,
- <STM32F429_PK5_FUNC_LCD_B6>,
- <STM32F429_PK6_FUNC_LCD_B7>,
- <STM32F429_PK7_FUNC_LCD_DE>;
- slew-rate = <2>;
- };
- };
-
- dcmi_pins: dcmi@0 {
- pins {
- pinmux = <STM32F429_PA4_FUNC_DCMI_HSYNC>,
- <STM32F429_PB7_FUNC_DCMI_VSYNC>,
- <STM32F429_PA6_FUNC_DCMI_PIXCLK>,
- <STM32F429_PC6_FUNC_DCMI_D0>,
- <STM32F429_PC7_FUNC_DCMI_D1>,
- <STM32F429_PC8_FUNC_DCMI_D2>,
- <STM32F429_PC9_FUNC_DCMI_D3>,
- <STM32F429_PC11_FUNC_DCMI_D4>,
- <STM32F429_PD3_FUNC_DCMI_D5>,
- <STM32F429_PB8_FUNC_DCMI_D6>,
- <STM32F429_PE6_FUNC_DCMI_D7>,
- <STM32F429_PC10_FUNC_DCMI_D8>,
- <STM32F429_PC12_FUNC_DCMI_D9>,
- <STM32F429_PD6_FUNC_DCMI_D10>,
- <STM32F429_PD2_FUNC_DCMI_D11>;
- bias-disable;
- drive-push-pull;
- slew-rate = <3>;
- };
- };
- };
-
crc: crc@40023000 {
compatible = "st,stm32f4-crc";
reg = <0x40023000 0x400>;
/dts-v1/;
#include "stm32f429.dtsi"
+#include "stm32f469-pinctrl.dtsi"
/ {
model = "STMicroelectronics STM32F469i-DISCO board";
--- /dev/null
+/*
+ * Copyright 2017 - Alexandre Torgue <alexandre.torgue@st.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "stm32f4-pinctrl.dtsi"
+
+/ {
+ soc {
+ pinctrl: pin-controller {
+ compatible = "st,stm32f469-pinctrl";
+
+ gpioa: gpio@40020000 {
+ gpio-ranges = <&pinctrl 0 0 16>;
+ };
+
+ gpiob: gpio@40020400 {
+ gpio-ranges = <&pinctrl 0 16 16>;
+ };
+
+ gpioc: gpio@40020800 {
+ gpio-ranges = <&pinctrl 0 32 16>;
+ };
+
+ gpiod: gpio@40020c00 {
+ gpio-ranges = <&pinctrl 0 48 16>;
+ };
+
+ gpioe: gpio@40021000 {
+ gpio-ranges = <&pinctrl 0 64 16>;
+ };
+
+ gpiof: gpio@40021400 {
+ gpio-ranges = <&pinctrl 0 80 16>;
+ };
+
+ gpiog: gpio@40021800 {
+ gpio-ranges = <&pinctrl 0 96 16>;
+ };
+
+ gpioh: gpio@40021c00 {
+ gpio-ranges = <&pinctrl 0 112 16>;
+ };
+
+ gpioi: gpio@40022000 {
+ gpio-ranges = <&pinctrl 0 128 16>;
+ };
+
+ gpioj: gpio@40022400 {
+ gpio-ranges = <&pinctrl 0 144 6>,
+ <&pinctrl 12 156 4>;
+ };
+
+ gpiok: gpio@40022800 {
+ gpio-ranges = <&pinctrl 3 163 5>;
+ };
+ };
+ };
+};
CONFIG_BLK_DEV_SD=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_ATA=y
+CONFIG_PATA_FTIDE010=y
CONFIG_INPUT_EVDEV=y
CONFIG_KEYBOARD_GPIO=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_HEARTBEAT=y
CONFIG_RTC_CLASS=y
-CONFIG_RTC_DRV_GEMINI=y
CONFIG_DMADEVICES=y
+CONFIG_AMBA_PL08X=y
# CONFIG_DNOTIFY is not set
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_LCD_TOSA=m
CONFIG_BACKLIGHT_PWM=m
CONFIG_BACKLIGHT_TOSA=m
-CONFIG_FRAMEBUFFER_CONSOLE=m
+CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_BACKLIGHT_PWM=m
# CONFIG_VGA_CONSOLE is not set
-CONFIG_FRAMEBUFFER_CONSOLE=m
+CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_SND=m
CONFIG_FB_PXA_PARAMETERS=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_VGA_CONSOLE is not set
-CONFIG_FRAMEBUFFER_CONSOLE=m
+CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_SND=m
#define TIF_NEED_RESCHED 1 /* rescheduling necessary */
#define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
#define TIF_UPROBE 3 /* breakpointed or singlestepping */
-#define TIF_FSCHECK 4 /* Check FS is USER_DS on return */
-#define TIF_SYSCALL_TRACE 5 /* syscall trace active */
-#define TIF_SYSCALL_AUDIT 6 /* syscall auditing active */
-#define TIF_SYSCALL_TRACEPOINT 7 /* syscall tracepoint instrumentation */
-#define TIF_SECCOMP 8 /* seccomp syscall filtering active */
+#define TIF_SYSCALL_TRACE 4 /* syscall trace active */
+#define TIF_SYSCALL_AUDIT 5 /* syscall auditing active */
+#define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
+#define TIF_SECCOMP 7 /* seccomp syscall filtering active */
#define TIF_NOHZ 12 /* in adaptive nohz mode */
#define TIF_USING_IWMMXT 17
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_UPROBE (1 << TIF_UPROBE)
-#define _TIF_FSCHECK (1 << TIF_FSCHECK)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
/*
* Change these and you break ASM code in entry-common.S
*/
-#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
- _TIF_FSCHECK)
+#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+ _TIF_NOTIFY_RESUME | _TIF_UPROBE)
#endif /* __KERNEL__ */
#endif /* __ASM_ARM_THREAD_INFO_H */
{
current_thread_info()->addr_limit = fs;
modify_domain(DOMAIN_KERNEL, fs ? DOMAIN_CLIENT : DOMAIN_MANAGER);
- /* On user-mode return, check fs is correct */
- set_thread_flag(TIF_FSCHECK);
}
#define segment_eq(a, b) ((a) == (b))
#include <asm/unistd.h>
#include <asm/ftrace.h>
#include <asm/unwind.h>
+#include <asm/memory.h>
#ifdef CONFIG_AEABI
#include <asm/unistd-oabi.h>
#endif
UNWIND(.fnstart )
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
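+	@ sanity-check that addr_limit was restored to TASK_SIZE before
+	@ returning to user; bail to addr_limit_check_failed() if not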
+ ldr r2, [tsk, #TI_ADDR_LIMIT]
+ cmp r2, #TASK_SIZE
+ blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK
- bne fast_work_pending
- tst r1, #_TIF_WORK_MASK
+ tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
bne fast_work_pending
+
/* perform architecture specific actions before user return */
arch_ret_to_user r1, lr
UNWIND(.cantunwind )
str r0, [sp, #S_R0 + S_OFF]! @ save returned r0
disable_irq_notrace @ disable interrupts
+ ldr r2, [tsk, #TI_ADDR_LIMIT]
+ cmp r2, #TASK_SIZE
+ blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK
- bne fast_work_pending
- tst r1, #_TIF_WORK_MASK
+ tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
beq no_work_pending
UNWIND(.fnend )
ENDPROC(ret_fast_syscall)
/* Slower path - fall through to work_pending */
-fast_work_pending:
#endif
tst r1, #_TIF_SYSCALL_WORK
ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
+ ldr r2, [tsk, #TI_ADDR_LIMIT]
+ cmp r2, #TASK_SIZE
+ blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
bne slow_work_pending
* Update the trace code with the current status.
*/
trace_hardirqs_off();
-
- /* Check valid user FS if needed */
- addr_limit_user_check();
-
do {
if (likely(thread_flags & _TIF_NEED_RESCHED)) {
schedule();
return page;
}
+
+/* Defer to generic check */
+asmlinkage void addr_limit_check_failed(void)
+{
+ addr_limit_user_check();
+}
}
pm_bu->suspended = 0;
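+	/*
+	 * Use __pa_symbol() here: virt_to_phys() is only valid for
+	 * linear-map addresses, not for kernel symbols such as cpu_resume.
+	 */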
- pm_bu->canary = virt_to_phys(&canary);
- pm_bu->resume = virt_to_phys(cpu_resume);
+ pm_bu->canary = __pa_symbol(&canary);
+ pm_bu->resume = __pa_symbol(cpu_resume);
return;
{ .compatible = "mediatek,mt8135", .data = &mtk_mt8135_tz_boot },
{ .compatible = "mediatek,mt8127", .data = &mtk_mt8135_tz_boot },
{ .compatible = "mediatek,mt2701", .data = &mtk_mt8135_tz_boot },
+ {},
};
static const struct of_device_id mtk_smp_boot_infos[] __initconst = {
{ .compatible = "mediatek,mt6589", .data = &mtk_mt6589_boot },
{ .compatible = "mediatek,mt7623", .data = &mtk_mt7623_boot },
{ .compatible = "mediatek,mt7623a", .data = &mtk_mt7623_boot },
+ {},
};
static void __iomem *mtk_smp_base;
struct platform_device *pdev;
int res;
- if (omap_hsmmc_done != 1)
+ if (omap_hsmmc_done)
return;
- omap_hsmmc_done++;
+ omap_hsmmc_done = 1;
for (; c->mmc; c++) {
pdev = c->pdev;
.name = "gpio1",
.class = &dra7xx_gpio_hwmod_class,
.clkdm_name = "wkupaon_clkdm",
+ .flags = HWMOD_CONTROL_OPT_CLKS_IN_RESET,
.main_clk = "wkupaon_iclk_mux",
.prcm = {
.omap4 = {
This enables support for Hisilicon ARMv8 SoC family
config ARCH_MEDIATEK
- bool "Mediatek MT65xx & MT81xx ARMv8 SoC"
+ bool "MediaTek SoC Family"
select ARM_GIC
select PINCTRL
select MTK_TIMER
help
- Support for Mediatek MT65xx & MT81xx ARMv8 SoCs
+ This enables support for MediaTek MT27xx, MT65xx, MT76xx
+ & MT81xx ARMv8 SoCs
config ARCH_MESON
bool "Amlogic Platforms"
KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads)
KBUILD_AFLAGS += $(lseinstr) $(brokengasinst)
+KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
+KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)
+
ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__
AS += -EB
LD += -EB
+LDFLAGS += -maarch64linuxb
UTS_MACHINE := aarch64_be
else
KBUILD_CPPFLAGS += -mlittle-endian
CHECKFLAGS += -D__AARCH64EL__
AS += -EL
LD += -EL
+LDFLAGS += -maarch64linux
UTS_MACHINE := aarch64
endif
&sd_emmc_a {
status = "okay";
pinctrl-0 = <&sdio_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdio_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
#address-cells = <1>;
#size-cells = <0>;
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
states = <3300000 0>,
<1800000 1>;
+
+ regulator-settling-time-up-us = <100>;
+ regulator-settling-time-down-us = <5000>;
};
wifi_32k: wifi-32k {
&sd_emmc_a {
status = "okay";
pinctrl-0 = <&sdio_pins>, <&sdio_irq_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdio_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
#address-cells = <1>;
#size-cells = <0>;
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
- max-frequency = <100000000>;
+ sd-uhs-sdr12;
+ sd-uhs-sdr25;
+ sd-uhs-sdr50;
+ sd-uhs-sdr104;
+ max-frequency = <200000000>;
disable-wp;
cd-gpios = <&gpio CARD_6 GPIO_ACTIVE_HIGH>;
&sd_emmc_c {
status = "disabled";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
max-frequency = <200000000>;
non-removable;
disable-wp;
/ {
compatible = "nexbox,a95x", "amlogic,meson-gxbb";
model = "NEXBOX A95X";
-
+
aliases {
serial0 = &uart_AO;
};
&sd_emmc_a {
status = "okay";
pinctrl-0 = <&sdio_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdio_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
#address-cells = <1>;
#size-cells = <0>;
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
/ {
compatible = "hardkernel,odroid-c2", "amlogic,meson-gxbb";
model = "Hardkernel ODROID-C2";
-
+
aliases {
serial0 = &uart_AO;
};
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
max-frequency = <200000000>;
non-removable;
disable-wp;
&sd_emmc_a {
status = "okay";
pinctrl-0 = <&sdio_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdio_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
#address-cells = <1>;
#size-cells = <0>;
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
+ sd-uhs-sdr12;
+ sd-uhs-sdr25;
+ sd-uhs-sdr50;
max-frequency = <100000000>;
disable-wp;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
&sd_emmc_a {
status = "okay";
pinctrl-0 = <&sdio_pins &sdio_irq_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdio_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
#address-cells = <1>;
#size-cells = <0>;
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
};
};
+ emmc_clk_gate_pins: emmc_clk_gate {
+ mux {
+ groups = "BOOT_8";
+ function = "gpio_periphs";
+ };
+ cfg-pull-down {
+ pins = "BOOT_8";
+ bias-pull-down;
+ };
+ };
+
nor_pins: nor {
mux {
groups = "nor_d",
};
};
+ sdcard_clk_gate_pins: sdcard_clk_gate {
+ mux {
+ groups = "CARD_2";
+ function = "gpio_periphs";
+ };
+ cfg-pull-down {
+ pins = "CARD_2";
+ bias-pull-down;
+ };
+ };
+
sdio_pins: sdio {
mux {
groups = "sdio_d0",
};
};
+ sdio_clk_gate_pins: sdio_clk_gate {
+ mux {
+ groups = "GPIOX_4";
+ function = "gpio_periphs";
+ };
+ cfg-pull-down {
+ pins = "GPIOX_4";
+ bias-pull-down;
+ };
+ };
+
sdio_irq_pins: sdio_irq {
mux {
groups = "sdio_irq";
&sd_emmc_a {
clocks = <&clkc CLKID_SD_EMMC_A>,
- <&xtal>,
+ <&clkc CLKID_SD_EMMC_A_CLK0>,
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
};
&sd_emmc_b {
clocks = <&clkc CLKID_SD_EMMC_B>,
- <&xtal>,
+ <&clkc CLKID_SD_EMMC_B_CLK0>,
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
};
&sd_emmc_c {
clocks = <&clkc CLKID_SD_EMMC_C>,
- <&xtal>,
+ <&clkc CLKID_SD_EMMC_C_CLK0>,
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
};
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <100000000>;
non-removable;
states = <3300000 0>,
<1800000 1>;
+
+ regulator-settling-time-up-us = <200>;
+ regulator-settling-time-down-us = <50000>;
};
vddio_boot: regulator-vddio_boot {
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
+ sd-uhs-sdr12;
+ sd-uhs-sdr25;
+ sd-uhs-sdr50;
max-frequency = <100000000>;
disable-wp;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
cap-mmc-highspeed;
+ mmc-ddr-3_3v;
max-frequency = <50000000>;
non-removable;
disable-wp;
&sd_emmc_a {
status = "okay";
pinctrl-0 = <&sdio_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdio_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
#address-cells = <1>;
#size-cells = <0>;
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
&sd_emmc_a {
status = "okay";
pinctrl-0 = <&sdio_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdio_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
#address-cells = <1>;
#size-cells = <0>;
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
};
};
+ emmc_clk_gate_pins: emmc_clk_gate {
+ mux {
+ groups = "BOOT_8";
+ function = "gpio_periphs";
+ };
+ cfg-pull-down {
+ pins = "BOOT_8";
+ bias-pull-down;
+ };
+ };
+
nor_pins: nor {
mux {
groups = "nor_d",
};
};
+ sdcard_clk_gate_pins: sdcard_clk_gate {
+ mux {
+ groups = "CARD_2";
+ function = "gpio_periphs";
+ };
+ cfg-pull-down {
+ pins = "CARD_2";
+ bias-pull-down;
+ };
+ };
+
sdio_pins: sdio {
mux {
groups = "sdio_d0",
};
};
+ sdio_clk_gate_pins: sdio_clk_gate {
+ mux {
+ groups = "GPIOX_4";
+ function = "gpio_periphs";
+ };
+ cfg-pull-down {
+ pins = "GPIOX_4";
+ bias-pull-down;
+ };
+ };
+
sdio_irq_pins: sdio_irq {
mux {
groups = "sdio_irq";
&sd_emmc_a {
clocks = <&clkc CLKID_SD_EMMC_A>,
- <&xtal>,
+ <&clkc CLKID_SD_EMMC_A_CLK0>,
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
};
&sd_emmc_b {
clocks = <&clkc CLKID_SD_EMMC_B>,
- <&xtal>,
+ <&clkc CLKID_SD_EMMC_B_CLK0>,
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
};
&sd_emmc_c {
clocks = <&clkc CLKID_SD_EMMC_C>,
- <&xtal>,
+ <&clkc CLKID_SD_EMMC_C_CLK0>,
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
};
&sd_emmc_b {
status = "okay";
pinctrl-0 = <&sdcard_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdcard_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <4>;
cap-sd-highspeed;
&sd_emmc_c {
status = "okay";
pinctrl-0 = <&emmc_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&emmc_clk_gate_pins>;
+ pinctrl-names = "default", "clk-gate";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
pinctrl-names = "default";
bus-width = <8>;
- cap-sd-highspeed;
cap-mmc-highspeed;
max-frequency = <200000000>;
non-removable;
ap_syscon: system-controller@6f4000 {
compatible = "syscon", "simple-mfd";
- reg = <0x6f4000 0x1000>;
+ reg = <0x6f4000 0x2000>;
ap_clk: clock {
compatible = "marvell,ap806-clock";
compatible = "marvell,ap806-pinctrl";
};
- ap_gpio: gpio {
+ ap_gpio: gpio@1040 {
compatible = "marvell,armada-8k-gpio";
offset = <0x1040>;
ngpios = <20>;
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x0>;
enable-method = "psci";
- clocks = <&cru ARMCLKL>;
- operating-points-v2 = <&cluster0_opp>;
+
#cooling-cells = <2>; /* min followed by max */
};
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x1>;
enable-method = "psci";
- clocks = <&cru ARMCLKL>;
- operating-points-v2 = <&cluster0_opp>;
};
cpu_l2: cpu@2 {
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x2>;
enable-method = "psci";
- clocks = <&cru ARMCLKL>;
- operating-points-v2 = <&cluster0_opp>;
};
cpu_l3: cpu@3 {
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x3>;
enable-method = "psci";
- clocks = <&cru ARMCLKL>;
- operating-points-v2 = <&cluster0_opp>;
};
cpu_b0: cpu@100 {
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x100>;
enable-method = "psci";
- clocks = <&cru ARMCLKB>;
- operating-points-v2 = <&cluster1_opp>;
+
#cooling-cells = <2>; /* min followed by max */
};
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x101>;
enable-method = "psci";
- clocks = <&cru ARMCLKB>;
- operating-points-v2 = <&cluster1_opp>;
};
cpu_b2: cpu@102 {
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x102>;
enable-method = "psci";
- clocks = <&cru ARMCLKB>;
- operating-points-v2 = <&cluster1_opp>;
};
cpu_b3: cpu@103 {
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0 0x103>;
enable-method = "psci";
- clocks = <&cru ARMCLKB>;
- operating-points-v2 = <&cluster1_opp>;
- };
- };
-
- cluster0_opp: opp-table0 {
- compatible = "operating-points-v2";
- opp-shared;
-
- opp00 {
- opp-hz = /bits/ 64 <312000000>;
- opp-microvolt = <950000>;
- clock-latency-ns = <40000>;
- };
- opp01 {
- opp-hz = /bits/ 64 <408000000>;
- opp-microvolt = <950000>;
- };
- opp02 {
- opp-hz = /bits/ 64 <600000000>;
- opp-microvolt = <950000>;
- };
- opp03 {
- opp-hz = /bits/ 64 <816000000>;
- opp-microvolt = <1025000>;
- };
- opp04 {
- opp-hz = /bits/ 64 <1008000000>;
- opp-microvolt = <1125000>;
- };
- };
-
- cluster1_opp: opp-table1 {
- compatible = "operating-points-v2";
- opp-shared;
-
- opp00 {
- opp-hz = /bits/ 64 <312000000>;
- opp-microvolt = <950000>;
- clock-latency-ns = <40000>;
- };
- opp01 {
- opp-hz = /bits/ 64 <408000000>;
- opp-microvolt = <950000>;
- };
- opp02 {
- opp-hz = /bits/ 64 <600000000>;
- opp-microvolt = <950000>;
- };
- opp03 {
- opp-hz = /bits/ 64 <816000000>;
- opp-microvolt = <975000>;
- };
- opp04 {
- opp-hz = /bits/ 64 <1008000000>;
- opp-microvolt = <1050000>;
};
};
compatible = "rockchip,rk3399-mipi-dsi", "snps,dw-mipi-dsi";
reg = <0x0 0xff960000 0x0 0x8000>;
interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH 0>;
- clocks = <&cru SCLK_MIPIDPHY_REF>, <&cru PCLK_MIPI_DSI0>,
- <&cru SCLK_DPHY_TX0_CFG>;
- clock-names = "ref", "pclk", "phy_cfg";
+ clocks = <&cru SCLK_DPHY_PLL>, <&cru PCLK_MIPI_DSI0>,
+ <&cru SCLK_DPHY_TX0_CFG>, <&cru PCLK_VIO_GRF>;
+ clock-names = "ref", "pclk", "phy_cfg", "grf";
power-domains = <&power RK3399_PD_VIO>;
rockchip,grf = <&grf>;
status = "disabled";
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H
-#define __ALIGN .align 4
-#define __ALIGN_STR ".align 4"
+#define __ALIGN .align 2
+#define __ALIGN_STR ".align 2"
#endif
#define KERNEL_END _end
/*
- * The size of the KASAN shadow region. This should be 1/8th of the
- * size of the entire kernel virtual address space.
+ * KASAN requires 1/8th of the kernel virtual address space for the shadow
+ * region. KASAN can bloat the stack significantly, so double the (minimum)
+ * stack size when KASAN is in use.
*/
#ifdef CONFIG_KASAN
#define KASAN_SHADOW_SIZE (UL(1) << (VA_BITS - 3))
+#define KASAN_THREAD_SHIFT 1
#else
#define KASAN_SHADOW_SIZE (0)
+#define KASAN_THREAD_SHIFT 0
#endif
-#define MIN_THREAD_SHIFT 14
+#define MIN_THREAD_SHIFT (14 + KASAN_THREAD_SHIFT)
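+/* i.e. 16K kernel stacks normally, doubled to 32K when KASAN is enabled */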
/*
* VMAP'd stacks are allocated at page granularity, so we must ensure that such
/* Find an entry in the third-level page table. */
#define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-#define pte_offset_phys(dir,addr) (pmd_page_paddr(*(dir)) + pte_index(addr) * sizeof(pte_t))
+#define pte_offset_phys(dir,addr) (pmd_page_paddr(READ_ONCE(*(dir))) + pte_index(addr) * sizeof(pte_t))
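+/* READ_ONCE() reads the pmd entry exactly once, guarding the walk against concurrent page-table updates */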
#define pte_offset_kernel(dir,addr) ((pte_t *)__va(pte_offset_phys((dir), (addr))))
#define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr))
return 0;
}
-late_initcall(armv8_deprecated_init);
+core_initcall(armv8_deprecated_init);
return 0;
}
-late_initcall(enable_mrs_emulation);
+core_initcall(enable_mrs_emulation);
}
EXPORT_SYMBOL(kernel_neon_end);
+#ifdef CONFIG_EFI
+
static DEFINE_PER_CPU(struct fpsimd_state, efi_fpsimd_state);
static DEFINE_PER_CPU(bool, efi_fpsimd_state_used);
kernel_neon_end();
}
+#endif /* CONFIG_EFI */
+
#endif /* CONFIG_KERNEL_MODE_NEON */
#ifdef CONFIG_CPU_PM
return 0;
}
-late_initcall(fpsimd_init);
+core_initcall(fpsimd_init);
* booted in EL1 or EL2 respectively.
*/
ENTRY(el2_setup)
+ msr SPsel, #1 // We want to use SP_EL{1,2}
mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2
b.eq 1f
*/
trace_hardirqs_off();
- /* Check valid user FS if needed */
- addr_limit_user_check();
-
do {
+ /* Check valid user FS if needed */
+ addr_limit_user_check();
+
if (thread_flags & _TIF_NEED_RESCHED) {
schedule();
} else {
(esr & ESR_ELx_SF) >> ESR_ELx_SF_SHIFT,
(esr & ESR_ELx_AR) >> ESR_ELx_AR_SHIFT);
} else {
- pr_alert(" ISV = 0, ISS = 0x%08lu\n", esr & ESR_ELx_ISS_MASK);
+ pr_alert(" ISV = 0, ISS = 0x%08lx\n", esr & ESR_ELx_ISS_MASK);
}
pr_alert(" CM = %lu, WnR = %lu\n",
{ do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 0 translation fault" },
{ do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 1 translation fault" },
{ do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 2 translation fault" },
- { do_page_fault, SIGSEGV, SEGV_MAPERR, "level 3 translation fault" },
+ { do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 3 translation fault" },
{ do_bad, SIGBUS, 0, "unknown 8" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 1 access flag fault" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 2 access flag fault" },
{
}
-#define copy_segments(tsk, mm) do { } while (0)
-#define release_segments(mm) do { } while (0)
-
/*
* saved kernel SP and DP of a blocked thread.
*/
extern asmlinkage void save_user_regs(struct user_context *target);
extern asmlinkage void *restore_user_regs(const struct user_context *target, ...);
-#define copy_segments(tsk, mm) do { } while (0)
-#define release_segments(mm) do { } while (0)
-#define forget_segments() do { } while (0)
-
unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(tsk) ((tsk)->thread.frame0->pc)
int "Timer divider (integer)"
default "128"
+config CPU_BIG_ENDIAN
+ bool "Generate big endian code"
+ default n
+
config CPU_LITTLE_ENDIAN
bool "Generate little endian code"
default n
/* Free all resources held by a thread. */
extern void release_thread(struct task_struct *);
-/* Copy and release all segment info associated with a VM */
-extern void copy_segments(struct task_struct *p, struct mm_struct * mm);
-extern void release_segments(struct mm_struct * mm);
-
-/* Copy and release all segment info associated with a VM */
-#define copy_segments(p, mm) do { } while (0)
-#define release_segments(mm) do { } while (0)
-
unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(tsk) ((tsk)->thread.lr)
#define KSTK_ESP(tsk) ((tsk)->thread.sp)
_flush_cache_copyback_all();
}
+void abort(void)
+{
+ BUG();
+
+ /* if that doesn't kill us, halt */
+ panic("Oops failed to kill thread");
+}
+EXPORT_SYMBOL(abort);
+
void __init trap_init(void)
{
set_eit_vector_entries();
{
}
-#define copy_segments(tsk, mm) do { } while (0)
-#define release_segments(mm) do { } while (0)
-
/*
* Return saved PC of a blocked thread.
*/
# Endianness selection
choice
prompt "Endianness selection"
- default CPU_BIG_ENDIAN
+ default CPU_LITTLE_ENDIAN
help
microblaze architectures can be configured for either little or
big endian formats. Be sure to select the appropriate mode.
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += ipcbuf.h
+generic-y += kvm_para.h
generic-y += mman.h
generic-y += msgbuf.h
generic-y += param.h
unsigned long attrs)
{
#ifdef CONFIG_MMU
- unsigned long user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+ unsigned long user_count = vma_pages(vma);
unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
unsigned long off = vma->vm_pgoff;
unsigned long pfn;
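
For reference, vma_pages() is a long-standing helper from <linux/mm.h>; the hunk above only replaces the open-coded form with it. Its mainline definition is simply:

static inline unsigned long vma_pages(struct vm_area_struct *vma)
{
	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
}
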
#include "pci.h"
static int (*ath79_pci_plat_dev_init)(struct pci_dev *dev);
-static const struct ath79_pci_irq *ath79_pci_irq_map __initdata;
-static unsigned ath79_pci_nr_irqs __initdata;
+static const struct ath79_pci_irq *ath79_pci_irq_map;
+static unsigned ath79_pci_nr_irqs;
-static const struct ath79_pci_irq ar71xx_pci_irq_map[] __initconst = {
+static const struct ath79_pci_irq ar71xx_pci_irq_map[] = {
{
.slot = 17,
.pin = 1,
}
};
-static const struct ath79_pci_irq ar724x_pci_irq_map[] __initconst = {
+static const struct ath79_pci_irq ar724x_pci_irq_map[] = {
{
.slot = 0,
.pin = 1,
}
};
-static const struct ath79_pci_irq qca955x_pci_irq_map[] __initconst = {
+static const struct ath79_pci_irq qca955x_pci_irq_map[] = {
{
.bus = 0,
.slot = 0,
},
};
-int __init pcibios_map_irq(const struct pci_dev *dev, uint8_t slot, uint8_t pin)
+int pcibios_map_irq(const struct pci_dev *dev, uint8_t slot, uint8_t pin)
{
int irq = -1;
int i;
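
This hunk and the long run of MIPS PCI hunks below all cure one class of bug: pcibios_map_irq() is now also called after boot (for example when a device is rescanned or hotplugged), so neither the function nor the tables it dereferences may live in init sections, which are freed once boot completes. A minimal sketch of the broken shape, with a hypothetical table:

static char irq_tab[][5] __initdata = { /* ... */ };	/* freed after boot */

/* May run at any time once PCI is up; after init memory is released
 * this dereferences recycled pages.  Dropping __init/__initdata, as
 * the hunks here do, is the fix. */
int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
	return irq_tab[slot][pin];
}
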
#define __write_64bit_c0_split(source, sel, val) \
do { \
+ unsigned long long __tmp; \
unsigned long __flags; \
\
local_irq_save(__flags); \
if (sel == 0) \
__asm__ __volatile__( \
".set\tmips64\n\t" \
- "dsll\t%L0, %L0, 32\n\t" \
+ "dsll\t%L0, %L1, 32\n\t" \
"dsrl\t%L0, %L0, 32\n\t" \
- "dsll\t%M0, %M0, 32\n\t" \
+ "dsll\t%M0, %M1, 32\n\t" \
"or\t%L0, %L0, %M0\n\t" \
"dmtc0\t%L0, " #source "\n\t" \
".set\tmips0" \
- : : "r" (val)); \
+ : "=&r,r" (__tmp) \
+ : "r,0" (val)); \
else \
__asm__ __volatile__( \
".set\tmips64\n\t" \
- "dsll\t%L0, %L0, 32\n\t" \
+ "dsll\t%L0, %L1, 32\n\t" \
"dsrl\t%L0, %L0, 32\n\t" \
- "dsll\t%M0, %M0, 32\n\t" \
+ "dsll\t%M0, %M1, 32\n\t" \
"or\t%L0, %L0, %M0\n\t" \
"dmtc0\t%L0, " #source ", " #sel "\n\t" \
".set\tmips0" \
- : : "r" (val)); \
+ : "=&r,r" (__tmp) \
+ : "r,0" (val)); \
local_irq_restore(__flags); \
} while (0)
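
The old macro passed val as an input-only operand yet shifted %L0/%M0 (the halves of that very operand) in place, so the caller's variable was silently corrupted; GCC assumes a plain "r" input is left untouched. The fix routes the scratch work through __tmp, declared as an earlyclobber output ("=&r,r"), with the second constraint alternative tying the input to it ("r,0"). A minimal standalone sketch of the same earlyclobber idea, using a made-up computation:

static inline long sketch_times_three(long val)
{
	long tmp;

	/* tmp is written before the last read of val, so "=&r"
	 * (earlyclobber) is needed to keep them in distinct registers;
	 * modifying val itself, as the old macro did, is not allowed
	 * for an input operand. */
	asm("sll\t%0, %1, 1\n\t"
	    "addu\t%0, %0, %1"
	    : "=&r" (tmp)
	    : "r" (val));
	return tmp;
}
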
return -ENOENT;
}
- if ((unsigned int)event->cpu >= nr_cpumask_bits ||
- (event->cpu >= 0 && !cpu_online(event->cpu)))
+ if (event->cpu >= 0 && !cpu_online(event->cpu))
return -ENODEV;
if (!atomic_inc_not_zero(&active_events)) {
#define INTC PC104PLUS_INTC_IRQ
#define INTD PC104PLUS_INTD_IRQ
-static char irq_tab_capcella[][5] __initdata = {
+static char irq_tab_capcella[][5] = {
[11] = { -1, INT1, INT1, INT1, INT1 },
[12] = { -1, INT2, INT2, INT2, INT2 },
[14] = { -1, INTA, INTB, INTC, INTD }
};
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
return irq_tab_capcella[slot][pin];
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_0,
qube_raq_via_board_id_fixup);
-static char irq_tab_qube1[] __initdata = {
+static char irq_tab_qube1[] = {
[COBALT_PCICONF_CPU] = 0,
[COBALT_PCICONF_ETH0] = QUBE1_ETH0_IRQ,
[COBALT_PCICONF_RAQSCSI] = SCSI_IRQ,
[COBALT_PCICONF_ETH1] = 0
};
-static char irq_tab_cobalt[] __initdata = {
+static char irq_tab_cobalt[] = {
[COBALT_PCICONF_CPU] = 0,
[COBALT_PCICONF_ETH0] = ETH0_IRQ,
[COBALT_PCICONF_RAQSCSI] = SCSI_IRQ,
[COBALT_PCICONF_ETH1] = ETH1_IRQ
};
-static char irq_tab_raq2[] __initdata = {
+static char irq_tab_raq2[] = {
[COBALT_PCICONF_CPU] = 0,
[COBALT_PCICONF_ETH0] = ETH0_IRQ,
[COBALT_PCICONF_RAQSCSI] = RAQ2_SCSI_IRQ,
[COBALT_PCICONF_ETH1] = ETH1_IRQ
};
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
if (cobalt_board_id <= COBALT_BRD_ID_QUBE1)
return irq_tab_qube1[slot];
*/
#define MAX_SLOT_NUM 10
-static unsigned char irq_map[][5] __initdata = {
+static unsigned char irq_map[][5] = {
[3] = {0, MARKEINS_PCI_IRQ_INTB, MARKEINS_PCI_IRQ_INTC,
MARKEINS_PCI_IRQ_INTD, 0,},
[4] = {0, MARKEINS_PCI_IRQ_INTA, 0, 0, 0,},
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_EMMA2RH,
emma2rh_pci_host_fixup);
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
return irq_map[slot][pin];
}
/* South bridge slot number is set by the pci probe process */
static u8 sb_slot = 5;
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
int irq = 0;
#define INTB MACEPCI_SHARED0_IRQ
#define INTC MACEPCI_SHARED1_IRQ
#define INTD MACEPCI_SHARED2_IRQ
-static char irq_tab_mace[][5] __initdata = {
+static char irq_tab_mace[][5] = {
/* Dummy INT#A INT#B INT#C INT#D */
{0, 0, 0, 0, 0}, /* This is placeholder row - never used */
{0, SCSI0, SCSI0, SCSI0, SCSI0},
* irqs. I suppose a device without a pin A will thank us for doing it
* right if there exists such a broken piece of crap.
*/
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
return irq_tab_mace[slot][pin];
}
#include <asm/txx9/pci.h>
#include <asm/txx9/jmr3927.h>
-int __init jmr3927_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int jmr3927_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
unsigned char irq = pin;
return 0;
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
return of_irq_parse_and_map_pci(dev, slot, pin);
}
#define PCID 7
/* all the pci device has the PCIA pin, check the datasheet. */
-static char irq_tab[][5] __initdata = {
+static char irq_tab[][5] = {
/* INTA INTB INTC INTD */
{0, 0, 0, 0, 0}, /* 11: Unused */
{0, 0, 0, 0, 0}, /* 12: Unused */
{0, 0, 0, 0, 0}, /* 27: Unused */
};
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
int virq;
pdev->vendor, pdev->device, pdev->irq);
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
print_fixup_info(dev);
return dev->irq;
static char pci_irq[5] = {
};
-static char irq_tab[][5] __initdata = {
+static char irq_tab[][5] = {
/* INTA INTB INTC INTD */
{0, 0, 0, 0, 0 }, /* 0: GT64120 PCI bridge */
{0, 0, 0, 0, 0 }, /* 1: Unused */
{0, PCID, PCIA, PCIB, PCIC } /* 21: PCI Slot 4 */
};
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
int virq;
virq = irq_tab[slot][pin];
#include <asm/vr41xx/mpc30x.h>
-static const int internal_func_irqs[] __initconst = {
+static const int internal_func_irqs[] = {
VRC4173_CASCADE_IRQ,
VRC4173_AC97_IRQ,
VRC4173_USB_IRQ,
};
-static const int irq_tab_mpc30x[] __initconst = {
+static const int irq_tab_mpc30x[] = {
[12] = VRC4173_PCMCIA1_IRQ,
[13] = VRC4173_PCMCIA2_IRQ,
[29] = MQ200_IRQ,
};
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
if (slot == 30)
return internal_func_irqs[PCI_FUNC(dev->devfn)];
#if defined(CONFIG_PMC_MSP7120_GW)
/* Garibaldi Board IRQ wiring to PCI slots */
-static char irq_tab[][5] __initdata = {
+static char irq_tab[][5] = {
/* INTA INTB INTC INTD */
{0, 0, 0, 0, 0 }, /* (AD[0]): Unused */
{0, 0, 0, 0, 0 }, /* (AD[1]): Unused */
#elif defined(CONFIG_PMC_MSP7120_EVAL)
/* MSP7120 Eval Board IRQ wiring to PCI slots */
-static char irq_tab[][5] __initdata = {
+static char irq_tab[][5] = {
/* INTA INTB INTC INTD */
{0, 0, 0, 0, 0 }, /* (AD[0]): Unused */
{0, 0, 0, 0, 0 }, /* (AD[1]): Unused */
#else
/* Unknown board -- don't assign any IRQs */
-static char irq_tab[][5] __initdata = {
+static char irq_tab[][5] = {
/* INTA INTB INTC INTD */
{0, 0, 0, 0, 0 }, /* (AD[0]): Unused */
{0, 0, 0, 0, 0 }, /* (AD[1]): Unused */
* RETURNS: IRQ number
*
****************************************************************************/
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
#if !defined(CONFIG_PMC_MSP7120_GW) && !defined(CONFIG_PMC_MSP7120_EVAL)
printk(KERN_WARNING "PCI: unknown board, no PCI IRQs assigned.\n");
#include <asm/txx9/pci.h>
#include <asm/txx9/rbtx4927.h>
-int __init rbtx4927_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int rbtx4927_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
unsigned char irq = pin;
#include <asm/txx9/pci.h>
#include <asm/txx9/rbtx4938.h>
-int __init rbtx4938_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int rbtx4938_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
int irq = tx4938_pcic1_map_irq(dev, slot);
* seem to be a documentation error. At least on my RM200C the Cirrus
* Logic CL-GD5434 VGA is device 3.
*/
-static char irq_tab_rm200[8][5] __initdata = {
+static char irq_tab_rm200[8][5] = {
/* INTA INTB INTC INTD */
{ 0, 0, 0, 0, 0 }, /* EISA bridge */
{ SCSI, SCSI, SCSI, SCSI, SCSI }, /* SCSI */
*
* The VGA card is optional for RM300 systems.
*/
-static char irq_tab_rm300d[8][5] __initdata = {
+static char irq_tab_rm300d[8][5] = {
/* INTA INTB INTC INTD */
{ 0, 0, 0, 0, 0 }, /* EISA bridge */
{ SCSI, SCSI, SCSI, SCSI, SCSI }, /* SCSI */
{ 0, INTD, INTA, INTB, INTC }, /* Slot 4 */
};
-static char irq_tab_rm300e[5][5] __initdata = {
+static char irq_tab_rm300e[5][5] = {
/* INTA INTB INTC INTD */
{ 0, 0, 0, 0, 0 }, /* HOST bridge */
{ SCSI, SCSI, SCSI, SCSI, SCSI }, /* SCSI */
#define INTC PCIT_IRQ_INTC
#define INTD PCIT_IRQ_INTD
-static char irq_tab_pcit[13][5] __initdata = {
+static char irq_tab_pcit[13][5] = {
/* INTA INTB INTC INTD */
{ 0, 0, 0, 0, 0 }, /* HOST bridge */
{ SCSI0, SCSI0, SCSI0, SCSI0, SCSI0 }, /* SCSI */
{ 0, INTA, INTB, INTC, INTD }, /* Slot 5 */
};
-static char irq_tab_pcit_cplus[13][5] __initdata = {
+static char irq_tab_pcit_cplus[13][5] = {
/* INTA INTB INTC INTD */
{ 0, 0, 0, 0, 0 }, /* HOST bridge */
{ 0, INTB, INTC, INTD, INTA }, /* PCI Slot 9 */
return (csmsr & 0xa0) == 0x20;
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
switch (sni_brd_type) {
case SNI_BRD_PCI_TOWER_CPLUS:
#include <asm/vr41xx/tb0219.h>
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
int irq = -1;
#include <asm/vr41xx/giu.h>
#include <asm/vr41xx/tb0226.h>
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
int irq = -1;
#include <asm/vr41xx/tb0287.h>
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
unsigned char bus;
int irq = -1;
arch_initcall(alchemy_pci_init);
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
struct alchemy_pci_context *ctx = dev->sysdata;
if (ctx && ctx->board_map_irq)
#include <linux/bcma/bcma.h>
#include <bcm47xx.h>
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
return 0;
}
#define LASAT_IRQ_PCIC (LASAT_IRQ_BASE + 7)
#define LASAT_IRQ_PCID (LASAT_IRQ_BASE + 8)
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
switch (slot) {
case 1:
return 0;
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
u16 cmd;
u32 val;
} s;
};
-int __initconst (*octeon_pcibios_map_irq)(const struct pci_dev *dev,
- u8 slot, u8 pin);
+int (*octeon_pcibios_map_irq)(const struct pci_dev *dev, u8 slot, u8 pin);
enum octeon_dma_bar_type octeon_dma_bar_type = OCTEON_DMA_BAR_TYPE_INVALID;
/**
* as it goes through each bridge.
* Returns Interrupt number for the device
*/
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
if (octeon_pcibios_map_irq)
return octeon_pcibios_map_irq(dev, slot, pin);
spin_unlock_irqrestore(&rt2880_pci_lock, flags);
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
u16 cmd;
int irq = -1;
return err;
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
return of_irq_parse_and_map_pci(dev, slot, pin);
}
return pciclk;
}
-int __init tx4938_pcic1_map_irq(const struct pci_dev *dev, u8 slot)
+int tx4938_pcic1_map_irq(const struct pci_dev *dev, u8 slot)
{
if (get_tx4927_pcicptr(dev->bus->sysdata) == tx4938_pcic1ptr) {
switch (slot) {
((pciclk + 50000) / 100000) % 10);
}
-int __init tx4939_pcic1_map_irq(const struct pci_dev *dev, u8 slot)
+int tx4939_pcic1_map_irq(const struct pci_dev *dev, u8 slot)
{
if (get_tx4927_pcicptr(dev->bus->sysdata) == tx4939_pcic1ptr) {
switch (slot) {
return -1;
}
-int __init tx4939_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int tx4939_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
int irq = tx4939_pcic1_map_irq(dev, slot);
return PCI_SLOT(lnkdev->devfn) / 8;
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
struct pci_dev *lnkdev;
int lnkfunc, node;
}
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
return get_irq_vector(dev);
}
* as it goes through each bridge.
* Returns Interrupt number for the device
*/
-int __init octeon_pcie_pcibios_map_irq(const struct pci_dev *dev,
- u8 slot, u8 pin)
+int octeon_pcie_pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
/*
* The EBH5600 board with the PCI to PCIe bridge mistakenly
#include <linux/smp.h>
#include <linux/interrupt.h>
+#include <asm/setup.h>
+
#ifdef CONFIG_MIPS_MT_SMP
#define MIPS_CPU_IPI_RESCHED_IRQ 0 /* SW int 0 for resched */
#define MIPS_CPU_IPI_CALL_IRQ 1 /* SW int 1 for call */
return 0;
}
-int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+static int (*txx9_pci_map_irq)(const struct pci_dev *dev, u8 slot, u8 pin);
+int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
- return txx9_board_vec->pci_map_irq(dev, slot, pin);
+ return txx9_pci_map_irq(dev, slot, pin);
}
char * (*txx9_board_pcibios_setup)(char *str) __initdata;
txx9_pci_err_action = TXX9_PCI_ERR_IGNORE;
return NULL;
}
+
+ txx9_pci_map_irq = txx9_board_vec->pci_map_irq;
+
return str;
}
}
#endif
-void release_segments(struct mm_struct *mm)
-{
-}
-
void machine_restart(char *cmd)
{
#ifdef CONFIG_KERNEL_DEBUGGER
{
}
-/*
- * we do not have to muck with descriptors here, that is
- * done in switch_mm() as needed.
- */
-void copy_segments(struct task_struct *p, struct mm_struct *new_mm)
-{
-}
-
/*
* this gets called so that we can store lazy state into memory and copy the
* current task into the new thread.
endchoice
+config PARISC_SELF_EXTRACT
+ bool "Build kernel as self-extracting executable"
+ default y
+ help
+ Say Y if you want to build the parisc kernel as a kind of
+ self-extracting executable.
+
+ If you say N here, the kernel will be compressed with gzip
+ which can be loaded by the palo bootloader directly too.
+
+ If you don't know what to do here, say Y.
+
config SMP
bool "Symmetric multi-processing support"
---help---
bzImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
+ifdef CONFIG_PARISC_SELF_EXTRACT
vmlinuz: bzImage
$(OBJCOPY) $(boot)/bzImage $@
+else
+vmlinuz: vmlinux
+ @gzip -cf -9 $< > $@
+endif
install:
$(CONFIG_SHELL) $(src)/arch/parisc/install.sh \
KBUILD_CFLAGS := -D__KERNEL__ -O2 -DBOOTLOADER
KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks
-KBUILD_CFLAGS += -fno-PIE -mno-space-regs -mdisable-fpregs
+KBUILD_CFLAGS += -fno-PIE -mno-space-regs -mdisable-fpregs -Os
ifndef CONFIG_64BIT
KBUILD_CFLAGS += -mfast-indirect-calls
endif
/* Symbols defined by linker scripts */
extern char input_data[];
extern int input_len;
-extern __le32 output_len; /* at unaligned address, little-endian */
+/* output_len is inserted by the linker possibly at an unaligned address */
+extern __le32 output_len __aligned(1);
extern char _text, _end;
extern char _bss, _ebss;
extern char _startcode_end;
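
__aligned(1) tells the compiler that the linker may have dropped this symbol at any byte address, so it must not emit naturally aligned word loads for it. On kernels of this era the same effect can be had explicitly through the unaligned accessors; a minimal sketch (accessor real, wrapper hypothetical):

#include <asm/unaligned.h>

/* Read a little-endian 32-bit value that may be misaligned. */
static unsigned int read_len(const void *p)
{
	return get_unaligned_le32(p);
}
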
/* wrapper-functions from pdc.c */
int pdc_add_valid(unsigned long address);
+int pdc_instr(unsigned int *instr);
int pdc_chassis_info(struct pdc_chassis_info *chassis_info, void *led_info, unsigned long len);
int pdc_chassis_disp(unsigned long disp);
int pdc_chassis_warn(unsigned long *warn);
#ifndef __ASM_SMP_H
#define __ASM_SMP_H
+extern int init_per_cpu(int cpuid);
#if defined(CONFIG_SMP)
}
EXPORT_SYMBOL(pdc_add_valid);
+/**
+ * pdc_instr - Get instruction that invokes PDCE_CHECK in HPMC handler.
+ * @instr: Pointer to variable which will get instruction opcode.
+ *
+ * The return value is PDC_OK (0) in case call succeeded.
+ */
+int __init pdc_instr(unsigned int *instr)
+{
+ int retval;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pdc_lock, flags);
+ retval = mem_pdc_call(PDC_INSTR, 0UL, __pa(pdc_result));
+ convert_to_wide(pdc_result);
+ *instr = pdc_result[0];
+ spin_unlock_irqrestore(&pdc_lock, flags);
+
+ return retval;
+}
+
/**
* pdc_chassis_info - Return chassis information.
* @result: The return buffer.
#include <linux/memblock.h>
#include <linux/seq_file.h>
#include <linux/kthread.h>
+#include <linux/initrd.h>
#include <asm/pdc.h>
#include <asm/pdcpat.h>
}
for (i = 0; i < pdt_status.pdt_entries; i++) {
+ unsigned long addr;
+
report_mem_err(pdt_entry[i]);
+ addr = pdt_entry[i] & PDT_ADDR_PHYS_MASK;
+ if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) &&
+ addr >= initrd_start && addr < initrd_end)
+ pr_crit("CRITICAL: initrd possibly broken "
+ "due to bad memory!\n");
+
/* mark memory page bad */
memblock_reserve(pdt_entry[i] & PAGE_MASK, PAGE_SIZE);
}
/* prevent soft lockup/stalled CPU messages for endless loop. */
rcu_sysrq_start();
- lockup_detector_suspend();
+ lockup_detector_soft_poweroff();
for (;;);
}
*
* o Enable CPU profiling hooks.
*/
-int init_per_cpu(int cpunum)
+int __init init_per_cpu(int cpunum)
{
int ret;
struct pdc_coproc_cfg coproc_cfg;
#include <linux/export.h>
#include <linux/sched.h>
#include <linux/sched/clock.h>
+#include <linux/start_kernel.h>
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/io.h>
#include <asm/setup.h>
#include <asm/unwind.h>
+#include <asm/smp.h>
static char __initdata command_line[COMMAND_LINE_SIZE];
}
#endif
-extern int init_per_cpu(int cpuid);
extern void collect_boot_cpu_data(void);
void __init setup_arch(char **cmdline_p)
}
arch_initcall(parisc_init);
-void start_parisc(void)
+void __init start_parisc(void)
{
- extern void start_kernel(void);
extern void early_trap_init(void);
int ret, cpunum;
static void __init
smp_cpu_init(int cpunum)
{
- extern int init_per_cpu(int); /* arch/parisc/kernel/processor.c */
extern void init_IRQ(void); /* arch/parisc/kernel/irq.c */
extern void start_cpu_itimer(void); /* arch/parisc/kernel/time.c */
/* Set modes and Enable floating point coprocessor */
- (void) init_per_cpu(cpunum);
+ init_per_cpu(cpunum);
disable_sr_hashing();
u32 check = 0;
u32 *ivap;
u32 *hpmcp;
- u32 length;
+ u32 length, instr;
if (strcmp((const char *)iva, "cows can fly"))
panic("IVT invalid");
for (i = 0; i < 8; i++)
*ivap++ = 0;
+ /*
+ * Use PDC_INSTR firmware function to get instruction that invokes
+ * PDCE_CHECK in HPMC handler. See programming note at page 1-31 of
+ * the PA 1.1 Firmware Architecture document.
+ */
+ if (pdc_instr(&instr) == PDC_OK)
+ ivap[0] = instr;
+
/* Compute Checksum for HPMC handler */
length = os_hpmc_size;
ivap[7] = length;
#include <linux/slab.h>
#include <linux/kallsyms.h>
#include <linux/sort.h>
+#include <linux/sched.h>
#include <linux/uaccess.h>
#include <asm/assembly.h>
info->prev_sp = sp - 64;
info->prev_ip = 0;
+
+ /* The stack is at the end inside the thread_union
+ * struct. If we reach data, we have reached the
+ * beginning of the stack and should stop unwinding. */
+ if (info->prev_sp >= (unsigned long) task_thread_info(info->t) &&
+ info->prev_sp < ((unsigned long) task_thread_info(info->t)
+ + THREAD_SZ_ALGN)) {
+ info->prev_sp = 0;
+ break;
+ }
+
if (get_user(tmp, (unsigned long *)(info->prev_sp - RP_OFFSET)))
break;
info->prev_ip = tmp;
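
The new bound check relies on the kernel stack living inside the task's thread_union, directly after its thread_info, so a previous-frame pointer that resolves into that union marks the base of the stack. For orientation, the pre-existing mainline layout (not part of this patch) is:

union thread_union {
	struct thread_info thread_info;
	unsigned long stack[THREAD_SIZE/sizeof(long)];
};
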
#include <linux/interrupt.h>
#include <linux/extable.h>
#include <linux/uaccess.h>
+#include <linux/hugetlb.h>
#include <asm/traps.h>
struct task_struct *tsk;
struct mm_struct *mm;
unsigned long acc_type;
- int fault;
+ int fault = 0;
unsigned int flags;
if (faulthandler_disabled())
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
- else if (fault & VM_FAULT_SIGBUS)
+ else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|
+ VM_FAULT_HWPOISON_LARGE))
goto bad_area;
BUG();
}
if (user_mode(regs)) {
struct siginfo si;
-
- show_signal_msg(regs, code, address, tsk, vma);
+ unsigned int lsb = 0;
switch (code) {
case 15: /* Data TLB miss fault/Data page fault */
si.si_code = (code == 26) ? SEGV_ACCERR : SEGV_MAPERR;
break;
}
+
+#ifdef CONFIG_MEMORY_FAILURE
+ if (fault & (VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
+ printk(KERN_ERR
+ "MCE: Killing %s:%d due to hardware memory corruption fault at %08lx\n",
+ tsk->comm, tsk->pid, address);
+ si.si_signo = SIGBUS;
+ si.si_code = BUS_MCEERR_AR;
+ }
+#endif
+
+ /*
+ * Either small page or large page may be poisoned.
+ * In other words, VM_FAULT_HWPOISON_LARGE and
+ * VM_FAULT_HWPOISON are mutually exclusive.
+ */
+ if (fault & VM_FAULT_HWPOISON_LARGE)
+ lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault));
+ else if (fault & VM_FAULT_HWPOISON)
+ lsb = PAGE_SHIFT;
+ else
+ show_signal_msg(regs, code, address, tsk, vma);
+ si.si_addr_lsb = lsb;
+
si.si_errno = 0;
si.si_addr = (void __user *) address;
force_sig_info(si.si_signo, &si, current);
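
si_addr_lsb tells the signal handler how much memory around si_addr is affected, encoded as a shift. A worked example of the values set above, assuming 4 KiB pages:

/* Worked example:
 *
 *   small page poisoned:  lsb = PAGE_SHIFT = 12
 *   bytes to distrust:    1UL << lsb == 4096
 *   huge page poisoned:   lsb = the huge page shift returned by
 *                         hstate_index_to_shift(), widening the region
 */
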
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_SND=m
-CONFIG_SND_SEQUENCER=m
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
-CONFIG_SND_SEQUENCER_OSS=y
+CONFIG_SND_SEQUENCER=m
+CONFIG_SND_SEQUENCER_OSS=m
CONFIG_SND_POWERMAC=m
CONFIG_SND_AOA=m
CONFIG_SND_AOA_FABRIC_LAYOUT=m
# CONFIG_LOGO_LINUX_CLUT224 is not set
CONFIG_SOUND=y
CONFIG_SND=y
-CONFIG_SND_SEQUENCER=y
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=y
CONFIG_SND_PCM_OSS=y
-CONFIG_SND_SEQUENCER_OSS=y
# CONFIG_SND_VERBOSE_PROCFS is not set
+CONFIG_SND_SEQUENCER=y
+CONFIG_SND_SEQUENCER_OSS=y
# CONFIG_USB_SUPPORT is not set
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_GENERIC=y
CONFIG_LOGO=y
CONFIG_SOUND=y
CONFIG_SND=y
-CONFIG_SND_SEQUENCER=y
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=y
CONFIG_SND_PCM_OSS=y
+CONFIG_SND_SEQUENCER=y
CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_USB_AUDIO=y
CONFIG_SND_USB_USX2Y=y
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_SND=m
-CONFIG_SND_SEQUENCER=m
-CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
-CONFIG_SND_SEQUENCER_OSS=y
+CONFIG_SND_SEQUENCER=m
+CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_SEQUENCER_OSS=m
CONFIG_SND_DUMMY=m
CONFIG_SND_POWERMAC=m
CONFIG_SND_AOA=m
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_SND=m
-CONFIG_SND_SEQUENCER=m
-CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
-CONFIG_SND_SEQUENCER_OSS=y
+CONFIG_SND_SEQUENCER=m
+CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_SEQUENCER_OSS=m
CONFIG_SND_POWERMAC=m
CONFIG_SND_AOA=m
CONFIG_SND_AOA_FABRIC_LAYOUT=m
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_SND=m
-CONFIG_SND_SEQUENCER=m
-CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
-CONFIG_SND_SEQUENCER_OSS=y
+CONFIG_SND_SEQUENCER=m
+CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_SEQUENCER_OSS=m
CONFIG_HID_DRAGONRISE=y
CONFIG_HID_GYRATION=y
CONFIG_HID_TWINHAN=y
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_SOUND=m
CONFIG_SND=m
-CONFIG_SND_SEQUENCER=m
-CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
-CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_DYNAMIC_MINORS=y
# CONFIG_SND_SUPPORT_OLD_API is not set
CONFIG_SND_VERBOSE_PRINTK=y
CONFIG_SND_DEBUG=y
CONFIG_SND_DEBUG_VERBOSE=y
CONFIG_SND_PCM_XRUN_DEBUG=y
+CONFIG_SND_SEQUENCER=m
+CONFIG_SND_SEQ_DUMMY=m
+CONFIG_SND_SEQUENCER_OSS=m
CONFIG_SND_DUMMY=m
CONFIG_SND_VIRMIDI=m
CONFIG_SND_MTPAV=m
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_SOUND=y
CONFIG_SND=y
-CONFIG_SND_SEQUENCER=y
+CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=y
CONFIG_SND_PCM_OSS=y
-CONFIG_SND_SEQUENCER_OSS=y
# CONFIG_SND_VERBOSE_PROCFS is not set
+CONFIG_SND_SEQUENCER=y
+CONFIG_SND_SEQUENCER_OSS=y
CONFIG_HID_APPLE=m
CONFIG_HID_WACOM=m
CONFIG_MMC=y
case PVR_POWER8:
case PVR_POWER8E:
case PVR_POWER8NVL:
- __flush_tlb_power8(POWER8_TLB_SETS);
+ __flush_tlb_power8(TLB_INVAL_SCOPE_GLOBAL);
break;
case PVR_POWER9:
- __flush_tlb_power9(POWER9_TLB_SETS_HASH);
+ __flush_tlb_power9(TLB_INVAL_SCOPE_GLOBAL);
break;
default:
pr_err("unknown CPU version for boot TLB flush\n");
} else if ((ret = eeh_ops->init()))
return ret;
+ /* Initialize PHB PEs */
+ list_for_each_entry_safe(hose, tmp, &hose_list, list_node)
+ eeh_dev_phb_init_dynamic(hose);
+
/* Initialize EEH event */
ret = eeh_event_init();
if (ret)
/* EEH PE for PHB */
eeh_phb_pe_create(phb);
}
-
-/**
- * eeh_dev_phb_init - Create EEH devices for devices included in existing PHBs
- *
- * Scan all the existing PHBs and create EEH devices for their OF
- * nodes and their children OF nodes
- */
-static int __init eeh_dev_phb_init(void)
-{
- struct pci_controller *phb, *tmp;
-
- list_for_each_entry_safe(phb, tmp, &hose_list, list_node)
- eeh_dev_phb_init_dynamic(phb);
-
- return 0;
-}
-
-core_initcall(eeh_dev_phb_init);
long __machine_check_early_realmode_p9(struct pt_regs *regs)
{
+ /*
+ * On POWER9 DD2.1 and below, it's possible to get a machine check
+ * caused by a paste instruction where only DSISR bit 25 is set. This
+ * will result in the MCE handler seeing an unknown event and the kernel
+ * crashing. An MCE that occurs like this is spurious, so we don't need
+ * to do anything in terms of servicing it. If there is something that
+ * needs to be serviced, the CPU will raise the MCE again with the
+ * correct DSISR so that it can be serviced properly. So detect this
+ * case and mark it as handled.
+ */
+ if (SRR1_MC_LOADSTORE(regs->msr) && regs->dsisr == 0x02000000)
+ return 1;
+
return mce_handle_error(regs, mce_p9_derror_table, mce_p9_ierror_table);
}
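
The magic constant is DSISR bit 25 counted from the least significant bit: 1 << 25 == 0x02000000, i.e. exactly that one bit set and nothing else. A minimal restatement of the predicate (helper and macro names hypothetical):

#define P9_SPURIOUS_PASTE_DSISR	(1u << 25)	/* == 0x02000000 */

static bool is_spurious_paste_mce(const struct pt_regs *regs)
{
	return SRR1_MC_LOADSTORE(regs->msr) &&
	       regs->dsisr == P9_SPURIOUS_PASTE_DSISR;
}
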
* and that can be emulated.
*/
if (!is_conditional_branch(*p->ainsn.insn) &&
- analyse_instr(&op, ®s, *p->ainsn.insn))
+ analyse_instr(&op, ®s, *p->ainsn.insn) == 1) {
+ emulate_update_regs(®s, &op);
nip = regs.nip;
+ }
return nip;
}
* in the appropriate thread structures from live.
*/
- if (tsk != current)
+ if ((!cpu_has_feature(CPU_FTR_TM)) || (tsk != current))
return;
if (MSR_TM_SUSPENDED(mfmsr())) {
#endif
#endif
-#ifdef CONFIG_PPC_64K_PAGES
- init_mm.context.pte_frag = NULL;
-#endif
#ifdef CONFIG_SPAPR_TCE_IOMMU
mm_iommu_init(&init_mm);
#endif
int machine_check_e500mc(struct pt_regs *regs)
{
unsigned long mcsr = mfspr(SPRN_MCSR);
+ unsigned long pvr = mfspr(SPRN_PVR);
unsigned long reason = mcsr;
int recoverable = 1;
* may still get logged and cause a machine check. We should
* only treat the non-write shadow case as non-recoverable.
*/
- if (!(mfspr(SPRN_L1CSR2) & L1CSR2_DCWS))
- recoverable = 0;
+ /* On e6500 core, L1 DCWS (Data cache write shadow mode) bit
+ * is not implemented but L1 data cache always runs in write
+ * shadow mode. Hence on data cache parity errors HW will
+ * automatically invalidate the L1 Data Cache.
+ */
+ if (PVR_VER(pvr) != PVR_VER_E6500) {
+ if (!(mfspr(SPRN_L1CSR2) & L1CSR2_DCWS))
+ recoverable = 0;
+ }
}
if (reason & MCSR_L2MMU_MHIT) {
if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
return 0;
- if (watchdog_suspended)
- return 0;
-
if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
return 0;
wd_timer_period_ms = watchdog_thresh * 1000 * 2 / 5;
}
-void watchdog_nmi_reconfigure(void)
+void watchdog_nmi_stop(void)
{
int cpu;
- watchdog_calc_timeouts();
-
for_each_cpu(cpu, &wd_cpus_enabled)
stop_wd_on_cpu(cpu);
+}
+void watchdog_nmi_start(void)
+{
+ int cpu;
+
+ watchdog_calc_timeouts();
for_each_cpu_and(cpu, cpu_online_mask, &watchdog_cpumask)
start_wd_on_cpu(cpu);
}
/*
- * This runs after lockup_detector_init() which sets up watchdog_cpumask.
+ * Invoked from core watchdog init.
*/
-static int __init powerpc_watchdog_init(void)
+int __init watchdog_nmi_probe(void)
{
int err;
- watchdog_calc_timeouts();
-
- err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powerpc/watchdog:online",
- start_wd_on_cpu, stop_wd_on_cpu);
- if (err < 0)
+ err = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+ "powerpc/watchdog:online",
+ start_wd_on_cpu, stop_wd_on_cpu);
+ if (err < 0) {
pr_warn("Watchdog could not be initialized");
-
+ return err;
+ }
return 0;
}
-arch_initcall(powerpc_watchdog_init);
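
Splitting the single reconfigure hook into a stop/start pair lets the generic core quiesce the arch watchdog, update its parameters, and restart it within one critical section. A rough sketch of the calling order as the core of this era drives it (wrapper name hypothetical):

static void sketch_core_reconfigure(void)
{
	watchdog_nmi_stop();	/* arch tears down its per-CPU timers */
	/* core updates watchdog_enabled / thresh / cpumask here */
	watchdog_nmi_start();	/* arch recomputes timeouts and restarts */
}
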
static void handle_backtrace_ipi(struct pt_regs *regs)
{
BEGIN_FTR_SECTION
mtspr SPRN_PPR, r0
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+
+/* Move canary into DSISR to check for later */
+BEGIN_FTR_SECTION
+ li r0, 0x7fff
+ mtspr SPRN_HDSISR, r0
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+
ld r0, VCPU_GPR(R0)(r4)
ld r4, VCPU_GPR(R4)(r4)
kvmppc_hdsi:
ld r3, VCPU_KVM(r9)
lbz r0, KVM_RADIX(r3)
- cmpwi r0, 0
mfspr r4, SPRN_HDAR
mfspr r6, SPRN_HDSISR
+BEGIN_FTR_SECTION
+ /* Look for DSISR canary. If we find it, retry instruction */
+ cmpdi r6, 0x7fff
+ beq 6f
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ cmpwi r0, 0
bne .Lradix_hdsi /* on radix, just save DAR/DSISR/ASDR */
/* HPTE not found fault or protection fault? */
andis. r0, r6, (DSISR_NOHPTE | DSISR_PROTFAULT)@h
return -EINVAL;
state = &sb->irq_state[idx];
arch_spin_lock(&sb->lock);
- *server = state->guest_server;
+ *server = state->act_server;
*priority = state->guest_priority;
arch_spin_unlock(&sb->lock);
xive->saved_src_count++;
/* Convert saved state into something compatible with xics */
- val = state->guest_server;
+ val = state->act_server;
prio = state->saved_scan_prio;
if (prio == MASKED) {
/* First convert prio and mark interrupt as untargetted */
act_prio = xive_prio_from_guest(guest_prio);
state->act_priority = MASKED;
- state->guest_server = server;
/*
* We need to drop the lock due to the mutex below. Hopefully
struct xive_irq_data *pt_data; /* XIVE Pass-through associated data */
/* Targetting as set by guest */
- u32 guest_server; /* Current guest selected target */
u8 guest_priority; /* Guest set priority */
u8 saved_priority; /* Saved priority when masking */
: "r" (addr), "i" (-EFAULT), "0" (err))
static nokprobe_inline void set_cr0(const struct pt_regs *regs,
- struct instruction_op *op, int rd)
+ struct instruction_op *op)
{
- long val = regs->gpr[rd];
+ long val = op->val;
op->type |= SETCC;
op->ccval = (regs->ccr & 0x0fffffff) | ((regs->xer >> 3) & 0x10000000);
case 13: /* addic. */
imm = (short) instr;
add_with_carry(regs, op, rd, regs->gpr[ra], imm, 0);
- set_cr0(regs, op, rd);
+ set_cr0(regs, op);
return 1;
case 14: /* addi */
case 28: /* andi. */
op->val = regs->gpr[rd] & (unsigned short) instr;
- set_cr0(regs, op, ra);
+ set_cr0(regs, op);
goto logical_done_nocc;
case 29: /* andis. */
imm = (unsigned short) instr;
op->val = regs->gpr[rd] & (imm << 16);
- set_cr0(regs, op, ra);
+ set_cr0(regs, op);
goto logical_done_nocc;
#ifdef __powerpc64__
op->type = COMPUTE + SETCC;
imm = 0xf0000000UL;
val = regs->gpr[rd];
- op->val = regs->ccr;
+ op->ccval = regs->ccr;
for (sh = 0; sh < 8; ++sh) {
if (instr & (0x80000 >> sh))
- op->val = (op->val & ~imm) |
+ op->ccval = (op->ccval & ~imm) |
(val & imm);
imm >>= 4;
}
goto arith_done;
case 235: /* mullw */
- op->val = (unsigned int) regs->gpr[ra] *
- (unsigned int) regs->gpr[rb];
+ op->val = (long)(int) regs->gpr[ra] *
+ (int) regs->gpr[rb];
+
goto arith_done;
case 266: /* add */
logical_done:
if (instr & 1)
- set_cr0(regs, op, ra);
+ set_cr0(regs, op);
logical_done_nocc:
op->reg = ra;
op->type |= SETREG;
arith_done:
if (instr & 1)
- set_cr0(regs, op, rd);
+ set_cr0(regs, op);
compute_done:
op->reg = rd;
op->type |= SETREG;
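
The set_cr0() change matters because at analysis time the destination GPR has not been written yet: CR0 must be derived from the freshly computed op->val, not from the stale register. For reference, the CR0 nibble follows the usual Rc=1 encoding, with SO taken from XER as in the code above; a sketch:

static unsigned int sketch_cr0_nibble(long val, unsigned int so)
{
	unsigned int lt = val < 0, gt = val > 0, eq = val == 0;

	return (lt << 3) | (gt << 2) | (eq << 1) | so;	/* LT GT EQ SO */
}
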
break;
}
wmb();
+ local_irq_restore(flags);
flush_tlb_kernel_range((unsigned long)page_address(start),
(unsigned long)page_address(page));
- local_irq_restore(flags);
return err;
}
u32 pmcs[MAX_HWEVENTS];
int i;
+ if (!ppmu) {
+ pr_info("Performance monitor hardware not registered.\n");
+ return;
+ }
+
if (!ppmu->n_counter)
return;
u64 pir = get_hard_smp_processor_id(cpu);
mtspr(SPRN_LPCR, lpcr_val);
- opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
+
+ /*
+ * Program the LPCR via stop-api only if the deepest stop state
+ * can lose hypervisor context.
+ */
+ if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT)
+ opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
}
/*
#ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
static unsigned long pnv_memory_block_size(void)
{
- return 256UL * 1024 * 1024;
+ /*
+ * We map the kernel linear region with 1GB large pages on radix. For
+ * memory hot unplug to work our memory block size must be at least
+ * this size.
+ */
+ if (radix_enabled())
+ return 1UL * 1024 * 1024 * 1024;
+ else
+ return 256UL * 1024 * 1024;
}
#endif
return rc;
}
- of_node_put(dn->parent);
return 0;
}
}
dn = dlpar_configure_connector(cpu_to_be32(drc_index), parent);
- of_node_put(parent);
if (!dn) {
pr_warn("Failed call to configure-connector, drc index: %x\n",
drc_index);
dlpar_release_drc(drc_index);
+ of_node_put(parent);
return -EINVAL;
}
rc = dlpar_attach_node(dn, parent);
+
+ /* Regardless we are done with parent now */
+ of_node_put(parent);
+
if (rc) {
saved_rc = rc;
pr_warn("Failed to attach node %s, rc: %d, drc index: %x\n",
return -ENOENT;
dn = dlpar_configure_connector(drc_index, parent_dn);
- if (!dn)
+ if (!dn) {
+ of_node_put(parent_dn);
return -ENOENT;
+ }
rc = dlpar_attach_node(dn, parent_dn);
if (rc)
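
All three hunks above enforce the same device-tree refcounting rule: a successful lookup takes a reference that must be dropped exactly once, on every exit path, and only after the last use of the node. A minimal sketch of the pattern (path hypothetical):

static int sketch_lookup(void)
{
	struct device_node *parent = of_find_node_by_path("/cpus");

	if (!parent)
		return -ENOENT;
	/* ... configure and attach children under parent ... */
	of_node_put(parent);	/* one put per successful get, all paths */
	return 0;
}
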
#define RIWAR_WRTYP_ALLOC 0x00006000
#define RIWAR_SIZE_MASK 0x0000003F
+static DEFINE_SPINLOCK(fsl_rio_config_lock);
+
#define __fsl_read_rio_config(x, addr, err, op) \
__asm__ __volatile__( \
"1: "op" %1,0(%2)\n" \
u8 hopcount, u32 offset, int len, u32 *val)
{
struct rio_priv *priv = mport->priv;
+ unsigned long flags;
u8 *data;
u32 rval, err = 0;
if (offset > (0x1000000 - len) || !IS_ALIGNED(offset, len))
return -EINVAL;
+ spin_lock_irqsave(&fsl_rio_config_lock, flags);
+
out_be32(&priv->maint_atmu_regs->rowtar,
(destid << 22) | (hopcount << 12) | (offset >> 12));
out_be32(&priv->maint_atmu_regs->rowtear, (destid >> 10));
__fsl_read_rio_config(rval, data, err, "lwz");
break;
default:
+ spin_unlock_irqrestore(&fsl_rio_config_lock, flags);
return -EINVAL;
}
err, destid, hopcount, offset);
}
+ spin_unlock_irqrestore(&fsl_rio_config_lock, flags);
*val = rval;
return err;
u8 hopcount, u32 offset, int len, u32 val)
{
struct rio_priv *priv = mport->priv;
+ unsigned long flags;
u8 *data;
+ int ret = 0;
+
pr_debug
("fsl_rio_config_write:"
" index %d destid %d hopcount %d offset %8.8x len %d val %8.8x\n",
if (offset > (0x1000000 - len) || !IS_ALIGNED(offset, len))
return -EINVAL;
+ spin_lock_irqsave(&fsl_rio_config_lock, flags);
+
out_be32(&priv->maint_atmu_regs->rowtar,
(destid << 22) | (hopcount << 12) | (offset >> 12));
out_be32(&priv->maint_atmu_regs->rowtear, (destid >> 10));
out_be32((u32 *) data, val);
break;
default:
- return -EINVAL;
+ ret = -EINVAL;
}
+ spin_unlock_irqrestore(&fsl_rio_config_lock, flags);
- return 0;
+ return ret;
}
static void fsl_rio_inbound_mem_init(struct rio_priv *priv)
#define DOORBELL_MESSAGE_SIZE 0x08
+static DEFINE_SPINLOCK(fsl_rio_doorbell_lock);
+
struct rio_msg_regs {
u32 omr;
u32 osr;
int fsl_rio_doorbell_send(struct rio_mport *mport,
int index, u16 destid, u16 data)
{
+ unsigned long flags;
+
pr_debug("fsl_doorbell_send: index %d destid %4.4x data %4.4x\n",
index, destid, data);
+ spin_lock_irqsave(&fsl_rio_doorbell_lock, flags);
+
/* In serial-version silicons such as the MPC8548 and MPC8641,
 * the operations below are mandatory.
 */
out_be32(&dbell->dbell_regs->oddatr, (index << 20) | data);
out_be32(&dbell->dbell_regs->odmr, 0x00000001);
+ spin_unlock_irqrestore(&fsl_rio_doorbell_lock, flags);
+
return 0;
}
if (xive_ops->teardown_cpu)
xive_ops->teardown_cpu(cpu, xc);
+
+#ifdef CONFIG_SMP
+ /* Get rid of IPI */
+ xive_cleanup_cpu_ipi(cpu, xc);
+#endif
+
+ /* Disable and free the queues */
+ xive_cleanup_cpu_queues(cpu, xc);
}
void xive_kexec_teardown_cpu(int secondary)
static void xive_spapr_put_ipi(unsigned int cpu, struct xive_cpu *xc)
{
+ if (!xc->hw_ipi)
+ return;
+
xive_irq_bitmap_free(xc->hw_ipi);
+ xc->hw_ipi = 0;
}
#endif /* CONFIG_SMP */
static inline void pmdp_invalidate(struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmdp)
{
- pmdp_xchg_direct(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
+ pmd_t pmd = __pmd(pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID);
+
+ pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
}
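
Setting _SEGMENT_ENTRY_INVALID on top of the live entry, instead of exchanging in _SEGMENT_ENTRY_EMPTY, keeps the PFN and the software bits (notably dirty and young) intact while still making the entry unusable for translation, which is all that "invalidate" is meant to do.
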
#define __HAVE_ARCH_PMDP_SET_WRPROTECT
#endif
}
-static int __init topology_setup(char *str)
-{
- bool enabled;
- int rc;
-
- rc = kstrtobool(str, &enabled);
- if (!rc && !enabled)
- S390_lowcore.machine_flags &= ~MACHINE_FLAG_TOPOLOGY;
- return rc;
-}
-early_param("topology", topology_setup);
-
static int __init disable_vector_extension(char *str)
{
S390_lowcore.machine_flags &= ~MACHINE_FLAG_VX;
}
/* Check online status of the CPU to which the event is pinned */
- if ((unsigned int)event->cpu >= nr_cpumask_bits ||
- (event->cpu >= 0 && !cpu_online(event->cpu)))
- return -ENODEV;
+ if (event->cpu >= 0) {
+ if ((unsigned int)event->cpu >= nr_cpumask_bits)
+ return -ENODEV;
+ if (!cpu_online(event->cpu))
+ return -ENODEV;
+ }
/* Force reset of idle/hv excludes regardless of what the
* user requested.
#include <linux/workqueue.h>
#include <linux/bootmem.h>
+#include <linux/uaccess.h>
+#include <linux/sysctl.h>
#include <linux/cpuset.h>
#include <linux/device.h>
#include <linux/export.h>
#define PTF_VERTICAL (1UL)
#define PTF_CHECK (2UL)
+enum {
+ TOPOLOGY_MODE_HW,
+ TOPOLOGY_MODE_SINGLE,
+ TOPOLOGY_MODE_PACKAGE,
+ TOPOLOGY_MODE_UNINITIALIZED
+};
+
struct mask_info {
struct mask_info *next;
unsigned char id;
cpumask_t mask;
};
+static int topology_mode = TOPOLOGY_MODE_UNINITIALIZED;
static void set_topology_timer(void);
static void topology_work_fn(struct work_struct *work);
static struct sysinfo_15_1_x *tl_info;
cpumask_t mask;
cpumask_copy(&mask, cpumask_of(cpu));
- if (!MACHINE_HAS_TOPOLOGY)
- return mask;
- for (; info; info = info->next) {
- if (cpumask_test_cpu(cpu, &info->mask))
- return info->mask;
+ switch (topology_mode) {
+ case TOPOLOGY_MODE_HW:
+ while (info) {
+ if (cpumask_test_cpu(cpu, &info->mask)) {
+ mask = info->mask;
+ break;
+ }
+ info = info->next;
+ }
+ if (cpumask_empty(&mask))
+ cpumask_copy(&mask, cpumask_of(cpu));
+ break;
+ case TOPOLOGY_MODE_PACKAGE:
+ cpumask_copy(&mask, cpu_present_mask);
+ break;
+ default:
+ /* fallthrough */
+ case TOPOLOGY_MODE_SINGLE:
+ cpumask_copy(&mask, cpumask_of(cpu));
+ break;
}
return mask;
}
int i;
cpumask_copy(&mask, cpumask_of(cpu));
- if (!MACHINE_HAS_TOPOLOGY)
+ if (topology_mode != TOPOLOGY_MODE_HW)
return mask;
cpu -= cpu % (smp_cpu_mtid + 1);
for (i = 0; i <= smp_cpu_mtid; i++)
{
int cpu;
- mutex_lock(&smp_cpu_state_mutex);
for_each_possible_cpu(cpu)
smp_cpu_set_polarization(cpu, POLARIZATION_HRZ);
- mutex_unlock(&smp_cpu_state_mutex);
}
static int ptf(unsigned long fc)
static void update_cpu_masks(void)
{
struct cpu_topology_s390 *topo;
- int cpu;
+ int cpu, id;
for_each_possible_cpu(cpu) {
topo = &cpu_topology[cpu];
topo->core_mask = cpu_group_map(&socket_info, cpu);
topo->book_mask = cpu_group_map(&book_info, cpu);
topo->drawer_mask = cpu_group_map(&drawer_info, cpu);
- if (!MACHINE_HAS_TOPOLOGY) {
+ if (topology_mode != TOPOLOGY_MODE_HW) {
+ id = topology_mode == TOPOLOGY_MODE_PACKAGE ? 0 : cpu;
topo->thread_id = cpu;
topo->core_id = cpu;
- topo->socket_id = cpu;
- topo->book_id = cpu;
- topo->drawer_id = cpu;
+ topo->socket_id = id;
+ topo->book_id = id;
+ topo->drawer_id = id;
if (cpu_present(cpu))
cpumask_set_cpu(cpu, &cpus_with_topology);
}
struct sysinfo_15_1_x *info = tl_info;
int rc = 0;
+ mutex_lock(&smp_cpu_state_mutex);
cpumask_clear(&cpus_with_topology);
if (MACHINE_HAS_TOPOLOGY) {
rc = 1;
update_cpu_masks();
if (!MACHINE_HAS_TOPOLOGY)
topology_update_polarization_simple();
+ mutex_unlock(&smp_cpu_state_mutex);
return rc;
}
schedule_work(&topology_work);
}
+static void topology_flush_work(void)
+{
+ flush_work(&topology_work);
+}
+
static void topology_timer_fn(unsigned long ignored)
{
if (ptf(PTF_CHECK))
struct sysinfo_15_1_x *info;
set_sched_topology(s390_topology);
+ if (topology_mode == TOPOLOGY_MODE_UNINITIALIZED) {
+ if (MACHINE_HAS_TOPOLOGY)
+ topology_mode = TOPOLOGY_MODE_HW;
+ else
+ topology_mode = TOPOLOGY_MODE_SINGLE;
+ }
if (!MACHINE_HAS_TOPOLOGY)
goto out;
tl_info = memblock_virt_alloc(PAGE_SIZE, PAGE_SIZE);
__arch_update_cpu_topology();
}
+static inline int topology_get_mode(int enabled)
+{
+ if (!enabled)
+ return TOPOLOGY_MODE_SINGLE;
+ return MACHINE_HAS_TOPOLOGY ? TOPOLOGY_MODE_HW : TOPOLOGY_MODE_PACKAGE;
+}
+
+static inline int topology_is_enabled(void)
+{
+ return topology_mode != TOPOLOGY_MODE_SINGLE;
+}
+
+static int __init topology_setup(char *str)
+{
+ bool enabled;
+ int rc;
+
+ rc = kstrtobool(str, &enabled);
+ if (rc)
+ return rc;
+ topology_mode = topology_get_mode(enabled);
+ return 0;
+}
+early_param("topology", topology_setup);
+
+static int topology_ctl_handler(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ unsigned int len;
+ int new_mode;
+ char buf[2];
+
+ if (!*lenp || *ppos) {
+ *lenp = 0;
+ return 0;
+ }
+ if (!write) {
+ strncpy(buf, topology_is_enabled() ? "1\n" : "0\n",
+ ARRAY_SIZE(buf));
+ len = strnlen(buf, ARRAY_SIZE(buf));
+ if (len > *lenp)
+ len = *lenp;
+ if (copy_to_user(buffer, buf, len))
+ return -EFAULT;
+ goto out;
+ }
+ len = *lenp;
+ if (copy_from_user(buf, buffer, len > sizeof(buf) ? sizeof(buf) : len))
+ return -EFAULT;
+ if (buf[0] != '0' && buf[0] != '1')
+ return -EINVAL;
+ mutex_lock(&smp_cpu_state_mutex);
+ new_mode = topology_get_mode(buf[0] == '1');
+ if (topology_mode != new_mode) {
+ topology_mode = new_mode;
+ topology_schedule_update();
+ }
+ mutex_unlock(&smp_cpu_state_mutex);
+ topology_flush_work();
+out:
+ *lenp = len;
+ *ppos += len;
+ return 0;
+}
+
+static struct ctl_table topology_ctl_table[] = {
+ {
+ .procname = "topology",
+ .mode = 0644,
+ .proc_handler = topology_ctl_handler,
+ },
+ { },
+};
+
+static struct ctl_table topology_dir_table[] = {
+ {
+ .procname = "s390",
+ .maxlen = 0,
+ .mode = 0555,
+ .child = topology_ctl_table,
+ },
+ { },
+};
+
static int __init topology_init(void)
{
if (MACHINE_HAS_TOPOLOGY)
set_topology_timer();
else
topology_update_polarization_simple();
+ register_sysctl_table(topology_dir_table);
return device_create_file(cpu_subsys.dev_root, &dev_attr_dispatching);
}
device_initcall(topology_init);
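
Once the table above is registered, the handler surfaces as /proc/sys/s390/topology: writing 0 selects single-CPU mode, writing 1 selects the hardware (or, without hardware support, package) mode, and the write path schedules a topology update and flushes it before returning.
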
static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
unsigned long end, int write, struct page **pages, int *nr)
{
- unsigned long mask, result;
struct page *head, *page;
+ unsigned long mask;
int refs;
- result = write ? 0 : _SEGMENT_ENTRY_PROTECT;
- mask = result | _SEGMENT_ENTRY_INVALID;
- if ((pmd_val(pmd) & mask) != result)
+ mask = (write ? _SEGMENT_ENTRY_PROTECT : 0) | _SEGMENT_ENTRY_INVALID;
+ if ((pmd_val(pmd) & mask) != 0)
return 0;
VM_BUG_ON(!pfn_valid(pmd_val(pmd) >> PAGE_SHIFT));
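
The old mask/result pair got the read case backwards: with write == 0 it accepted a huge pmd only when the protect bit was set, i.e. only read-only mappings, and bounced ordinary writable ones to the slow path. The new predicate simply rejects if any blocking bit is present; a restatement (bit values illustrative, not the real s390 definitions):

static int sketch_huge_pmd_ok(unsigned long pmdval, int write)
{
	unsigned long mask = (write ? 0x200UL /* PROTECT */ : 0) |
			     0x400UL /* INVALID */;

	return (pmdval & mask) == 0;	/* PROTECT only blocks writes */
}
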
/* Free all resources held by a thread. */
extern void release_thread(struct task_struct *);
-/* Copy and release all segment info associated with a VM */
-#define copy_segments(p, mm) do { } while(0)
-#define release_segments(mm) do { } while(0)
-
/*
* FPU lazy state save handling.
*/
/* Free all resources held by a thread. */
extern void release_thread(struct task_struct *);
-/* Copy and release all segment info associated with a VM */
-#define copy_segments(p, mm) do { } while (0)
-#define release_segments(mm) do { } while (0)
-#define forget_segments() do { } while (0)
/*
* FPU lazy state save handling.
*/
GPIO_PG7, GPIO_PG6, GPIO_PG5, GPIO_PG4,
GPIO_PG3, GPIO_PG2, GPIO_PG1, GPIO_PG0,
- /* Port H */
- GPIO_PH7, GPIO_PH6, GPIO_PH5, GPIO_PH4,
- GPIO_PH3, GPIO_PH2, GPIO_PH1, GPIO_PH0,
+ /* Port H - Port H does not have a Data Register */
/* Port I - not on device */
GPIO_PG7, GPIO_PG6, GPIO_PG5, GPIO_PG4,
GPIO_PG3, GPIO_PG2, GPIO_PG1, GPIO_PG0,
- /* Port H */
- GPIO_PH7, GPIO_PH6, GPIO_PH5, GPIO_PH4,
- GPIO_PH3, GPIO_PH2, GPIO_PH1, GPIO_PH0,
+ /* Port H - Port H does not have a Data Register */
/* Port I - not on device */
GPIO_PTN3, GPIO_PTN2, GPIO_PTN1, GPIO_PTN0,
/* PTQ */
- GPIO_PTQ7, GPIO_PTQ6, GPIO_PTQ5, GPIO_PTQ4,
+ GPIO_PTQ6, GPIO_PTQ5, GPIO_PTQ4,
GPIO_PTQ3, GPIO_PTQ2, GPIO_PTQ1, GPIO_PTQ0,
/* PTR */
/* PTJ */
GPIO_PTJ0, GPIO_PTJ1, GPIO_PTJ2, GPIO_PTJ3,
- GPIO_PTJ4, GPIO_PTJ5, GPIO_PTJ6, GPIO_PTJ7_RESV,
+ GPIO_PTJ4, GPIO_PTJ5, GPIO_PTJ6,
/* PTK */
GPIO_PTK0, GPIO_PTK1, GPIO_PTK2, GPIO_PTK3,
/* PTL */
GPIO_PTL0, GPIO_PTL1, GPIO_PTL2, GPIO_PTL3,
- GPIO_PTL4, GPIO_PTL5, GPIO_PTL6, GPIO_PTL7_RESV,
+ GPIO_PTL4, GPIO_PTL5, GPIO_PTL6,
/* PTM */
GPIO_PTM0, GPIO_PTM1, GPIO_PTM2, GPIO_PTM3,
/* PTN */
GPIO_PTN0, GPIO_PTN1, GPIO_PTN2, GPIO_PTN3,
- GPIO_PTN4, GPIO_PTN5, GPIO_PTN6, GPIO_PTN7_RESV,
+ GPIO_PTN4, GPIO_PTN5, GPIO_PTN6,
/* PTO */
GPIO_PTO0, GPIO_PTO1, GPIO_PTO2, GPIO_PTO3,
/* PTQ */
GPIO_PTQ0, GPIO_PTQ1, GPIO_PTQ2, GPIO_PTQ3,
- GPIO_PTQ4, GPIO_PTQ5, GPIO_PTQ6, GPIO_PTQ7_RESV,
+ GPIO_PTQ4, GPIO_PTQ5, GPIO_PTQ6,
/* PTR */
GPIO_PTR0, GPIO_PTR1, GPIO_PTR2, GPIO_PTR3,
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
-CONFIG_CRYPTO_ZLIB=m
CONFIG_CRYPTO_LZO=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
-CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
-CONFIG_CRYPTO_ZLIB=m
CONFIG_CRYPTO_LZO=m
CONFIG_CRC_CCITT=m
CONFIG_CRC7=m
{
char *endp;
unsigned long long maxnodemem;
- long node;
+ unsigned long node;
node = str ? simple_strtoul(str, &endp, 0) : INT_MAX;
if (node >= MAX_NUMNODES || *endp != ':')
{
}
-static inline void mm_copy_segments(struct mm_struct *from_mm,
- struct mm_struct *new_mm)
-{
-}
-
#define init_stack (init_thread_union.stack)
/*
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
-static void __init timer_setup(void)
+static void __init um_timer_setup(void)
{
int err;
void __init time_init(void)
{
timer_set_signal_handler();
- late_time_init = timer_setup;
+ late_time_init = um_timer_setup;
}
#define s3 ((16 + 2 + (3 * 256)) * 4)
/* register macros */
-#define CTX %rdi
+#define CTX %r12
#define RIO %rsi
#define RX0 %rax
#define RX2bh %ch
#define RX3bh %dh
-#define RT0 %rbp
+#define RT0 %rdi
#define RT1 %rsi
#define RT2 %r8
#define RT3 %r9
-#define RT0d %ebp
+#define RT0d %edi
#define RT1d %esi
#define RT2d %r8d
#define RT3d %r9d
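
Every x86 crypto hunk from here on applies one recipe: %rbp (and its %ebp/%bpl aliases) is evicted from the register allocation, usually in favour of %r12 or %r15, and the context pointer is copied out of %rdi whenever that register gets reassigned. The motive is unwinding: with frame pointers enabled, %rbp must hold the frame chain at all times, and asm that repurposes it breaks stack traces through these functions. For orientation, the chain the unwinder walks looks like this in C terms:

/* layout at each %rbp while frame pointers are in use */
struct sketch_frame {
	struct sketch_frame *next;	/* caller's saved %rbp */
	unsigned long ret_addr;		/* pushed by the call instruction */
};
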
ENTRY(__blowfish_enc_blk)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
* %rcx: bool, if true: xor output
*/
- movq %rbp, %r11;
+ movq %r12, %r11;
+ movq %rdi, CTX;
movq %rsi, %r10;
movq %rdx, RIO;
round_enc(14);
add_roundkey_enc(16);
- movq %r11, %rbp;
+ movq %r11, %r12;
movq %r10, RIO;
test %cl, %cl;
ENTRY(blowfish_dec_blk)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
- movq %rbp, %r11;
+ movq %r12, %r11;
+ movq %rdi, CTX;
movq %rsi, %r10;
movq %rdx, RIO;
movq %r10, RIO;
write_block();
- movq %r11, %rbp;
+ movq %r11, %r12;
ret;
ENDPROC(blowfish_dec_blk)
ENTRY(__blowfish_enc_blk_4way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
* %rcx: bool, if true: xor output
*/
- pushq %rbp;
+ pushq %r12;
pushq %rbx;
pushq %rcx;
- preload_roundkey_enc(0);
-
+ movq %rdi, CTX;
movq %rsi, %r11;
movq %rdx, RIO;
+ preload_roundkey_enc(0);
+
read_block4();
round_enc4(0);
round_enc4(14);
add_preloaded_roundkey4();
- popq %rbp;
+ popq %r12;
movq %r11, RIO;
- test %bpl, %bpl;
+ test %r12b, %r12b;
jnz .L__enc_xor4;
write_block4();
popq %rbx;
- popq %rbp;
+ popq %r12;
ret;
.L__enc_xor4:
xor_block4();
popq %rbx;
- popq %rbp;
+ popq %r12;
ret;
ENDPROC(__blowfish_enc_blk_4way)
ENTRY(blowfish_dec_blk_4way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
- pushq %rbp;
+ pushq %r12;
pushq %rbx;
- preload_roundkey_dec(17);
- movq %rsi, %r11;
+ movq %rdi, CTX;
+ movq %rsi, %r11;
movq %rdx, RIO;
+ preload_roundkey_dec(17);
read_block4();
round_dec4(17);
write_block4();
popq %rbx;
- popq %rbp;
+ popq %r12;
ret;
ENDPROC(blowfish_dec_blk_4way)
#define RCD1bh %dh
#define RT0 %rsi
-#define RT1 %rbp
+#define RT1 %r12
#define RT2 %r8
#define RT0d %esi
-#define RT1d %ebp
+#define RT1d %r12d
#define RT2d %r8d
#define RT2bl %r8b
#define RXOR %r9
-#define RRBP %r10
+#define RR12 %r10
#define RDST %r11
#define RXORd %r9d
* %rdx: src
* %rcx: bool xor
*/
- movq %rbp, RRBP;
+ movq %r12, RR12;
movq %rcx, RXOR;
movq %rsi, RDST;
enc_outunpack(mov, RT1);
- movq RRBP, %rbp;
+ movq RR12, %r12;
ret;
.L__enc_xor:
enc_outunpack(xor, RT1);
- movq RRBP, %rbp;
+ movq RR12, %r12;
ret;
ENDPROC(__camellia_enc_blk)
movl $24, RXORd;
cmovel RXORd, RT2d; /* max */
- movq %rbp, RRBP;
+ movq %r12, RR12;
movq %rsi, RDST;
movq %rdx, RIO;
dec_outunpack();
- movq RRBP, %rbp;
+ movq RR12, %r12;
ret;
ENDPROC(camellia_dec_blk)
*/
pushq %rbx;
- movq %rbp, RRBP;
+ movq %r12, RR12;
movq %rcx, RXOR;
movq %rsi, RDST;
movq %rdx, RIO;
enc_outunpack2(mov, RT2);
- movq RRBP, %rbp;
+ movq RR12, %r12;
popq %rbx;
ret;
.L__enc2_xor:
enc_outunpack2(xor, RT2);
- movq RRBP, %rbp;
+ movq RR12, %r12;
popq %rbx;
ret;
ENDPROC(__camellia_enc_blk_2way)
cmovel RXORd, RT2d; /* max */
movq %rbx, RXOR;
- movq %rbp, RRBP;
+ movq %r12, RR12;
movq %rsi, RDST;
movq %rdx, RIO;
dec_outunpack2();
- movq RRBP, %rbp;
+ movq RR12, %r12;
movq RXOR, %rbx;
ret;
ENDPROC(camellia_dec_blk_2way)
/**********************************************************************
16-way AVX cast5
**********************************************************************/
-#define CTX %rdi
+#define CTX %r15
#define RL1 %xmm0
#define RR1 %xmm1
#define RTMP %xmm15
-#define RID1 %rbp
-#define RID1d %ebp
+#define RID1 %rdi
+#define RID1d %edi
#define RID2 %rsi
#define RID2d %esi
.align 16
__cast5_enc_blk16:
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* RL1: blocks 1 and 2
* RR1: blocks 3 and 4
* RL2: blocks 5 and 6
* RR4: encrypted blocks 15 and 16
*/
- pushq %rbp;
+ pushq %r15;
pushq %rbx;
+ movq %rdi, CTX;
+
vmovdqa .Lbswap_mask, RKM;
vmovd .Lfirst_mask, R1ST;
vmovd .L32_mask, R32;
.L__skip_enc:
popq %rbx;
- popq %rbp;
+ popq %r15;
vmovdqa .Lbswap_mask, RKM;
.align 16
__cast5_dec_blk16:
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* RL1: encrypted blocks 1 and 2
* RR1: encrypted blocks 3 and 4
* RL2: encrypted blocks 5 and 6
* RR4: decrypted blocks 15 and 16
*/
- pushq %rbp;
+ pushq %r15;
pushq %rbx;
+ movq %rdi, CTX;
+
vmovdqa .Lbswap_mask, RKM;
vmovd .Lfirst_mask, R1ST;
vmovd .L32_mask, R32;
vmovdqa .Lbswap_mask, RKM;
popq %rbx;
- popq %rbp;
+ popq %r15;
outunpack_blocks(RR1, RL1, RTMP, RX, RKM);
outunpack_blocks(RR2, RL2, RTMP, RX, RKM);
ENTRY(cast5_ecb_enc_16way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
FRAME_BEGIN
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
vmovdqu (0*4*4)(%rdx), RL1;
vmovdqu RR4, (6*4*4)(%r11);
vmovdqu RL4, (7*4*4)(%r11);
+ popq %r15;
FRAME_END
ret;
ENDPROC(cast5_ecb_enc_16way)
ENTRY(cast5_ecb_dec_16way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
FRAME_BEGIN
+ pushq %r15;
+
+ movq %rdi, CTX;
movq %rsi, %r11;
vmovdqu (0*4*4)(%rdx), RL1;
vmovdqu RR4, (6*4*4)(%r11);
vmovdqu RL4, (7*4*4)(%r11);
+ popq %r15;
FRAME_END
ret;
ENDPROC(cast5_ecb_dec_16way)
ENTRY(cast5_cbc_dec_16way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
FRAME_BEGIN
-
pushq %r12;
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
movq %rdx, %r12;
vmovdqu RR4, (6*16)(%r11);
vmovdqu RL4, (7*16)(%r11);
+ popq %r15;
popq %r12;
-
FRAME_END
ret;
ENDPROC(cast5_cbc_dec_16way)
ENTRY(cast5_ctr_16way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
* %rcx: iv (big endian, 64bit)
*/
FRAME_BEGIN
-
pushq %r12;
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
movq %rdx, %r12;
vmovdqu RR4, (6*16)(%r11);
vmovdqu RL4, (7*16)(%r11);
+ popq %r15;
popq %r12;
-
FRAME_END
ret;
ENDPROC(cast5_ctr_16way)
/**********************************************************************
8-way AVX cast6
**********************************************************************/
-#define CTX %rdi
+#define CTX %r15
#define RA1 %xmm0
#define RB1 %xmm1
#define RTMP %xmm15
-#define RID1 %rbp
-#define RID1d %ebp
+#define RID1 %rdi
+#define RID1d %edi
#define RID2 %rsi
#define RID2d %esi
.align 8
__cast6_enc_blk8:
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks
* output:
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks
*/
- pushq %rbp;
+ pushq %r15;
pushq %rbx;
+ movq %rdi, CTX;
+
vmovdqa .Lbswap_mask, RKM;
vmovd .Lfirst_mask, R1ST;
vmovd .L32_mask, R32;
QBAR(11);
popq %rbx;
- popq %rbp;
+ popq %r15;
vmovdqa .Lbswap_mask, RKM;
.align 8
__cast6_dec_blk8:
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks
* output:
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: decrypted blocks
*/
- pushq %rbp;
+ pushq %r15;
pushq %rbx;
+ movq %rdi, CTX;
+
vmovdqa .Lbswap_mask, RKM;
vmovd .Lfirst_mask, R1ST;
vmovd .L32_mask, R32;
QBAR(0);
popq %rbx;
- popq %rbp;
+ popq %r15;
vmovdqa .Lbswap_mask, RKM;
outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM);
ENTRY(cast6_ecb_enc_8way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
FRAME_BEGIN
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
load_8way(%rdx, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
store_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ popq %r15;
FRAME_END
ret;
ENDPROC(cast6_ecb_enc_8way)
ENTRY(cast6_ecb_dec_8way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
FRAME_BEGIN
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
load_8way(%rdx, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
store_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ popq %r15;
FRAME_END
ret;
ENDPROC(cast6_ecb_dec_8way)
ENTRY(cast6_cbc_dec_8way)
/* input:
- * %rdi: ctx, CTX
+ * %rdi: ctx
* %rsi: dst
* %rdx: src
*/
FRAME_BEGIN
-
pushq %r12;
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
movq %rdx, %r12;
store_cbc_8way(%r12, %r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ popq %r15;
popq %r12;
-
FRAME_END
ret;
ENDPROC(cast6_cbc_dec_8way)
* %rcx: iv (little endian, 128bit)
*/
FRAME_BEGIN
-
pushq %r12;
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
movq %rdx, %r12;
store_ctr_8way(%r12, %r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ popq %r15;
popq %r12;
-
FRAME_END
ret;
ENDPROC(cast6_ctr_8way)
* %rcx: iv (t ⊕ αⁿ ∈ GF(2¹²⁸))
*/
FRAME_BEGIN
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
/* regs <= src, dst <= IVs, regs <= regs xor IVs */
/* dst <= regs xor IVs(in dst) */
store_xts_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ popq %r15;
FRAME_END
ret;
ENDPROC(cast6_xts_enc_8way)
* %rcx: iv (t ⊕ αⁿ ∈ GF(2¹²⁸))
*/
FRAME_BEGIN
+ pushq %r15;
+ movq %rdi, CTX;
movq %rsi, %r11;
/* regs <= src, dst <= IVs, regs <= regs xor IVs */
/* dst <= regs xor IVs(in dst) */
store_xts_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2);
+ popq %r15;
FRAME_END
ret;
ENDPROC(cast6_xts_dec_8way)
#define RW2bh %ch
#define RT0 %r15
-#define RT1 %rbp
+#define RT1 %rsi
#define RT2 %r14
#define RT3 %rdx
#define RT0d %r15d
-#define RT1d %ebp
+#define RT1d %esi
#define RT2d %r14d
#define RT3d %edx
* %rsi: dst
* %rdx: src
*/
- pushq %rbp;
pushq %rbx;
pushq %r12;
pushq %r13;
pushq %r14;
pushq %r15;
+ pushq %rsi; /* dst */
+
read_block(%rdx, RL0, RR0);
initial_permutation(RL0, RR0);
round1(32+15, RL0, RR0, dummy2);
final_permutation(RR0, RL0);
+
+ popq %rsi; /* dst */
write_block(%rsi, RR0, RL0);
popq %r15;
popq %r13;
popq %r12;
popq %rbx;
- popq %rbp;
ret;
ENDPROC(des3_ede_x86_64_crypt_blk)
* %rdx: src (3 blocks)
*/
- pushq %rbp;
pushq %rbx;
pushq %r12;
pushq %r13;
pushq %r14;
pushq %r15;
+ pushq %rsi; /* dst */
+
/* load input */
movl 0 * 4(%rdx), RL0d;
movl 1 * 4(%rdx), RR0d;
bswapl RR2d;
bswapl RL2d;
+ popq %rsi; /* dst */
movl RR0d, 0 * 4(%rsi);
movl RL0d, 1 * 4(%rsi);
movl RR1d, 2 * 4(%rsi);
popq %r13;
popq %r12;
popq %rbx;
- popq %rbp;
ret;
ENDPROC(des3_ede_x86_64_crypt_blk_3way)
#define REG_RE %rdx
#define REG_RTA %r12
#define REG_RTB %rbx
-#define REG_T1 %ebp
+#define REG_T1 %r11d
#define xmm_mov vmovups
#define avx2_zeroupper vzeroupper
#define RND_F1 1
ENTRY(\name)
push %rbx
- push %rbp
push %r12
push %r13
push %r14
pop %r14
pop %r13
pop %r12
- pop %rbp
pop %rbx
ret
#define REG_A %ecx
#define REG_B %esi
#define REG_C %edi
-#define REG_D %ebp
+#define REG_D %r12d
#define REG_E %edx
#define REG_T1 %eax
ENTRY(\name)
push %rbx
- push %rbp
push %r12
+ push %rbp
+ mov %rsp, %rbp
- mov %rsp, %r12
sub $64, %rsp # allocate workspace
and $~15, %rsp # align stack
xor %rax, %rax
rep stosq
- mov %r12, %rsp # deallocate workspace
-
- pop %r12
+ mov %rbp, %rsp # deallocate workspace
pop %rbp
+ pop %r12
pop %rbx
ret
c = %ecx
d = %r8d
e = %edx
-TBL = %rbp
+TBL = %r12
a = %eax
b = %ebx
ENTRY(sha256_transform_avx)
.align 32
pushq %rbx
- pushq %rbp
+ pushq %r12
pushq %r13
pushq %r14
pushq %r15
- pushq %r12
+ pushq %rbp
+ mov %rsp, %rbp
- mov %rsp, %r12
subq $STACK_SIZE, %rsp # allocate stack space
and $~15, %rsp # align stack pointer
done_hash:
- mov %r12, %rsp
-
- popq %r12
+ mov %rbp, %rsp
+ popq %rbp
popq %r15
popq %r14
popq %r13
- popq %rbp
+ popq %r12
popq %rbx
ret
ENDPROC(sha256_transform_avx)
e = %edx # clobbers NUM_BLKS
y3 = %esi # clobbers INP
-
-TBL = %rbp
SRND = CTX # SRND is same register as CTX
a = %eax
ENTRY(sha256_transform_rorx)
.align 32
pushq %rbx
- pushq %rbp
pushq %r12
pushq %r13
pushq %r14
mov CTX, _CTX(%rsp)
loop0:
- lea K256(%rip), TBL
-
## Load first 16 dwords from two blocks
VMOVDQ 0*32(INP),XTMP0
VMOVDQ 1*32(INP),XTMP1
.align 16
loop1:
- vpaddd 0*32(TBL, SRND), X0, XFER
+ vpaddd K256+0*32(SRND), X0, XFER
vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
FOUR_ROUNDS_AND_SCHED _XFER + 0*32
- vpaddd 1*32(TBL, SRND), X0, XFER
+ vpaddd K256+1*32(SRND), X0, XFER
vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
FOUR_ROUNDS_AND_SCHED _XFER + 1*32
- vpaddd 2*32(TBL, SRND), X0, XFER
+ vpaddd K256+2*32(SRND), X0, XFER
vmovdqa XFER, 2*32+_XFER(%rsp, SRND)
FOUR_ROUNDS_AND_SCHED _XFER + 2*32
- vpaddd 3*32(TBL, SRND), X0, XFER
+ vpaddd K256+3*32(SRND), X0, XFER
vmovdqa XFER, 3*32+_XFER(%rsp, SRND)
FOUR_ROUNDS_AND_SCHED _XFER + 3*32
loop2:
## Do last 16 rounds with no scheduling
- vpaddd 0*32(TBL, SRND), X0, XFER
+ vpaddd K256+0*32(SRND), X0, XFER
vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
DO_4ROUNDS _XFER + 0*32
- vpaddd 1*32(TBL, SRND), X1, XFER
+
+ vpaddd K256+1*32(SRND), X1, XFER
vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
DO_4ROUNDS _XFER + 1*32
add $2*32, SRND
ja done_hash
do_last_block:
- #### do last block
- lea K256(%rip), TBL
-
VMOVDQ 0*16(INP),XWORD0
VMOVDQ 1*16(INP),XWORD1
VMOVDQ 2*16(INP),XWORD2
popq %r14
popq %r13
popq %r12
- popq %rbp
popq %rbx
ret
ENDPROC(sha256_transform_rorx)
c = %ecx
d = %r8d
e = %edx
-TBL = %rbp
+TBL = %r12
a = %eax
b = %ebx
ENTRY(sha256_transform_ssse3)
.align 32
pushq %rbx
- pushq %rbp
+ pushq %r12
pushq %r13
pushq %r14
pushq %r15
- pushq %r12
+ pushq %rbp
+ mov %rsp, %rbp
- mov %rsp, %r12
subq $STACK_SIZE, %rsp
and $~15, %rsp
done_hash:
- mov %r12, %rsp
-
- popq %r12
+ mov %rbp, %rsp
+ popq %rbp
popq %r15
popq %r14
popq %r13
- popq %rbp
+ popq %r12
popq %rbx
ret
BYTE_FLIP_MASK = %ymm9
-# 1st arg
-CTX = %rdi
+# 1st arg is %rdi, which is saved to the stack and accessed later via %r12
+CTX1 = %rdi
+CTX2 = %r12
# 2nd arg
INP = %rsi
# 3rd arg
e = %rdx
y3 = %rsi
-TBL = %rbp
+TBL = %rdi # clobbers CTX1
a = %rax
b = %rbx
h = %r11
old_h = %r11
-T1 = %r12
+T1 = %r12 # clobbers CTX2
y0 = %r13
y1 = %r14
y2 = %r15
-y4 = %r12
-
# Local variables (stack frame)
XFER_SIZE = 4*8
SRND_SIZE = 1*8
INP_SIZE = 1*8
INPEND_SIZE = 1*8
+CTX_SIZE = 1*8
RSPSAVE_SIZE = 1*8
-GPRSAVE_SIZE = 6*8
+GPRSAVE_SIZE = 5*8
frame_XFER = 0
frame_SRND = frame_XFER + XFER_SIZE
frame_INP = frame_SRND + SRND_SIZE
frame_INPEND = frame_INP + INP_SIZE
-frame_RSPSAVE = frame_INPEND + INPEND_SIZE
+frame_CTX = frame_INPEND + INPEND_SIZE
+frame_RSPSAVE = frame_CTX + CTX_SIZE
frame_GPRSAVE = frame_RSPSAVE + RSPSAVE_SIZE
frame_size = frame_GPRSAVE + GPRSAVE_SIZE
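The resulting frame layout, computed from the sizes above (byte offsets from the aligned %rsp):

	frame_XFER     =   0	(4*8 bytes of XFER slots)
	frame_SRND     =  32
	frame_INP      =  40
	frame_INPEND   =  48
	frame_CTX      =  56	(new slot for the saved %rdi)
	frame_RSPSAVE  =  64
	frame_GPRSAVE  =  72	(5*8 bytes: rbx, r12-r15; %rbp is no longer saved)
	frame_size     = 112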
mov %rax, frame_RSPSAVE(%rsp)
# Save GPRs
- mov %rbp, frame_GPRSAVE(%rsp)
- mov %rbx, 8*1+frame_GPRSAVE(%rsp)
- mov %r12, 8*2+frame_GPRSAVE(%rsp)
- mov %r13, 8*3+frame_GPRSAVE(%rsp)
- mov %r14, 8*4+frame_GPRSAVE(%rsp)
- mov %r15, 8*5+frame_GPRSAVE(%rsp)
+ mov %rbx, 8*0+frame_GPRSAVE(%rsp)
+ mov %r12, 8*1+frame_GPRSAVE(%rsp)
+ mov %r13, 8*2+frame_GPRSAVE(%rsp)
+ mov %r14, 8*3+frame_GPRSAVE(%rsp)
+ mov %r15, 8*4+frame_GPRSAVE(%rsp)
shl $7, NUM_BLKS # convert to bytes
jz done_hash
mov NUM_BLKS, frame_INPEND(%rsp)
## load initial digest
- mov 8*0(CTX),a
- mov 8*1(CTX),b
- mov 8*2(CTX),c
- mov 8*3(CTX),d
- mov 8*4(CTX),e
- mov 8*5(CTX),f
- mov 8*6(CTX),g
- mov 8*7(CTX),h
+ mov 8*0(CTX1), a
+ mov 8*1(CTX1), b
+ mov 8*2(CTX1), c
+ mov 8*3(CTX1), d
+ mov 8*4(CTX1), e
+ mov 8*5(CTX1), f
+ mov 8*6(CTX1), g
+ mov 8*7(CTX1), h
+
+ # save %rdi (CTX) before it gets clobbered
+ mov %rdi, frame_CTX(%rsp)
vmovdqa PSHUFFLE_BYTE_FLIP_MASK(%rip), BYTE_FLIP_MASK
subq $1, frame_SRND(%rsp)
jne loop2
- addm 8*0(CTX),a
- addm 8*1(CTX),b
- addm 8*2(CTX),c
- addm 8*3(CTX),d
- addm 8*4(CTX),e
- addm 8*5(CTX),f
- addm 8*6(CTX),g
- addm 8*7(CTX),h
+ mov frame_CTX(%rsp), CTX2
+ addm 8*0(CTX2), a
+ addm 8*1(CTX2), b
+ addm 8*2(CTX2), c
+ addm 8*3(CTX2), d
+ addm 8*4(CTX2), e
+ addm 8*5(CTX2), f
+ addm 8*6(CTX2), g
+ addm 8*7(CTX2), h
mov frame_INP(%rsp), INP
add $128, INP
done_hash:
# Restore GPRs
- mov frame_GPRSAVE(%rsp) ,%rbp
- mov 8*1+frame_GPRSAVE(%rsp) ,%rbx
- mov 8*2+frame_GPRSAVE(%rsp) ,%r12
- mov 8*3+frame_GPRSAVE(%rsp) ,%r13
- mov 8*4+frame_GPRSAVE(%rsp) ,%r14
- mov 8*5+frame_GPRSAVE(%rsp) ,%r15
+ mov 8*0+frame_GPRSAVE(%rsp), %rbx
+ mov 8*1+frame_GPRSAVE(%rsp), %r12
+ mov 8*2+frame_GPRSAVE(%rsp), %r13
+ mov 8*3+frame_GPRSAVE(%rsp), %r14
+ mov 8*4+frame_GPRSAVE(%rsp), %r15
# Restore Stack Pointer
mov frame_RSPSAVE(%rsp), %rsp
#define RT %xmm14
#define RR %xmm15
-#define RID1 %rbp
-#define RID1d %ebp
+#define RID1 %r13
+#define RID1d %r13d
#define RID2 %rsi
#define RID2d %esi
vmovdqu w(CTX), RK1;
- pushq %rbp;
+ pushq %r13;
pushq %rbx;
pushq %rcx;
popq %rcx;
popq %rbx;
- popq %rbp;
+ popq %r13;
outunpack_blocks(RC1, RD1, RA1, RB1, RK1, RX0, RY0, RK2);
outunpack_blocks(RC2, RD2, RA2, RB2, RK1, RX0, RY0, RK2);
vmovdqu (w+4*4)(CTX), RK1;
- pushq %rbp;
+ pushq %r13;
pushq %rbx;
inpack_blocks(RC1, RD1, RA1, RB1, RK1, RX0, RY0, RK2);
vmovdqu (w)(CTX), RK1;
popq %rbx;
- popq %rbp;
+ popq %r13;
outunpack_blocks(RA1, RB1, RC1, RD1, RK1, RX0, RY0, RK2);
outunpack_blocks(RA2, RB2, RC2, RD2, RK1, RX0, RY0, RK2);
return 0;
}
- if (lockup_detector_suspend() != 0) {
- pr_debug("failed to disable PMU erratum BJ122, BV98, HSD29 workaround\n");
- return 0;
- }
+ cpus_read_lock();
+
+ hardlockup_detector_perf_stop();
x86_pmu.flags &= ~(PMU_FL_EXCL_CNTRS | PMU_FL_EXCL_ENABLED);
x86_pmu.commit_scheduling = NULL;
x86_pmu.stop_scheduling = NULL;
- lockup_detector_resume();
-
- cpus_read_lock();
+ hardlockup_detector_perf_restart();
for_each_online_cpu(c)
free_excl_cntrs(c);
X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_MOBILE, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_DESKTOP, snb_cstates),
+ X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_X, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_KABYLAKE_MOBILE, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_KABYLAKE_DESKTOP, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_XEON_PHI_KNM, knl_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT, glm_cstates),
+ X86_CSTATES_MODEL(INTEL_FAM6_ATOM_DENVERTON, glm_cstates),
+
+ X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GEMINI_LAKE, glm_cstates),
{ },
};
MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);
X86_RAPL_MODEL_MATCH(INTEL_FAM6_KABYLAKE_DESKTOP, skl_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT, hsw_rapl_init),
+ X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_DENVERTON, hsw_rapl_init),
+
+ X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GEMINI_LAKE, hsw_rapl_init),
{},
};
static struct intel_uncore_type skx_uncore_iio = {
.name = "iio",
.num_counters = 4,
- .num_boxes = 5,
+ .num_boxes = 6,
.perf_ctr_bits = 48,
.event_ctl = SKX_IIO0_MSR_PMON_CTL0,
.perf_ctr = SKX_IIO0_MSR_PMON_CTR0,
static struct intel_uncore_type skx_uncore_irp = {
.name = "irp",
.num_counters = 2,
- .num_boxes = 5,
+ .num_boxes = 6,
.perf_ctr_bits = 48,
.event_ctl = SKX_IRP0_MSR_PMON_CTL0,
.perf_ctr = SKX_IRP0_MSR_PMON_CTR0,
case INTEL_FAM6_ATOM_SILVERMONT1:
case INTEL_FAM6_ATOM_SILVERMONT2:
case INTEL_FAM6_ATOM_AIRMONT:
+
+ case INTEL_FAM6_ATOM_GOLDMONT:
+ case INTEL_FAM6_ATOM_DENVERTON:
+
+ case INTEL_FAM6_ATOM_GEMINI_LAKE:
+
+ case INTEL_FAM6_XEON_PHI_KNL:
+ case INTEL_FAM6_XEON_PHI_KNM:
if (idx == PERF_MSR_SMI)
return true;
break;
ksig->ka.sa.sa_restorer)
sp = (unsigned long) ksig->ka.sa.sa_restorer;
- if (fpu->fpstate_active) {
+ if (fpu->initialized) {
unsigned long fx_aligned, math_size;
sp = fpu__alloc_mathframe(sp, 1, &fx_aligned, &math_size);
#define alternative_call_2(oldfunc, newfunc1, feature1, newfunc2, feature2, \
output, input...) \
{ \
- register void *__sp asm(_ASM_SP); \
asm volatile (ALTERNATIVE_2("call %P[old]", "call %P[new1]", feature1,\
"call %P[new2]", feature2) \
- : output, "+r" (__sp) \
+ : output, ASM_CALL_CONSTRAINT \
: [old] "i" (oldfunc), [new1] "i" (newfunc1), \
[new2] "i" (newfunc2), ## input); \
}
# define __ASM_FORM_COMMA(x) " " #x ","
#endif
-#ifdef CONFIG_X86_32
+#ifndef __x86_64__
+/* 32 bit */
# define __ASM_SEL(a,b) __ASM_FORM(a)
# define __ASM_SEL_RAW(a,b) __ASM_FORM_RAW(a)
#else
+/* 64 bit */
# define __ASM_SEL(a,b) __ASM_FORM(b)
# define __ASM_SEL_RAW(a,b) __ASM_FORM_RAW(b)
#endif
/* For C file, we already have NOKPROBE_SYMBOL macro */
#endif
+#ifndef __ASSEMBLY__
+/*
+ * This output constraint should be used for any inline asm which has a "call"
+ * instruction. Otherwise the asm may be inserted before the frame pointer
+ * gets set up by the containing function. If you forget to do this, objtool
+ * may print a "call without frame pointer save/setup" warning.
+ */
+register unsigned long current_stack_pointer asm(_ASM_SP);
+#define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
+#endif
+
#endif /* _ASM_X86_ASM_H */
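The converted call sites further down (get_user, paravirt, preempt, hypercalls) show the macro in real use; as a minimal standalone sketch, with an invented helper symbol:

	/* Hypothetical call site, not from the patch; "my_asm_helper" is invented. */
	static inline void call_asm_helper(void)
	{
		/* the "+r" (current_stack_pointer) output forces frame setup first */
		asm volatile("call my_asm_helper" : ASM_CALL_CONSTRAINT);
	}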
/*
* High level FPU state handling functions:
*/
-extern void fpu__activate_curr(struct fpu *fpu);
-extern void fpu__activate_fpstate_read(struct fpu *fpu);
-extern void fpu__activate_fpstate_write(struct fpu *fpu);
-extern void fpu__current_fpstate_write_begin(void);
-extern void fpu__current_fpstate_write_end(void);
+extern void fpu__initialize(struct fpu *fpu);
+extern void fpu__prepare_read(struct fpu *fpu);
+extern void fpu__prepare_write(struct fpu *fpu);
extern void fpu__save(struct fpu *fpu);
extern void fpu__restore(struct fpu *fpu);
extern int fpu__restore_sig(void __user *buf, int ia32_frame);
err; \
})
-#define check_insn(insn, output, input...) \
-({ \
- int err; \
+#define kernel_insn(insn, output, input...) \
asm volatile("1:" #insn "\n\t" \
"2:\n" \
- ".section .fixup,\"ax\"\n" \
- "3: movl $-1,%[err]\n" \
- " jmp 2b\n" \
- ".previous\n" \
- _ASM_EXTABLE(1b, 3b) \
- : [err] "=r" (err), output \
- : "0"(0), input); \
- err; \
-})
+ _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_fprestore) \
+ : output : input)
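Both kernel_insn() and the XSTATE_XRESTORE change below route a faulting restore to ex_handler_fprestore instead of threading back an error code the callers could not act on. The handler itself is added elsewhere in this series (arch/x86/mm/extable.c); roughly, it warns once and loads a known-good init state:

	/* sketch of the fixup handler, for orientation only */
	bool ex_handler_fprestore(const struct exception_table_entry *fixup,
				  struct pt_regs *regs, int trapnr)
	{
		regs->ip = ex_fixup_addr(fixup);	/* skip the faulting instruction */
		WARN_ONCE(1, "Bad FPU state detected, reinitializing FPU registers.");
		__copy_kernel_to_fpregs(&init_fpstate, -1);
		return true;
	}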
static inline int copy_fregs_to_user(struct fregs_state __user *fx)
{
static inline void copy_kernel_to_fxregs(struct fxregs_state *fx)
{
- int err;
-
if (IS_ENABLED(CONFIG_X86_32)) {
- err = check_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx));
+ kernel_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx));
} else {
if (IS_ENABLED(CONFIG_AS_FXSAVEQ)) {
- err = check_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx));
+ kernel_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx));
} else {
/* See comment in copy_fxregs_to_kernel() below. */
- err = check_insn(rex64/fxrstor (%[fx]), "=m" (*fx), [fx] "R" (fx), "m" (*fx));
+ kernel_insn(rex64/fxrstor (%[fx]), "=m" (*fx), [fx] "R" (fx), "m" (*fx));
}
}
- /* Copying from a kernel buffer to FPU registers should never fail: */
- WARN_ON_FPU(err);
}
static inline int copy_user_to_fxregs(struct fxregs_state __user *fx)
static inline void copy_kernel_to_fregs(struct fregs_state *fx)
{
- int err = check_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
-
- WARN_ON_FPU(err);
+ kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
}
static inline int copy_user_to_fregs(struct fregs_state __user *fx)
* Use XRSTORS to restore context if it is enabled. XRSTORS supports compact
* XSAVE area format.
*/
-#define XSTATE_XRESTORE(st, lmask, hmask, err) \
+#define XSTATE_XRESTORE(st, lmask, hmask) \
asm volatile(ALTERNATIVE(XRSTOR, \
XRSTORS, X86_FEATURE_XSAVES) \
"\n" \
- "xor %[err], %[err]\n" \
"3:\n" \
- ".pushsection .fixup,\"ax\"\n" \
- "4: movl $-2, %[err]\n" \
- "jmp 3b\n" \
- ".popsection\n" \
- _ASM_EXTABLE(661b, 4b) \
- : [err] "=r" (err) \
+ _ASM_EXTABLE_HANDLE(661b, 3b, ex_handler_fprestore)\
+ : \
: "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \
: "memory")
else
XSTATE_OP(XRSTOR, xstate, lmask, hmask, err);
- /* We should never fault when copying from a kernel buffer: */
+ /*
+ * We should never fault when copying from a kernel buffer, and the FPU
+ * state we set at boot time should be valid.
+ */
WARN_ON_FPU(err);
}
u32 hmask = mask >> 32;
int err;
- WARN_ON(!alternatives_patched);
+ WARN_ON_FPU(!alternatives_patched);
XSTATE_XSAVE(xstate, lmask, hmask, err);
{
u32 lmask = mask;
u32 hmask = mask >> 32;
- int err;
-
- XSTATE_XRESTORE(xstate, lmask, hmask, err);
- /* We should never fault when copying from a kernel buffer: */
- WARN_ON_FPU(err);
+ XSTATE_XRESTORE(xstate, lmask, hmask);
}
/*
*/
static inline void fpregs_deactivate(struct fpu *fpu)
{
- WARN_ON_FPU(!fpu->fpregs_active);
-
- fpu->fpregs_active = 0;
this_cpu_write(fpu_fpregs_owner_ctx, NULL);
trace_x86_fpu_regs_deactivated(fpu);
}
static inline void fpregs_activate(struct fpu *fpu)
{
- WARN_ON_FPU(fpu->fpregs_active);
-
- fpu->fpregs_active = 1;
this_cpu_write(fpu_fpregs_owner_ctx, fpu);
trace_x86_fpu_regs_activated(fpu);
}
-/*
- * The question "does this thread have fpu access?"
- * is slightly racy, since preemption could come in
- * and revoke it immediately after the test.
- *
- * However, even in that very unlikely scenario,
- * we can just assume we have FPU access - typically
- * to save the FP state - we'll just take a #NM
- * fault and get the FPU access back.
- */
-static inline int fpregs_active(void)
-{
- return current->thread.fpu.fpregs_active;
-}
-
/*
* FPU state switching for scheduling.
*
static inline void
switch_fpu_prepare(struct fpu *old_fpu, int cpu)
{
- if (old_fpu->fpregs_active) {
+ if (old_fpu->initialized) {
if (!copy_fpregs_to_fpstate(old_fpu))
old_fpu->last_cpu = -1;
else
old_fpu->last_cpu = cpu;
/* But leave fpu_fpregs_owner_ctx! */
- old_fpu->fpregs_active = 0;
trace_x86_fpu_regs_deactivated(old_fpu);
} else
old_fpu->last_cpu = -1;
static inline void switch_fpu_finish(struct fpu *new_fpu, int cpu)
{
bool preload = static_cpu_has(X86_FEATURE_FPU) &&
- new_fpu->fpstate_active;
+ new_fpu->initialized;
if (preload) {
if (!fpregs_state_valid(new_fpu, cpu))
struct fpu *fpu = ¤t->thread.fpu;
preempt_disable();
- if (!fpregs_active())
- fpregs_activate(fpu);
+ fpregs_activate(fpu);
preempt_enable();
}
/* Default value for fxregs_state.mxcsr: */
#define MXCSR_DEFAULT 0x1f80
+/* Copy both mxcsr & mxcsr_flags with a single u64 memcpy: */
+#define MXCSR_AND_FLAGS_SIZE sizeof(u64)
+
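Since mxcsr and mxcsr_flags are adjacent 32-bit fields of struct fxregs_state, a single 8-byte copy moves both; the xstate copy helpers introduced later in this series use the constant as in this sketch:

	memcpy(&xsave->i387.mxcsr, kbuf + offsetof(struct fxregs_state, mxcsr),
	       MXCSR_AND_FLAGS_SIZE);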
/*
* Software based FPU emulation state. This is arbitrary really,
* it matches the x87 format to make it easier to understand:
unsigned int last_cpu;
/*
- * @fpstate_active:
+ * @initialized:
*
- * This flag indicates whether this context is active: if the task
+ * This flag indicates whether this context is initialized: if the task
* is not running then we can restore from this context, if the task
* is running then we should save into this context.
*/
- unsigned char fpstate_active;
-
- /*
- * @fpregs_active:
- *
- * This flag determines whether a given context is actively
- * loaded into the FPU's registers and that those registers
- * represent the task's current FPU state.
- *
- * Note the interaction with fpstate_active:
- *
- * # task does not use the FPU:
- * fpstate_active == 0
- *
- * # task uses the FPU and regs are active:
- * fpstate_active == 1 && fpregs_active == 1
- *
- * # the regs are inactive but still match fpstate:
- * fpstate_active == 1 && fpregs_active == 0 && fpregs_owner == fpu
- *
- * The third state is what we use for the lazy restore optimization
- * on lazy-switching CPUs.
- */
- unsigned char fpregs_active;
+ unsigned char initialized;
/*
* @state:
void *get_xsave_addr(struct xregs_state *xsave, int xstate);
const void *get_xsave_field_ptr(int xstate_field);
int using_compacted_format(void);
-int copyout_from_xsaves(unsigned int pos, unsigned int count, void *kbuf,
- void __user *ubuf, struct xregs_state *xsave);
-int copyin_to_xsaves(const void *kbuf, const void __user *ubuf,
- struct xregs_state *xsave);
+int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int offset, unsigned int size);
+int copy_xstate_to_user(void __user *ubuf, struct xregs_state *xsave, unsigned int offset, unsigned int size);
+int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf);
+int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf);
+
+/* Validate an xstate header supplied by userspace (ptrace or sigreturn) */
+extern int validate_xstate_header(const struct xstate_header *hdr);
+
#endif
bool kvm_para_available(void);
unsigned int kvm_arch_para_features(void);
void __init kvm_guest_init(void);
-void kvm_async_pf_task_wait(u32 token);
+void kvm_async_pf_task_wait(u32 token, int interrupt_kernel);
void kvm_async_pf_task_wake(u32 token);
u32 kvm_read_and_reset_pf_reason(void);
extern void kvm_disable_steal_time(void);
#else /* CONFIG_KVM_GUEST */
#define kvm_guest_init() do {} while (0)
-#define kvm_async_pf_task_wait(T) do {} while(0)
+#define kvm_async_pf_task_wait(T, I) do {} while(0)
#define kvm_async_pf_task_wake(T) do {} while(0)
static inline bool kvm_para_available(void)
return __pkru_allows_pkey(vma_pkey(vma), write);
}
+/*
+ * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
+ * bits. This serves two purposes. It prevents a nasty situation in
+ * which PCID-unaware code saves CR3, loads some other value (with PCID
+ * == 0), and then restores CR3, thus corrupting the TLB for ASID 0 if
+ * the saved ASID was nonzero. It also means that any bugs involving
+ * loading a PCID-enabled CR3 with CR4.PCIDE off will trigger
+ * deterministically.
+ */
+
+static inline unsigned long build_cr3(struct mm_struct *mm, u16 asid)
+{
+ if (static_cpu_has(X86_FEATURE_PCID)) {
+ VM_WARN_ON_ONCE(asid > 4094);
+ return __sme_pa(mm->pgd) | (asid + 1);
+ } else {
+ VM_WARN_ON_ONCE(asid != 0);
+ return __sme_pa(mm->pgd);
+ }
+}
+
+static inline unsigned long build_cr3_noflush(struct mm_struct *mm, u16 asid)
+{
+ VM_WARN_ON_ONCE(asid > 4094);
+ return __sme_pa(mm->pgd) | (asid + 1) | CR3_NOFLUSH;
+}
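A quick worked example of the ASID+1 encoding (illustrative values):

	/* with CR4.PCIDE set: */
	cr3 = build_cr3(mm, 0);		/* __sme_pa(mm->pgd) | 1, ASID 0 -> PCID 1 */
	cr3 = build_cr3(mm, 5);		/* __sme_pa(mm->pgd) | 6, ASID 5 -> PCID 6 */

No ASID maps to PCID 0, so a PCID-unaware CR3 save/restore that writes back a value with zero low bits cannot alias a live ASID.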
/*
* This can be used from process context to figure out what the value of
*/
static inline unsigned long __get_current_cr3_fast(void)
{
- unsigned long cr3 = __pa(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd);
-
- if (static_cpu_has(X86_FEATURE_PCID))
- cr3 |= this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+ unsigned long cr3 = build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm),
+ this_cpu_read(cpu_tlbstate.loaded_mm_asid));
/* For now, be very restrictive about when this can be called. */
VM_WARN_ON(in_nmi() || preemptible());
u64 input_address = input ? virt_to_phys(input) : 0;
u64 output_address = output ? virt_to_phys(output) : 0;
u64 hv_status;
- register void *__sp asm(_ASM_SP);
#ifdef CONFIG_X86_64
if (!hv_hypercall_pg)
__asm__ __volatile__("mov %4, %%r8\n"
"call *%5"
- : "=a" (hv_status), "+r" (__sp),
+ : "=a" (hv_status), ASM_CALL_CONSTRAINT,
"+c" (control), "+d" (input_address)
: "r" (output_address), "m" (hv_hypercall_pg)
: "cc", "memory", "r8", "r9", "r10", "r11");
__asm__ __volatile__("call *%7"
: "=A" (hv_status),
- "+c" (input_address_lo), "+r" (__sp)
+ "+c" (input_address_lo), ASM_CALL_CONSTRAINT
: "A" (control),
"b" (input_address_hi),
"D"(output_address_hi), "S"(output_address_lo),
static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1)
{
u64 hv_status, control = (u64)code | HV_HYPERCALL_FAST_BIT;
- register void *__sp asm(_ASM_SP);
#ifdef CONFIG_X86_64
{
__asm__ __volatile__("call *%4"
- : "=a" (hv_status), "+r" (__sp),
+ : "=a" (hv_status), ASM_CALL_CONSTRAINT,
"+c" (control), "+d" (input1)
: "m" (hv_hypercall_pg)
: "cc", "r8", "r9", "r10", "r11");
__asm__ __volatile__ ("call *%5"
: "=A"(hv_status),
"+c"(input1_lo),
- "+r"(__sp)
+ ASM_CALL_CONSTRAINT
: "A" (control),
"b" (input1_hi),
"m" (hv_hypercall_pg)
*/
#ifdef CONFIG_X86_32
#define PVOP_VCALL_ARGS \
- unsigned long __eax = __eax, __edx = __edx, __ecx = __ecx; \
- register void *__sp asm("esp")
+ unsigned long __eax = __eax, __edx = __edx, __ecx = __ecx;
+
#define PVOP_CALL_ARGS PVOP_VCALL_ARGS
#define PVOP_CALL_ARG1(x) "a" ((unsigned long)(x))
/* [re]ax isn't an arg, but the return val */
#define PVOP_VCALL_ARGS \
unsigned long __edi = __edi, __esi = __esi, \
- __edx = __edx, __ecx = __ecx, __eax = __eax; \
- register void *__sp asm("rsp")
+ __edx = __edx, __ecx = __ecx, __eax = __eax;
+
#define PVOP_CALL_ARGS PVOP_VCALL_ARGS
#define PVOP_CALL_ARG1(x) "D" ((unsigned long)(x))
asm volatile(pre \
paravirt_alt(PARAVIRT_CALL) \
post \
- : call_clbr, "+r" (__sp) \
+ : call_clbr, ASM_CALL_CONSTRAINT \
: paravirt_type(op), \
paravirt_clobber(clbr), \
##__VA_ARGS__ \
asm volatile(pre \
paravirt_alt(PARAVIRT_CALL) \
post \
- : call_clbr, "+r" (__sp) \
+ : call_clbr, ASM_CALL_CONSTRAINT \
: paravirt_type(op), \
paravirt_clobber(clbr), \
##__VA_ARGS__ \
asm volatile(pre \
paravirt_alt(PARAVIRT_CALL) \
post \
- : call_clbr, "+r" (__sp) \
+ : call_clbr, ASM_CALL_CONSTRAINT \
: paravirt_type(op), \
paravirt_clobber(clbr), \
##__VA_ARGS__ \
#ifdef CONFIG_PREEMPT
extern asmlinkage void ___preempt_schedule(void);
-# define __preempt_schedule() \
-({ \
- register void *__sp asm(_ASM_SP); \
- asm volatile ("call ___preempt_schedule" : "+r"(__sp)); \
-})
+# define __preempt_schedule() \
+ asm volatile ("call ___preempt_schedule" : ASM_CALL_CONSTRAINT)
extern asmlinkage void preempt_schedule(void);
extern asmlinkage void ___preempt_schedule_notrace(void);
-# define __preempt_schedule_notrace() \
-({ \
- register void *__sp asm(_ASM_SP); \
- asm volatile ("call ___preempt_schedule_notrace" : "+r"(__sp)); \
-})
+# define __preempt_schedule_notrace() \
+ asm volatile ("call ___preempt_schedule_notrace" : ASM_CALL_CONSTRAINT)
+
extern asmlinkage void preempt_schedule_notrace(void);
#endif
* Like all of Linux's memory ordering operations, this is a
* compiler barrier as well.
*/
- register void *__sp asm(_ASM_SP);
-
#ifdef CONFIG_X86_32
asm volatile (
"pushfl\n\t"
"pushl $1f\n\t"
"iret\n\t"
"1:"
- : "+r" (__sp) : : "memory");
+ : ASM_CALL_CONSTRAINT : : "memory");
#else
unsigned int tmp;
"iretq\n\t"
UNWIND_HINT_RESTORE
"1:"
- : "=&r" (tmp), "+r" (__sp) : : "cc", "memory");
+ : "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
#endif
}
({ \
long tmp; \
struct rw_semaphore* ret; \
- register void *__sp asm(_ASM_SP); \
\
asm volatile("# beginning down_write\n\t" \
LOCK_PREFIX " xadd %1,(%4)\n\t" \
" call " slow_path "\n" \
"1:\n" \
"# ending down_write" \
- : "+m" (sem->count), "=d" (tmp), "=a" (ret), "+r" (__sp) \
+ : "+m" (sem->count), "=d" (tmp), \
+ "=a" (ret), ASM_CALL_CONSTRAINT \
: "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS) \
: "memory", "cc"); \
ret; \
*/
#ifndef __ASSEMBLY__
-static inline unsigned long current_stack_pointer(void)
-{
- unsigned long sp;
-#ifdef CONFIG_X86_64
- asm("mov %%rsp,%0" : "=g" (sp));
-#else
- asm("mov %%esp,%0" : "=g" (sp));
-#endif
- return sp;
-}
-
/*
* Walks up the stack frames to make sure that the specified object is
* entirely contained by a single stack frame.
TP_STRUCT__entry(
__field(struct fpu *, fpu)
- __field(bool, fpregs_active)
- __field(bool, fpstate_active)
+ __field(bool, initialized)
__field(u64, xfeatures)
__field(u64, xcomp_bv)
),
TP_fast_assign(
__entry->fpu = fpu;
- __entry->fpregs_active = fpu->fpregs_active;
- __entry->fpstate_active = fpu->fpstate_active;
+ __entry->initialized = fpu->initialized;
if (boot_cpu_has(X86_FEATURE_OSXSAVE)) {
__entry->xfeatures = fpu->state.xsave.header.xfeatures;
__entry->xcomp_bv = fpu->state.xsave.header.xcomp_bv;
}
),
- TP_printk("x86/fpu: %p fpregs_active: %d fpstate_active: %d xfeatures: %llx xcomp_bv: %llx",
+ TP_printk("x86/fpu: %p initialized: %d xfeatures: %llx xcomp_bv: %llx",
__entry->fpu,
- __entry->fpregs_active,
- __entry->fpstate_active,
+ __entry->initialized,
__entry->xfeatures,
__entry->xcomp_bv
)
({ \
int __ret_gu; \
register __inttype(*(ptr)) __val_gu asm("%"_ASM_DX); \
- register void *__sp asm(_ASM_SP); \
__chk_user_ptr(ptr); \
might_fault(); \
asm volatile("call __get_user_%P4" \
- : "=a" (__ret_gu), "=r" (__val_gu), "+r" (__sp) \
+ : "=a" (__ret_gu), "=r" (__val_gu), \
+ ASM_CALL_CONSTRAINT \
: "0" (ptr), "i" (sizeof(*(ptr)))); \
(x) = (__force __typeof__(*(ptr))) __val_gu; \
__builtin_expect(__ret_gu, 0); \
_ASM_EXTABLE(1b, 4b) \
_ASM_EXTABLE(2b, 4b) \
: "=r" (retval), "=&A"(x) \
- : "m" (__m(__ptr)), "m" __m(((u32 *)(__ptr)) + 1), \
+ : "m" (__m(__ptr)), "m" __m(((u32 __user *)(__ptr)) + 1), \
"i" (errret), "0" (retval)); \
})
register unsigned long __arg2 asm(__HYPERCALL_ARG2REG) = __arg2; \
register unsigned long __arg3 asm(__HYPERCALL_ARG3REG) = __arg3; \
register unsigned long __arg4 asm(__HYPERCALL_ARG4REG) = __arg4; \
- register unsigned long __arg5 asm(__HYPERCALL_ARG5REG) = __arg5; \
- register void *__sp asm(_ASM_SP);
+ register unsigned long __arg5 asm(__HYPERCALL_ARG5REG) = __arg5;
-#define __HYPERCALL_0PARAM "=r" (__res), "+r" (__sp)
+#define __HYPERCALL_0PARAM "=r" (__res), ASM_CALL_CONSTRAINT
#define __HYPERCALL_1PARAM __HYPERCALL_0PARAM, "+r" (__arg1)
#define __HYPERCALL_2PARAM __HYPERCALL_1PARAM, "+r" (__arg2)
#define __HYPERCALL_3PARAM __HYPERCALL_2PARAM, "+r" (__arg3)
MULTI_update_descriptor(struct multicall_entry *mcl, u64 maddr,
struct desc_struct desc)
{
- u32 *p = (u32 *) &desc;
-
mcl->op = __HYPERVISOR_update_descriptor;
if (sizeof(maddr) == sizeof(long)) {
mcl->args[0] = maddr;
mcl->args[1] = *(unsigned long *)&desc;
} else {
+ u32 *p = (u32 *)&desc;
+
mcl->args[0] = maddr;
mcl->args[1] = maddr >> 32;
mcl->args[2] = *p++;
}
}
+static void init_amd_zn(struct cpuinfo_x86 *c)
+{
+ /*
+ * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
+ * all revisions up to and including B1.
+ */
+ if (c->x86_model <= 1 && c->x86_mask <= 1)
+ set_cpu_cap(c, X86_FEATURE_CPB);
+}
+
static void init_amd(struct cpuinfo_x86 *c)
{
early_init_amd(c);
case 0x10: init_amd_gh(c); break;
case 0x12: init_amd_ln(c); break;
case 0x15: init_amd_bd(c); break;
+ case 0x17: init_amd_zn(c); break;
}
/* Enable workaround for FXSAVE leak */
void __init check_bugs(void)
{
-#ifdef CONFIG_X86_32
- /*
- * Regardless of whether PCID is enumerated, the SDM says
- * that it can't be enabled in 32-bit mode.
- */
- setup_clear_cpu_cap(X86_FEATURE_PCID);
-#endif
-
identify_boot_cpu();
if (!IS_ENABLED(CONFIG_SMP)) {
setup_force_cpu_cap(X86_FEATURE_ALWAYS);
fpu__init_system(c);
+
+#ifdef CONFIG_X86_32
+ /*
+ * Regardless of whether PCID is enumerated, the SDM says
+ * that it can't be enabled in 32-bit mode.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_PCID);
+#endif
}
void __init early_cpu_init(void)
kernel_fpu_disable();
- if (fpu->fpregs_active) {
+ if (fpu->initialized) {
/*
* Ignore return value -- we don't care if reg state
* is clobbered.
{
struct fpu *fpu = ¤t->thread.fpu;
- if (fpu->fpregs_active)
+ if (fpu->initialized)
copy_kernel_to_fpregs(&fpu->state);
kernel_fpu_enable();
preempt_disable();
trace_x86_fpu_before_save(fpu);
- if (fpu->fpregs_active) {
+ if (fpu->initialized) {
if (!copy_fpregs_to_fpstate(fpu)) {
copy_kernel_to_fpregs(&fpu->state);
}
int fpu__copy(struct fpu *dst_fpu, struct fpu *src_fpu)
{
- dst_fpu->fpregs_active = 0;
dst_fpu->last_cpu = -1;
- if (!src_fpu->fpstate_active || !static_cpu_has(X86_FEATURE_FPU))
+ if (!src_fpu->initialized || !static_cpu_has(X86_FEATURE_FPU))
return 0;
WARN_ON_FPU(src_fpu != ¤t->thread.fpu);
/*
* Save current FPU registers directly into the child
* FPU context, without any memory-to-memory copying.
- * In lazy mode, if the FPU context isn't loaded into
- * fpregs, CR0.TS will be set and do_device_not_available
- * will load the FPU context.
*
- * We have to do all this with preemption disabled,
- * mostly because of the FNSAVE case, because in that
- * case we must not allow preemption in the window
- * between the FNSAVE and us marking the context lazy.
- *
- * It shouldn't be an issue as even FNSAVE is plenty
- * fast in terms of critical section length.
+ * ( The function 'fails' in the FNSAVE case, which destroys
+ * register contents so we have to copy them back. )
*/
- preempt_disable();
if (!copy_fpregs_to_fpstate(dst_fpu)) {
- memcpy(&src_fpu->state, &dst_fpu->state,
- fpu_kernel_xstate_size);
-
+ memcpy(&src_fpu->state, &dst_fpu->state, fpu_kernel_xstate_size);
copy_kernel_to_fpregs(&src_fpu->state);
}
- preempt_enable();
trace_x86_fpu_copy_src(src_fpu);
trace_x86_fpu_copy_dst(dst_fpu);
* Activate the current task's in-memory FPU context,
* if it has not been used before:
*/
-void fpu__activate_curr(struct fpu *fpu)
+void fpu__initialize(struct fpu *fpu)
{
WARN_ON_FPU(fpu != ¤t->thread.fpu);
- if (!fpu->fpstate_active) {
+ if (!fpu->initialized) {
fpstate_init(&fpu->state);
trace_x86_fpu_init_state(fpu);
trace_x86_fpu_activate_state(fpu);
/* Safe to do for the current task: */
- fpu->fpstate_active = 1;
+ fpu->initialized = 1;
}
}
-EXPORT_SYMBOL_GPL(fpu__activate_curr);
+EXPORT_SYMBOL_GPL(fpu__initialize);
/*
* This function must be called before we read a task's fpstate.
*
- * If the task has not used the FPU before then initialize its
- * fpstate.
+ * There are two cases where this gets called:
+ *
+ * - for the current task (when coredumping), in which case we have
+ * to save the latest FPU registers into the fpstate,
+ *
+ * - or it's called for stopped tasks (ptrace), in which case the
+ * registers were already saved by the context-switch code when
+ * the task scheduled out - we only have to initialize the registers
+ * if they've never been initialized.
*
* If the task has used the FPU before then save it.
*/
-void fpu__activate_fpstate_read(struct fpu *fpu)
+void fpu__prepare_read(struct fpu *fpu)
{
- /*
- * If fpregs are active (in the current CPU), then
- * copy them to the fpstate:
- */
- if (fpu->fpregs_active) {
+ if (fpu == ¤t->thread.fpu) {
fpu__save(fpu);
} else {
- if (!fpu->fpstate_active) {
+ if (!fpu->initialized) {
fpstate_init(&fpu->state);
trace_x86_fpu_init_state(fpu);
trace_x86_fpu_activate_state(fpu);
/* Safe to do for current and for stopped child tasks: */
- fpu->fpstate_active = 1;
+ fpu->initialized = 1;
}
}
}
/*
* This function must be called before we write a task's fpstate.
*
- * If the task has used the FPU before then unlazy it.
+ * If the task has used the FPU before then invalidate any cached FPU registers.
* If the task has not used the FPU before then initialize its fpstate.
*
* After this function call, after registers in the fpstate are
* modified and the child task has woken up, the child task will
* restore the modified FPU state from the modified context. If we
- * didn't clear its lazy status here then the lazy in-registers
+ * didn't clear its cached status here then the cached in-registers
* state pending on its former CPU could be restored, corrupting
* the modifications.
*/
-void fpu__activate_fpstate_write(struct fpu *fpu)
+void fpu__prepare_write(struct fpu *fpu)
{
/*
* Only stopped child tasks can be used to modify the FPU
*/
WARN_ON_FPU(fpu == ¤t->thread.fpu);
- if (fpu->fpstate_active) {
- /* Invalidate any lazy state: */
+ if (fpu->initialized) {
+ /* Invalidate any cached state: */
__fpu_invalidate_fpregs_state(fpu);
} else {
fpstate_init(&fpu->state);
trace_x86_fpu_activate_state(fpu);
/* Safe to do for stopped child tasks: */
- fpu->fpstate_active = 1;
+ fpu->initialized = 1;
}
}
-/*
- * This function must be called before we write the current
- * task's fpstate.
- *
- * This call gets the current FPU register state and moves
- * it in to the 'fpstate'. Preemption is disabled so that
- * no writes to the 'fpstate' can occur from context
- * swiches.
- *
- * Must be followed by a fpu__current_fpstate_write_end().
- */
-void fpu__current_fpstate_write_begin(void)
-{
- struct fpu *fpu = ¤t->thread.fpu;
-
- /*
- * Ensure that the context-switching code does not write
- * over the fpstate while we are doing our update.
- */
- preempt_disable();
-
- /*
- * Move the fpregs in to the fpu's 'fpstate'.
- */
- fpu__activate_fpstate_read(fpu);
-
- /*
- * The caller is about to write to 'fpu'. Ensure that no
- * CPU thinks that its fpregs match the fpstate. This
- * ensures we will not be lazy and skip a XRSTOR in the
- * future.
- */
- __fpu_invalidate_fpregs_state(fpu);
-}
-
-/*
- * This function must be paired with fpu__current_fpstate_write_begin()
- *
- * This will ensure that the modified fpstate gets placed back in
- * the fpregs if necessary.
- *
- * Note: This function may be called whether or not an _actual_
- * write to the fpstate occurred.
- */
-void fpu__current_fpstate_write_end(void)
-{
- struct fpu *fpu = ¤t->thread.fpu;
-
- /*
- * 'fpu' now has an updated copy of the state, but the
- * registers may still be out of date. Update them with
- * an XRSTOR if they are active.
- */
- if (fpregs_active())
- copy_kernel_to_fpregs(&fpu->state);
-
- /*
- * Our update is done and the fpregs/fpstate are in sync
- * if necessary. Context switches can happen again.
- */
- preempt_enable();
-}
-
/*
* 'fpu__restore()' is called to copy FPU registers from
* the FPU fpstate to the live hw registers and to activate
*/
void fpu__restore(struct fpu *fpu)
{
- fpu__activate_curr(fpu);
+ fpu__initialize(fpu);
/* Avoid __kernel_fpu_begin() right after fpregs_activate() */
kernel_fpu_disable();
{
preempt_disable();
- if (fpu->fpregs_active) {
- /* Ignore delayed exceptions from user space */
- asm volatile("1: fwait\n"
- "2:\n"
- _ASM_EXTABLE(1b, 2b));
- fpregs_deactivate(fpu);
+ if (fpu == ¤t->thread.fpu) {
+ if (fpu->initialized) {
+ /* Ignore delayed exceptions from user space */
+ asm volatile("1: fwait\n"
+ "2:\n"
+ _ASM_EXTABLE(1b, 2b));
+ fpregs_deactivate(fpu);
+ }
}
- fpu->fpstate_active = 0;
+ fpu->initialized = 0;
trace_x86_fpu_dropped(fpu);
* Make sure fpstate is cleared and initialized.
*/
if (static_cpu_has(X86_FEATURE_FPU)) {
- fpu__activate_curr(fpu);
+ preempt_disable();
+ fpu__initialize(fpu);
user_fpu_begin();
copy_init_fpstate_to_fpregs();
+ preempt_enable();
}
}
WARN_ON_FPU(!on_boot_cpu);
on_boot_cpu = 0;
- WARN_ON_FPU(current->thread.fpu.fpstate_active);
+ WARN_ON_FPU(current->thread.fpu.initialized);
}
/*
{
struct fpu *target_fpu = &target->thread.fpu;
- return target_fpu->fpstate_active ? regset->n : 0;
+ return target_fpu->initialized ? regset->n : 0;
}
int regset_xregset_fpregs_active(struct task_struct *target, const struct user_regset *regset)
{
struct fpu *target_fpu = &target->thread.fpu;
- if (boot_cpu_has(X86_FEATURE_FXSR) && target_fpu->fpstate_active)
+ if (boot_cpu_has(X86_FEATURE_FXSR) && target_fpu->initialized)
return regset->n;
else
return 0;
if (!boot_cpu_has(X86_FEATURE_FXSR))
return -ENODEV;
- fpu__activate_fpstate_read(fpu);
+ fpu__prepare_read(fpu);
fpstate_sanitize_xstate(fpu);
return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
if (!boot_cpu_has(X86_FEATURE_FXSR))
return -ENODEV;
- fpu__activate_fpstate_write(fpu);
+ fpu__prepare_write(fpu);
fpstate_sanitize_xstate(fpu);
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
xsave = &fpu->state.xsave;
- fpu__activate_fpstate_read(fpu);
+ fpu__prepare_read(fpu);
if (using_compacted_format()) {
- ret = copyout_from_xsaves(pos, count, kbuf, ubuf, xsave);
+ if (kbuf)
+ ret = copy_xstate_to_kernel(kbuf, xsave, pos, count);
+ else
+ ret = copy_xstate_to_user(ubuf, xsave, pos, count);
} else {
fpstate_sanitize_xstate(fpu);
/*
xsave = &fpu->state.xsave;
- fpu__activate_fpstate_write(fpu);
+ fpu__prepare_write(fpu);
- if (boot_cpu_has(X86_FEATURE_XSAVES))
- ret = copyin_to_xsaves(kbuf, ubuf, xsave);
- else
+ if (using_compacted_format()) {
+ if (kbuf)
+ ret = copy_kernel_to_xstate(xsave, kbuf);
+ else
+ ret = copy_user_to_xstate(xsave, ubuf);
+ } else {
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, xsave, 0, -1);
-
- /*
- * In case of failure, mark all states as init:
- */
- if (ret)
- fpstate_init(&fpu->state);
+ if (!ret)
+ ret = validate_xstate_header(&xsave->header);
+ }
/*
* mxcsr reserved bits must be masked to zero for security reasons.
*/
xsave->i387.mxcsr &= mxcsr_feature_mask;
- xsave->header.xfeatures &= xfeatures_mask;
+
/*
- * These bits must be zero.
+ * In case of failure, mark all states as init:
*/
- memset(&xsave->header.reserved, 0, 48);
+ if (ret)
+ fpstate_init(&fpu->state);
return ret;
}
struct fpu *fpu = &target->thread.fpu;
struct user_i387_ia32_struct env;
- fpu__activate_fpstate_read(fpu);
+ fpu__prepare_read(fpu);
if (!boot_cpu_has(X86_FEATURE_FPU))
return fpregs_soft_get(target, regset, pos, count, kbuf, ubuf);
struct user_i387_ia32_struct env;
int ret;
- fpu__activate_fpstate_write(fpu);
+ fpu__prepare_write(fpu);
fpstate_sanitize_xstate(fpu);
if (!boot_cpu_has(X86_FEATURE_FPU))
struct fpu *fpu = &tsk->thread.fpu;
int fpvalid;
- fpvalid = fpu->fpstate_active;
+ fpvalid = fpu->initialized;
if (fpvalid)
fpvalid = !fpregs_get(tsk, NULL,
0, sizeof(struct user_i387_ia32_struct),
*/
int copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
{
- struct xregs_state *xsave = ¤t->thread.fpu.state.xsave;
+ struct fpu *fpu = ¤t->thread.fpu;
+ struct xregs_state *xsave = &fpu->state.xsave;
struct task_struct *tsk = current;
int ia32_fxstate = (buf != buf_fx);
sizeof(struct user_i387_ia32_struct), NULL,
(struct _fpstate_32 __user *) buf) ? -1 : 1;
- if (fpregs_active() || using_compacted_format()) {
+ if (fpu->initialized || using_compacted_format()) {
/* Save the live register state to the user directly. */
if (copy_fpregs_to_sigframe(buf_fx))
return -1;
/* Update the thread's fxstate to save the fsave header. */
if (ia32_fxstate)
- copy_fxregs_to_kernel(&tsk->thread.fpu);
+ copy_fxregs_to_kernel(fpu);
} else {
/*
* It is a *bug* if kernel uses compacted-format for xsave
return -1;
}
- fpstate_sanitize_xstate(&tsk->thread.fpu);
+ fpstate_sanitize_xstate(fpu);
if (__copy_to_user(buf_fx, xsave, fpu_user_xstate_size))
return -1;
}
struct xstate_header *header = &xsave->header;
if (use_xsave()) {
- /* These bits must be zero. */
- memset(header->reserved, 0, 48);
+ /*
+ * Note: we don't need to zero the reserved bits in the
+ * xstate_header here because we either didn't copy them at all,
+ * or we checked earlier that they aren't set.
+ */
/*
* Init the state that is not present in the memory
if (fx_only)
header->xfeatures = XFEATURE_MASK_FPSSE;
else
- header->xfeatures &= (xfeatures_mask & xfeatures);
+ header->xfeatures &= xfeatures;
}
if (use_fxsr()) {
if (!access_ok(VERIFY_READ, buf, size))
return -EACCES;
- fpu__activate_curr(fpu);
+ fpu__initialize(fpu);
if (!static_cpu_has(X86_FEATURE_FPU))
return fpregs_soft_set(current, NULL,
/*
* For 32-bit frames with fxstate, copy the user state to the
* thread's fpu state, reconstruct fxstate from the fsave
- * header. Sanitize the copied state etc.
+ * header. Validate and sanitize the copied state.
*/
struct fpu *fpu = &tsk->thread.fpu;
struct user_i387_ia32_struct env;
int err = 0;
/*
- * Drop the current fpu which clears fpu->fpstate_active. This ensures
+ * Drop the current fpu which clears fpu->initialized. This ensures
* that no context switch during the copy of the new state can
* restore or save the intermediate state, which would otherwise
* corrupt the newly restored state.
* We will be ready to restore/save the state only after
- * fpu->fpstate_active is again set.
+ * fpu->initialized is again set.
*/
fpu__drop(fpu);
if (using_compacted_format()) {
- err = copyin_to_xsaves(NULL, buf_fx,
- &fpu->state.xsave);
+ err = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
} else {
- err = __copy_from_user(&fpu->state.xsave,
- buf_fx, state_size);
+ err = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
+
+ if (!err && state_size > offsetof(struct xregs_state, header))
+ err = validate_xstate_header(&fpu->state.xsave.header);
}
if (err || __copy_from_user(&env, buf, sizeof(env))) {
sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
}
- fpu->fpstate_active = 1;
+ fpu->initialized = 1;
preempt_disable();
fpu__restore(fpu);
preempt_enable();
return boot_cpu_has(X86_FEATURE_XSAVES);
}
+/* Validate an xstate header supplied by userspace (ptrace or sigreturn) */
+int validate_xstate_header(const struct xstate_header *hdr)
+{
+ /* No unknown or supervisor features may be set */
+ if (hdr->xfeatures & (~xfeatures_mask | XFEATURE_MASK_SUPERVISOR))
+ return -EINVAL;
+
+ /* Userspace must use the uncompacted format */
+ if (hdr->xcomp_bv)
+ return -EINVAL;
+
+ /*
+ * If 'reserved' is shrunken to add a new field, make sure to validate
+ * that new field here!
+ */
+ BUILD_BUG_ON(sizeof(hdr->reserved) != 48);
+
+ /* No reserved bits may be set */
+ if (memchr_inv(hdr->reserved, 0, sizeof(hdr->reserved)))
+ return -EINVAL;
+
+ return 0;
+}
+
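For intuition, a hypothetical input that the new check rejects (illustrative only):

	struct xstate_header hdr = { };

	hdr.xfeatures = XFEATURE_MASK_FPSSE;	/* ordinary user states: allowed */
	hdr.xcomp_bv  = (u64)1 << 63;		/* compacted-format bit set */
	/* fails: userspace must supply the uncompacted format */
	WARN_ON(validate_xstate_header(&hdr) != -EINVAL);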
static void __xstate_dump_leaves(void)
{
int i;
{
struct fpu *fpu = ¤t->thread.fpu;
- if (!fpu->fpstate_active)
+ if (!fpu->initialized)
return NULL;
/*
* fpu__save() takes the CPU's xstate registers
}
#endif /* ! CONFIG_ARCH_HAS_PKEYS */
+/*
+ * Weird legacy quirk: SSE and YMM states store information in the
+ * MXCSR and MXCSR_FLAGS fields of the FP area. That means if the FP
+ * area is marked as unused in the xfeatures header, we need to copy
+ * MXCSR and MXCSR_FLAGS if either SSE or YMM are in use.
+ */
+static inline bool xfeatures_mxcsr_quirk(u64 xfeatures)
+{
+ if (!(xfeatures & (XFEATURE_MASK_SSE|XFEATURE_MASK_YMM)))
+ return false;
+
+ if (xfeatures & XFEATURE_MASK_FP)
+ return false;
+
+ return true;
+}
+
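For example, with illustrative masks:

	xfeatures_mxcsr_quirk(XFEATURE_MASK_SSE);			/* true  */
	xfeatures_mxcsr_quirk(XFEATURE_MASK_YMM);			/* true  */
	xfeatures_mxcsr_quirk(XFEATURE_MASK_FP | XFEATURE_MASK_SSE);	/* false */

In the false case the FP area is copied anyway, so MXCSR travels with it.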
/*
* This is similar to user_regset_copyout(), but will not add offset to
* the source data pointer or increment pos, count, kbuf, and ubuf.
*/
-static inline int xstate_copyout(unsigned int pos, unsigned int count,
- void *kbuf, void __user *ubuf,
- const void *data, const int start_pos,
- const int end_pos)
+static inline void
+__copy_xstate_to_kernel(void *kbuf, const void *data,
+ unsigned int offset, unsigned int size, unsigned int size_total)
{
- if ((count == 0) || (pos < start_pos))
- return 0;
+ if (offset < size_total) {
+ unsigned int copy = min(size, size_total - offset);
- if (end_pos < 0 || pos < end_pos) {
- unsigned int copy = (end_pos < 0 ? count : min(count, end_pos - pos));
+ memcpy(kbuf + offset, data, copy);
+ }
+}
- if (kbuf) {
- memcpy(kbuf + pos, data, copy);
- } else {
- if (__copy_to_user(ubuf + pos, data, copy))
- return -EFAULT;
+/*
+ * Convert from kernel XSAVES compacted format to standard format and copy
+ * to a kernel-space ptrace buffer.
+ *
+ * It supports partial copy but pos always starts from zero. This is called
+ * from xstateregs_get() and there we check the CPU has XSAVES.
+ */
+int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int offset_start, unsigned int size_total)
+{
+ unsigned int offset, size;
+ struct xstate_header header;
+ int i;
+
+ /*
+ * Currently copy_regset_to_user() starts from pos 0:
+ */
+ if (unlikely(offset_start != 0))
+ return -EFAULT;
+
+ /*
+ * The destination is a ptrace buffer; we put in only user xstates:
+ */
+ memset(&header, 0, sizeof(header));
+ header.xfeatures = xsave->header.xfeatures;
+ header.xfeatures &= ~XFEATURE_MASK_SUPERVISOR;
+
+ /*
+ * Copy xregs_state->header:
+ */
+ offset = offsetof(struct xregs_state, header);
+ size = sizeof(header);
+
+ __copy_xstate_to_kernel(kbuf, &header, offset, size, size_total);
+
+ for (i = 0; i < XFEATURE_MAX; i++) {
+ /*
+ * Copy only in-use xstates:
+ */
+ if ((header.xfeatures >> i) & 1) {
+ void *src = __raw_xsave_addr(xsave, 1 << i);
+
+ offset = xstate_offsets[i];
+ size = xstate_sizes[i];
+
+ /* The next component has to fit fully into the output buffer: */
+ if (offset + size > size_total)
+ break;
+
+ __copy_xstate_to_kernel(kbuf, src, offset, size, size_total);
}
+
+ }
+
+ if (xfeatures_mxcsr_quirk(header.xfeatures)) {
+ offset = offsetof(struct fxregs_state, mxcsr);
+ size = MXCSR_AND_FLAGS_SIZE;
+ __copy_xstate_to_kernel(kbuf, &xsave->i387.mxcsr, offset, size, size_total);
+ }
+
+ /*
+ * Fill xsave->i387.sw_reserved value for ptrace frame:
+ */
+ offset = offsetof(struct fxregs_state, sw_reserved);
+ size = sizeof(xstate_fx_sw_bytes);
+
+ __copy_xstate_to_kernel(kbuf, xstate_fx_sw_bytes, offset, size, size_total);
+
+ return 0;
+}
+
+static inline int
+__copy_xstate_to_user(void __user *ubuf, const void *data, unsigned int offset, unsigned int size, unsigned int size_total)
+{
+ if (!size)
+ return 0;
+
+ if (offset < size_total) {
+ unsigned int copy = min(size, size_total - offset);
+
+ if (__copy_to_user(ubuf + offset, data, copy))
+ return -EFAULT;
}
return 0;
}
/*
* Convert from kernel XSAVES compacted format to standard format and copy
- * to a ptrace buffer. It supports partial copy but pos always starts from
+ * to a user-space buffer. It supports partial copy but pos always starts from
* zero. This is called from xstateregs_get() and there we check the CPU
* has XSAVES.
*/
-int copyout_from_xsaves(unsigned int pos, unsigned int count, void *kbuf,
- void __user *ubuf, struct xregs_state *xsave)
+int copy_xstate_to_user(void __user *ubuf, struct xregs_state *xsave, unsigned int offset_start, unsigned int size_total)
{
unsigned int offset, size;
int ret, i;
/*
* Currently copy_regset_to_user() starts from pos 0:
*/
- if (unlikely(pos != 0))
+ if (unlikely(offset_start != 0))
return -EFAULT;
/*
offset = offsetof(struct xregs_state, header);
size = sizeof(header);
- ret = xstate_copyout(offset, size, kbuf, ubuf, &header, 0, count);
-
+ ret = __copy_xstate_to_user(ubuf, &header, offset, size, size_total);
if (ret)
return ret;
offset = xstate_offsets[i];
size = xstate_sizes[i];
- ret = xstate_copyout(offset, size, kbuf, ubuf, src, 0, count);
+ /* The next component has to fit fully into the output buffer: */
+ if (offset + size > size_total)
+ break;
+ ret = __copy_xstate_to_user(ubuf, src, offset, size, size_total);
if (ret)
return ret;
-
- if (offset + size >= count)
- break;
}
}
+ if (xfeatures_mxcsr_quirk(header.xfeatures)) {
+ offset = offsetof(struct fxregs_state, mxcsr);
+ size = MXCSR_AND_FLAGS_SIZE;
+ __copy_xstate_to_user(ubuf, &xsave->i387.mxcsr, offset, size, size_total);
+ }
+
/*
* Fill xsave->i387.sw_reserved value for ptrace frame:
*/
offset = offsetof(struct fxregs_state, sw_reserved);
size = sizeof(xstate_fx_sw_bytes);
- ret = xstate_copyout(offset, size, kbuf, ubuf, xstate_fx_sw_bytes, 0, count);
-
+ ret = __copy_xstate_to_user(ubuf, xstate_fx_sw_bytes, offset, size, size_total);
if (ret)
return ret;
}
/*
- * Convert from a ptrace standard-format buffer to kernel XSAVES format
- * and copy to the target thread. This is called from xstateregs_set() and
- * there we check the CPU has XSAVES and a whole standard-sized buffer
- * exists.
+ * Convert from a ptrace standard-format kernel buffer to kernel XSAVES format
+ * and copy to the target thread. This is called from xstateregs_set().
*/
-int copyin_to_xsaves(const void *kbuf, const void __user *ubuf,
- struct xregs_state *xsave)
+int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf)
{
unsigned int offset, size;
int i;
- u64 xfeatures;
- u64 allowed_features;
+ struct xstate_header hdr;
offset = offsetof(struct xregs_state, header);
- size = sizeof(xfeatures);
+ size = sizeof(hdr);
- if (kbuf) {
- memcpy(&xfeatures, kbuf + offset, size);
- } else {
- if (__copy_from_user(&xfeatures, ubuf + offset, size))
- return -EFAULT;
+ memcpy(&hdr, kbuf + offset, size);
+
+ if (validate_xstate_header(&hdr))
+ return -EINVAL;
+
+ for (i = 0; i < XFEATURE_MAX; i++) {
+ u64 mask = ((u64)1 << i);
+
+ if (hdr.xfeatures & mask) {
+ void *dst = __raw_xsave_addr(xsave, 1 << i);
+
+ offset = xstate_offsets[i];
+ size = xstate_sizes[i];
+
+ memcpy(dst, kbuf + offset, size);
+ }
+ }
+
+ if (xfeatures_mxcsr_quirk(hdr.xfeatures)) {
+ offset = offsetof(struct fxregs_state, mxcsr);
+ size = MXCSR_AND_FLAGS_SIZE;
+ memcpy(&xsave->i387.mxcsr, kbuf + offset, size);
}
/*
- * Reject if the user sets any disabled or supervisor features:
+ * The state that came in from userspace was user-state only.
+ * Mask all the user states out of 'xfeatures':
+ */
+ xsave->header.xfeatures &= XFEATURE_MASK_SUPERVISOR;
+
+ /*
+ * Add back in the features that came in from userspace:
*/
- allowed_features = xfeatures_mask & ~XFEATURE_MASK_SUPERVISOR;
+ xsave->header.xfeatures |= hdr.xfeatures;
- if (xfeatures & ~allowed_features)
+ return 0;
+}
+
+/*
+ * Convert from a ptrace or sigreturn standard-format user-space buffer to
+ * kernel XSAVES format and copy to the target thread. This is called from
+ * xstateregs_set(), as well as potentially from the sigreturn() and
+ * rt_sigreturn() system calls.
+ */
+int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf)
+{
+ unsigned int offset, size;
+ int i;
+ struct xstate_header hdr;
+
+ offset = offsetof(struct xregs_state, header);
+ size = sizeof(hdr);
+
+ if (__copy_from_user(&hdr, ubuf + offset, size))
+ return -EFAULT;
+
+ if (validate_xstate_header(&hdr))
return -EINVAL;
for (i = 0; i < XFEATURE_MAX; i++) {
u64 mask = ((u64)1 << i);
- if (xfeatures & mask) {
+ if (hdr.xfeatures & mask) {
void *dst = __raw_xsave_addr(xsave, 1 << i);
offset = xstate_offsets[i];
size = xstate_sizes[i];
- if (kbuf) {
- memcpy(dst, kbuf + offset, size);
- } else {
- if (__copy_from_user(dst, ubuf + offset, size))
- return -EFAULT;
- }
+ if (__copy_from_user(dst, ubuf + offset, size))
+ return -EFAULT;
}
}
+ if (xfeatures_mxcsr_quirk(hdr.xfeatures)) {
+ offset = offsetof(struct fxregs_state, mxcsr);
+ size = MXCSR_AND_FLAGS_SIZE;
+ if (__copy_from_user(&xsave->i387.mxcsr, ubuf + offset, size))
+ return -EFAULT;
+ }
+
/*
* The state that came in from userspace was user-state only.
* Mask all the user states out of 'xfeatures':
/*
* Add back in the features that came in from userspace:
*/
- xsave->header.xfeatures |= xfeatures;
+ xsave->header.xfeatures |= hdr.xfeatures;
return 0;
}
static inline void *current_stack(void)
{
- return (void *)(current_stack_pointer() & ~(THREAD_SIZE - 1));
+ return (void *)(current_stack_pointer & ~(THREAD_SIZE - 1));
}
static inline int execute_on_irq_stack(int overflow, struct irq_desc *desc)
/* Save the next esp at the bottom of the stack */
prev_esp = (u32 *)irqstk;
- *prev_esp = current_stack_pointer();
+ *prev_esp = current_stack_pointer;
if (unlikely(overflow))
call_on_stack(print_stack_overflow, isp);
/* Push the previous esp onto the stack */
prev_esp = (u32 *)irqstk;
- *prev_esp = current_stack_pointer();
+ *prev_esp = current_stack_pointer;
call_on_stack(__do_softirq, isp);
}
return 0;
out_clean_nodes:
- for (j = i - 1; j > 0; j--)
+ for (j = i - 1; j >= 0; j--)
cleanup_setup_data_node(*(kobjp + j));
kfree(kobjp);
out_setup_data_kobj:
return NULL;
}
-void kvm_async_pf_task_wait(u32 token)
+/*
+ * @interrupt_kernel: Is this called from a routine which interrupts the kernel
+ * (other than user space)?
+ */
+void kvm_async_pf_task_wait(u32 token, int interrupt_kernel)
{
u32 key = hash_32(token, KVM_TASK_SLEEP_HASHBITS);
struct kvm_task_sleep_head *b = &async_pf_sleepers[key];
n.token = token;
n.cpu = smp_processor_id();
- n.halted = is_idle_task(current) || preempt_count() > 1;
+ n.halted = is_idle_task(current) ||
+ (IS_ENABLED(CONFIG_PREEMPT_COUNT)
+ ? preempt_count() > 1 || rcu_preempt_depth()
+ : interrupt_kernel);
init_swait_queue_head(&n.wq);
hlist_add_head(&n.link, &b->list);
raw_spin_unlock(&b->lock);
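Summarizing the new n.halted logic (cases derived from the expression above):

	/*
	 * idle task                                    -> halt (poll)
	 * CONFIG_PREEMPT_COUNT && preempt_count() > 1  -> halt (may not schedule)
	 * CONFIG_PREEMPT_COUNT && rcu_preempt_depth()  -> halt (inside RCU read side)
	 * !CONFIG_PREEMPT_COUNT && fault hit the kernel -> halt (can't tell if safe)
	 * otherwise                                    -> sleep on the wait queue
	 */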
case KVM_PV_REASON_PAGE_NOT_PRESENT:
/* page is swapped out by the host. */
prev_state = exception_enter();
- kvm_async_pf_task_wait((u32)read_cr2());
+ kvm_async_pf_task_wait((u32)read_cr2(), !user_mode(regs));
exception_exit(prev_state);
break;
case KVM_PV_REASON_PAGE_READY:
sp = (unsigned long) ka->sa.sa_restorer;
}
- if (fpu->fpstate_active) {
+ if (fpu->initialized) {
sp = fpu__alloc_mathframe(sp, IS_ENABLED(CONFIG_X86_32),
&buf_fx, &math_size);
*fpstate = (void __user *)sp;
return (void __user *)-1L;
/* save i387 and extended state */
- if (fpu->fpstate_active &&
+ if (fpu->initialized &&
copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size) < 0)
return (void __user *)-1L;
/*
* Ensure the signal handler starts with the new fpu state.
*/
- if (fpu->fpstate_active)
+ if (fpu->initialized)
fpu__clear(fpu);
}
signal_setup_done(failed, ksig, stepping);
*/
if (boot_cpu_has(X86_FEATURE_PCID))
__write_cr4(__read_cr4() | X86_CR4_PCIDE);
- cpu_init();
- x86_cpuinit.early_percpu_clock_init();
- preempt_disable();
- smp_callin();
-
- enable_start_cpu0 = 0;
#ifdef CONFIG_X86_32
/* switch away from the initial page table */
__flush_tlb_all();
#endif
+ cpu_init();
+ x86_cpuinit.early_percpu_clock_init();
+ preempt_disable();
+ smp_callin();
+
+ enable_start_cpu0 = 0;
+
/* otherwise gcc will move up smp_processor_id before the cpu_init */
barrier();
/*
* from double_fault.
*/
BUG_ON((unsigned long)(current_top_of_stack() -
- current_stack_pointer()) >= THREAD_SIZE);
+ current_stack_pointer) >= THREAD_SIZE);
preempt_enable_no_resched();
}
depends on HIGH_RES_TIMERS
# for TASKSTATS/TASK_DELAY_ACCT:
depends on NET && MULTIUSER
+ depends on X86_LOCAL_APIC
select PREEMPT_NOTIFIERS
select MMU_NOTIFIER
select ANON_INODES
#op " %al \n\t" \
FOP_RET
-asm(".global kvm_fastop_exception \n"
- "kvm_fastop_exception: xor %esi, %esi; ret");
+asm(".pushsection .fixup, \"ax\"\n"
+ ".global kvm_fastop_exception \n"
+ "kvm_fastop_exception: xor %esi, %esi; ret\n"
+ ".popsection");
FOP_START(setcc)
FOP_SETCC(seto)
ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
if (efer & EFER_LMA) {
u64 maxphyaddr;
- u32 eax = 0x80000008;
+ u32 eax, ebx, ecx, edx;
- if (ctxt->ops->get_cpuid(ctxt, &eax, NULL, NULL,
- NULL, false))
+ eax = 0x80000008;
+ ecx = 0;
+ if (ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx,
+ &edx, false))
maxphyaddr = eax & 0xff;
else
maxphyaddr = 36;
static int fastop(struct x86_emulate_ctxt *ctxt, void (*fop)(struct fastop *))
{
- register void *__sp asm(_ASM_SP);
ulong flags = (ctxt->eflags & EFLAGS_MASK) | X86_EFLAGS_IF;
if (!(ctxt->d & ByteOp))
asm("push %[flags]; popf; call *%[fastop]; pushf; pop %[flags]\n"
: "+a"(ctxt->dst.val), "+d"(ctxt->src.val), [flags]"+D"(flags),
- [fastop]"+S"(fop), "+r"(__sp)
+ [fastop]"+S"(fop), ASM_CALL_CONSTRAINT
: "c"(ctxt->src2.val));
ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);
case KVM_PV_REASON_PAGE_NOT_PRESENT:
vcpu->arch.apf.host_apf_reason = 0;
local_irq_disable();
- kvm_async_pf_task_wait(fault_address);
+ kvm_async_pf_task_wait(fault_address, 0);
local_irq_enable();
break;
case KVM_PV_REASON_PAGE_READY:
int cpu;
bool launched;
bool nmi_known_unmasked;
+ unsigned long vmcs_host_cr3; /* May not match real cr3 */
+ unsigned long vmcs_host_cr4; /* May not match real cr4 */
struct list_head loaded_vmcss_on_cpu_link;
};
int gs_ldt_reload_needed;
int fs_reload_needed;
u64 msr_host_bndcfgs;
- unsigned long vmcs_host_cr3; /* May not match real cr3 */
- unsigned long vmcs_host_cr4; /* May not match real cr4 */
} host_state;
struct {
int vm86_active;
struct pi_desc old, new;
unsigned int dest;
- if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
- !irq_remapping_cap(IRQ_POSTING_CAP) ||
- !kvm_vcpu_apicv_active(vcpu))
+ /*
+ * In case of hot-plug or hot-unplug, we may have to undo
+ * vmx_vcpu_pi_put even if there is no assigned device. And we
+ * always keep PI.NDST up to date for simplicity: it makes the
+ * code easier, and CPU migration is not a fast path.
+ */
+ if (!pi_test_sn(pi_desc) && vcpu->cpu == cpu)
+ return;
+
+ /*
+ * First handle the simple case where no cmpxchg is necessary; just
+ * allow posting non-urgent interrupts.
+ *
+ * If the 'nv' field is POSTED_INTR_WAKEUP_VECTOR, do not change
+ * PI.NDST: pi_post_block will do it for us and the wakeup_handler
+ * expects the VCPU to be on the blocked_vcpu_list that matches
+ * PI.NDST.
+ */
+ if (pi_desc->nv == POSTED_INTR_WAKEUP_VECTOR ||
+ vcpu->cpu == cpu) {
+ pi_clear_sn(pi_desc);
return;
+ }
+ /* The full case. */
do {
old.control = new.control = pi_desc->control;
- /*
- * If 'nv' field is POSTED_INTR_WAKEUP_VECTOR, there
- * are two possible cases:
- * 1. After running 'pre_block', context switch
- * happened. For this case, 'sn' was set in
- * vmx_vcpu_put(), so we need to clear it here.
- * 2. After running 'pre_block', we were blocked,
- * and woken up by some other guy. For this case,
- * we don't need to do anything, 'pi_post_block'
- * will do everything for us. However, we cannot
- * check whether it is case #1 or case #2 here
- * (maybe, not needed), so we also clear sn here,
- * I think it is not a big deal.
- */
- if (pi_desc->nv != POSTED_INTR_WAKEUP_VECTOR) {
- if (vcpu->cpu != cpu) {
- dest = cpu_physical_id(cpu);
-
- if (x2apic_enabled())
- new.ndst = dest;
- else
- new.ndst = (dest << 8) & 0xFF00;
- }
+ dest = cpu_physical_id(cpu);
- /* set 'NV' to 'notification vector' */
- new.nv = POSTED_INTR_VECTOR;
- }
+ if (x2apic_enabled())
+ new.ndst = dest;
+ else
+ new.ndst = (dest << 8) & 0xFF00;
- /* Allow posting non-urgent interrupts */
new.sn = 0;
- } while (cmpxchg(&pi_desc->control, old.control,
- new.control) != old.control);
+ } while (cmpxchg64(&pi_desc->control, old.control,
+ new.control) != old.control);
}
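
The loop above is the standard lock-free read-modify-write pattern: snapshot the 64-bit control word, compute the new value, and retry the cmpxchg64 until no concurrent writer got in between. A minimal stand-alone analogue using C11 atomics (the field layout below is made up for illustration and is not the real posted-interrupt descriptor):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t control;

/* Replace the low 16 "ndst" bits and clear a hypothetical SN bit 16. */
void update_ndst(uint16_t ndst)
{
        uint64_t old = atomic_load(&control);
        uint64_t new;

        do {
                new = (old & ~0xffffULL) | ndst;
                new &= ~(1ULL << 16);
                /* on failure, 'old' is refreshed with the current value */
        } while (!atomic_compare_exchange_weak(&control, &old, new));
}

int main(void)
{
        update_ndst(0x2a);
        printf("control = %#llx\n", (unsigned long long)atomic_load(&control));
        return 0;
}

The switch from cmpxchg() to cmpxchg64() matters on 32-bit hosts, where a plain cmpxchg() cannot operate on a 64-bit quantity.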
static void decache_tsc_multiplier(struct vcpu_vmx *vmx)
int pi_vec = nested ? POSTED_INTR_NESTED_VECTOR : POSTED_INTR_VECTOR;
if (vcpu->mode == IN_GUEST_MODE) {
- struct vcpu_vmx *vmx = to_vmx(vcpu);
-
/*
- * Currently, we don't support urgent interrupt,
- * all interrupts are recognized as non-urgent
- * interrupt, so we cannot post interrupts when
- * 'SN' is set.
+ * The vector of the interrupt to be delivered to the vcpu has
+ * already been set in PIR before this function is called.
+ *
+ * The following cases can be reached in this block, and we
+ * always send a notification event in each of them, as
+ * explained below.
+ *
+ * Case 1: the vcpu stays in non-root mode. Sending a
+ * notification event posts the interrupt to the vcpu.
*
- * If the vcpu is in guest mode, it means it is
- * running instead of being scheduled out and
- * waiting in the run queue, and that's the only
- * case when 'SN' is set currently, warning if
- * 'SN' is set.
+ * Case 2: the vcpu exits to root mode and is still
+ * runnable. PIR will be synced to vIRR before the
+ * next vcpu entry. Sending a notification event in
+ * this case has no effect, as the vcpu is no longer
+ * in non-root mode.
+ *
+ * Case 3: the vcpu exits to root mode and is blocked.
+ * vcpu_block() has already synced PIR to vIRR and
+ * never blocks the vcpu if vIRR is not cleared. Therefore,
+ * a blocked vcpu here is not waiting on any interrupt
+ * requested in PIR, and sending a notification event,
+ * which has no effect, is safe here.
*/
- WARN_ON_ONCE(pi_test_sn(&vmx->pi_desc));
apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
return true;
*/
cr3 = __read_cr3();
vmcs_writel(HOST_CR3, cr3); /* 22.2.3 FIXME: shadow tables */
- vmx->host_state.vmcs_host_cr3 = cr3;
+ vmx->loaded_vmcs->vmcs_host_cr3 = cr3;
/* Save the most likely value for this task's CR4 in the VMCS. */
cr4 = cr4_read_shadow();
vmcs_writel(HOST_CR4, cr4); /* 22.2.3, 22.2.5 */
- vmx->host_state.vmcs_host_cr4 = cr4;
+ vmx->loaded_vmcs->vmcs_host_cr4 = cr4;
vmcs_write16(HOST_CS_SELECTOR, __KERNEL_CS); /* 22.2.4 */
#ifdef CONFIG_X86_64
static void vmx_handle_external_intr(struct kvm_vcpu *vcpu)
{
u32 exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
- register void *__sp asm(_ASM_SP);
if ((exit_intr_info & (INTR_INFO_VALID_MASK | INTR_INFO_INTR_TYPE_MASK))
== (INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR)) {
#ifdef CONFIG_X86_64
[sp]"=&r"(tmp),
#endif
- "+r"(__sp)
+ ASM_CALL_CONSTRAINT
:
[entry]"r"(entry),
[ss]"i"(__KERNEL_DS),
vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]);
cr3 = __get_current_cr3_fast();
- if (unlikely(cr3 != vmx->host_state.vmcs_host_cr3)) {
+ if (unlikely(cr3 != vmx->loaded_vmcs->vmcs_host_cr3)) {
vmcs_writel(HOST_CR3, cr3);
- vmx->host_state.vmcs_host_cr3 = cr3;
+ vmx->loaded_vmcs->vmcs_host_cr3 = cr3;
}
cr4 = cr4_read_shadow();
- if (unlikely(cr4 != vmx->host_state.vmcs_host_cr4)) {
+ if (unlikely(cr4 != vmx->loaded_vmcs->vmcs_host_cr4)) {
vmcs_writel(HOST_CR4, cr4);
- vmx->host_state.vmcs_host_cr4 = cr4;
+ vmx->loaded_vmcs->vmcs_host_cr4 = cr4;
}
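
Both hunks above rely on the same shadow-value pattern: remember the last value actually written to an expensive destination (a VMCS field, written with VMWRITE) and skip the write when the value is unchanged. Moving the cached copy from the per-vCPU host_state into loaded_vmcs keeps one shadow per VMCS, which is where HOST_CR3/HOST_CR4 physically live; vmcs01 and a nested vmcs02 each need their own. The pattern itself, as a stand-alone sketch with printf standing in for the costly write:

#include <stdio.h>

static unsigned long shadow_cr3; /* last value actually written */

void expensive_write(unsigned long val)
{
        printf("VMWRITE %#lx\n", val); /* stand-in for vmcs_writel() */
}

void maybe_write_cr3(unsigned long cr3)
{
        if (cr3 != shadow_cr3) { /* pay for the write only on change */
                expensive_write(cr3);
                shadow_cr3 = cr3;
        }
}

int main(void)
{
        maybe_write_cr3(0x1000);
        maybe_write_cr3(0x1000); /* second call writes nothing */
        return 0;
}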
/* When single-stepping over STI and MOV SS, we must clear the
vmx->msr_ia32_feature_control_valid_bits = FEATURE_CONTROL_LOCKED;
+ /*
+ * Enforce invariant: pi_desc.nv is always either POSTED_INTR_VECTOR
+ * or POSTED_INTR_WAKEUP_VECTOR.
+ */
+ vmx->pi_desc.nv = POSTED_INTR_VECTOR;
+ vmx->pi_desc.sn = 1;
+
return &vmx->vcpu;
free_vmcs:
WARN_ON(!is_guest_mode(vcpu));
- if (nested_vmx_is_page_fault_vmexit(vmcs12, fault->error_code)) {
+ if (nested_vmx_is_page_fault_vmexit(vmcs12, fault->error_code) &&
+ !to_vmx(vcpu)->nested.nested_run_pending) {
vmcs12->vm_exit_intr_error_code = fault->error_code;
nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
PF_VECTOR | INTR_TYPE_HARD_EXCEPTION |
kvm_mmu_clear_dirty_pt_masked(kvm, memslot, offset, mask);
}
+static void __pi_post_block(struct kvm_vcpu *vcpu)
+{
+ struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
+ struct pi_desc old, new;
+ unsigned int dest;
+
+ do {
+ old.control = new.control = pi_desc->control;
+ WARN(old.nv != POSTED_INTR_WAKEUP_VECTOR,
+ "Wakeup handler not enabled while the VCPU is blocked\n");
+
+ dest = cpu_physical_id(vcpu->cpu);
+
+ if (x2apic_enabled())
+ new.ndst = dest;
+ else
+ new.ndst = (dest << 8) & 0xFF00;
+
+ /* set 'NV' to 'notification vector' */
+ new.nv = POSTED_INTR_VECTOR;
+ } while (cmpxchg64(&pi_desc->control, old.control,
+ new.control) != old.control);
+
+ if (!WARN_ON_ONCE(vcpu->pre_pcpu == -1)) {
+ spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ list_del(&vcpu->blocked_vcpu_list);
+ spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ vcpu->pre_pcpu = -1;
+ }
+}
+
/*
* This routine does the following things for vCPU which is going
* to be blocked if VT-d PI is enabled.
*/
static int pi_pre_block(struct kvm_vcpu *vcpu)
{
- unsigned long flags;
unsigned int dest;
struct pi_desc old, new;
struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
!kvm_vcpu_apicv_active(vcpu))
return 0;
- vcpu->pre_pcpu = vcpu->cpu;
- spin_lock_irqsave(&per_cpu(blocked_vcpu_on_cpu_lock,
- vcpu->pre_pcpu), flags);
- list_add_tail(&vcpu->blocked_vcpu_list,
- &per_cpu(blocked_vcpu_on_cpu,
- vcpu->pre_pcpu));
- spin_unlock_irqrestore(&per_cpu(blocked_vcpu_on_cpu_lock,
- vcpu->pre_pcpu), flags);
+ WARN_ON(irqs_disabled());
+ local_irq_disable();
+ if (!WARN_ON_ONCE(vcpu->pre_pcpu != -1)) {
+ vcpu->pre_pcpu = vcpu->cpu;
+ spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ list_add_tail(&vcpu->blocked_vcpu_list,
+ &per_cpu(blocked_vcpu_on_cpu,
+ vcpu->pre_pcpu));
+ spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ }
do {
old.control = new.control = pi_desc->control;
- /*
- * We should not block the vCPU if
- * an interrupt is posted for it.
- */
- if (pi_test_on(pi_desc) == 1) {
- spin_lock_irqsave(&per_cpu(blocked_vcpu_on_cpu_lock,
- vcpu->pre_pcpu), flags);
- list_del(&vcpu->blocked_vcpu_list);
- spin_unlock_irqrestore(
- &per_cpu(blocked_vcpu_on_cpu_lock,
- vcpu->pre_pcpu), flags);
- vcpu->pre_pcpu = -1;
-
- return 1;
- }
-
WARN((pi_desc->sn == 1),
"Warning: SN field of posted-interrupts "
"is set before blocking\n");
/* set 'NV' to 'wakeup vector' */
new.nv = POSTED_INTR_WAKEUP_VECTOR;
- } while (cmpxchg(&pi_desc->control, old.control,
- new.control) != old.control);
+ } while (cmpxchg64(&pi_desc->control, old.control,
+ new.control) != old.control);
- return 0;
+ /* We should not block the vCPU if an interrupt is posted for it. */
+ if (pi_test_on(pi_desc) == 1)
+ __pi_post_block(vcpu);
+
+ local_irq_enable();
+ return (vcpu->pre_pcpu == -1);
}
static int vmx_pre_block(struct kvm_vcpu *vcpu)
static void pi_post_block(struct kvm_vcpu *vcpu)
{
- struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
- struct pi_desc old, new;
- unsigned int dest;
- unsigned long flags;
-
- if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
- !irq_remapping_cap(IRQ_POSTING_CAP) ||
- !kvm_vcpu_apicv_active(vcpu))
+ if (vcpu->pre_pcpu == -1)
return;
- do {
- old.control = new.control = pi_desc->control;
-
- dest = cpu_physical_id(vcpu->cpu);
-
- if (x2apic_enabled())
- new.ndst = dest;
- else
- new.ndst = (dest << 8) & 0xFF00;
-
- /* Allow posting non-urgent interrupts */
- new.sn = 0;
-
- /* set 'NV' to 'notification vector' */
- new.nv = POSTED_INTR_VECTOR;
- } while (cmpxchg(&pi_desc->control, old.control,
- new.control) != old.control);
-
- if(vcpu->pre_pcpu != -1) {
- spin_lock_irqsave(
- &per_cpu(blocked_vcpu_on_cpu_lock,
- vcpu->pre_pcpu), flags);
- list_del(&vcpu->blocked_vcpu_list);
- spin_unlock_irqrestore(
- &per_cpu(blocked_vcpu_on_cpu_lock,
- vcpu->pre_pcpu), flags);
- vcpu->pre_pcpu = -1;
- }
+ WARN_ON(irqs_disabled());
+ local_irq_disable();
+ __pi_post_block(vcpu);
+ local_irq_enable();
}
static void vmx_post_block(struct kvm_vcpu *vcpu)
if (set)
ret = irq_set_vcpu_affinity(host_irq, &vcpu_info);
- else {
- /* suppress notification event before unposting */
- pi_set_sn(vcpu_to_pi_desc(vcpu));
+ else
ret = irq_set_vcpu_affinity(host_irq, NULL);
- pi_clear_sn(vcpu_to_pi_desc(vcpu));
- }
if (ret < 0) {
printk(KERN_INFO "%s: failed to update PI IRTE\n",
int r;
sigset_t sigsaved;
- fpu__activate_curr(fpu);
+ fpu__initialize(fpu);
if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
struct desc_struct code_descriptor;
struct fpu *fpu = ¤t->thread.fpu;
- fpu__activate_curr(fpu);
+ fpu__initialize(fpu);
#ifdef RE_ENTRANT_CHECKING
if (emulating) {
#include <linux/uaccess.h>
#include <linux/sched/debug.h>
+#include <asm/fpu/internal.h>
#include <asm/traps.h>
#include <asm/kdebug.h>
}
EXPORT_SYMBOL_GPL(ex_handler_refcount);
+/*
+ * Handler for when we fail to restore a task's FPU state. We should never get
+ * here because the FPU state of a task using the FPU (task->thread.fpu.state)
+ * should always be valid. However, past bugs have allowed userspace to set
+ * reserved bits in the XSAVE area using PTRACE_SETREGSET or sys_rt_sigreturn().
+ * These caused XRSTOR to fail when switching to the task, leaking the FPU
+ * registers of the task previously executing on the CPU. Mitigate this class
+ * of vulnerability by restoring from the initial state (essentially, zeroing
+ * out all the FPU registers) if we can't restore from the task's FPU state.
+ */
+bool ex_handler_fprestore(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ regs->ip = ex_fixup_addr(fixup);
+
+ WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.",
+ (void *)instruction_pointer(regs));
+
+ __copy_kernel_to_fpregs(&init_fpstate, -1);
+ return true;
+}
+EXPORT_SYMBOL_GPL(ex_handler_fprestore);
+
bool ex_handler_ext(const struct exception_table_entry *fixup,
struct pt_regs *regs, int trapnr)
{
* 6. T1 : reaches here, sees vma_pkey(vma)=5, when we really
* faulted on a pte with its pkey=4.
*/
-static void fill_sig_info_pkey(int si_code, siginfo_t *info,
- struct vm_area_struct *vma)
+static void fill_sig_info_pkey(int si_code, siginfo_t *info, u32 *pkey)
{
/* This is effectively an #ifdef */
if (!boot_cpu_has(X86_FEATURE_OSPKE))
* valid VMA, so we should never reach this without a
* valid VMA.
*/
- if (!vma) {
+ if (!pkey) {
WARN_ONCE(1, "PKU fault with no VMA passed in");
info->si_pkey = 0;
return;
* absolutely guaranteed to be 100% accurate because of
* the race explained above.
*/
- info->si_pkey = vma_pkey(vma);
+ info->si_pkey = *pkey;
}
static void
force_sig_info_fault(int si_signo, int si_code, unsigned long address,
- struct task_struct *tsk, struct vm_area_struct *vma,
- int fault)
+ struct task_struct *tsk, u32 *pkey, int fault)
{
unsigned lsb = 0;
siginfo_t info;
lsb = PAGE_SHIFT;
info.si_addr_lsb = lsb;
- fill_sig_info_pkey(si_code, &info, vma);
+ fill_sig_info_pkey(si_code, &info, pkey);
force_sig_info(si_signo, &info, tsk);
}
struct task_struct *tsk = current;
unsigned long flags;
int sig;
- /* No context means no VMA to pass down */
- struct vm_area_struct *vma = NULL;
/* Are we prepared to handle this kernel fault? */
if (fixup_exception(regs, X86_TRAP_PF)) {
/* XXX: hwpoison faults will set the wrong code. */
force_sig_info_fault(signal, si_code, address,
- tsk, vma, 0);
+ tsk, NULL, 0);
}
/*
if (is_vmalloc_addr((void *)address) &&
(((unsigned long)tsk->stack - 1 - address < PAGE_SIZE) ||
address - ((unsigned long)tsk->stack + THREAD_SIZE) < PAGE_SIZE)) {
- register void *__sp asm("rsp");
unsigned long stack = this_cpu_read(orig_ist.ist[DOUBLEFAULT_STACK]) - sizeof(void *);
/*
* We're likely to be running with very little stack space
asm volatile ("movq %[stack], %%rsp\n\t"
"call handle_stack_overflow\n\t"
"1: jmp 1b"
- : "+r" (__sp)
+ : ASM_CALL_CONSTRAINT
: "D" ("kernel stack overflow (page fault)"),
"S" (regs), "d" (address),
[stack] "rm" (stack));
static void
__bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
- unsigned long address, struct vm_area_struct *vma,
- int si_code)
+ unsigned long address, u32 *pkey, int si_code)
{
struct task_struct *tsk = current;
tsk->thread.error_code = error_code;
tsk->thread.trap_nr = X86_TRAP_PF;
- force_sig_info_fault(SIGSEGV, si_code, address, tsk, vma, 0);
+ force_sig_info_fault(SIGSEGV, si_code, address, tsk, pkey, 0);
return;
}
static noinline void
bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
- unsigned long address, struct vm_area_struct *vma)
+ unsigned long address, u32 *pkey)
{
- __bad_area_nosemaphore(regs, error_code, address, vma, SEGV_MAPERR);
+ __bad_area_nosemaphore(regs, error_code, address, pkey, SEGV_MAPERR);
}
static void
unsigned long address, struct vm_area_struct *vma, int si_code)
{
struct mm_struct *mm = current->mm;
+ u32 pkey;
+
+ if (vma)
+ pkey = vma_pkey(vma);
/*
* Something tried to access memory that isn't in our memory map..
*/
up_read(&mm->mmap_sem);
- __bad_area_nosemaphore(regs, error_code, address, vma, si_code);
+ __bad_area_nosemaphore(regs, error_code, address,
+ (vma) ? &pkey : NULL, si_code);
}
static noinline void
static void
do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
- struct vm_area_struct *vma, unsigned int fault)
+ u32 *pkey, unsigned int fault)
{
struct task_struct *tsk = current;
int code = BUS_ADRERR;
code = BUS_MCEERR_AR;
}
#endif
- force_sig_info_fault(SIGBUS, code, address, tsk, vma, fault);
+ force_sig_info_fault(SIGBUS, code, address, tsk, pkey, fault);
}
static noinline void
mm_fault_error(struct pt_regs *regs, unsigned long error_code,
- unsigned long address, struct vm_area_struct *vma,
- unsigned int fault)
+ unsigned long address, u32 *pkey, unsigned int fault)
{
if (fatal_signal_pending(current) && !(error_code & PF_USER)) {
no_context(regs, error_code, address, 0, 0);
} else {
if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|
VM_FAULT_HWPOISON_LARGE))
- do_sigbus(regs, error_code, address, vma, fault);
+ do_sigbus(regs, error_code, address, pkey, fault);
else if (fault & VM_FAULT_SIGSEGV)
- bad_area_nosemaphore(regs, error_code, address, vma);
+ bad_area_nosemaphore(regs, error_code, address, pkey);
else
BUG();
}
struct mm_struct *mm;
int fault, major = 0;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+ u32 pkey;
tsk = current;
mm = tsk->mm;
return;
}
+ pkey = vma_pkey(vma);
up_read(&mm->mmap_sem);
if (unlikely(fault & VM_FAULT_ERROR)) {
- mm_fault_error(regs, error_code, address, vma, fault);
+ mm_fault_error(regs, error_code, address, &pkey, fault);
return;
}
* published by the Free Software Foundation.
*/
+#define DISABLE_BRANCH_PROFILING
+
#include <linux/linkage.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <asm/cpufeature.h> /* boot_cpu_has, ... */
#include <asm/mmu_context.h> /* vma_pkey() */
-#include <asm/fpu/internal.h> /* fpregs_active() */
int __execute_only_pkey(struct mm_struct *mm)
{
*/
preempt_disable();
if (!need_to_set_mm_pkey &&
- fpregs_active() &&
+ current->thread.fpu.initialized &&
!__pkru_allows_read(read_pkru(), execute_only_pkey)) {
preempt_enable();
return execute_only_pkey;
* isn't free.
*/
#ifdef CONFIG_DEBUG_VM
- if (WARN_ON_ONCE(__read_cr3() !=
- (__sme_pa(real_prev->pgd) | prev_asid))) {
+ if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev, prev_asid))) {
/*
* If we were to BUG here, we'd be very likely to kill
* the system so hard that we don't see the call trace.
*/
this_cpu_write(cpu_tlbstate.ctxs[prev_asid].tlb_gen,
next_tlb_gen);
- write_cr3(__sme_pa(next->pgd) | prev_asid);
+ write_cr3(build_cr3(next, prev_asid));
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH,
TLB_FLUSH_ALL);
}
* mapped in the new pgd, we'll double-fault. Forcibly
* map it.
*/
- unsigned int index = pgd_index(current_stack_pointer());
+ unsigned int index = pgd_index(current_stack_pointer);
pgd_t *pgd = next->pgd + index;
if (unlikely(pgd_none(*pgd)))
if (need_flush) {
this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
- write_cr3(__sme_pa(next->pgd) | new_asid);
+ write_cr3(build_cr3(next, new_asid));
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH,
TLB_FLUSH_ALL);
} else {
/* The new ASID is already up to date. */
- write_cr3(__sme_pa(next->pgd) | new_asid | CR3_NOFLUSH);
+ write_cr3(build_cr3_noflush(next, new_asid));
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, 0);
}
!(cr4_read_shadow() & X86_CR4_PCIDE));
/* Force ASID 0 and force a TLB flush. */
- write_cr3(cr3 & ~CR3_PCID_MASK);
+ write_cr3(build_cr3(mm, 0));
/* Reinitialize tlbstate. */
this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
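
The call sites above indicate what build_cr3()/build_cr3_noflush() compose: the physical address of the mm's page-table root, the ASID in the low bits, and (for the no-flush variant) CR3 bit 63, which tells a PCID-capable CPU to skip the implicit TLB flush. A hypothetical sketch under those assumptions; the kernel helper takes the mm and derives the physical address itself, and this is not its exact definition:

#include <stdint.h>

#define CR3_NOFLUSH (1ULL << 63) /* valid only when CR4.PCIDE is set */

/* pgd_pa must be page-aligned; hardware PCIDs occupy CR3 bits 0-11 */
uint64_t build_cr3(uint64_t pgd_pa, uint16_t asid)
{
        return pgd_pa | (asid & 0xfff);
}

uint64_t build_cr3_noflush(uint64_t pgd_pa, uint16_t asid)
{
        return build_cr3(pgd_pa, asid) | CR3_NOFLUSH;
}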
/* if (index >= array->map.max_entries)
* goto out;
*/
- EMIT4(0x48, 0x8B, 0x46, /* mov rax, qword ptr [rsi + 16] */
+ EMIT2(0x89, 0xD2); /* mov edx, edx */
+ EMIT3(0x39, 0x56, /* cmp dword ptr [rsi + 16], edx */
offsetof(struct bpf_array, map.max_entries));
- EMIT3(0x48, 0x39, 0xD0); /* cmp rax, rdx */
#define OFFSET1 43 /* number of bytes to jump */
EMIT2(X86_JBE, OFFSET1); /* jbe out */
label1 = cnt;
* from _brk_limit way up to the max_pfn_mapped (which is the end of
* the ramdisk). We continue on, erasing PMD entries that point to page
* tables - do note that they are accessible at this stage via __va.
- * For good measure we also round up to the PMD - which means that if
+ * As Xen is aligning the memory end to a 4MB boundary, for good
+ * measure we also round up to PMD_SIZE * 2 - which means that if
* anybody using a __ka address to the initial boot-stack and trying
* to use it is going to crash. The xen_start_info has been
* taken care of already in xen_setup_kernel_pagetable. */
addr = xen_start_info->pt_base;
- size = roundup(xen_start_info->nr_pt_frames * PAGE_SIZE, PMD_SIZE);
+ size = xen_start_info->nr_pt_frames * PAGE_SIZE;
- xen_cleanhighmap(addr, addr + size);
+ xen_cleanhighmap(addr, roundup(addr + size, PMD_SIZE * 2));
xen_start_info->pt_base = (unsigned long)__va(__pa(xen_start_info->pt_base));
-#ifdef DEBUG
- /* This is superfluous and is not necessary, but you know what
- * lets do it. The MODULES_VADDR -> MODULES_END should be clear of
- * anything at this stage. */
- xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
-#endif
}
#endif
* not the first page table in the page table pool.
* Iterate through the initial page tables to find the real page table base.
*/
-static phys_addr_t xen_find_pt_base(pmd_t *pmd)
+static phys_addr_t __init xen_find_pt_base(pmd_t *pmd)
{
phys_addr_t pt_base, paddr;
unsigned pmdidx;
/* Free all resources held by a thread. */
#define release_thread(thread) do { } while(0)
-/* Copy and release all segment info associated with a VM */
-#define copy_segments(p, mm) do { } while(0)
-#define release_segments(mm) do { } while(0)
-#define forget_segments() do { } while (0)
-
extern unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
kobject_init(&q->kobj, &blk_queue_ktype);
+#ifdef CONFIG_BLK_DEV_IO_TRACE
+ mutex_init(&q->blk_trace_mutex);
+#endif
mutex_init(&q->sysfs_lock);
spin_lock_init(&q->__queue_lock);
goto err;
/*
- * blk_mq_init_hctx() attempted to do this already, but q->debugfs_dir
+ * blk_mq_init_sched() attempted to do this already, but q->debugfs_dir
* didn't exist yet (because we don't know what to name the directory
* until the queue is registered to a gendisk).
*/
+ if (q->elevator && !q->sched_debugfs_dir)
+ blk_mq_debugfs_register_sched(q);
+
+ /* Similarly, blk_mq_init_hctx() couldn't do this previously. */
queue_for_each_hw_ctx(q, hctx, i) {
if (!hctx->debugfs_dir && blk_mq_debugfs_register_hctx(q, hctx))
goto err;
tg->disptime = jiffies - 1;
throtl_select_dispatch(sq);
- throtl_schedule_next_dispatch(sq, false);
+ throtl_schedule_next_dispatch(sq, true);
}
rcu_read_unlock();
throtl_select_dispatch(&td->service_queue);
- throtl_schedule_next_dispatch(&td->service_queue, false);
+ throtl_schedule_next_dispatch(&td->service_queue, true);
queue_work(kthrotld_workqueue, &td->dispatch_work);
}
failjob_rls_rqst_payload:
kfree(job->request_payload.sg_list);
failjob_rls_job:
- kfree(job);
return -ENOMEM;
}
struct bsg_job *job = blk_mq_rq_to_pdu(req);
struct scsi_request *sreq = &job->sreq;
+ /* called right after the request is allocated for the request_queue */
+
+ sreq->sense = kzalloc(SCSI_SENSE_BUFFERSIZE, gfp);
+ if (!sreq->sense)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void bsg_initialize_rq(struct request *req)
+{
+ struct bsg_job *job = blk_mq_rq_to_pdu(req);
+ struct scsi_request *sreq = &job->sreq;
+ void *sense = sreq->sense;
+
+ /* called right before the request is given to the request_queue user */
+
memset(job, 0, sizeof(*job));
scsi_req_init(sreq);
+
+ sreq->sense = sense;
sreq->sense_len = SCSI_SENSE_BUFFERSIZE;
- sreq->sense = kzalloc(sreq->sense_len, gfp);
- if (!sreq->sense)
- return -ENOMEM;
job->req = req;
- job->reply = sreq->sense;
+ job->reply = sense;
job->reply_len = sreq->sense_len;
job->dd_data = job + 1;
-
- return 0;
}
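
The split above separates one-time setup (bsg_init_rq allocates the sense buffer when the request is created) from per-use reinitialization (bsg_initialize_rq wipes the job but keeps the buffer). The core move is saving the long-lived pointer across the memset, sketched stand-alone:

#include <stdlib.h>
#include <string.h>

struct job {
        char *sense;   /* allocated once, must survive reinitialization */
        int reply_len; /* per-use state, reset on every reuse */
};

void job_reinit(struct job *job)
{
        char *sense = job->sense;     /* save the long-lived buffer */

        memset(job, 0, sizeof(*job)); /* wipe all per-use state */
        job->sense = sense;           /* restore the buffer pointer */
}

int main(void)
{
        struct job j = { .sense = malloc(64) };

        j.reply_len = 96;
        job_reinit(&j); /* reply_len is 0 again, j.sense still valid */
        free(j.sense);
        return 0;
}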
static void bsg_exit_rq(struct request_queue *q, struct request *req)
q->cmd_size = sizeof(struct bsg_job) + dd_job_size;
q->init_rq_fn = bsg_init_rq;
q->exit_rq_fn = bsg_exit_rq;
+ q->initialize_rq_fn = bsg_initialize_rq;
q->request_fn = bsg_request_fn;
ret = blk_init_allocated_queue(q);
struct device_attribute *attr, char *buf)
{
struct hd_struct *p = dev_to_part(dev);
- struct request_queue *q = dev_to_disk(dev)->queue;
+ struct request_queue *q = part_to_disk(p)->queue;
unsigned int inflight[2];
int cpu;
struct af_alg_ctx *ctx = ask->private;
struct af_alg_tsgl *sgl;
struct scatterlist *sg;
- unsigned int i, j;
+ unsigned int i, j = 0;
while (!list_empty(&ctx->tsgl_list)) {
sgl = list_first_entry(&ctx->tsgl_list, struct af_alg_tsgl,
list);
sg = sgl->sg;
- for (i = 0, j = 0; i < sgl->cur; i++) {
+ for (i = 0; i < sgl->cur; i++) {
size_t plen = min_t(size_t, used, sg[i].length);
struct page *page = sg_page(sg + i);
{
if (!drbg)
return;
- kzfree(drbg->V);
- drbg->Vbuf = NULL;
- kzfree(drbg->C);
- drbg->Cbuf = NULL;
+ kzfree(drbg->Vbuf);
+ drbg->V = NULL;
+ kzfree(drbg->Cbuf);
+ drbg->C = NULL;
kzfree(drbg->scratchpadbuf);
drbg->scratchpadbuf = NULL;
drbg->reseed_ctr = 0;
for (i = 0; i < wdat->entries; i++) {
const struct acpi_generic_address *gas;
struct resource_entry *rentry;
- struct resource res;
+ struct resource res = {};
bool found;
gas = &entries[i].register_region;
}
ghes_do_proc(ghes, ghes->estatus);
+out:
+ ghes_clear_estatus(ghes);
+
+ if (rc == -ENOENT)
+ return rc;
+
/*
* GHESv2 type HEST entries introduce support for error acknowledgment,
* so only acknowledge the error if this support is present.
*/
- if (is_hest_type_generic_v2(ghes)) {
- rc = ghes_ack_error(ghes->generic_v2);
- if (rc)
- return rc;
- }
-out:
- ghes_clear_estatus(ghes);
+ if (is_hest_type_generic_v2(ghes))
+ return ghes_ack_error(ghes->generic_v2);
+
return rc;
}
return ret;
}
+static bool __init iort_enable_acs(struct acpi_iort_node *iort_node)
+{
+ if (iort_node->type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX) {
+ struct acpi_iort_node *parent;
+ struct acpi_iort_id_mapping *map;
+ int i;
+
+ map = ACPI_ADD_PTR(struct acpi_iort_id_mapping, iort_node,
+ iort_node->mapping_offset);
+
+ for (i = 0; i < iort_node->mapping_count; i++, map++) {
+ if (!map->output_reference)
+ continue;
+
+ parent = ACPI_ADD_PTR(struct acpi_iort_node,
+ iort_table, map->output_reference);
+ /*
+ * If we detect an RC->SMMU mapping, make sure
+ * we enable ACS on the system.
+ */
+ if ((parent->type == ACPI_IORT_NODE_SMMU) ||
+ (parent->type == ACPI_IORT_NODE_SMMU_V3)) {
+ pci_request_acs();
+ return true;
+ }
+ }
+ }
+
+ return false;
+}
+
static void __init iort_init_platform_devices(void)
{
struct acpi_iort_node *iort_node, *iort_end;
struct acpi_table_iort *iort;
struct fwnode_handle *fwnode;
int i, ret;
+ bool acs_enabled = false;
/*
* iort_table and iort both point to the start of IORT table, but
return;
}
+ if (!acs_enabled)
+ acs_enabled = iort_enable_acs(iort_node);
+
if ((iort_node->type == ACPI_IORT_NODE_SMMU) ||
(iort_node->type == ACPI_IORT_NODE_SMMU_V3)) {
struct fwnode_handle *child)
{
const struct acpi_device *adev = to_acpi_device_node(fwnode);
- struct acpi_device *child_adev = NULL;
const struct list_head *head;
struct list_head *next;
if (!child || is_acpi_device_node(child)) {
+ struct acpi_device *child_adev;
+
if (adev)
head = &adev->children;
else
goto nondev;
if (child) {
- child_adev = to_acpi_device_node(child);
- next = child_adev->node.next;
+ adev = to_acpi_device_node(child);
+ next = adev->node.next;
if (next == head) {
child = NULL;
goto nondev;
const struct acpi_data_node *data = to_acpi_data_node(fwnode);
struct acpi_data_node *dn;
- if (child_adev)
- head = &child_adev->data.subnodes;
+ if (adev)
+ head = &adev->data.subnodes;
else if (data)
head = &data->data.subnodes;
else
DECLARE_ACPI_FWNODE_OPS(acpi_device_fwnode_ops);
DECLARE_ACPI_FWNODE_OPS(acpi_data_fwnode_ops);
const struct fwnode_operations acpi_static_fwnode_ops;
+
+bool is_acpi_device_node(const struct fwnode_handle *fwnode)
+{
+ return !IS_ERR_OR_NULL(fwnode) &&
+ fwnode->ops == &acpi_device_fwnode_ops;
+}
+EXPORT_SYMBOL(is_acpi_device_node);
+
+bool is_acpi_data_node(const struct fwnode_handle *fwnode)
+{
+ return !IS_ERR_OR_NULL(fwnode) && fwnode->ops == &acpi_data_fwnode_ops;
+}
+EXPORT_SYMBOL(is_acpi_data_node);
debug_id, (u64)fda->num_fds);
continue;
}
- fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+ fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset);
for (fd_index = 0; fd_index < fda->num_fds; fd_index++)
task_close_fd(proc, fd_array[fd_index]);
} break;
(u64)node->ptr);
binder_node_unlock(node);
} else {
- int ret;
struct binder_ref_data dest_rdata;
binder_node_unlock(node);
*/
parent_buffer = parent->buffer -
binder_alloc_get_user_buffer_offset(&target_proc->alloc);
- fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+ fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset);
if (!IS_ALIGNED((unsigned long)fd_array, sizeof(u32))) {
binder_user_error("%d:%d parent offset not aligned correctly.\n",
proc->pid, thread->pid);
proc->pid, thread->pid);
return -EINVAL;
}
- parent_buffer = (u8 *)(parent->buffer -
+ parent_buffer = (u8 *)((uintptr_t)parent->buffer -
binder_alloc_get_user_buffer_offset(
&target_proc->alloc));
*(binder_uintptr_t *)(parent_buffer + bp->parent_offset) = bp->buffer;
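
The (uintptr_t) casts above address a 32-bit pitfall: binder carries user addresses and offsets as 64-bit integers regardless of the kernel's word size, and adding a 64-bit offset to a native-width base promotes the whole expression to 64 bits, so the final conversion to a pointer truncates (and warns) on 32-bit builds. A minimal illustration of the corrected arithmetic:

#include <stdint.h>

typedef uint64_t binder_uintptr_t; /* the wire format is always 64-bit */

uint32_t *fd_array_at(uintptr_t buf, binder_uintptr_t parent_offset)
{
        /* narrow the offset to native pointer width before the addition */
        return (uint32_t *)(buf + (uintptr_t)parent_offset);
}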
err_dead_proc_or_thread:
return_error = BR_DEAD_REPLY;
return_error_line = __LINE__;
+ binder_dequeue_work(proc, tcomplete);
err_translate_failed:
err_bad_object_type:
err_bad_offset:
struct binder_alloc *alloc;
uintptr_t page_addr;
size_t index;
+ struct vm_area_struct *vma;
alloc = page->alloc;
if (!mutex_trylock(&alloc->mutex))
index = page - alloc->pages;
page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
- if (alloc->vma) {
+ vma = alloc->vma;
+ if (vma) {
mm = get_task_mm(alloc->tsk);
if (!mm)
goto err_get_task_mm_failed;
if (!down_write_trylock(&mm->mmap_sem))
goto err_down_write_mmap_sem_failed;
+ }
+
+ list_lru_isolate(lru, item);
+ spin_unlock(lock);
+ if (vma) {
trace_binder_unmap_user_start(alloc, index);
- zap_page_range(alloc->vma,
+ zap_page_range(vma,
page_addr + alloc->user_buffer_offset,
PAGE_SIZE);
trace_binder_unmap_kernel_end(alloc, index);
- list_lru_isolate(lru, item);
-
+ spin_lock(lock);
mutex_unlock(&alloc->mutex);
- return LRU_REMOVED;
+ return LRU_REMOVED_RETRY;
err_down_write_mmap_sem_failed:
- mmput(mm);
+ mmput_async(mm);
err_get_task_mm_failed:
err_page_already_freed:
mutex_unlock(&alloc->mutex);
static int ahci_pci_reset_controller(struct ata_host *host)
{
struct pci_dev *pdev = to_pci_dev(host->dev);
+ int rc;
- ahci_reset_controller(host);
+ rc = ahci_reset_controller(host);
+ if (rc)
+ return rc;
if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
struct ahci_host_priv *hpriv = host->private_data;
{ 0x27DF, 0x152D, 0x0778 }, /* ICH7 on unknown Intel */
{ 0x24CA, 0x1025, 0x0061 }, /* ICH4 on ACER Aspire 2023WLMi */
{ 0x24CA, 0x1025, 0x003d }, /* ICH4 on ACER TM290 */
+ { 0x24CA, 0x10CF, 0x11AB }, /* ICH4M on Fujitsu-Siemens Lifebook S6120 */
{ 0x266F, 0x1025, 0x0066 }, /* ICH6 on ACER Aspire 1694WLMi */
{ 0x2653, 0x1043, 0x82D8 }, /* ICH6M on Asus Eee 701 */
{ 0x27df, 0x104d, 0x900e }, /* ICH7 on Sony TZ-90 */
};
#define ENOUGH(v, unit) (((v)-1)/(unit)+1)
-#define EZ(v, unit) ((v)?ENOUGH(v, unit):0)
+#define EZ(v, unit) ((v)?ENOUGH(((v) * 1000), unit):0)
static void ata_timing_quantize(const struct ata_timing *t, struct ata_timing *q, int T, int UT)
{
- q->setup = EZ(t->setup * 1000, T);
- q->act8b = EZ(t->act8b * 1000, T);
- q->rec8b = EZ(t->rec8b * 1000, T);
- q->cyc8b = EZ(t->cyc8b * 1000, T);
- q->active = EZ(t->active * 1000, T);
- q->recover = EZ(t->recover * 1000, T);
- q->dmack_hold = EZ(t->dmack_hold * 1000, T);
- q->cycle = EZ(t->cycle * 1000, T);
- q->udma = EZ(t->udma * 1000, UT);
+ q->setup = EZ(t->setup, T);
+ q->act8b = EZ(t->act8b, T);
+ q->rec8b = EZ(t->rec8b, T);
+ q->cyc8b = EZ(t->cyc8b, T);
+ q->active = EZ(t->active, T);
+ q->recover = EZ(t->recover, T);
+ q->dmack_hold = EZ(t->dmack_hold, T);
+ q->cycle = EZ(t->cycle, T);
+ q->udma = EZ(t->udma, UT);
}
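
ENOUGH() is ceiling division for positive v: the number of whole 'unit' periods needed to cover v. The (v) ? ... : 0 guard in EZ() is load-bearing, because ENOUGH(0, unit) would evaluate to (0 - 1)/unit + 1 = 1 rather than 0; the change above folds the * 1000 scaling inside the macro so every field is checked and scaled the same way. A quick check of the arithmetic:

#include <stdio.h>

#define ENOUGH(v, unit) (((v) - 1) / (unit) + 1)
#define EZ(v, unit)     ((v) ? ENOUGH(((v) * 1000), unit) : 0)

int main(void)
{
        printf("%d\n", ENOUGH(25, 10)); /* ceil(25/10) = 3 */
        printf("%d\n", ENOUGH(30, 10)); /* exact multiple: 3 */
        printf("%d\n", EZ(0, 10));      /* guarded zero stays 0 */
        printf("%d\n", EZ(25, 10000));  /* ceil(25000/10000) = 3 */
        return 0;
}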
void ata_timing_merge(const struct ata_timing *a, const struct ata_timing *b,
static int charlcd_open(struct inode *inode, struct file *file)
{
struct charlcd_priv *priv = to_priv(the_charlcd);
+ int ret;
+ ret = -EBUSY;
if (!atomic_dec_and_test(&charlcd_available))
- return -EBUSY; /* open only once at a time */
+ goto fail; /* open only once at a time */
+ ret = -EPERM;
if (file->f_mode & FMODE_READ) /* device is write-only */
- return -EPERM;
+ goto fail;
if (priv->must_clear) {
charlcd_clear_display(&priv->lcd);
priv->must_clear = false;
}
return nonseekable_open(inode, file);
+
+ fail:
+ atomic_inc(&charlcd_available);
+ return ret;
}
static int charlcd_release(struct inode *inode, struct file *file)
static int keypad_open(struct inode *inode, struct file *file)
{
+ int ret;
+
+ ret = -EBUSY;
if (!atomic_dec_and_test(&keypad_available))
- return -EBUSY; /* open only once at a time */
+ goto fail; /* open only once at a time */
+ ret = -EPERM;
if (file->f_mode & FMODE_WRITE) /* device is read-only */
- return -EPERM;
+ goto fail;
keypad_buflen = 0; /* flush the buffer on opening */
return 0;
+ fail:
+ atomic_inc(&keypad_available);
+ return ret;
}
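
Both the charlcd_open() and keypad_open() fixes repair the same leak: the single-open gate decrements an availability counter, so every error path taken after the decrement must increment it back, or the device can never be opened again. The gate itself, as a stand-alone sketch with C11 atomics:

#include <stdatomic.h>

static atomic_int available = 1; /* one slot: open only once at a time */

int gate_open(void)
{
        /* atomic_fetch_sub returns the old value: 1 means we won the slot */
        if (atomic_fetch_sub(&available, 1) != 1) {
                atomic_fetch_add(&available, 1); /* undo: slot not taken */
                return -1; /* busy */
        }
        return 0;
}

void gate_close(void)
{
        atomic_fetch_add(&available, 1);
}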
static int keypad_release(struct inode *inode, struct file *file)
}
#ifdef CONFIG_CPU_FREQ
-static cpumask_var_t cpus_to_visit;
-static void parsing_done_workfn(struct work_struct *work);
-static DECLARE_WORK(parsing_done_work, parsing_done_workfn);
+static cpumask_var_t cpus_to_visit __initdata;
+static void __init parsing_done_workfn(struct work_struct *work);
+static __initdata DECLARE_WORK(parsing_done_work, parsing_done_workfn);
-static int
+static int __init
init_cpu_capacity_callback(struct notifier_block *nb,
unsigned long val,
void *data)
return 0;
}
-static struct notifier_block init_cpu_capacity_notifier = {
+static struct notifier_block init_cpu_capacity_notifier __initdata = {
.notifier_call = init_cpu_capacity_callback,
};
}
core_initcall(register_cpufreq_notifier);
-static void parsing_done_workfn(struct work_struct *work)
+static void __init parsing_done_workfn(struct work_struct *work)
{
cpufreq_unregister_notifier(&init_cpu_capacity_notifier,
CPUFREQ_POLICY_NOTIFIER);
struct dma_coherent_mem *mem = rmem->priv;
int ret;
- if (!mem)
- return -ENODEV;
-
- ret = dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
- DMA_MEMORY_EXCLUSIVE, &mem);
-
- if (ret) {
- pr_err("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
- &rmem->base, (unsigned long)rmem->size / SZ_1M);
- return ret;
+ if (!mem) {
+ ret = dma_init_coherent_memory(rmem->base, rmem->base,
+ rmem->size,
+ DMA_MEMORY_EXCLUSIVE, &mem);
+ if (ret) {
+ pr_err("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
+ &rmem->base, (unsigned long)rmem->size / SZ_1M);
+ return ret;
+ }
}
mem->use_dev_dma_pfn_offset = true;
rmem->priv = mem;
struct platform_device *pdev = to_platform_device(dev);
char *driver_override, *old, *cp;
- if (count > PATH_MAX)
+ /* We need to keep extra room for a newline */
+ if (count >= (PAGE_SIZE - 1))
return -EINVAL;
driver_override = kstrndup(buf, count, GFP_KERNEL);
{
spin_lock_irq(&dev->power.lock);
dev->power.no_pm_callbacks =
- (!dev->bus || pm_ops_is_empty(dev->bus->pm)) &&
- (!dev->class || pm_ops_is_empty(dev->class->pm)) &&
+ (!dev->bus || (pm_ops_is_empty(dev->bus->pm) &&
+ !dev->bus->suspend && !dev->bus->resume)) &&
+ (!dev->class || (pm_ops_is_empty(dev->class->pm) &&
+ !dev->class->suspend && !dev->class->resume)) &&
(!dev->type || pm_ops_is_empty(dev->type->pm)) &&
(!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) &&
- (!dev->driver || pm_ops_is_empty(dev->driver->pm));
+ (!dev->driver || (pm_ops_is_empty(dev->driver->pm) &&
+ !dev->driver->suspend && !dev->driver->resume));
spin_unlock_irq(&dev->power.lock);
}
opp->available = availability_req;
+ dev_pm_opp_get(opp);
+ mutex_unlock(&opp_table->lock);
+
/* Notify the change of the OPP availability */
if (availability_req)
blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ENABLE,
blocking_notifier_call_chain(&opp_table->head,
OPP_EVENT_DISABLE, opp);
+ dev_pm_opp_put(opp);
+ goto put_table;
+
unlock:
mutex_unlock(&opp_table->lock);
+put_table:
dev_pm_opp_put_opp_table(opp_table);
return r;
}
mutex_unlock(&dev_pm_qos_sysfs_mtx);
}
-static bool dev_pm_qos_invalid_request(struct device *dev,
- struct dev_pm_qos_request *req)
+static bool dev_pm_qos_invalid_req_type(struct device *dev,
+ enum dev_pm_qos_req_type type)
{
- return !req || (req->type == DEV_PM_QOS_LATENCY_TOLERANCE
- && !dev->power.set_latency_tolerance);
+ return type == DEV_PM_QOS_LATENCY_TOLERANCE &&
+ !dev->power.set_latency_tolerance;
}
static int __dev_pm_qos_add_request(struct device *dev,
{
int ret = 0;
- if (!dev || dev_pm_qos_invalid_request(dev, req))
+ if (!dev || !req || dev_pm_qos_invalid_req_type(dev, type))
return -EINVAL;
if (WARN(dev_pm_qos_request_active(req),
config BLK_DEV_NULL_BLK
tristate "Null test block driver"
- depends on CONFIGFS_FS
+ select CONFIGFS_FS
config BLK_DEV_FD
tristate "Normal floppy disk support"
if (!brd)
return -ENODEV;
- page = brd_insert_page(brd, PFN_PHYS(pgoff) / 512);
+ page = brd_insert_page(brd, (sector_t)pgoff << PAGE_SECTORS_SHIFT);
if (!page)
return -ENOSPC;
*kaddr = page_address(page);
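
The replacement above avoids an overflow: PFN_PHYS(pgoff) shifts the page index left by PAGE_SHIFT in whatever width phys_addr_t happens to be, which can wrap on 32-bit configurations once the byte offset exceeds 4 GiB, whereas widening to the 64-bit sector_t first and shifting by PAGE_SECTORS_SHIFT (PAGE_SHIFT - 9) cannot. Illustrated with fixed-width types:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t pgoff = 0x100000;             /* page index of a 4 GiB offset */
        uint32_t bad   = pgoff * 4096u;        /* 32-bit byte offset wraps to 0 */
        uint64_t good  = (uint64_t)pgoff << 3; /* sector index: << (12 - 9) */

        printf("bad=%#x good=%#llx\n", bad, (unsigned long long)good);
        return 0;
}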
struct loop_cmd {
struct kthread_work work;
struct request *rq;
- union {
- bool use_aio; /* use AIO interface to handle I/O */
- atomic_t ref; /* only for aio */
- };
+ bool use_aio; /* use AIO interface to handle I/O */
+ atomic_t ref; /* only for aio */
long ret;
struct kiocb iocb;
struct bio_vec *bvec;
* appropriate.
*/
ret = nbd_handle_cmd(cmd, hctx->queue_num);
+ if (ret < 0)
+ ret = BLK_STS_IOERR;
+ else if (!ret)
+ ret = BLK_STS_OK;
complete(&cmd->send_complete);
- return ret < 0 ? BLK_STS_IOERR : BLK_STS_OK;
+ return ret;
}
static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
+ /* The block layer will pass back some non-nbd ioctls in case we have
+ * special handling for them, but we don't, so just return an error.
+ */
+ if (_IOC_TYPE(cmd) != 0xab)
+ return -EINVAL;
+
mutex_lock(&nbd->config_lock);
/* Don't allow ioctl operations on a nbd device that was created with
bit_spin_unlock(ZRAM_ACCESS, &zram->table[index].value);
}
-static bool zram_same_page_read(struct zram *zram, u32 index,
- struct page *page,
- unsigned int offset, unsigned int len)
-{
- zram_slot_lock(zram, index);
- if (unlikely(!zram_get_handle(zram, index) ||
- zram_test_flag(zram, index, ZRAM_SAME))) {
- void *mem;
-
- zram_slot_unlock(zram, index);
- mem = kmap_atomic(page);
- zram_fill_page(mem + offset, len,
- zram_get_element(zram, index));
- kunmap_atomic(mem);
- return true;
- }
- zram_slot_unlock(zram, index);
-
- return false;
-}
-
static void zram_meta_free(struct zram *zram, u64 disksize)
{
size_t num_pages = disksize >> PAGE_SHIFT;
zram_slot_unlock(zram, index);
}
- if (zram_same_page_read(zram, index, page, 0, PAGE_SIZE))
- return 0;
-
zram_slot_lock(zram, index);
handle = zram_get_handle(zram, index);
+ if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) {
+ unsigned long value;
+ void *mem;
+
+ value = handle ? zram_get_element(zram, index) : 0;
+ mem = kmap_atomic(page);
+ zram_fill_page(mem, PAGE_SIZE, value);
+ kunmap_atomic(mem);
+ zram_slot_unlock(zram, index);
+ return 0;
+ }
+
size = zram_get_obj_size(zram, index);
src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
Driver for the Tegra Generic Memory Interface bus which can be used
to attach devices such as NOR, UART, FPGA and more.
+config TS_NBUS
+ tristate "Technologic Systems NBUS Driver"
+ depends on SOC_IMX28
+ depends on OF_GPIO && PWM
+ help
+ Driver for the Technologic Systems NBUS, which is used to interface
+ with the peripherals in the FPGA of the TS-4600 SoM.
+
config UNIPHIER_SYSTEM_BUS
tristate "UniPhier System Bus driver"
depends on ARCH_UNIPHIER && OF
obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o
obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o
obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o
+obj-$(CONFIG_TS_NBUS) += ts-nbus.o
obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
--- /dev/null
+/*
+ * NBUS driver for TS-4600 based boards
+ *
+ * Copyright (c) 2016 - Savoir-faire Linux
+ * Author: Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ *
+ * This driver implements a GPIO bit-banged bus, called the NBUS by Technologic
+ * Systems. It is used to communicate with the peripherals in the FPGA on the
+ * TS-4600 SoM.
+ */
+
+#include <linux/bitops.h>
+#include <linux/gpio/consumer.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/pwm.h>
+#include <linux/ts-nbus.h>
+
+#define TS_NBUS_DIRECTION_IN 0
+#define TS_NBUS_DIRECTION_OUT 1
+#define TS_NBUS_WRITE_ADR 0
+#define TS_NBUS_WRITE_VAL 1
+
+struct ts_nbus {
+ struct pwm_device *pwm;
+ struct gpio_descs *data;
+ struct gpio_desc *csn;
+ struct gpio_desc *txrx;
+ struct gpio_desc *strobe;
+ struct gpio_desc *ale;
+ struct gpio_desc *rdy;
+ struct mutex lock;
+};
+
+/*
+ * request all gpios required by the bus.
+ */
+static int ts_nbus_init_pdata(struct platform_device *pdev,
+ struct ts_nbus *ts_nbus)
+{
+ ts_nbus->data = devm_gpiod_get_array(&pdev->dev, "ts,data",
+ GPIOD_OUT_HIGH);
+ if (IS_ERR(ts_nbus->data)) {
+ dev_err(&pdev->dev, "failed to retrieve ts,data-gpio from dts\n");
+ return PTR_ERR(ts_nbus->data);
+ }
+
+ ts_nbus->csn = devm_gpiod_get(&pdev->dev, "ts,csn", GPIOD_OUT_HIGH);
+ if (IS_ERR(ts_nbus->csn)) {
+ dev_err(&pdev->dev, "failed to retrieve ts,csn-gpio from dts\n");
+ return PTR_ERR(ts_nbus->csn);
+ }
+
+ ts_nbus->txrx = devm_gpiod_get(&pdev->dev, "ts,txrx", GPIOD_OUT_HIGH);
+ if (IS_ERR(ts_nbus->txrx)) {
+ dev_err(&pdev->dev, "failed to retrieve ts,txrx-gpio from dts\n");
+ return PTR_ERR(ts_nbus->txrx);
+ }
+
+ ts_nbus->strobe = devm_gpiod_get(&pdev->dev, "ts,strobe", GPIOD_OUT_HIGH);
+ if (IS_ERR(ts_nbus->strobe)) {
+ dev_err(&pdev->dev, "failed to retrieve ts,strobe-gpio from dts\n");
+ return PTR_ERR(ts_nbus->strobe);
+ }
+
+ ts_nbus->ale = devm_gpiod_get(&pdev->dev, "ts,ale", GPIOD_OUT_HIGH);
+ if (IS_ERR(ts_nbus->ale)) {
+ dev_err(&pdev->dev, "failed to retrieve ts,ale-gpio from dts\n");
+ return PTR_ERR(ts_nbus->ale);
+ }
+
+ ts_nbus->rdy = devm_gpiod_get(&pdev->dev, "ts,rdy", GPIOD_IN);
+ if (IS_ERR(ts_nbus->rdy)) {
+ dev_err(&pdev->dev, "failed to retrieve ts,rdy-gpio from dts\n");
+ return PTR_ERR(ts_nbus->rdy);
+ }
+
+ return 0;
+}
+
+/*
+ * the data gpios are used for reading and writing values, so their directions
+ * must be adjusted accordingly.
+ */
+static void ts_nbus_set_direction(struct ts_nbus *ts_nbus, int direction)
+{
+ int i;
+
+ for (i = 0; i < 8; i++) {
+ if (direction == TS_NBUS_DIRECTION_IN)
+ gpiod_direction_input(ts_nbus->data->desc[i]);
+ else
+ /* when used as outputs, the default state of the data
+ * lines is high */
+ gpiod_direction_output(ts_nbus->data->desc[i], 1);
+ }
+}
+
+/*
+ * reset the bus to its initial state.
+ * The data, csn, strobe and ale lines must be zeroed to let the FPGA know a
+ * new transaction can be processed.
+ */
+static void ts_nbus_reset_bus(struct ts_nbus *ts_nbus)
+{
+ int i;
+ int values[8];
+
+ for (i = 0; i < 8; i++)
+ values[i] = 0;
+
+ gpiod_set_array_value_cansleep(8, ts_nbus->data->desc, values);
+ gpiod_set_value_cansleep(ts_nbus->csn, 0);
+ gpiod_set_value_cansleep(ts_nbus->strobe, 0);
+ gpiod_set_value_cansleep(ts_nbus->ale, 0);
+}
+
+/*
+ * let the FPGA know it can proceed.
+ */
+static void ts_nbus_start_transaction(struct ts_nbus *ts_nbus)
+{
+ gpiod_set_value_cansleep(ts_nbus->strobe, 1);
+}
+
+/*
+ * read a byte value from the data gpios.
+ * return 0 on success or negative errno on failure.
+ */
+static int ts_nbus_read_byte(struct ts_nbus *ts_nbus, u8 *val)
+{
+ struct gpio_descs *gpios = ts_nbus->data;
+ int ret, i;
+
+ *val = 0;
+ for (i = 0; i < 8; i++) {
+ ret = gpiod_get_value_cansleep(gpios->desc[i]);
+ if (ret < 0)
+ return ret;
+ if (ret)
+ *val |= BIT(i);
+ }
+
+ return 0;
+}
+
+/*
+ * set the data gpios according to the byte value.
+ */
+static void ts_nbus_write_byte(struct ts_nbus *ts_nbus, u8 byte)
+{
+ struct gpio_descs *gpios = ts_nbus->data;
+ int i;
+ int values[8];
+
+ for (i = 0; i < 8; i++)
+ if (byte & BIT(i))
+ values[i] = 1;
+ else
+ values[i] = 0;
+
+ gpiod_set_array_value_cansleep(8, gpios->desc, values);
+}
+
+/*
+ * reading the bus consists of resetting the bus, then notifying the FPGA to
+ * place the data on the data gpios, and returning the value read.
+ * return 0 on success or negative errno on failure.
+ */
+static int ts_nbus_read_bus(struct ts_nbus *ts_nbus, u8 *val)
+{
+ ts_nbus_reset_bus(ts_nbus);
+ ts_nbus_start_transaction(ts_nbus);
+
+ return ts_nbus_read_byte(ts_nbus, val);
+}
+
+/*
+ * writing to the bus consists of resetting the bus, then defining the type of
+ * command (address/value), writing the data, and notifying the FPGA to
+ * retrieve the value from the data gpios.
+ */
+static void ts_nbus_write_bus(struct ts_nbus *ts_nbus, int cmd, u8 val)
+{
+ ts_nbus_reset_bus(ts_nbus);
+
+ if (cmd == TS_NBUS_WRITE_ADR)
+ gpiod_set_value_cansleep(ts_nbus->ale, 1);
+
+ ts_nbus_write_byte(ts_nbus, val);
+ ts_nbus_start_transaction(ts_nbus);
+}
+
+/*
+ * read the value in the FPGA register at the given address.
+ * return 0 on success or negative errno on failure.
+ */
+int ts_nbus_read(struct ts_nbus *ts_nbus, u8 adr, u16 *val)
+{
+ int ret, i;
+ u8 byte;
+
+ /* bus access must be atomic */
+ mutex_lock(&ts_nbus->lock);
+
+ /* set the bus in read mode */
+ gpiod_set_value_cansleep(ts_nbus->txrx, 0);
+
+ /* write address */
+ ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_ADR, adr);
+
+ /* set the data gpios direction as input before reading */
+ ts_nbus_set_direction(ts_nbus, TS_NBUS_DIRECTION_IN);
+
+ /* reading value MSB first */
+ do {
+ *val = 0;
+ byte = 0;
+ for (i = 1; i >= 0; i--) {
+ /* read a byte from the bus, leave on error */
+ ret = ts_nbus_read_bus(ts_nbus, &byte);
+ if (ret < 0)
+ goto err;
+
+ /* append the byte read to the final value */
+ *val |= byte << (i * 8);
+ }
+ gpiod_set_value_cansleep(ts_nbus->csn, 1);
+ ret = gpiod_get_value_cansleep(ts_nbus->rdy);
+ } while (ret);
+
+err:
+ /* restore the data gpios direction as output after reading */
+ ts_nbus_set_direction(ts_nbus, TS_NBUS_DIRECTION_OUT);
+
+ mutex_unlock(&ts_nbus->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ts_nbus_read);
+
+/*
+ * write the desired value in the FPGA register at the given address.
+ */
+int ts_nbus_write(struct ts_nbus *ts_nbus, u8 adr, u16 val)
+{
+ int i;
+
+ /* bus access must be atomic */
+ mutex_lock(&ts_nbus->lock);
+
+ /* set the bus in write mode */
+ gpiod_set_value_cansleep(ts_nbus->txrx, 1);
+
+ /* write address */
+ ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_ADR, adr);
+
+ /* writing value MSB first */
+ for (i = 1; i >= 0; i--)
+ ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_VAL, (u8)(val >> (i * 8)));
+
+ /* wait for completion */
+ gpiod_set_value_cansleep(ts_nbus->csn, 1);
+ while (gpiod_get_value_cansleep(ts_nbus->rdy) != 0) {
+ gpiod_set_value_cansleep(ts_nbus->csn, 0);
+ gpiod_set_value_cansleep(ts_nbus->csn, 1);
+ }
+
+ mutex_unlock(&ts_nbus->lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ts_nbus_write);
+
+static int ts_nbus_probe(struct platform_device *pdev)
+{
+ struct pwm_device *pwm;
+ struct pwm_args pargs;
+ struct device *dev = &pdev->dev;
+ struct ts_nbus *ts_nbus;
+ int ret;
+
+ ts_nbus = devm_kzalloc(dev, sizeof(*ts_nbus), GFP_KERNEL);
+ if (!ts_nbus)
+ return -ENOMEM;
+
+ mutex_init(&ts_nbus->lock);
+
+ ret = ts_nbus_init_pdata(pdev, ts_nbus);
+ if (ret < 0)
+ return ret;
+
+ pwm = devm_pwm_get(dev, NULL);
+ if (IS_ERR(pwm)) {
+ ret = PTR_ERR(pwm);
+ if (ret != -EPROBE_DEFER)
+ dev_err(dev, "unable to request PWM\n");
+ return ret;
+ }
+
+ pwm_get_args(pwm, &pargs);
+ if (!pargs.period) {
+ dev_err(&pdev->dev, "invalid PWM period\n");
+ return -EINVAL;
+ }
+
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to
+ * the atomic PWM API.
+ */
+ pwm_apply_args(pwm);
+ ret = pwm_config(pwm, pargs.period, pargs.period);
+ if (ret < 0)
+ return ret;
+
+ /*
+ * we can now start the FPGA and populate the peripherals.
+ */
+ pwm_enable(pwm);
+ ts_nbus->pwm = pwm;
+
+ /*
+ * let the child nodes retrieve this instance of the ts-nbus.
+ */
+ dev_set_drvdata(dev, ts_nbus);
+
+ ret = of_platform_populate(dev->of_node, NULL, NULL, dev);
+ if (ret < 0)
+ return ret;
+
+ dev_info(dev, "initialized\n");
+
+ return 0;
+}
+
+static int ts_nbus_remove(struct platform_device *pdev)
+{
+ struct ts_nbus *ts_nbus = dev_get_drvdata(&pdev->dev);
+
+ /* shutdown the FPGA */
+ mutex_lock(&ts_nbus->lock);
+ pwm_disable(ts_nbus->pwm);
+ mutex_unlock(&ts_nbus->lock);
+
+ return 0;
+}
+
+static const struct of_device_id ts_nbus_of_match[] = {
+ { .compatible = "technologic,ts-nbus", },
+ { },
+};
+MODULE_DEVICE_TABLE(of, ts_nbus_of_match);
+
+static struct platform_driver ts_nbus_driver = {
+ .probe = ts_nbus_probe,
+ .remove = ts_nbus_remove,
+ .driver = {
+ .name = "ts_nbus",
+ .of_match_table = ts_nbus_of_match,
+ },
+};
+
+module_platform_driver(ts_nbus_driver);
+
+MODULE_ALIAS("platform:ts_nbus");
+MODULE_AUTHOR("Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>");
+MODULE_DESCRIPTION("Technologic Systems NBUS");
+MODULE_LICENSE("GPL v2");
goto out;
}
- msleep(TPM_TIMEOUT); /* CHECK */
+ tpm_msleep(TPM_TIMEOUT);
rmb();
} while (time_before(jiffies, stop));
dev_info(
&chip->dev, HW_ERR
"TPM command timed out during continue self test");
- msleep(delay_msec);
+ tpm_msleep(delay_msec);
continue;
}
}
if (rc != TPM_WARN_DOING_SELFTEST)
return rc;
- msleep(delay_msec);
+ tpm_msleep(delay_msec);
} while (--loops > 0);
return rc;
}
} else {
do {
- msleep(TPM_TIMEOUT);
+ tpm_msleep(TPM_TIMEOUT);
status = chip->ops->status(chip);
if ((status & mask) == mask)
return 0;
*/
if (rc != TPM_WARN_RETRY)
break;
- msleep(TPM_TIMEOUT_RETRY);
+ tpm_msleep(TPM_TIMEOUT_RETRY);
}
if (rc)
enum tpm_timeout {
TPM_TIMEOUT = 5, /* msecs */
- TPM_TIMEOUT_RETRY = 100 /* msecs */
+ TPM_TIMEOUT_RETRY = 100, /* msecs */
+ TPM_TIMEOUT_RANGE_US = 300 /* usecs */
};
/* TPM addresses */
int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
wait_queue_head_t *queue, bool check_cancel);
+static inline void tpm_msleep(unsigned int delay_msec)
+{
+ usleep_range(delay_msec * 1000,
+ (delay_msec * 1000) + TPM_TIMEOUT_RANGE_US);
+}
+
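
msleep() is backed by the jiffy-granular timer wheel, so a 5 ms request can be rounded up to several times that on a low-HZ kernel; usleep_range() uses hrtimers and bounds the sleep while still giving the scheduler room to coalesce wakeups. For example, tpm_msleep(5) becomes, in effect:

usleep_range(5 * 1000, 5 * 1000 + TPM_TIMEOUT_RANGE_US); /* 5000..5300 us */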
struct tpm_chip *tpm_chip_find_get(int chip_num);
__must_check int tpm_try_get_ops(struct tpm_chip *chip);
void tpm_put_ops(struct tpm_chip *chip);
if (rc != TPM2_RC_TESTING)
break;
- msleep(delay_msec);
+ tpm_msleep(delay_msec);
}
return rc;
SET_RUNTIME_PM_OPS(crb_pm_runtime_suspend, crb_pm_runtime_resume, NULL)
};
-static struct acpi_device_id crb_device_ids[] = {
+static const struct acpi_device_id crb_device_ids[] = {
{"MSFT0101", 0},
{"", 0},
};
static const char tpm_ibmvtpm_driver_name[] = "tpm_ibmvtpm";
-static struct vio_device_id tpm_ibmvtpm_device_table[] = {
+static const struct vio_device_id tpm_ibmvtpm_device_table[] = {
{ "IBM,vtpm", "IBM,vtpm"},
{ "", "" }
};
MODULE_DEVICE_TABLE(vio, tpm_ibmvtpm_device_table);
/**
+ *
+ * ibmvtpm_send_crq_word - Send a CRQ request
+ * @vdev: vio device struct
+ * @w1: pre-constructed first word of tpm crq (second word is reserved)
+ *
+ * Return:
+ * 0 - Success
+ * Non-zero - Failure
+ */
+static int ibmvtpm_send_crq_word(struct vio_dev *vdev, u64 w1)
+{
+ return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, w1, 0);
+}
+
+/**
+ *
* ibmvtpm_send_crq - Send a CRQ request
*
* @vdev: vio device struct
- * @w1: first word
- * @w2: second word
+ * @valid: Valid field
+ * @msg: Type field
+ * @len: Length field
+ * @data: Data field
+ *
+ * The ibmvtpm crq is defined as follows:
+ *
+ * Byte | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
+ * -----------------------------------------------------------------------
+ * Word0 | Valid | Type | Length | Data
+ * -----------------------------------------------------------------------
+ * Word1 | Reserved
+ * -----------------------------------------------------------------------
+ *
+ * Which matches the following structure (on bigendian host):
+ *
+ * struct ibmvtpm_crq {
+ * u8 valid;
+ * u8 msg;
+ * __be16 len;
+ * __be32 data;
+ * __be64 reserved;
+ * } __attribute__((packed, aligned(8)));
+ *
+ * However, the value is passed in a register, so just compute the numeric value
+ * to load into the register, avoiding byteswaps altogether. Endianness only
+ * affects memory loads and stores; registers are represented the same internally.
*
* Return:
- * 0 -Sucess
+ * 0 (H_SUCCESS) - Success
* Non-zero - Failure
*/
-static int ibmvtpm_send_crq(struct vio_dev *vdev, u64 w1, u64 w2)
+static int ibmvtpm_send_crq(struct vio_dev *vdev,
+ u8 valid, u8 msg, u16 len, u32 data)
{
- return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, w1, w2);
+ u64 w1 = ((u64)valid << 56) | ((u64)msg << 48) | ((u64)len << 32) |
+ (u64)data;
+ return ibmvtpm_send_crq_word(vdev, w1);
}
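
A quick check of the packing arithmetic in ibmvtpm_send_crq(), using made-up field values: valid = 0x80, msg = 0x02, len = 0x0040 and data = 0x12345678 pack to 0x8002004012345678 on any host, since the word is built with shifts rather than memory stores:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        uint8_t  valid = 0x80, msg = 0x02;
        uint16_t len   = 0x0040;
        uint32_t data  = 0x12345678;
        uint64_t w1 = ((uint64_t)valid << 56) | ((uint64_t)msg << 48) |
                      ((uint64_t)len << 32) | (uint64_t)data;

        printf("w1 = %#018" PRIx64 "\n", w1); /* 0x8002004012345678 */
        return 0;
}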
/**
static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
- struct ibmvtpm_crq crq;
- __be64 *word = (__be64 *)&crq;
int rc, sig;
if (!ibmvtpm->rtce_buf) {
spin_lock(&ibmvtpm->rtce_lock);
ibmvtpm->res_len = 0;
memcpy((void *)ibmvtpm->rtce_buf, (void *)buf, count);
- crq.valid = (u8)IBMVTPM_VALID_CMD;
- crq.msg = (u8)VTPM_TPM_COMMAND;
- crq.len = cpu_to_be16(count);
- crq.data = cpu_to_be32(ibmvtpm->rtce_dma_handle);
/*
* set the processing flag before the Hcall, since we may get the
*/
ibmvtpm->tpm_processing_cmd = true;
- rc = ibmvtpm_send_crq(ibmvtpm->vdev, be64_to_cpu(word[0]),
- be64_to_cpu(word[1]));
+ rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+ IBMVTPM_VALID_CMD, VTPM_TPM_COMMAND,
+ count, ibmvtpm->rtce_dma_handle);
if (rc != H_SUCCESS) {
dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
rc = 0;
*/
static int ibmvtpm_crq_get_rtce_size(struct ibmvtpm_dev *ibmvtpm)
{
- struct ibmvtpm_crq crq;
- u64 *buf = (u64 *) &crq;
int rc;
- crq.valid = (u8)IBMVTPM_VALID_CMD;
- crq.msg = (u8)VTPM_GET_RTCE_BUFFER_SIZE;
-
- rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(buf[0]),
- cpu_to_be64(buf[1]));
+ rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+ IBMVTPM_VALID_CMD, VTPM_GET_RTCE_BUFFER_SIZE, 0, 0);
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
"ibmvtpm_crq_get_rtce_size failed rc=%d\n", rc);
*/
static int ibmvtpm_crq_get_version(struct ibmvtpm_dev *ibmvtpm)
{
- struct ibmvtpm_crq crq;
- u64 *buf = (u64 *) &crq;
int rc;
- crq.valid = (u8)IBMVTPM_VALID_CMD;
- crq.msg = (u8)VTPM_GET_VERSION;
-
- rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(buf[0]),
- cpu_to_be64(buf[1]));
+ rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+ IBMVTPM_VALID_CMD, VTPM_GET_VERSION, 0, 0);
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
"ibmvtpm_crq_get_version failed rc=%d\n", rc);
{
int rc;
- rc = ibmvtpm_send_crq(ibmvtpm->vdev, INIT_CRQ_COMP_CMD, 0);
+ rc = ibmvtpm_send_crq_word(ibmvtpm->vdev, INIT_CRQ_COMP_CMD);
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
"ibmvtpm_crq_send_init_complete failed rc=%d\n", rc);
{
int rc;
- rc = ibmvtpm_send_crq(ibmvtpm->vdev, INIT_CRQ_CMD, 0);
+ rc = ibmvtpm_send_crq_word(ibmvtpm->vdev, INIT_CRQ_CMD);
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
"ibmvtpm_crq_send_init failed rc=%d\n", rc);
{
struct tpm_chip *chip = dev_get_drvdata(dev);
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
- struct ibmvtpm_crq crq;
- u64 *buf = (u64 *) &crq;
int rc = 0;
- crq.valid = (u8)IBMVTPM_VALID_CMD;
- crq.msg = (u8)VTPM_PREPARE_TO_SUSPEND;
-
- rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(buf[0]),
- cpu_to_be64(buf[1]));
+ rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+ IBMVTPM_VALID_CMD, VTPM_PREPARE_TO_SUSPEND, 0, 0);
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
"tpm_ibmvtpm_suspend failed rc=%d\n", rc);
/* check the status-register if wait_for_bit is set */
if (status & 1 << wait_for_bit)
break;
- msleep(TPM_MSLEEP_TIME);
+ tpm_msleep(TPM_MSLEEP_TIME);
}
if (i == TPM_MAX_TRIES) { /* timeout occurs */
if (wait_for_bit == STAT_XFE)
wait_and_send(chip, TPM_CTRL_WTX);
wait_and_send(chip, 0x00);
wait_and_send(chip, 0x00);
- msleep(TPM_WTX_MSLEEP_TIME);
+ tpm_msleep(TPM_WTX_MSLEEP_TIME);
}
static void tpm_wtx_abort(struct tpm_chip *chip)
wait_and_send(chip, 0x00);
wait_and_send(chip, 0x00);
number_of_wtx = 0;
- msleep(TPM_WTX_MSLEEP_TIME);
+ tpm_msleep(TPM_WTX_MSLEEP_TIME);
}
static int tpm_inf_recv(struct tpm_chip *chip, u8 * buf, size_t count)
if (access & TPM_ACCESS_VALID)
return 0;
- msleep(TPM_TIMEOUT);
+ tpm_msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
return -1;
}
do {
if (check_locality(chip, l))
return l;
- msleep(TPM_TIMEOUT);
+ tpm_msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
}
return -1;
burstcnt = (value >> 8) & 0xFFFF;
if (burstcnt)
return burstcnt;
- msleep(TPM_TIMEOUT);
+ tpm_msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
return -EBUSY;
}
priv->irq = irq;
chip->flags |= TPM_CHIP_FLAG_IRQ;
if (!priv->irq_tested)
- msleep(1);
+ tpm_msleep(1);
if (!priv->irq_tested)
disable_interrupts(chip);
priv->irq_tested = true;
help
Enable common clock framework support for the Broadcom Cygnus SoC
+config CLK_BCM_HR2
+ bool "Broadcom Hurricane 2 clock support"
+ depends on ARCH_BCM_HR2 || COMPILE_TEST
+ select COMMON_CLK_IPROC
+ default ARCH_BCM_HR2
+ help
+ Enable common clock framework support for the Broadcom Hurricane 2
+ SoC
+
config CLK_BCM_NSP
bool "Broadcom Northstar/Northstar Plus clock support"
depends on ARCH_BCM_5301X || ARCH_BCM_NSP || COMPILE_TEST
obj-$(CONFIG_ARCH_BCM2835) += clk-bcm2835-aux.o
obj-$(CONFIG_ARCH_BCM_53573) += clk-bcm53573-ilp.o
obj-$(CONFIG_CLK_BCM_CYGNUS) += clk-cygnus.o
+obj-$(CONFIG_CLK_BCM_HR2) += clk-hr2.o
obj-$(CONFIG_CLK_BCM_NSP) += clk-nsp.o
obj-$(CONFIG_CLK_BCM_NS2) += clk-ns2.o
obj-$(CONFIG_CLK_BCM_SR) += clk-sr.o
--- /dev/null
+/*
+ * Copyright (C) 2017 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/err.h>
+#include <linux/clk-provider.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+
+#include "clk-iproc.h"
+
+static void __init hr2_armpll_init(struct device_node *node)
+{
+ iproc_armpll_setup(node);
+}
+CLK_OF_DECLARE(hr2_armpll, "brcm,hr2-armpll", hr2_armpll_init);
return ret;
}
+EXPORT_SYMBOL_GPL(clk_bulk_prepare);
#endif /* CONFIG_HAVE_CLK_PREPARE */
RK2928_CLKGATE_CON(10), 8, GFLAGS),
GATE(SCLK_PVTM_CORE, "clk_pvtm_core", "xin24m", 0,
- RK2928_CLKGATE_CON(10), 8, GFLAGS),
+ RK2928_CLKGATE_CON(10), 0, GFLAGS),
GATE(SCLK_PVTM_GPU, "clk_pvtm_gpu", "xin24m", 0,
- RK2928_CLKGATE_CON(10), 8, GFLAGS),
+ RK2928_CLKGATE_CON(10), 1, GFLAGS),
GATE(SCLK_PVTM_FUNC, "clk_pvtm_func", "xin24m", 0,
- RK2928_CLKGATE_CON(10), 8, GFLAGS),
+ RK2928_CLKGATE_CON(10), 2, GFLAGS),
GATE(SCLK_MIPI_24M, "clk_mipi_24m", "xin24m", CLK_IGNORE_UNUSED,
- RK2928_CLKGATE_CON(10), 8, GFLAGS),
+ RK2928_CLKGATE_CON(2), 15, GFLAGS),
COMPOSITE(SCLK_SDMMC, "sclk_sdmmc0", mux_mmc_src_p, 0,
RK2928_CLKSEL_CON(11), 6, 2, MFLAGS, 0, 6, DFLAGS,
GATE(0, "pclk_grf", "pclk_cpu", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(5), 4, GFLAGS),
GATE(0, "pclk_mipiphy", "pclk_cpu", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(5), 0, GFLAGS),
- GATE(0, "pclk_pmu", "pclk_pmu_pre", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(9), 2, GFLAGS),
+ GATE(0, "pclk_pmu", "pclk_pmu_pre", 0, RK2928_CLKGATE_CON(9), 2, GFLAGS),
GATE(0, "pclk_pmu_niu", "pclk_pmu_pre", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(9), 3, GFLAGS),
/* PD_MMC */
"aclk_peri",
"hclk_peri",
"pclk_peri",
+ "pclk_pmu",
+ "sclk_timer5",
};
static struct rockchip_clk_provider *__init rk3128_common_clk_init(struct device_node *np)
#define PLL_ENABLED (1 << 31)
#define PLL_LOCKED (1 << 29)
+static void exynos4_clk_enable_pll(u32 reg)
+{
+ u32 pll_con = readl(reg_base + reg);
+ pll_con |= PLL_ENABLED;
+ writel(pll_con, reg_base + reg);
+
+ while (!(pll_con & PLL_LOCKED)) {
+ cpu_relax();
+ pll_con = readl(reg_base + reg);
+ }
+}
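Busy-waiting on the lock bit with cpu_relax() keeps this path usable early in resume, before timers are back; in ordinary driver context the same wait is often written with the readl_poll_timeout() helper, roughly as below (a sketch, with an assumed 1 ms bound):

#include <linux/iopoll.h>

static int wait_pll_locked(void __iomem *base, u32 reg)
{
	u32 pll_con;

	/* Sketch only: give up after an assumed 1 ms. This variant can
	 * sleep, which is why the patch open-codes the loop instead. */
	return readl_poll_timeout(base + reg, pll_con,
				  pll_con & PLL_LOCKED, 0, 1000);
}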
+
static void exynos4_clk_wait_for_pll(u32 reg)
{
u32 pll_con;
samsung_clk_save(reg_base, exynos4_save_pll,
ARRAY_SIZE(exynos4_clk_pll_regs));
+ exynos4_clk_enable_pll(EPLL_CON0);
+ exynos4_clk_enable_pll(VPLL_CON0);
+
if (exynos4_soc == EXYNOS4210) {
samsung_clk_save(reg_base, exynos4_save_soc,
ARRAY_SIZE(exynos4210_clk_save));
return 0;
}
-static struct clock_event_device numachip2_clockevent = {
+static const struct clock_event_device numachip2_clockevent __initconst = {
.name = "numachip2",
.rating = 400,
.set_next_event = numachip2_set_next_event,
return -readl(sched_clk_base + TIMER_VALUE);
}
-static int integrator_clocksource_init(unsigned long inrate,
- void __iomem *base)
+static int __init integrator_clocksource_init(unsigned long inrate,
+ void __iomem *base)
{
u32 ctrl = TIMER_CTRL_ENABLE | TIMER_CTRL_PERIODIC;
unsigned long rate = inrate;
* platforms using "operating-points-v2" property.
*/
static const struct of_device_id blacklist[] __initconst = {
+ { .compatible = "calxeda,highbank", },
+ { .compatible = "calxeda,ecx-2000", },
+
+ { .compatible = "marvell,armadaxp", },
+
+ { .compatible = "nvidia,tegra124", },
+
+ { .compatible = "st,stih407", },
+ { .compatible = "st,stih410", },
+
+ { .compatible = "sigma,tango4", },
+
+ { .compatible = "ti,am33xx", },
+ { .compatible = "ti,am43", },
+ { .compatible = "ti,dra7", },
+
{ }
};
static const struct of_device_id ti_cpufreq_of_match[] = {
{ .compatible = "ti,am33xx", .data = &am3x_soc_data, },
- { .compatible = "ti,am4372", .data = &am4x_soc_data, },
+ { .compatible = "ti,am43", .data = &am4x_soc_data, },
{ .compatible = "ti,dra7", .data = &dra7_soc_data },
{},
};
ret = dt_init_idle_driver(drv, arm_idle_state_match, 1);
if (ret <= 0) {
ret = ret ? : -ENODEV;
- goto out_fail;
+ goto init_fail;
}
ret = cpuidle_register_driver(drv);
if (ret) {
pr_err("Failed to register cpuidle driver\n");
- goto out_fail;
+ goto init_fail;
}
/*
}
return 0;
+init_fail:
+ kfree(drv);
out_fail:
while (--cpu >= 0) {
dev = per_cpu(cpuidle_devices, cpu);
config CRYPTO_DEV_FSL_CAAM
tristate "Freescale CAAM-Multicore driver backend"
depends on FSL_SOC || ARCH_MXC || ARCH_LAYERSCAPE
+ select SOC_BUS
help
Enables the driver module for Freescale's Cryptographic Accelerator
and Assurance Module (CAAM), also known as the SEC version 4 (SEC4).
To compile this as a module, choose M here: the module
will be called caamrng.
-config CRYPTO_DEV_FSL_CAAM_IMX
- def_bool SOC_IMX6 || SOC_IMX7D
- depends on CRYPTO_DEV_FSL_CAAM
-
config CRYPTO_DEV_FSL_CAAM_DEBUG
bool "Enable debug output in CAAM driver"
depends on CRYPTO_DEV_FSL_CAAM
#include <linux/device.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
+#include <linux/sys_soc.h>
#include "compat.h"
#include "regs.h"
EXPORT_SYMBOL(caam_little_end);
bool caam_dpaa2;
EXPORT_SYMBOL(caam_dpaa2);
+bool caam_imx;
+EXPORT_SYMBOL(caam_imx);
#ifdef CONFIG_CAAM_QI
#include "qi.h"
* i.MX targets tend to have clock control subsystems that can
* enable/disable clocking to our device.
*/
-#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
static inline struct clk *caam_drv_identify_clk(struct device *dev,
char *clk_name)
{
- return devm_clk_get(dev, clk_name);
+ return caam_imx ? devm_clk_get(dev, clk_name) : NULL;
}
-#else
-static inline struct clk *caam_drv_identify_clk(struct device *dev,
- char *clk_name)
-{
- return NULL;
-}
-#endif
/*
* Descriptor to instantiate RNG State Handle 0 in normal mode and
{
int ret, ring, gen_sk, ent_delay = RTSDCTL_ENT_DLY_MIN;
u64 caam_id;
+ static const struct soc_device_attribute imx_soc[] = {
+ {.family = "Freescale i.MX"},
+ {},
+ };
struct device *dev;
struct device_node *nprop, *np;
struct caam_ctrl __iomem *ctrl;
dev_set_drvdata(dev, ctrlpriv);
nprop = pdev->dev.of_node;
+ caam_imx = (bool)soc_device_match(imx_soc);
+
/* Enable clocking */
clk = caam_drv_identify_clk(&pdev->dev, "ipg");
if (IS_ERR(clk)) {
*/
extern bool caam_little_end;
+extern bool caam_imx;
#define caam_to_cpu(len) \
static inline u##len caam##len ## _to_cpu(u##len val) \
#else /* CONFIG_64BIT */
static inline void wr_reg64(void __iomem *reg, u64 data)
{
-#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
- if (caam_little_end) {
+ if (!caam_imx && caam_little_end) {
wr_reg32((u32 __iomem *)(reg) + 1, data >> 32);
wr_reg32((u32 __iomem *)(reg), data);
- } else
-#endif
- {
+ } else {
wr_reg32((u32 __iomem *)(reg), data >> 32);
wr_reg32((u32 __iomem *)(reg) + 1, data);
}
static inline u64 rd_reg64(void __iomem *reg)
{
-#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
- if (caam_little_end)
+ if (!caam_imx && caam_little_end)
return ((u64)rd_reg32((u32 __iomem *)(reg) + 1) << 32 |
(u64)rd_reg32((u32 __iomem *)(reg)));
- else
-#endif
- return ((u64)rd_reg32((u32 __iomem *)(reg)) << 32 |
- (u64)rd_reg32((u32 __iomem *)(reg) + 1));
+
+ return ((u64)rd_reg32((u32 __iomem *)(reg)) << 32 |
+ (u64)rd_reg32((u32 __iomem *)(reg) + 1));
}
#endif /* CONFIG_64BIT */
+static inline u64 cpu_to_caam_dma64(dma_addr_t value)
+{
+ if (caam_imx)
+ return (((u64)cpu_to_caam32(lower_32_bits(value)) << 32) |
+ (u64)cpu_to_caam32(upper_32_bits(value)));
+
+ return cpu_to_caam64(value);
+}
+
+static inline u64 caam_dma64_to_cpu(u64 value)
+{
+ if (caam_imx)
+ return (((u64)caam32_to_cpu(lower_32_bits(value)) << 32) |
+ (u64)caam32_to_cpu(upper_32_bits(value)));
+
+ return caam64_to_cpu(value);
+}
+
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
-#ifdef CONFIG_SOC_IMX7D
-#define cpu_to_caam_dma(value) \
- (((u64)cpu_to_caam32(lower_32_bits(value)) << 32) | \
- (u64)cpu_to_caam32(upper_32_bits(value)))
-#define caam_dma_to_cpu(value) \
- (((u64)caam32_to_cpu(lower_32_bits(value)) << 32) | \
- (u64)caam32_to_cpu(upper_32_bits(value)))
-#else
-#define cpu_to_caam_dma(value) cpu_to_caam64(value)
-#define caam_dma_to_cpu(value) caam64_to_cpu(value)
-#endif /* CONFIG_SOC_IMX7D */
+#define cpu_to_caam_dma(value) cpu_to_caam_dma64(value)
+#define caam_dma_to_cpu(value) caam_dma64_to_cpu(value)
#else
#define cpu_to_caam_dma(value) cpu_to_caam32(value)
#define caam_dma_to_cpu(value) caam32_to_cpu(value)
-#endif /* CONFIG_ARCH_DMA_ADDR_T_64BIT */
-
-#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
-#define cpu_to_caam_dma64(value) \
- (((u64)cpu_to_caam32(lower_32_bits(value)) << 32) | \
- (u64)cpu_to_caam32(upper_32_bits(value)))
-#else
-#define cpu_to_caam_dma64(value) cpu_to_caam64(value)
-#endif
+#endif /* CONFIG_ARCH_DMA_ADDR_T_64BIT */
/*
* jr_outentry
struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
struct safexcel_crypto_priv *priv = ctx->priv;
struct skcipher_request req;
- struct safexcel_inv_result result = { 0 };
+ struct safexcel_inv_result result = {};
int ring = ctx->base.ring;
memset(&req, 0, sizeof(struct skcipher_request));
struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(tfm);
struct safexcel_crypto_priv *priv = ctx->priv;
struct ahash_request req;
- struct safexcel_inv_result result = { 0 };
+ struct safexcel_inv_result result = {};
int ring = ctx->base.ring;
memset(&req, 0, sizeof(struct ahash_request));
req_ctx->swinit = 0;
} else {
desc->ptr[1] = zero_entry;
- /* Indicate next op is not the first. */
- req_ctx->first = 0;
}
+ /* Indicate next op is not the first. */
+ req_ctx->first = 0;
/* HMAC key */
if (ctx->keylen)
sg_count = edesc->src_nents ?: 1;
if (is_sec1 && sg_count > 1)
- sg_copy_to_buffer(areq->src, sg_count, edesc->buf, length);
+ sg_copy_to_buffer(req_ctx->psrc, sg_count, edesc->buf, length);
else
sg_count = dma_map_sg(dev, req_ctx->psrc, sg_count,
DMA_TO_DEVICE);
t_alg->algt.alg.hash.final = ahash_final;
t_alg->algt.alg.hash.finup = ahash_finup;
t_alg->algt.alg.hash.digest = ahash_digest;
- t_alg->algt.alg.hash.setkey = ahash_setkey;
+ if (!strncmp(alg->cra_name, "hmac", 4))
+ t_alg->algt.alg.hash.setkey = ahash_setkey;
t_alg->algt.alg.hash.import = ahash_import;
t_alg->algt.alg.hash.export = ahash_export;
def_bool y
depends on QCOM_SCM && ARM64
+config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
+ bool "Qualcomm download mode enabled by default"
+ depends on QCOM_SCM
+ help
+	  A device with "download mode" enabled will, upon an unexpected
+	  warm restart, enter a special debug mode that allows the user to
+	  "download" memory content over USB for offline postmortem analysis.
+ The feature can be enabled/disabled on the kernel command line.
+
+ Say Y here to enable "download mode" by default.
+
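For reference, this switch surfaces as a module parameter in the qcom_scm hunks further below, so with the driver built in it would be flipped on the kernel command line roughly like this (illustrative):

	qcom_scm.download_mode=0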
config TI_SCI_PROTOCOL
tristate "TI System Control Interface (TISCI) Message Protocol"
depends on TI_MESSAGE_MANAGER
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/bitmap.h>
+#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/export.h>
#define MAX_DVFS_DOMAINS 8
#define MAX_DVFS_OPPS 16
-#define DVFS_LATENCY(hdr) (le32_to_cpu(hdr) >> 16)
-#define DVFS_OPP_COUNT(hdr) ((le32_to_cpu(hdr) >> 8) & 0xff)
-
-#define PROTOCOL_REV_MINOR_BITS 16
-#define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1)
-#define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS)
-#define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK)
-
-#define FW_REV_MAJOR_BITS 24
-#define FW_REV_MINOR_BITS 16
-#define FW_REV_PATCH_MASK ((1U << FW_REV_MINOR_BITS) - 1)
-#define FW_REV_MINOR_MASK ((1U << FW_REV_MAJOR_BITS) - 1)
-#define FW_REV_MAJOR(x) ((x) >> FW_REV_MAJOR_BITS)
-#define FW_REV_MINOR(x) (((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS)
-#define FW_REV_PATCH(x) ((x) & FW_REV_PATCH_MASK)
+
+#define PROTO_REV_MAJOR_MASK GENMASK(31, 16)
+#define PROTO_REV_MINOR_MASK GENMASK(15, 0)
+
+#define FW_REV_MAJOR_MASK GENMASK(31, 24)
+#define FW_REV_MINOR_MASK GENMASK(23, 16)
+#define FW_REV_PATCH_MASK GENMASK(15, 0)
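The GENMASK()/FIELD_GET() pair derives the shift from the mask at compile time, so the replacement is behavior-preserving; a minimal standalone sketch using the masks above (the example value is invented for illustration):

#include <linux/bitfield.h>

static void show_version(u32 ver)	/* e.g. ver = 0x000a0003 */
{
	u32 major = FIELD_GET(PROTO_REV_MAJOR_MASK, ver);	/* 10 */
	u32 minor = FIELD_GET(PROTO_REV_MINOR_MASK, ver);	/* 3 */

	pr_info("protocol %u.%u\n", major, minor);
}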
#define MAX_RX_TIMEOUT (msecs_to_jiffies(30))
u8 name[20];
} __packed;
-struct clk_get_value {
- __le32 rate;
-} __packed;
-
struct clk_set_value {
__le16 id;
__le16 reserved;
} __packed;
struct dvfs_info {
- __le32 header;
+ u8 domain;
+ u8 opp_count;
+ __le16 latency;
struct {
__le32 freq;
__le32 m_volt;
char name[20];
};
-struct sensor_value {
- __le32 lo_val;
- __le32 hi_val;
-} __packed;
-
struct dev_pstate_set {
__le16 dev_id;
u8 pstate;
unsigned int len;
if (scpi_info->is_legacy) {
- struct legacy_scpi_shared_mem *mem = ch->rx_payload;
+ struct legacy_scpi_shared_mem __iomem *mem =
+ ch->rx_payload;
/* RX Length is not replied by the legacy Firmware */
len = match->rx_len;
- match->status = le32_to_cpu(mem->status);
+ match->status = ioread32(&mem->status);
memcpy_fromio(match->rx_buf, mem->payload, len);
} else {
- struct scpi_shared_mem *mem = ch->rx_payload;
+ struct scpi_shared_mem __iomem *mem = ch->rx_payload;
len = min(match->rx_len, CMD_SIZE(cmd));
- match->status = le32_to_cpu(mem->status);
+ match->status = ioread32(&mem->status);
memcpy_fromio(match->rx_buf, mem->payload, len);
}
static void scpi_handle_remote_msg(struct mbox_client *c, void *msg)
{
struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
- struct scpi_shared_mem *mem = ch->rx_payload;
+ struct scpi_shared_mem __iomem *mem = ch->rx_payload;
u32 cmd = 0;
if (!scpi_info->is_legacy)
- cmd = le32_to_cpu(mem->command);
+ cmd = ioread32(&mem->command);
scpi_process_cmd(ch, cmd);
}
unsigned long flags;
struct scpi_xfer *t = msg;
struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
- struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload;
+ struct scpi_shared_mem __iomem *mem = ch->tx_payload;
if (t->tx_buf) {
if (scpi_info->is_legacy)
}
if (!scpi_info->is_legacy)
- mem->command = cpu_to_le32(t->cmd);
+ iowrite32(t->cmd, &mem->command);
}
static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch)
static unsigned long scpi_clk_get_val(u16 clk_id)
{
int ret;
- struct clk_get_value clk;
+ __le32 rate;
__le16 le_clk_id = cpu_to_le16(clk_id);
ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
- sizeof(le_clk_id), &clk, sizeof(clk));
+ sizeof(le_clk_id), &rate, sizeof(rate));
- return ret ? ret : le32_to_cpu(clk.rate);
+ return ret ? ret : le32_to_cpu(rate);
}
static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
}
static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
+{
+ if (domain >= MAX_DVFS_DOMAINS)
+ return ERR_PTR(-EINVAL);
+
+ return scpi_info->dvfs[domain] ?: ERR_PTR(-EINVAL);
+}
+
+static int scpi_dvfs_populate_info(struct device *dev, u8 domain)
{
struct scpi_dvfs_info *info;
struct scpi_opp *opp;
struct dvfs_info buf;
int ret, i;
- if (domain >= MAX_DVFS_DOMAINS)
- return ERR_PTR(-EINVAL);
-
- if (scpi_info->dvfs[domain]) /* data already populated */
- return scpi_info->dvfs[domain];
-
ret = scpi_send_message(CMD_GET_DVFS_INFO, &domain, sizeof(domain),
&buf, sizeof(buf));
if (ret)
- return ERR_PTR(ret);
+ return ret;
- info = kmalloc(sizeof(*info), GFP_KERNEL);
+ info = devm_kmalloc(dev, sizeof(*info), GFP_KERNEL);
if (!info)
- return ERR_PTR(-ENOMEM);
+ return -ENOMEM;
- info->count = DVFS_OPP_COUNT(buf.header);
- info->latency = DVFS_LATENCY(buf.header) * 1000; /* uS to nS */
+ info->count = buf.opp_count;
+	info->latency = le16_to_cpu(buf.latency) * 1000; /* us to ns */
- info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL);
- if (!info->opps) {
- kfree(info);
- return ERR_PTR(-ENOMEM);
- }
+ info->opps = devm_kcalloc(dev, info->count, sizeof(*opp), GFP_KERNEL);
+ if (!info->opps)
+ return -ENOMEM;
for (i = 0, opp = info->opps; i < info->count; i++, opp++) {
opp->freq = le32_to_cpu(buf.opps[i].freq);
sort(info->opps, info->count, sizeof(*opp), opp_cmp_func, NULL);
scpi_info->dvfs[domain] = info;
- return info;
+ return 0;
+}
+
+static void scpi_dvfs_populate(struct device *dev)
+{
+ int domain;
+
+ for (domain = 0; domain < MAX_DVFS_DOMAINS; domain++)
+ scpi_dvfs_populate_info(dev, domain);
}
static int scpi_dev_domain_id(struct device *dev)
if (IS_ERR(info))
return PTR_ERR(info);
- if (!info->latency)
- return 0;
-
return info->latency;
}
static int scpi_sensor_get_value(u16 sensor, u64 *val)
{
__le16 id = cpu_to_le16(sensor);
- struct sensor_value buf;
+ __le64 value;
int ret;
ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id),
- &buf, sizeof(buf));
+ &value, sizeof(value));
if (ret)
return ret;
if (scpi_info->is_legacy)
- /* only 32-bits supported, hi_val can be junk */
- *val = le32_to_cpu(buf.lo_val);
+ /* only 32-bits supported, upper 32 bits can be junk */
+ *val = le32_to_cpup((__le32 *)&value);
else
- *val = (u64)le32_to_cpu(buf.hi_val) << 32 |
- le32_to_cpu(buf.lo_val);
+ *val = le64_to_cpu(value);
return 0;
}
static ssize_t protocol_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
- struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
-
- return sprintf(buf, "%d.%d\n",
- PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
- PROTOCOL_REV_MINOR(scpi_info->protocol_version));
+ return sprintf(buf, "%lu.%lu\n",
+ FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version),
+ FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version));
}
static DEVICE_ATTR_RO(protocol_version);
static ssize_t firmware_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
- struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
-
- return sprintf(buf, "%d.%d.%d\n",
- FW_REV_MAJOR(scpi_info->firmware_version),
- FW_REV_MINOR(scpi_info->firmware_version),
- FW_REV_PATCH(scpi_info->firmware_version));
+ return sprintf(buf, "%lu.%lu.%lu\n",
+ FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version),
+ FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version),
+ FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version));
}
static DEVICE_ATTR_RO(firmware_version);
};
ATTRIBUTE_GROUPS(versions);
-static void
-scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count)
+static void scpi_free_channels(void *data)
{
+ struct scpi_drvinfo *info = data;
int i;
- for (i = 0; i < count && pchan->chan; i++, pchan++) {
- mbox_free_channel(pchan->chan);
- devm_kfree(dev, pchan->xfers);
- devm_iounmap(dev, pchan->rx_payload);
- }
-}
-
-static int scpi_remove(struct platform_device *pdev)
-{
- int i;
- struct device *dev = &pdev->dev;
- struct scpi_drvinfo *info = platform_get_drvdata(pdev);
-
- scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */
-
- of_platform_depopulate(dev);
- sysfs_remove_groups(&dev->kobj, versions_groups);
- scpi_free_channels(dev, info->channels, info->num_chans);
- platform_set_drvdata(pdev, NULL);
-
- for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) {
- kfree(info->dvfs[i]->opps);
- kfree(info->dvfs[i]);
- }
- devm_kfree(dev, info->channels);
- devm_kfree(dev, info);
-
- return 0;
+ for (i = 0; i < info->num_chans; i++)
+ mbox_free_channel(info->channels[i].chan);
}
#define MAX_SCPI_XFERS 10
{
int count, idx, ret;
struct resource res;
- struct scpi_chan *scpi_chan;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
return -ENODEV;
}
- scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL);
- if (!scpi_chan)
+ scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan),
+ GFP_KERNEL);
+ if (!scpi_info->channels)
return -ENOMEM;
- for (idx = 0; idx < count; idx++) {
+ ret = devm_add_action(dev, scpi_free_channels, scpi_info);
+ if (ret)
+ return ret;
+
+ for (; scpi_info->num_chans < count; scpi_info->num_chans++) {
resource_size_t size;
- struct scpi_chan *pchan = scpi_chan + idx;
+ int idx = scpi_info->num_chans;
+ struct scpi_chan *pchan = scpi_info->channels + idx;
struct mbox_client *cl = &pchan->cl;
struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
of_node_put(shmem);
if (ret) {
dev_err(dev, "failed to get SCPI payload mem resource\n");
- goto err;
+ return ret;
}
size = resource_size(&res);
pchan->rx_payload = devm_ioremap(dev, res.start, size);
if (!pchan->rx_payload) {
dev_err(dev, "failed to ioremap SCPI payload\n");
- ret = -EADDRNOTAVAIL;
- goto err;
+ return -EADDRNOTAVAIL;
}
pchan->tx_payload = pchan->rx_payload + (size >> 1);
dev_err(dev, "failed to get channel%d err %d\n",
idx, ret);
}
-err:
- scpi_free_channels(dev, scpi_chan, idx);
- scpi_info = NULL;
return ret;
}
- scpi_info->channels = scpi_chan;
- scpi_info->num_chans = count;
scpi_info->commands = scpi_std_commands;
-
- platform_set_drvdata(pdev, scpi_info);
+ scpi_info->scpi_ops = &scpi_ops;
if (scpi_info->is_legacy) {
/* Replace with legacy variants */
ret = scpi_init_versions(scpi_info);
if (ret) {
dev_err(dev, "incorrect or no SCP firmware found\n");
- scpi_remove(pdev);
return ret;
}
- _dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n",
- PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
- PROTOCOL_REV_MINOR(scpi_info->protocol_version),
- FW_REV_MAJOR(scpi_info->firmware_version),
- FW_REV_MINOR(scpi_info->firmware_version),
- FW_REV_PATCH(scpi_info->firmware_version));
- scpi_info->scpi_ops = &scpi_ops;
+ scpi_dvfs_populate(dev);
+
+ _dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n",
+ FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version),
+ FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version),
+ FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version),
+ FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version),
+ FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version));
- ret = sysfs_create_groups(&dev->kobj, versions_groups);
+ ret = devm_device_add_groups(dev, versions_groups);
if (ret)
dev_err(dev, "unable to create sysfs version group\n");
- return of_platform_populate(dev->of_node, NULL, NULL, dev);
+ return devm_of_platform_populate(dev);
}
static const struct of_device_id scpi_of_match[] = {
.of_match_table = scpi_of_match,
},
.probe = scpi_probe,
- .remove = scpi_remove,
};
module_platform_driver(scpi_driver);
* later.
*/
del_timer(&wakeup_timer);
+ destroy_timer_on_stack(&wakeup_timer);
if (atomic_dec_return_relaxed(&nb_active_threads) == 0)
complete(&suspend_threads_done);
return ret ? : le32_to_cpu(out);
}
+int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
+{
+ return qcom_scm_call_atomic2(QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE,
+ enable ? QCOM_SCM_SET_DLOAD_MODE : 0, 0);
+}
+
int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id)
{
struct {
{
return -ENODEV;
}
+
+int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr,
+ unsigned int *val)
+{
+ int ret;
+
+ ret = qcom_scm_call_atomic1(QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ, addr);
+ if (ret >= 0)
+ *val = ret;
+
+ return ret < 0 ? ret : 0;
+}
+
+int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val)
+{
+ return qcom_scm_call_atomic2(QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE,
+ addr, val);
+}
return ret;
}
+
+int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
+{
+ struct qcom_scm_desc desc = {0};
+ struct arm_smccc_res res;
+
+ desc.args[0] = QCOM_SCM_SET_DLOAD_MODE;
+ desc.args[1] = enable ? QCOM_SCM_SET_DLOAD_MODE : 0;
+ desc.arginfo = QCOM_SCM_ARGS(2);
+
+ return qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE,
+ &desc, &res);
+}
+
+int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr,
+ unsigned int *val)
+{
+ struct qcom_scm_desc desc = {0};
+ struct arm_smccc_res res;
+ int ret;
+
+ desc.args[0] = addr;
+ desc.arginfo = QCOM_SCM_ARGS(1);
+
+ ret = qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ,
+ &desc, &res);
+ if (ret >= 0)
+ *val = res.a1;
+
+ return ret < 0 ? ret : 0;
+}
+
+int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val)
+{
+ struct qcom_scm_desc desc = {0};
+ struct arm_smccc_res res;
+
+ desc.args[0] = addr;
+ desc.args[1] = val;
+ desc.arginfo = QCOM_SCM_ARGS(2);
+
+ return qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE,
+ &desc, &res);
+}
#include <linux/cpumask.h>
#include <linux/export.h>
#include <linux/dma-mapping.h>
+#include <linux/module.h>
#include <linux/types.h>
#include <linux/qcom_scm.h>
#include <linux/of.h>
+#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/clk.h>
#include <linux/reset-controller.h>
#include "qcom_scm.h"
+static bool download_mode = IS_ENABLED(CONFIG_QCOM_SCM_DOWNLOAD_MODE_DEFAULT);
+module_param(download_mode, bool, 0);
+
#define SCM_HAS_CORE_CLK BIT(0)
#define SCM_HAS_IFACE_CLK BIT(1)
#define SCM_HAS_BUS_CLK BIT(2)
struct clk *iface_clk;
struct clk *bus_clk;
struct reset_controller_dev reset;
+
+ u64 dload_mode_addr;
};
static struct qcom_scm *__scm;
}
EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_init);
+int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val)
+{
+ return __qcom_scm_io_readl(__scm->dev, addr, val);
+}
+EXPORT_SYMBOL(qcom_scm_io_readl);
+
+int qcom_scm_io_writel(phys_addr_t addr, unsigned int val)
+{
+ return __qcom_scm_io_writel(__scm->dev, addr, val);
+}
+EXPORT_SYMBOL(qcom_scm_io_writel);
+
+static void qcom_scm_set_download_mode(bool enable)
+{
+ bool avail;
+ int ret = 0;
+
+ avail = __qcom_scm_is_call_available(__scm->dev,
+ QCOM_SCM_SVC_BOOT,
+ QCOM_SCM_SET_DLOAD_MODE);
+ if (avail) {
+ ret = __qcom_scm_set_dload_mode(__scm->dev, enable);
+ } else if (__scm->dload_mode_addr) {
+ ret = __qcom_scm_io_writel(__scm->dev, __scm->dload_mode_addr,
+ enable ? QCOM_SCM_SET_DLOAD_MODE : 0);
+ } else {
+ dev_err(__scm->dev,
+ "No available mechanism for setting download mode\n");
+ }
+
+ if (ret)
+ dev_err(__scm->dev, "failed to set download mode: %d\n", ret);
+}
+
+static int qcom_scm_find_dload_address(struct device *dev, u64 *addr)
+{
+ struct device_node *tcsr;
+ struct device_node *np = dev->of_node;
+ struct resource res;
+ u32 offset;
+ int ret;
+
+ tcsr = of_parse_phandle(np, "qcom,dload-mode", 0);
+ if (!tcsr)
+ return 0;
+
+ ret = of_address_to_resource(tcsr, 0, &res);
+ of_node_put(tcsr);
+ if (ret)
+ return ret;
+
+ ret = of_property_read_u32_index(np, "qcom,dload-mode", 1, &offset);
+ if (ret < 0)
+ return ret;
+
+ *addr = res.start + offset;
+
+ return 0;
+}
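In the device tree, the property walked above pairs a phandle to the TCSR syscon with a register offset, roughly like the following (node label and offset hypothetical):

	qcom,dload-mode = <&tcsr 0x13000>;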
+
/**
* qcom_scm_is_available() - Checks if SCM is available
*/
if (!scm)
return -ENOMEM;
+ ret = qcom_scm_find_dload_address(&pdev->dev, &scm->dload_mode_addr);
+ if (ret < 0)
+ return ret;
+
clks = (unsigned long)of_device_get_match_data(&pdev->dev);
if (clks & SCM_HAS_CORE_CLK) {
scm->core_clk = devm_clk_get(&pdev->dev, "core");
__qcom_scm_init();
+ /*
+	 * If requested, enable "download mode"; from this point on a warm
+	 * boot will cause the boot stages to enter download mode, unless
+	 * disabled below by a clean shutdown/reboot.
+ */
+ if (download_mode)
+ qcom_scm_set_download_mode(true);
+
return 0;
}
+static void qcom_scm_shutdown(struct platform_device *pdev)
+{
+ /* Clean shutdown, disable download mode to allow normal restart */
+ if (download_mode)
+ qcom_scm_set_download_mode(false);
+}
+
static const struct of_device_id qcom_scm_dt_match[] = {
{ .compatible = "qcom,scm-apq8064",
/* FIXME: This should have .data = (void *) SCM_HAS_CORE_CLK */
.of_match_table = qcom_scm_dt_match,
},
.probe = qcom_scm_probe,
+ .shutdown = qcom_scm_shutdown,
};
static int __init qcom_scm_init(void)
#define QCOM_SCM_SVC_BOOT 0x1
#define QCOM_SCM_BOOT_ADDR 0x1
+#define QCOM_SCM_SET_DLOAD_MODE 0x10
#define QCOM_SCM_BOOT_ADDR_MC 0x11
#define QCOM_SCM_SET_REMOTE_STATE 0xa
extern int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id);
+extern int __qcom_scm_set_dload_mode(struct device *dev, bool enable);
#define QCOM_SCM_FLAG_HLOS 0x01
#define QCOM_SCM_FLAG_COLDBOOT_MC 0x02
#define QCOM_SCM_CMD_CORE_HOTPLUGGED 0x10
extern void __qcom_scm_cpu_power_down(u32 flags);
+#define QCOM_SCM_SVC_IO 0x5
+#define QCOM_SCM_IO_READ 0x1
+#define QCOM_SCM_IO_WRITE 0x2
+extern int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, unsigned int *val);
+extern int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val);
+
#define QCOM_SCM_SVC_INFO 0x6
#define QCOM_IS_CALL_AVAIL_CMD 0x1
extern int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
-obj-$(CONFIG_TEGRA_BPMP) += bpmp.o
+tegra-bpmp-y = bpmp.o
+tegra-bpmp-$(CONFIG_DEBUG_FS) += bpmp-debugfs.o
+obj-$(CONFIG_TEGRA_BPMP) += tegra-bpmp.o
obj-$(CONFIG_TEGRA_IVC) += ivc.o
--- /dev/null
+/*
+ * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ */
+#include <linux/debugfs.h>
+#include <linux/dma-mapping.h>
+#include <linux/uaccess.h>
+
+#include <soc/tegra/bpmp.h>
+#include <soc/tegra/bpmp-abi.h>
+
+struct seqbuf {
+ char *buf;
+ size_t pos;
+ size_t size;
+};
+
+static void seqbuf_init(struct seqbuf *seqbuf, void *buf, size_t size)
+{
+ seqbuf->buf = buf;
+ seqbuf->size = size;
+ seqbuf->pos = 0;
+}
+
+static size_t seqbuf_avail(struct seqbuf *seqbuf)
+{
+ return seqbuf->pos < seqbuf->size ? seqbuf->size - seqbuf->pos : 0;
+}
+
+static int seqbuf_status(struct seqbuf *seqbuf)
+{
+ return seqbuf->pos <= seqbuf->size ? 0 : -EOVERFLOW;
+}
+
+static int seqbuf_eof(struct seqbuf *seqbuf)
+{
+ return seqbuf->pos >= seqbuf->size;
+}
+
+static int seqbuf_read(struct seqbuf *seqbuf, void *buf, size_t nbyte)
+{
+ nbyte = min(nbyte, seqbuf_avail(seqbuf));
+ memcpy(buf, seqbuf->buf + seqbuf->pos, nbyte);
+ seqbuf->pos += nbyte;
+ return seqbuf_status(seqbuf);
+}
+
+static int seqbuf_read_u32(struct seqbuf *seqbuf, uint32_t *v)
+{
+ int err;
+
+ err = seqbuf_read(seqbuf, v, 4);
+ *v = le32_to_cpu(*v);
+ return err;
+}
+
+static int seqbuf_read_str(struct seqbuf *seqbuf, const char **str)
+{
+ *str = seqbuf->buf + seqbuf->pos;
+ seqbuf->pos += strnlen(*str, seqbuf_avail(seqbuf));
+ seqbuf->pos++;
+ return seqbuf_status(seqbuf);
+}
+
+static void seqbuf_seek(struct seqbuf *seqbuf, ssize_t offset)
+{
+ seqbuf->pos += offset;
+}
+
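Taken together these helpers implement a cursor over the dump buffer returned by the firmware: each directory entry is a depth word, an attribute word, then a NUL-terminated name, which is what bpmp_populate_dir() below consumes. A hypothetical helper showing one record being read (not part of the patch):

static int seqbuf_read_entry(struct seqbuf *seqbuf, uint32_t *depth,
			     uint32_t *attrs, const char **name)
{
	int err;

	err = seqbuf_read_u32(seqbuf, depth);
	if (err < 0)
		return err;

	err = seqbuf_read_u32(seqbuf, attrs);
	if (err < 0)
		return err;

	return seqbuf_read_str(seqbuf, name);
}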
+/* map filename in Linux debugfs to corresponding entry in BPMP */
+static const char *get_filename(struct tegra_bpmp *bpmp,
+ const struct file *file, char *buf, int size)
+{
+ char root_path_buf[512];
+ const char *root_path;
+ const char *filename;
+ size_t root_len;
+
+ root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf,
+ sizeof(root_path_buf));
+ if (IS_ERR(root_path))
+ return NULL;
+
+ root_len = strlen(root_path);
+
+ filename = dentry_path(file->f_path.dentry, buf, size);
+ if (IS_ERR(filename))
+ return NULL;
+
+ if (strlen(filename) < root_len ||
+ strncmp(filename, root_path, root_len))
+ return NULL;
+
+ filename += root_len;
+
+ return filename;
+}
+
+static int mrq_debugfs_read(struct tegra_bpmp *bpmp,
+ dma_addr_t name, size_t sz_name,
+ dma_addr_t data, size_t sz_data,
+ size_t *nbytes)
+{
+ struct mrq_debugfs_request req = {
+ .cmd = cpu_to_le32(CMD_DEBUGFS_READ),
+ .fop = {
+ .fnameaddr = cpu_to_le32((uint32_t)name),
+ .fnamelen = cpu_to_le32((uint32_t)sz_name),
+ .dataaddr = cpu_to_le32((uint32_t)data),
+ .datalen = cpu_to_le32((uint32_t)sz_data),
+ },
+ };
+ struct mrq_debugfs_response resp;
+ struct tegra_bpmp_message msg = {
+ .mrq = MRQ_DEBUGFS,
+ .tx = {
+ .data = &req,
+ .size = sizeof(req),
+ },
+ .rx = {
+ .data = &resp,
+ .size = sizeof(resp),
+ },
+ };
+ int err;
+
+ err = tegra_bpmp_transfer(bpmp, &msg);
+ if (err < 0)
+ return err;
+
+ *nbytes = (size_t)resp.fop.nbytes;
+
+ return 0;
+}
+
+static int mrq_debugfs_write(struct tegra_bpmp *bpmp,
+ dma_addr_t name, size_t sz_name,
+ dma_addr_t data, size_t sz_data)
+{
+ const struct mrq_debugfs_request req = {
+ .cmd = cpu_to_le32(CMD_DEBUGFS_WRITE),
+ .fop = {
+ .fnameaddr = cpu_to_le32((uint32_t)name),
+ .fnamelen = cpu_to_le32((uint32_t)sz_name),
+ .dataaddr = cpu_to_le32((uint32_t)data),
+ .datalen = cpu_to_le32((uint32_t)sz_data),
+ },
+ };
+ struct tegra_bpmp_message msg = {
+ .mrq = MRQ_DEBUGFS,
+ .tx = {
+ .data = &req,
+ .size = sizeof(req),
+ },
+ };
+
+ return tegra_bpmp_transfer(bpmp, &msg);
+}
+
+static int mrq_debugfs_dumpdir(struct tegra_bpmp *bpmp, dma_addr_t addr,
+ size_t size, size_t *nbytes)
+{
+ const struct mrq_debugfs_request req = {
+ .cmd = cpu_to_le32(CMD_DEBUGFS_DUMPDIR),
+ .dumpdir = {
+ .dataaddr = cpu_to_le32((uint32_t)addr),
+ .datalen = cpu_to_le32((uint32_t)size),
+ },
+ };
+ struct mrq_debugfs_response resp;
+ struct tegra_bpmp_message msg = {
+ .mrq = MRQ_DEBUGFS,
+ .tx = {
+ .data = &req,
+ .size = sizeof(req),
+ },
+ .rx = {
+ .data = &resp,
+ .size = sizeof(resp),
+ },
+ };
+ int err;
+
+ err = tegra_bpmp_transfer(bpmp, &msg);
+ if (err < 0)
+ return err;
+
+ *nbytes = (size_t)resp.dumpdir.nbytes;
+
+ return 0;
+}
+
+static int debugfs_show(struct seq_file *m, void *p)
+{
+ struct file *file = m->private;
+ struct inode *inode = file_inode(file);
+ struct tegra_bpmp *bpmp = inode->i_private;
+ const size_t datasize = m->size;
+ const size_t namesize = SZ_256;
+ void *datavirt, *namevirt;
+ dma_addr_t dataphys, namephys;
+ char buf[256];
+ const char *filename;
+ size_t len, nbytes;
+ int ret;
+
+ filename = get_filename(bpmp, file, buf, sizeof(buf));
+ if (!filename)
+ return -ENOENT;
+
+ namevirt = dma_alloc_coherent(bpmp->dev, namesize, &namephys,
+ GFP_KERNEL | GFP_DMA32);
+ if (!namevirt)
+ return -ENOMEM;
+
+ datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys,
+ GFP_KERNEL | GFP_DMA32);
+ if (!datavirt) {
+ ret = -ENOMEM;
+ goto free_namebuf;
+ }
+
+ len = strlen(filename);
+ strncpy(namevirt, filename, namesize);
+
+ ret = mrq_debugfs_read(bpmp, namephys, len, dataphys, datasize,
+ &nbytes);
+
+ if (!ret)
+ seq_write(m, datavirt, nbytes);
+
+ dma_free_coherent(bpmp->dev, datasize, datavirt, dataphys);
+free_namebuf:
+ dma_free_coherent(bpmp->dev, namesize, namevirt, namephys);
+
+ return ret;
+}
+
+static int debugfs_open(struct inode *inode, struct file *file)
+{
+ return single_open_size(file, debugfs_show, file, SZ_128K);
+}
+
+static ssize_t debugfs_store(struct file *file, const char __user *buf,
+ size_t count, loff_t *f_pos)
+{
+ struct inode *inode = file_inode(file);
+ struct tegra_bpmp *bpmp = inode->i_private;
+ const size_t datasize = count;
+ const size_t namesize = SZ_256;
+ void *datavirt, *namevirt;
+ dma_addr_t dataphys, namephys;
+ char fnamebuf[256];
+ const char *filename;
+ size_t len;
+ int ret;
+
+ filename = get_filename(bpmp, file, fnamebuf, sizeof(fnamebuf));
+ if (!filename)
+ return -ENOENT;
+
+ namevirt = dma_alloc_coherent(bpmp->dev, namesize, &namephys,
+ GFP_KERNEL | GFP_DMA32);
+ if (!namevirt)
+ return -ENOMEM;
+
+ datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys,
+ GFP_KERNEL | GFP_DMA32);
+ if (!datavirt) {
+ ret = -ENOMEM;
+ goto free_namebuf;
+ }
+
+ len = strlen(filename);
+ strncpy(namevirt, filename, namesize);
+
+ if (copy_from_user(datavirt, buf, count)) {
+ ret = -EFAULT;
+ goto free_databuf;
+ }
+
+ ret = mrq_debugfs_write(bpmp, namephys, len, dataphys,
+ count);
+
+free_databuf:
+ dma_free_coherent(bpmp->dev, datasize, datavirt, dataphys);
+free_namebuf:
+ dma_free_coherent(bpmp->dev, namesize, namevirt, namephys);
+
+ return ret ?: count;
+}
+
+static const struct file_operations debugfs_fops = {
+ .open = debugfs_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .write = debugfs_store,
+ .release = single_release,
+};
+
+static int bpmp_populate_dir(struct tegra_bpmp *bpmp, struct seqbuf *seqbuf,
+ struct dentry *parent, uint32_t depth)
+{
+ int err;
+ uint32_t d, t;
+ const char *name;
+ struct dentry *dentry;
+
+ while (!seqbuf_eof(seqbuf)) {
+ err = seqbuf_read_u32(seqbuf, &d);
+ if (err < 0)
+ return err;
+
+ if (d < depth) {
+ seqbuf_seek(seqbuf, -4);
+ /* go up a level */
+ return 0;
+ } else if (d != depth) {
+ /* malformed data received from BPMP */
+ return -EIO;
+ }
+
+ err = seqbuf_read_u32(seqbuf, &t);
+ if (err < 0)
+ return err;
+ err = seqbuf_read_str(seqbuf, &name);
+ if (err < 0)
+ return err;
+
+ if (t & DEBUGFS_S_ISDIR) {
+ dentry = debugfs_create_dir(name, parent);
+ if (!dentry)
+ return -ENOMEM;
+ err = bpmp_populate_dir(bpmp, seqbuf, dentry, depth+1);
+ if (err < 0)
+ return err;
+ } else {
+ umode_t mode;
+
+ mode = t & DEBUGFS_S_IRUSR ? S_IRUSR : 0;
+ mode |= t & DEBUGFS_S_IWUSR ? S_IWUSR : 0;
+ dentry = debugfs_create_file(name, mode,
+ parent, bpmp,
+ &debugfs_fops);
+ if (!dentry)
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+static int create_debugfs_mirror(struct tegra_bpmp *bpmp, void *buf,
+ size_t bufsize, struct dentry *root)
+{
+ struct seqbuf seqbuf;
+ int err;
+
+ bpmp->debugfs_mirror = debugfs_create_dir("debug", root);
+ if (!bpmp->debugfs_mirror)
+ return -ENOMEM;
+
+ seqbuf_init(&seqbuf, buf, bufsize);
+ err = bpmp_populate_dir(bpmp, &seqbuf, bpmp->debugfs_mirror, 0);
+ if (err < 0) {
+ debugfs_remove_recursive(bpmp->debugfs_mirror);
+ bpmp->debugfs_mirror = NULL;
+ }
+
+ return err;
+}
+
+static int mrq_is_supported(struct tegra_bpmp *bpmp, unsigned int mrq)
+{
+ struct mrq_query_abi_request req = { .mrq = cpu_to_le32(mrq) };
+ struct mrq_query_abi_response resp;
+ struct tegra_bpmp_message msg = {
+ .mrq = MRQ_QUERY_ABI,
+ .tx = {
+ .data = &req,
+ .size = sizeof(req),
+ },
+ .rx = {
+ .data = &resp,
+ .size = sizeof(resp),
+ },
+ };
+ int ret;
+
+ ret = tegra_bpmp_transfer(bpmp, &msg);
+ if (ret < 0) {
+ /* something went wrong; assume not supported */
+ dev_warn(bpmp->dev, "tegra_bpmp_transfer failed (%d)\n", ret);
+ return 0;
+ }
+
+ return resp.status ? 0 : 1;
+}
+
+int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
+{
+ dma_addr_t phys;
+ void *virt;
+ const size_t sz = SZ_256K;
+ size_t nbytes;
+ int ret;
+ struct dentry *root;
+
+ if (!mrq_is_supported(bpmp, MRQ_DEBUGFS))
+ return 0;
+
+ root = debugfs_create_dir("bpmp", NULL);
+ if (!root)
+ return -ENOMEM;
+
+ virt = dma_alloc_coherent(bpmp->dev, sz, &phys,
+ GFP_KERNEL | GFP_DMA32);
+ if (!virt) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ret = mrq_debugfs_dumpdir(bpmp, phys, sz, &nbytes);
+ if (ret < 0)
+ goto free;
+
+ ret = create_debugfs_mirror(bpmp, virt, nbytes, root);
+free:
+ dma_free_coherent(bpmp->dev, sz, virt, phys);
+out:
+ if (ret < 0)
+ debugfs_remove(root);
+
+ return ret;
+}
}
static ssize_t __tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
- void *data, size_t size)
+ void *data, size_t size, int *ret)
{
+ int err;
+
if (data && size > 0)
memcpy(data, channel->ib->data, size);
- return tegra_ivc_read_advance(channel->ivc);
+ err = tegra_ivc_read_advance(channel->ivc);
+ if (err < 0)
+ return err;
+
+ *ret = channel->ib->code;
+
+ return 0;
}
static ssize_t tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
- void *data, size_t size)
+ void *data, size_t size, int *ret)
{
struct tegra_bpmp *bpmp = channel->bpmp;
unsigned long flags;
}
spin_lock_irqsave(&bpmp->lock, flags);
- err = __tegra_bpmp_channel_read(channel, data, size);
+ err = __tegra_bpmp_channel_read(channel, data, size, ret);
clear_bit(index, bpmp->threaded.allocated);
spin_unlock_irqrestore(&bpmp->lock, flags);
if (err < 0)
return err;
- return __tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size);
+ return __tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size,
+ &msg->rx.ret);
}
EXPORT_SYMBOL_GPL(tegra_bpmp_transfer_atomic);
if (err == 0)
return -ETIMEDOUT;
- return tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size);
+ return tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size,
+ &msg->rx.ret);
}
EXPORT_SYMBOL_GPL(tegra_bpmp_transfer);
return NULL;
}
-static void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel,
- int code, const void *data, size_t size)
+void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel, int code,
+ const void *data, size_t size)
{
unsigned long flags = channel->ib->flags;
struct tegra_bpmp *bpmp = channel->bpmp;
mbox_client_txdone(bpmp->mbox.channel, 0);
}
}
+EXPORT_SYMBOL_GPL(tegra_bpmp_mrq_return);
static void tegra_bpmp_handle_mrq(struct tegra_bpmp *bpmp,
unsigned int mrq,
if (err < 0)
goto free_mrq;
+ err = tegra_bpmp_init_debugfs(bpmp);
+ if (err < 0)
+ dev_err(&pdev->dev, "debugfs initialization failed: %d\n", err);
+
return 0;
free_mrq:
/* And we wait for the response. */
timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
if (!wait_for_completion_timeout(&xfer->done, timeout)) {
- dev_err(dev, "Mbox timedout in resp(caller: %pF)\n",
+ dev_err(dev, "Mbox timedout in resp(caller: %pS)\n",
(void *)_RET_IP_);
ret = -ETIMEDOUT;
}
.write_complete = altera_cvp_write_complete,
};
-static ssize_t show_chkcfg(struct device_driver *dev, char *buf)
+static ssize_t chkcfg_show(struct device_driver *dev, char *buf)
{
return snprintf(buf, 3, "%d\n", altera_cvp_chkcfg);
}
-static ssize_t store_chkcfg(struct device_driver *drv, const char *buf,
+static ssize_t chkcfg_store(struct device_driver *drv, const char *buf,
size_t count)
{
int ret;
return count;
}
-static DRIVER_ATTR(chkcfg, 0600, show_chkcfg, store_chkcfg);
+static DRIVER_ATTR_RW(chkcfg);
static int altera_cvp_probe(struct pci_dev *pdev,
const struct pci_device_id *dev_id);
NUM_BANKS(ADDR_SURF_2_BANK);
for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
WREG32(mmGB_TILE_MODE0 + reg_offset, tilemode[reg_offset]);
- } else if (adev->asic_type == CHIP_OLAND || adev->asic_type == CHIP_HAINAN) {
+ } else if (adev->asic_type == CHIP_OLAND) {
+ tilemode[0] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4);
+ tilemode[1] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4);
+ tilemode[2] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4);
+ tilemode[3] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4);
+ tilemode[4] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[5] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(split_equal_to_row_size) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[6] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(split_equal_to_row_size) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[7] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(split_equal_to_row_size) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4);
+ tilemode[8] = MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[9] = MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[10] = MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4);
+ tilemode[11] = MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[12] = MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[13] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[14] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[15] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[16] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[17] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(split_equal_to_row_size) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[21] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[22] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4);
+ tilemode[23] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[24] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2);
+ tilemode[25] = MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+ ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_1KB) |
+ NUM_BANKS(ADDR_SURF_8_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1);
+ for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
+ WREG32(mmGB_TILE_MODE0 + reg_offset, tilemode[reg_offset]);
+ } else if (adev->asic_type == CHIP_HAINAN) {
tilemode[0] = MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
PIPE_CONFIG(ADDR_SURF_P2) |
int err = 0;
dev = kfd_device_by_id(args->gpu_id);
+ if (!dev)
+ return -EINVAL;
dev->kfd2kgd->get_tile_config(dev->kgd, &config);
struct kfd_event *ev)
{
if (p->signal_event_count == KFD_SIGNAL_EVENT_LIMIT) {
- pr_warn("Signal event wasn't created because limit was reached\n");
+ if (!p->signal_event_limit_reached) {
+ pr_warn("Signal event wasn't created because limit was reached\n");
+ p->signal_event_limit_reached = true;
+ }
return -ENOMEM;
}
{
if (kq->queue->properties.type == KFD_QUEUE_TYPE_HIQ)
kq->mqd->destroy_mqd(kq->mqd,
- NULL,
- false,
+ kq->queue->mqd,
+ KFD_PREEMPT_TYPE_WAVEFRONT_RESET,
QUEUE_PREEMPT_DEFAULT_TIMEOUT_MS,
kq->queue->pipe,
kq->queue->queue);
uint32_t wptr, rptr;
unsigned int *queue_address;
+ /* When rptr == wptr, the buffer is empty.
+ * When rptr == wptr + 1, the buffer is full.
+ * It is always rptr that advances to the position of wptr, rather than
+ * the opposite. So we can only use up to queue_size_dwords - 1 dwords.
+ */
rptr = *kq->rptr_kernel;
wptr = *kq->wptr_kernel;
queue_address = (unsigned int *)kq->pq_kernel_addr;
pr_debug("wptr: %d\n", wptr);
pr_debug("queue_address 0x%p\n", queue_address);
- available_size = (rptr - 1 - wptr + queue_size_dwords) %
+ available_size = (rptr + queue_size_dwords - 1 - wptr) %
queue_size_dwords;
- if (packet_size_in_dwords >= queue_size_dwords ||
- packet_size_in_dwords >= available_size) {
+ if (packet_size_in_dwords > available_size) {
/*
* make sure calling functions know
* acquire_packet_buffer() failed
}
if (wptr + packet_size_in_dwords >= queue_size_dwords) {
+ /* make sure after rolling back to position 0, there is
+ * still enough space.
+ */
+ if (packet_size_in_dwords >= rptr) {
+ *buffer_ptr = NULL;
+ return -ENOMEM;
+ }
+ /* fill nops, roll back and start at position 0 */
while (wptr > 0) {
queue_address[wptr] = kq->nop_packet;
wptr = (wptr + 1) % queue_size_dwords;
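The arithmetic above is the classic one-slot-reserved ring buffer: keeping one dword unused lets rptr == wptr unambiguously mean "empty". A standalone sketch of the same availability check (illustrative, not from the patch):

/* Free dwords in a ring of `size` entries that reserves one slot so
 * that rptr == wptr always means "empty" and never "full". */
static unsigned int ring_space(unsigned int rptr, unsigned int wptr,
			       unsigned int size)
{
	return (rptr + size - 1 - wptr) % size;
}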
struct list_head signal_event_pages;
u32 next_nonsignal_event_id;
size_t signal_event_count;
+ bool signal_event_limit_reached;
};
/**
int num_queues = 0;
struct queue *cur;
- memset(&q_properties, 0, sizeof(struct queue_properties));
memcpy(&q_properties, properties, sizeof(struct queue_properties));
q = NULL;
kq = NULL;
void etnaviv_gem_free_object(struct drm_gem_object *obj)
{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
+ struct etnaviv_drm_private *priv = obj->dev->dev_private;
struct etnaviv_vram_mapping *mapping, *tmp;
/* object should not be active */
WARN_ON(is_active(etnaviv_obj));
+ mutex_lock(&priv->gem_lock);
list_del(&etnaviv_obj->gem_node);
+ mutex_unlock(&priv->gem_lock);
list_for_each_entry_safe(mapping, tmp, &etnaviv_obj->vram_list,
obj_node) {
cmdbuf->user_size = ALIGN(args->stream_size, 8);
ret = etnaviv_gpu_submit(gpu, submit, cmdbuf);
- if (ret == 0)
- cmdbuf = NULL;
+ if (ret)
+ goto out;
+
+ cmdbuf = NULL;
if (args->flags & ETNA_SUBMIT_FENCE_FD_OUT) {
/*
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/iopoll.h>
+#include <linux/irq.h>
#include <linux/mfd/syscon.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
static int exynos_drm_suspend(struct device *dev)
{
struct drm_device *drm_dev = dev_get_drvdata(dev);
- struct drm_connector *connector;
- struct drm_connector_list_iter conn_iter;
+ struct exynos_drm_private *private = drm_dev->dev_private;
if (pm_runtime_suspended(dev) || !drm_dev)
return 0;
- drm_connector_list_iter_begin(drm_dev, &conn_iter);
- drm_for_each_connector_iter(connector, &conn_iter) {
- int old_dpms = connector->dpms;
-
- if (connector->funcs->dpms)
- connector->funcs->dpms(connector, DRM_MODE_DPMS_OFF);
-
- /* Set the old mode back to the connector for resume */
- connector->dpms = old_dpms;
+ drm_kms_helper_poll_disable(drm_dev);
+ exynos_drm_fbdev_suspend(drm_dev);
+ private->suspend_state = drm_atomic_helper_suspend(drm_dev);
+ if (IS_ERR(private->suspend_state)) {
+ exynos_drm_fbdev_resume(drm_dev);
+ drm_kms_helper_poll_enable(drm_dev);
+ return PTR_ERR(private->suspend_state);
}
- drm_connector_list_iter_end(&conn_iter);
return 0;
}
static int exynos_drm_resume(struct device *dev)
{
struct drm_device *drm_dev = dev_get_drvdata(dev);
- struct drm_connector *connector;
- struct drm_connector_list_iter conn_iter;
+ struct exynos_drm_private *private = drm_dev->dev_private;
if (pm_runtime_suspended(dev) || !drm_dev)
return 0;
- drm_connector_list_iter_begin(drm_dev, &conn_iter);
- drm_for_each_connector_iter(connector, &conn_iter) {
- if (connector->funcs->dpms) {
- int dpms = connector->dpms;
-
- connector->dpms = DRM_MODE_DPMS_OFF;
- connector->funcs->dpms(connector, dpms);
- }
- }
- drm_connector_list_iter_end(&conn_iter);
+ drm_atomic_helper_resume(drm_dev, private->suspend_state);
+ exynos_drm_fbdev_resume(drm_dev);
+ drm_kms_helper_poll_enable(drm_dev);
return 0;
}
*/
struct exynos_drm_private {
struct drm_fb_helper *fb_helper;
+ struct drm_atomic_state *suspend_state;
struct device *dma_dev;
void *mapping;
#include <drm/drm_crtc_helper.h>
#include <drm/exynos_drm.h>
+#include <linux/console.h>
+
#include "exynos_drm_drv.h"
#include "exynos_drm_fb.h"
#include "exynos_drm_fbdev.h"
drm_fb_helper_hotplug_event(fb_helper);
}
+
+void exynos_drm_fbdev_suspend(struct drm_device *dev)
+{
+ struct exynos_drm_private *private = dev->dev_private;
+
+ console_lock();
+ drm_fb_helper_set_suspend(private->fb_helper, 1);
+ console_unlock();
+}
+
+void exynos_drm_fbdev_resume(struct drm_device *dev)
+{
+ struct exynos_drm_private *private = dev->dev_private;
+
+ console_lock();
+ drm_fb_helper_set_suspend(private->fb_helper, 0);
+ console_unlock();
+}
void exynos_drm_fbdev_fini(struct drm_device *dev);
void exynos_drm_fbdev_restore_mode(struct drm_device *dev);
void exynos_drm_output_poll_changed(struct drm_device *dev);
+void exynos_drm_fbdev_suspend(struct drm_device *drm);
+void exynos_drm_fbdev_resume(struct drm_device *drm);
#else
#define exynos_drm_output_poll_changed (NULL)
+static inline void exynos_drm_fbdev_suspend(struct drm_device *drm)
+{
+}
+
+static inline void exynos_drm_fbdev_resume(struct drm_device *drm)
+{
+}
+
#endif
#endif
struct drm_device *dev = encoder->dev;
struct drm_connector *connector;
struct drm_display_mode *m;
+ struct drm_connector_list_iter conn_iter;
int mode_ok;
drm_mode_set_crtcinfo(adjusted_mode, 0);
- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ drm_connector_list_iter_begin(dev, &conn_iter);
+ drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->encoder == encoder)
break;
}
+ if (connector)
+ drm_connector_get(connector);
+ drm_connector_list_iter_end(&conn_iter);
- if (connector->encoder != encoder)
+ if (!connector)
return true;
mode_ok = hdmi_mode_valid(connector, adjusted_mode);
if (mode_ok == MODE_OK)
- return true;
+ goto cleanup;
/*
* Find the most suitable mode and copy it to adjusted_mode.
}
}
+cleanup:
+ drm_connector_put(connector);
+
return true;
}
static int emulate_pci_bar_write(struct intel_vgpu *vgpu, unsigned int offset,
void *p_data, unsigned int bytes)
{
- unsigned int bar_index =
- (rounddown(offset, 8) % PCI_BASE_ADDRESS_0) / 8;
u32 new = *(u32 *)(p_data);
bool lo = IS_ALIGNED(offset, 8);
u64 size;
int ret = 0;
bool mmio_enabled =
vgpu_cfg_space(vgpu)[PCI_COMMAND] & PCI_COMMAND_MEMORY;
+ struct intel_vgpu_pci_bar *bars = vgpu->cfg_space.bar;
- if (WARN_ON(bar_index >= INTEL_GVT_PCI_BAR_MAX))
- return -EINVAL;
-
+ /*
+ * Power-up software can determine how much address
+ * space the device requires by writing a value of
+ * all 1's to the register and then reading the value
+ * back. The device will return 0's in all don't-care
+ * address bits.
+ */
if (new == 0xffffffff) {
- /*
- * Power-up software can determine how much address
- * space the device requires by writing a value of
- * all 1's to the register and then reading the value
- * back. The device will return 0's in all don't-care
- * address bits.
- */
- size = vgpu->cfg_space.bar[bar_index].size;
- if (lo) {
- new = rounddown(new, size);
- } else {
- u32 val = vgpu_cfg_space(vgpu)[rounddown(offset, 8)];
- /* for 32bit mode bar it returns all-0 in upper 32
- * bit, for 64bit mode bar it will calculate the
- * size with lower 32bit and return the corresponding
- * value
+ switch (offset) {
+ case PCI_BASE_ADDRESS_0:
+ case PCI_BASE_ADDRESS_1:
+			size = ~(bars[INTEL_GVT_PCI_BAR_GTTMMIO].size - 1);
+ intel_vgpu_write_pci_bar(vgpu, offset,
+ size >> (lo ? 0 : 32), lo);
+ /*
+ * Untrap the BAR, since guest hasn't configured a
+ * valid GPA
*/
- if (val & PCI_BASE_ADDRESS_MEM_TYPE_64)
- new &= (~(size-1)) >> 32;
- else
- new = 0;
- }
- /*
- * Unmapp & untrap the BAR, since guest hasn't configured a
- * valid GPA
- */
- switch (bar_index) {
- case INTEL_GVT_PCI_BAR_GTTMMIO:
ret = trap_gttmmio(vgpu, false);
break;
- case INTEL_GVT_PCI_BAR_APERTURE:
+ case PCI_BASE_ADDRESS_2:
+ case PCI_BASE_ADDRESS_3:
+ size = ~(bars[INTEL_GVT_PCI_BAR_APERTURE].size - 1);
+ intel_vgpu_write_pci_bar(vgpu, offset,
+ size >> (lo ? 0 : 32), lo);
ret = map_aperture(vgpu, false);
break;
+ default:
+ /* Unimplemented BARs */
+ intel_vgpu_write_pci_bar(vgpu, offset, 0x0, false);
}
- intel_vgpu_write_pci_bar(vgpu, offset, new, lo);
} else {
- /*
- * Unmapp & untrap the old BAR first, since guest has
- * re-configured the BAR
- */
- switch (bar_index) {
- case INTEL_GVT_PCI_BAR_GTTMMIO:
- ret = trap_gttmmio(vgpu, false);
+ switch (offset) {
+ case PCI_BASE_ADDRESS_0:
+ case PCI_BASE_ADDRESS_1:
+ /*
+ * Untrap the old BAR first, since guest has
+ * re-configured the BAR
+ */
+ trap_gttmmio(vgpu, false);
+ intel_vgpu_write_pci_bar(vgpu, offset, new, lo);
+ ret = trap_gttmmio(vgpu, mmio_enabled);
break;
- case INTEL_GVT_PCI_BAR_APERTURE:
- ret = map_aperture(vgpu, false);
+ case PCI_BASE_ADDRESS_2:
+ case PCI_BASE_ADDRESS_3:
+ map_aperture(vgpu, false);
+ intel_vgpu_write_pci_bar(vgpu, offset, new, lo);
+ ret = map_aperture(vgpu, mmio_enabled);
break;
- }
- intel_vgpu_write_pci_bar(vgpu, offset, new, lo);
- /* Track the new BAR */
- if (mmio_enabled) {
- switch (bar_index) {
- case INTEL_GVT_PCI_BAR_GTTMMIO:
- ret = trap_gttmmio(vgpu, true);
- break;
- case INTEL_GVT_PCI_BAR_APERTURE:
- ret = map_aperture(vgpu, true);
- break;
- }
+ default:
+ intel_vgpu_write_pci_bar(vgpu, offset, new, lo);
}
}
return ret;
}
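As an illustrative aside on the size-probe protocol quoted in the comment above (plain C, not GVT code; the helper name is made up): a 32-bit memory BAR that reads back 0xff000000 after the all-ones write decodes to a 16 MiB region.

	#include <stdint.h>

	/* Decode a 32-bit memory BAR's size from the all-ones readback. */
	static uint32_t bar_size_from_probe(uint32_t readback)
	{
		uint32_t base_mask = readback & 0xfffffff0; /* strip flag bits */

		return ~base_mask + 1; /* 0xff000000 -> 0x01000000 (16 MiB) */
	}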
switch (rounddown(offset, 4)) {
- case PCI_BASE_ADDRESS_0:
- case PCI_BASE_ADDRESS_1:
- case PCI_BASE_ADDRESS_2:
- case PCI_BASE_ADDRESS_3:
+ case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_5:
if (WARN_ON(!IS_ALIGNED(offset, 4)))
return -EINVAL;
return emulate_pci_bar_write(vgpu, offset, p_data, bytes);
struct intel_gvt *gvt = vgpu->gvt;
const struct intel_gvt_device_info *info = &gvt->device_info;
u16 *gmch_ctl;
- int i;
memcpy(vgpu_cfg_space(vgpu), gvt->firmware.cfg_space,
info->cfg_space_size);
*/
memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_1, 0, 4);
memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_3, 0, 4);
+ memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_4, 0, 8);
memset(vgpu_cfg_space(vgpu) + INTEL_GVT_PCI_OPREGION, 0, 4);
- for (i = 0; i < INTEL_GVT_MAX_BAR_NUM; i++) {
- vgpu->cfg_space.bar[i].size = pci_resource_len(
- gvt->dev_priv->drm.pdev, i * 2);
- vgpu->cfg_space.bar[i].tracked = false;
- }
+ vgpu->cfg_space.bar[INTEL_GVT_PCI_BAR_GTTMMIO].size =
+ pci_resource_len(gvt->dev_priv->drm.pdev, 0);
+ vgpu->cfg_space.bar[INTEL_GVT_PCI_BAR_APERTURE].size =
+ pci_resource_len(gvt->dev_priv->drm.pdev, 2);
}
/**
pipe);
int position;
int vbl_start, vbl_end, hsync_start, htotal, vtotal;
- bool in_vbl = true;
unsigned long irqflags;
if (WARN_ON(!mode->crtc_clock)) {
spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
- in_vbl = position >= vbl_start && position < vbl_end;
-
/*
* While in vblank, position will be negative
* counting up towards 0 at vbl_end. And outside
connector->encoder->base.id,
connector->encoder->name);
- /* ELD Conn_Type */
- connector->eld[5] &= ~(3 << 2);
- if (intel_crtc_has_dp_encoder(crtc_state))
- connector->eld[5] |= (1 << 2);
-
connector->eld[6] = drm_av_sync_delay(connector, adjusted_mode) / 2;
if (dev_priv->display.audio_codec_enable)
is_hdmi = is_dvi && (child->common.device_type & DEVICE_TYPE_NOT_HDMI_OUTPUT) == 0;
is_edp = is_dp && (child->common.device_type & DEVICE_TYPE_INTERNAL_CONNECTOR);
+ if (port == PORT_A && is_dvi) {
+ DRM_DEBUG_KMS("VBT claims port A supports DVI%s, ignoring\n",
+ is_hdmi ? "/HDMI" : "");
+ is_dvi = false;
+ is_hdmi = false;
+ }
+
info->supports_dvi = is_dvi;
info->supports_hdmi = is_hdmi;
info->supports_dp = is_dp;
mask = DC_STATE_DEBUG_MASK_MEMORY_UP;
- if (IS_BROXTON(dev_priv))
+ if (IS_GEN9_LP(dev_priv))
mask |= DC_STATE_DEBUG_MASK_CORES;
/* The below bit doesn't need to be cleared ever afterwards */
out:
if (ret && IS_GEN9_LP(dev_priv)) {
tmp = I915_READ(BXT_PHY_CTL(port));
- if ((tmp & (BXT_PHY_LANE_POWERDOWN_ACK |
+ if ((tmp & (BXT_PHY_CMNLANE_POWERDOWN_ACK |
+ BXT_PHY_LANE_POWERDOWN_ACK |
BXT_PHY_LANE_ENABLED)) != BXT_PHY_LANE_ENABLED)
DRM_ERROR("Port %c enabled but PHY powered down? "
"(PHY_CTL %08x)\n", port_name(port), tmp);
struct drm_crtc_state *old_crtc_state, *new_crtc_state;
struct drm_crtc *crtc;
struct intel_crtc_state *intel_cstate;
- bool hw_check = intel_state->modeset;
u64 put_domains[I915_MAX_PIPES] = {};
unsigned crtc_vblank_mask = 0;
int i;
if (needs_modeset(new_crtc_state) ||
to_intel_crtc_state(new_crtc_state)->update_pipe) {
- hw_check = true;
put_domains[to_intel_crtc(crtc)->pipe] =
modeset_get_crtc_power_domains(crtc,
if (mode_cmd->handles[i] != mode_cmd->handles[0]) {
DRM_DEBUG_KMS("bad plane %d handle\n", i);
- return -EINVAL;
+ goto err;
}
stride_alignment = intel_fb_stride_alignment(fb, i);
},
};
-static u32 bxt_phy_port_mask(const struct bxt_ddi_phy_info *phy_info)
-{
- return (phy_info->dual_channel * BIT(phy_info->channel[DPIO_CH1].port)) |
- BIT(phy_info->channel[DPIO_CH0].port);
-}
-
static const struct bxt_ddi_phy_info *
bxt_get_phy_list(struct drm_i915_private *dev_priv, int *count)
{
enum dpio_phy phy)
{
const struct bxt_ddi_phy_info *phy_info;
- enum port port;
phy_info = bxt_get_phy_info(dev_priv, phy);
return false;
}
- for_each_port_masked(port, bxt_phy_port_mask(phy_info)) {
- u32 tmp = I915_READ(BXT_PHY_CTL(port));
-
- if (tmp & BXT_PHY_CMNLANE_POWERDOWN_ACK) {
- DRM_DEBUG_DRIVER("DDI PHY %d powered, but common lane "
- "for port %c powered down "
- "(PHY_CTL %08x)\n",
- phy, port_name(port), tmp);
-
- return false;
- }
- }
-
return true;
}
struct intel_crtc_state *old_crtc_state,
struct drm_connector_state *old_conn_state)
{
- struct drm_device *dev = encoder->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
enum port port;
intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_OFF);
intel_panel_disable_backlight(old_conn_state);
- /*
- * Disable Device ready before the port shutdown in order
- * to avoid split screen
- */
- if (IS_BROXTON(dev_priv)) {
- for_each_dsi_port(port, intel_dsi->ports)
- I915_WRITE(MIPI_DEVICE_READY(port), 0);
- }
-
/*
* According to the spec we should send SHUTDOWN before
* MIPI_SEQ_DISPLAY_OFF only for v3+ VBTs, but field testing
#include "intel_drv.h"
#include "i915_drv.h"
+static void intel_connector_update_eld_conn_type(struct drm_connector *connector)
+{
+ u8 conn_type;
+
+ if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
+ connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
+ conn_type = DRM_ELD_CONN_TYPE_DP;
+ } else {
+ conn_type = DRM_ELD_CONN_TYPE_HDMI;
+ }
+
+ connector->eld[DRM_ELD_SAD_COUNT_CONN_TYPE] &= ~DRM_ELD_CONN_TYPE_MASK;
+ connector->eld[DRM_ELD_SAD_COUNT_CONN_TYPE] |= conn_type;
+}
+
/**
* intel_connector_update_modes - update connector from edid
* @connector: DRM connector device to use
ret = drm_add_edid_modes(connector, edid);
drm_edid_to_eld(connector, edid);
+ intel_connector_update_eld_conn_type(connector);
+
return ret;
}
if (!panel->backlight.max)
return -ENODEV;
+ panel->backlight.min = get_backlight_min_vbt(connector);
+
val = bxt_get_backlight(connector);
val = intel_panel_compute_brightness(connector, val);
panel->backlight.level = clamp(val, panel->backlight.min,
if (!panel->backlight.max)
return -ENODEV;
+ panel->backlight.min = get_backlight_min_vbt(connector);
+
val = bxt_get_backlight(connector);
val = intel_panel_compute_brightness(connector, val);
panel->backlight.level = clamp(val, panel->backlight.min,
/* 6. Enable DBUF */
gen9_dbuf_enable(dev_priv);
+
+ if (resume && dev_priv->csr.dmc_payload)
+ intel_csr_load_program(dev_priv);
}
#undef CNL_PROCMON_IDX
.y2 = qfb->base.height
};
- if (!old_state->fb) {
- qxl_io_log(qdev,
- "create primary fb: %dx%d,%d,%d\n",
- bo->surf.width, bo->surf.height,
- bo->surf.stride, bo->surf.format);
+ if (old_state->fb) {
+ qfb_old = to_qxl_framebuffer(old_state->fb);
+ bo_old = gem_to_qxl_bo(qfb_old->obj);
+ } else {
+ bo_old = NULL;
+ }
- qxl_io_create_primary(qdev, 0, bo);
- bo->is_primary = true;
+ if (bo == bo_old)
return;
- } else {
- qfb_old = to_qxl_framebuffer(old_state->fb);
- bo_old = gem_to_qxl_bo(qfb_old->obj);
+ if (bo_old && bo_old->is_primary) {
+ qxl_io_destroy_primary(qdev);
bo_old->is_primary = false;
}
- bo->is_primary = true;
+ if (!bo->is_primary) {
+ qxl_io_create_primary(qdev, 0, bo);
+ bo->is_primary = true;
+ }
qxl_draw_dirty_fb(qdev, qfb, bo, 0, 0, &norect, 1, 1);
}
{
struct qxl_device *qdev = plane->dev->dev_private;
- if (old_state->fb)
- { struct qxl_framebuffer *qfb =
+ if (old_state->fb) {
+ struct qxl_framebuffer *qfb =
to_qxl_framebuffer(old_state->fb);
struct qxl_bo *bo = gem_to_qxl_bo(qfb->obj);
- qxl_io_destroy_primary(qdev);
- bo->is_primary = false;
+ if (bo->is_primary) {
+ qxl_io_destroy_primary(qdev);
+ bo->is_primary = false;
+ }
}
}
struct drm_gem_object *obj;
struct qxl_bo *user_bo;
- if (!plane->state->fb) {
- /* we never executed prepare_fb, so there's nothing to
+ if (!old_state->fb) {
+ /*
+ * we never executed prepare_fb, so there's nothing to
* unpin.
*/
return;
}
- obj = to_qxl_framebuffer(plane->state->fb)->obj;
+ obj = to_qxl_framebuffer(old_state->fb)->obj;
user_bo = gem_to_qxl_bo(obj);
qxl_bo_unpin(user_bo);
}
radeon_agp_suspend(rdev);
pci_save_state(dev->pdev);
- if (freeze && rdev->family >= CHIP_CEDAR) {
+ if (freeze && rdev->family >= CHIP_CEDAR && !(rdev->flags & RADEON_IS_IGP)) {
rdev->asic->asic_reset(rdev, true);
pci_restore_state(dev->pdev);
} else if (suspend) {
bool "Allwinner A10 HDMI CEC Support"
depends on DRM_SUN4I_HDMI
select CEC_CORE
- depends on CEC_PIN
+ select CEC_PIN
help
Choose this option if you have an Allwinner SoC with an HDMI
controller and want to use CEC.
#include <drm/drm_connector.h>
#include <drm/drm_encoder.h>
-#include <media/cec.h>
+#include <media/cec-pin.h>
#define SUN4I_HDMI_CTRL_REG 0x004
#define SUN4I_HDMI_CTRL_ENABLE BIT(31)
hdmi->mod_clk = devm_clk_get(dev, "mod");
if (IS_ERR(hdmi->mod_clk)) {
dev_err(dev, "Couldn't get the HDMI mod clock\n");
- return PTR_ERR(hdmi->mod_clk);
+ ret = PTR_ERR(hdmi->mod_clk);
+ goto err_disable_bus_clk;
}
clk_prepare_enable(hdmi->mod_clk);
hdmi->pll0_clk = devm_clk_get(dev, "pll-0");
if (IS_ERR(hdmi->pll0_clk)) {
dev_err(dev, "Couldn't get the HDMI PLL 0 clock\n");
- return PTR_ERR(hdmi->pll0_clk);
+ ret = PTR_ERR(hdmi->pll0_clk);
+ goto err_disable_mod_clk;
}
hdmi->pll1_clk = devm_clk_get(dev, "pll-1");
if (IS_ERR(hdmi->pll1_clk)) {
dev_err(dev, "Couldn't get the HDMI PLL 1 clock\n");
- return PTR_ERR(hdmi->pll1_clk);
+ ret = PTR_ERR(hdmi->pll1_clk);
+ goto err_disable_mod_clk;
}
ret = sun4i_tmds_create(hdmi);
if (ret) {
dev_err(dev, "Couldn't create the TMDS clock\n");
- return ret;
+ goto err_disable_mod_clk;
}
writel(SUN4I_HDMI_CTRL_ENABLE, hdmi->base + SUN4I_HDMI_CTRL_REG);
ret = sun4i_hdmi_i2c_create(dev, hdmi);
if (ret) {
dev_err(dev, "Couldn't create the HDMI I2C adapter\n");
- return ret;
+ goto err_disable_mod_clk;
}
drm_encoder_helper_add(&hdmi->encoder,
drm_encoder_cleanup(&hdmi->encoder);
err_del_i2c_adapter:
i2c_del_adapter(hdmi->i2c);
+err_disable_mod_clk:
+ clk_disable_unprepare(hdmi->mod_clk);
+err_disable_bus_clk:
+ clk_disable_unprepare(hdmi->bus_clk);
return ret;
}
drm_connector_cleanup(&hdmi->connector);
drm_encoder_cleanup(&hdmi->encoder);
i2c_del_adapter(hdmi->i2c);
+ clk_disable_unprepare(hdmi->mod_clk);
+ clk_disable_unprepare(hdmi->bus_clk);
}
static const struct component_ops sun4i_hdmi_ops = {
/* This part must be outside protection */
#undef TRACE_INCLUDE_PATH
-#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_PATH ../../drivers/gpu/drm/tegra
#define TRACE_INCLUDE_FILE trace
#include <trace/define_trace.h>
#define USB_VENDOR_ID_IDEACOM 0x1cb6
#define USB_DEVICE_ID_IDEACOM_IDC6650 0x6650
#define USB_DEVICE_ID_IDEACOM_IDC6651 0x6651
+#define USB_DEVICE_ID_IDEACOM_IDC6680 0x6680
#define USB_VENDOR_ID_ILITEK 0x222a
#define USB_DEVICE_ID_ILITEK_MULTITOUCH 0x0001
#define USB_DEVICE_ID_LENOVO_CBTKBD 0x6048
#define USB_DEVICE_ID_LENOVO_TPPRODOCK 0x6067
#define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085
+#define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3
#define USB_VENDOR_ID_LG 0x1fd2
#define USB_DEVICE_ID_LG_MULTITOUCH 0x0064
field->application != HID_DG_PEN &&
field->application != HID_DG_TOUCHPAD &&
field->application != HID_GD_KEYBOARD &&
+ field->application != HID_GD_SYSTEM_CONTROL &&
field->application != HID_CP_CONSUMER_CONTROL &&
field->application != HID_GD_WIRELESS_RADIO_CTLS &&
!(field->application == HID_VD_ASUS_CUSTOM_MEDIA_KEYS &&
USB_VENDOR_ID_ALPS_JP,
HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP) },
+ /* Lenovo X1 TAB Gen 2 */
+ { .driver_data = MT_CLS_WIN_8_DUAL,
+ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
+ USB_VENDOR_ID_LENOVO,
+ USB_DEVICE_ID_LENOVO_X1_TAB) },
+
/* Anton devices */
{ .driver_data = MT_CLS_EXPORT_ALL_INPUTS,
MT_USB_DEVICE(USB_VENDOR_ID_ANTON,
if (!(data->device_flags & RMI_DEVICE))
return 0;
- ret = rmi_reset_attn_mode(hdev);
+ /* Make sure the HID device is ready to receive events */
+ ret = hid_hw_open(hdev);
if (ret)
return ret;
+ ret = rmi_reset_attn_mode(hdev);
+ if (ret)
+ goto out;
+
ret = rmi_driver_resume(rmi_dev, false);
if (ret) {
hid_warn(hdev, "Failed to resume device: %d\n", ret);
- return ret;
+ goto out;
}
- return 0;
+out:
+ hid_hw_close(hdev);
+ return ret;
}
#endif /* CONFIG_PM */
kfree(hidraw);
} else {
/* close device for last reader */
- hid_hw_power(hidraw->hid, PM_HINT_NORMAL);
hid_hw_close(hidraw->hid);
+ hid_hw_power(hidraw->hid, PM_HINT_NORMAL);
}
}
}
{
/* the worst case is computed from the set_report command with a
* reportID > 15 and the maximum report length */
- int args_len = sizeof(__u8) + /* optional ReportID byte */
+ int args_len = sizeof(__u8) + /* ReportID */
+ sizeof(__u8) + /* optional ReportID byte */
sizeof(__u16) + /* data register */
sizeof(__u16) + /* size of the report */
report_size; /* report */
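	/*
	 * Illustrative worst case: a 32-byte report with report ID 16 needs
	 * 1 (report ID) + 1 (extended ID byte, used when the ID is > 15) +
	 * 2 (data register) + 2 (report size) + 32 = 38 bytes here.
	 */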
{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_IDEACOM, USB_DEVICE_ID_IDEACOM_IDC6680, HID_QUIRK_MULTI_INPUT },
{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C007, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_KEYBOARD_G710_PLUS, HID_QUIRK_NOGET },
/* Try to find an already-probed interface from the same device */
list_for_each_entry(data, &wacom_udev_list, list) {
- if (compare_device_paths(hdev, data->dev, '/'))
+ if (compare_device_paths(hdev, data->dev, '/')) {
+ kref_get(&data->kref);
return data;
+ }
}
/* Fallback to finding devices that appear to be "siblings" */
if (!wacom->led.groups)
return -ENOTSUPP;
+ if (wacom->wacom_wac.features.type == REMOTE)
+ return -ENOTSUPP;
+
if (wacom->wacom_wac.pid) { /* wireless connected */
report_id = WAC_CMD_WL_LED_CONTROL;
buf_size = 13;
keys = data[9] & 0x07;
}
} else {
- buttons = ((data[6] & 0x10) << 10) |
- ((data[5] & 0x10) << 9) |
+ buttons = ((data[6] & 0x10) << 5) |
+ ((data[5] & 0x10) << 4) |
((data[6] & 0x0F) << 4) |
(data[5] & 0x0F);
}
continue;
if (range) {
+ /* Fix rotation alignment: userspace expects zero at left */
+ int16_t rotation = (int16_t)get_unaligned_le16(&frame[9]);
+ rotation += 1800/4;
+ if (rotation > 899)
+ rotation -= 1800;
+
input_report_abs(pen_input, ABS_X, get_unaligned_le16(&frame[1]));
input_report_abs(pen_input, ABS_Y, get_unaligned_le16(&frame[3]));
- input_report_abs(pen_input, ABS_TILT_X, frame[7]);
- input_report_abs(pen_input, ABS_TILT_Y, frame[8]);
- input_report_abs(pen_input, ABS_Z, get_unaligned_le16(&frame[9]));
+ input_report_abs(pen_input, ABS_TILT_X, (char)frame[7]);
+ input_report_abs(pen_input, ABS_TILT_Y, (char)frame[8]);
+ input_report_abs(pen_input, ABS_Z, rotation);
input_report_abs(pen_input, ABS_WHEEL, get_unaligned_le16(&frame[11]));
}
input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5]));
unsigned char *data = wacom->data;
int buttons = (data[282] << 1) | ((data[281] >> 6) & 0x01);
- int ring = data[285];
- int prox = buttons | (ring & 0x80);
+ int ring = data[285] & 0x7F;
+ bool ringstatus = data[285] & 0x80;
+ bool prox = buttons || ringstatus;
+
+ /* Fix touchring data: userspace expects 0 at left and increasing clockwise */
+ ring = 71 - ring;
+ ring += 3*72/16;
+ if (ring > 71)
+ ring -= 72;
wacom_report_numbered_buttons(pad_input, 9, buttons);
- input_report_abs(pad_input, ABS_WHEEL, (ring & 0x80) ? (ring & 0x7f) : 0);
+ input_report_abs(pad_input, ABS_WHEEL, ringstatus ? ring : 0);
input_report_key(pad_input, wacom->tool[1], prox ? 1 : 0);
input_report_abs(pad_input, ABS_MISC, prox ? PAD_DEVICE_ID : 0);
return 0;
}
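Worked arithmetic for the remap above (illustrative only): a raw ring value of 0 mirrors to 71 - 0 = 71, shifts by 3*72/16 = 13 to 84, and wraps past 71 to 84 - 72 = 12; every raw value lands back in 0..71 with the direction reversed and the zero point rotated as the comment describes.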
+static int wacom_offset_rotation(struct input_dev *input, struct hid_usage *usage,
+ int value, int num, int denom)
+{
+ struct input_absinfo *abs = &input->absinfo[usage->code];
+ int range = (abs->maximum - abs->minimum + 1);
+
+ value += num*range/denom;
+ if (value > abs->maximum)
+ value -= range;
+ else if (value < abs->minimum)
+ value += range;
+ return value;
+}
+
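A hedged usage sketch of the helper above, assuming a hypothetical axis with absinfo minimum 0 and maximum 1799 (range 1800), and with input and usage in scope as in the callers below:

	/* Rotate a reading a quarter turn, wrapping within the axis range. */
	int v = wacom_offset_rotation(input, usage, 1500, 1, 4);
	/* 1500 + 1*1800/4 = 1950 > 1799, so v wraps to 1950 - 1800 = 150. */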
int wacom_equivalent_usage(int usage)
{
if ((usage & HID_USAGE_PAGE) == WACOM_HID_UP_WACOMDIGITIZER) {
unsigned equivalent_usage = wacom_equivalent_usage(usage->hid);
int i;
bool is_touch_on = value;
+ bool do_report = false;
/*
* Avoid reporting this event and setting inrange_state if this usage
}
switch (equivalent_usage) {
+ case WACOM_HID_WD_TOUCHRING:
+ /*
+ * Userspace expects touchrings to increase in value with
+ * clockwise gestures and have their zero point at the
+ * tablet's left. HID events "should" be clockwise-
+ * increasing and zero at top, though the MobileStudio
+ * Pro and 2nd-gen Intuos Pro don't do this...
+ */
+ if (hdev->vendor == 0x56a &&
+ (hdev->product == 0x34d || hdev->product == 0x34e || /* MobileStudio Pro */
+ hdev->product == 0x357 || hdev->product == 0x358)) { /* Intuos Pro 2 */
+ value = (field->logical_maximum - value);
+
+ if (hdev->product == 0x357 || hdev->product == 0x358)
+ value = wacom_offset_rotation(input, usage, value, 3, 16);
+ else if (hdev->product == 0x34d || hdev->product == 0x34e)
+ value = wacom_offset_rotation(input, usage, value, 1, 2);
+ } else {
+ value = wacom_offset_rotation(input, usage, value, 1, 4);
+ }
+ do_report = true;
+ break;
case WACOM_HID_WD_TOUCHRINGSTATUS:
if (!value)
input_event(input, usage->type, usage->code, 0);
value, i);
/* fall through */
default:
+ do_report = true;
+ break;
+ }
+
+ if (do_report) {
input_event(input, usage->type, usage->code, value);
if (value)
wacom_wac->hid_data.pad_input_event_flag = true;
- break;
}
}
wacom_wac->hid_data.tipswitch |= value;
return;
case HID_DG_TOOLSERIALNUMBER:
- wacom_wac->serial[0] = (wacom_wac->serial[0] & ~0xFFFFFFFFULL);
- wacom_wac->serial[0] |= (__u32)value;
+ if (value) {
+ wacom_wac->serial[0] = (wacom_wac->serial[0] & ~0xFFFFFFFFULL);
+ wacom_wac->serial[0] |= (__u32)value;
+ }
return;
+ case HID_DG_TWIST:
+ /*
+ * Userspace expects pen twist to have its zero point when
+ * the buttons/finger are toward the tablet's left. HID
+ * values are zero when the buttons are toward the top.
+ */
+ value = wacom_offset_rotation(input, usage, value, 1, 4);
+ break;
case WACOM_HID_WD_SENSE:
wacom_wac->hid_data.sense_state = value;
return;
case WACOM_HID_WD_SERIALHI:
- wacom_wac->serial[0] = (wacom_wac->serial[0] & 0xFFFFFFFF);
- wacom_wac->serial[0] |= ((__u64)value) << 32;
- /*
- * Non-USI EMR devices may contain additional tool type
- * information here. See WACOM_HID_WD_TOOLTYPE case for
- * more details.
- */
- if (value >> 20 == 1) {
- wacom_wac->id[0] |= value & 0xFFFFF;
+ if (value) {
+ wacom_wac->serial[0] = (wacom_wac->serial[0] & 0xFFFFFFFF);
+ wacom_wac->serial[0] |= ((__u64)value) << 32;
+ /*
+ * Non-USI EMR devices may contain additional tool type
+ * information here. See WACOM_HID_WD_TOOLTYPE case for
+ * more details.
+ */
+ if (value >> 20 == 1) {
+ wacom_wac->id[0] |= value & 0xFFFFF;
+ }
}
return;
case WACOM_HID_WD_TOOLTYPE:
input_report_key(input, wacom_wac->tool[0], prox);
if (wacom_wac->serial[0]) {
input_event(input, EV_MSC, MSC_SERIAL, wacom_wac->serial[0]);
- input_report_abs(input, ABS_MISC, id);
+ input_report_abs(input, ABS_MISC, prox ? id : 0);
}
wacom_wac->hid_data.tipswitch = false;
if (!prox) {
wacom_wac->tool[0] = 0;
wacom_wac->id[0] = 0;
+ wacom_wac->serial[0] = 0;
}
}
void vmbus_hvsock_device_unregister(struct vmbus_channel *channel)
{
- mutex_lock(&vmbus_connection.channel_mutex);
-
BUG_ON(!is_hvsock_channel(channel));
channel->rescind = true;
vmbus_device_unregister(channel->device_obj);
-
- mutex_unlock(&vmbus_connection.channel_mutex);
}
EXPORT_SYMBOL_GPL(vmbus_hvsock_device_unregister);
out_src = smsg_out;
break;
+ case WRITE_TO_FILE:
+ out_src = fcopy_transaction.fcopy_msg;
+ out_len = sizeof(struct hv_do_fcopy);
+ break;
default:
out_src = fcopy_transaction.fcopy_msg;
out_len = fcopy_transaction.recv_len;
sizeof(struct slimpro_resp_msg) * ASYNC_MSG_FIFO_SIZE,
GFP_KERNEL);
if (rc)
- goto out_mbox_free;
+ return -ENOMEM;
INIT_WORK(&ctx->workq, xgene_hwmon_evt_work);
if (IS_ERR(ctx->mbox_chan)) {
dev_err(&pdev->dev,
"SLIMpro mailbox channel request failed\n");
- return -ENODEV;
+ rc = -ENODEV;
+ goto out_mbox_free;
}
} else {
struct acpi_pcct_hw_reduced *cppc_ss;
if (device_property_read_u32(&pdev->dev, "pcc-channel",
&ctx->mbox_idx)) {
dev_err(&pdev->dev, "no pcc-channel property\n");
- return -ENODEV;
+ rc = -ENODEV;
+ goto out_mbox_free;
}
cl->rx_callback = xgene_hwmon_pcc_rx_cb;
if (IS_ERR(ctx->mbox_chan)) {
dev_err(&pdev->dev,
"PPC channel request failed\n");
- return -ENODEV;
+ rc = -ENODEV;
+ goto out_mbox_free;
}
/*
if (!cppc_ss) {
dev_err(&pdev->dev, "PPC subspace not found\n");
rc = -ENODEV;
- goto out_mbox_free;
+ goto out;
}
if (!ctx->mbox_chan->mbox->txdone_irq) {
dev_err(&pdev->dev, "PCC IRQ not supported\n");
rc = -ENODEV;
- goto out_mbox_free;
+ goto out;
}
/*
} else {
dev_err(&pdev->dev, "Failed to get PCC comm region\n");
rc = -ENODEV;
- goto out_mbox_free;
+ goto out;
}
if (!ctx->pcc_comm_addr) {
dev_err(&pdev->dev,
"Failed to ioremap PCC comm region\n");
rc = -ENOMEM;
- goto out_mbox_free;
+ goto out;
}
/*
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x19e1),
.driver_data = (kernel_ulong_t)0,
},
+ {
+ /* Lewisburg PCH */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa1a6),
+ .driver_data = (kernel_ulong_t)0,
+ },
{
/* Gemini Lake */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x318e),
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x9da6),
.driver_data = (kernel_ulong_t)&intel_th_2x,
},
+ {
+ /* Cedar Fork PCH */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x18e1),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
{ 0 },
};
stm_source_link_drop(src);
- device_destroy(&stm_source_class, src->dev.devt);
+ device_unregister(&src->dev);
}
EXPORT_SYMBOL_GPL(stm_source_unregister_device);
Gemini Lake (SOC)
Cannon Lake-H (PCH)
Cannon Lake-LP (PCH)
+ Cedar Fork (PCH)
This driver can also be built as a module. If so, the module
will be called i2c-i801.
* Gemini Lake (SOC) 0x31d4 32 hard yes yes yes
* Cannon Lake-H (PCH) 0xa323 32 hard yes yes yes
* Cannon Lake-LP (PCH) 0x9da3 32 hard yes yes yes
+ * Cedar Fork (PCH) 0x18df 32 hard yes yes yes
*
* Features supported by this driver:
* Software PEC no
/* Older devices have their ID defined in <linux/pci_ids.h> */
#define PCI_DEVICE_ID_INTEL_BAYTRAIL_SMBUS 0x0f12
+#define PCI_DEVICE_ID_INTEL_CDF_SMBUS 0x18df
#define PCI_DEVICE_ID_INTEL_DNV_SMBUS 0x19df
#define PCI_DEVICE_ID_INTEL_COUGARPOINT_SMBUS 0x1c22
#define PCI_DEVICE_ID_INTEL_PATSBURG_SMBUS 0x1d22
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BRASWELL_SMBUS) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_SMBUS) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_SMBUS) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CDF_SMBUS) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_DNV_SMBUS) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BROXTON_SMBUS) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_LEWISBURG_SMBUS) },
case PCI_DEVICE_ID_INTEL_CANNONLAKE_LP_SMBUS:
case PCI_DEVICE_ID_INTEL_LEWISBURG_SMBUS:
case PCI_DEVICE_ID_INTEL_LEWISBURG_SSKU_SMBUS:
+ case PCI_DEVICE_ID_INTEL_CDF_SMBUS:
case PCI_DEVICE_ID_INTEL_DNV_SMBUS:
case PCI_DEVICE_ID_INTEL_KABYLAKE_PCH_H_SMBUS:
priv->features |= FEATURE_I2C_BLOCK_READ;
};
module_platform_driver(img_scb_i2c_driver);
-MODULE_AUTHOR("James Hogan <james.hogan@imgtec.com>");
+MODULE_AUTHOR("James Hogan <jhogan@kernel.org>");
MODULE_DESCRIPTION("IMG host I2C driver");
MODULE_LICENSE("GPL v2");
static const struct of_device_id sprd_i2c_of_match[] = {
{ .compatible = "sprd,sc9860-i2c", },
+ {},
};
static struct platform_driver sprd_i2c_driver = {
unsigned int msg_num;
unsigned int msg_id;
struct stm32f7_i2c_msg f7_msg;
- struct stm32f7_i2c_setup *setup;
+ struct stm32f7_i2c_setup setup;
struct stm32f7_i2c_timings timing;
};
},
};
-struct stm32f7_i2c_setup stm32f7_setup = {
+static const struct stm32f7_i2c_setup stm32f7_setup = {
.rise_time = STM32F7_I2C_RISE_TIME_DEFAULT,
.fall_time = STM32F7_I2C_FALL_TIME_DEFAULT,
.dnf = STM32F7_I2C_DNF_DEFAULT,
writel_relaxed(timing, i2c_dev->base + STM32F7_I2C_TIMINGR);
/* Enable I2C */
- if (i2c_dev->setup->analog_filter)
+ if (i2c_dev->setup.analog_filter)
stm32f7_i2c_clr_bits(i2c_dev->base + STM32F7_I2C_CR1,
STM32F7_I2C_CR1_ANFOFF);
else
}
setup = of_device_get_match_data(&pdev->dev);
- i2c_dev->setup->rise_time = setup->rise_time;
- i2c_dev->setup->fall_time = setup->fall_time;
- i2c_dev->setup->dnf = setup->dnf;
- i2c_dev->setup->analog_filter = setup->analog_filter;
+ i2c_dev->setup = *setup;
ret = device_property_read_u32(i2c_dev->dev, "i2c-scl-rising-time-ns",
&rise_time);
if (!ret)
- i2c_dev->setup->rise_time = rise_time;
+ i2c_dev->setup.rise_time = rise_time;
ret = device_property_read_u32(i2c_dev->dev, "i2c-scl-falling-time-ns",
&fall_time);
if (!ret)
- i2c_dev->setup->fall_time = fall_time;
+ i2c_dev->setup.fall_time = fall_time;
- ret = stm32f7_i2c_setup_timing(i2c_dev, i2c_dev->setup);
+ ret = stm32f7_i2c_setup_timing(i2c_dev, &i2c_dev->setup);
if (ret)
goto clk_free;
if (hwif_init(hwif) == 0) {
printk(KERN_INFO "%s: failed to initialize IDE "
"interface\n", hwif->name);
+ device_unregister(hwif->portdev);
device_unregister(&hwif->gendev);
ide_disable_port(hwif);
continue;
{
struct list_head *l;
struct pci_driver *d;
+ int ret;
list_for_each(l, &ide_pci_drivers) {
d = list_entry(l, struct pci_driver, node);
const struct pci_device_id *id =
pci_match_id(d->id_table, dev);
- if (id != NULL && d->probe(dev, id) >= 0) {
- dev->driver = d;
- pci_dev_get(dev);
- return 1;
+ if (id != NULL) {
+ pci_assign_irq(dev);
+ ret = d->probe(dev, id);
+ if (ret >= 0) {
+ dev->driver = d;
+ pci_dev_get(dev);
+ return 1;
+ }
}
}
}
/**
* ide_pci_enable - do PCI enables
* @dev: PCI device
+ * @bars: PCI BARs mask
* @d: IDE port info
*
* Enable the IDE PCI device. We attempt to enable the device in full
* Returns zero on success or an error code
*/
-static int ide_pci_enable(struct pci_dev *dev, const struct ide_port_info *d)
+static int ide_pci_enable(struct pci_dev *dev, int bars,
+ const struct ide_port_info *d)
{
- int ret, bars;
+ int ret;
if (pci_enable_device(dev)) {
ret = pci_enable_device_io(dev);
goto out;
}
- if (d->host_flags & IDE_HFLAG_SINGLE)
- bars = (1 << 2) - 1;
- else
- bars = (1 << 4) - 1;
-
- if ((d->host_flags & IDE_HFLAG_NO_DMA) == 0) {
- if (d->host_flags & IDE_HFLAG_CS5520)
- bars |= (1 << 2);
- else
- bars |= (1 << 4);
- }
-
ret = pci_request_selected_regions(dev, bars, d->name);
if (ret < 0)
printk(KERN_ERR "%s %s: can't reserve resources\n",
/**
* ide_setup_pci_controller - set up IDE PCI
* @dev: PCI device
+ * @bars: PCI BARs mask
* @d: IDE port info
* @noisy: verbose flag
*
* and enables it if need be
*/
-static int ide_setup_pci_controller(struct pci_dev *dev,
+static int ide_setup_pci_controller(struct pci_dev *dev, int bars,
const struct ide_port_info *d, int noisy)
{
int ret;
if (noisy)
ide_setup_pci_noise(dev, d);
- ret = ide_pci_enable(dev, d);
+ ret = ide_pci_enable(dev, bars, d);
if (ret < 0)
goto out;
if (ret < 0) {
printk(KERN_ERR "%s %s: error accessing PCI regs\n",
d->name, pci_name(dev));
- goto out;
+ goto out_free_bars;
}
if (!(pcicmd & PCI_COMMAND_IO)) { /* is device disabled? */
ret = ide_pci_configure(dev, d);
if (ret < 0)
- goto out;
+ goto out_free_bars;
printk(KERN_INFO "%s %s: device enabled (Linux)\n",
d->name, pci_name(dev));
}
+ goto out;
+
+out_free_bars:
+ pci_release_selected_regions(dev, bars);
out:
return ret;
}
{
struct pci_dev *pdev[] = { dev1, dev2 };
struct ide_host *host;
- int ret, i, n_ports = dev2 ? 4 : 2;
+ int ret, i, n_ports = dev2 ? 4 : 2, bars;
struct ide_hw hw[4], *hws[] = { NULL, NULL, NULL, NULL };
+ if (d->host_flags & IDE_HFLAG_SINGLE)
+ bars = (1 << 2) - 1;
+ else
+ bars = (1 << 4) - 1;
+
+ if ((d->host_flags & IDE_HFLAG_NO_DMA) == 0) {
+ if (d->host_flags & IDE_HFLAG_CS5520)
+ bars |= (1 << 2);
+ else
+ bars |= (1 << 4);
+ }
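	/*
	 * Illustrative masks produced above: a single-channel controller
	 * with DMA yields bars = 0x03 | 0x10 = 0x13 (BARs 0-1 plus the
	 * bus-master BAR 4); a dual-channel one yields 0x0f | 0x10 = 0x1f;
	 * the CS5520 branch sets bit 2 instead of bit 4.
	 */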
+
for (i = 0; i < n_ports / 2; i++) {
- ret = ide_setup_pci_controller(pdev[i], d, !i);
- if (ret < 0)
+ ret = ide_setup_pci_controller(pdev[i], bars, d, !i);
+ if (ret < 0) {
+ if (i == 1)
+ pci_release_selected_regions(pdev[0], bars);
goto out;
+ }
ide_pci_setup_ports(pdev[i], d, &hw[i*2], &hws[i*2]);
}
host = ide_host_alloc(d, hws, n_ports);
if (host == NULL) {
ret = -ENOMEM;
- goto out;
+ goto out_free_bars;
}
host->dev[0] = &dev1->dev;
* do_ide_setup_pci_device() on the first device!
*/
if (ret < 0)
- goto out;
+ goto out_free_bars;
/* fixup IRQ */
if (ide_pci_is_in_compatibility_mode(pdev[i])) {
ret = ide_host_register(host, d, hws);
if (ret)
ide_host_free(host);
+ else
+ goto out;
+
+out_free_bars:
+ i = n_ports / 2;
+ while (i--)
+ pci_release_selected_regions(pdev[i], bars);
out:
return ret;
}
unsigned int vref_mv)
{
struct ad7793_state *st = iio_priv(indio_dev);
- int i, ret = -1;
+ int i, ret;
unsigned long long scale_uv;
u32 id;
return ret;
/* reset the serial interface */
- ret = spi_write(st->sd.spi, (u8 *)&ret, sizeof(ret));
+ ret = ad_sd_reset(&st->sd, 32);
if (ret < 0)
goto out;
usleep_range(500, 2000); /* Wait for at least 500us */
}
EXPORT_SYMBOL_GPL(ad_sd_read_reg);
+/**
+ * ad_sd_reset() - Reset the serial interface
+ *
+ * @sigma_delta: The sigma delta device
+ * @reset_length: Number of SCLKs with DIN = 1
+ *
+ * Returns 0 on success, an error code otherwise.
+ **/
+int ad_sd_reset(struct ad_sigma_delta *sigma_delta,
+ unsigned int reset_length)
+{
+ uint8_t *buf;
+ unsigned int size;
+ int ret;
+
+ size = DIV_ROUND_UP(reset_length, 8);
+ buf = kcalloc(size, sizeof(*buf), GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ memset(buf, 0xff, size);
+ ret = spi_write(sigma_delta->spi, buf, size);
+ kfree(buf);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ad_sd_reset);
+
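For example, the ad7793 call site earlier issues ad_sd_reset(&st->sd, 32); with reset_length = 32 the helper sends DIV_ROUND_UP(32, 8) = 4 bytes of 0xff, i.e. 32 SCLK cycles with DIN held high.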
static int ad_sd_calibrate(struct ad_sigma_delta *sigma_delta,
unsigned int mode, unsigned int channel)
{
* MCP3204
* MCP3208
* ------------
+ * 13 bit converter
+ * MCP3301
*
* Datasheet can be found here:
* http://ww1.microchip.com/downloads/en/DeviceDoc/21293C.pdf mcp3001
}
static int mcp320x_adc_conversion(struct mcp320x *adc, u8 channel,
- bool differential, int device_index)
+ bool differential, int device_index, int *val)
{
int ret;
switch (device_index) {
case mcp3001:
- return (adc->rx_buf[0] << 5 | adc->rx_buf[1] >> 3);
+ *val = (adc->rx_buf[0] << 5 | adc->rx_buf[1] >> 3);
+ return 0;
case mcp3002:
case mcp3004:
case mcp3008:
- return (adc->rx_buf[0] << 2 | adc->rx_buf[1] >> 6);
+ *val = (adc->rx_buf[0] << 2 | adc->rx_buf[1] >> 6);
+ return 0;
case mcp3201:
- return (adc->rx_buf[0] << 7 | adc->rx_buf[1] >> 1);
+ *val = (adc->rx_buf[0] << 7 | adc->rx_buf[1] >> 1);
+ return 0;
case mcp3202:
case mcp3204:
case mcp3208:
- return (adc->rx_buf[0] << 4 | adc->rx_buf[1] >> 4);
+ *val = (adc->rx_buf[0] << 4 | adc->rx_buf[1] >> 4);
+ return 0;
case mcp3301:
- return sign_extend32((adc->rx_buf[0] & 0x1f) << 8 | adc->rx_buf[1], 12);
+ *val = sign_extend32((adc->rx_buf[0] & 0x1f) << 8
+ | adc->rx_buf[1], 12);
+ return 0;
default:
return -EINVAL;
}
switch (mask) {
case IIO_CHAN_INFO_RAW:
ret = mcp320x_adc_conversion(adc, channel->address,
- channel->differential, device_index);
-
+ channel->differential, device_index, val);
if (ret < 0)
goto out;
- *val = ret;
ret = IIO_VAL_INT;
break;
indio_dev->name = spi_get_device_id(spi)->name;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &mcp320x_info;
+ spi_set_drvdata(spi, indio_dev);
chip_info = &mcp320x_chip_infos[spi_get_device_id(spi)->driver_data];
indio_dev->channels = chip_info->channels;
num_channels = of_property_count_u32_elems(node, "st,adc-channels");
if (num_channels < 0 ||
- num_channels >= adc_info->max_channels) {
+ num_channels > adc_info->max_channels) {
dev_err(&indio_dev->dev, "Bad st,adc-channels?\n");
return num_channels < 0 ? num_channels : -EINVAL;
}
#define ADS1015_CFG_COMP_QUE_MASK GENMASK(1, 0)
#define ADS1015_CFG_COMP_LAT_MASK BIT(2)
-#define ADS1015_CFG_COMP_POL_MASK BIT(2)
+#define ADS1015_CFG_COMP_POL_MASK BIT(3)
#define ADS1015_CFG_COMP_MODE_MASK BIT(4)
#define ADS1015_CFG_DR_MASK GENMASK(7, 5)
#define ADS1015_CFG_MOD_MASK BIT(8)
switch (irq_trig) {
case IRQF_TRIGGER_LOW:
- cfg_comp |= ADS1015_CFG_COMP_POL_LOW;
+ cfg_comp |= ADS1015_CFG_COMP_POL_LOW <<
+ ADS1015_CFG_COMP_POL_SHIFT;
break;
case IRQF_TRIGGER_HIGH:
- cfg_comp |= ADS1015_CFG_COMP_POL_HIGH;
+ cfg_comp |= ADS1015_CFG_COMP_POL_HIGH <<
+ ADS1015_CFG_COMP_POL_SHIFT;
break;
default:
return -EINVAL;
/* Enable 3v1 bias regulator for MADC[3:6] */
madc->usb3v1 = devm_regulator_get(madc->dev, "vusb3v1");
- if (IS_ERR(madc->usb3v1))
- return -ENODEV;
+ if (IS_ERR(madc->usb3v1)) {
+ ret = -ENODEV;
+ goto err_i2c;
+ }
ret = regulator_enable(madc->usb3v1);
- if (ret)
+ if (ret) {
dev_err(madc->dev, "could not enable 3v1 bias regulator\n");
+ goto err_i2c;
+ }
ret = iio_device_register(iio_dev);
if (ret) {
dev_err(&pdev->dev, "could not register iio device\n");
- goto err_i2c;
+ goto err_usb3v1;
}
return 0;
+err_usb3v1:
+ regulator_disable(madc->usb3v1);
err_i2c:
twl4030_madc_set_current_generator(madc, 0, 0);
err_current_generator:
u8 drdy_mask;
struct st_sensor_data *sdata = iio_priv(indio_dev);
- if (!sdata->sensor_settings->drdy_irq.addr)
+ if (!sdata->sensor_settings->drdy_irq.addr) {
+ /*
+ * there are some devices (e.g. LIS3MDL) where the drdy line is
+ * routed to a given pin and it is not possible to select a
+ * different one. Take the irq status register into account to
+ * determine whether the irq trigger can be properly supported
+ */
+ if (sdata->sensor_settings->drdy_irq.addr_stat_drdy)
+ sdata->hw_irq_trigger = enable;
return 0;
+ }
/* Enable/Disable the interrupt generator 1. */
if (sdata->sensor_settings->drdy_irq.ig1.en_addr > 0) {
ret = indio_dev->info->debugfs_reg_access(indio_dev,
indio_dev->cached_reg_addr,
0, &val);
- if (ret)
+ if (ret) {
dev_err(indio_dev->dev.parent, "%s: read failed\n", __func__);
+ return ret;
+ }
len = snprintf(buf, sizeof(buf), "0x%X\n", val);
},
},
},
+ .drdy_irq = {
+ /* drdy line is routed to the drdy pin */
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
+ },
.multi_read_bit = true,
.bootime = 2,
},
u8 osrs = BMP280_OSRS_TEMP_X(data->oversampling_temp + 1) |
BMP280_OSRS_PRESS_X(data->oversampling_press + 1);
- ret = regmap_update_bits(data->regmap, BMP280_REG_CTRL_MEAS,
+ ret = regmap_write_bits(data->regmap, BMP280_REG_CTRL_MEAS,
BMP280_OSRS_TEMP_MASK |
BMP280_OSRS_PRESS_MASK |
BMP280_MODE_MASK,
clk_disable(priv->clk);
/* Stop timer */
+ regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0);
regmap_write(priv->regmap, TIM_PSC, 0);
regmap_write(priv->regmap, TIM_ARR, 0);
if (ret)
return ret;
+ /* TIMx_ARR register shouldn't be buffered (ARPE=0) */
+ regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
regmap_write(priv->regmap, TIM_ARR, preset);
- regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, TIM_CR1_ARPE);
return len;
}
if (ret)
goto pid_query_error;
+ nlmsg_end(skb, nlh);
+
pr_debug("%s: Multicasting a nlmsg (dev = %s ifname = %s iwpm = %s)\n",
__func__, pm_msg->dev_name, pm_msg->if_name, iwpm_ulib_name);
&pm_msg->loc_addr, IWPM_NLA_MANAGE_ADDR);
if (ret)
goto add_mapping_error;
+
+ nlmsg_end(skb, nlh);
nlmsg_request->req_buffer = pm_msg;
ret = rdma_nl_unicast_wait(skb, iwpm_user_pid);
&pm_msg->rem_addr, IWPM_NLA_QUERY_REMOTE_ADDR);
if (ret)
goto query_mapping_error;
+
+ nlmsg_end(skb, nlh);
nlmsg_request->req_buffer = pm_msg;
ret = rdma_nl_unicast_wait(skb, iwpm_user_pid);
if (ret)
goto remove_mapping_error;
+ nlmsg_end(skb, nlh);
+
ret = rdma_nl_unicast_wait(skb, iwpm_user_pid);
if (ret) {
skb = NULL; /* skb is freed in the netlink send-op handling */
&mapping_num, IWPM_NLA_MAPINFO_SEND_NUM);
if (ret)
goto mapinfo_num_error;
+
+ nlmsg_end(skb, nlh);
+
ret = rdma_nl_unicast(skb, iwpm_pid);
if (ret) {
skb = NULL;
if (ret)
goto send_mapping_info_unlock;
+ nlmsg_end(skb, nlh);
+
iwpm_print_sockaddr(&map_info->local_sockaddr,
"send_mapping_info: Local sockaddr:");
iwpm_print_sockaddr(&map_info->mapped_sockaddr,
atomic_set(&qp->qp_sec->error_list_count, 0);
init_completion(&qp->qp_sec->error_complete);
ret = security_ib_alloc_security(&qp->qp_sec->security);
- if (ret)
+ if (ret) {
kfree(qp->qp_sec);
+ qp->qp_sec = NULL;
+ }
return ret;
}
resp.raw_packet_caps = attr.raw_packet_caps;
resp.response_length += sizeof(resp.raw_packet_caps);
- if (ucore->outlen < resp.response_length + sizeof(resp.xrq_caps))
+ if (ucore->outlen < resp.response_length + sizeof(resp.tm_caps))
goto end;
- resp.xrq_caps.max_rndv_hdr_size = attr.xrq_caps.max_rndv_hdr_size;
- resp.xrq_caps.max_num_tags = attr.xrq_caps.max_num_tags;
- resp.xrq_caps.max_ops = attr.xrq_caps.max_ops;
- resp.xrq_caps.max_sge = attr.xrq_caps.max_sge;
- resp.xrq_caps.flags = attr.xrq_caps.flags;
- resp.response_length += sizeof(resp.xrq_caps);
+ resp.tm_caps.max_rndv_hdr_size = attr.tm_caps.max_rndv_hdr_size;
+ resp.tm_caps.max_num_tags = attr.tm_caps.max_num_tags;
+ resp.tm_caps.max_ops = attr.tm_caps.max_ops;
+ resp.tm_caps.max_sge = attr.tm_caps.max_sge;
+ resp.tm_caps.flags = attr.tm_caps.flags;
+ resp.response_length += sizeof(resp.tm_caps);
end:
err = ib_copy_to_udata(ucore, &resp, resp.response_length);
return err;
*/
if (!ib_query_qp(qp, &attr, IB_QP_STATE | IB_QP_PORT, &init_attr)) {
if (attr.qp_state >= IB_QPS_INIT) {
- if (qp->device->get_link_layer(qp->device, attr.port_num) !=
+ if (rdma_port_get_link_layer(qp->device, attr.port_num) !=
IB_LINK_LAYER_INFINIBAND)
return true;
goto lid_check;
/* Can't get a quick answer, iterate over all ports */
for (port = 0; port < qp->device->phys_port_cnt; port++)
- if (qp->device->get_link_layer(qp->device, port) !=
+ if (rdma_port_get_link_layer(qp->device, port) !=
IB_LINK_LAYER_INFINIBAND)
num_eth_ports++;
struct ib_device ibdev;
struct list_head list;
unsigned long flags;
-#define BNXT_RE_FLAG_NETDEV_REGISTERED 0
-#define BNXT_RE_FLAG_IBDEV_REGISTERED 1
-#define BNXT_RE_FLAG_GOT_MSIX 2
-#define BNXT_RE_FLAG_RCFW_CHANNEL_EN 8
-#define BNXT_RE_FLAG_QOS_WORK_REG 16
+#define BNXT_RE_FLAG_NETDEV_REGISTERED 0
+#define BNXT_RE_FLAG_IBDEV_REGISTERED 1
+#define BNXT_RE_FLAG_GOT_MSIX 2
+#define BNXT_RE_FLAG_HAVE_L2_REF 3
+#define BNXT_RE_FLAG_RCFW_CHANNEL_EN 4
+#define BNXT_RE_FLAG_QOS_WORK_REG 5
+#define BNXT_RE_FLAG_TASK_IN_PROG 6
struct net_device *netdev;
unsigned int version, major, minor;
struct bnxt_en_dev *en_dev;
struct delayed_work worker;
u8 cur_prio_map;
+ u8 active_speed;
+ u8 active_width;
/* FP Notification Queue (CQ & SRQ) */
struct tasklet_struct nq_task;
port_attr->sm_sl = 0;
port_attr->subnet_timeout = 0;
port_attr->init_type_reply = 0;
- /* call the underlying netdev's ethtool hooks to query speed settings
- * for which we acquire rtnl_lock _only_ if it's registered with
- * IB stack to avoid race in the NETDEV_UNREG path
- */
- if (test_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags))
- if (ib_get_eth_speed(ibdev, port_num, &port_attr->active_speed,
- &port_attr->active_width))
- return -EINVAL;
+ port_attr->active_speed = rdev->active_speed;
+ port_attr->active_width = rdev->active_width;
+
return 0;
}
struct bnxt_re_gid_ctx *ctx, **ctx_tbl;
struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl;
+ struct bnxt_qplib_gid *gid_to_del;
/* Delete the entry from the hardware */
ctx = *context;
if (sgid_tbl && sgid_tbl->active) {
if (ctx->idx >= sgid_tbl->max)
return -EINVAL;
+ gid_to_del = &sgid_tbl->tbl[ctx->idx];
+ /* DEL_GID is called in WQ context (netdevice_event_work_handler)
+ * or via the ib_unregister_device path. In the former case QP1
+ * may not be destroyed yet, in which case just return as FW
+ * needs that entry to be present and will fail its deletion.
+ * We could get invoked again after QP1 is destroyed OR get an
+ * ADD_GID call with a different GID value for the same index
+ * where we issue MODIFY_GID cmd to update the GID entry -- TBD
+ */
+ if (ctx->idx == 0 &&
+ rdma_link_local_addr((struct in6_addr *)gid_to_del) &&
+ ctx->refcnt == 1 && rdev->qp1_sqp) {
+ dev_dbg(rdev_to_dev(rdev),
+ "Trying to delete GID0 while QP1 is alive\n");
+ return -EFAULT;
+ }
ctx->refcnt--;
if (!ctx->refcnt) {
- rc = bnxt_qplib_del_sgid(sgid_tbl,
- &sgid_tbl->tbl[ctx->idx],
- true);
+ rc = bnxt_qplib_del_sgid(sgid_tbl, gid_to_del, true);
if (rc) {
dev_err(rdev_to_dev(rdev),
"Failed to remove GID: %#x", rc);
kfree(rdev->sqp_ah);
kfree(rdev->qp1_sqp);
+ rdev->qp1_sqp = NULL;
+ rdev->sqp_ah = NULL;
}
if (!IS_ERR_OR_NULL(qp->rumem))
qp->qplib_qp.modify_flags |=
CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu);
+ qp->qplib_qp.mtu = ib_mtu_enum_to_int(qp_attr->path_mtu);
} else if (qp_attr->qp_state == IB_QPS_RTR) {
qp->qplib_qp.modify_flags |=
CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
qp->qplib_qp.path_mtu =
__from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu));
+ qp->qplib_qp.mtu =
+ ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
}
if (qp_attr_mask & IB_QP_TIMEOUT) {
{
struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
struct bnxt_re_dev *rdev = qp->rdev;
- struct bnxt_qplib_qp qplib_qp;
+ struct bnxt_qplib_qp *qplib_qp;
int rc;
- memset(&qplib_qp, 0, sizeof(struct bnxt_qplib_qp));
- qplib_qp.id = qp->qplib_qp.id;
- qplib_qp.ah.host_sgid_index = qp->qplib_qp.ah.host_sgid_index;
+ qplib_qp = kzalloc(sizeof(*qplib_qp), GFP_KERNEL);
+ if (!qplib_qp)
+ return -ENOMEM;
+
+ qplib_qp->id = qp->qplib_qp.id;
+ qplib_qp->ah.host_sgid_index = qp->qplib_qp.ah.host_sgid_index;
- rc = bnxt_qplib_query_qp(&rdev->qplib_res, &qplib_qp);
+ rc = bnxt_qplib_query_qp(&rdev->qplib_res, qplib_qp);
if (rc) {
dev_err(rdev_to_dev(rdev), "Failed to query HW QP");
- return rc;
+ goto out;
}
- qp_attr->qp_state = __to_ib_qp_state(qplib_qp.state);
- qp_attr->en_sqd_async_notify = qplib_qp.en_sqd_async_notify ? 1 : 0;
- qp_attr->qp_access_flags = __to_ib_access_flags(qplib_qp.access);
- qp_attr->pkey_index = qplib_qp.pkey_index;
- qp_attr->qkey = qplib_qp.qkey;
+ qp_attr->qp_state = __to_ib_qp_state(qplib_qp->state);
+ qp_attr->en_sqd_async_notify = qplib_qp->en_sqd_async_notify ? 1 : 0;
+ qp_attr->qp_access_flags = __to_ib_access_flags(qplib_qp->access);
+ qp_attr->pkey_index = qplib_qp->pkey_index;
+ qp_attr->qkey = qplib_qp->qkey;
qp_attr->ah_attr.type = RDMA_AH_ATTR_TYPE_ROCE;
- rdma_ah_set_grh(&qp_attr->ah_attr, NULL, qplib_qp.ah.flow_label,
- qplib_qp.ah.host_sgid_index,
- qplib_qp.ah.hop_limit,
- qplib_qp.ah.traffic_class);
- rdma_ah_set_dgid_raw(&qp_attr->ah_attr, qplib_qp.ah.dgid.data);
- rdma_ah_set_sl(&qp_attr->ah_attr, qplib_qp.ah.sl);
- ether_addr_copy(qp_attr->ah_attr.roce.dmac, qplib_qp.ah.dmac);
- qp_attr->path_mtu = __to_ib_mtu(qplib_qp.path_mtu);
- qp_attr->timeout = qplib_qp.timeout;
- qp_attr->retry_cnt = qplib_qp.retry_cnt;
- qp_attr->rnr_retry = qplib_qp.rnr_retry;
- qp_attr->min_rnr_timer = qplib_qp.min_rnr_timer;
- qp_attr->rq_psn = qplib_qp.rq.psn;
- qp_attr->max_rd_atomic = qplib_qp.max_rd_atomic;
- qp_attr->sq_psn = qplib_qp.sq.psn;
- qp_attr->max_dest_rd_atomic = qplib_qp.max_dest_rd_atomic;
- qp_init_attr->sq_sig_type = qplib_qp.sig_type ? IB_SIGNAL_ALL_WR :
- IB_SIGNAL_REQ_WR;
- qp_attr->dest_qp_num = qplib_qp.dest_qpn;
+ rdma_ah_set_grh(&qp_attr->ah_attr, NULL, qplib_qp->ah.flow_label,
+ qplib_qp->ah.host_sgid_index,
+ qplib_qp->ah.hop_limit,
+ qplib_qp->ah.traffic_class);
+ rdma_ah_set_dgid_raw(&qp_attr->ah_attr, qplib_qp->ah.dgid.data);
+ rdma_ah_set_sl(&qp_attr->ah_attr, qplib_qp->ah.sl);
+ ether_addr_copy(qp_attr->ah_attr.roce.dmac, qplib_qp->ah.dmac);
+ qp_attr->path_mtu = __to_ib_mtu(qplib_qp->path_mtu);
+ qp_attr->timeout = qplib_qp->timeout;
+ qp_attr->retry_cnt = qplib_qp->retry_cnt;
+ qp_attr->rnr_retry = qplib_qp->rnr_retry;
+ qp_attr->min_rnr_timer = qplib_qp->min_rnr_timer;
+ qp_attr->rq_psn = qplib_qp->rq.psn;
+ qp_attr->max_rd_atomic = qplib_qp->max_rd_atomic;
+ qp_attr->sq_psn = qplib_qp->sq.psn;
+ qp_attr->max_dest_rd_atomic = qplib_qp->max_dest_rd_atomic;
+ qp_init_attr->sq_sig_type = qplib_qp->sig_type ? IB_SIGNAL_ALL_WR :
+ IB_SIGNAL_REQ_WR;
+ qp_attr->dest_qp_num = qplib_qp->dest_qpn;
qp_attr->cap.max_send_wr = qp->qplib_qp.sq.max_wqe;
qp_attr->cap.max_send_sge = qp->qplib_qp.sq.max_sge;
qp_attr->cap.max_inline_data = qp->qplib_qp.max_inline_data;
qp_init_attr->cap = qp_attr->cap;
- return 0;
+out:
+ kfree(qplib_qp);
+ return rc;
}
/* Routine for sending QP1 packets for RoCE V1 and V2
switch (wr->opcode) {
case IB_WR_ATOMIC_CMP_AND_SWP:
wqe->type = BNXT_QPLIB_SWQE_TYPE_ATOMIC_CMP_AND_SWP;
+ wqe->atomic.cmp_data = atomic_wr(wr)->compare_add;
wqe->atomic.swap_data = atomic_wr(wr)->swap;
break;
case IB_WR_ATOMIC_FETCH_AND_ADD:
return rc;
}
- if (mr->npages && mr->pages) {
+ if (mr->pages) {
rc = bnxt_qplib_free_fast_reg_page_list(&rdev->qplib_res,
&mr->qplib_frpl);
kfree(mr->pages);
}
}
set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+ ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed,
+ &rdev->active_width);
bnxt_re_dispatch_event(&rdev->ibdev, NULL, 1, IB_EVENT_PORT_ACTIVE);
bnxt_re_dispatch_event(&rdev->ibdev, NULL, 1, IB_EVENT_GID_CHANGE);
else if (netif_carrier_ok(rdev->netdev))
bnxt_re_dispatch_event(&rdev->ibdev, NULL, 1,
IB_EVENT_PORT_ACTIVE);
+ ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed,
+ &rdev->active_width);
break;
default:
break;
}
+ smp_mb__before_atomic();
+ clear_bit(BNXT_RE_FLAG_TASK_IN_PROG, &rdev->flags);
kfree(re_work);
}
break;
case NETDEV_UNREGISTER:
+ /* netdev notifier will call NETDEV_UNREGISTER again later since
+ * we are still holding the reference to the netdev
+ */
+ if (test_bit(BNXT_RE_FLAG_TASK_IN_PROG, &rdev->flags))
+ goto exit;
bnxt_re_ib_unreg(rdev, false);
bnxt_re_remove_one(rdev);
bnxt_re_dev_unreg(rdev);
re_work->vlan_dev = (real_dev == netdev ?
NULL : netdev);
INIT_WORK(&re_work->work, bnxt_re_task);
+ set_bit(BNXT_RE_FLAG_TASK_IN_PROG, &rdev->flags);
queue_work(bnxt_re_wq, &re_work->work);
}
}
static void __exit bnxt_re_mod_exit(void)
{
+ struct bnxt_re_dev *rdev;
+ LIST_HEAD(to_be_deleted);
+
+ mutex_lock(&bnxt_re_dev_lock);
+ /* Free all adapter allocated resources */
+ if (!list_empty(&bnxt_re_dev_list))
+ list_splice_init(&bnxt_re_dev_list, &to_be_deleted);
+ mutex_unlock(&bnxt_re_dev_lock);
+
+ list_for_each_entry(rdev, &to_be_deleted, list) {
+ dev_info(rdev_to_dev(rdev), "Unregistering Device");
+ bnxt_re_dev_stop(rdev);
+ bnxt_re_ib_unreg(rdev, true);
+ bnxt_re_remove_one(rdev);
+ bnxt_re_dev_unreg(rdev);
+ }
unregister_netdevice_notifier(&bnxt_re_netdev_notifier);
if (bnxt_re_wq)
destroy_workqueue(bnxt_re_wq);
return -EINVAL;
}
+ if (test_bit(FIRMWARE_TIMED_OUT, &rcfw->flags))
+ return -ETIMEDOUT;
+
/* Cmdq are in 16-byte units, each request can consume 1 or more
* cmdqe
*/
/* timed out */
dev_err(&rcfw->pdev->dev, "QPLIB: cmdq[%#x]=%#x timedout (%d)msec",
cookie, opcode, RCFW_CMD_WAIT_TIME_MS);
+ set_bit(FIRMWARE_TIMED_OUT, &rcfw->flags);
return rc;
}
unsigned long *cmdq_bitmap;
u32 bmap_size;
unsigned long flags;
-#define FIRMWARE_INITIALIZED_FLAG 1
+#define FIRMWARE_INITIALIZED_FLAG BIT(0)
#define FIRMWARE_FIRST_FLAG BIT(31)
+#define FIRMWARE_TIMED_OUT BIT(3)
wait_queue_head_t waitq;
int (*aeq_handler)(struct bnxt_qplib_rcfw *,
struct creq_func_event *);
unsigned int stid = GET_TID(rpl);
struct c4iw_listen_ep *ep = get_ep_from_stid(dev, stid);
+ if (!ep) {
+ pr_debug("%s stid %d lookup failure!\n", __func__, stid);
+ goto out;
+ }
pr_debug("%s ep %p\n", __func__, ep);
c4iw_wake_up(&ep->com.wr_wait, status2errno(rpl->status));
c4iw_put_ep(&ep->com);
+out:
return 0;
}
c4iw_put_ep(&child_ep->com);
reject:
reject_cr(dev, hwtid, skb);
+out:
if (parent_ep)
c4iw_put_ep(&parent_ep->com);
-out:
return 0;
}
cm_id->provider_data = ep;
goto out;
}
-
+ remove_handle(ep->com.dev, &ep->com.dev->stid_idr, ep->stid);
cxgb4_free_stid(ep->com.dev->rdev.lldi.tids, ep->stid,
ep->com.local_addr.ss_family);
fail2:
static int thermal_init(struct hfi1_devdata *dd);
static void update_statusp(struct hfi1_pportdata *ppd, u32 state);
+static int wait_phys_link_offline_substates(struct hfi1_pportdata *ppd,
+ int msecs);
static int wait_logical_linkstate(struct hfi1_pportdata *ppd, u32 state,
int msecs);
static void log_state_transition(struct hfi1_pportdata *ppd, u32 state);
u64 regs[CCE_NUM_INT_CSRS];
u32 bit;
int i;
+ irqreturn_t handled = IRQ_NONE;
this_cpu_inc(*dd->int_counter);
for_each_set_bit(bit, (unsigned long *)®s[0],
CCE_NUM_INT_CSRS * 64) {
is_interrupt(dd, bit);
+ handled = IRQ_HANDLED;
}
- return IRQ_HANDLED;
+ return handled;
}
static irqreturn_t sdma_interrupt(int irq, void *data)
write_csr(dd, dd->hfi1_id ? ASIC_QSFP2_MASK : ASIC_QSFP1_MASK, mask);
}
-void reset_qsfp(struct hfi1_pportdata *ppd)
+int reset_qsfp(struct hfi1_pportdata *ppd)
{
struct hfi1_devdata *dd = ppd->dd;
u64 mask, qsfp_mask;
* for alarms and warnings
*/
set_qsfp_int_n(ppd, 1);
+
+ /*
+ * After the reset, AOC transmitters are enabled by default. They need
+ * to be turned off to complete the QSFP setup before they can be
+ * enabled again.
+ */
+ return set_qsfp_tx(ppd, 0);
}
static int handle_qsfp_error_conditions(struct hfi1_pportdata *ppd,
{
struct hfi1_devdata *dd = ppd->dd;
u32 previous_state;
+ int offline_state_ret;
int ret;
update_lcb_cache(dd);
ppd->offline_disabled_reason =
HFI1_ODR_MASK(OPA_LINKDOWN_REASON_TRANSIENT);
- /*
- * Wait for offline transition. It can take a while for
- * the link to go down.
- */
- ret = wait_physical_linkstate(ppd, PLS_OFFLINE, 10000);
- if (ret < 0)
- return ret;
-
- /*
- * Now in charge of LCB - must be after the physical state is
- * offline.quiet and before host_link_state is changed.
- */
- set_host_lcb_access(dd);
- write_csr(dd, DC_LCB_ERR_EN, ~0ull); /* watch LCB errors */
-
- /* make sure the logical state is also down */
- ret = wait_logical_linkstate(ppd, IB_PORT_DOWN, 1000);
- if (ret)
- force_logical_link_state_down(ppd);
-
- ppd->host_link_state = HLS_LINK_COOLDOWN; /* LCB access allowed */
+ offline_state_ret = wait_phys_link_offline_substates(ppd, 10000);
+ if (offline_state_ret < 0)
+ return offline_state_ret;
+ /* Disable the AOC transmitters */
if (ppd->port_type == PORT_TYPE_QSFP &&
ppd->qsfp_info.limiting_active &&
qsfp_mod_present(ppd)) {
}
}
+ /*
+ * Wait for the offline.Quiet transition if it hasn't happened yet. It
+ * can take a while for the link to go down.
+ */
+ if (offline_state_ret != PLS_OFFLINE_QUIET) {
+ ret = wait_physical_linkstate(ppd, PLS_OFFLINE, 30000);
+ if (ret < 0)
+ return ret;
+ }
+
+ /*
+ * Now in charge of LCB - must be after the physical state is
+ * offline.quiet and before host_link_state is changed.
+ */
+ set_host_lcb_access(dd);
+ write_csr(dd, DC_LCB_ERR_EN, ~0ull); /* watch LCB errors */
+
+ /* make sure the logical state is also down */
+ ret = wait_logical_linkstate(ppd, IB_PORT_DOWN, 1000);
+ if (ret)
+ force_logical_link_state_down(ppd);
+
+ ppd->host_link_state = HLS_LINK_COOLDOWN; /* LCB access allowed */
+
/*
* The LNI has a mandatory wait time after the physical state
* moves to Offline.Quiet. The wait time may be different
& (HLS_DN_POLL | HLS_VERIFY_CAP | HLS_GOING_UP)) {
/* went down while attempting link up */
check_lni_states(ppd);
+
+ /* The QSFP doesn't need to be reset on LNI failure */
+ ppd->qsfp_info.reset_needed = 0;
}
/* the active link width (downgrade) is 0 on link down */
return 0;
}
+/*
+ * wait_phys_link_offline_substates - wait for any offline substate
+ * @ppd: port device
+ * @msecs: the number of milliseconds to wait
+ *
+ * Wait up to msecs milliseconds for any offline physical link
+ * state change to occur.
+ * Returns the reached offline substate if one is seen, otherwise -ETIMEDOUT.
+ */
+static int wait_phys_link_offline_substates(struct hfi1_pportdata *ppd,
+ int msecs)
+{
+ u32 read_state;
+ unsigned long timeout;
+
+ timeout = jiffies + msecs_to_jiffies(msecs);
+ while (1) {
+ read_state = read_physical_state(ppd->dd);
+ if ((read_state & 0xF0) == PLS_OFFLINE)
+ break;
+ if (time_after(jiffies, timeout)) {
+ dd_dev_err(ppd->dd,
+ "timeout waiting for phy link offline.quiet substates. Read state 0x%x, %dms\n",
+ read_state, msecs);
+ return -ETIMEDOUT;
+ }
+ usleep_range(1950, 2050); /* sleep 2ms-ish */
+ }
+
+ log_state_transition(ppd, read_state);
+ return read_state;
+}
+
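The loop's exit test leans on the substate encoding: all offline substates share the 0x9x high nibble, so masking with 0xF0 compares equal to PLS_OFFLINE (0x90) for any of them, e.g. PLS_OFFLINE_QUIET_DURATION (0x95) & 0xF0 == 0x90.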
#define CLEAR_STATIC_RATE_CONTROL_SMASK(r) \
(r &= ~SEND_CTXT_CHECK_ENABLE_DISALLOW_PBC_STATIC_RATE_CONTROL_SMASK)
#define PLS_OFFLINE_READY_TO_QUIET_LT 0x92
#define PLS_OFFLINE_REPORT_FAILURE 0x93
#define PLS_OFFLINE_READY_TO_QUIET_BCC 0x94
+#define PLS_OFFLINE_QUIET_DURATION 0x95
#define PLS_POLLING 0x20
#define PLS_POLLING_QUIET 0x20
#define PLS_POLLING_ACTIVE 0x21
void handle_link_bounce(struct work_struct *work);
void handle_start_link(struct work_struct *work);
void handle_sma_message(struct work_struct *work);
-void reset_qsfp(struct hfi1_pportdata *ppd);
+int reset_qsfp(struct hfi1_pportdata *ppd);
void qsfp_event(struct work_struct *work);
void start_freeze_handling(struct hfi1_pportdata *ppd, int flags);
int send_idle_sma(struct hfi1_devdata *dd, u64 message);
return ret;
}
-/* magic character sequence that trails an image */
+/* magic character sequence that begins an image */
+#define IMAGE_START_MAGIC "APO="
+
+/* magic character sequence that might trail an image */
#define IMAGE_TRAIL_MAGIC "egamiAPO"
/* EPROM file types */
{
void *buffer;
void *p;
+ u32 length;
int ret;
buffer = kmalloc(P1_SIZE, GFP_KERNEL);
return ret;
}
- /* scan for image magic that may trail the actual data */
- p = strnstr(buffer, IMAGE_TRAIL_MAGIC, P1_SIZE);
- if (!p) {
+ /* config partition is valid only if it starts with IMAGE_START_MAGIC */
+ if (memcmp(buffer, IMAGE_START_MAGIC, strlen(IMAGE_START_MAGIC))) {
kfree(buffer);
return -ENOENT;
}
+ /* scan for image magic that may trail the actual data */
+ p = strnstr(buffer, IMAGE_TRAIL_MAGIC, P1_SIZE);
+ if (p)
+ length = p - buffer;
+ else
+ length = P1_SIZE;
+
*data = buffer;
- *size = p - buffer;
+ *size = length;
return 0;
}
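Illustratively, the read above accepts two layouts: a partition that begins with "APO=" and carries an "egamiAPO" trailer, in which case *size is the trailer's offset, and a trailer-less partition that begins with "APO=", in which case *size falls back to the full P1_SIZE.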
switch (ret) {
case 0:
ret = setup_base_ctxt(fd, uctxt);
- if (uctxt->subctxt_cnt) {
- /*
- * Base context is done (successfully or not), notify
- * anybody using a sub-context that is waiting for
- * this completion.
- */
- clear_bit(HFI1_CTXT_BASE_UNINIT, &uctxt->event_flags);
- wake_up(&uctxt->wait);
- }
+ if (ret)
+ deallocate_ctxt(uctxt);
break;
case 1:
ret = complete_subctxt(fd);
/* Now allocate the RcvHdr queue and eager buffers. */
ret = hfi1_create_rcvhdrq(dd, uctxt);
if (ret)
- return ret;
+ goto done;
ret = hfi1_setup_eagerbufs(uctxt);
if (ret)
- goto setup_failed;
+ goto done;
/* If sub-contexts are enabled, do the appropriate setup */
if (uctxt->subctxt_cnt)
ret = setup_subctxt(uctxt);
if (ret)
- goto setup_failed;
+ goto done;
ret = hfi1_alloc_ctxt_rcv_groups(uctxt);
if (ret)
- goto setup_failed;
+ goto done;
ret = init_user_ctxt(fd, uctxt);
if (ret)
- goto setup_failed;
+ goto done;
user_init(uctxt);
fd->uctxt = uctxt;
hfi1_rcd_get(uctxt);
- return 0;
+done:
+ if (uctxt->subctxt_cnt) {
+ /*
+ * On error, set the failed bit so sub-contexts will clean up
+ * correctly.
+ */
+ if (ret)
+ set_bit(HFI1_CTXT_BASE_FAILED, &uctxt->event_flags);
-setup_failed:
- /* Set the failed bit so sub-context init can do the right thing */
- set_bit(HFI1_CTXT_BASE_FAILED, &uctxt->event_flags);
- deallocate_ctxt(uctxt);
+ /*
+ * Base context is done (successfully or not), notify anybody
+ * using a sub-context that is waiting for this completion.
+ */
+ clear_bit(HFI1_CTXT_BASE_UNINIT, &uctxt->event_flags);
+ wake_up(&uctxt->wait);
+ }
return ret;
}
/*
* Code to adjust PCIe capabilities.
*/
-static int tune_pcie_caps(struct hfi1_devdata *);
+static void tune_pcie_caps(struct hfi1_devdata *);
/*
* Do all the common PCIe setup and initialization.
*/
int request_msix(struct hfi1_devdata *dd, u32 msireq)
{
- int nvec, ret;
+ int nvec;
nvec = pci_alloc_irq_vectors(dd->pcidev, 1, msireq,
PCI_IRQ_MSIX | PCI_IRQ_LEGACY);
return nvec;
}
- ret = tune_pcie_caps(dd);
- if (ret) {
- dd_dev_err(dd, "tune_pcie_caps() failed: %d\n", ret);
- pci_free_irq_vectors(dd->pcidev);
- return ret;
- }
+ tune_pcie_caps(dd);
/* check for legacy IRQ */
if (nvec == 1 && !dd->pcidev->msix_enabled)
module_param_named(aspm, aspm_mode, uint, S_IRUGO);
MODULE_PARM_DESC(aspm, "PCIe ASPM: 0: disable, 1: enable, 2: dynamic");
-static int tune_pcie_caps(struct hfi1_devdata *dd)
+static void tune_pcie_caps(struct hfi1_devdata *dd)
{
struct pci_dev *parent;
u16 rc_mpss, rc_mps, ep_mpss, ep_mps;
* Turn on extended tags in DevCtl in case the BIOS has turned it off
* to improve WFR SDMA bandwidth
*/
- ret = pcie_capability_read_word(dd->pcidev,
- PCI_EXP_DEVCTL, &ectl);
- if (ret) {
- dd_dev_err(dd, "Unable to read from PCI config\n");
- return ret;
- }
-
- if (!(ectl & PCI_EXP_DEVCTL_EXT_TAG)) {
+ ret = pcie_capability_read_word(dd->pcidev, PCI_EXP_DEVCTL, &ectl);
+ if ((!ret) && !(ectl & PCI_EXP_DEVCTL_EXT_TAG)) {
dd_dev_info(dd, "Enabling PCIe extended tags\n");
ectl |= PCI_EXP_DEVCTL_EXT_TAG;
ret = pcie_capability_write_word(dd->pcidev,
PCI_EXP_DEVCTL, ectl);
- if (ret) {
- dd_dev_err(dd, "Unable to write to PCI config\n");
- return ret;
- }
+ if (ret)
+ dd_dev_info(dd, "Unable to write to PCI config\n");
}
/* Find out supported and configured values for parent (root) */
parent = dd->pcidev->bus->self;
* The driver cannot perform the tuning if it does not have
* access to the upstream component.
*/
- if (!parent)
- return -EINVAL;
+ if (!parent) {
+ dd_dev_info(dd, "Parent not found\n");
+ return;
+ }
if (!pci_is_root_bus(parent->bus)) {
dd_dev_info(dd, "Parent not root\n");
- return -EINVAL;
+ return;
+ }
+ if (!pci_is_pcie(parent)) {
+ dd_dev_info(dd, "Parent is not PCI Express capable\n");
+ return;
+ }
+ if (!pci_is_pcie(dd->pcidev)) {
+ dd_dev_info(dd, "PCI device is not PCI Express capable\n");
+ return;
}
-
- if (!pci_is_pcie(parent) || !pci_is_pcie(dd->pcidev))
- return -EINVAL;
rc_mpss = parent->pcie_mpss;
rc_mps = ffs(pcie_get_mps(parent)) - 8;
/* Find out supported and configured values for endpoint (us) */
ep_mrrs = max_mrrs;
pcie_set_readrq(dd->pcidev, ep_mrrs);
}
-
- return 0;
}
/* End of PCIe capability tuning */
* reuse of stale settings established in our previous pass through.
*/
if (ppd->qsfp_info.reset_needed) {
- reset_qsfp(ppd);
+ ret = reset_qsfp(ppd);
+ if (ret)
+ return ret;
refresh_qsfp_cache(ppd, &ppd->qsfp_info);
} else {
ppd->qsfp_info.reset_needed = 1;
CEQ_CREATED,
ILQ_CREATED,
IEQ_CREATED,
- INET_NOTIFIER,
IP_ADDR_REGISTERED,
RDMA_DEV_REGISTERED
};
}
/**
- * listen_port_in_use - determine if port is in use
- * @port: Listen port number
+ * i40iw_port_in_use - determine if port is in use
+ * @port: port number
+ * @active_side: flag for listener side vs active side
*/
-static bool i40iw_listen_port_in_use(struct i40iw_cm_core *cm_core, u16 port)
+static bool i40iw_port_in_use(struct i40iw_cm_core *cm_core, u16 port, bool active_side)
{
struct i40iw_cm_listener *listen_node;
+ struct i40iw_cm_node *cm_node;
unsigned long flags;
bool ret = false;
- spin_lock_irqsave(&cm_core->listen_list_lock, flags);
- list_for_each_entry(listen_node, &cm_core->listen_nodes, list) {
- if (listen_node->loc_port == port) {
- ret = true;
- break;
+ if (active_side) {
+ /* search connected node list */
+ spin_lock_irqsave(&cm_core->ht_lock, flags);
+ list_for_each_entry(cm_node, &cm_core->connected_nodes, list) {
+ if (cm_node->loc_port == port) {
+ ret = true;
+ break;
+ }
+ }
+ if (!ret)
+ clear_bit(port, cm_core->active_side_ports);
+ spin_unlock_irqrestore(&cm_core->ht_lock, flags);
+ } else {
+ spin_lock_irqsave(&cm_core->listen_list_lock, flags);
+ list_for_each_entry(listen_node, &cm_core->listen_nodes, list) {
+ if (listen_node->loc_port == port) {
+ ret = true;
+ break;
+ }
}
+ spin_unlock_irqrestore(&cm_core->listen_list_lock, flags);
}
- spin_unlock_irqrestore(&cm_core->listen_list_lock, flags);
+
return ret;
}
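The active-side accounting above relies on the per-port bitmap added further down (active_side_ports, MAX_PORTS bits). In isolation, and with illustrative names, the idiom is:

	DECLARE_BITMAP(ports, 65536);	/* one bit per TCP port */

	if (!test_and_set_bit(port, ports)) {
		/* first active-side user: program the APBVT entry */
	}
	/* ... and when the last user goes away: */
	clear_bit(port, ports);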
spin_unlock_irqrestore(&cm_core->listen_list_lock, flags);
if (listener->iwdev) {
- if (apbvt_del && !i40iw_listen_port_in_use(cm_core, listener->loc_port))
+ if (apbvt_del && !i40iw_port_in_use(cm_core, listener->loc_port, false))
i40iw_manage_apbvt(listener->iwdev,
listener->loc_port,
I40IW_MANAGE_APBVT_DEL);
if (cm_node->listener) {
i40iw_dec_refcnt_listen(cm_core, cm_node->listener, 0, true);
} else {
- if (!i40iw_listen_port_in_use(cm_core, cm_node->loc_port) &&
- cm_node->apbvt_set) {
+ if (!i40iw_port_in_use(cm_core, cm_node->loc_port, true) && cm_node->apbvt_set) {
i40iw_manage_apbvt(cm_node->iwdev,
cm_node->loc_port,
I40IW_MANAGE_APBVT_DEL);
- i40iw_get_addr_info(cm_node, &nfo);
- if (cm_node->qhash_set) {
- i40iw_manage_qhash(cm_node->iwdev,
- &nfo,
- I40IW_QHASH_TYPE_TCP_ESTABLISHED,
- I40IW_QHASH_MANAGE_TYPE_DELETE,
- NULL,
- false);
- cm_node->qhash_set = 0;
- }
+ cm_node->apbvt_set = 0;
+ }
+ i40iw_get_addr_info(cm_node, &nfo);
+ if (cm_node->qhash_set) {
+ i40iw_manage_qhash(cm_node->iwdev,
+ &nfo,
+ I40IW_QHASH_TYPE_TCP_ESTABLISHED,
+ I40IW_QHASH_MANAGE_TYPE_DELETE,
+ NULL,
+ false);
+ cm_node->qhash_set = 0;
}
}
tcp_info->snd_mss = cpu_to_le32(((u32)cm_node->tcp_cntxt.mss));
if (cm_node->vlan_id < VLAN_TAG_PRESENT) {
tcp_info->insert_vlan_tag = true;
- tcp_info->vlan_tag = cpu_to_le16(cm_node->vlan_id);
+ tcp_info->vlan_tag = cpu_to_le16(((u16)cm_node->user_pri << I40IW_VLAN_PRIO_SHIFT) |
+ cm_node->vlan_id);
}
if (cm_node->ipv4) {
tcp_info->src_port = cpu_to_le16(cm_node->loc_port);
struct sockaddr_in *raddr;
struct sockaddr_in6 *laddr6;
struct sockaddr_in6 *raddr6;
- bool qhash_set = false;
- int apbvt_set = 0;
- int err = 0;
- enum i40iw_status_code status;
+ int ret = 0;
+ unsigned long flags;
ibqp = i40iw_get_qp(cm_id->device, conn_param->qpn);
if (!ibqp)
cm_info.user_pri = rt_tos2priority(cm_id->tos);
i40iw_debug(&iwdev->sc_dev, I40IW_DEBUG_DCB, "%s TOS:[%d] UP:[%d]\n",
__func__, cm_id->tos, cm_info.user_pri);
- if ((cm_info.ipv4 && (laddr->sin_addr.s_addr != raddr->sin_addr.s_addr)) ||
- (!cm_info.ipv4 && memcmp(laddr6->sin6_addr.in6_u.u6_addr32,
- raddr6->sin6_addr.in6_u.u6_addr32,
- sizeof(laddr6->sin6_addr.in6_u.u6_addr32)))) {
- status = i40iw_manage_qhash(iwdev,
- &cm_info,
- I40IW_QHASH_TYPE_TCP_ESTABLISHED,
- I40IW_QHASH_MANAGE_TYPE_ADD,
- NULL,
- true);
- if (status)
- return -EINVAL;
- qhash_set = true;
- }
- status = i40iw_manage_apbvt(iwdev, cm_info.loc_port, I40IW_MANAGE_APBVT_ADD);
- if (status) {
- i40iw_manage_qhash(iwdev,
- &cm_info,
- I40IW_QHASH_TYPE_TCP_ESTABLISHED,
- I40IW_QHASH_MANAGE_TYPE_DELETE,
- NULL,
- false);
- return -EINVAL;
- }
-
- apbvt_set = 1;
cm_id->add_ref(cm_id);
cm_node = i40iw_create_cm_node(&iwdev->cm_core, iwdev,
conn_param->private_data_len,
&cm_info);
if (IS_ERR(cm_node)) {
- err = PTR_ERR(cm_node);
- goto err_out;
+ ret = PTR_ERR(cm_node);
+ cm_id->rem_ref(cm_id);
+ return ret;
+ }
+
+ if ((cm_info.ipv4 && (laddr->sin_addr.s_addr != raddr->sin_addr.s_addr)) ||
+ (!cm_info.ipv4 && memcmp(laddr6->sin6_addr.in6_u.u6_addr32,
+ raddr6->sin6_addr.in6_u.u6_addr32,
+ sizeof(laddr6->sin6_addr.in6_u.u6_addr32)))) {
+ if (i40iw_manage_qhash(iwdev, &cm_info, I40IW_QHASH_TYPE_TCP_ESTABLISHED,
+ I40IW_QHASH_MANAGE_TYPE_ADD, NULL, true)) {
+ ret = -EINVAL;
+ goto err;
+ }
+ cm_node->qhash_set = true;
}
+ spin_lock_irqsave(&iwdev->cm_core.ht_lock, flags);
+ if (!test_and_set_bit(cm_info.loc_port, iwdev->cm_core.active_side_ports)) {
+ spin_unlock_irqrestore(&iwdev->cm_core.ht_lock, flags);
+ if (i40iw_manage_apbvt(iwdev, cm_info.loc_port, I40IW_MANAGE_APBVT_ADD)) {
+ ret = -EINVAL;
+ goto err;
+ }
+ } else {
+ spin_unlock_irqrestore(&iwdev->cm_core.ht_lock, flags);
+ }
+
+ cm_node->apbvt_set = true;
i40iw_record_ird_ord(cm_node, (u16)conn_param->ird, (u16)conn_param->ord);
if (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO &&
!cm_node->ord_size)
cm_node->ord_size = 1;
- cm_node->apbvt_set = apbvt_set;
- cm_node->qhash_set = qhash_set;
iwqp->cm_node = cm_node;
cm_node->iwqp = iwqp;
iwqp->cm_id = cm_id;
if (cm_node->state != I40IW_CM_STATE_OFFLOADED) {
cm_node->state = I40IW_CM_STATE_SYN_SENT;
- err = i40iw_send_syn(cm_node, 0);
- if (err) {
- i40iw_rem_ref_cm_node(cm_node);
- goto err_out;
- }
+ ret = i40iw_send_syn(cm_node, 0);
+ if (ret)
+ goto err;
}
i40iw_debug(cm_node->dev,
cm_node->rem_port,
cm_node,
cm_node->cm_id);
+
return 0;
-err_out:
+err:
if (cm_info.ipv4)
i40iw_debug(&iwdev->sc_dev,
I40IW_DEBUG_CM,
"Api - connect() FAILED: dest addr=%pI6",
cm_info.rem_addr);
- if (qhash_set)
- i40iw_manage_qhash(iwdev,
- &cm_info,
- I40IW_QHASH_TYPE_TCP_ESTABLISHED,
- I40IW_QHASH_MANAGE_TYPE_DELETE,
- NULL,
- false);
-
- if (apbvt_set && !i40iw_listen_port_in_use(&iwdev->cm_core,
- cm_info.loc_port))
- i40iw_manage_apbvt(iwdev,
- cm_info.loc_port,
- I40IW_MANAGE_APBVT_DEL);
+ i40iw_rem_ref_cm_node(cm_node);
cm_id->rem_ref(cm_id);
iwdev->cm_core.stats_connect_errs++;
- return err;
+ return ret;
}
/**
#define I40IW_HW_IRD_SETTING_32 32
#define I40IW_HW_IRD_SETTING_64 64
+#define MAX_PORTS 65536
+#define I40IW_VLAN_PRIO_SHIFT 13
+
enum ietf_mpa_flags {
IETF_MPA_FLAGS_MARKERS = 0x80, /* receive Markers */
IETF_MPA_FLAGS_CRC = 0x40, /* receive CRC */
spinlock_t ht_lock; /* manage hash table */
spinlock_t listen_list_lock; /* listen list */
+ unsigned long active_side_ports[BITS_TO_LONGS(MAX_PORTS)];
+
u64 stats_nodes_created;
u64 stats_nodes_destroyed;
u64 stats_listen_created;
* @wqe: cqp wqe for header
* @header: header for the cqp wqe
*/
-static inline void i40iw_insert_wqe_hdr(u64 *wqe, u64 header)
+void i40iw_insert_wqe_hdr(u64 *wqe, u64 header)
{
wmb(); /* make sure WQE is populated before polarity is set */
set_64bit_val(wqe, 24, header);
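Un-inlining the helper lets the PUDA and CQP paths further down reuse it; what it enforces is the usual publish pattern, sketched here (fill_wqe_fields() is hypothetical):

	fill_wqe_fields(wqe);		/* plain stores */
	wmb();				/* order fields before the header */
	set_64bit_val(wqe, 24, header);	/* header carries the valid bit */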
.notifier_call = i40iw_net_event
};
-static atomic_t i40iw_notifiers_registered;
-
/**
* i40iw_find_i40e_handler - find a handler given a client info
* @ldev: pointer to a client info
*/
static void i40iw_register_notifiers(void)
{
- if (atomic_inc_return(&i40iw_notifiers_registered) == 1) {
- register_inetaddr_notifier(&i40iw_inetaddr_notifier);
- register_inet6addr_notifier(&i40iw_inetaddr6_notifier);
- register_netevent_notifier(&i40iw_net_notifier);
- }
+ register_inetaddr_notifier(&i40iw_inetaddr_notifier);
+ register_inet6addr_notifier(&i40iw_inetaddr6_notifier);
+ register_netevent_notifier(&i40iw_net_notifier);
+}
+
+/**
+ * i40iw_unregister_notifiers - unregister tcp ip notifiers
+ */
+
+static void i40iw_unregister_notifiers(void)
+{
+ unregister_netevent_notifier(&i40iw_net_notifier);
+ unregister_inetaddr_notifier(&i40iw_inetaddr_notifier);
+ unregister_inet6addr_notifier(&i40iw_inetaddr6_notifier);
}
/**
u32 i;
u32 size;
+ if (!ldev->msix_count) {
+ i40iw_pr_err("No MSI-X vectors\n");
+ return I40IW_ERR_CONFIG;
+ }
+
iwdev->msix_count = ldev->msix_count;
size = sizeof(struct i40iw_msix_vector) * iwdev->msix_count;
if (!iwdev->reset)
i40iw_del_macip_entry(iwdev, (u8)iwdev->mac_ip_table_idx);
/* fallthrough */
- case INET_NOTIFIER:
- if (!atomic_dec_return(&i40iw_notifiers_registered)) {
- unregister_netevent_notifier(&i40iw_net_notifier);
- unregister_inetaddr_notifier(&i40iw_inetaddr_notifier);
- unregister_inet6addr_notifier(&i40iw_inetaddr6_notifier);
- }
/* fallthrough */
case PBLE_CHUNK_MEM:
i40iw_destroy_pble_pool(dev, iwdev->pble_rsrc);
status = i40iw_save_msix_info(iwdev, ldev);
if (status)
- goto exit;
+ return status;
iwdev->hw.dev_context = (void *)ldev->pcidev;
iwdev->hw.hw_addr = ldev->hw_addr;
status = i40iw_allocate_dma_mem(&iwdev->hw,
break;
iwdev->init_state = PBLE_CHUNK_MEM;
iwdev->virtchnl_wq = alloc_ordered_workqueue("iwvch", WQ_MEM_RECLAIM);
- i40iw_register_notifiers();
- iwdev->init_state = INET_NOTIFIER;
status = i40iw_add_mac_ip(iwdev);
if (status)
break;
i40iw_client.type = I40E_CLIENT_IWARP;
spin_lock_init(&i40iw_handler_lock);
ret = i40e_register_client(&i40iw_client);
+ i40iw_register_notifiers();
+
return ret;
}
*/
static void __exit i40iw_exit_module(void)
{
+ i40iw_unregister_notifiers();
i40e_unregister_client(&i40iw_client);
}
struct i40iw_fast_reg_stag_info *info,
bool post_sq);
+void i40iw_insert_wqe_hdr(u64 *wqe, u64 header);
+
/* HMC/FPM functions */
enum i40iw_status_code i40iw_sc_init_iw_hmc(struct i40iw_sc_dev *dev,
u8 hmc_fn_id);
get_64bit_val(wqe, 24, &offset24);
offset24 = (offset24) ? 0 : LS_64(1, I40IWQPSQ_VALID);
- set_64bit_val(wqe, 24, offset24);
set_64bit_val(wqe, 0, buf->mem.pa);
set_64bit_val(wqe, 8,
LS_64(buf->mem.size, I40IWQPSQ_FRAG_LEN));
- set_64bit_val(wqe, 24, offset24);
+ i40iw_insert_wqe_hdr(wqe, offset24);
}
/**
set_64bit_val(wqe, 8, LS_64(info->len, I40IWQPSQ_FRAG_LEN));
set_64bit_val(wqe, 16, header[0]);
- /* Ensure all data is written before writing valid bit */
- wmb();
- set_64bit_val(wqe, 24, header[1]);
+ i40iw_insert_wqe_hdr(wqe, header[1]);
i40iw_debug_buf(qp->dev, I40IW_DEBUG_PUDA, "PUDA SEND WQE", wqe, 32);
i40iw_qp_post_wr(&qp->qp_uk);
LS_64(2, I40IW_CQPSQ_QP_NEXTIWSTATE) |
LS_64(cqp->polarity, I40IW_CQPSQ_WQEVALID);
- set_64bit_val(wqe, 24, header);
+ i40iw_insert_wqe_hdr(wqe, header);
i40iw_debug_buf(cqp->dev, I40IW_DEBUG_PUDA, "PUDA CQE", wqe, 32);
i40iw_sc_cqp_post_sq(cqp);
LS_64(1, I40IW_CQPSQ_CQ_ENCEQEMASK) |
LS_64(1, I40IW_CQPSQ_CQ_CEQIDVALID) |
LS_64(cqp->polarity, I40IW_CQPSQ_WQEVALID);
- set_64bit_val(wqe, 24, header);
+ i40iw_insert_wqe_hdr(wqe, header);
i40iw_debug_buf(dev, I40IW_DEBUG_PUDA, "PUDA CQE",
wqe, I40IW_CQP_WQE_SIZE * 8);
return NOTIFY_DONE;
iwdev = &hdl->device;
- if (iwdev->init_state < INET_NOTIFIER)
+ if (iwdev->init_state < IP_ADDR_REGISTERED || iwdev->closing)
return NOTIFY_DONE;
netdev = iwdev->ldev->netdev;
return NOTIFY_DONE;
iwdev = &hdl->device;
- if (iwdev->init_state < INET_NOTIFIER)
+ if (iwdev->init_state < IP_ADDR_REGISTERED || iwdev->closing)
return NOTIFY_DONE;
netdev = iwdev->ldev->netdev;
if (!iwhdl)
return NOTIFY_DONE;
iwdev = &iwhdl->device;
- if (iwdev->init_state < INET_NOTIFIER)
+ if (iwdev->init_state < IP_ADDR_REGISTERED || iwdev->closing)
return NOTIFY_DONE;
p = (__be32 *)neigh->primary_key;
i40iw_copy_ip_ntohl(local_ipaddr, p);
attr->cap.max_inline_data = I40IW_MAX_INLINE_DATA_SIZE;
attr->cap.max_send_sge = I40IW_MAX_WQ_FRAGMENT_COUNT;
attr->cap.max_recv_sge = I40IW_MAX_WQ_FRAGMENT_COUNT;
+ attr->port_num = 1;
init_attr->event_handler = iwqp->ibqp.event_handler;
init_attr->qp_context = iwqp->ibqp.qp_context;
init_attr->send_cq = iwqp->ibqp.send_cq;
init_attr->recv_cq = iwqp->ibqp.recv_cq;
init_attr->srq = iwqp->ibqp.srq;
init_attr->cap = attr->cap;
+ init_attr->port_num = 1;
return 0;
}
iwqp->hw_tcp_state = I40IW_TCP_STATE_CLOSED;
iwqp->last_aeq = I40IW_AE_RESET_SENT;
spin_unlock_irqrestore(&iwqp->lock, flags);
+ i40iw_cm_disconn(iwqp);
}
+ } else {
+ spin_lock_irqsave(&iwqp->lock, flags);
+ if (iwqp->cm_id) {
+ if (atomic_inc_return(&iwqp->close_timer_started) == 1) {
+ iwqp->cm_id->add_ref(iwqp->cm_id);
+ i40iw_schedule_cm_timer(iwqp->cm_node,
+ (struct i40iw_puda_buf *)iwqp,
+ I40IW_TIMER_TYPE_CLOSE, 1, 0);
+ }
+ }
+ spin_unlock_irqrestore(&iwqp->lock, flags);
}
}
return 0;
}
if (MLX5_CAP_GEN(mdev, tag_matching)) {
- props->xrq_caps.max_rndv_hdr_size = MLX5_TM_MAX_RNDV_MSG_SIZE;
- props->xrq_caps.max_num_tags =
+ props->tm_caps.max_rndv_hdr_size = MLX5_TM_MAX_RNDV_MSG_SIZE;
+ props->tm_caps.max_num_tags =
(1 << MLX5_CAP_GEN(mdev, log_tag_matching_list_sz)) - 1;
- props->xrq_caps.flags = IB_TM_CAP_RC;
- props->xrq_caps.max_ops =
+ props->tm_caps.flags = IB_TM_CAP_RC;
+ props->tm_caps.max_ops =
1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
- props->xrq_caps.max_sge = MLX5_TM_MAX_SGE;
+ props->tm_caps.max_sge = MLX5_TM_MAX_SGE;
}
if (field_avail(typeof(resp), cqe_comp_caps, uhw->outlen)) {
if (!dbg)
return -ENOMEM;
+ dev->delay_drop.dbg = dbg;
+
dbg->dir_debugfs =
debugfs_create_dir("delay_drop",
dev->mdev->priv.dbg_root);
if (!dbg->dir_debugfs)
- return -ENOMEM;
+ goto out_debugfs;
dbg->events_cnt_debugfs =
debugfs_create_atomic_t("num_timeout_events", 0400,
if (!dbg->timeout_debugfs)
goto out_debugfs;
- dev->delay_drop.dbg = dbg;
-
return 0;
out_debugfs:
err_uar_page:
mlx5_put_uars_page(dev->mdev, dev->mdev->priv.uar);
-err_cnt:
- mlx5_ib_cleanup_cong_debugfs(dev);
err_cong:
+ mlx5_ib_cleanup_cong_debugfs(dev);
+err_cnt:
if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt))
mlx5_ib_dealloc_counters(dev);
{
unsigned long tmp;
unsigned long m;
- int i, k;
- u64 base = 0;
- int p = 0;
- int skip;
- int mask;
- u64 len;
- u64 pfn;
+ u64 base = ~0, p = 0;
+ u64 len, pfn;
+ int i = 0;
struct scatterlist *sg;
int entry;
unsigned long page_shift = umem->page_shift;
m = find_first_bit(&tmp, BITS_PER_LONG);
if (max_page_shift)
m = min_t(unsigned long, max_page_shift - page_shift, m);
- skip = 1 << m;
- mask = skip - 1;
- i = 0;
+
for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
len = sg_dma_len(sg) >> page_shift;
pfn = sg_dma_address(sg) >> page_shift;
- for (k = 0; k < len; k++) {
- if (!(i & mask)) {
- tmp = (unsigned long)pfn;
- m = min_t(unsigned long, m, find_first_bit(&tmp, BITS_PER_LONG));
- skip = 1 << m;
- mask = skip - 1;
- base = pfn;
- p = 0;
- } else {
- if (base + p != pfn) {
- tmp = (unsigned long)p;
- m = find_first_bit(&tmp, BITS_PER_LONG);
- skip = 1 << m;
- mask = skip - 1;
- base = pfn;
- p = 0;
- }
- }
- p++;
- i++;
+ if (base + p != pfn) {
+ /* If either the offset or the new
+ * base is unaligned, update m
+ */
+ tmp = (unsigned long)(pfn | p);
+ if (!IS_ALIGNED(tmp, 1 << m))
+ m = find_first_bit(&tmp, BITS_PER_LONG);
+
+ base = pfn;
+ p = 0;
}
+
+ p += len;
+ i += len;
}
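/*
 * Worked example with illustrative pfns: an entry covering pfns
 * 0x1000..0x1003 followed by one starting at 0x1008 leaves
 * base == 0x1000 and p == 4, so base + p == 0x1004 != 0x1008;
 * tmp = 0x1008 | 4 = 0x100c is only 4-page aligned, so m drops
 * to 2 (runs of 2^2 contiguous pages).
 */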
if (i) {
#define MLX5_UMR_ALIGN 2048
-static int clean_mr(struct mlx5_ib_mr *mr);
+static int clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
+static int dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
static int mr_cache_max_order(struct mlx5_ib_dev *dev);
static int unreg_umr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
err = mlx5_ib_update_xlt(mr, 0, ncont, page_shift,
update_xlt_flags);
+
if (err) {
- mlx5_ib_dereg_mr(&mr->ibmr);
+ dereg_mr(dev, mr);
return ERR_PTR(err);
}
}
err = mr_umem_get(pd, addr, len, access_flags, &mr->umem,
&npages, &page_shift, &ncont, &order);
if (err < 0) {
- clean_mr(mr);
+ clean_mr(dev, mr);
return err;
}
}
if (err) {
mlx5_ib_warn(dev, "Failed to rereg UMR\n");
ib_umem_release(mr->umem);
- clean_mr(mr);
+ clean_mr(dev, mr);
return err;
}
}
}
}
-static int clean_mr(struct mlx5_ib_mr *mr)
+static int clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
{
- struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
int allocated_from_cache = mr->allocated_from_cache;
int err;
return 0;
}
-int mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+static int dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
{
- struct mlx5_ib_dev *dev = to_mdev(ibmr->device);
- struct mlx5_ib_mr *mr = to_mmr(ibmr);
int npages = mr->npages;
struct ib_umem *umem = mr->umem;
}
#endif
- clean_mr(mr);
+ clean_mr(dev, mr);
if (umem) {
ib_umem_release(umem);
return 0;
}
+int mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+{
+ struct mlx5_ib_dev *dev = to_mdev(ibmr->device);
+ struct mlx5_ib_mr *mr = to_mmr(ibmr);
+
+ return dereg_mr(dev, mr);
+}
+
struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd,
enum ib_mr_type mr_type,
u32 max_num_sg)
mr->ibmr.iova);
set_wqe_32bit_value(wqe->wqe_words,
NES_IWARP_SQ_FMR_WQE_LENGTH_LOW_IDX,
- mr->ibmr.length);
+ lower_32_bits(mr->ibmr.length));
set_wqe_32bit_value(wqe->wqe_words,
NES_IWARP_SQ_FMR_WQE_LENGTH_HIGH_IDX, 0);
set_wqe_32bit_value(wqe->wqe_words,
mr->npages * 8);
nes_debug(NES_DBG_IW_TX, "SQ_REG_MR: iova_start: %llx, "
- "length: %d, rkey: %0x, pgl_paddr: %llx, "
+ "length: %lld, rkey: %0x, pgl_paddr: %llx, "
"page_list_len: %u, wqe_misc: %x\n",
(unsigned long long) mr->ibmr.iova,
mr->ibmr.length,
case OCRDMA_MBX_ADDI_STATUS_INSUFFICIENT_RESOURCES:
err_num = -EAGAIN;
break;
+ default:
+ err_num = -EFAULT;
}
+ break;
default:
err_num = -EFAULT;
}
u8 wqe_size;
u8 smac[ETH_ALEN];
- u16 vlan_id;
+ u16 vlan;
int rc;
} *rqe_wr_id;
qp->rqe_wr_id[qp->rq.gsi_cons].rc = data->u.data_length_error ?
-EINVAL : 0;
- qp->rqe_wr_id[qp->rq.gsi_cons].vlan_id = data->vlan;
+ qp->rqe_wr_id[qp->rq.gsi_cons].vlan = data->vlan;
/* note: length stands for data length i.e. GRH is excluded */
qp->rqe_wr_id[qp->rq.gsi_cons].sg_list[0].length =
data->length.data_length;
struct qedr_cq *cq = get_qedr_cq(ibcq);
struct qedr_qp *qp = dev->gsi_qp;
unsigned long flags;
+ u16 vlan_id;
int i = 0;
spin_lock_irqsave(&cq->cq_lock, flags);
wc[i].wc_flags |= IB_WC_GRH | IB_WC_IP_CSUM_OK;
ether_addr_copy(wc[i].smac, qp->rqe_wr_id[qp->rq.cons].smac);
wc[i].wc_flags |= IB_WC_WITH_SMAC;
- if (qp->rqe_wr_id[qp->rq.cons].vlan_id) {
+
+ vlan_id = qp->rqe_wr_id[qp->rq.cons].vlan &
+ VLAN_VID_MASK;
+ if (vlan_id) {
wc[i].wc_flags |= IB_WC_WITH_VLAN;
- wc[i].vlan_id = qp->rqe_wr_id[qp->rq.cons].vlan_id;
+ wc[i].vlan_id = vlan_id;
+ wc[i].sl = (qp->rqe_wr_id[qp->rq.cons].vlan &
+ VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
}
qedr_inc_sw_cons(&qp->rq);
return (enum ib_wc_status)status;
}
-static inline int pvrdma_wc_opcode_to_ib(int opcode)
-{
- return opcode;
+static inline int pvrdma_wc_opcode_to_ib(unsigned int opcode)
+{
+ switch (opcode) {
+ case PVRDMA_WC_SEND:
+ return IB_WC_SEND;
+ case PVRDMA_WC_RDMA_WRITE:
+ return IB_WC_RDMA_WRITE;
+ case PVRDMA_WC_RDMA_READ:
+ return IB_WC_RDMA_READ;
+ case PVRDMA_WC_COMP_SWAP:
+ return IB_WC_COMP_SWAP;
+ case PVRDMA_WC_FETCH_ADD:
+ return IB_WC_FETCH_ADD;
+ case PVRDMA_WC_LOCAL_INV:
+ return IB_WC_LOCAL_INV;
+ case PVRDMA_WC_FAST_REG_MR:
+ return IB_WC_REG_MR;
+ case PVRDMA_WC_MASKED_COMP_SWAP:
+ return IB_WC_MASKED_COMP_SWAP;
+ case PVRDMA_WC_MASKED_FETCH_ADD:
+ return IB_WC_MASKED_FETCH_ADD;
+ case PVRDMA_WC_RECV:
+ return IB_WC_RECV;
+ case PVRDMA_WC_RECV_RDMA_WITH_IMM:
+ return IB_WC_RECV_RDMA_WITH_IMM;
+ default:
+ return IB_WC_SEND;
+ }
}
static inline int pvrdma_wc_flags_to_ib(int flags)
wc->status != IB_WC_WR_FLUSH_ERR) {
struct ipoib_neigh *neigh;
- if (wc->status != IB_WC_RNR_RETRY_EXC_ERR)
- ipoib_warn(priv, "failed cm send event (status=%d, wrid=%d vend_err %x)\n",
- wc->status, wr_id, wc->vendor_err);
+ /* IB_WC[_RNR]_RETRY_EXC_ERR error is part of the life cycle,
+ * so don't make waves.
+ */
+ if (wc->status == IB_WC_RNR_RETRY_EXC_ERR ||
+ wc->status == IB_WC_RETRY_EXC_ERR)
+ ipoib_dbg(priv,
+ "%s: failed cm send event (status=%d, wrid=%d vend_err 0x%x)\n",
+ __func__, wc->status, wr_id, wc->vendor_err);
else
- ipoib_dbg(priv, "failed cm send event (status=%d, wrid=%d vend_err %x)\n",
- wc->status, wr_id, wc->vendor_err);
+ ipoib_warn(priv,
+ "%s: failed cm send event (status=%d, wrid=%d vend_err 0x%x)\n",
+ __func__, wc->status, wr_id, wc->vendor_err);
spin_lock_irqsave(&priv->lock, flags);
neigh = tx->neigh;
*/
priv->dev->broadcast[8] = priv->pkey >> 8;
priv->dev->broadcast[9] = priv->pkey & 0xff;
-
- /*
- * Update the broadcast address in the priv->broadcast object,
- * in case it already exists, otherwise no one will do that.
- */
- if (priv->broadcast) {
- spin_lock_irq(&priv->lock);
- memcpy(priv->broadcast->mcmember.mgid.raw,
- priv->dev->broadcast + 4,
- sizeof(union ib_gid));
- spin_unlock_irq(&priv->lock);
- }
-
return 0;
}
{
struct ipoib_dev_priv *priv;
struct ib_port_attr attr;
+ struct rdma_netdev *rn;
int result = -ENOMEM;
priv = ipoib_intf_alloc(hca, port, format);
ipoib_dev_cleanup(priv->dev);
device_init_failed:
- free_netdev(priv->dev);
+ rn = netdev_priv(priv->dev);
+ rn->free_rdma_netdev(priv->dev);
kfree(priv);
alloc_mem_failed:
return;
list_for_each_entry_safe(priv, tmp, dev_list, list) {
- struct rdma_netdev *rn = netdev_priv(priv->dev);
+ struct rdma_netdev *parent_rn = netdev_priv(priv->dev);
ib_unregister_event_handler(&priv->event_handler);
flush_workqueue(ipoib_workqueue);
unregister_netdev(priv->dev);
mutex_unlock(&priv->sysfs_mutex);
- rn->free_rdma_netdev(priv->dev);
+ parent_rn->free_rdma_netdev(priv->dev);
+
+ list_for_each_entry_safe(cpriv, tcpriv, &priv->child_intfs, list) {
+ struct rdma_netdev *child_rn;
- list_for_each_entry_safe(cpriv, tcpriv, &priv->child_intfs, list)
+ child_rn = netdev_priv(cpriv->dev);
+ child_rn->free_rdma_netdev(cpriv->dev);
kfree(cpriv);
+ }
kfree(priv);
}
return restart_syscall();
}
- priv = ipoib_intf_alloc(ppriv->ca, ppriv->port, intf_name);
- if (!priv) {
+ if (!down_write_trylock(&ppriv->vlan_rwsem)) {
rtnl_unlock();
mutex_unlock(&ppriv->sysfs_mutex);
- return -ENOMEM;
+ return restart_syscall();
}
- down_write(&ppriv->vlan_rwsem);
+ priv = ipoib_intf_alloc(ppriv->ca, ppriv->port, intf_name);
+ if (!priv) {
+ result = -ENOMEM;
+ goto out;
+ }
/*
* First ensure this isn't a duplicate. We check the parent device and
rtnl_unlock();
mutex_unlock(&ppriv->sysfs_mutex);
- if (result) {
- free_netdev(priv->dev);
+ if (result && priv) {
+ struct rdma_netdev *rn;
+
+ rn = netdev_priv(priv->dev);
+ rn->free_rdma_netdev(priv->dev);
kfree(priv);
}
return restart_syscall();
}
- down_write(&ppriv->vlan_rwsem);
+ if (!down_write_trylock(&ppriv->vlan_rwsem)) {
+ rtnl_unlock();
+ mutex_unlock(&ppriv->sysfs_mutex);
+ return restart_syscall();
+ }
+
list_for_each_entry_safe(priv, tpriv, &ppriv->child_intfs, list) {
if (priv->pkey == pkey &&
priv->child_type == IPOIB_LEGACY_CHILD) {
mutex_unlock(&ppriv->sysfs_mutex);
if (dev) {
- free_netdev(dev);
+ struct rdma_netdev *rn;
+
+ rn = netdev_priv(dev);
+ rn->free_rdma_netdev(priv->dev);
kfree(priv);
return 0;
}
{
int i;
- iser_err("page vec npages %d data length %d\n",
+ iser_err("page vec npages %d data length %lld\n",
page_vec->npages, page_vec->fake_mr.length);
for (i = 0; i < page_vec->npages; i++)
iser_err("vec[%d]: %llx\n", i, page_vec->pages[i]);
EXPORT_SYMBOL_GPL(input_ff_erase);
/*
- * flush_effects - erase all effects owned by a file handle
+ * input_ff_flush - erase all effects owned by a file handle
+ * @dev: input device to erase effect from
+ * @file: purported owner of the effects
+ *
+ * This function erases all force-feedback effects associated with
+ * the given owner from the specified device. Note that @file may be %NULL,
+ * in which case all effects will be erased.
*/
-static int flush_effects(struct input_dev *dev, struct file *file)
+int input_ff_flush(struct input_dev *dev, struct file *file)
{
struct ff_device *ff = dev->ff;
int i;
return 0;
}
+EXPORT_SYMBOL_GPL(input_ff_flush);
/**
* input_ff_event() - generic handler for force-feedback events
mutex_init(&ff->mutex);
dev->ff = ff;
- dev->flush = flush_effects;
+ dev->flush = input_ff_flush;
dev->event = input_ff_event;
__set_bit(EV_FF, dev->evbit);
uinput_request_alloc_id(udev, request));
}
-static void uinput_request_done(struct uinput_device *udev,
- struct uinput_request *request)
+static void uinput_request_release_slot(struct uinput_device *udev,
+ unsigned int id)
{
/* Mark slot as available */
- udev->requests[request->id] = NULL;
- wake_up(&udev->requests_waitq);
+ spin_lock(&udev->requests_lock);
+ udev->requests[id] = NULL;
+ spin_unlock(&udev->requests_lock);
- complete(&request->done);
+ wake_up(&udev->requests_waitq);
}
static int uinput_request_send(struct uinput_device *udev,
static int uinput_request_submit(struct uinput_device *udev,
struct uinput_request *request)
{
- int error;
+ int retval;
- error = uinput_request_reserve_slot(udev, request);
- if (error)
- return error;
+ retval = uinput_request_reserve_slot(udev, request);
+ if (retval)
+ return retval;
- error = uinput_request_send(udev, request);
- if (error) {
- uinput_request_done(udev, request);
- return error;
- }
+ retval = uinput_request_send(udev, request);
+ if (retval)
+ goto out;
wait_for_completion(&request->done);
- return request->retval;
+ retval = request->retval;
+
+ out:
+ uinput_request_release_slot(udev, request->id);
+ return retval;
}
/*
request = udev->requests[i];
if (request) {
request->retval = -ENODEV;
- uinput_request_done(udev, request);
+ complete(&request->done);
}
}
return uinput_request_submit(udev, &request);
}
+static int uinput_dev_flush(struct input_dev *dev, struct file *file)
+{
+ /*
+ * If we are called with file == NULL that means we are tearing
+ * down the device, and therefore we can not handle FF erase
+ * requests: either we are handling UI_DEV_DESTROY (and holding
+ * the udev->mutex), or the file descriptor is closed and there is
+ * nobody on the other side anymore.
+ */
+ return file ? input_ff_flush(dev, file) : 0;
+}
+
static void uinput_destroy_device(struct uinput_device *udev)
{
const char *name, *phys;
dev->ff->playback = uinput_dev_playback;
dev->ff->set_gain = uinput_dev_set_gain;
dev->ff->set_autocenter = uinput_dev_set_autocenter;
+ /*
+ * The standard input_ff_flush() implementation does
+ * not quite work for uinput as we can't reasonably
+ * handle FF requests during device teardown.
+ */
+ dev->flush = uinput_dev_flush;
}
error = input_register_device(udev->dev);
}
req->retval = ff_up.retval;
- uinput_request_done(udev, req);
+ complete(&req->done);
goto out;
case UI_END_FF_ERASE:
}
req->retval = ff_erase.retval;
- uinput_request_done(udev, req);
+ complete(&req->done);
goto out;
}
}
/* Wait for F/W to update one page ROM data. */
- msleep(20);
+ msleep(35);
error = elan_i2c_read_cmd(client, ETP_I2C_IAP_CTRL_CMD, val);
if (error) {
config IPMMU_VMSA
bool "Renesas VMSA-compatible IPMMU"
depends on ARM || IOMMU_DMA
- depends on ARCH_RENESAS || COMPILE_TEST
+ depends on ARCH_RENESAS || (COMPILE_TEST && !GENERIC_ATOMIC64)
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU
config QCOM_IOMMU
# Note: iommu drivers cannot (yet?) be built as modules
bool "Qualcomm IOMMU Support"
- depends on ARCH_QCOM || COMPILE_TEST
+ depends on ARCH_QCOM || (COMPILE_TEST && !GENERIC_ATOMIC64)
+ depends on HAS_DMA
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU
hi = readl(iommu->mmio_base + MMIO_DEV_TABLE_OFFSET + 4);
entry = (((u64) hi) << 32) + lo;
if (last_entry && last_entry != entry) {
- pr_err("IOMMU:%d should use the same dev table as others!/n",
+ pr_err("IOMMU:%d should use the same dev table as others!\n",
iommu->index);
return false;
}
old_devtb_size = ((entry & ~PAGE_MASK) + 1) << 12;
if (old_devtb_size != dev_table_size) {
- pr_err("The device table size of IOMMU:%d is not expected!/n",
+ pr_err("The device table size of IOMMU:%d is not expected!\n",
iommu->index);
return false;
}
old_devtb_phys = entry & PAGE_MASK;
if (old_devtb_phys >= 0x100000000ULL) {
- pr_err("The address of old device table is above 4G, not trustworthy!/n");
+ pr_err("The address of old device table is above 4G, not trustworthy!\n");
return false;
}
old_devtb = memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
old_dev_tbl_cpy = (void *)__get_free_pages(gfp_flag,
get_order(dev_table_size));
if (old_dev_tbl_cpy == NULL) {
- pr_err("Failed to allocate memory for copying old device table!/n");
+ pr_err("Failed to allocate memory for copying old device table!\n");
return false;
}
#define dmar_parse_one_rhsa dmar_res_noop
#endif
-static void __init
+static void
dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
{
struct acpi_dmar_hardware_unit *drhd;
static void __arm_v7s_pte_sync(arm_v7s_iopte *ptep, int num_entries,
struct io_pgtable_cfg *cfg)
{
- if (!(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA))
+ if (cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)
return;
dma_sync_single_for_device(cfg->iommu_dev, __arm_v7s_dma_addr(ptep),
int ret;
spin_lock_irqsave(&dom->pgtlock, flags);
- ret = dom->iop->map(dom->iop, iova, paddr, size, prot);
+ ret = dom->iop->map(dom->iop, iova, paddr & DMA_BIT_MASK(32),
+ size, prot);
spin_unlock_irqrestore(&dom->pgtlock, flags);
return ret;
err = of_iommu_xlate(info->dev, &iommu_spec);
of_node_put(iommu_spec.np);
- if (err)
- return err;
-
- return info->np == pdev->bus->dev.of_node;
+ return err;
}
const struct iommu_ops *of_iommu_configure(struct device *dev,
{
const __be32 *cell;
u64 hwid;
- int i;
+ int cpu;
cell = of_get_property(dn, "reg", NULL);
if (!cell)
if (hwid & ~MPIDR_HWID_BITMASK)
return -1;
- for (i = 0; i < num_possible_cpus(); i++)
- if (cpu_logical_map(i) == hwid)
- return i;
+ for_each_possible_cpu(cpu)
+ if (cpu_logical_map(cpu) == hwid)
+ return cpu;
return -1;
}
{
struct its_cmd_info info = {
.cmd_type = MAP_VLPI,
- .map = map,
+ {
+ .map = map,
+ },
};
/*
{
struct its_cmd_info info = {
.cmd_type = GET_VLPI,
- .map = map,
+ {
+ .map = map,
+ },
};
return irq_set_vcpu_affinity(irq, &info);
{
struct its_cmd_info info = {
.cmd_type = inv ? PROP_UPDATE_AND_INV_VLPI : PROP_UPDATE_VLPI,
- .config = config,
+ {
+ .config = config,
+ },
};
return irq_set_vcpu_affinity(irq, &info);
{
unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);
- write_gic_rmask(BIT(intr));
+ write_gic_rmask(intr);
gic_clear_pcpu_masks(intr);
}
static void gic_unmask_irq(struct irq_data *d)
{
- struct cpumask *affinity = irq_data_get_affinity_mask(d);
unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);
unsigned int cpu;
- write_gic_smask(BIT(intr));
+ write_gic_smask(intr);
gic_clear_pcpu_masks(intr);
- cpu = cpumask_first_and(affinity, cpu_online_mask);
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
set_bit(intr, per_cpu_ptr(pcpu_masks, cpu));
}
irq_hw_number_t hw, unsigned int cpu)
{
int intr = GIC_HWIRQ_TO_SHARED(hw);
+ struct irq_data *data;
unsigned long flags;
+ data = irq_get_irq_data(virq);
+
spin_lock_irqsave(&gic_lock, flags);
write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
gic_clear_pcpu_masks(intr);
set_bit(intr, per_cpu_ptr(pcpu_masks, cpu));
+ irq_data_update_effective_affinity(data, cpumask_of(cpu));
spin_unlock_irqrestore(&gic_lock, flags);
return 0;
/* Find the first available CPU vector. */
i = 0;
- reserved = (C_SW0 | C_SW1) >> __fls(C_SW0);
+ reserved = (C_SW0 | C_SW1) >> __ffs(C_SW0);
while (!of_property_read_u32_index(node, "mti,reserved-cpu-vectors",
i++, &cpu_vec))
reserved |= BIT(cpu_vec);
gicconfig = read_gic_config();
gic_shared_intrs = gicconfig & GIC_CONFIG_NUMINTERRUPTS;
- gic_shared_intrs >>= __fls(GIC_CONFIG_NUMINTERRUPTS);
+ gic_shared_intrs >>= __ffs(GIC_CONFIG_NUMINTERRUPTS);
gic_shared_intrs = (gic_shared_intrs + 1) * 8;
gic_vpes = gicconfig & GIC_CONFIG_PVPS;
- gic_vpes >>= __fls(GIC_CONFIG_PVPS);
+ gic_vpes >>= __ffs(GIC_CONFIG_PVPS);
gic_vpes = gic_vpes + 1;
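/*
 * Shifting by __ffs(mask) right-justifies a contiguous bitfield (the
 * FIELD_GET() pattern); __fls(mask) shifts by the field's top bit and
 * discards everything below it. Sketch with a hypothetical mask:
 *
 *   mask  = 0x0000ff00;
 *   field = (val & mask) >> __ffs(mask);	/* __ffs == 8 */
 */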
if (cpu_has_veic) {
for (i = 0; i < gic_shared_intrs; i++) {
change_gic_pol(i, GIC_POL_ACTIVE_HIGH);
change_gic_trig(i, GIC_TRIG_LEVEL);
- write_gic_rmask(BIT(i));
+ write_gic_rmask(i);
}
for (i = 0; i < gic_vpes; i++) {
isdn_net_local *lp;
struct ippp_struct *is;
int proto;
- unsigned char protobuf[4];
is = file->private_data;
if (!lp)
printk(KERN_DEBUG "isdn_ppp_write: lp == NULL\n");
else {
- /*
- * Don't reset huptimer for
- * LCP packets. (Echo requests).
- */
- if (copy_from_user(protobuf, buf, 4))
- return -EFAULT;
- proto = PPP_PROTOCOL(protobuf);
- if (proto != PPP_LCP)
- lp->huptimer = 0;
+ if (lp->isdn_device < 0 || lp->isdn_channel < 0) {
+ unsigned char protobuf[4];
+ /*
+ * Don't reset huptimer for
+ * LCP packets. (Echo requests).
+ */
+ if (copy_from_user(protobuf, buf, 4))
+ return -EFAULT;
+
+ proto = PPP_PROTOCOL(protobuf);
+ if (proto != PPP_LCP)
+ lp->huptimer = 0;
- if (lp->isdn_device < 0 || lp->isdn_channel < 0)
return 0;
+ }
if ((dev->drv[lp->isdn_device]->flags & DRV_FLAG_RUNNING) &&
lp->dialstate == 0 &&
(lp->flags & ISDN_NET_CONNECTED)) {
unsigned short hl;
struct sk_buff *skb;
+ unsigned char *cpy_buf;
/*
* we need to reserve enough space in front of
* sk_buff. old call to dev_alloc_skb only reserved
return count;
}
skb_reserve(skb, hl);
- if (copy_from_user(skb_put(skb, count), buf, count))
+ cpy_buf = skb_put(skb, count);
+ if (copy_from_user(cpy_buf, buf, count))
{
kfree_skb(skb);
return -EFAULT;
}
+
+ /*
+ * Don't reset huptimer for
+ * LCP packets. (Echo requests).
+ */
+ proto = PPP_PROTOCOL(cpy_buf);
+ if (proto != PPP_LCP)
+ lp->huptimer = 0;
+
if (is->debug & 0x40) {
printk(KERN_DEBUG "ppp xmit: len %d\n", (int) skb->len);
isdn_ppp_frame_log("xmit", skb->data, skb->len, 32, is->unit, lp->ppp_slot);
#define AS_PEAK_mA_TO_REG(a) \
((min_t(u32, AS_PEAK_mA_MAX, a) - 1250) / 250)
+/* LED numbers for Devicetree */
+#define AS_LED_FLASH 0
+#define AS_LED_INDICATOR 1
+
enum as_mode {
AS_MODE_EXT_TORCH = 0 << AS_CONTROL_MODE_SETTING_SHIFT,
AS_MODE_INDICATOR = 1 << AS_CONTROL_MODE_SETTING_SHIFT,
struct device_node *node)
{
struct as3645a_config *cfg = &flash->cfg;
+ struct device_node *child;
const char *name;
int rval;
- flash->flash_node = of_get_child_by_name(node, "flash");
+ for_each_child_of_node(node, child) {
+ u32 id = 0;
+
+ of_property_read_u32(child, "reg", &id);
+
+ switch (id) {
+ case AS_LED_FLASH:
+ flash->flash_node = of_node_get(child);
+ break;
+ case AS_LED_INDICATOR:
+ flash->indicator_node = of_node_get(child);
+ break;
+ default:
+ dev_warn(&flash->client->dev,
+ "unknown LED %u encountered, ignoring\n", id);
+ break;
+ }
+ }
+
if (!flash->flash_node) {
dev_err(&flash->client->dev, "can't find flash node\n");
return -ENODEV;
of_property_read_u32(flash->flash_node, "voltage-reference",
&cfg->voltage_reference);
- of_property_read_u32(flash->flash_node, "peak-current-limit",
+ of_property_read_u32(flash->flash_node, "ams,input-max-microamp",
&cfg->peak);
cfg->peak = AS_PEAK_mA_TO_REG(cfg->peak);
- flash->indicator_node = of_get_child_by_name(node, "indicator");
if (!flash->indicator_node) {
dev_warn(&flash->client->dev,
"can't find indicator node\n");
as3645a_set_control(flash, AS_MODE_EXT_TORCH, false);
v4l2_flash_release(flash->vf);
+ v4l2_flash_release(flash->vfind);
led_classdev_flash_unregister(&flash->fled);
led_classdev_unregister(&flash->iled_cdev);
void __closure_wake_up(struct closure_waitlist *wait_list)
{
struct llist_node *list;
- struct closure *cl;
+ struct closure *cl, *t;
struct llist_node *reverse = NULL;
list = llist_del_all(&wait_list->list);
reverse = llist_reverse_order(list);
/* Then do the wakeups */
- llist_for_each_entry(cl, reverse, list) {
+ llist_for_each_entry_safe(cl, t, reverse, list) {
closure_set_waiting(cl, 0);
closure_sub(cl, CLOSURE_WAITING + 1);
}
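The switch to the _safe iterator matters because closure_sub() can drop the last reference and free cl mid-walk. The general shape of the idiom, with hypothetical names:

	struct foo *pos, *n;

	llist_for_each_entry_safe(pos, n, head, member) {
		/* pos may be freed by the body; n was fetched first */
		consume_and_maybe_free(pos);
	}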
extern atomic_t dm_global_event_nr;
extern wait_queue_head_t dm_global_eventq;
+void dm_issue_global_event(void);
#endif
kfree(cipher_api);
return ret;
}
+ kfree(cipher_api);
return 0;
bad_mem:
ti->error = "Invalid feature value for sector_size";
return -EINVAL;
}
+ if (ti->len & ((cc->sector_size >> SECTOR_SHIFT) - 1)) {
+ ti->error = "Device size is not multiple of sector_size feature";
+ return -EINVAL;
+ }
cc->sector_shift = __ffs(cc->sector_size) - SECTOR_SHIFT;
} else if (!strcasecmp(opt_string, "iv_large_sectors"))
set_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
* Round up the ptr to an 8-byte boundary.
*/
#define ALIGN_MASK 7
+static inline size_t align_val(size_t val)
+{
+ return (val + ALIGN_MASK) & ~ALIGN_MASK;
+}
static inline void *align_ptr(void *ptr)
{
- return (void *) (((size_t) (ptr + ALIGN_MASK)) & ~ALIGN_MASK);
+ return (void *)align_val((size_t)ptr);
}
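A quick worked check of align_val() with ALIGN_MASK == 7: align_val(1) == 8, align_val(8) == 8, and align_val(13) == 16, i.e. values round up to the next 8-byte boundary.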
/*
struct hash_cell *hc;
size_t len, needed = 0;
struct gendisk *disk;
- struct dm_name_list *nl, *old_nl = NULL;
+ struct dm_name_list *orig_nl, *nl, *old_nl = NULL;
uint32_t *event_nr;
down_write(&_hash_lock);
*/
for (i = 0; i < NUM_BUCKETS; i++) {
list_for_each_entry (hc, _name_buckets + i, name_list) {
- needed += sizeof(struct dm_name_list);
- needed += strlen(hc->name) + 1;
- needed += ALIGN_MASK;
- needed += (sizeof(uint32_t) + ALIGN_MASK) & ~ALIGN_MASK;
+ needed += align_val(offsetof(struct dm_name_list, name) + strlen(hc->name) + 1);
+ needed += align_val(sizeof(uint32_t));
}
}
/*
* Grab our output buffer.
*/
- nl = get_result_buffer(param, param_size, &len);
+ nl = orig_nl = get_result_buffer(param, param_size, &len);
if (len < needed) {
param->flags |= DM_BUFFER_FULL_FLAG;
goto out;
strcpy(nl->name, hc->name);
old_nl = nl;
- event_nr = align_ptr(((void *) (nl + 1)) + strlen(hc->name) + 1);
+ event_nr = align_ptr(nl->name + strlen(hc->name) + 1);
*event_nr = dm_get_event_nr(hc->md);
nl = align_ptr(event_nr + 1);
}
}
+ /*
+ * If mismatch happens, security may be compromised due to buffer
+ * overflow, so it's better to crash.
+ */
+ BUG_ON((char *)nl - (char *)orig_nl != needed);
out:
up_write(&_hash_lock);
* which has a variable size, is not used by the function processing
* the ioctl.
*/
-#define IOCTL_FLAGS_NO_PARAMS 1
+#define IOCTL_FLAGS_NO_PARAMS 1
+#define IOCTL_FLAGS_ISSUE_GLOBAL_EVENT 2
/*-----------------------------------------------------------------
* Implementation of open/close/ioctl on the special char
ioctl_fn fn;
} _ioctls[] = {
{DM_VERSION_CMD, 0, NULL}, /* version is dealt with elsewhere */
- {DM_REMOVE_ALL_CMD, IOCTL_FLAGS_NO_PARAMS, remove_all},
+ {DM_REMOVE_ALL_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, remove_all},
{DM_LIST_DEVICES_CMD, 0, list_devices},
- {DM_DEV_CREATE_CMD, IOCTL_FLAGS_NO_PARAMS, dev_create},
- {DM_DEV_REMOVE_CMD, IOCTL_FLAGS_NO_PARAMS, dev_remove},
- {DM_DEV_RENAME_CMD, 0, dev_rename},
+ {DM_DEV_CREATE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_create},
+ {DM_DEV_REMOVE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_remove},
+ {DM_DEV_RENAME_CMD, IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_rename},
{DM_DEV_SUSPEND_CMD, IOCTL_FLAGS_NO_PARAMS, dev_suspend},
{DM_DEV_STATUS_CMD, IOCTL_FLAGS_NO_PARAMS, dev_status},
{DM_DEV_WAIT_CMD, 0, dev_wait},
unlikely(ioctl_flags & IOCTL_FLAGS_NO_PARAMS))
DMERR("ioctl %d tried to output some data but has IOCTL_FLAGS_NO_PARAMS set", cmd);
+ if (!r && ioctl_flags & IOCTL_FLAGS_ISSUE_GLOBAL_EVENT)
+ dm_issue_global_event();
+
/*
* Copy the results back to userland.
*/
if (unlikely(bio_end_sector(bio) > mddev->array_sectors))
return DM_MAPIO_REQUEUE;
- mddev->pers->make_request(mddev, bio);
+ md_handle_request(mddev, bio);
return DM_MAPIO_SUBMITTED;
}
static sector_t rs_get_progress(struct raid_set *rs,
sector_t resync_max_sectors, bool *array_in_sync)
{
- sector_t r, recovery_cp, curr_resync_completed;
+ sector_t r, curr_resync_completed;
struct mddev *mddev = &rs->md;
curr_resync_completed = mddev->curr_resync_completed ?: mddev->recovery_cp;
- recovery_cp = mddev->recovery_cp;
*array_in_sync = false;
if (rs_is_raid0(rs)) {
} else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
r = curr_resync_completed;
else
- r = recovery_cp;
+ r = mddev->recovery_cp;
- if (r == MaxSector) {
+ if ((r == MaxSector) ||
+ (test_bit(MD_RECOVERY_DONE, &mddev->recovery) &&
+ (mddev->curr_resync_completed == resync_max_sectors))) {
/*
* Sync complete.
*/
static struct target_type raid_target = {
.name = "raid",
- .version = {1, 12, 1},
+ .version = {1, 13, 0},
.module = THIS_MODULE,
.ctr = raid_ctr,
.dtr = raid_dtr,
atomic_t dm_global_event_nr = ATOMIC_INIT(0);
DECLARE_WAIT_QUEUE_HEAD(dm_global_eventq);
+void dm_issue_global_event(void)
+{
+ atomic_inc(&dm_global_event_nr);
+ wake_up(&dm_global_eventq);
+}
+
/*
* One of these is allocated per bio.
*/
dm_send_uevents(&uevents, &disk_to_dev(md->disk)->kobj);
atomic_inc(&md->event_nr);
- atomic_inc(&dm_global_event_nr);
wake_up(&md->eventq);
- wake_up(&dm_global_eventq);
+ dm_issue_global_event();
}
/*
}
map = __bind(md, table, &limits);
+ dm_issue_global_event();
out:
mutex_unlock(&md->suspend_lock);
* call has finished, the bio has been linked into some internal structure
* and so is visible to ->quiesce(), so we don't need the refcount any more.
*/
+void md_handle_request(struct mddev *mddev, struct bio *bio)
+{
+check_suspended:
+ rcu_read_lock();
+ if (mddev->suspended) {
+ DEFINE_WAIT(__wait);
+ for (;;) {
+ prepare_to_wait(&mddev->sb_wait, &__wait,
+ TASK_UNINTERRUPTIBLE);
+ if (!mddev->suspended)
+ break;
+ rcu_read_unlock();
+ schedule();
+ rcu_read_lock();
+ }
+ finish_wait(&mddev->sb_wait, &__wait);
+ }
+ atomic_inc(&mddev->active_io);
+ rcu_read_unlock();
+
+ if (!mddev->pers->make_request(mddev, bio)) {
+ atomic_dec(&mddev->active_io);
+ wake_up(&mddev->sb_wait);
+ goto check_suspended;
+ }
+
+ if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
+ wake_up(&mddev->sb_wait);
+}
+EXPORT_SYMBOL(md_handle_request);
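The active_io/suspended handshake above pairs with mddev_suspend(); reduced to its core (a simplified sketch, not a verbatim copy of md.c):

	/* suspend side, simplified */
	mddev->suspended = 1;
	synchronize_rcu();		/* pairs with rcu_read_lock() above */
	wait_event(mddev->sb_wait,
		   atomic_read(&mddev->active_io) == 0);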
+
static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio)
{
const int rw = bio_data_dir(bio);
bio_endio(bio);
return BLK_QC_T_NONE;
}
-check_suspended:
- rcu_read_lock();
- if (mddev->suspended) {
- DEFINE_WAIT(__wait);
- for (;;) {
- prepare_to_wait(&mddev->sb_wait, &__wait,
- TASK_UNINTERRUPTIBLE);
- if (!mddev->suspended)
- break;
- rcu_read_unlock();
- schedule();
- rcu_read_lock();
- }
- finish_wait(&mddev->sb_wait, &__wait);
- }
- atomic_inc(&mddev->active_io);
- rcu_read_unlock();
/*
* save the sectors now since our bio can
sectors = bio_sectors(bio);
/* bio could be mergeable after passing to underlayer */
bio->bi_opf &= ~REQ_NOMERGE;
- if (!mddev->pers->make_request(mddev, bio)) {
- atomic_dec(&mddev->active_io);
- wake_up(&mddev->sb_wait);
- goto check_suspended;
- }
+
+ md_handle_request(mddev, bio);
cpu = part_stat_lock();
part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
part_stat_unlock();
- if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
- wake_up(&mddev->sb_wait);
-
return BLK_QC_T_NONE;
}
struct mddev *mddev = container_of(ws, struct mddev, flush_work);
struct bio *bio = mddev->flush_bio;
+ /*
+ * flush_bio must be reset before calling into md_handle_request to
+ * avoid a deadlock: bios that already passed the suspend check in
+ * md_handle_request could wait for this flush to finish, while the
+ * md_handle_request call below could in turn wait for those bios
+ * because of that same suspend check
+ */
+ mddev->flush_bio = NULL;
+ wake_up(&mddev->sb_wait);
+
if (bio->bi_iter.bi_size == 0)
/* an empty barrier - all done */
bio_endio(bio);
else {
bio->bi_opf &= ~REQ_PREFLUSH;
- mddev->pers->make_request(mddev, bio);
+ md_handle_request(mddev, bio);
}
-
- mddev->flush_bio = NULL;
- wake_up(&mddev->sb_wait);
}
void md_flush_request(struct mddev *mddev, struct bio *bio)
extern int md_rdev_init(struct md_rdev *rdev);
extern void md_rdev_clear(struct md_rdev *rdev);
+extern void md_handle_request(struct mddev *mddev, struct bio *bio);
extern void mddev_suspend(struct mddev *mddev);
extern void mddev_resume(struct mddev *mddev);
extern struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,
spin_unlock(&head->batch_head->batch_lock);
goto unlock_out;
}
+ /*
+ * We must assign batch_head of this stripe while holding
+ * batch_lock; otherwise clear_batch_ready of the batch head
+ * stripe could clear this stripe's BATCH_READY bit before
+ * stripe->batch_head is assigned, which would confuse
+ * clear_batch_ready for this stripe
+ */
+ sh->batch_head = head->batch_head;
/*
* at this point, head's BATCH_READY could be cleared, but we
*/
list_add(&sh->batch_list, &head->batch_list);
spin_unlock(&head->batch_head->batch_lock);
-
- sh->batch_head = head->batch_head;
} else {
head->batch_head = head;
sh->batch_head = head->batch_head;
set_mask_bits(&sh->state, ~(STRIPE_EXPAND_SYNC_FLAGS |
(1 << STRIPE_PREREAD_ACTIVE) |
- (1 << STRIPE_DEGRADED)),
+ (1 << STRIPE_DEGRADED) |
+ (1 << STRIPE_ON_UNPLUG_LIST)),
head_sh->state & (1 << STRIPE_INSYNC));
sh->check_state = head_sh->check_state;
raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
{
struct r5conf *conf;
- unsigned long new;
+ unsigned int new;
int err;
struct r5worker_group *new_groups, *old_groups;
int group_cnt, worker_cnt_per_group;
if (len >= PAGE_SIZE)
return -EINVAL;
- if (kstrtoul(page, 10, &new))
+ if (kstrtouint(page, 10, &new))
+ return -EINVAL;
+ /* 8192 should be big enough */
+ if (new > 8192)
return -EINVAL;
err = mddev_lock(mddev);
module_exit(ir_sharp_decode_exit);
MODULE_LICENSE("GPL");
-MODULE_AUTHOR("James Hogan <james.hogan@imgtec.com>");
+MODULE_AUTHOR("James Hogan <jhogan@kernel.org>");
MODULE_DESCRIPTION("Sharp IR protocol decoder");
obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o
obj-$(CONFIG_ATMEL_SDRAMC) += atmel-sdramc.o
obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o
+obj-$(CONFIG_ARCH_BRCMSTB) += brcmstb_dpfe.o
obj-$(CONFIG_TI_AEMIF) += ti-aemif.o
obj-$(CONFIG_TI_EMIF) += emif.o
obj-$(CONFIG_OMAP_GPMC) += omap-gpmc.o
--- /dev/null
+/*
+ * DDR PHY Front End (DPFE) driver for Broadcom set top box SoCs
+ *
+ * Copyright (c) 2017 Broadcom
+ *
+ * Released under the GPLv2 only.
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+/*
+ * This driver provides access to the DPFE interface of Broadcom STB SoCs.
+ * The firmware running on the DCPU inside the DDR PHY can provide current
+ * information about the system's RAM, for instance the DRAM refresh rate.
+ * This can be used as an indirect indicator for the DRAM's temperature.
+ * Slower refresh rate means cooler RAM, higher refresh rate means hotter
+ * RAM.
+ *
+ * Throughout the driver, we use readl_relaxed() and writel_relaxed(), which
+ * already contain the appropriate le32_to_cpu()/cpu_to_le32() calls.
+ *
+ * Note regarding the loading of the firmware image: we use be32_to_cpu()
+ * and le32_to_cpu(), so we can support the following four cases:
+ * - LE kernel + LE firmware image (the most common case)
+ * - LE kernel + BE firmware image
+ * - BE kernel + LE firmware image
+ * - BE kernel + BE firmware image
+ *
+ * The DCPU always runs in big endian mode. The firmware image, however, can
+ * be in either format. Also, communication between host CPU and DCPU is
+ * always in little endian.
+ */
+
+#include <linux/delay.h>
+#include <linux/firmware.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+
+#define DRVNAME "brcmstb-dpfe"
+#define FIRMWARE_NAME "dpfe.bin"
+
+/* DCPU register offsets */
+#define REG_DCPU_RESET 0x0
+#define REG_TO_DCPU_MBOX 0x10
+#define REG_TO_HOST_MBOX 0x14
+
+/* Message RAM */
+#define DCPU_MSG_RAM(x) (0x100 + (x) * sizeof(u32))
+
+/* DRAM Info Offsets & Masks */
+#define DRAM_INFO_INTERVAL 0x0
+#define DRAM_INFO_MR4 0x4
+#define DRAM_INFO_ERROR 0x8
+#define DRAM_INFO_MR4_MASK 0xff
+
+/* DRAM MR4 Offsets & Masks */
+#define DRAM_MR4_REFRESH 0x0 /* Refresh rate */
+#define DRAM_MR4_SR_ABORT 0x3 /* Self Refresh Abort */
+#define DRAM_MR4_PPRE 0x4 /* Post-package repair entry/exit */
+#define DRAM_MR4_TH_OFFS 0x5 /* Thermal Offset; vendor specific */
+#define DRAM_MR4_TUF 0x7 /* Temperature Update Flag */
+
+#define DRAM_MR4_REFRESH_MASK 0x7
+#define DRAM_MR4_SR_ABORT_MASK 0x1
+#define DRAM_MR4_PPRE_MASK 0x1
+#define DRAM_MR4_TH_OFFS_MASK 0x3
+#define DRAM_MR4_TUF_MASK 0x1
+
+/* DRAM Vendor Offsets & Masks */
+#define DRAM_VENDOR_MR5 0x0
+#define DRAM_VENDOR_MR6 0x4
+#define DRAM_VENDOR_MR7 0x8
+#define DRAM_VENDOR_MR8 0xc
+#define DRAM_VENDOR_ERROR 0x10
+#define DRAM_VENDOR_MASK 0xff
+
+/* Reset register bits & masks */
+#define DCPU_RESET_SHIFT 0x0
+#define DCPU_RESET_MASK 0x1
+#define DCPU_CLK_DISABLE_SHIFT 0x2
+
+/* DCPU return codes */
+#define DCPU_RET_ERROR_BIT BIT(31)
+#define DCPU_RET_SUCCESS 0x1
+#define DCPU_RET_ERR_HEADER (DCPU_RET_ERROR_BIT | BIT(0))
+#define DCPU_RET_ERR_INVAL (DCPU_RET_ERROR_BIT | BIT(1))
+#define DCPU_RET_ERR_CHKSUM (DCPU_RET_ERROR_BIT | BIT(2))
+#define DCPU_RET_ERR_COMMAND (DCPU_RET_ERROR_BIT | BIT(3))
+/* This error code is not firmware-defined and is only used in the driver. */
+#define DCPU_RET_ERR_TIMEDOUT (DCPU_RET_ERROR_BIT | BIT(4))
+
+/* Firmware magic */
+#define DPFE_BE_MAGIC 0xfe1010fe
+#define DPFE_LE_MAGIC 0xfe0101fe
+
+/* Error codes */
+#define ERR_INVALID_MAGIC -1
+#define ERR_INVALID_SIZE -2
+#define ERR_INVALID_CHKSUM -3
+
+/* Message types */
+#define DPFE_MSG_TYPE_COMMAND 1
+#define DPFE_MSG_TYPE_RESPONSE 2
+
+#define DELAY_LOOP_MAX 200000
+
+enum dpfe_msg_fields {
+ MSG_HEADER,
+ MSG_COMMAND,
+ MSG_ARG_COUNT,
+ MSG_ARG0,
+ MSG_CHKSUM,
+ MSG_FIELD_MAX /* Last entry */
+};
+
+enum dpfe_commands {
+ DPFE_CMD_GET_INFO,
+ DPFE_CMD_GET_REFRESH,
+ DPFE_CMD_GET_VENDOR,
+ DPFE_CMD_MAX /* Last entry */
+};
+
+struct dpfe_msg {
+ u32 header;
+ u32 command;
+ u32 arg_count;
+ u32 arg0;
+ u32 chksum; /* This is the sum of all other entries. */
+};
+
+/*
+ * Format of the binary firmware file:
+ *
+ * entry
+ * 0 header
+ * value: 0xfe0101fe <== little endian
+ * 0xfe1010fe <== big endian
+ * 1 sequence:
+ * [31:16] total segments on this build
+ * [15:0] this segment sequence.
+ * 2 FW version
+ * 3 IMEM byte size
+ * 4 DMEM byte size
+ * IMEM
+ * DMEM
+ * last checksum ==> sum of everything
+ */
+struct dpfe_firmware_header {
+ u32 magic;
+ u32 sequence;
+ u32 version;
+ u32 imem_size;
+ u32 dmem_size;
+};
+
+/* Things we only need during initialization. */
+struct init_data {
+ unsigned int dmem_len;
+ unsigned int imem_len;
+ unsigned int chksum;
+ bool is_big_endian;
+};
+
+/* Things we need for as long as we are active. */
+struct private_data {
+ void __iomem *regs;
+ void __iomem *dmem;
+ void __iomem *imem;
+ struct device *dev;
+ unsigned int index;
+ struct mutex lock;
+};
+
+static const char *error_text[] = {
+ "Success", "Header code incorrect", "Unknown command or argument",
+ "Incorrect checksum", "Malformed command", "Timed out",
+};
+
+/* List of supported firmware commands */
+static const u32 dpfe_commands[DPFE_CMD_MAX][MSG_FIELD_MAX] = {
+ [DPFE_CMD_GET_INFO] = {
+ [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
+ [MSG_COMMAND] = 1,
+ [MSG_ARG_COUNT] = 1,
+ [MSG_ARG0] = 1,
+ [MSG_CHKSUM] = 4,
+ },
+ [DPFE_CMD_GET_REFRESH] = {
+ [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
+ [MSG_COMMAND] = 2,
+ [MSG_ARG_COUNT] = 1,
+ [MSG_ARG0] = 1,
+ [MSG_CHKSUM] = 5,
+ },
+ [DPFE_CMD_GET_VENDOR] = {
+ [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
+ [MSG_COMMAND] = 2,
+ [MSG_ARG_COUNT] = 1,
+ [MSG_ARG0] = 2,
+ [MSG_CHKSUM] = 6,
+ },
+};
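/*
 * Sanity check: each MSG_CHKSUM above is the sum of the other four
 * message words, matching get_msg_chksum() below --
 * GET_INFO: 1 + 1 + 1 + 1 = 4, GET_REFRESH: 1 + 2 + 1 + 1 = 5,
 * GET_VENDOR: 1 + 2 + 1 + 2 = 6.
 */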
+
+static bool is_dcpu_enabled(void __iomem *regs)
+{
+ u32 val;
+
+ val = readl_relaxed(regs + REG_DCPU_RESET);
+
+ return !(val & DCPU_RESET_MASK);
+}
+
+static void __disable_dcpu(void __iomem *regs)
+{
+ u32 val;
+
+ if (!is_dcpu_enabled(regs))
+ return;
+
+ /* Put DCPU in reset if it's running. */
+ val = readl_relaxed(regs + REG_DCPU_RESET);
+ val |= (1 << DCPU_RESET_SHIFT);
+ writel_relaxed(val, regs + REG_DCPU_RESET);
+}
+
+static void __enable_dcpu(void __iomem *regs)
+{
+ u32 val;
+
+ /* Clear mailbox registers. */
+ writel_relaxed(0, regs + REG_TO_DCPU_MBOX);
+ writel_relaxed(0, regs + REG_TO_HOST_MBOX);
+
+ /* Disable DCPU clock gating */
+ val = readl_relaxed(regs + REG_DCPU_RESET);
+ val &= ~(1 << DCPU_CLK_DISABLE_SHIFT);
+ writel_relaxed(val, regs + REG_DCPU_RESET);
+
+ /* Take DCPU out of reset */
+ val = readl_relaxed(regs + REG_DCPU_RESET);
+ val &= ~(1 << DCPU_RESET_SHIFT);
+ writel_relaxed(val, regs + REG_DCPU_RESET);
+}
+
+static unsigned int get_msg_chksum(const u32 msg[])
+{
+ unsigned int sum = 0;
+ unsigned int i;
+
+ /* Don't include the last field in the checksum. */
+ for (i = 0; i < MSG_FIELD_MAX - 1; i++)
+ sum += msg[i];
+
+ return sum;
+}
+
+static int __send_command(struct private_data *priv, unsigned int cmd,
+ u32 result[])
+{
+ const u32 *msg = dpfe_commands[cmd];
+ void __iomem *regs = priv->regs;
+ unsigned int i, chksum;
+ int ret = 0;
+ u32 resp;
+
+ if (cmd >= DPFE_CMD_MAX)
+ return -1;
+
+ mutex_lock(&priv->lock);
+
+ /* Write command and arguments to message area */
+ for (i = 0; i < MSG_FIELD_MAX; i++)
+ writel_relaxed(msg[i], regs + DCPU_MSG_RAM(i));
+
+ /* Tell DCPU there is a command waiting */
+ writel_relaxed(1, regs + REG_TO_DCPU_MBOX);
+
+ /* Wait for DCPU to process the command */
+ for (i = 0; i < DELAY_LOOP_MAX; i++) {
+ /* Read response code */
+ resp = readl_relaxed(regs + REG_TO_HOST_MBOX);
+ if (resp > 0)
+ break;
+ udelay(5);
+ }
+
+ if (i == DELAY_LOOP_MAX) {
+ resp = (DCPU_RET_ERR_TIMEDOUT & ~DCPU_RET_ERROR_BIT);
+ ret = -ffs(resp);
+ } else {
+ /* Read response data */
+ for (i = 0; i < MSG_FIELD_MAX; i++)
+ result[i] = readl_relaxed(regs + DCPU_MSG_RAM(i));
+ }
+
+ /* Tell DCPU we are done */
+ writel_relaxed(0, regs + REG_TO_HOST_MBOX);
+
+ mutex_unlock(&priv->lock);
+
+ if (ret)
+ return ret;
+
+ /* Verify response */
+ chksum = get_msg_chksum(result);
+ if (chksum != result[MSG_CHKSUM])
+ resp = DCPU_RET_ERR_CHKSUM;
+
+ if (resp != DCPU_RET_SUCCESS) {
+ resp &= ~DCPU_RET_ERROR_BIT;
+ ret = -ffs(resp);
+ }
+
+ return ret;
+}
+
+/* Ensure that the firmware file loaded meets all the requirements. */
+static int __verify_firmware(struct init_data *init,
+ const struct firmware *fw)
+{
+ const struct dpfe_firmware_header *header = (void *)fw->data;
+ unsigned int dmem_size, imem_size, total_size;
+ bool is_big_endian = false;
+ const u32 *chksum_ptr;
+
+ if (header->magic == DPFE_BE_MAGIC)
+ is_big_endian = true;
+ else if (header->magic != DPFE_LE_MAGIC)
+ return ERR_INVALID_MAGIC;
+
+ if (is_big_endian) {
+ dmem_size = be32_to_cpu(header->dmem_size);
+ imem_size = be32_to_cpu(header->imem_size);
+ } else {
+ dmem_size = le32_to_cpu(header->dmem_size);
+ imem_size = le32_to_cpu(header->imem_size);
+ }
+
+ /* Data and instruction sections are 32 bit words. */
+ if ((dmem_size % sizeof(u32)) != 0 || (imem_size % sizeof(u32)) != 0)
+ return ERR_INVALID_SIZE;
+
+ /*
+ * The header + the data section + the instruction section + the
+ * checksum must be equal to the total firmware size.
+ */
+ total_size = dmem_size + imem_size + sizeof(*header) +
+ sizeof(*chksum_ptr);
+ if (total_size != fw->size)
+ return ERR_INVALID_SIZE;
+
+ /* The checksum comes at the very end. */
+ chksum_ptr = (void *)fw->data + sizeof(*header) + dmem_size + imem_size;
+
+ init->is_big_endian = is_big_endian;
+ init->dmem_len = dmem_size;
+ init->imem_len = imem_size;
+ init->chksum = (is_big_endian)
+ ? be32_to_cpu(*chksum_ptr) : le32_to_cpu(*chksum_ptr);
+
+ return 0;
+}
+
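+/*
+ * Firmware image layout, as parsed above and consumed by
+ * brcmstb_dpfe_download_firmware() below:
+ *
+ *   +--------+------+------+----------+
+ *   | header | IMEM | DMEM | checksum |
+ *   +--------+------+------+----------+
+ */
+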
+/* Verify checksum by reading back the firmware from co-processor RAM. */
+static int __verify_fw_checksum(struct init_data *init,
+ struct private_data *priv,
+ const struct dpfe_firmware_header *header,
+ u32 checksum)
+{
+ u32 magic, sequence, version, sum;
+ u32 __iomem *dmem = priv->dmem;
+ u32 __iomem *imem = priv->imem;
+ unsigned int i;
+
+ if (init->is_big_endian) {
+ magic = be32_to_cpu(header->magic);
+ sequence = be32_to_cpu(header->sequence);
+ version = be32_to_cpu(header->version);
+ } else {
+ magic = le32_to_cpu(header->magic);
+ sequence = le32_to_cpu(header->sequence);
+ version = le32_to_cpu(header->version);
+ }
+
+ sum = magic + sequence + version + init->dmem_len + init->imem_len;
+
+ for (i = 0; i < init->dmem_len / sizeof(u32); i++)
+ sum += readl_relaxed(dmem + i);
+
+ for (i = 0; i < init->imem_len / sizeof(u32); i++)
+ sum += readl_relaxed(imem + i);
+
+ return (sum == checksum) ? 0 : -1;
+}
+
+static int __write_firmware(u32 __iomem *mem, const u32 *fw,
+ unsigned int size, bool is_big_endian)
+{
+ unsigned int i;
+
+ /* Convert size to 32-bit words. */
+ size /= sizeof(u32);
+
+ /* It is recommended to clear the firmware area first. */
+ for (i = 0; i < size; i++)
+ writel_relaxed(0, mem + i);
+
+ /* Now copy it. */
+ if (is_big_endian) {
+ for (i = 0; i < size; i++)
+ writel_relaxed(be32_to_cpu(fw[i]), mem + i);
+ } else {
+ for (i = 0; i < size; i++)
+ writel_relaxed(le32_to_cpu(fw[i]), mem + i);
+ }
+
+ return 0;
+}
+
+static int brcmstb_dpfe_download_firmware(struct platform_device *pdev,
+ struct init_data *init)
+{
+ const struct dpfe_firmware_header *header;
+ unsigned int dmem_size, imem_size;
+ struct device *dev = &pdev->dev;
+ bool is_big_endian = false;
+ struct private_data *priv;
+ const struct firmware *fw;
+ const u32 *dmem, *imem;
+ const void *fw_blob;
+ int ret;
+
+ priv = platform_get_drvdata(pdev);
+
+ /*
+ * Skip downloading the firmware if the DCPU is already running and
+ * responding to commands.
+ */
+ if (is_dcpu_enabled(priv->regs)) {
+ u32 response[MSG_FIELD_MAX];
+
+ ret = __send_command(priv, DPFE_CMD_GET_INFO, response);
+ if (!ret)
+ return 0;
+ }
+
+ ret = request_firmware(&fw, FIRMWARE_NAME, dev);
+ /* request_firmware() prints its own error messages. */
+ if (ret)
+ return ret;
+
+ ret = __verify_firmware(init, fw);
+ if (ret)
+ return -EFAULT;
+
+ __disable_dcpu(priv->regs);
+
+ is_big_endian = init->is_big_endian;
+ dmem_size = init->dmem_len;
+ imem_size = init->imem_len;
+
+ /* At the beginning of the firmware blob is a header. */
+ header = (struct dpfe_firmware_header *)fw->data;
+ /* Void pointer to the beginning of the actual firmware. */
+ fw_blob = fw->data + sizeof(*header);
+ /* IMEM comes right after the header. */
+ imem = fw_blob;
+ /* DMEM follows after IMEM. */
+ dmem = fw_blob + imem_size;
+
+ ret = __write_firmware(priv->dmem, dmem, dmem_size, is_big_endian);
+ if (ret)
+ return ret;
+ ret = __write_firmware(priv->imem, imem, imem_size, is_big_endian);
+ if (ret)
+ return ret;
+
+ ret = __verify_fw_checksum(init, priv, header, init->chksum);
+ if (ret)
+ return ret;
+
+ __enable_dcpu(priv->regs);
+
+ return 0;
+}
+
+static ssize_t generic_show(unsigned int command, u32 response[],
+ struct device *dev, char *buf)
+{
+ struct private_data *priv;
+ int ret;
+
+ priv = dev_get_drvdata(dev);
+ if (!priv)
+ return sprintf(buf, "ERROR: driver private data not set\n");
+
+ ret = __send_command(priv, command, response);
+ if (ret < 0)
+ return sprintf(buf, "ERROR: %s\n", error_text[-ret]);
+
+ return 0;
+}
+
+static ssize_t show_info(struct device *dev, struct device_attribute *devattr,
+ char *buf)
+{
+ u32 response[MSG_FIELD_MAX];
+ unsigned int info;
+ int ret;
+
+ ret = generic_show(DPFE_CMD_GET_INFO, response, dev, buf);
+ if (ret)
+ return ret;
+
+ info = response[MSG_ARG0];
+
+ return sprintf(buf, "%u.%u.%u.%u\n",
+ (info >> 24) & 0xff,
+ (info >> 16) & 0xff,
+ (info >> 8) & 0xff,
+ info & 0xff);
+}
+
+static ssize_t show_refresh(struct device *dev,
+ struct device_attribute *devattr, char *buf)
+{
+ u32 response[MSG_FIELD_MAX];
+ void __iomem *info;
+ struct private_data *priv;
+ unsigned int offset;
+ u8 refresh, sr_abort, ppre, thermal_offs, tuf;
+ u32 mr4;
+ int ret;
+
+ ret = generic_show(DPFE_CMD_GET_REFRESH, response, dev, buf);
+ if (ret)
+ return ret;
+
+ priv = dev_get_drvdata(dev);
+ offset = response[MSG_ARG0];
+ info = priv->dmem + offset;
+
+ mr4 = readl_relaxed(info + DRAM_INFO_MR4) & DRAM_INFO_MR4_MASK;
+
+ refresh = (mr4 >> DRAM_MR4_REFRESH) & DRAM_MR4_REFRESH_MASK;
+ sr_abort = (mr4 >> DRAM_MR4_SR_ABORT) & DRAM_MR4_SR_ABORT_MASK;
+ ppre = (mr4 >> DRAM_MR4_PPRE) & DRAM_MR4_PPRE_MASK;
+ thermal_offs = (mr4 >> DRAM_MR4_TH_OFFS) & DRAM_MR4_TH_OFFS_MASK;
+ tuf = (mr4 >> DRAM_MR4_TUF) & DRAM_MR4_TUF_MASK;
+
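+ /* Output fields, in order: interval, refresh, sr_abort, ppre,
+ * thermal offset, tuf, error.
+ */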
+ return sprintf(buf, "%#x %#x %#x %#x %#x %#x %#x\n",
+ readl_relaxed(info + DRAM_INFO_INTERVAL),
+ refresh, sr_abort, ppre, thermal_offs, tuf,
+ readl_relaxed(info + DRAM_INFO_ERROR));
+}
+
+static ssize_t store_refresh(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ u32 response[MSG_FIELD_MAX];
+ struct private_data *priv;
+ void __iomem *info;
+ unsigned int offset;
+ unsigned long val;
+ int ret;
+
+ if (kstrtoul(buf, 0, &val) < 0)
+ return -EINVAL;
+
+ priv = dev_get_drvdata(dev);
+
+ ret = __send_command(priv, DPFE_CMD_GET_REFRESH, response);
+ if (ret)
+ return ret;
+
+ offset = response[MSG_ARG0];
+ info = priv->dmem + offset;
+ writel_relaxed(val, info + DRAM_INFO_INTERVAL);
+
+ return count;
+}
+
+static ssize_t show_vendor(struct device *dev, struct device_attribute *devattr,
+ char *buf)
+{
+ u32 response[MSG_FIELD_MAX];
+ struct private_data *priv;
+ void __iomem *info;
+ unsigned int offset;
+ int ret;
+
+ ret = generic_show(DPFE_CMD_GET_VENDOR, response, dev, buf);
+ if (ret)
+ return ret;
+
+ offset = response[MSG_ARG0];
+ priv = dev_get_drvdata(dev);
+ info = priv->dmem + offset;
+
+ return sprintf(buf, "%#x %#x %#x %#x %#x\n",
+ readl_relaxed(info + DRAM_VENDOR_MR5) & DRAM_VENDOR_MASK,
+ readl_relaxed(info + DRAM_VENDOR_MR6) & DRAM_VENDOR_MASK,
+ readl_relaxed(info + DRAM_VENDOR_MR7) & DRAM_VENDOR_MASK,
+ readl_relaxed(info + DRAM_VENDOR_MR8) & DRAM_VENDOR_MASK,
+ readl_relaxed(info + DRAM_VENDOR_ERROR));
+}
+
+static int brcmstb_dpfe_resume(struct platform_device *pdev)
+{
+ struct init_data init;
+
+ return brcmstb_dpfe_download_firmware(pdev, &init);
+}
+
+static DEVICE_ATTR(dpfe_info, 0444, show_info, NULL);
+static DEVICE_ATTR(dpfe_refresh, 0644, show_refresh, store_refresh);
+static DEVICE_ATTR(dpfe_vendor, 0444, show_vendor, NULL);
+static struct attribute *dpfe_attrs[] = {
+ &dev_attr_dpfe_info.attr,
+ &dev_attr_dpfe_refresh.attr,
+ &dev_attr_dpfe_vendor.attr,
+ NULL
+};
+ATTRIBUTE_GROUPS(dpfe);
+
+static int brcmstb_dpfe_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct private_data *priv;
+ struct device *dpfe_dev;
+ struct init_data init;
+ struct resource *res;
+ u32 index;
+ int ret;
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ mutex_init(&priv->lock);
+ platform_set_drvdata(pdev, priv);
+
+ /* Cell index is optional; default to 0 if not present. */
+ ret = of_property_read_u32(dev->of_node, "cell-index", &index);
+ if (ret)
+ index = 0;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-cpu");
+ priv->regs = devm_ioremap_resource(dev, res);
+ if (IS_ERR(priv->regs)) {
+ dev_err(dev, "couldn't map DCPU registers\n");
+ return -ENODEV;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-dmem");
+ priv->dmem = devm_ioremap_resource(dev, res);
+ if (IS_ERR(priv->dmem)) {
+ dev_err(dev, "Couldn't map DCPU data memory\n");
+ return -ENOENT;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-imem");
+ priv->imem = devm_ioremap_resource(dev, res);
+ if (IS_ERR(priv->imem)) {
+ dev_err(dev, "Couldn't map DCPU instruction memory\n");
+ return -ENOENT;
+ }
+
+ ret = brcmstb_dpfe_download_firmware(pdev, &init);
+ if (ret)
+ goto err;
+
+ dpfe_dev = devm_kzalloc(dev, sizeof(*dpfe_dev), GFP_KERNEL);
+ if (!dpfe_dev) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ priv->dev = dpfe_dev;
+ priv->index = index;
+
+ dpfe_dev->parent = dev;
+ dpfe_dev->groups = dpfe_groups;
+ dpfe_dev->of_node = dev->of_node;
+ dev_set_drvdata(dpfe_dev, priv);
+ dev_set_name(dpfe_dev, "dpfe%u", index);
+
+ ret = device_register(dpfe_dev);
+ if (ret)
+ goto err;
+
+ dev_info(dev, "registered.\n");
+
+ return 0;
+
+err:
+ dev_err(dev, "failed to initialize -- error %d\n", ret);
+
+ return ret;
+}
+
+static const struct of_device_id brcmstb_dpfe_of_match[] = {
+ { .compatible = "brcm,dpfe-cpu", },
+ {}
+};
+MODULE_DEVICE_TABLE(of, brcmstb_dpfe_of_match);
+
+static struct platform_driver brcmstb_dpfe_driver = {
+ .driver = {
+ .name = DRVNAME,
+ .of_match_table = brcmstb_dpfe_of_match,
+ },
+ .probe = brcmstb_dpfe_probe,
+ .resume = brcmstb_dpfe_resume,
+};
+
+module_platform_driver(brcmstb_dpfe_driver);
+
+MODULE_AUTHOR("Markus Mayer <mmayer@broadcom.com>");
+MODULE_DESCRIPTION("BRCMSTB DDR PHY Front End Driver");
+MODULE_LICENSE("GPL");
down_read(&mm->mmap_sem);
- for (dar = addr; dar < addr + size; dar += page_size) {
- if (!vma || dar < vma->vm_start || dar > vma->vm_end) {
+ vma = find_vma(mm, addr);
+ if (!vma) {
+ pr_err("Can't find vma for addr %016llx\n", addr);
+ rc = -EFAULT;
+ goto out;
+ }
+ /* get the size of the pages allocated */
+ page_size = vma_kernel_pagesize(vma);
+
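+ /* Walk the range page by page, starting at the page boundary at or
+ * below addr.
+ */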
+ for (dar = (addr & ~(page_size - 1)); dar < (addr + size); dar += page_size) {
+ if (dar < vma->vm_start || dar >= vma->vm_end) {
vma = find_vma(mm, addr);
if (!vma) {
pr_err("Can't find vma for addr %016llx\n", addr);
}
mqrq->areq.mrq = &brq->mrq;
-
- mmc_queue_bounce_pre(mqrq);
}
static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
brq = &mq_rq->brq;
old_req = mmc_queue_req_to_req(mq_rq);
type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
- mmc_queue_bounce_post(mq_rq);
switch (status) {
case MMC_BLK_SUCCESS:
return err;
}
+static void mmc_select_driver_type(struct mmc_card *card)
+{
+ int card_drv_type, drive_strength, drv_type;
+
+ card_drv_type = card->ext_csd.raw_driver_strength |
+ mmc_driver_type_mask(0);
+
+ drive_strength = mmc_select_drive_strength(card,
+ card->ext_csd.hs200_max_dtr,
+ card_drv_type, &drv_type);
+
+ card->drive_strength = drive_strength;
+
+ if (drv_type)
+ mmc_set_driver_type(card->host, drv_type);
+}
+
static int mmc_select_hs400es(struct mmc_card *card)
{
struct mmc_host *host = card->host;
goto out_err;
}
+ mmc_select_driver_type(card);
+
/* Switch card to HS400 */
val = EXT_CSD_TIMING_HS400 |
card->drive_strength << EXT_CSD_DRV_STR_SHIFT;
return err;
}
-static void mmc_select_driver_type(struct mmc_card *card)
-{
- int card_drv_type, drive_strength, drv_type;
-
- card_drv_type = card->ext_csd.raw_driver_strength |
- mmc_driver_type_mask(0);
-
- drive_strength = mmc_select_drive_strength(card,
- card->ext_csd.hs200_max_dtr,
- card_drv_type, &drv_type);
-
- card->drive_strength = drive_strength;
-
- if (drv_type)
- mmc_set_driver_type(card->host, drv_type);
-}
-
/*
* For device supporting HS200 mode, the following sequence
* should be done before executing the tuning process.
#include "core.h"
#include "card.h"
-#define MMC_QUEUE_BOUNCESZ 65536
-
/*
* Prepare a MMC request. This just filters out odd stuff.
*/
queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
}
-static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
-{
- unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;
-
- if (host->max_segs != 1 || (host->caps & MMC_CAP_NO_BOUNCE_BUFF))
- return 0;
-
- if (bouncesz > host->max_req_size)
- bouncesz = host->max_req_size;
- if (bouncesz > host->max_seg_size)
- bouncesz = host->max_seg_size;
- if (bouncesz > host->max_blk_count * 512)
- bouncesz = host->max_blk_count * 512;
-
- if (bouncesz <= 512)
- return 0;
-
- return bouncesz;
-}
-
/**
* mmc_init_request() - initialize the MMC-specific per-request data
* @q: the request queue
struct mmc_card *card = mq->card;
struct mmc_host *host = card->host;
- if (card->bouncesz) {
- mq_rq->bounce_buf = kmalloc(card->bouncesz, gfp);
- if (!mq_rq->bounce_buf)
- return -ENOMEM;
- if (card->bouncesz > 512) {
- mq_rq->sg = mmc_alloc_sg(1, gfp);
- if (!mq_rq->sg)
- return -ENOMEM;
- mq_rq->bounce_sg = mmc_alloc_sg(card->bouncesz / 512,
- gfp);
- if (!mq_rq->bounce_sg)
- return -ENOMEM;
- }
- } else {
- mq_rq->bounce_buf = NULL;
- mq_rq->bounce_sg = NULL;
- mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
- if (!mq_rq->sg)
- return -ENOMEM;
- }
+ mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
+ if (!mq_rq->sg)
+ return -ENOMEM;
return 0;
}
{
struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
- /* It is OK to kfree(NULL) so this will be smooth */
- kfree(mq_rq->bounce_sg);
- mq_rq->bounce_sg = NULL;
-
- kfree(mq_rq->bounce_buf);
- mq_rq->bounce_buf = NULL;
-
kfree(mq_rq->sg);
mq_rq->sg = NULL;
}
if (mmc_can_erase(card))
mmc_queue_setup_discard(mq->queue, card);
- card->bouncesz = mmc_queue_calc_bouncesz(host);
- if (card->bouncesz) {
- blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
- blk_queue_max_segments(mq->queue, card->bouncesz / 512);
- blk_queue_max_segment_size(mq->queue, card->bouncesz);
- } else {
- blk_queue_bounce_limit(mq->queue, limit);
- blk_queue_max_hw_sectors(mq->queue,
- min(host->max_blk_count, host->max_req_size / 512));
- blk_queue_max_segments(mq->queue, host->max_segs);
- blk_queue_max_segment_size(mq->queue, host->max_seg_size);
- }
+ blk_queue_bounce_limit(mq->queue, limit);
+ blk_queue_max_hw_sectors(mq->queue,
+ min(host->max_blk_count, host->max_req_size / 512));
+ blk_queue_max_segments(mq->queue, host->max_segs);
+ blk_queue_max_segment_size(mq->queue, host->max_seg_size);
sema_init(&mq->thread_sem, 1);
*/
unsigned int mmc_queue_map_sg(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
{
- unsigned int sg_len;
- size_t buflen;
- struct scatterlist *sg;
struct request *req = mmc_queue_req_to_req(mqrq);
- int i;
-
- if (!mqrq->bounce_buf)
- return blk_rq_map_sg(mq->queue, req, mqrq->sg);
-
- sg_len = blk_rq_map_sg(mq->queue, req, mqrq->bounce_sg);
-
- mqrq->bounce_sg_len = sg_len;
-
- buflen = 0;
- for_each_sg(mqrq->bounce_sg, sg, sg_len, i)
- buflen += sg->length;
-
- sg_init_one(mqrq->sg, mqrq->bounce_buf, buflen);
-
- return 1;
-}
-
-/*
- * If writing, bounce the data to the buffer before the request
- * is sent to the host driver
- */
-void mmc_queue_bounce_pre(struct mmc_queue_req *mqrq)
-{
- if (!mqrq->bounce_buf)
- return;
-
- if (rq_data_dir(mmc_queue_req_to_req(mqrq)) != WRITE)
- return;
-
- sg_copy_to_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len,
- mqrq->bounce_buf, mqrq->sg[0].length);
-}
-
-/*
- * If reading, bounce the data from the buffer after the request
- * has been handled by the host driver
- */
-void mmc_queue_bounce_post(struct mmc_queue_req *mqrq)
-{
- if (!mqrq->bounce_buf)
- return;
-
- if (rq_data_dir(mmc_queue_req_to_req(mqrq)) != READ)
- return;
- sg_copy_from_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len,
- mqrq->bounce_buf, mqrq->sg[0].length);
+ return blk_rq_map_sg(mq->queue, req, mqrq->sg);
}
struct mmc_queue_req {
struct mmc_blk_request brq;
struct scatterlist *sg;
- char *bounce_buf;
- struct scatterlist *bounce_sg;
- unsigned int bounce_sg_len;
struct mmc_async_req areq;
enum mmc_drv_op drv_op;
int drv_op_result;
extern void mmc_cleanup_queue(struct mmc_queue *);
extern void mmc_queue_suspend(struct mmc_queue *);
extern void mmc_queue_resume(struct mmc_queue *);
-
extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
struct mmc_queue_req *);
-extern void mmc_queue_bounce_pre(struct mmc_queue_req *);
-extern void mmc_queue_bounce_post(struct mmc_queue_req *);
extern int mmc_access_rpmb(struct mmc_queue *);
comment "MMC/SD/SDIO Host Controller Drivers"
config MMC_DEBUG
- bool "MMC host drivers debugginG"
+ bool "MMC host drivers debugging"
depends on MMC != n
help
This is an option for use by developers; most people should
*
* Copyright (C) 2016 Cavium Inc.
*/
+#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/mmc/mmc.h>
for (i = 0; i < CAVIUM_MAX_MMC; i++) {
if (host->slot[i])
cvm_mmc_of_slot_remove(host->slot[i]);
- if (host->slot_pdev[i])
+ if (host->slot_pdev[i]) {
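+ /* Hold an extra reference so the device cannot
+ * disappear beneath us while it is being destroyed.
+ */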
+ get_device(&host->slot_pdev[i]->dev);
of_platform_device_destroy(&host->slot_pdev[i]->dev, NULL);
+ put_device(&host->slot_pdev[i]->dev);
+ }
}
clk_disable_unprepare(host->clk);
return ret;
*/
mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD |
- MMC_CAP_3_3V_DDR | MMC_CAP_NO_BOUNCE_BUFF;
+ MMC_CAP_3_3V_DDR;
if (host->use_sg)
mmc->max_segs = 16;
div->shift = __ffs(CLK_DIV_MASK);
div->width = __builtin_popcountl(CLK_DIV_MASK);
div->hw.init = &init;
- div->flags = (CLK_DIVIDER_ONE_BASED |
- CLK_DIVIDER_ROUND_CLOSEST);
+ div->flags = CLK_DIVIDER_ONE_BASED;
clk = devm_clk_register(host->dev, &div->hw);
if (WARN_ON(IS_ERR(clk)))
static int meson_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
struct meson_host *host = mmc_priv(mmc);
+ int ret;
+
+ /*
+ * If this is the initial tuning, try to get a sane Rx starting
+ * phase before doing the actual tuning.
+ */
+ if (!mmc->doing_retune) {
+ ret = meson_mmc_clk_phase_tuning(mmc, opcode, host->rx_clk);
+
+ if (ret)
+ return ret;
+ }
+
+ ret = meson_mmc_clk_phase_tuning(mmc, opcode, host->tx_clk);
+ if (ret)
+ return ret;
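+
+ /* With the Tx phase settled, perform the final Rx phase tuning. */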
return meson_mmc_clk_phase_tuning(mmc, opcode, host->rx_clk);
}
case MMC_POWER_UP:
if (!IS_ERR(mmc->supply.vmmc))
mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd);
+
+ /* Reset phases */
+ clk_set_phase(host->rx_clk, 0);
+ clk_set_phase(host->tx_clk, 270);
+
break;
case MMC_POWER_ON:
host->vqmmc_enabled = true;
}
- /* Reset rx phase */
- clk_set_phase(host->rx_clk, 0);
break;
}
pxamci_init_ocr(host);
- /*
- * This architecture used to disable bounce buffers through its
- * defconfig, now it is done at runtime as a host property.
- */
- mmc->caps = MMC_CAP_NO_BOUNCE_BUFF;
+ mmc->caps = 0;
host->cmdat = 0;
if (!cpu_is_pxa25x()) {
mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_SDIO_IRQ;
enum {
INTEL_DSM_FNS = 0,
+ INTEL_DSM_V18_SWITCH = 3,
INTEL_DSM_DRV_STRENGTH = 9,
INTEL_DSM_D3_RETUNE = 10,
};
sdhci_writel(host, val, INTEL_HS400_ES_REG);
}
+static void sdhci_intel_voltage_switch(struct sdhci_host *host)
+{
+ struct sdhci_pci_slot *slot = sdhci_priv(host);
+ struct intel_host *intel_host = sdhci_pci_priv(slot);
+ struct device *dev = &slot->chip->pdev->dev;
+ u32 result = 0;
+ int err;
+
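+ /* Notify platform firmware through the INTEL_DSM_V18_SWITCH DSM
+ * function (by its name, presumably tied to the 1.8V signal switch);
+ * errors are only logged, never propagated.
+ */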
+ err = intel_dsm(intel_host, dev, INTEL_DSM_V18_SWITCH, &result);
+ pr_debug("%s: %s DSM error %d result %u\n",
+ mmc_hostname(host->mmc), __func__, err, result);
+}
+
static const struct sdhci_ops sdhci_intel_byt_ops = {
.set_clock = sdhci_set_clock,
.set_power = sdhci_intel_set_power,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.hw_reset = sdhci_pci_hw_reset,
+ .voltage_switch = sdhci_intel_voltage_switch,
};
static void byt_read_dsm(struct sdhci_pci_slot *slot)
{
struct sdhci_pltfm_host *pltfm_host;
struct sdhci_host *host;
+ struct xenon_priv *priv;
int err;
host = sdhci_pltfm_init(pdev, &sdhci_xenon_pdata,
return PTR_ERR(host);
pltfm_host = sdhci_priv(host);
+ priv = sdhci_pltfm_priv(pltfm_host);
/*
* Link Xenon specific mmc_host_ops function,
if (err)
goto free_pltfm;
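+ /* The AXI clock is optional: treat only probe deferral as fatal;
+ * any other devm_clk_get() error simply leaves the clock unused.
+ */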
+ priv->axi_clk = devm_clk_get(&pdev->dev, "axi");
+ if (IS_ERR(priv->axi_clk)) {
+ err = PTR_ERR(priv->axi_clk);
+ if (err == -EPROBE_DEFER)
+ goto err_clk;
+ } else {
+ err = clk_prepare_enable(priv->axi_clk);
+ if (err)
+ goto err_clk;
+ }
+
err = mmc_of_parse(host->mmc);
if (err)
- goto err_clk;
+ goto err_clk_axi;
sdhci_get_of_property(pdev);
/* Xenon specific dt parse */
err = xenon_probe_dt(pdev);
if (err)
- goto err_clk;
+ goto err_clk_axi;
err = xenon_sdhc_prepare(host);
if (err)
- goto err_clk;
+ goto err_clk_axi;
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_disable(&pdev->dev);
pm_runtime_put_noidle(&pdev->dev);
xenon_sdhc_unprepare(host);
+err_clk_axi:
+ clk_disable_unprepare(priv->axi_clk);
err_clk:
clk_disable_unprepare(pltfm_host->clk);
free_pltfm:
{
struct sdhci_host *host = platform_get_drvdata(pdev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
pm_runtime_get_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
sdhci_remove_host(host, 0);
xenon_sdhc_unprepare(host);
-
+ clk_disable_unprepare(priv->axi_clk);
clk_disable_unprepare(pltfm_host->clk);
sdhci_pltfm_free(pdev);
unsigned char bus_width;
unsigned char timing;
unsigned int clock;
+ struct clk *axi_clk;
int phy_type;
/*
#define CMDREQ_TIMEOUT 5000
-#ifdef CONFIG_MMC_DEBUG
-
-#define STATUS_TO_TEXT(a, status, i) \
- do { \
- if ((status) & TMIO_STAT_##a) { \
- if ((i)++) \
- printk(KERN_DEBUG " | "); \
- printk(KERN_DEBUG #a); \
- } \
- } while (0)
-
-static void pr_debug_status(u32 status)
-{
- int i = 0;
-
- pr_debug("status: %08x = ", status);
- STATUS_TO_TEXT(CARD_REMOVE, status, i);
- STATUS_TO_TEXT(CARD_INSERT, status, i);
- STATUS_TO_TEXT(SIGSTATE, status, i);
- STATUS_TO_TEXT(WRPROTECT, status, i);
- STATUS_TO_TEXT(CARD_REMOVE_A, status, i);
- STATUS_TO_TEXT(CARD_INSERT_A, status, i);
- STATUS_TO_TEXT(SIGSTATE_A, status, i);
- STATUS_TO_TEXT(CMD_IDX_ERR, status, i);
- STATUS_TO_TEXT(STOPBIT_ERR, status, i);
- STATUS_TO_TEXT(ILL_FUNC, status, i);
- STATUS_TO_TEXT(CMD_BUSY, status, i);
- STATUS_TO_TEXT(CMDRESPEND, status, i);
- STATUS_TO_TEXT(DATAEND, status, i);
- STATUS_TO_TEXT(CRCFAIL, status, i);
- STATUS_TO_TEXT(DATATIMEOUT, status, i);
- STATUS_TO_TEXT(CMDTIMEOUT, status, i);
- STATUS_TO_TEXT(RXOVERFLOW, status, i);
- STATUS_TO_TEXT(TXUNDERRUN, status, i);
- STATUS_TO_TEXT(RXRDY, status, i);
- STATUS_TO_TEXT(TXRQ, status, i);
- STATUS_TO_TEXT(ILL_ACCESS, status, i);
- printk("\n");
-}
-
-#else
-#define pr_debug_status(s) do { } while (0)
-#endif
-
static void tmio_mmc_enable_sdio_irq(struct mmc_host *mmc, int enable)
{
struct tmio_mmc_host *host = mmc_priv(mmc);
status = sd_ctrl_read16_and_16_as_32(host, CTL_STATUS);
ireg = status & TMIO_MASK_IRQ & ~host->sdcard_irq_mask;
- pr_debug_status(status);
- pr_debug_status(ireg);
-
/* Clear the status except the interrupt status */
sd_ctrl_write32_as_16_and_16(host, CTL_STATUS, TMIO_MASK_IRQ);
slave->mtd.erasesize = parent->erasesize;
}
+ /*
+ * Slave erasesize might differ from the master one if the master
+ * exposes several regions with different erasesize. Adjust
+ * wr_alignment accordingly.
+ */
+ if (!(slave->mtd.flags & MTD_NO_ERASE))
+ wr_alignment = slave->mtd.erasesize;
+
tmp = slave->offset;
remainder = do_div(tmp, wr_alignment);
if ((slave->mtd.flags & MTD_WRITEABLE) && remainder) {
size += (req->ecc.strength + 1) * sizeof(u16);
/* Reserve space for mu, dmu and delta. */
size = ALIGN(size, sizeof(s32));
- size += (req->ecc.strength + 1) * sizeof(s32);
+ size += (req->ecc.strength + 1) * sizeof(s32) * 3;
user = kzalloc(size, GFP_KERNEL);
if (!user)
}
res = clk_prepare_enable(host->clk);
if (res)
- goto err_exit1;
+ goto err_put_clk;
nand_chip->cmd_ctrl = lpc32xx_nand_cmd_ctrl;
nand_chip->dev_ready = lpc32xx_nand_device_ready;
dma_release_channel(host->dma_chan);
err_exit2:
clk_disable_unprepare(host->clk);
+err_put_clk:
clk_put(host->clk);
err_exit1:
lpc32xx_wp_enable(host);
static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
struct mtd_oob_ops *ops)
{
- int chipnr, realpage, page, blockmask, column;
+ int chipnr, realpage, page, column;
struct nand_chip *chip = mtd_to_nand(mtd);
uint32_t writelen = ops->len;
realpage = (int)(to >> chip->page_shift);
page = realpage & chip->pagemask;
- blockmask = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1;
/* Invalidate the page cache, when we write to the cached page */
if (to <= ((loff_t)chip->pagebuf << chip->page_shift) &&
* @nor: pointer to a 'struct spi_nor'
* @addr: offset in the SFDP area to start reading data from
* @len: number of bytes to read
- * @buf: buffer where the SFDP data are copied into
+ * @buf: buffer where the SFDP data are copied into (dma-safe memory)
*
* Whatever the actual numbers of bytes for address and dummy cycles are
* for (Fast) Read commands, the Read SFDP (5Ah) instruction is always
return ret;
}
+/**
+ * spi_nor_read_sfdp_dma_unsafe() - read Serial Flash Discoverable Parameters.
+ * @nor: pointer to a 'struct spi_nor'
+ * @addr: offset in the SFDP area to start reading data from
+ * @len: number of bytes to read
+ * @buf: buffer where the SFDP data are copied into
+ *
+ * Wrap spi_nor_read_sfdp() using a kmalloc'ed bounce buffer as @buf is now not
+ * guaranteed to be dma-safe.
+ *
+ * Return: -ENOMEM if kmalloc() fails, the return code of spi_nor_read_sfdp()
+ * otherwise.
+ */
+static int spi_nor_read_sfdp_dma_unsafe(struct spi_nor *nor, u32 addr,
+ size_t len, void *buf)
+{
+ void *dma_safe_buf;
+ int ret;
+
+ dma_safe_buf = kmalloc(len, GFP_KERNEL);
+ if (!dma_safe_buf)
+ return -ENOMEM;
+
+ ret = spi_nor_read_sfdp(nor, addr, len, dma_safe_buf);
+ memcpy(buf, dma_safe_buf, len);
+ kfree(dma_safe_buf);
+
+ return ret;
+}
+
struct sfdp_parameter_header {
u8 id_lsb;
u8 minor;
bfpt_header->length * sizeof(u32));
addr = SFDP_PARAM_HEADER_PTP(bfpt_header);
memset(&bfpt, 0, sizeof(bfpt));
- err = spi_nor_read_sfdp(nor, addr, len, &bfpt);
+ err = spi_nor_read_sfdp_dma_unsafe(nor, addr, len, &bfpt);
if (err < 0)
return err;
params->size = bfpt.dwords[BFPT_DWORD(2)];
if (params->size & BIT(31)) {
params->size &= ~BIT(31);
+
+ /*
+ * Prevent overflows on params->size. Anyway, a NOR of 2^64
+ * bits is unlikely to exist, so this error probably means
+ * the BFPT we are reading is corrupted/wrong.
+ */
+ if (params->size > 63)
+ return -EINVAL;
+
params->size = 1ULL << params->size;
} else {
params->size++;
int i, err;
/* Get the SFDP header. */
- err = spi_nor_read_sfdp(nor, 0, sizeof(header), &header);
+ err = spi_nor_read_sfdp_dma_unsafe(nor, 0, sizeof(header), &header);
if (err < 0)
return err;
};
int i, err;
+ /* DSA and CPU ports have to be members of multiple vlans */
+ if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
+ return 0;
+
if (!vid_begin)
return -EOPNOTSUPP;
if (chip->irq > 0) {
if (chip->info->g2_irqs > 0)
mv88e6xxx_g2_irq_free(chip);
+ mutex_lock(&chip->reg_lock);
mv88e6xxx_g1_irq_free(chip);
+ mutex_unlock(&chip->reg_lock);
}
}
#define AQ_CFG_SKB_FRAGS_MAX 32U
+/* Number of descriptors available in one ring to resume this ring queue. */
+#define AQ_CFG_RESTART_DESC_THRES (AQ_CFG_SKB_FRAGS_MAX * 2)
+
#define AQ_CFG_NAPI_WEIGHT 64U
#define AQ_CFG_MULTICAST_ADDRESS_MAX 32U
return 0;
}
+static int aq_nic_update_link_status(struct aq_nic_s *self)
+{
+ int err = self->aq_hw_ops.hw_get_link_status(self->aq_hw);
+
+ if (err)
+ return err;
+
+ if (self->link_status.mbps != self->aq_hw->aq_link_status.mbps)
+ pr_info("%s: link change old %d new %d\n",
+ AQ_CFG_DRV_NAME, self->link_status.mbps,
+ self->aq_hw->aq_link_status.mbps);
+
+ self->link_status = self->aq_hw->aq_link_status;
+ if (!netif_carrier_ok(self->ndev) && self->link_status.mbps) {
+ aq_utils_obj_set(&self->header.flags,
+ AQ_NIC_FLAG_STARTED);
+ aq_utils_obj_clear(&self->header.flags,
+ AQ_NIC_LINK_DOWN);
+ netif_carrier_on(self->ndev);
+ netif_tx_wake_all_queues(self->ndev);
+ }
+ if (netif_carrier_ok(self->ndev) && !self->link_status.mbps) {
+ netif_carrier_off(self->ndev);
+ netif_tx_disable(self->ndev);
+ aq_utils_obj_set(&self->header.flags, AQ_NIC_LINK_DOWN);
+ }
+ return 0;
+}
+
static void aq_nic_service_timer_cb(unsigned long param)
{
struct aq_nic_s *self = (struct aq_nic_s *)param;
if (aq_utils_obj_test(&self->header.flags, AQ_NIC_FLAGS_IS_NOT_READY))
goto err_exit;
- err = self->aq_hw_ops.hw_get_link_status(self->aq_hw);
- if (err < 0)
+ err = aq_nic_update_link_status(self);
+ if (err)
goto err_exit;
- self->link_status = self->aq_hw->aq_link_status;
-
self->aq_hw_ops.hw_interrupt_moderation_set(self->aq_hw,
self->aq_nic_cfg.is_interrupt_moderation);
- if (self->link_status.mbps) {
- aq_utils_obj_set(&self->header.flags,
- AQ_NIC_FLAG_STARTED);
- aq_utils_obj_clear(&self->header.flags,
- AQ_NIC_LINK_DOWN);
- netif_carrier_on(self->ndev);
- } else {
- netif_carrier_off(self->ndev);
- aq_utils_obj_set(&self->header.flags, AQ_NIC_LINK_DOWN);
- }
-
memset(&stats_rx, 0U, sizeof(struct aq_ring_stats_rx_s));
memset(&stats_tx, 0U, sizeof(struct aq_ring_stats_tx_s));
for (i = AQ_DIMOF(self->aq_vec); i--;) {
SET_NETDEV_DEV(ndev, dev);
ndev->if_port = port;
- ndev->min_mtu = ETH_MIN_MTU;
self->ndev = ndev;
self->aq_pci_func = aq_pci_func;
int aq_nic_ndev_register(struct aq_nic_s *self)
{
int err = 0;
- unsigned int i = 0U;
if (!self->ndev) {
err = -EINVAL;
netif_carrier_off(self->ndev);
- for (i = AQ_CFG_VECS_MAX; i--;)
- aq_nic_ndev_queue_stop(self, i);
+ netif_tx_disable(self->ndev);
err = register_netdev(self->ndev);
if (err < 0)
self->ndev->features = aq_hw_caps->hw_features;
self->ndev->priv_flags = aq_hw_caps->hw_priv_flags;
self->ndev->mtu = aq_nic_cfg->mtu - ETH_HLEN;
+ self->ndev->max_mtu = self->aq_hw_caps.mtu - ETH_FCS_LEN - ETH_HLEN;
return 0;
}
err = -EINVAL;
goto err_exit;
}
- if (netif_running(ndev)) {
- unsigned int i;
-
- for (i = AQ_CFG_VECS_MAX; i--;)
- netif_stop_subqueue(ndev, i);
- }
+ if (netif_running(ndev))
+ netif_tx_disable(ndev);
for (self->aq_vecs = 0; self->aq_vecs < self->aq_nic_cfg.vecs;
self->aq_vecs++) {
return err;
}
-void aq_nic_ndev_queue_start(struct aq_nic_s *self, unsigned int idx)
-{
- netif_start_subqueue(self->ndev, idx);
-}
-
-void aq_nic_ndev_queue_stop(struct aq_nic_s *self, unsigned int idx)
-{
- netif_stop_subqueue(self->ndev, idx);
-}
-
int aq_nic_start(struct aq_nic_s *self)
{
struct aq_vec_s *aq_vec = NULL;
goto err_exit;
}
- for (i = 0U, aq_vec = self->aq_vec[0];
- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
- aq_nic_ndev_queue_start(self, i);
-
err = netif_set_real_num_tx_queues(self->ndev, self->aq_vecs);
if (err < 0)
goto err_exit;
if (err < 0)
goto err_exit;
+ netif_tx_start_all_queues(self->ndev);
+
err_exit:
return err;
}
unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
unsigned int frag_count = 0U;
unsigned int dx = ring->sw_tail;
+ struct aq_ring_buff_s *first = NULL;
struct aq_ring_buff_s *dx_buff = &ring->buff_ring[dx];
if (unlikely(skb_is_gso(skb))) {
dx_buff->len_l4 = tcp_hdrlen(skb);
dx_buff->mss = skb_shinfo(skb)->gso_size;
dx_buff->is_txc = 1U;
+ dx_buff->eop_index = 0xffffU;
dx_buff->is_ipv6 =
(ip_hdr(skb)->version == 6) ? 1U : 0U;
if (unlikely(dma_mapping_error(aq_nic_get_dev(self), dx_buff->pa)))
goto exit;
+ first = dx_buff;
dx_buff->len_pkt = skb->len;
dx_buff->is_sop = 1U;
dx_buff->is_mapped = 1U;
for (; nr_frags--; ++frag_count) {
unsigned int frag_len = 0U;
+ unsigned int buff_offset = 0U;
+ unsigned int buff_size = 0U;
dma_addr_t frag_pa;
skb_frag_t *frag = &skb_shinfo(skb)->frags[frag_count];
frag_len = skb_frag_size(frag);
- frag_pa = skb_frag_dma_map(aq_nic_get_dev(self), frag, 0,
- frag_len, DMA_TO_DEVICE);
- if (unlikely(dma_mapping_error(aq_nic_get_dev(self), frag_pa)))
- goto mapping_error;
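+ /* Map each fragment in chunks of at most AQ_CFG_TX_FRAME_MAX
+ * bytes, one Tx descriptor per chunk.
+ */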
+ while (frag_len) {
+ if (frag_len > AQ_CFG_TX_FRAME_MAX)
+ buff_size = AQ_CFG_TX_FRAME_MAX;
+ else
+ buff_size = frag_len;
+
+ frag_pa = skb_frag_dma_map(aq_nic_get_dev(self),
+ frag,
+ buff_offset,
+ buff_size,
+ DMA_TO_DEVICE);
+
+ if (unlikely(dma_mapping_error(aq_nic_get_dev(self),
+ frag_pa)))
+ goto mapping_error;
- while (frag_len > AQ_CFG_TX_FRAME_MAX) {
dx = aq_ring_next_dx(ring, dx);
dx_buff = &ring->buff_ring[dx];
dx_buff->flags = 0U;
- dx_buff->len = AQ_CFG_TX_FRAME_MAX;
+ dx_buff->len = buff_size;
dx_buff->pa = frag_pa;
dx_buff->is_mapped = 1U;
+ dx_buff->eop_index = 0xffffU;
+
+ frag_len -= buff_size;
+ buff_offset += buff_size;
- frag_len -= AQ_CFG_TX_FRAME_MAX;
- frag_pa += AQ_CFG_TX_FRAME_MAX;
++ret;
}
-
- dx = aq_ring_next_dx(ring, dx);
- dx_buff = &ring->buff_ring[dx];
-
- dx_buff->flags = 0U;
- dx_buff->len = frag_len;
- dx_buff->pa = frag_pa;
- dx_buff->is_mapped = 1U;
- ++ret;
}
+ first->eop_index = dx;
dx_buff->is_eop = 1U;
dx_buff->skb = skb;
goto exit;
unsigned int vec = skb->queue_mapping % self->aq_nic_cfg.vecs;
unsigned int tc = 0U;
int err = NETDEV_TX_OK;
- bool is_nic_in_bad_state;
frags = skb_shinfo(skb)->nr_frags + 1;
goto err_exit;
}
- is_nic_in_bad_state = aq_utils_obj_test(&self->header.flags,
- AQ_NIC_FLAGS_IS_NOT_TX_READY) ||
- (aq_ring_avail_dx(ring) <
- AQ_CFG_SKB_FRAGS_MAX);
+ aq_ring_update_queue_state(ring);
- if (is_nic_in_bad_state) {
- aq_nic_ndev_queue_stop(self, ring->idx);
+ /* Above status update may stop the queue. Check this. */
+ if (__netif_subqueue_stopped(self->ndev, ring->idx)) {
err = NETDEV_TX_BUSY;
goto err_exit;
}
ring,
frags);
if (err >= 0) {
- if (aq_ring_avail_dx(ring) < AQ_CFG_SKB_FRAGS_MAX + 1)
- aq_nic_ndev_queue_stop(self, ring->idx);
-
++ring->stats.tx.packets;
ring->stats.tx.bytes += skb->len;
}
int aq_nic_set_mtu(struct aq_nic_s *self, int new_mtu)
{
- int err = 0;
-
- if (new_mtu > self->aq_hw_caps.mtu) {
- err = -EINVAL;
- goto err_exit;
- }
self->aq_nic_cfg.mtu = new_mtu;
-err_exit:
- return err;
+ return 0;
}
int aq_nic_set_mac(struct aq_nic_s *self, struct net_device *ndev)
struct aq_vec_s *aq_vec = NULL;
unsigned int i = 0U;
- for (i = 0U, aq_vec = self->aq_vec[0];
- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
- aq_nic_ndev_queue_stop(self, i);
+ netif_tx_disable(self->ndev);
del_timer_sync(&self->service_timer);
int aq_nic_init(struct aq_nic_s *self);
int aq_nic_cfg_start(struct aq_nic_s *self);
int aq_nic_ndev_register(struct aq_nic_s *self);
-void aq_nic_ndev_queue_start(struct aq_nic_s *self, unsigned int idx);
-void aq_nic_ndev_queue_stop(struct aq_nic_s *self, unsigned int idx);
void aq_nic_ndev_free(struct aq_nic_s *self);
int aq_nic_start(struct aq_nic_s *self);
int aq_nic_xmit(struct aq_nic_s *self, struct sk_buff *skb);
return 0;
}
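+/*
+ * Ring indices wrap around, so "i lies between head h and tail t" has two
+ * cases: h < i < t when the occupied region is contiguous, and
+ * (h < i) || (i < t) when it wraps past the end of the ring.
+ */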
+static inline bool aq_ring_dx_in_range(unsigned int h, unsigned int i,
+ unsigned int t)
+{
+ return (h < t) ? ((h < i) && (i < t)) : ((h < i) || (i < t));
+}
+
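+/*
+ * Stop the queue once too few descriptors remain for a maximally fragmented
+ * skb, and wake it only once twice that many are free again
+ * (AQ_CFG_RESTART_DESC_THRES), to avoid stop/wake thrashing.
+ */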
+void aq_ring_update_queue_state(struct aq_ring_s *ring)
+{
+ if (aq_ring_avail_dx(ring) <= AQ_CFG_SKB_FRAGS_MAX)
+ aq_ring_queue_stop(ring);
+ else if (aq_ring_avail_dx(ring) > AQ_CFG_RESTART_DESC_THRES)
+ aq_ring_queue_wake(ring);
+}
+
+void aq_ring_queue_wake(struct aq_ring_s *ring)
+{
+ struct net_device *ndev = aq_nic_get_ndev(ring->aq_nic);
+
+ if (__netif_subqueue_stopped(ndev, ring->idx)) {
+ netif_wake_subqueue(ndev, ring->idx);
+ ring->stats.tx.queue_restarts++;
+ }
+}
+
+void aq_ring_queue_stop(struct aq_ring_s *ring)
+{
+ struct net_device *ndev = aq_nic_get_ndev(ring->aq_nic);
+
+ if (!__netif_subqueue_stopped(ndev, ring->idx))
+ netif_stop_subqueue(ndev, ring->idx);
+}
+
void aq_ring_tx_clean(struct aq_ring_s *self)
{
struct device *dev = aq_nic_get_dev(self->aq_nic);
struct aq_ring_buff_s *buff = &self->buff_ring[self->sw_head];
if (likely(buff->is_mapped)) {
- if (unlikely(buff->is_sop))
+ if (unlikely(buff->is_sop)) {
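+ /* Stop cleaning here if the hardware has not yet
+ * consumed this packet's EOP descriptor.
+ */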
+ if (!buff->is_eop &&
+ buff->eop_index != 0xffffU &&
+ (!aq_ring_dx_in_range(self->sw_head,
+ buff->eop_index,
+ self->hw_head)))
+ break;
+
dma_unmap_single(dev, buff->pa, buff->len,
DMA_TO_DEVICE);
- else
+ } else {
dma_unmap_page(dev, buff->pa, buff->len,
DMA_TO_DEVICE);
+ }
}
if (unlikely(buff->is_eop))
dev_kfree_skb_any(buff->skb);
- }
-}
-static inline unsigned int aq_ring_dx_in_range(unsigned int h, unsigned int i,
- unsigned int t)
-{
- return (h < t) ? ((h < i) && (i < t)) : ((h < i) || (i < t));
+ buff->pa = 0U;
+ buff->eop_index = 0xffffU;
+ }
}
#define AQ_SKB_ALIGN SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
};
union {
struct {
- u32 len:16;
+ u16 len;
u32 is_ip_cso:1;
u32 is_udp_cso:1;
u32 is_tcp_cso:1;
u32 is_cleaned:1;
u32 is_error:1;
u32 rsvd3:6;
+ u16 eop_index;
+ u16 rsvd4;
};
- u32 flags;
+ u64 flags;
};
};
u64 errors;
u64 packets;
u64 bytes;
+ u64 queue_restarts;
};
union aq_ring_stats_s {
int aq_ring_init(struct aq_ring_s *self);
void aq_ring_rx_deinit(struct aq_ring_s *self);
void aq_ring_free(struct aq_ring_s *self);
+void aq_ring_update_queue_state(struct aq_ring_s *ring);
+void aq_ring_queue_wake(struct aq_ring_s *ring);
+void aq_ring_queue_stop(struct aq_ring_s *ring);
void aq_ring_tx_clean(struct aq_ring_s *self);
int aq_ring_rx_clean(struct aq_ring_s *self,
struct napi_struct *napi,
if (ring[AQ_VEC_TX_ID].sw_head !=
ring[AQ_VEC_TX_ID].hw_head) {
aq_ring_tx_clean(&ring[AQ_VEC_TX_ID]);
-
- if (aq_ring_avail_dx(&ring[AQ_VEC_TX_ID]) >
- AQ_CFG_SKB_FRAGS_MAX) {
- aq_nic_ndev_queue_start(self->aq_nic,
- ring[AQ_VEC_TX_ID].idx);
- }
+ aq_ring_update_queue_state(&ring[AQ_VEC_TX_ID]);
was_tx_cleaned = true;
}
stats_tx->packets += tx->packets;
stats_tx->bytes += tx->bytes;
stats_tx->errors += tx->errors;
+ stats_tx->queue_restarts += tx->queue_restarts;
}
}
#include "../aq_common.h"
-#define HW_ATL_B0_MTU_JUMBO (16000U)
+#define HW_ATL_B0_MTU_JUMBO 16352U
#define HW_ATL_B0_MTU 1514U
#define HW_ATL_B0_TX_RINGS 4U
break;
default:
- link_status->mbps = 0U;
- break;
+ return -EBUSY;
}
}
netif_dbg(priv, hw, priv->netdev, "updated MIB counters\n");
}
+static void bcm_sysport_update_tx_stats(struct bcm_sysport_priv *priv,
+ u64 *tx_bytes, u64 *tx_packets)
+{
+ struct bcm_sysport_tx_ring *ring;
+ u64 bytes = 0, packets = 0;
+ unsigned int start;
+ unsigned int q;
+
+ for (q = 0; q < priv->netdev->num_tx_queues; q++) {
+ ring = &priv->tx_rings[q];
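+ /* Loop until a consistent 64-bit snapshot is read. */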
+ do {
+ start = u64_stats_fetch_begin_irq(&priv->syncp);
+ bytes = ring->bytes;
+ packets = ring->packets;
+ } while (u64_stats_fetch_retry_irq(&priv->syncp, start));
+
+ *tx_bytes += bytes;
+ *tx_packets += packets;
+ }
+}
+
static void bcm_sysport_get_stats(struct net_device *dev,
struct ethtool_stats *stats, u64 *data)
{
struct bcm_sysport_stats64 *stats64 = &priv->stats64;
struct u64_stats_sync *syncp = &priv->syncp;
struct bcm_sysport_tx_ring *ring;
+ u64 tx_bytes = 0, tx_packets = 0;
unsigned int start;
int i, j;
- if (netif_running(dev))
+ if (netif_running(dev)) {
bcm_sysport_update_mib_counters(priv);
+ bcm_sysport_update_tx_stats(priv, &tx_bytes, &tx_packets);
+ stats64->tx_bytes = tx_bytes;
+ stats64->tx_packets = tx_packets;
+ }
for (i = 0, j = 0; i < BCM_SYSPORT_STATS_LEN; i++) {
const struct bcm_sysport_stats *s;
continue;
p += s->stat_offset;
- if (s->stat_sizeof == sizeof(u64))
+ if (s->stat_sizeof == sizeof(u64) &&
+ s->type == BCM_SYSPORT_STAT_NETDEV64) {
do {
start = u64_stats_fetch_begin_irq(syncp);
data[i] = *(u64 *)p;
} while (u64_stats_fetch_retry_irq(syncp, start));
- else
+ } else
data[i] = *(u32 *)p;
j++;
}
{
struct bcm_sysport_priv *priv = netdev_priv(dev);
struct bcm_sysport_stats64 *stats64 = &priv->stats64;
- struct bcm_sysport_tx_ring *ring;
- u64 tx_packets = 0, tx_bytes = 0;
unsigned int start;
- unsigned int q;
netdev_stats_to_stats64(stats, &dev->stats);
- for (q = 0; q < dev->num_tx_queues; q++) {
- ring = &priv->tx_rings[q];
- do {
- start = u64_stats_fetch_begin_irq(&priv->syncp);
- tx_bytes = ring->bytes;
- tx_packets = ring->packets;
- } while (u64_stats_fetch_retry_irq(&priv->syncp, start));
-
- stats->tx_bytes += tx_bytes;
- stats->tx_packets += tx_packets;
- }
-
- stats64->tx_bytes = stats->tx_bytes;
- stats64->tx_packets = stats->tx_packets;
+ bcm_sysport_update_tx_stats(priv, &stats->tx_bytes,
+ &stats->tx_packets);
do {
start = u64_stats_fetch_begin_irq(&priv->syncp);
{
int rc = 0;
+ if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
+ cls_flower->common.chain_index)
+ return -EOPNOTSUPP;
+
switch (cls_flower->command) {
case TC_CLSFLOWER_REPLACE:
rc = bnxt_tc_add_flow(bp, src_fid, cls_flower);
ret = cnic_alloc_dma(dev, kwq_16_dma, pages, 0);
if (ret)
- return -ENOMEM;
+ goto error;
n = CNIC_PAGE_SIZE / CNIC_KWQ16_DATA_SIZE;
for (i = 0, j = 0; i < cp->max_cid_space; i++) {
#define FEC_ENET_TS_AVAIL ((uint)0x00010000)
#define FEC_ENET_TS_TIMER ((uint)0x00008000)
-#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII | FEC_ENET_TS_TIMER)
-#define FEC_NAPI_IMASK (FEC_ENET_MII | FEC_ENET_TS_TIMER)
+#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII)
+#define FEC_NAPI_IMASK FEC_ENET_MII
#define FEC_RX_DISABLED_IMASK (FEC_DEFAULT_IMASK & (~FEC_ENET_RXF))
/* ENET interrupt coalescing macro define */
if (int_events == 0)
return false;
- if (int_events & FEC_ENET_RXF)
+ if (int_events & FEC_ENET_RXF_0)
fep->work_rx |= (1 << 2);
if (int_events & FEC_ENET_RXF_1)
fep->work_rx |= (1 << 0);
if (int_events & FEC_ENET_RXF_2)
fep->work_rx |= (1 << 1);
- if (int_events & FEC_ENET_TXF)
+ if (int_events & FEC_ENET_TXF_0)
fep->work_tx |= (1 << 2);
if (int_events & FEC_ENET_TXF_1)
fep->work_tx |= (1 << 0);
}
if (fep->ptp_clock)
- fec_ptp_check_pps_event(fep);
-
+ if (fec_ptp_check_pps_event(fep))
+ ret = IRQ_HANDLED;
return ret;
}
}
static int hnae3_match_n_instantiate(struct hnae3_client *client,
- struct hnae3_ae_dev *ae_dev,
- bool is_reg, bool *matched)
+ struct hnae3_ae_dev *ae_dev, bool is_reg)
{
int ret;
- *matched = false;
-
/* check if this client matches the type of ae_dev */
if (!(hnae3_client_match(client->type, ae_dev->dev_type) &&
hnae_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B))) {
return 0;
}
- /* there is a match of client and dev */
- *matched = true;
/* now, (un-)instantiate client by calling lower layer */
if (is_reg) {
{
struct hnae3_client *client_tmp;
struct hnae3_ae_dev *ae_dev;
- bool matched;
int ret = 0;
mutex_lock(&hnae3_common_lock);
/* if the client could not be initialized on current port, for
* any error reasons, move on to next available port
*/
- ret = hnae3_match_n_instantiate(client, ae_dev, true, &matched);
+ ret = hnae3_match_n_instantiate(client, ae_dev, true);
if (ret)
dev_err(&ae_dev->pdev->dev,
"match and instantiation failed for port\n");
void hnae3_unregister_client(struct hnae3_client *client)
{
struct hnae3_ae_dev *ae_dev;
- bool matched;
mutex_lock(&hnae3_common_lock);
/* un-initialize the client on every matched port */
list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
- hnae3_match_n_instantiate(client, ae_dev, false, &matched);
+ hnae3_match_n_instantiate(client, ae_dev, false);
}
list_del(&client->node);
const struct pci_device_id *id;
struct hnae3_ae_dev *ae_dev;
struct hnae3_client *client;
- bool matched;
int ret = 0;
mutex_lock(&hnae3_common_lock);
* initialize the figure out client instance
*/
list_for_each_entry(client, &hnae3_client_list, node) {
- ret = hnae3_match_n_instantiate(client, ae_dev, true,
- &matched);
+ ret = hnae3_match_n_instantiate(client, ae_dev, true);
if (ret)
dev_err(&ae_dev->pdev->dev,
"match and instantiation failed\n");
- if (matched)
- break;
}
}
const struct pci_device_id *id;
struct hnae3_ae_dev *ae_dev;
struct hnae3_client *client;
- bool matched;
mutex_lock(&hnae3_common_lock);
/* Check if there are matched ae_dev */
/* check the client list for the match with this ae_dev type and
* un-initialize the figure out client instance
*/
- list_for_each_entry(client, &hnae3_client_list, node) {
- hnae3_match_n_instantiate(client, ae_dev, false,
- &matched);
- if (matched)
- break;
- }
+ list_for_each_entry(client, &hnae3_client_list, node)
+ hnae3_match_n_instantiate(client, ae_dev, false);
ae_algo->ops->uninit_ae_dev(ae_dev);
hnae_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
const struct pci_device_id *id;
struct hnae3_ae_algo *ae_algo;
struct hnae3_client *client;
- bool matched;
int ret = 0;
mutex_lock(&hnae3_common_lock);
* initialize the figure out client instance
*/
list_for_each_entry(client, &hnae3_client_list, node) {
- ret = hnae3_match_n_instantiate(client, ae_dev, true,
- &matched);
+ ret = hnae3_match_n_instantiate(client, ae_dev, true);
if (ret)
dev_err(&ae_dev->pdev->dev,
"match and instantiation failed\n");
- if (matched)
- break;
}
out_err:
const struct pci_device_id *id;
struct hnae3_ae_algo *ae_algo;
struct hnae3_client *client;
- bool matched;
mutex_lock(&hnae3_common_lock);
/* Check if there are matched ae_algo */
if (!id)
continue;
- list_for_each_entry(client, &hnae3_client_list, node) {
- hnae3_match_n_instantiate(client, ae_dev, false,
- &matched);
- if (matched)
- break;
- }
+ list_for_each_entry(client, &hnae3_client_list, node)
+ hnae3_match_n_instantiate(client, ae_dev, false);
ae_algo->ops->uninit_ae_dev(ae_dev);
hnae_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
#define HNAE3_CLASS_NAME_SIZE 16
#define HNAE3_DEV_INITED_B 0x0
-#define HNAE_DEV_SUPPORT_ROCE_B 0x1
+#define HNAE3_DEV_SUPPORT_ROCE_B 0x1
+#define HNAE3_DEV_SUPPORT_DCB_B 0x2
+
+#define HNAE3_DEV_SUPPORT_ROCE_DCB_BITS (BIT(HNAE3_DEV_SUPPORT_DCB_B) |\
+ BIT(HNAE3_DEV_SUPPORT_ROCE_B))
+
+#define hnae3_dev_roce_supported(hdev) \
+ hnae_get_bit(hdev->ae_dev->flag, HNAE3_DEV_SUPPORT_ROCE_B)
+
+#define hnae3_dev_dcb_supported(hdev) \
+ hnae_get_bit(hdev->ae_dev->flag, HNAE3_DEV_SUPPORT_DCB_B)
#define ring_ptr_move_fw(ring, p) \
((ring)->p = ((ring)->p + 1) % (ring)->desc_num)
struct hnae3_tc_info {
u16 tqp_offset; /* TQP offset from base TQP */
u16 tqp_count; /* Total TQPs */
- u8 up; /* user priority */
u8 tc; /* TC index */
bool enable; /* If this TC is enable or not */
};
#define HNAE3_MAX_TC 8
+#define HNAE3_MAX_USER_PRIO 8
struct hnae3_knic_private_info {
struct net_device *netdev; /* Set by KNIC client when init instance */
u16 rss_size; /* Allocated RSS queues */
u16 num_desc;
u8 num_tc; /* Total number of enabled TCs */
+ u8 prio_tc[HNAE3_MAX_USER_PRIO]; /* TC indexed by prio */
struct hnae3_tc_info tc_info[HNAE3_MAX_TC]; /* Idx of array is HW TC */
u16 num_tqps; /* total number of TQPs in this handle */
u8 rsv[18];
};
-#define HCLGE_VECTOR_ELEMENTS_PER_CMD 11
+#define HCLGE_VECTOR_ELEMENTS_PER_CMD 10
enum hclge_int_type {
HCLGE_INT_TX,
#define HCLGE_INT_TYPE_S 0
#define HCLGE_INT_TYPE_M 0x3
#define HCLGE_TQP_ID_S 2
-#define HCLGE_TQP_ID_M (0x3fff << HCLGE_TQP_ID_S)
+#define HCLGE_TQP_ID_M (0x7ff << HCLGE_TQP_ID_S)
+#define HCLGE_INT_GL_IDX_S 13
+#define HCLGE_INT_GL_IDX_M (0x3 << HCLGE_INT_GL_IDX_S)
__le16 tqp_type_and_id[HCLGE_VECTOR_ELEMENTS_PER_CMD];
+ u8 vfid;
+ u8 rsv;
};
#define HCLGE_TC_NUM 8
struct hclge_rx_priv_buff {
__le16 buf_num[HCLGE_TC_NUM];
- u8 rsv[8];
+ __le16 shared_buf;
+ u8 rsv[6];
};
struct hclge_query_version {
#define HCLGE_DEFAULT_TX_BUF 0x4000 /* 16k bytes */
#define HCLGE_TOTAL_PKT_BUF 0x108000 /* 1.03125M bytes */
#define HCLGE_DEFAULT_DV 0xA000 /* 40k byte */
+#define HCLGE_DEFAULT_NON_DCB_DV 0x7800 /* 30K byte */
#define HCLGE_TYPE_CRQ 0
#define HCLGE_TYPE_CSQ 1
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA), 0},
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA_MACSEC), 0},
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC), 0},
- /* Required last entry */
- {0, }
-};
-
-static const struct pci_device_id roce_pci_tbl[] = {
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA_MACSEC), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA_MACSEC), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC), 0},
- /* Required last entry */
+ /* required last entry */
{0, }
};
hdev->num_tqps = __le16_to_cpu(req->tqp_num);
hdev->pkt_buf_size = __le16_to_cpu(req->buf_size) << HCLGE_BUF_UNIT_S;
- if (hnae_get_bit(hdev->ae_dev->flag, HNAE_DEV_SUPPORT_ROCE_B)) {
+ if (hnae3_dev_roce_supported(hdev)) {
hdev->num_roce_msix =
hnae_get_field(__le16_to_cpu(req->pf_intr_vector_number),
HCLGE_PF_VEC_NUM_M, HCLGE_PF_VEC_NUM_S);
hdev->base_tqp_pid = 0;
hdev->rss_size_max = 1;
hdev->rx_buf_len = cfg.rx_buf_len;
- for (i = 0; i < ETH_ALEN; i++)
- hdev->hw.mac.mac_addr[i] = cfg.mac_addr[i];
+ ether_addr_copy(hdev->hw.mac.mac_addr, cfg.mac_addr);
hdev->hw.mac.media_type = cfg.media_type;
+ hdev->hw.mac.phy_addr = cfg.phy_addr;
hdev->num_desc = cfg.tqp_desc_num;
hdev->tm_info.num_pg = 1;
hdev->tm_info.num_tc = cfg.tc_num;
tc_num = hclge_get_tc_num(hdev);
pfc_enable_num = hclge_get_pfc_enalbe_num(hdev);
- shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_DV;
+ if (hnae3_dev_dcb_supported(hdev))
+ shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_DV;
+ else
+ shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_NON_DCB_DV;
+
shared_buf_tc = pfc_enable_num * hdev->mps +
(tc_num - pfc_enable_num) * hdev->mps / 2 +
hdev->mps;
struct hclge_priv_buf *priv;
int i;
+ /* When DCB is not supported, rx private
+ * buffer is not allocated.
+ */
+ if (!hnae3_dev_dcb_supported(hdev)) {
+ if (!hclge_is_rx_buf_ok(hdev, rx_all))
+ return -ENOMEM;
+
+ return 0;
+ }
+
/* step 1, try to alloc private buffer for all enabled tc */
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
priv = &hdev->priv_buf[i];
priv->wl.high = 2 * hdev->mps;
priv->buf_size = priv->wl.high;
}
+ } else {
+ priv->enable = 0;
+ priv->wl.low = 0;
+ priv->wl.high = 0;
+ priv->buf_size = 0;
}
}
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
priv = &hdev->priv_buf[i];
- if (hdev->hw_tc_map & BIT(i))
- priv->enable = 1;
+ priv->enable = 0;
+ priv->wl.low = 0;
+ priv->wl.high = 0;
+ priv->buf_size = 0;
+
+ if (!(hdev->hw_tc_map & BIT(i)))
+ continue;
+
+ priv->enable = 1;
if (hdev->tm_info.hw_pfc_map & BIT(i)) {
priv->wl.low = 128;
cpu_to_le16(true << HCLGE_TC0_PRI_BUF_EN_B);
}
+ req->shared_buf =
+ cpu_to_le16((hdev->s_buf.buf_size >> HCLGE_BUF_UNIT_S) |
+ (1 << HCLGE_TC0_PRI_BUF_EN_B));
+
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret) {
dev_err(&hdev->pdev->dev,
return ret;
}
- ret = hclge_rx_priv_wl_config(hdev);
- if (ret) {
- dev_err(&hdev->pdev->dev,
- "could not configure rx private waterline %d\n", ret);
- return ret;
- }
+ if (hnae3_dev_dcb_supported(hdev)) {
+ ret = hclge_rx_priv_wl_config(hdev);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "could not configure rx private waterline %d\n",
+ ret);
+ return ret;
+ }
- ret = hclge_common_thrd_config(hdev);
- if (ret) {
- dev_err(&hdev->pdev->dev,
- "could not configure common threshold %d\n", ret);
- return ret;
+ ret = hclge_common_thrd_config(hdev);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "could not configure common threshold %d\n",
+ ret);
+ return ret;
+ }
}
ret = hclge_common_wl_config(hdev);
u16 tc_valid[HCLGE_MAX_TC_NUM];
u16 tc_size[HCLGE_MAX_TC_NUM];
u32 *rss_indir = NULL;
+ u16 rss_size = 0, roundup_size;
const u8 *key;
int i, ret, j;
for (j = 0; j < hdev->num_vmdq_vport + 1; j++) {
for (i = 0; i < HCLGE_RSS_IND_TBL_SIZE; i++) {
vport[j].rss_indirection_tbl[i] =
- i % hdev->rss_size_max;
+ i % vport[j].alloc_rss_size;
+
+ /* vport 0 is for PF */
+ if (j != 0)
+ continue;
+
+ rss_size = vport[j].alloc_rss_size;
rss_indir[i] = vport[j].rss_indirection_tbl[i];
}
}
if (ret)
goto err;
+ /* Each TC has the same queue size, and the tc_size set to hardware is
+ * the log2 of the roundup power of two of rss_size; the actual queue
+ * size is limited by the indirection table.
+ */
+ if (rss_size > HCLGE_RSS_TC_SIZE_7 || rss_size == 0) {
+ dev_err(&hdev->pdev->dev,
+ "Configure rss tc size failed, invalid TC_SIZE = %d\n",
+ rss_size);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ roundup_size = roundup_pow_of_two(rss_size);
+ roundup_size = ilog2(roundup_size);
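+ /* e.g. rss_size 24 -> roundup_pow_of_two() 32 -> tc_size ilog2(32) == 5 */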
+
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
- if (hdev->hw_tc_map & BIT(i))
- tc_valid[i] = 1;
- else
- tc_valid[i] = 0;
+ tc_valid[i] = 0;
- switch (hdev->rss_size_max) {
- case HCLGE_RSS_TC_SIZE_0:
- tc_size[i] = 0;
- break;
- case HCLGE_RSS_TC_SIZE_1:
- tc_size[i] = 1;
- break;
- case HCLGE_RSS_TC_SIZE_2:
- tc_size[i] = 2;
- break;
- case HCLGE_RSS_TC_SIZE_3:
- tc_size[i] = 3;
- break;
- case HCLGE_RSS_TC_SIZE_4:
- tc_size[i] = 4;
- break;
- case HCLGE_RSS_TC_SIZE_5:
- tc_size[i] = 5;
- break;
- case HCLGE_RSS_TC_SIZE_6:
- tc_size[i] = 6;
- break;
- case HCLGE_RSS_TC_SIZE_7:
- tc_size[i] = 7;
- break;
- default:
- break;
- }
- tc_offset[i] = hdev->rss_size_max * i;
+ if (!(hdev->hw_tc_map & BIT(i)))
+ continue;
+
+ tc_valid[i] = 1;
+ tc_size[i] = roundup_size;
+ tc_offset[i] = rss_size * i;
}
+
ret = hclge_set_rss_tc_mode(hdev, tc_valid, tc_size, tc_offset);
err:
hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
hnae_set_field(req->tqp_type_and_id[i], HCLGE_TQP_ID_M,
HCLGE_TQP_ID_S, node->tqp_index);
+ hnae_set_field(req->tqp_type_and_id[i], HCLGE_INT_GL_IDX_M,
+ HCLGE_INT_GL_IDX_S,
+ hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
req->tqp_type_and_id[i] = cpu_to_le16(req->tqp_type_and_id[i]);
+ req->vfid = vport->vport_id;
if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) {
req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD;
hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
hnae_set_field(req->tqp_type_and_id[i], HCLGE_TQP_ID_M,
HCLGE_TQP_ID_S, node->tqp_index);
+ hnae_set_field(req->tqp_type_and_id[i], HCLGE_INT_GL_IDX_M,
+ HCLGE_INT_GL_IDX_S,
+ hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
req->tqp_type_and_id[i] = cpu_to_le16(req->tqp_type_and_id[i]);
+ req->vfid = vport->vport_id;
if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) {
req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD;
}
i = 0;
hclge_cmd_setup_basic_desc(&desc,
- HCLGE_OPC_ADD_RING_TO_VECTOR,
+ HCLGE_OPC_DEL_RING_TO_VECTOR,
false);
req->int_vector_id = vector_id;
}
{
#define HCLGE_VLAN_TYPE_VF_TABLE 0
#define HCLGE_VLAN_TYPE_PORT_TABLE 1
+ struct hnae3_handle *handle;
int ret;
ret = hclge_set_vlan_filter_ctrl(hdev, HCLGE_VLAN_TYPE_VF_TABLE,
ret = hclge_set_vlan_filter_ctrl(hdev, HCLGE_VLAN_TYPE_PORT_TABLE,
true);
+ if (ret)
+ return ret;
- return ret;
+ handle = &hdev->vport[0].nic;
+ return hclge_set_port_vlan_filter(handle, htons(ETH_P_8021Q), 0, false);
}
static int hclge_set_mtu(struct hnae3_handle *handle, int new_mtu)
goto err;
if (hdev->roce_client &&
- hnae_get_bit(hdev->ae_dev->flag,
- HNAE_DEV_SUPPORT_ROCE_B)) {
+ hnae3_dev_roce_supported(hdev)) {
struct hnae3_client *rc = hdev->roce_client;
ret = hclge_init_roce_base_info(vport);
break;
case HNAE3_CLIENT_ROCE:
- if (hnae_get_bit(hdev->ae_dev->flag,
- HNAE_DEV_SUPPORT_ROCE_B)) {
+ if (hnae3_dev_roce_supported(hdev)) {
hdev->roce_client = client;
vport->roce.client = client;
}
static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
{
struct pci_dev *pdev = ae_dev->pdev;
- const struct pci_device_id *id;
struct hclge_dev *hdev;
int ret;
hdev->ae_dev = ae_dev;
ae_dev->priv = hdev;
- id = pci_match_id(roce_pci_tbl, ae_dev->pdev);
- if (id)
- hnae_set_bit(ae_dev->flag, HNAE_DEV_SUPPORT_ROCE_B, 1);
-
ret = hclge_pci_init(hdev);
if (ret) {
dev_err(&pdev->dev, "PCI init failed\n");
return ret;
}
- ret = hclge_rss_init_hw(hdev);
- if (ret) {
- dev_err(&pdev->dev, "Rss init fail, ret =%d\n", ret);
- return ret;
- }
-
ret = hclge_init_vlan_config(hdev);
if (ret) {
dev_err(&pdev->dev, "VLAN init fail, ret =%d\n", ret);
return ret;
}
+ ret = hclge_rss_init_hw(hdev);
+ if (ret) {
+ dev_err(&pdev->dev, "Rss init fail, ret =%d\n", ret);
+ return ret;
+ }
+
setup_timer(&hdev->service_timer, hclge_service_timer,
(unsigned long)hdev);
INIT_WORK(&hdev->service_task, hclge_service_task);
struct hclge_tc_info {
u8 tc_id;
u8 tc_sch_mode; /* 0: sp; 1: dwrr */
- u8 up;
u8 pgid;
u32 bw_limit;
};
u8 num_tc;
u8 num_pg; /* It must be 1 if vNET-Base schd */
u8 pg_dwrr[HCLGE_PG_NUM];
+ u8 prio_tc[HNAE3_MAX_USER_PRIO];
struct hclge_pg_info pg_info[HCLGE_PG_NUM];
struct hclge_tc_info tc_info[HNAE3_MAX_TC];
enum hclge_fc_mode fc_mode;
u8 rss_hash_key[HCLGE_RSS_KEY_SIZE]; /* User configured hash keys */
/* User configured lookup table entries */
u8 rss_indirection_tbl[HCLGE_RSS_IND_TBL_SIZE];
+ u16 alloc_rss_size;
u16 qs_offset;
u16 bw_limit; /* VSI BW Limit (0 = disabled) */
{
u8 tc;
- for (tc = 0; tc < hdev->tm_info.num_tc; tc++)
- if (hdev->tm_info.tc_info[tc].up == pri_id)
- break;
+ tc = hdev->tm_info.prio_tc[pri_id];
if (tc >= hdev->tm_info.num_tc)
return -EINVAL;
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_PRI_TO_TC_MAPPING, false);
- for (pri_id = 0; pri_id < hdev->tm_info.num_tc; pri_id++) {
+ for (pri_id = 0; pri_id < HNAE3_MAX_USER_PRIO; pri_id++) {
ret = hclge_fill_pri_array(hdev, pri, pri_id);
if (ret)
return ret;
shap_cfg_cmd->pg_id = pg_id;
- hclge_tm_set_feild(shap_cfg_cmd->pg_shapping_para, IR_B, ir_b);
- hclge_tm_set_feild(shap_cfg_cmd->pg_shapping_para, IR_U, ir_u);
- hclge_tm_set_feild(shap_cfg_cmd->pg_shapping_para, IR_S, ir_s);
- hclge_tm_set_feild(shap_cfg_cmd->pg_shapping_para, BS_B, bs_b);
- hclge_tm_set_feild(shap_cfg_cmd->pg_shapping_para, BS_S, bs_s);
+ hclge_tm_set_field(shap_cfg_cmd->pg_shapping_para, IR_B, ir_b);
+ hclge_tm_set_field(shap_cfg_cmd->pg_shapping_para, IR_U, ir_u);
+ hclge_tm_set_field(shap_cfg_cmd->pg_shapping_para, IR_S, ir_s);
+ hclge_tm_set_field(shap_cfg_cmd->pg_shapping_para, BS_B, bs_b);
+ hclge_tm_set_field(shap_cfg_cmd->pg_shapping_para, BS_S, bs_s);
return hclge_cmd_send(&hdev->hw, &desc, 1);
}
shap_cfg_cmd->pri_id = pri_id;
- hclge_tm_set_feild(shap_cfg_cmd->pri_shapping_para, IR_B, ir_b);
- hclge_tm_set_feild(shap_cfg_cmd->pri_shapping_para, IR_U, ir_u);
- hclge_tm_set_feild(shap_cfg_cmd->pri_shapping_para, IR_S, ir_s);
- hclge_tm_set_feild(shap_cfg_cmd->pri_shapping_para, BS_B, bs_b);
- hclge_tm_set_feild(shap_cfg_cmd->pri_shapping_para, BS_S, bs_s);
+ hclge_tm_set_field(shap_cfg_cmd->pri_shapping_para, IR_B, ir_b);
+ hclge_tm_set_field(shap_cfg_cmd->pri_shapping_para, IR_U, ir_u);
+ hclge_tm_set_field(shap_cfg_cmd->pri_shapping_para, IR_S, ir_s);
+ hclge_tm_set_field(shap_cfg_cmd->pri_shapping_para, BS_B, bs_b);
+ hclge_tm_set_field(shap_cfg_cmd->pri_shapping_para, BS_S, bs_s);
return hclge_cmd_send(&hdev->hw, &desc, 1);
}
kinfo->num_tqps / kinfo->num_tc);
vport->qs_offset = hdev->tm_info.num_tc * vport->vport_id;
vport->dwrr = 100; /* 100 percent as init */
+ vport->alloc_rss_size = kinfo->rss_size;
for (i = 0; i < kinfo->num_tc; i++) {
if (hdev->hw_tc_map & BIT(i)) {
kinfo->tc_info[i].tqp_offset = i * kinfo->rss_size;
kinfo->tc_info[i].tqp_count = kinfo->rss_size;
kinfo->tc_info[i].tc = i;
- kinfo->tc_info[i].up = hdev->tm_info.tc_info[i].up;
} else {
/* Set to default queue if TC is disabled */
kinfo->tc_info[i].enable = false;
kinfo->tc_info[i].tqp_offset = 0;
kinfo->tc_info[i].tqp_count = 1;
kinfo->tc_info[i].tc = 0;
- kinfo->tc_info[i].up = 0;
}
}
+
+ memcpy(kinfo->prio_tc, hdev->tm_info.prio_tc,
+ FIELD_SIZEOF(struct hnae3_knic_private_info, prio_tc));
}
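FIELD_SIZEOF() keeps the memcpy length tied to the destination member rather than to a repeated constant, and it needs no object because it measures the member through a NULL-typed pointer. Its kernel definition is, approximately:

	#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))

	/* so the copy length above is the size of the prio_tc array inside
	 * struct hnae3_knic_private_info, and it tracks the struct
	 * automatically if that array ever grows.
	 */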
static void hclge_tm_vport_info_update(struct hclge_dev *hdev)
for (i = 0; i < hdev->tm_info.num_tc; i++) {
hdev->tm_info.tc_info[i].tc_id = i;
hdev->tm_info.tc_info[i].tc_sch_mode = HCLGE_SCH_MODE_DWRR;
- hdev->tm_info.tc_info[i].up = i;
hdev->tm_info.tc_info[i].pgid = 0;
hdev->tm_info.tc_info[i].bw_limit =
hdev->tm_info.pg_info[0].bw_limit;
}
+ for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
+ hdev->tm_info.prio_tc[i] =
+ (i >= hdev->tm_info.num_tc) ? 0 : i;
+
hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
}
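The hunks above replace the per-TC `up` field with a dedicated user-priority-to-TC lookup table: hclge_fill_pri_array() now indexes prio_tc[] directly instead of scanning tc_info[]. A minimal sketch of the table the init loop builds, assuming HNAE3_MAX_USER_PRIO is 8 (the 802.1p priority count) and an illustrative num_tc of 4:

	u8 prio_tc[8];	/* assuming HNAE3_MAX_USER_PRIO == 8 */
	int i, num_tc = 4;

	for (i = 0; i < 8; i++)
		prio_tc[i] = (i >= num_tc) ? 0 : i;
	/* prio_tc == { 0, 1, 2, 3, 0, 0, 0, 0 }: priorities beyond the
	 * last active TC fall back to TC 0, and each lookup is O(1).
	 */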
if (ret)
return ret;
+ /* Only DCB-capable devices support the qset back-pressure setting */
+ if (!hnae3_dev_dcb_supported(hdev))
+ return 0;
+
for (i = 0; i < hdev->tm_info.num_tc; i++) {
ret = hclge_tm_qs_bp_cfg(hdev, i);
if (ret)
u32 rsvd1;
};
-#define hclge_tm_set_feild(dest, string, val) \
+#define hclge_tm_set_field(dest, string, val) \
hnae_set_field((dest), (HCLGE_TM_SHAP_##string##_MSK), \
(HCLGE_TM_SHAP_##string##_LSH), val)
-#define hclge_tm_get_feild(src, string) \
+#define hclge_tm_get_field(src, string) \
hnae_get_field((src), (HCLGE_TM_SHAP_##string##_MSK), \
(HCLGE_TM_SHAP_##string##_LSH))
static const struct pci_device_id hns3_pci_tbl[] = {
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_GE), 0},
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA_MACSEC), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA_MACSEC), 0},
- {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC), 0},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA),
+ HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA_MACSEC),
+ HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA),
+ HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA_MACSEC),
+ HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC),
+ HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
/* required last entry */
{0, }
};
}
ae_dev->pdev = pdev;
+ ae_dev->flag = ent->driver_data;
ae_dev->dev_type = HNAE3_DEV_KNIC;
pci_set_drvdata(pdev, ae_dev);
eth_hw_addr_random(netdev);
dev_warn(priv->dev, "using random MAC address %pM\n",
netdev->dev_addr);
- /* Also copy this new MAC address into hdev */
- if (h->ae_algo->ops->set_mac_addr)
- h->ae_algo->ops->set_mac_addr(h, netdev->dev_addr);
}
+
+ if (h->ae_algo->ops->set_mac_addr)
+ h->ae_algo->ops->set_mac_addr(h, netdev->dev_addr);
+
}
static void hns3_nic_set_priv_ops(struct net_device *netdev)
unsigned long flags;
MAL_DBG2(mal, "poll(%d)" NL, budget);
- again:
+
/* Process TX skbs */
list_for_each(l, &mal->poll_list) {
struct mal_commac *mc =
spin_lock_irqsave(&mal->lock, flags);
mal_disable_eob_irq(mal);
spin_unlock_irqrestore(&mal->lock, flags);
- goto again;
}
mc->ops->poll_tx(mc->dev);
}
#define MVPP2_GMAC_INBAND_AN_MASK BIT(0)
#define MVPP2_GMAC_FLOW_CTRL_MASK GENMASK(2, 1)
#define MVPP2_GMAC_PCS_ENABLE_MASK BIT(3)
-#define MVPP2_GMAC_PORT_RGMII_MASK BIT(4)
+#define MVPP2_GMAC_INTERNAL_CLK_MASK BIT(4)
#define MVPP2_GMAC_DISABLE_PADDING BIT(5)
#define MVPP2_GMAC_PORT_RESET_MASK BIT(6)
#define MVPP2_GMAC_AUTONEG_CONFIG 0xc
#define MVPP2_PRS_RI_L3_MCAST BIT(15)
#define MVPP2_PRS_RI_L3_BCAST (BIT(15) | BIT(16))
#define MVPP2_PRS_RI_IP_FRAG_MASK 0x20000
+#define MVPP2_PRS_RI_IP_FRAG_TRUE BIT(17)
#define MVPP2_PRS_RI_UDF3_MASK 0x300000
#define MVPP2_PRS_RI_UDF3_RX_SPECIAL BIT(21)
#define MVPP2_PRS_RI_L4_PROTO_MASK 0x1c00000
struct clk *pp_clk;
struct clk *gop_clk;
struct clk *mg_clk;
+ struct clk *axi_clk;
/* List of pointers to port structures */
struct mvpp2_port **port_list;
(proto != IPPROTO_IGMP))
return -EINVAL;
- /* Fragmented packet */
+ /* Not fragmented packet */
tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
MVPP2_PE_LAST_FREE_TID);
if (tid < 0)
MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
mvpp2_prs_sram_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
MVPP2_PRS_IPV4_DIP_AI_BIT);
- mvpp2_prs_sram_ri_update(&pe, ri | MVPP2_PRS_RI_IP_FRAG_MASK,
- ri_mask | MVPP2_PRS_RI_IP_FRAG_MASK);
+ mvpp2_prs_sram_ri_update(&pe, ri, ri_mask | MVPP2_PRS_RI_IP_FRAG_MASK);
+
+ mvpp2_prs_tcam_data_byte_set(&pe, 2, 0x00,
+ MVPP2_PRS_TCAM_PROTO_MASK_L);
+ mvpp2_prs_tcam_data_byte_set(&pe, 3, 0x00,
+ MVPP2_PRS_TCAM_PROTO_MASK);
mvpp2_prs_tcam_data_byte_set(&pe, 5, proto, MVPP2_PRS_TCAM_PROTO_MASK);
mvpp2_prs_tcam_ai_update(&pe, 0, MVPP2_PRS_IPV4_DIP_AI_BIT);
mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_IP4);
mvpp2_prs_hw_write(priv, &pe);
- /* Not fragmented packet */
+ /* Fragmented packet */
tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
MVPP2_PE_LAST_FREE_TID);
if (tid < 0)
pe.sram.word[MVPP2_PRS_SRAM_RI_CTRL_WORD] = 0x0;
mvpp2_prs_sram_ri_update(&pe, ri, ri_mask);
- mvpp2_prs_tcam_data_byte_set(&pe, 2, 0x00, MVPP2_PRS_TCAM_PROTO_MASK_L);
- mvpp2_prs_tcam_data_byte_set(&pe, 3, 0x00, MVPP2_PRS_TCAM_PROTO_MASK);
+ mvpp2_prs_sram_ri_update(&pe, ri | MVPP2_PRS_RI_IP_FRAG_TRUE,
+ ri_mask | MVPP2_PRS_RI_IP_FRAG_MASK);
+
+ mvpp2_prs_tcam_data_byte_set(&pe, 2, 0x00, 0x0);
+ mvpp2_prs_tcam_data_byte_set(&pe, 3, 0x00, 0x0);
/* Update shadow table and hw entry */
mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_IP4);
val |= MVPP2_GMAC_INBAND_AN_MASK | MVPP2_GMAC_PCS_ENABLE_MASK;
} else if (phy_interface_mode_is_rgmii(port->phy_interface)) {
val &= ~MVPP2_GMAC_PCS_ENABLE_MASK;
- val |= MVPP2_GMAC_PORT_RGMII_MASK;
}
writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
/* Ports initialization */
static int mvpp2_port_probe(struct platform_device *pdev,
struct device_node *port_node,
- struct mvpp2 *priv)
+ struct mvpp2 *priv, int index)
{
struct device_node *phy_node;
struct phy *comphy;
}
netdev_info(dev, "Using %s mac address %pM\n", mac_from, dev->dev_addr);
- priv->port_list[id] = port;
+ priv->port_list[index] = port;
return 0;
err_free_port_pcpu:
err = clk_prepare_enable(priv->mg_clk);
if (err < 0)
goto err_gop_clk;
+
+ priv->axi_clk = devm_clk_get(&pdev->dev, "axi_clk");
+ if (IS_ERR(priv->axi_clk)) {
+ err = PTR_ERR(priv->axi_clk);
+ if (err == -EPROBE_DEFER)
+ goto err_gop_clk;
+ priv->axi_clk = NULL;
+ } else {
+ err = clk_prepare_enable(priv->axi_clk);
+ if (err < 0)
+ goto err_gop_clk;
+ }
}
/* Get system's tclk rate */
}
/* Initialize ports */
+ i = 0;
for_each_available_child_of_node(dn, port_node) {
- err = mvpp2_port_probe(pdev, port_node, priv);
+ err = mvpp2_port_probe(pdev, port_node, priv, i);
if (err < 0)
goto err_mg_clk;
+ i++;
}
platform_set_drvdata(pdev, priv);
return 0;
err_mg_clk:
+ clk_disable_unprepare(priv->axi_clk);
if (priv->hw_version == MVPP22)
clk_disable_unprepare(priv->mg_clk);
err_gop_clk:
aggr_txq->descs_dma);
}
+ clk_disable_unprepare(priv->axi_clk);
clk_disable_unprepare(priv->mg_clk);
clk_disable_unprepare(priv->pp_clk);
clk_disable_unprepare(priv->gop_clk);
{MLX5_FLOW_CONTEXT_ACTION_FWD_NEXT_PRIO, "NEXT_PRIO"}
TRACE_EVENT(mlx5_fs_set_fte,
- TP_PROTO(const struct fs_fte *fte, bool new_fte),
+ TP_PROTO(const struct fs_fte *fte, int new_fte),
TP_ARGS(fte, new_fte),
TP_STRUCT__entry(
__field(const struct fs_fte *, fte)
__field(u32, action)
__field(u32, flow_tag)
__field(u8, mask_enable)
- __field(bool, new_fte)
+ __field(int, new_fte)
__array(u32, mask_outer, MLX5_ST_SZ_DW(fte_match_set_lyr_2_4))
__array(u32, mask_inner, MLX5_ST_SZ_DW(fte_match_set_lyr_2_4))
__array(u32, mask_misc, MLX5_ST_SZ_DW(fte_match_set_misc))
priv->fs.vlan.filter_disabled = false;
if (priv->netdev->flags & IFF_PROMISC)
return;
- mlx5e_del_any_vid_rules(priv);
+ mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_CTAG_VID, 0);
}
void mlx5e_disable_vlan_filter(struct mlx5e_priv *priv)
priv->fs.vlan.filter_disabled = true;
if (priv->netdev->flags & IFF_PROMISC)
return;
- mlx5e_add_any_vid_rules(priv);
+ mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_CTAG_VID, 0);
}
int mlx5e_vlan_rx_add_vid(struct net_device *dev, __always_unused __be16 proto,
struct mlx5e_sw_stats temp, *s = &temp;
struct mlx5e_rq_stats *rq_stats;
struct mlx5e_sq_stats *sq_stats;
- u64 tx_offload_none = 0;
int i, j;
memset(s, 0, sizeof(*s));
s->rx_lro_bytes += rq_stats->lro_bytes;
s->rx_csum_none += rq_stats->csum_none;
s->rx_csum_complete += rq_stats->csum_complete;
+ s->rx_csum_unnecessary += rq_stats->csum_unnecessary;
s->rx_csum_unnecessary_inner += rq_stats->csum_unnecessary_inner;
s->rx_xdp_drop += rq_stats->xdp_drop;
s->rx_xdp_tx += rq_stats->xdp_tx;
s->tx_queue_dropped += sq_stats->dropped;
s->tx_xmit_more += sq_stats->xmit_more;
s->tx_csum_partial_inner += sq_stats->csum_partial_inner;
- tx_offload_none += sq_stats->csum_none;
+ s->tx_csum_none += sq_stats->csum_none;
+ s->tx_csum_partial += sq_stats->csum_partial;
}
}
- /* Update calculated offload counters */
- s->tx_csum_partial = s->tx_packets - tx_offload_none - s->tx_csum_partial_inner;
- s->rx_csum_unnecessary = s->rx_packets - s->rx_csum_none - s->rx_csum_complete;
-
s->link_down_events_phy = MLX5_GET(ppcnt_reg,
priv->stats.pport.phy_counters,
counter_set.phys_layer_cntrs.link_down_events);
err = feature_handler(netdev, enable);
if (err) {
- netdev_err(netdev, "%s feature 0x%llx failed err %d\n",
- enable ? "Enable" : "Disable", feature, err);
+ netdev_err(netdev, "%s feature %pNF failed, err %d\n",
+ enable ? "Enable" : "Disable", &feature, err);
return err;
}
if (lro) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ rq->stats.csum_unnecessary++;
return;
}
skb->csum_level = 1;
skb->encapsulation = 1;
rq->stats.csum_unnecessary_inner++;
+ return;
}
+ rq->stats.csum_unnecessary++;
return;
}
csum_none:
u64 rx_xdp_drop;
u64 rx_xdp_tx;
u64 rx_xdp_tx_full;
+ u64 tx_csum_none;
u64 tx_csum_partial;
u64 tx_csum_partial_inner;
u64 tx_queue_stopped;
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_drop) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_full) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_csum_none) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_csum_partial) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_csum_partial_inner) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_stopped) },
u64 packets;
u64 bytes;
u64 csum_complete;
+ u64 csum_unnecessary;
u64 csum_unnecessary_inner;
u64 csum_none;
u64 lro_packets;
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, packets) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, bytes) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_complete) },
+ { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_unnecessary) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_unnecessary_inner) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_none) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, xdp_drop) },
u64 tso_bytes;
u64 tso_inner_packets;
u64 tso_inner_bytes;
+ u64 csum_partial;
u64 csum_partial_inner;
u64 nop;
/* less likely accessed in data path */
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tso_bytes) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tso_inner_packets) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tso_inner_bytes) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_partial) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_partial_inner) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, nop) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_none) },
return true;
}
+static bool modify_header_match_supported(struct mlx5_flow_spec *spec,
+ struct tcf_exts *exts)
+{
+ const struct tc_action *a;
+ bool modify_ip_header;
+ LIST_HEAD(actions);
+ u8 htype, ip_proto;
+ void *headers_v;
+ u16 ethertype;
+ int nkeys, i;
+
+ headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, outer_headers);
+ ethertype = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ethertype);
+
+ /* for non-IP we only re-write MACs, so we're okay */
+ if (ethertype != ETH_P_IP && ethertype != ETH_P_IPV6)
+ goto out_ok;
+
+ modify_ip_header = false;
+ tcf_exts_to_list(exts, &actions);
+ list_for_each_entry(a, &actions, list) {
+ if (!is_tcf_pedit(a))
+ continue;
+
+ nkeys = tcf_pedit_nkeys(a);
+ for (i = 0; i < nkeys; i++) {
+ htype = tcf_pedit_htype(a, i);
+ if (htype == TCA_PEDIT_KEY_EX_HDR_TYPE_IP4 ||
+ htype == TCA_PEDIT_KEY_EX_HDR_TYPE_IP6) {
+ modify_ip_header = true;
+ break;
+ }
+ }
+ }
+
+ ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol);
+ if (modify_ip_header && ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_UDP) {
+ pr_info("can't offload re-write of ip proto %d\n", ip_proto);
+ return false;
+ }
+
+out_ok:
+ return true;
+}
+
+static bool actions_match_supported(struct mlx5e_priv *priv,
+ struct tcf_exts *exts,
+ struct mlx5e_tc_flow_parse_attr *parse_attr,
+ struct mlx5e_tc_flow *flow)
+{
+ u32 actions;
+
+ if (flow->flags & MLX5E_TC_FLOW_ESWITCH)
+ actions = flow->esw_attr->action;
+ else
+ actions = flow->nic_attr->action;
+
+ if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
+ return modify_header_match_supported(&parse_attr->spec, exts);
+
+ return true;
+}
+
static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
struct mlx5e_tc_flow_parse_attr *parse_attr,
struct mlx5e_tc_flow *flow)
return -EINVAL;
}
+ if (!actions_match_supported(priv, exts, parse_attr, flow))
+ return -EOPNOTSUPP;
+
return 0;
}
break;
default:
err = -EOPNOTSUPP;
- goto out;
+ goto free_encap;
}
fl4.flowi4_tos = tun_key->tos;
fl4.daddr = tun_key->u.ipv4.dst;
err = mlx5e_route_lookup_ipv4(priv, mirred_dev, &out_dev,
&fl4, &n, &ttl);
if (err)
- goto out;
+ goto free_encap;
/* used by mlx5e_detach_encap to lookup a neigh hash table
* entry in the neigh hash table when a user deletes a rule
*/
err = mlx5e_rep_encap_entry_attach(netdev_priv(out_dev), e);
if (err)
- goto out;
+ goto free_encap;
read_lock_bh(&n->lock);
nud_state = n->nud_state;
destroy_neigh_entry:
mlx5e_rep_encap_entry_detach(netdev_priv(e->out_dev), e);
-out:
+free_encap:
kfree(encap_header);
+out:
if (n)
neigh_release(n);
return err;
break;
default:
err = -EOPNOTSUPP;
- goto out;
+ goto free_encap;
}
fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label);
err = mlx5e_route_lookup_ipv6(priv, mirred_dev, &out_dev,
&fl6, &n, &ttl);
if (err)
- goto out;
+ goto free_encap;
/* used by mlx5e_detach_encap to lookup a neigh hash table
* entry in the neigh hash table when a user deletes a rule
*/
err = mlx5e_rep_encap_entry_attach(netdev_priv(out_dev), e);
if (err)
- goto out;
+ goto free_encap;
read_lock_bh(&n->lock);
nud_state = n->nud_state;
destroy_neigh_entry:
mlx5e_rep_encap_entry_detach(netdev_priv(e->out_dev), e);
-out:
+free_encap:
kfree(encap_header);
+out:
if (n)
neigh_release(n);
return err;
}
}
+ /* must verify whether the encap is valid */
if (found)
goto attach_flow;
*encap_dev = e->out_dev;
if (e->flags & MLX5_ENCAP_ENTRY_VALID)
attr->encap_id = e->encap_id;
+ else
+ err = -EAGAIN;
return err;
return -EINVAL;
}
+
+ if (!actions_match_supported(priv, exts, parse_attr, flow))
+ return -EOPNOTSUPP;
+
return err;
}
sq->stats.csum_partial_inner++;
} else {
eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
+ sq->stats.csum_partial++;
}
} else
sq->stats.csum_none++;
return 0;
}
-int mlx5_fpga_caps(struct mlx5_core_dev *dev, u32 *caps)
+int mlx5_fpga_caps(struct mlx5_core_dev *dev)
{
u32 in[MLX5_ST_SZ_DW(fpga_cap)] = {0};
- return mlx5_core_access_reg(dev, in, sizeof(in), caps,
+ return mlx5_core_access_reg(dev, in, sizeof(in), dev->caps.fpga,
MLX5_ST_SZ_BYTES(fpga_cap),
MLX5_REG_FPGA_CAP, 0, 0);
}
u64 rx_total_drop;
};
-int mlx5_fpga_caps(struct mlx5_core_dev *dev, u32 *caps);
+int mlx5_fpga_caps(struct mlx5_core_dev *dev);
int mlx5_fpga_query(struct mlx5_core_dev *dev, struct mlx5_fpga_query *query);
int mlx5_fpga_ctrl_op(struct mlx5_core_dev *dev, u8 op);
int mlx5_fpga_access_reg(struct mlx5_core_dev *dev, u8 size, u64 addr,
if (err)
goto out;
- err = mlx5_fpga_caps(fdev->mdev,
- fdev->mdev->caps.hca_cur[MLX5_CAP_FPGA]);
+ err = mlx5_fpga_caps(fdev->mdev);
if (err)
goto out;
}
if (fte->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
+ int max_list_size = BIT(MLX5_CAP_FLOWTABLE_TYPE(dev,
+ log_max_flow_counter,
+ ft->type));
int list_size = 0;
list_for_each_entry(dst, &fte->node.children, node.list) {
in_dests += MLX5_ST_SZ_BYTES(dest_format_struct);
list_size++;
}
+ if (list_size > max_list_size) {
+ err = -EINVAL;
+ goto err_out;
+ }
MLX5_SET(flow_context, in_flow_context, flow_counter_list_size,
list_size);
}
err = mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
+err_out:
kvfree(in);
return err;
}
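The counter-list bound above comes from a capability that firmware reports as a log2 value, so BIT() converts it to a count before the comparison. A hedged illustration with a made-up capability value:

	int log_max_flow_counter = 3;			/* example value */
	int max_list_size = BIT(log_max_flow_counter);	/* == 8 */

	/* an FTE listing 9 counter destinations now fails the
	 * list_size > max_list_size check with -EINVAL instead of being
	 * sent to firmware.
	 */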
FS_FT_FDB = 0X4,
FS_FT_SNIFFER_RX = 0X5,
FS_FT_SNIFFER_TX = 0X6,
+ FS_FT_MAX_TYPE = FS_FT_SNIFFER_TX,
};
enum fs_flow_table_op_mod {
#define fs_for_each_dst(pos, fte) \
fs_list_for_each_entry(pos, &(fte)->node.children)
+#define MLX5_CAP_FLOWTABLE_TYPE(mdev, cap, type) ( \
+ (type == FS_FT_NIC_RX) ? MLX5_CAP_FLOWTABLE_NIC_RX(mdev, cap) : \
+ (type == FS_FT_ESW_EGRESS_ACL) ? MLX5_CAP_ESW_EGRESS_ACL(mdev, cap) : \
+ (type == FS_FT_ESW_INGRESS_ACL) ? MLX5_CAP_ESW_INGRESS_ACL(mdev, cap) : \
+ (type == FS_FT_FDB) ? MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, cap) : \
+ (type == FS_FT_SNIFFER_RX) ? MLX5_CAP_FLOWTABLE_SNIFFER_RX(mdev, cap) : \
+ (type == FS_FT_SNIFFER_TX) ? MLX5_CAP_FLOWTABLE_SNIFFER_TX(mdev, cap) : \
+ (BUILD_BUG_ON_ZERO(FS_FT_SNIFFER_TX != FS_FT_MAX_TYPE))\
+ )
+
#endif
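The final arm of MLX5_CAP_FLOWTABLE_TYPE() doubles as a compile-time guard: BUILD_BUG_ON_ZERO() evaluates to 0 when its condition is false and breaks the build when it is true, so extending the flow-table enum past FS_FT_MAX_TYPE without teaching the macro about the new type fails at compile time. A minimal sketch of the idiom with hypothetical names:

	#include <linux/build_bug.h>

	enum fruit { APPLE, PEAR, FRUIT_MAX = PEAR };

	/* refuses to compile if a new fruit bumps FRUIT_MAX without the
	 * macro below learning about it
	 */
	#define fruit_price(f) \
		((f) == APPLE ? 10 : \
		 (f) == PEAR  ? 20 : \
		 BUILD_BUG_ON_ZERO(PEAR != FRUIT_MAX))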
{
struct mlx5e_priv *priv = mlx5i_epriv(netdev);
const struct mlx5e_profile *profile = priv->profile;
+ struct mlx5_core_dev *mdev = priv->mdev;
mlx5e_detach_netdev(priv);
profile->cleanup(priv);
destroy_workqueue(priv->wq);
free_netdev(netdev);
- mlx5e_destroy_mdev_resources(priv->mdev);
+ mlx5e_destroy_mdev_resources(mdev);
}
EXPORT_SYMBOL(mlx5_rdma_netdev_free);
mlx5_core_warn(dev,
"failed to restore VF %d settings, err %d\n",
vf, err);
- continue;
+ continue;
}
}
mlx5_core_dbg(dev, "successfully enabled VF* %d\n", vf);
mlxsw_sp_nexthop_rif_fini(nh);
break;
case MLXSW_SP_NEXTHOP_TYPE_IPIP:
+ mlxsw_sp_nexthop_rif_fini(nh);
mlxsw_sp_nexthop_ipip_fini(mlxsw_sp, nh);
break;
}
router->ipip_ops_arr[ipipt]->can_offload(mlxsw_sp, dev,
MLXSW_SP_L3_PROTO_IPV4)) {
nh->type = MLXSW_SP_NEXTHOP_TYPE_IPIP;
- return mlxsw_sp_nexthop_ipip_init(mlxsw_sp, ipipt, nh, dev);
+ err = mlxsw_sp_nexthop_ipip_init(mlxsw_sp, ipipt, nh, dev);
+ if (err)
+ return err;
+ mlxsw_sp_nexthop_rif_init(nh, &nh->ipip_entry->ol_lb->common);
+ return 0;
}
nh->type = MLXSW_SP_NEXTHOP_TYPE_ETH;
router->ipip_ops_arr[ipipt]->can_offload(mlxsw_sp, dev,
MLXSW_SP_L3_PROTO_IPV6)) {
nh->type = MLXSW_SP_NEXTHOP_TYPE_IPIP;
- return mlxsw_sp_nexthop_ipip_init(mlxsw_sp, ipipt, nh, dev);
+ err = mlxsw_sp_nexthop_ipip_init(mlxsw_sp, ipipt, nh, dev);
+ if (err)
+ return err;
+ mlxsw_sp_nexthop_rif_init(nh, &nh->ipip_entry->ol_lb->common);
+ return 0;
}
nh->type = MLXSW_SP_NEXTHOP_TYPE_ETH;
vr = mlxsw_sp_vr_get(mlxsw_sp, tb_id ? : RT_TABLE_MAIN);
if (IS_ERR(vr))
return ERR_CAST(vr);
+ vr->rif_count++;
err = mlxsw_sp_rif_index_alloc(mlxsw_sp, &rif_index);
if (err)
mlxsw_sp_rif_counters_alloc(rif);
mlxsw_sp->router->rifs[rif_index] = rif;
- vr->rif_count++;
return rif;
kfree(rif);
err_rif_alloc:
err_rif_index_alloc:
+ vr->rif_count--;
mlxsw_sp_vr_put(vr);
return ERR_PTR(err);
}
mlxsw_sp_router_rif_gone_sync(mlxsw_sp, rif);
vr = &mlxsw_sp->router->vrs[rif->vr_id];
- vr->rif_count--;
mlxsw_sp->router->rifs[rif->rif_index] = NULL;
mlxsw_sp_rif_counters_free(rif);
ops->deconfigure(rif);
/* Loopback RIFs are not associated with a FID. */
mlxsw_sp_fid_put(fid);
kfree(rif);
+ vr->rif_count--;
mlxsw_sp_vr_put(vr);
}
static int emac_get_sset_count(struct net_device *netdev, int sset)
{
switch (sset) {
+ case ETH_SS_PRIV_FLAGS:
+ return 1;
case ETH_SS_STATS:
return EMAC_STATS_LEN;
default:
unsigned int i;
switch (stringset) {
+ case ETH_SS_PRIV_FLAGS:
+ strcpy(data, "single-pause-mode");
+ break;
+
case ETH_SS_STATS:
for (i = 0; i < EMAC_STATS_LEN; i++) {
strlcpy(data, emac_ethtool_stat_strings[i],
return EMAC_MAX_REG_SIZE * sizeof(u32);
}
+#define EMAC_PRIV_ENABLE_SINGLE_PAUSE BIT(0)
+
+static int emac_set_priv_flags(struct net_device *netdev, u32 flags)
+{
+ struct emac_adapter *adpt = netdev_priv(netdev);
+
+ adpt->single_pause_mode = !!(flags & EMAC_PRIV_ENABLE_SINGLE_PAUSE);
+
+ if (netif_running(netdev))
+ return emac_reinit_locked(adpt);
+
+ return 0;
+}
+
+static u32 emac_get_priv_flags(struct net_device *netdev)
+{
+ struct emac_adapter *adpt = netdev_priv(netdev);
+
+ return adpt->single_pause_mode ? EMAC_PRIV_ENABLE_SINGLE_PAUSE : 0;
+}
+
static const struct ethtool_ops emac_ethtool_ops = {
.get_link_ksettings = phy_ethtool_get_link_ksettings,
.set_link_ksettings = phy_ethtool_set_link_ksettings,
.get_regs_len = emac_get_regs_len,
.get_regs = emac_get_regs,
+
+ .set_priv_flags = emac_set_priv_flags,
+ .get_priv_flags = emac_get_priv_flags,
};
void emac_set_ethtool_ops(struct net_device *netdev)
mac &= ~(HUGEN | VLAN_STRIP | TPAUSE | SIMR | HUGE | MULTI_ALL |
DEBUG_MODE | SINGLE_PAUSE_MODE);
+ /* Enable single-pause-frame mode if requested.
+ *
+ * If enabled, the EMAC will send a single pause frame when the RX
+ * queue is full. This normally leads to packet loss because
+ * the pause frame disables the remote MAC only for 33ms (the quanta),
+ * and then the remote MAC continues sending packets even though
+ * the RX queue is still full.
+ *
+ * If disabled, the EMAC sends a pause frame every 31ms until the RX
+ * queue is no longer full. Normally, this is the preferred
+ * method of operation. However, when the system is hung (e.g.
+ * cores are halted), the EMAC interrupt handler is never called
+ * and so the RX queue fills up quickly and stays full. The resulting
+ * non-stop "flood" of pause frames sometimes has the effect of
+ * disabling nearby switches. In some cases, other nearby switches
+ * are also affected, shutting down the entire network.
+ *
+ * The user can enable or disable single-pause-frame mode
+ * via ethtool.
+ */
+ mac |= adpt->single_pause_mode ? SINGLE_PAUSE_MODE : 0;
+
writel_relaxed(csr1, adpt->csr + EMAC_EMAC_WRAPPER_CSR1);
writel_relaxed(mac, adpt->base + EMAC_MAC_CTRL);
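The mode is surfaced as the ethtool private flag named "single-pause-mode" registered in the strings hunk above, so it can be toggled at runtime from userspace; illustrative usage, with "eth0" as a placeholder interface name:

	ethtool --show-priv-flags eth0
	ethtool --set-priv-flags eth0 single-pause-mode on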
curr_rxbuf->dma_addr =
dma_map_single(adpt->netdev->dev.parent, skb->data,
- curr_rxbuf->length, DMA_FROM_DEVICE);
+ adpt->rxbuf_size, DMA_FROM_DEVICE);
+
ret = dma_mapping_error(adpt->netdev->dev.parent,
curr_rxbuf->dma_addr);
if (ret) {
/* default to automatic flow control */
adpt->automatic = true;
+
+ /* Disable single-pause-frame mode by default */
+ adpt->single_pause_mode = false;
}
/* Get the clock */
bool tx_flow_control;
bool rx_flow_control;
+ /* True == use single-pause-frame mode. */
+ bool single_pause_mode;
+
/* Ring parameter */
u8 tpd_burst;
u8 rfd_burst;
static int rmnet_is_real_dev_registered(const struct net_device *real_dev)
{
- rx_handler_func_t *rx_handler;
-
- rx_handler = rcu_dereference(real_dev->rx_handler);
- return (rx_handler == rmnet_rx_handler);
+ return rcu_access_pointer(real_dev->rx_handler) == rmnet_rx_handler;
}
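rcu_access_pointer() is the right accessor here because the handler pointer is only compared, never dereferenced: unlike rcu_dereference(), it does not require the caller to be inside an RCU read-side critical section. A hedged sketch contrasting the two, not driver code:

	static bool check_and_run(struct net_device *dev, struct sk_buff **pskb)
	{
		rx_handler_func_t *h;

		/* pointer-value check only: no rcu_read_lock() required */
		if (rcu_access_pointer(dev->rx_handler) != rmnet_rx_handler)
			return false;

		/* an actual dereference must sit inside a read-side section */
		rcu_read_lock();
		h = rcu_dereference(dev->rx_handler);
		if (h)
			h(pskb);
		rcu_read_unlock();
		return true;
	}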
/* Needs rtnl lock */
if (likely(RTL_R16(IntrStatus) & RxAckBits))
work_done += rtl8139_rx(dev, tp, budget);
- if (work_done < budget && napi_complete_done(napi, work_done)) {
+ if (work_done < budget) {
unsigned long flags;
spin_lock_irqsave(&tp->lock, flags);
- RTL_W16_F(IntrMask, rtl8139_intr_mask);
+ if (napi_complete_done(napi, work_done))
+ RTL_W16_F(IntrMask, rtl8139_intr_mask);
spin_unlock_irqrestore(&tp->lock, flags);
}
spin_unlock(&tp->rx_lock);
int rocker_tlv_put(struct rocker_desc_info *desc_info,
int attrtype, int attrlen, const void *data);
-static inline int rocker_tlv_put_u8(struct rocker_desc_info *desc_info,
- int attrtype, u8 value)
+static inline int
+rocker_tlv_put_u8(struct rocker_desc_info *desc_info, int attrtype, u8 value)
{
- return rocker_tlv_put(desc_info, attrtype, sizeof(u8), &value);
+ u8 tmp = value; /* work around GCC PR81715 */
+
+ return rocker_tlv_put(desc_info, attrtype, sizeof(u8), &tmp);
}
-static inline int rocker_tlv_put_u16(struct rocker_desc_info *desc_info,
- int attrtype, u16 value)
+static inline int
+rocker_tlv_put_u16(struct rocker_desc_info *desc_info, int attrtype, u16 value)
{
- return rocker_tlv_put(desc_info, attrtype, sizeof(u16), &value);
+ u16 tmp = value;
+
+ return rocker_tlv_put(desc_info, attrtype, sizeof(u16), &tmp);
}
-static inline int rocker_tlv_put_be16(struct rocker_desc_info *desc_info,
- int attrtype, __be16 value)
+static inline int
+rocker_tlv_put_be16(struct rocker_desc_info *desc_info, int attrtype, __be16 value)
{
- return rocker_tlv_put(desc_info, attrtype, sizeof(__be16), &value);
+ __be16 tmp = value;
+
+ return rocker_tlv_put(desc_info, attrtype, sizeof(__be16), &tmp);
}
-static inline int rocker_tlv_put_u32(struct rocker_desc_info *desc_info,
- int attrtype, u32 value)
+static inline int
+rocker_tlv_put_u32(struct rocker_desc_info *desc_info, int attrtype, u32 value)
{
- return rocker_tlv_put(desc_info, attrtype, sizeof(u32), &value);
+ u32 tmp = value;
+
+ return rocker_tlv_put(desc_info, attrtype, sizeof(u32), &tmp);
}
-static inline int rocker_tlv_put_be32(struct rocker_desc_info *desc_info,
- int attrtype, __be32 value)
+static inline int
+rocker_tlv_put_be32(struct rocker_desc_info *desc_info, int attrtype, __be32 value)
{
- return rocker_tlv_put(desc_info, attrtype, sizeof(__be32), &value);
+ __be32 tmp = value;
+
+ return rocker_tlv_put(desc_info, attrtype, sizeof(__be32), &tmp);
}
-static inline int rocker_tlv_put_u64(struct rocker_desc_info *desc_info,
- int attrtype, u64 value)
+static inline int
+rocker_tlv_put_u64(struct rocker_desc_info *desc_info, int attrtype, u64 value)
{
- return rocker_tlv_put(desc_info, attrtype, sizeof(u64), &value);
+ u64 tmp = value;
+
+ return rocker_tlv_put(desc_info, attrtype, sizeof(u64), &tmp);
}
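The rocker_tlv_put_* rewrite works around GCC PR81715: with KASAN enabled, taking the address of a by-value parameter gives every inlined expansion its own stack slot that the compiler will not merge, inflating callers' stack frames. Passing the address of a local copy lets the slots be shared. A minimal sketch of the pattern; struct desc and put_bytes() are hypothetical stand-ins:

	static inline int put_u8(struct desc *d, int attrtype, u8 value)
	{
		u8 tmp = value;	/* local copy -> mergeable stack slot */

		return put_bytes(d, attrtype, sizeof(u8), &tmp);
	}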
static inline struct rocker_tlv *
.remove = dwc_eth_dwmac_remove,
.driver = {
.name = "dwc-eth-dwmac",
+ .pm = &stmmac_pltfr_pm_ops,
.of_match_table = dwc_eth_dwmac_match,
},
};
(((tx) ? soc##_GMAC_TXCLK_DLY_ENABLE : soc##_GMAC_TXCLK_DLY_DISABLE) | \
((rx) ? soc##_GMAC_RXCLK_DLY_ENABLE : soc##_GMAC_RXCLK_DLY_DISABLE))
+#define RK3128_GRF_MAC_CON0 0x0168
+#define RK3128_GRF_MAC_CON1 0x016c
+
+/* RK3128_GRF_MAC_CON0 */
+#define RK3128_GMAC_TXCLK_DLY_ENABLE GRF_BIT(14)
+#define RK3128_GMAC_TXCLK_DLY_DISABLE GRF_CLR_BIT(14)
+#define RK3128_GMAC_RXCLK_DLY_ENABLE GRF_BIT(15)
+#define RK3128_GMAC_RXCLK_DLY_DISABLE GRF_CLR_BIT(15)
+#define RK3128_GMAC_CLK_RX_DL_CFG(val) HIWORD_UPDATE(val, 0x7F, 7)
+#define RK3128_GMAC_CLK_TX_DL_CFG(val) HIWORD_UPDATE(val, 0x7F, 0)
+
+/* RK3128_GRF_MAC_CON1 */
+#define RK3128_GMAC_PHY_INTF_SEL_RGMII \
+ (GRF_BIT(6) | GRF_CLR_BIT(7) | GRF_CLR_BIT(8))
+#define RK3128_GMAC_PHY_INTF_SEL_RMII \
+ (GRF_CLR_BIT(6) | GRF_CLR_BIT(7) | GRF_BIT(8))
+#define RK3128_GMAC_FLOW_CTRL GRF_BIT(9)
+#define RK3128_GMAC_FLOW_CTRL_CLR GRF_CLR_BIT(9)
+#define RK3128_GMAC_SPEED_10M GRF_CLR_BIT(10)
+#define RK3128_GMAC_SPEED_100M GRF_BIT(10)
+#define RK3128_GMAC_RMII_CLK_25M GRF_BIT(11)
+#define RK3128_GMAC_RMII_CLK_2_5M GRF_CLR_BIT(11)
+#define RK3128_GMAC_CLK_125M (GRF_CLR_BIT(12) | GRF_CLR_BIT(13))
+#define RK3128_GMAC_CLK_25M (GRF_BIT(12) | GRF_BIT(13))
+#define RK3128_GMAC_CLK_2_5M (GRF_CLR_BIT(12) | GRF_BIT(13))
+#define RK3128_GMAC_RMII_MODE GRF_BIT(14)
+#define RK3128_GMAC_RMII_MODE_CLR GRF_CLR_BIT(14)
+
+static void rk3128_set_to_rgmii(struct rk_priv_data *bsp_priv,
+ int tx_delay, int rx_delay)
+{
+ struct device *dev = &bsp_priv->pdev->dev;
+
+ if (IS_ERR(bsp_priv->grf)) {
+ dev_err(dev, "Missing rockchip,grf property\n");
+ return;
+ }
+
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON1,
+ RK3128_GMAC_PHY_INTF_SEL_RGMII |
+ RK3128_GMAC_RMII_MODE_CLR);
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON0,
+ DELAY_ENABLE(RK3128, tx_delay, rx_delay) |
+ RK3128_GMAC_CLK_RX_DL_CFG(rx_delay) |
+ RK3128_GMAC_CLK_TX_DL_CFG(tx_delay));
+}
+
+static void rk3128_set_to_rmii(struct rk_priv_data *bsp_priv)
+{
+ struct device *dev = &bsp_priv->pdev->dev;
+
+ if (IS_ERR(bsp_priv->grf)) {
+ dev_err(dev, "Missing rockchip,grf property\n");
+ return;
+ }
+
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON1,
+ RK3128_GMAC_PHY_INTF_SEL_RMII | RK3128_GMAC_RMII_MODE);
+}
+
+static void rk3128_set_rgmii_speed(struct rk_priv_data *bsp_priv, int speed)
+{
+ struct device *dev = &bsp_priv->pdev->dev;
+
+ if (IS_ERR(bsp_priv->grf)) {
+ dev_err(dev, "Missing rockchip,grf property\n");
+ return;
+ }
+
+ if (speed == 10)
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON1,
+ RK3128_GMAC_CLK_2_5M);
+ else if (speed == 100)
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON1,
+ RK3128_GMAC_CLK_25M);
+ else if (speed == 1000)
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON1,
+ RK3128_GMAC_CLK_125M);
+ else
+ dev_err(dev, "unknown speed value for RGMII! speed=%d", speed);
+}
+
+static void rk3128_set_rmii_speed(struct rk_priv_data *bsp_priv, int speed)
+{
+ struct device *dev = &bsp_priv->pdev->dev;
+
+ if (IS_ERR(bsp_priv->grf)) {
+ dev_err(dev, "Missing rockchip,grf property\n");
+ return;
+ }
+
+ if (speed == 10) {
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON1,
+ RK3128_GMAC_RMII_CLK_2_5M |
+ RK3128_GMAC_SPEED_10M);
+ } else if (speed == 100) {
+ regmap_write(bsp_priv->grf, RK3128_GRF_MAC_CON1,
+ RK3128_GMAC_RMII_CLK_25M |
+ RK3128_GMAC_SPEED_100M);
+ } else {
+ dev_err(dev, "unknown speed value for RMII! speed=%d", speed);
+ }
+}
+
+static const struct rk_gmac_ops rk3128_ops = {
+ .set_to_rgmii = rk3128_set_to_rgmii,
+ .set_to_rmii = rk3128_set_to_rmii,
+ .set_rgmii_speed = rk3128_set_rgmii_speed,
+ .set_rmii_speed = rk3128_set_rmii_speed,
+};
+
#define RK3228_GRF_MAC_CON0 0x0900
#define RK3228_GRF_MAC_CON1 0x0904
static SIMPLE_DEV_PM_OPS(rk_gmac_pm_ops, rk_gmac_suspend, rk_gmac_resume);
static const struct of_device_id rk_gmac_dwmac_match[] = {
+ { .compatible = "rockchip,rk3128-gmac", .data = &rk3128_ops },
{ .compatible = "rockchip,rk3228-gmac", .data = &rk3228_ops },
{ .compatible = "rockchip,rk3288-gmac", .data = &rk3288_ops },
{ .compatible = "rockchip,rk3328-gmac", .data = &rk3328_ops },
{
void __iomem *ioaddr = hw->pcsr;
unsigned int pmt = 0;
+ u32 config;
if (mode & WAKE_MAGIC) {
pr_debug("GMAC: WOL Magic frame\n");
pmt |= power_down | global_unicast | wake_up_frame_en;
}
+ if (pmt) {
+ /* The receiver must be enabled for WOL before powering down */
+ config = readl(ioaddr + GMAC_CONFIG);
+ config |= GMAC_CONFIG_RE;
+ writel(config, ioaddr + GMAC_CONFIG);
+ }
writel(pmt, ioaddr + GMAC_PMT);
}
{ .compatible = "allwinner,sun8i-h3-emac" },
{ .compatible = "allwinner,sun8i-v3s-emac" },
{ .compatible = "allwinner,sun50i-a64-emac" },
+ {},
};
/* If phy-handle property is passed from DT, use it as the PHY */
u32 num_chn;
u32 send_sections;
u32 recv_sections;
+ u32 send_section_size;
+ u32 recv_section_size;
};
enum rndis_device_state {
net_device->max_pkt = RNDIS_MAX_PKT_DEFAULT;
net_device->pkt_align = RNDIS_PKT_ALIGN_DEFAULT;
- net_device->recv_section_size = NETVSC_RECV_SECTION_SIZE;
- net_device->send_section_size = NETVSC_SEND_SECTION_SIZE;
-
init_completion(&net_device->channel_init_wait);
init_waitqueue_head(&net_device->subchan_open);
INIT_WORK(&net_device->subchan_work, rndis_set_subchannel);
int ret = 0;
/* Get receive buffer area. */
- buf_size = device_info->recv_sections * net_device->recv_section_size;
+ buf_size = device_info->recv_sections * device_info->recv_section_size;
buf_size = roundup(buf_size, PAGE_SIZE);
net_device->recv_buf = vzalloc(buf_size);
goto cleanup;
/* Now setup the send buffer. */
- buf_size = device_info->send_sections * net_device->send_section_size;
+ buf_size = device_info->send_sections * device_info->send_section_size;
buf_size = round_up(buf_size, PAGE_SIZE);
net_device->send_buf = vzalloc(buf_size);
device_info.num_chn = count;
device_info.ring_size = ring_size;
device_info.send_sections = nvdev->send_section_cnt;
+ device_info.send_section_size = nvdev->send_section_size;
device_info.recv_sections = nvdev->recv_section_cnt;
+ device_info.recv_section_size = nvdev->recv_section_size;
rndis_filter_device_remove(dev, nvdev);
device_info.ring_size = ring_size;
device_info.num_chn = nvdev->num_chn;
device_info.send_sections = nvdev->send_section_cnt;
+ device_info.send_section_size = nvdev->send_section_size;
device_info.recv_sections = nvdev->recv_section_cnt;
+ device_info.recv_section_size = nvdev->recv_section_size;
rndis_filter_device_remove(hdev, nvdev);
device_info.num_chn = nvdev->num_chn;
device_info.ring_size = ring_size;
device_info.send_sections = new_tx;
+ device_info.send_section_size = nvdev->send_section_size;
device_info.recv_sections = new_rx;
+ device_info.recv_section_size = nvdev->recv_section_size;
netif_device_detach(ndev);
was_opened = rndis_filter_opened(nvdev);
device_info.ring_size = ring_size;
device_info.num_chn = VRSS_CHANNEL_DEFAULT;
device_info.send_sections = NETVSC_DEFAULT_TX;
+ device_info.send_section_size = NETVSC_SEND_SECTION_SIZE;
device_info.recv_sections = NETVSC_DEFAULT_RX;
+ device_info.recv_section_size = NETVSC_RECV_SECTION_SIZE;
nvdev = rndis_filter_device_add(dev, &device_info);
if (IS_ERR(nvdev)) {
endif
-menuconfig PHYLIB
- tristate "PHY Device support and infrastructure"
- depends on NETDEVICES
- select MDIO_DEVICE
- help
- Ethernet controllers are usually attached to PHY
- devices. This option provides infrastructure for
- managing PHY devices.
-
config PHYLINK
tristate
depends on NETDEVICES
configuration links, PHYs, and Serdes links with MAC level
autonegotiation modes.
+menuconfig PHYLIB
+ tristate "PHY Device support and infrastructure"
+ depends on NETDEVICES
+ select MDIO_DEVICE
+ help
+ Ethernet controllers are usually attached to PHY
+ devices. This option provides infrastructure for
+ managing PHY devices.
+
if PHYLIB
config SWPHY
cmd->base.port = PORT_BNC;
else
cmd->base.port = PORT_MII;
-
+ cmd->base.transceiver = phy_is_internal(phydev) ?
+ XCVR_INTERNAL : XCVR_EXTERNAL;
cmd->base.phy_address = phydev->mdio.addr;
cmd->base.autoneg = phydev->autoneg;
cmd->base.eth_tp_mdix_ctrl = phydev->mdix_ctrl;
{
const char *drv_name = phydev->drv ? phydev->drv->name : "unbound";
char *irq_str;
- char irq_num[4];
+ char irq_num[8];
switch(phydev->irq) {
case PHY_POLL:
priv->phy_drv->read_status(phydev);
val = mdiobus_read(phydev->mdio.bus, priv->addr, XILINX_GMII2RGMII_REG);
- val &= XILINX_GMII2RGMII_SPEED_MASK;
+ val &= ~XILINX_GMII2RGMII_SPEED_MASK;
if (phydev->speed == SPEED_1000)
val |= BMCR_SPEED1000;
int n_channels; /* how many channels are attached 54 */
spinlock_t rlock; /* lock for receive side 58 */
spinlock_t wlock; /* lock for transmit side 5c */
- int *xmit_recursion __percpu; /* xmit recursion detect */
+ int __percpu *xmit_recursion; /* xmit recursion detect */
int mru; /* max receive unit 60 */
unsigned int flags; /* control bits 64 */
unsigned int xstate; /* transmit state bits 68 */
switch (tun->flags & TUN_TYPE_MASK) {
case IFF_TUN:
if (tun->flags & IFF_NO_PI) {
- switch (skb->data[0] & 0xf0) {
- case 0x40:
+ u8 ip_version = skb->len ? (skb->data[0] >> 4) : 0;
+
+ switch (ip_version) {
+ case 4:
pi.proto = htons(ETH_P_IP);
break;
- case 0x60:
+ case 6:
pi.proto = htons(ETH_P_IPV6);
break;
default:
desc->bInterfaceProtocol == 3);
}
+static int is_novatel_rndis(struct usb_interface_descriptor *desc)
+{
+ return (desc->bInterfaceClass == USB_CLASS_MISC &&
+ desc->bInterfaceSubClass == 4 &&
+ desc->bInterfaceProtocol == 1);
+}
+
#else
#define is_rndis(desc) 0
#define is_activesync(desc) 0
#define is_wireless_rndis(desc) 0
+#define is_novatel_rndis(desc) 0
#endif
*/
rndis = (is_rndis(&intf->cur_altsetting->desc) ||
is_activesync(&intf->cur_altsetting->desc) ||
- is_wireless_rndis(&intf->cur_altsetting->desc));
+ is_wireless_rndis(&intf->cur_altsetting->desc) ||
+ is_novatel_rndis(&intf->cur_altsetting->desc));
memset(info, 0, sizeof(*info));
info->control = intf;
#define REALTEK_VENDOR_ID 0x0bda
#define SAMSUNG_VENDOR_ID 0x04e8
#define LENOVO_VENDOR_ID 0x17ef
+#define LINKSYS_VENDOR_ID 0x13b1
#define NVIDIA_VENDOR_ID 0x0955
#define HP_VENDOR_ID 0x03f0
#define MICROSOFT_VENDOR_ID 0x045e
.driver_info = 0,
},
+#if IS_ENABLED(CONFIG_USB_RTL8152)
+/* Linksys USB3GIGV1 Ethernet Adapter */
+{
+ USB_DEVICE_AND_INTERFACE_INFO(LINKSYS_VENDOR_ID, 0x0041, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+ .driver_info = 0,
+},
+#endif
+
/* ThinkPad USB-C Dock (based on Realtek RTL8153) */
{
USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3062, USB_CLASS_COMM,
struct ethtool_eeprom *ee, u8 *data)
{
struct lan78xx_net *dev = netdev_priv(netdev);
+ int ret;
+
+ ret = usb_autopm_get_interface(dev->intf);
+ if (ret)
+ return ret;
ee->magic = LAN78XX_EEPROM_MAGIC;
- return lan78xx_read_raw_eeprom(dev, ee->offset, ee->len, data);
+ ret = lan78xx_read_raw_eeprom(dev, ee->offset, ee->len, data);
+
+ usb_autopm_put_interface(dev->intf);
+
+ return ret;
}
static int lan78xx_ethtool_set_eeprom(struct net_device *netdev,
struct ethtool_eeprom *ee, u8 *data)
{
struct lan78xx_net *dev = netdev_priv(netdev);
+ int ret;
+
+ ret = usb_autopm_get_interface(dev->intf);
+ if (ret)
+ return ret;
- /* Allow entire eeprom update only */
- if ((ee->magic == LAN78XX_EEPROM_MAGIC) &&
- (ee->offset == 0) &&
- (ee->len == 512) &&
- (data[0] == EEPROM_INDICATOR))
- return lan78xx_write_raw_eeprom(dev, ee->offset, ee->len, data);
+ /* Invalid EEPROM_INDICATOR at offset zero will result in a failure
+ * to load data from EEPROM
+ */
+ if (ee->magic == LAN78XX_EEPROM_MAGIC)
+ ret = lan78xx_write_raw_eeprom(dev, ee->offset, ee->len, data);
else if ((ee->magic == LAN78XX_OTP_MAGIC) &&
(ee->offset == 0) &&
(ee->len == 512) &&
(data[0] == OTP_INDICATOR_1))
- return lan78xx_write_raw_otp(dev, ee->offset, ee->len, data);
+ ret = lan78xx_write_raw_otp(dev, ee->offset, ee->len, data);
+
+ usb_autopm_put_interface(dev->intf);
- return -EINVAL;
+ return ret;
}
static void lan78xx_get_strings(struct net_device *netdev, u32 stringset,
/* LAN7801 only has RGMII mode */
if (dev->chipid == ID_REV_CHIP_ID_7801_)
buf &= ~MAC_CR_GMII_EN_;
- buf |= MAC_CR_AUTO_DUPLEX_ | MAC_CR_AUTO_SPEED_;
ret = lan78xx_write_reg(dev, MAC_CR, buf);
ret = lan78xx_read_reg(dev, MAC_TX, &buf);
#define VENDOR_ID_MICROSOFT 0x045e
#define VENDOR_ID_SAMSUNG 0x04e8
#define VENDOR_ID_LENOVO 0x17ef
+#define VENDOR_ID_LINKSYS 0x13b1
#define VENDOR_ID_NVIDIA 0x0955
#define MCU_TYPE_PLA 0x0100
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
{REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
{}
};
/* RNDIS for tethering */
USB_INTERFACE_INFO(USB_CLASS_WIRELESS_CONTROLLER, 1, 3),
.driver_info = (unsigned long) &rndis_info,
+}, {
+ /* Novatel Verizon USB730L */
+ USB_INTERFACE_INFO(USB_CLASS_MISC, 4, 1),
+ .driver_info = (unsigned long) &rndis_info,
},
{ }, // END
};
MODULE_DEVICE_TABLE(pci, ath10k_pci_id_table);
-#ifdef CONFIG_PM
-
-static int ath10k_pci_pm_suspend(struct device *dev)
+static __maybe_unused int ath10k_pci_pm_suspend(struct device *dev)
{
struct ath10k *ar = dev_get_drvdata(dev);
int ret;
return ret;
}
-static int ath10k_pci_pm_resume(struct device *dev)
+static __maybe_unused int ath10k_pci_pm_resume(struct device *dev)
{
struct ath10k *ar = dev_get_drvdata(dev);
int ret;
static SIMPLE_DEV_PM_OPS(ath10k_pci_pm_ops,
ath10k_pci_pm_suspend,
ath10k_pci_pm_resume);
-#endif
static struct pci_driver ath10k_pci_driver = {
.name = "ath10k_pci",
eth_broadcast_addr(params_le->bssid);
params_le->bss_type = DOT11_BSSTYPE_ANY;
- params_le->scan_type = 0;
+ params_le->scan_type = BRCMF_SCANTYPE_ACTIVE;
params_le->channel_num = 0;
params_le->nprobes = cpu_to_le32(-1);
params_le->active_time = cpu_to_le32(-1);
params_le->home_time = cpu_to_le32(-1);
memset(&params_le->ssid_le, 0, sizeof(params_le->ssid_le));
- /* if request is null exit so it will be all channel broadcast scan */
- if (!request)
- return;
-
n_ssids = request->n_ssids;
n_channels = request->n_channels;
+
/* Copy channel array if applicable */
brcmf_dbg(SCAN, "### List of channelspecs to scan ### %d\n",
n_channels);
ptr += sizeof(ssid_le);
}
} else {
- brcmf_dbg(SCAN, "Broadcast scan %p\n", request->ssids);
- if ((request->ssids) && request->ssids->ssid_len) {
- brcmf_dbg(SCAN, "SSID %s len=%d\n",
- params_le->ssid_le.SSID,
- request->ssids->ssid_len);
- params_le->ssid_le.SSID_len =
- cpu_to_le32(request->ssids->ssid_len);
- memcpy(¶ms_le->ssid_le.SSID, request->ssids->ssid,
- request->ssids->ssid_len);
- }
+ brcmf_dbg(SCAN, "Performing passive scan\n");
+ params_le->scan_type = BRCMF_SCANTYPE_PASSIVE;
}
/* Adding mask to channel numbers */
params_le->channel_num =
struct brcmf_cfg80211_info *cfg = ifp->drvr->config;
s32 status;
struct brcmf_escan_result_le *escan_result_le;
+ u32 escan_buflen;
struct brcmf_bss_info_le *bss_info_le;
struct brcmf_bss_info_le *bss = NULL;
u32 bi_length;
if (status == BRCMF_E_STATUS_PARTIAL) {
brcmf_dbg(SCAN, "ESCAN Partial result\n");
+ if (e->datalen < sizeof(*escan_result_le)) {
+ brcmf_err("invalid event data length\n");
+ goto exit;
+ }
escan_result_le = (struct brcmf_escan_result_le *) data;
if (!escan_result_le) {
brcmf_err("Invalid escan result (NULL pointer)\n");
goto exit;
}
+ escan_buflen = le32_to_cpu(escan_result_le->buflen);
+ if (escan_buflen > BRCMF_ESCAN_BUF_SIZE ||
+ escan_buflen > e->datalen ||
+ escan_buflen < sizeof(*escan_result_le)) {
+ brcmf_err("Invalid escan buffer length: %d\n",
+ escan_buflen);
+ goto exit;
+ }
if (le16_to_cpu(escan_result_le->bss_count) != 1) {
brcmf_err("Invalid bss_count %d: ignoring\n",
escan_result_le->bss_count);
}
bi_length = le32_to_cpu(bss_info_le->length);
- if (bi_length != (le32_to_cpu(escan_result_le->buflen) -
- WL_ESCAN_RESULTS_FIXED_SIZE)) {
- brcmf_err("Invalid bss_info length %d: ignoring\n",
+ if (bi_length != escan_buflen - WL_ESCAN_RESULTS_FIXED_SIZE) {
+ brcmf_err("Ignoring invalid bss_info length: %d\n",
bi_length);
goto exit;
}
#define BRCMF_SCAN_PARAMS_COUNT_MASK 0x0000ffff
#define BRCMF_SCAN_PARAMS_NSSID_SHIFT 16
+/* scan type definitions */
+#define BRCMF_SCANTYPE_DEFAULT 0xFF
+#define BRCMF_SCANTYPE_ACTIVE 0
+#define BRCMF_SCANTYPE_PASSIVE 1
+
#define BRCMF_WSEC_MAX_PSK_LEN 32
#define BRCMF_WSEC_PASSPHRASE BIT(0)
* 1. We are not using a unified image
* 2. We are using a unified image but had an error while exiting D3
*/
- set_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status);
+ set_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status);
set_bit(IWL_MVM_STATUS_D3_RECONFIG, &mvm->status);
/*
* When switching images we return 1, which causes mac80211
struct iwl_mvm_mc_iter_data *data = _data;
struct iwl_mvm *mvm = data->mvm;
struct iwl_mcast_filter_cmd *cmd = mvm->mcast_filter_cmd;
+ struct iwl_host_cmd hcmd = {
+ .id = MCAST_FILTER_CMD,
+ .flags = CMD_ASYNC,
+ .dataflags[0] = IWL_HCMD_DFL_NOCOPY,
+ };
int ret, len;
/* if we don't have free ports, mcast frames will be dropped */
memcpy(cmd->bssid, vif->bss_conf.bssid, ETH_ALEN);
len = roundup(sizeof(*cmd) + cmd->count * ETH_ALEN, 4);
- ret = iwl_mvm_send_cmd_pdu(mvm, MCAST_FILTER_CMD, CMD_ASYNC, len, cmd);
+ hcmd.len[0] = len;
+ hcmd.data[0] = cmd;
+
+ ret = iwl_mvm_send_cmd(mvm, &hcmd);
if (ret)
IWL_ERR(mvm, "mcast filter cmd error. ret=%d\n", ret);
}
if (!cmd)
goto out;
+ if (changed_flags & FIF_ALLMULTI)
+ cmd->pass_all = !!(*total_flags & FIF_ALLMULTI);
+
+ if (cmd->pass_all)
+ cmd->count = 0;
+
iwl_mvm_recalc_multicast(mvm);
out:
mutex_unlock(&mvm->mutex);
* queues, so we should never get a second deferred
* frame for the RA/TID.
*/
- iwl_mvm_start_mac_queues(mvm, info->hw_queue);
+ iwl_mvm_start_mac_queues(mvm, BIT(info->hw_queue));
ieee80211_free_txskb(mvm->hw, skb);
}
}
return ret;
}
+static void iwl_mvm_flush_no_vif(struct iwl_mvm *mvm, u32 queues, bool drop)
+{
+ if (drop) {
+ if (iwl_mvm_has_new_tx_api(mvm))
+ /* TODO new tx api */
+ WARN_ONCE(1,
+ "Need to implement flush TX queue\n");
+ else
+ iwl_mvm_flush_tx_path(mvm,
+ iwl_mvm_flushable_queues(mvm) & queues,
+ 0);
+ } else {
+ if (iwl_mvm_has_new_tx_api(mvm)) {
+ struct ieee80211_sta *sta;
+ int i;
+
+ mutex_lock(&mvm->mutex);
+
+ for (i = 0; i < ARRAY_SIZE(mvm->fw_id_to_mac_id); i++) {
+ sta = rcu_dereference_protected(
+ mvm->fw_id_to_mac_id[i],
+ lockdep_is_held(&mvm->mutex));
+ if (IS_ERR_OR_NULL(sta))
+ continue;
+
+ iwl_mvm_wait_sta_queues_empty(mvm,
+ iwl_mvm_sta_from_mac80211(sta));
+ }
+
+ mutex_unlock(&mvm->mutex);
+ } else {
+ iwl_trans_wait_tx_queues_empty(mvm->trans,
+ queues);
+ }
+ }
+}
+
static void iwl_mvm_mac_flush(struct ieee80211_hw *hw,
struct ieee80211_vif *vif, u32 queues, bool drop)
{
int i;
u32 msk = 0;
- if (!vif || vif->type != NL80211_IFTYPE_STATION)
+ if (!vif) {
+ iwl_mvm_flush_no_vif(mvm, queues, drop);
+ return;
+ }
+
+ if (vif->type != NL80211_IFTYPE_STATION)
return;
/* Make sure we're done with the deferred traffic before flushing */
(lq_sta->tx_agg_tid_en & BIT(tid)) &&
(tid_data->tx_count_last >= IWL_MVM_RS_AGG_START_THRESHOLD)) {
IWL_DEBUG_RATE(mvm, "try to aggregate tid %d\n", tid);
- rs_tl_turn_on_agg_for_tid(mvm, lq_sta, tid, sta);
+ if (rs_tl_turn_on_agg_for_tid(mvm, lq_sta, tid, sta) == 0)
+ tid_data->state = IWL_AGG_QUEUED;
}
}
* If there was a significant jump in the nssn - adjust.
* If the SN is smaller than the NSSN it might need to first go into
* the reorder buffer, in which case we just release up to it and the
- * rest of the function will take of storing it and releasing up to the
- * nssn
+ * rest of the function will take care of storing it and releasing up to
+ * the nssn
*/
if (!iwl_mvm_is_sn_less(nssn, buffer->head_sn + buffer->buf_size,
- buffer->buf_size)) {
+ buffer->buf_size) ||
+ !ieee80211_sn_less(sn, buffer->head_sn + buffer->buf_size)) {
u16 min_sn = ieee80211_sn_less(sn, nssn) ? sn : nssn;
iwl_mvm_release_frames(mvm, sta, napi, buffer, min_sn);
struct iwl_host_cmd cmd = {
.id = SCAN_OFFLOAD_ABORT_CMD,
};
- u32 status;
+ u32 status = CAN_ABORT_STATUS;
ret = iwl_mvm_send_cmd_status(mvm, &cmd, &status);
if (ret)
{
struct iwl_mvm_add_sta_cmd cmd;
int ret;
- u32 status;
+ u32 status = ADD_STA_SUCCESS;
lockdep_assert_held(&mvm->mutex);
if (WARN_ON_ONCE(tid >= IWL_MAX_TID_COUNT))
return -EINVAL;
- if (mvmsta->tid_data[tid].state != IWL_AGG_OFF) {
- IWL_ERR(mvm, "Start AGG when state is not IWL_AGG_OFF %d!\n",
+ if (mvmsta->tid_data[tid].state != IWL_AGG_QUEUED &&
+ mvmsta->tid_data[tid].state != IWL_AGG_OFF) {
+ IWL_ERR(mvm,
+ "Start AGG when state is not IWL_AGG_QUEUED or IWL_AGG_OFF %d!\n",
mvmsta->tid_data[tid].state);
return -ENXIO;
}
* These states relate to a specific RA / TID.
*
* @IWL_AGG_OFF: aggregation is not used
+ * @IWL_AGG_QUEUED: aggregation start work has been queued
* @IWL_AGG_STARTING: aggregation are starting (between start and oper)
* @IWL_AGG_ON: aggregation session is up
* @IWL_EMPTYING_HW_QUEUE_ADDBA: establishing a BA session - waiting for the
*/
enum iwl_mvm_agg_state {
IWL_AGG_OFF = 0,
+ IWL_AGG_QUEUED,
IWL_AGG_STARTING,
IWL_AGG_ON,
IWL_EMPTYING_HW_QUEUE_ADDBA,
lockdep_assert_held(&mvm->mutex);
+ status = 0;
ret = iwl_mvm_send_cmd_pdu_status(mvm, WIDE_ID(PHY_OPS_GROUP,
CTDP_CONFIG_CMD),
sizeof(cmd), &cmd, &status);
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_ADHOC:
/*
- * Handle legacy hostapd as well, where station will be added
- * only just before sending the association response.
+ * Non-bufferable frames use the broadcast station, thus they
+ * use the probe queue.
* Also take care of the case where we send a deauth to a
* station that we don't have, or similarly an association
* response (with non-success status) for a station we can't
* Also, disassociate frames might happen, particular with
* reason 7 ("Class 3 frame received from nonassociated STA").
*/
- if (ieee80211_is_probe_resp(fc) || ieee80211_is_auth(fc) ||
- ieee80211_is_deauth(fc) || ieee80211_is_assoc_resp(fc) ||
- ieee80211_is_disassoc(fc))
+ if (ieee80211_is_mgmt(fc) &&
+ (!ieee80211_is_bufferable_mmpdu(fc) ||
+ ieee80211_is_deauth(fc) || ieee80211_is_disassoc(fc)))
return mvm->probe_queue;
if (info->hw_queue == info->control.vif->cab_queue)
return mvmvif->cab_queue;
vif = qtnf_netdev_get_priv(wdev->netdev);
+ qtnf_scan_done(vif->mac, true);
+
if (qtnf_cmd_send_del_intf(vif))
pr_err("VIF%u.%u: failed to delete VIF\n", vif->mac->macid,
vif->vifid);
struct qtnf_vif *vif = qtnf_netdev_get_priv(dev);
int ret;
+ qtnf_scan_done(vif->mac, true);
+
ret = qtnf_cmd_send_stop_ap(vif);
if (ret) {
pr_err("VIF%u.%u: failed to stop AP operation in FW\n",
!qtnf_sta_list_lookup(&vif->sta_list, params->mac))
return 0;
- qtnf_scan_done(vif->mac, true);
-
ret = qtnf_cmd_send_del_sta(vif, params);
if (ret)
pr_err("VIF%u.%u: failed to delete STA %pM\n",
}
vif->sta_state = QTNF_STA_DISCONNECTED;
- qtnf_scan_done(mac, true);
}
+
+ qtnf_scan_done(mac, true);
}
void qtnf_cfg80211_vif_reset(struct qtnf_vif *vif)
.aborted = aborted,
};
+ if (timer_pending(&mac->scan_timeout))
+ del_timer_sync(&mac->scan_timeout);
+
mutex_lock(&mac->mac_lock);
if (mac->scan_req) {
return -EINVAL;
}
- if (timer_pending(&mac->scan_timeout))
- del_timer_sync(&mac->scan_timeout);
qtnf_scan_done(mac, le32_to_cpu(status->flags) & QLINK_SCAN_ABORTED);
return 0;
struct qtnf_pcie_bus_priv *priv = (void *)get_bus_priv(bus);
dma_addr_t txbd_paddr, skb_paddr;
struct qtnf_tx_bd *txbd;
+ unsigned long flags;
int len, i;
u32 info;
int ret = 0;
+ spin_lock_irqsave(&priv->tx0_lock, flags);
+
if (!qtnf_tx_queue_ready(priv)) {
if (skb->dev)
netif_stop_queue(skb->dev);
+ spin_unlock_irqrestore(&priv->tx0_lock, flags);
return NETDEV_TX_BUSY;
}
dev_kfree_skb_any(skb);
}
- qtnf_pcie_data_tx_reclaim(priv);
priv->tx_done_count++;
+ spin_unlock_irqrestore(&priv->tx0_lock, flags);
+
+ qtnf_pcie_data_tx_reclaim(priv);
return NETDEV_TX_OK;
}
strcpy(bus->fwname, QTN_PCI_PEARL_FW_NAME);
init_completion(&bus->request_firmware_complete);
mutex_init(&bus->bus_lock);
+ spin_lock_init(&pcie_priv->tx0_lock);
spin_lock_init(&pcie_priv->irq_lock);
spin_lock_init(&pcie_priv->tx_reclaim_lock);
/* lock for tx reclaim operations */
spinlock_t tx_reclaim_lock;
+ /* lock for tx0 operations */
+ spinlock_t tx0_lock;
u8 msi_enabled;
int mps;
struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
struct nd_namespace_index *nsindex;
+ /*
+ * If any of the DIMMs do not support labels, the only
+ * possible BTT format is v1.
+ */
+ if (!ndd) {
+ loop_bitmask = 0;
+ break;
+ }
+
nsindex = to_namespace_index(ndd, ndd->ns_current);
if (nsindex == NULL)
loop_bitmask |= 1;
return false;
if (nvme_req(req)->status & NVME_SC_DNR)
return false;
- if (jiffies - req->start_time >= req->timeout)
- return false;
if (nvme_req(req)->retries >= nvme_max_retries)
return false;
return true;
struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
if (a == &dev_attr_uuid.attr) {
- if (uuid_is_null(&ns->uuid) ||
+ if (uuid_is_null(&ns->uuid) &&
!memchr_inv(ns->nguid, 0, sizeof(ns->nguid)))
return 0;
}
container_of(work, struct nvme_ctrl, async_event_work);
spin_lock_irq(&ctrl->lock);
- while (ctrl->event_limit > 0) {
+ while (ctrl->state == NVME_CTRL_LIVE && ctrl->event_limit > 0) {
int aer_idx = --ctrl->event_limit;
spin_unlock_irq(&ctrl->lock);
/*FALLTHRU*/
case NVME_SC_ABORT_REQ:
++ctrl->event_limit;
- queue_work(nvme_wq, &ctrl->async_event_work);
+ if (ctrl->state == NVME_CTRL_LIVE)
+ queue_work(nvme_wq, &ctrl->async_event_work);
break;
default:
break;
nvme_queue_scan(ctrl);
break;
case NVME_AER_NOTICE_FW_ACT_STARTING:
- schedule_work(&ctrl->fw_act_work);
+ queue_work(nvme_wq, &ctrl->fw_act_work);
break;
default:
dev_warn(ctrl->device, "async event result %08x\n", result);
opts->queue_size = NVMF_DEF_QUEUE_SIZE;
opts->nr_io_queues = num_online_cpus();
opts->reconnect_delay = NVMF_DEF_RECONNECT_DELAY;
+ opts->kato = NVME_DEFAULT_KATO;
options = o = kstrdup(buf, GFP_KERNEL);
if (!options)
goto out;
}
- if (opts->discovery_nqn) {
- pr_err("Discovery controllers cannot accept keep_alive_tmo != 0\n");
- ret = -EINVAL;
- goto out;
- }
-
if (token < 0) {
pr_err("Invalid keep_alive_tmo %d\n", token);
ret = -EINVAL;
goto out;
- } else if (token == 0) {
+ } else if (token == 0 && !opts->discovery_nqn) {
/* Allowed for debug */
pr_warn("keep_alive_tmo 0 won't execute keep alives!!!\n");
}
opts->kato = token;
+
+ if (opts->discovery_nqn && opts->kato) {
+ pr_err("Discovery controllers cannot accept KATO != 0\n");
+ ret = -EINVAL;
+ goto out;
+ }
+
break;
case NVMF_OPT_CTRL_LOSS_TMO:
if (match_int(args, &token)) {
uuid_copy(&opts->host->id, &hostid);
out:
- if (!opts->discovery_nqn && !opts->kato)
- opts->kato = NVME_DEFAULT_KATO;
kfree(options);
return ret;
}
if (atomic_read(&op->state) == FCPOP_STATE_ABORTED)
status = cpu_to_le16((NVME_SC_ABORT_REQ | NVME_SC_DNR) << 1);
else if (freq->status)
- status = cpu_to_le16(NVME_SC_FC_TRANSPORT_ERROR << 1);
+ status = cpu_to_le16(NVME_SC_INTERNAL << 1);
/*
* For the Linux implementation, if we have an unsuccessful
*/
if (freq->transferred_length !=
be32_to_cpu(op->cmd_iu.data_len)) {
- status = cpu_to_le16(NVME_SC_FC_TRANSPORT_ERROR << 1);
+ status = cpu_to_le16(NVME_SC_INTERNAL << 1);
goto done;
}
result.u64 = 0;
freq->transferred_length ||
op->rsp_iu.status_code ||
sqe->common.command_id != cqe->command_id)) {
- status = cpu_to_le16(NVME_SC_FC_TRANSPORT_ERROR << 1);
+ status = cpu_to_le16(NVME_SC_INTERNAL << 1);
goto done;
}
result = cqe->result;
break;
default:
- status = cpu_to_le16(NVME_SC_FC_TRANSPORT_ERROR << 1);
+ status = cpu_to_le16(NVME_SC_INTERNAL << 1);
goto done;
}
* as well as those by FC-NVME spec.
*/
WARN_ON_ONCE(sqe->common.metadata);
- WARN_ON_ONCE(sqe->common.dptr.prp1);
- WARN_ON_ONCE(sqe->common.dptr.prp2);
sqe->common.flags |= NVME_CMD_SGL_METABUF;
/*
- * format SQE DPTR field per FC-NVME rules
- * type=data block descr; subtype=offset;
- * offset is currently 0.
+ * format SQE DPTR field per FC-NVME rules:
+ * type=0x5 Transport SGL Data Block Descriptor
+ * subtype=0xA Transport-specific value
+ * address=0
+ * length=length of the data series
*/
- sqe->rw.dptr.sgl.type = NVME_SGL_FMT_OFFSET;
+ sqe->rw.dptr.sgl.type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
+ NVME_SGL_FMT_TRANSPORT_A;
sqe->rw.dptr.sgl.length = cpu_to_le32(data_len);
sqe->rw.dptr.sgl.addr = 0;
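The comment spells out the nibble packing: descriptor type in the high four bits, subtype in the low four. With the values it cites, the byte written is:

	/* (0x5 << 4) | 0xA == 0x5A: Transport SGL Data Block descriptor
	 * with the transport-specific subtype, per the FC-NVME rules
	 * cited above.
	 */
	u8 type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) | NVME_SGL_FMT_TRANSPORT_A;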
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/mutex.h>
+#include <linux/once.h>
#include <linux/pci.h>
#include <linux/poison.h>
#include <linux/t10-pi.h>
struct mutex shutdown_lock;
bool subsystem;
void __iomem *cmb;
- dma_addr_t cmb_dma_addr;
+ pci_bus_addr_t cmb_bus_addr;
u64 cmb_size;
u32 cmbsz;
u32 cmbloc;
}
#endif
+static void nvme_print_sgl(struct scatterlist *sgl, int nents)
+{
+ int i;
+ struct scatterlist *sg;
+
+ for_each_sg(sgl, sg, nents, i) {
+ dma_addr_t phys = sg_phys(sg);
+ pr_warn("sg[%d] phys_addr:%pad offset:%d length:%d "
+ "dma_address:%pad dma_length:%d\n",
+ i, &phys, sg->offset, sg->length, &sg_dma_address(sg),
+ sg_dma_len(sg));
+ }
+}
+
static blk_status_t nvme_setup_prps(struct nvme_dev *dev, struct request *req)
{
struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
return BLK_STS_OK;
bad_sgl:
- if (WARN_ONCE(1, "Invalid SGL for payload:%d nents:%d\n",
- blk_rq_payload_bytes(req), iod->nents)) {
- for_each_sg(iod->sg, sg, iod->nents, i) {
- dma_addr_t phys = sg_phys(sg);
- pr_warn("sg[%d] phys_addr:%pad offset:%d length:%d "
- "dma_address:%pad dma_length:%d\n", i, &phys,
- sg->offset, sg->length,
- &sg_dma_address(sg),
- sg_dma_len(sg));
- }
- }
+ WARN(DO_ONCE(nvme_print_sgl, iod->sg, iod->nents),
+ "Invalid SGL for payload:%d nents:%d\n",
+ blk_rq_payload_bytes(req), iod->nents);
return BLK_STS_IOERR;
-
}
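DO_ONCE() (from <linux/once.h>, included above) runs its callee only on the first invocation and returns true just that once, so the SGL dump and the WARN backtrace each fire a single time rather than on every malformed request. A hedged sketch of the idiom with hypothetical names:

	static void dump_ctx(int err)
	{
		pr_warn("first failure, err=%d\n", err);
	}

	static int handle_failure(int err)
	{
		/* dump + backtrace only for the first failure ever seen */
		WARN(DO_ONCE(dump_ctx, err), "operation failed, err=%d\n", err);
		return err;
	}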
static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
if (qid && dev->cmb && use_cmb_sqes && NVME_CMB_SQS(dev->cmbsz)) {
unsigned offset = (qid - 1) * roundup(SQ_SIZE(depth),
dev->ctrl.page_size);
- nvmeq->sq_dma_addr = dev->cmb_dma_addr + offset;
+ nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset;
nvmeq->sq_cmds_io = dev->cmb + offset;
} else {
nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
if (result < 0)
goto release_cq;
+ nvme_init_queue(nvmeq, qid);
result = queue_request_irq(nvmeq);
if (result < 0)
goto release_sq;
- nvme_init_queue(nvmeq, qid);
return result;
release_sq:
return result;
nvmeq->cq_vector = 0;
+ nvme_init_queue(nvmeq, 0);
result = queue_request_irq(nvmeq);
if (result) {
nvmeq->cq_vector = -1;
resource_size_t bar_size;
struct pci_dev *pdev = to_pci_dev(dev->dev);
void __iomem *cmb;
- dma_addr_t dma_addr;
+ int bar;
dev->cmbsz = readl(dev->bar + NVME_REG_CMBSZ);
if (!(NVME_CMB_SZ(dev->cmbsz)))
szu = (u64)1 << (12 + 4 * NVME_CMB_SZU(dev->cmbsz));
size = szu * NVME_CMB_SZ(dev->cmbsz);
offset = szu * NVME_CMB_OFST(dev->cmbloc);
- bar_size = pci_resource_len(pdev, NVME_CMB_BIR(dev->cmbloc));
+ bar = NVME_CMB_BIR(dev->cmbloc);
+ bar_size = pci_resource_len(pdev, bar);
if (offset > bar_size)
return NULL;
if (size > bar_size - offset)
size = bar_size - offset;
- dma_addr = pci_resource_start(pdev, NVME_CMB_BIR(dev->cmbloc)) + offset;
- cmb = ioremap_wc(dma_addr, size);
+ cmb = ioremap_wc(pci_resource_start(pdev, bar) + offset, size);
if (!cmb)
return NULL;
- dev->cmb_dma_addr = dma_addr;
+ dev->cmb_bus_addr = pci_bus_address(pdev, bar) + offset;
dev->cmb_size = size;
return cmb;
}
if (result)
goto out;
- nvme_init_queue(dev->queues[0], 0);
result = nvme_alloc_admin_tags(dev);
if (result)
goto out;
}
changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
- WARN_ON_ONCE(!changed);
+ if (!changed) {
+ /* state change failure is ok if we're in DELETING state */
+ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
+ return;
+ }
+
ctrl->ctrl.nr_reconnects = 0;
nvme_start_ctrl(&ctrl->ctrl);
struct nvme_rdma_ctrl *ctrl = container_of(work,
struct nvme_rdma_ctrl, err_work);
- nvme_stop_ctrl(&ctrl->ctrl);
+ nvme_stop_keep_alive(&ctrl->ctrl);
if (ctrl->ctrl.queue_count > 1) {
nvme_stop_queues(&ctrl->ctrl);
if (status)
nvmet_set_status(req, status);
- /* XXX: need to fill in something useful for sq_head */
- req->rsp->sq_head = 0;
- if (likely(req->sq)) /* may happen during early failure */
- req->rsp->sq_id = cpu_to_le16(req->sq->qid);
+ if (req->sq->size)
+ req->sq->sqhd = (req->sq->sqhd + 1) % req->sq->size;
+ req->rsp->sq_head = cpu_to_le16(req->sq->sqhd);
+ req->rsp->sq_id = cpu_to_le16(req->sq->qid);
req->rsp->command_id = req->cmd->common.command_id;
if (req->ns)
void nvmet_sq_setup(struct nvmet_ctrl *ctrl, struct nvmet_sq *sq,
u16 qid, u16 size)
{
+ sq->sqhd = 0;
sq->qid = qid;
sq->size = size;
pr_warn("queue already connected!\n");
return NVME_SC_CONNECT_CTRL_BUSY | NVME_SC_DNR;
}
+ if (!sqsize) {
+ pr_warn("queue size zero!\n");
+ return NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
+ }
- nvmet_cq_setup(ctrl, req->cq, qid, sqsize);
- nvmet_sq_setup(ctrl, req->sq, qid, sqsize);
+ /* note: convert queue size from 0's-based value to 1's-based value */
+ nvmet_cq_setup(ctrl, req->cq, qid, sqsize + 1);
+ nvmet_sq_setup(ctrl, req->sq, qid, sqsize + 1);
return 0;
}
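For context on the conversion comment: the SQSIZE field in the NVMe connect command is a 0's-based value, so a wire value of n describes n + 1 queue entries, and 0 is not a legal size (hence the new check). A trivial standalone illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned short wire_sqsize = 127;        /* value carried in the command */
        unsigned int entries = wire_sqsize + 1;  /* 128 usable entries */

        printf("wire=%u -> entries=%u\n", wire_sqsize, entries);
        return 0;
    }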
u32 a_id;
struct nvmet_fc_tgtport *tgtport;
struct list_head a_list;
- struct nvmet_fc_tgt_queue *queues[NVMET_NR_QUEUES];
+ struct nvmet_fc_tgt_queue *queues[NVMET_NR_QUEUES + 1];
struct kref ref;
};
unsigned long flags;
int ret;
- if (qid >= NVMET_NR_QUEUES)
+ if (qid > NVMET_NR_QUEUES)
return NULL;
queue = kzalloc((sizeof(*queue) +
u16 qid = nvmet_fc_getqueueid(connection_id);
unsigned long flags;
+ if (qid > NVMET_NR_QUEUES)
+ return NULL;
+
spin_lock_irqsave(&tgtport->lock, flags);
list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
if (association_id == assoc->association_id) {
int i;
spin_lock_irqsave(&tgtport->lock, flags);
- for (i = NVMET_NR_QUEUES - 1; i >= 0; i--) {
+ for (i = NVMET_NR_QUEUES; i >= 0; i--) {
queue = assoc->queues[i];
if (queue) {
if (!nvmet_fc_tgt_q_get(queue))
spin_lock_irqsave(&fod->flock, flags);
fod->writedataactive = false;
spin_unlock_irqrestore(&fod->flock, flags);
- nvmet_req_complete(&fod->req,
- NVME_SC_FC_TRANSPORT_ERROR);
+ nvmet_req_complete(&fod->req, NVME_SC_INTERNAL);
} else /* NVMET_FCOP_READDATA or NVMET_FCOP_READDATA_RSP */ {
fcpreq->fcp_error = ret;
fcpreq->transferred_length = 0;
/* if in the middle of an io and we need to tear down */
if (abort) {
if (fcpreq->op == NVMET_FCOP_WRITEDATA) {
- nvmet_req_complete(&fod->req,
- NVME_SC_FC_TRANSPORT_ERROR);
+ nvmet_req_complete(&fod->req, NVME_SC_INTERNAL);
return true;
}
fod->abort = true;
spin_unlock(&fod->flock);
- nvmet_req_complete(&fod->req,
- NVME_SC_FC_TRANSPORT_ERROR);
+ nvmet_req_complete(&fod->req, NVME_SC_INTERNAL);
return;
}
{
struct nvmet_fc_tgtport *tgtport = port->priv;
unsigned long flags;
+ bool matched = false;
spin_lock_irqsave(&nvmet_fc_tgtlock, flags);
if (tgtport->port == port) {
- nvmet_fc_tgtport_put(tgtport);
+ matched = true;
tgtport->port = NULL;
}
spin_unlock_irqrestore(&nvmet_fc_tgtlock, flags);
+
+ if (matched)
+ nvmet_fc_tgtport_put(tgtport);
}
static struct nvmet_fabrics_ops nvmet_fc_tgt_fcp_ops = {
struct fcloop_lport *lport;
struct list_head nport_list;
struct kref ref;
- struct completion rport_unreg_done;
- struct completion tport_unreg_done;
u64 node_name;
u64 port_name;
u32 port_role;
tfcp_req->aborted = true;
spin_unlock(&tfcp_req->reqlock);
- tfcp_req->status = NVME_SC_FC_TRANSPORT_ABORTED;
+ tfcp_req->status = NVME_SC_INTERNAL;
/*
* nothing more to do. If io wasn't active, the transport should
schedule_work(&inireq->iniwork);
}
+static void
+fcloop_nport_free(struct kref *ref)
+{
+ struct fcloop_nport *nport =
+ container_of(ref, struct fcloop_nport, ref);
+ unsigned long flags;
+
+ spin_lock_irqsave(&fcloop_lock, flags);
+ list_del(&nport->nport_list);
+ spin_unlock_irqrestore(&fcloop_lock, flags);
+
+ kfree(nport);
+}
+
+static void
+fcloop_nport_put(struct fcloop_nport *nport)
+{
+ kref_put(&nport->ref, fcloop_nport_free);
+}
+
+static int
+fcloop_nport_get(struct fcloop_nport *nport)
+{
+ return kref_get_unless_zero(&nport->ref);
+}
+
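These helpers replace the completion-based teardown below with plain reference counting: kref_get_unless_zero() fails once the count has already dropped to zero, and the final kref_put() invokes the release callback that unlinks and frees the nport. A generic kernel-context sketch of the pattern (names are illustrative, not from the patch):

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/slab.h>

    struct obj {
        struct kref ref;
    };

    static void obj_free(struct kref *ref)
    {
        kfree(container_of(ref, struct obj, ref)); /* last put frees */
    }

    static void obj_put(struct obj *o)
    {
        kref_put(&o->ref, obj_free);
    }

    static int obj_get(struct obj *o)
    {
        /* returns 0 if the object is already being torn down */
        return kref_get_unless_zero(&o->ref);
    }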
static void
fcloop_localport_delete(struct nvme_fc_local_port *localport)
{
{
struct fcloop_rport *rport = remoteport->private;
- /* release any threads waiting for the unreg to complete */
- complete(&rport->nport->rport_unreg_done);
+ fcloop_nport_put(rport->nport);
}
static void
{
struct fcloop_tport *tport = targetport->private;
- /* release any threads waiting for the unreg to complete */
- complete(&tport->nport->tport_unreg_done);
+ fcloop_nport_put(tport->nport);
}
#define FCLOOP_HW_QUEUES 4
goto out_free_opts;
}
+ memset(&pinfo, 0, sizeof(pinfo));
pinfo.node_name = opts->wwnn;
pinfo.port_name = opts->wwpn;
pinfo.port_role = opts->roles;
return ret ? ret : count;
}
-static void
-fcloop_nport_free(struct kref *ref)
-{
- struct fcloop_nport *nport =
- container_of(ref, struct fcloop_nport, ref);
- unsigned long flags;
-
- spin_lock_irqsave(&fcloop_lock, flags);
- list_del(&nport->nport_list);
- spin_unlock_irqrestore(&fcloop_lock, flags);
-
- kfree(nport);
-}
-
-static void
-fcloop_nport_put(struct fcloop_nport *nport)
-{
- kref_put(&nport->ref, fcloop_nport_free);
-}
-
-static int
-fcloop_nport_get(struct fcloop_nport *nport)
-{
- return kref_get_unless_zero(&nport->ref);
-}
-
static struct fcloop_nport *
fcloop_alloc_nport(const char *buf, size_t count, bool remoteport)
{
if (!nport)
return -EIO;
+ memset(&pinfo, 0, sizeof(pinfo));
pinfo.node_name = nport->node_name;
pinfo.port_name = nport->port_name;
pinfo.port_role = nport->port_role;
}
static int
-__wait_remoteport_unreg(struct fcloop_nport *nport, struct fcloop_rport *rport)
+__remoteport_unreg(struct fcloop_nport *nport, struct fcloop_rport *rport)
{
- int ret;
-
if (!rport)
return -EALREADY;
- init_completion(&nport->rport_unreg_done);
-
- ret = nvme_fc_unregister_remoteport(rport->remoteport);
- if (ret)
- return ret;
-
- wait_for_completion(&nport->rport_unreg_done);
-
- fcloop_nport_put(nport);
-
- return ret;
+ return nvme_fc_unregister_remoteport(rport->remoteport);
}
static ssize_t
if (!nport)
return -ENOENT;
- ret = __wait_remoteport_unreg(nport, rport);
+ ret = __remoteport_unreg(nport, rport);
return ret ? ret : count;
}
}
static int
-__wait_targetport_unreg(struct fcloop_nport *nport, struct fcloop_tport *tport)
+__targetport_unreg(struct fcloop_nport *nport, struct fcloop_tport *tport)
{
- int ret;
-
if (!tport)
return -EALREADY;
- init_completion(&nport->tport_unreg_done);
-
- ret = nvmet_fc_unregister_targetport(tport->targetport);
- if (ret)
- return ret;
-
- wait_for_completion(&nport->tport_unreg_done);
-
- fcloop_nport_put(nport);
-
- return ret;
+ return nvmet_fc_unregister_targetport(tport->targetport);
}
static ssize_t
if (!nport)
return -ENOENT;
- ret = __wait_targetport_unreg(nport, tport);
+ ret = __targetport_unreg(nport, tport);
return ret ? ret : count;
}
spin_unlock_irqrestore(&fcloop_lock, flags);
- ret = __wait_targetport_unreg(nport, tport);
+ ret = __targetport_unreg(nport, tport);
if (ret)
pr_warn("%s: Failed deleting target port\n", __func__);
- ret = __wait_remoteport_unreg(nport, rport);
+ ret = __remoteport_unreg(nport, rport);
if (ret)
pr_warn("%s: Failed deleting remote port\n", __func__);
struct percpu_ref ref;
u16 qid;
u16 size;
+ u16 sqhd;
struct completion free_done;
struct completion confirm_done;
};
/* Stop the user from writing */
if (pos >= nvmem->size)
- return 0;
+ return -EFBIG;
if (count < nvmem->word_size)
return -EINVAL;
return ERR_PTR(-EINVAL);
nvmem = __nvmem_device_get(nvmem_np, NULL, NULL);
+ of_node_put(nvmem_np);
if (IS_ERR(nvmem))
return ERR_CAST(nvmem);
rmem->ops->device_release(rmem, dev);
}
EXPORT_SYMBOL_GPL(of_reserved_mem_device_release);
+
+/**
+ * of_reserved_mem_lookup() - acquire reserved_mem from a device node
+ * @np: node pointer of the desired reserved-memory region
+ *
+ * This function allows drivers to acquire a reference to the reserved_mem
+ * struct based on a device node handle.
+ *
+ * Returns a reserved_mem reference, or NULL on error.
+ */
+struct reserved_mem *of_reserved_mem_lookup(struct device_node *np)
+{
+ const char *name;
+ int i;
+
+ if (!np->full_name)
+ return NULL;
+
+ name = kbasename(np->full_name);
+ for (i = 0; i < reserved_mem_count; i++)
+ if (!strcmp(reserved_mem[i].name, name))
+ return &reserved_mem[i];
+
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(of_reserved_mem_lookup);
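A hypothetical consumer of the new helper, resolving a driver's "memory-region" phandle to the matching reserved_mem entry (error handling trimmed; the property name follows the usual reserved-memory binding):

    #include <linux/of.h>
    #include <linux/of_reserved_mem.h>
    #include <linux/printk.h>

    static int example_get_rmem(struct device_node *dev_np)
    {
        struct device_node *np;
        struct reserved_mem *rmem;

        np = of_parse_phandle(dev_np, "memory-region", 0);
        if (!np)
            return -ENODEV;

        rmem = of_reserved_mem_lookup(np);
        of_node_put(np);
        if (!rmem)
            return -ENODEV;

        pr_info("region at %pa, size %pa\n", &rmem->base, &rmem->size);
        return 0;
    }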
EXPORT_SYMBOL_GPL(of_platform_default_populate);
#ifndef CONFIG_PPC
+static const struct of_device_id reserved_mem_matches[] = {
+ { .compatible = "qcom,rmtfs-mem" },
+ { .compatible = "ramoops" },
+ {}
+};
+
static int __init of_platform_default_populate_init(void)
{
struct device_node *node;
return -ENODEV;
/*
- * Handle ramoops explicitly, since it is inside /reserved-memory,
- * which lacks a "compatible" property.
+ * Handle certain compatibles explicitly, since we don't want to create
+ * platform_devices for every node in /reserved-memory with a
+	 * "compatible".
*/
- node = of_find_node_by_path("/reserved-memory");
- if (node) {
- node = of_find_compatible_node(node, NULL, "ramoops");
- if (node)
- of_platform_device_create(node, NULL, NULL);
- }
+ for_each_matching_node(node, reserved_mem_matches)
+ of_platform_device_create(node, NULL, NULL);
/* Populate everything else. */
of_platform_default_populate(NULL, NULL, NULL);
return ret;
}
-static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test)
+static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq)
{
- u8 irq;
u8 msi_count;
struct pci_epf *epf = epf_test->epf;
struct pci_epc *epc = epf->epc;
reg->status |= STATUS_IRQ_RAISED;
msi_count = pci_epc_get_msi(epc);
- irq = (reg->command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT;
if (irq > msi_count || msi_count <= 0)
pci_epc_raise_irq(epc, PCI_EPC_IRQ_LEGACY, 0);
else
reg->command = 0;
reg->status = 0;
+ irq = (command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT;
+
if (command & COMMAND_RAISE_LEGACY_IRQ) {
reg->status = STATUS_IRQ_RAISED;
pci_epc_raise_irq(epc, PCI_EPC_IRQ_LEGACY, 0);
reg->status |= STATUS_WRITE_FAIL;
else
reg->status |= STATUS_WRITE_SUCCESS;
- pci_epf_test_raise_irq(epf_test);
+ pci_epf_test_raise_irq(epf_test, irq);
goto reset_handler;
}
reg->status |= STATUS_READ_SUCCESS;
else
reg->status |= STATUS_READ_FAIL;
- pci_epf_test_raise_irq(epf_test);
+ pci_epf_test_raise_irq(epf_test, irq);
goto reset_handler;
}
reg->status |= STATUS_COPY_SUCCESS;
else
reg->status |= STATUS_COPY_FAIL;
- pci_epf_test_raise_irq(epf_test);
+ pci_epf_test_raise_irq(epf_test, irq);
goto reset_handler;
}
if (command & COMMAND_RAISE_MSI_IRQ) {
msi_count = pci_epc_get_msi(epc);
- irq = (command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT;
if (irq > msi_count || msi_count <= 0)
goto reset_handler;
reg->status = STATUS_IRQ_RAISED;
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
- char *driver_override, *old = pdev->driver_override, *cp;
+ char *driver_override, *old, *cp;
/* We need to keep extra room for a newline */
if (count >= (PAGE_SIZE - 1))
if (cp)
*cp = '\0';
+ device_lock(dev);
+ old = pdev->driver_override;
if (strlen(driver_override)) {
pdev->driver_override = driver_override;
} else {
kfree(driver_override);
pdev->driver_override = NULL;
}
+ device_unlock(dev);
kfree(old);
struct device_attribute *attr, char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
+ ssize_t len;
- return snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override);
+ device_lock(dev);
+ len = snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override);
+ device_unlock(dev);
+ return len;
}
static DEVICE_ATTR_RW(driver_override);
ret = armpmu_register(pmu);
if (ret) {
pr_warn("Failed to register PMU for CPU%d\n", cpu);
+ kfree(pmu->name);
return ret;
}
}
{
struct acpi_device *device = bl_get_data(b);
- if (b->props.power == FB_BLANK_POWERDOWN)
- call_fext_func(fext, FUNC_BACKLIGHT, 0x1, 0x4, 0x3);
- else
- call_fext_func(fext, FUNC_BACKLIGHT, 0x1, 0x4, 0x0);
+ if (fext) {
+ if (b->props.power == FB_BLANK_POWERDOWN)
+ call_fext_func(fext, FUNC_BACKLIGHT, 0x1, 0x4, 0x3);
+ else
+ call_fext_func(fext, FUNC_BACKLIGHT, 0x1, 0x4, 0x0);
+ }
return set_lcd_level(device, b->props.brightness);
}
MODULE_PARM_DESC(mbox_sel,
"RIO Messaging MBOX Selection Mask (default: 0x0f = all)");
+static DEFINE_SPINLOCK(tsi721_maint_lock);
+
static void tsi721_omsg_handler(struct tsi721_device *priv, int ch);
static void tsi721_imsg_handler(struct tsi721_device *priv, int ch);
void __iomem *regs = priv->regs + TSI721_DMAC_BASE(priv->mdma.ch_id);
struct tsi721_dma_desc *bd_ptr;
u32 rd_count, swr_ptr, ch_stat;
+ unsigned long flags;
int i, err = 0;
u32 op = do_wr ? MAINT_WR : MAINT_RD;
if (offset > (RIO_MAINT_SPACE_SZ - len) || (len != sizeof(u32)))
return -EINVAL;
+ spin_lock_irqsave(&tsi721_maint_lock, flags);
+
bd_ptr = priv->mdma.bd_base;
rd_count = ioread32(regs + TSI721_DMAC_DRDCNT);
*/
swr_ptr = ioread32(regs + TSI721_DMAC_DSWP);
iowrite32(swr_ptr, regs + TSI721_DMAC_DSRP);
+
err_out:
+ spin_unlock_irqrestore(&tsi721_maint_lock, flags);
return err;
}
#include <linux/rio.h>
#include <linux/module.h>
-/*
- * These interrupt-safe spinlocks protect all accesses to RIO
- * configuration space and doorbell access.
- */
-static DEFINE_SPINLOCK(rio_config_lock);
-static DEFINE_SPINLOCK(rio_doorbell_lock);
-
/*
* Wrappers for all RIO configuration access functions. They just check
- * alignment, do locking and call the low-level functions pointed to
- * by rio_mport->ops.
+ * alignment and call the low-level functions pointed to by rio_mport->ops.
*/
#define RIO_8_BAD 0
(struct rio_mport *mport, u32 offset, type *value) \
{ \
int res; \
- unsigned long flags; \
u32 data = 0; \
if (RIO_##size##_BAD) return RIO_BAD_SIZE; \
- spin_lock_irqsave(&rio_config_lock, flags); \
res = mport->ops->lcread(mport, mport->id, offset, len, &data); \
*value = (type)data; \
- spin_unlock_irqrestore(&rio_config_lock, flags); \
return res; \
}
int __rio_local_write_config_##size \
(struct rio_mport *mport, u32 offset, type value) \
{ \
- int res; \
- unsigned long flags; \
if (RIO_##size##_BAD) return RIO_BAD_SIZE; \
- spin_lock_irqsave(&rio_config_lock, flags); \
- res = mport->ops->lcwrite(mport, mport->id, offset, len, value);\
- spin_unlock_irqrestore(&rio_config_lock, flags); \
- return res; \
+ return mport->ops->lcwrite(mport, mport->id, offset, len, value);\
}
RIO_LOP_READ(8, u8, 1)
(struct rio_mport *mport, u16 destid, u8 hopcount, u32 offset, type *value) \
{ \
int res; \
- unsigned long flags; \
u32 data = 0; \
if (RIO_##size##_BAD) return RIO_BAD_SIZE; \
- spin_lock_irqsave(&rio_config_lock, flags); \
res = mport->ops->cread(mport, mport->id, destid, hopcount, offset, len, &data); \
*value = (type)data; \
- spin_unlock_irqrestore(&rio_config_lock, flags); \
return res; \
}
int rio_mport_write_config_##size \
(struct rio_mport *mport, u16 destid, u8 hopcount, u32 offset, type value) \
{ \
- int res; \
- unsigned long flags; \
if (RIO_##size##_BAD) return RIO_BAD_SIZE; \
- spin_lock_irqsave(&rio_config_lock, flags); \
- res = mport->ops->cwrite(mport, mport->id, destid, hopcount, offset, len, value); \
- spin_unlock_irqrestore(&rio_config_lock, flags); \
- return res; \
+ return mport->ops->cwrite(mport, mport->id, destid, hopcount, \
+ offset, len, value); \
}
RIO_OP_READ(8, u8, 1)
*/
int rio_mport_send_doorbell(struct rio_mport *mport, u16 destid, u16 data)
{
- int res;
- unsigned long flags;
-
- spin_lock_irqsave(&rio_doorbell_lock, flags);
- res = mport->ops->dsend(mport, mport->id, destid, data);
- spin_unlock_irqrestore(&rio_doorbell_lock, flags);
-
- return res;
+ return mport->ops->dsend(mport, mport->id, destid, data);
}
EXPORT_SYMBOL_GPL(rio_mport_send_doorbell);
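With the global config and doorbell locks removed, serialization becomes the mport driver's responsibility, as the tsi721_maint_lock hunk above already shows. A sketch of an ops callback taking its own lock (kernel context; example_hw_read() is a hypothetical hardware accessor):

    #include <linux/rio.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_maint_lock);

    /* stand-in for the real register/DMA access */
    static int example_hw_read(u32 offset, int len, u32 *data)
    {
        *data = 0;
        return 0;
    }

    static int example_lcread(struct rio_mport *mport, int index, u32 offset,
                              int len, u32 *data)
    {
        unsigned long flags;
        int rc;

        /* locking now lives in the driver, not the RIO core */
        spin_lock_irqsave(&example_maint_lock, flags);
        rc = example_hw_read(offset, len, data);
        spin_unlock_irqrestore(&example_maint_lock, flags);
        return rc;
    }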
This enables the ATH79 reset controller driver that supports the
AR71xx SoC reset controller.
+config RESET_AXS10X
+ bool "AXS10x Reset Driver" if COMPILE_TEST
+ default ARC_PLAT_AXS10X
+ help
+ This enables the reset controller driver for AXS10x.
+
config RESET_BERLIN
bool "Berlin Reset Driver" if COMPILE_TEST
default ARCH_BERLIN
help
This enables the reset controller driver for Marvell Berlin SoCs.
-config RESET_HSDK_V1
- bool "HSDK v1 Reset Driver"
- default n
+config RESET_HSDK
+ bool "Synopsys HSDK Reset Driver"
+ depends on HAS_IOMEM
+ depends on ARC_SOC_HSDK || COMPILE_TEST
help
- This enables the reset controller driver for HSDK v1.
+	This enables the reset controller driver for the HSDK board.
config RESET_IMX7
bool "i.MX7 Reset Driver" if COMPILE_TEST
help
This enables the reset driver for ImgTec Pistachio SoCs.
-config RESET_SOCFPGA
- bool "SoCFPGA Reset Driver" if COMPILE_TEST
- default ARCH_SOCFPGA
+config RESET_SIMPLE
+ bool "Simple Reset Controller Driver" if COMPILE_TEST
+ default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX
help
- This enables the reset controller driver for Altera SoCFPGAs.
+ This enables a simple reset controller driver for reset lines that
+	can be asserted and deasserted by toggling bits in a contiguous,
+ exclusive register space.
-config RESET_STM32
- bool "STM32 Reset Driver" if COMPILE_TEST
- default ARCH_STM32
- help
- This enables the RCC reset controller driver for STM32 MCUs.
+ Currently this driver supports Altera SoCFPGAs, the RCC reset
+ controller in STM32 MCUs, Allwinner SoCs, and ZTE's zx2967 family.
config RESET_SUNXI
bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI
default ARCH_SUNXI
+ select RESET_SIMPLE
help
This enables the reset driver for Allwinner SoCs.
Say Y if you want to control reset signals provided by System Control
block, Media I/O block, Peripheral Block.
-config RESET_ZX2967
- bool "ZTE ZX2967 Reset Driver"
- depends on ARCH_ZX || COMPILE_TEST
- help
- This enables the reset controller driver for ZTE's zx2967 family.
-
config RESET_ZYNQ
bool "ZYNQ Reset Driver" if COMPILE_TEST
default ARCH_ZYNQ
obj-$(CONFIG_ARCH_TEGRA) += tegra/
obj-$(CONFIG_RESET_A10SR) += reset-a10sr.o
obj-$(CONFIG_RESET_ATH79) += reset-ath79.o
+obj-$(CONFIG_RESET_AXS10X) += reset-axs10x.o
obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o
-obj-$(CONFIG_RESET_HSDK_V1) += reset-hsdk-v1.o
+obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o
obj-$(CONFIG_RESET_IMX7) += reset-imx7.o
obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o
obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o
obj-$(CONFIG_RESET_MESON) += reset-meson.o
obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o
obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o
-obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o
-obj-$(CONFIG_RESET_STM32) += reset-stm32.o
+obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o
obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o
obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o
obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o
obj-$(CONFIG_RESET_UNIPHIER) += reset-uniphier.o
-obj-$(CONFIG_RESET_ZX2967) += reset-zx2967.o
obj-$(CONFIG_RESET_ZYNQ) += reset-zynq.o
--- /dev/null
+/*
+ * Copyright (C) 2017 Synopsys.
+ *
+ * Synopsys AXS10x reset driver.
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/reset-controller.h>
+
+#define to_axs10x_rst(p) container_of((p), struct axs10x_rst, rcdev)
+
+#define AXS10X_MAX_RESETS 32
+
+struct axs10x_rst {
+ void __iomem *regs_rst;
+ spinlock_t lock;
+ struct reset_controller_dev rcdev;
+};
+
+static int axs10x_reset_reset(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct axs10x_rst *rst = to_axs10x_rst(rcdev);
+ unsigned long flags;
+
+ spin_lock_irqsave(&rst->lock, flags);
+ writel(BIT(id), rst->regs_rst);
+ spin_unlock_irqrestore(&rst->lock, flags);
+
+ return 0;
+}
+
+static const struct reset_control_ops axs10x_reset_ops = {
+ .reset = axs10x_reset_reset,
+};
+
+static int axs10x_reset_probe(struct platform_device *pdev)
+{
+ struct axs10x_rst *rst;
+ struct resource *mem;
+
+ rst = devm_kzalloc(&pdev->dev, sizeof(*rst), GFP_KERNEL);
+ if (!rst)
+ return -ENOMEM;
+
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ rst->regs_rst = devm_ioremap_resource(&pdev->dev, mem);
+ if (IS_ERR(rst->regs_rst))
+ return PTR_ERR(rst->regs_rst);
+
+ spin_lock_init(&rst->lock);
+
+ rst->rcdev.owner = THIS_MODULE;
+ rst->rcdev.ops = &axs10x_reset_ops;
+ rst->rcdev.of_node = pdev->dev.of_node;
+ rst->rcdev.nr_resets = AXS10X_MAX_RESETS;
+
+ return devm_reset_controller_register(&pdev->dev, &rst->rcdev);
+}
+
+static const struct of_device_id axs10x_reset_dt_match[] = {
+ { .compatible = "snps,axs10x-reset" },
+ { },
+};
+
+static struct platform_driver axs10x_reset_driver = {
+ .probe = axs10x_reset_probe,
+ .driver = {
+ .name = "axs10x-reset",
+ .of_match_table = axs10x_reset_dt_match,
+ },
+};
+builtin_platform_driver(axs10x_reset_driver);
+
+MODULE_AUTHOR("Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>");
+MODULE_DESCRIPTION("Synopsys AXS10x reset driver");
+MODULE_LICENSE("GPL v2");
/*
* Copyright (C) 2017 Synopsys.
*
- * Synopsys HSDKv1 SDP reset driver.
+ * Synopsys HSDK Development platform reset driver.
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
#include <linux/slab.h>
#include <linux/types.h>
-#define to_hsdkv1_rst(p) container_of((p), struct hsdkv1_rst, rcdev)
+#define to_hsdk_rst(p) container_of((p), struct hsdk_rst, rcdev)
-struct hsdkv1_rst {
+struct hsdk_rst {
void __iomem *regs_ctl;
void __iomem *regs_rst;
spinlock_t lock;
#define CGU_IP_SW_RESET_RESET BIT(0)
#define SW_RESET_TIMEOUT 10000
-static void hsdkv1_reset_config(struct hsdkv1_rst *rst, unsigned long id)
+static void hsdk_reset_config(struct hsdk_rst *rst, unsigned long id)
{
writel(rst_map[id], rst->regs_ctl + CGU_SYS_RST_CTRL);
}
-static int hsdkv1_reset_do(struct hsdkv1_rst *rst)
+static int hsdk_reset_do(struct hsdk_rst *rst)
{
u32 reg;
!(reg & CGU_IP_SW_RESET_RESET), 5, SW_RESET_TIMEOUT);
}
-static int hsdkv1_reset_reset(struct reset_controller_dev *rcdev,
+static int hsdk_reset_reset(struct reset_controller_dev *rcdev,
unsigned long id)
{
- struct hsdkv1_rst *rst = to_hsdkv1_rst(rcdev);
+ struct hsdk_rst *rst = to_hsdk_rst(rcdev);
unsigned long flags;
int ret;
spin_lock_irqsave(&rst->lock, flags);
- hsdkv1_reset_config(rst, id);
- ret = hsdkv1_reset_do(rst);
+ hsdk_reset_config(rst, id);
+ ret = hsdk_reset_do(rst);
spin_unlock_irqrestore(&rst->lock, flags);
return ret;
}
-static const struct reset_control_ops hsdkv1_reset_ops = {
- .reset = hsdkv1_reset_reset,
+static const struct reset_control_ops hsdk_reset_ops = {
+ .reset = hsdk_reset_reset,
};
-static int hsdkv1_reset_probe(struct platform_device *pdev)
+static int hsdk_reset_probe(struct platform_device *pdev)
{
- struct hsdkv1_rst *rst;
+ struct hsdk_rst *rst;
struct resource *mem;
rst = devm_kzalloc(&pdev->dev, sizeof(*rst), GFP_KERNEL);
spin_lock_init(&rst->lock);
rst->rcdev.owner = THIS_MODULE;
- rst->rcdev.ops = &hsdkv1_reset_ops;
+ rst->rcdev.ops = &hsdk_reset_ops;
rst->rcdev.of_node = pdev->dev.of_node;
rst->rcdev.nr_resets = HSDK_MAX_RESETS;
rst->rcdev.of_reset_n_cells = 1;
return reset_controller_register(&rst->rcdev);
}
-static const struct of_device_id hsdkv1_reset_dt_match[] = {
- { .compatible = "snps,hsdk-v1.0-reset" },
+static const struct of_device_id hsdk_reset_dt_match[] = {
+ { .compatible = "snps,hsdk-reset" },
{ },
};
-static struct platform_driver hsdkv1_reset_driver = {
- .probe = hsdkv1_reset_probe,
+static struct platform_driver hsdk_reset_driver = {
+ .probe = hsdk_reset_probe,
.driver = {
- .name = "hsdk-v1.0-reset",
- .of_match_table = hsdkv1_reset_dt_match,
+ .name = "hsdk-reset",
+ .of_match_table = hsdk_reset_dt_match,
},
};
-builtin_platform_driver(hsdkv1_reset_driver);
+builtin_platform_driver(hsdk_reset_driver);
MODULE_AUTHOR("Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>");
-MODULE_DESCRIPTION("Synopsys HSDKv1 SDP reset driver");
+MODULE_DESCRIPTION("Synopsys HSDK SDP reset driver");
MODULE_LICENSE("GPL v2");
#include <linux/reset-controller.h>
#include <linux/slab.h>
#include <linux/types.h>
+#include <linux/of_device.h>
#define REG_COUNT 8
#define BITS_PER_REG 32
+#define LEVEL_OFFSET 0x7c
struct meson_reset {
void __iomem *reg_base;
struct reset_controller_dev rcdev;
+ spinlock_t lock;
};
static int meson_reset_reset(struct reset_controller_dev *rcdev,
unsigned int offset = id % BITS_PER_REG;
void __iomem *reg_addr = data->reg_base + (bank << 2);
- if (bank >= REG_COUNT)
- return -EINVAL;
-
writel(BIT(offset), reg_addr);
return 0;
}
-static const struct reset_control_ops meson_reset_ops = {
+static int meson_reset_level(struct reset_controller_dev *rcdev,
+ unsigned long id, bool assert)
+{
+ struct meson_reset *data =
+ container_of(rcdev, struct meson_reset, rcdev);
+ unsigned int bank = id / BITS_PER_REG;
+ unsigned int offset = id % BITS_PER_REG;
+ void __iomem *reg_addr = data->reg_base + LEVEL_OFFSET + (bank << 2);
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(&data->lock, flags);
+
+ reg = readl(reg_addr);
+ if (assert)
+ writel(reg & ~BIT(offset), reg_addr);
+ else
+ writel(reg | BIT(offset), reg_addr);
+
+ spin_unlock_irqrestore(&data->lock, flags);
+
+ return 0;
+}
+
+static int meson_reset_assert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ return meson_reset_level(rcdev, id, true);
+}
+
+static int meson_reset_deassert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ return meson_reset_level(rcdev, id, false);
+}
+
+static const struct reset_control_ops meson_reset_meson8_ops = {
.reset = meson_reset_reset,
};
+static const struct reset_control_ops meson_reset_gx_ops = {
+ .reset = meson_reset_reset,
+ .assert = meson_reset_assert,
+ .deassert = meson_reset_deassert,
+};
+
static const struct of_device_id meson_reset_dt_ids[] = {
- { .compatible = "amlogic,meson8b-reset", },
- { .compatible = "amlogic,meson-gxbb-reset", },
+ { .compatible = "amlogic,meson8b-reset",
+ .data = &meson_reset_meson8_ops, },
+ { .compatible = "amlogic,meson-gxbb-reset",
+ .data = &meson_reset_gx_ops, },
{ /* sentinel */ },
};
static int meson_reset_probe(struct platform_device *pdev)
{
+ const struct reset_control_ops *ops;
struct meson_reset *data;
struct resource *res;
if (!data)
return -ENOMEM;
+ ops = of_device_get_match_data(&pdev->dev);
+ if (!ops)
+ return -EINVAL;
+
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->reg_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->reg_base))
platform_set_drvdata(pdev, data);
+ spin_lock_init(&data->lock);
+
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG;
- data->rcdev.ops = &meson_reset_ops;
+ data->rcdev.ops = ops;
data->rcdev.of_node = pdev->dev.of_node;
return devm_reset_controller_register(&pdev->dev, &data->rcdev);
--- /dev/null
+/*
+ * Simple Reset Controller Driver
+ *
+ * Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * Based on Allwinner SoCs Reset Controller driver
+ *
+ * Copyright 2013 Maxime Ripard
+ *
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/reset-controller.h>
+#include <linux/spinlock.h>
+
+#include "reset-simple.h"
+
+static inline struct reset_simple_data *
+to_reset_simple_data(struct reset_controller_dev *rcdev)
+{
+ return container_of(rcdev, struct reset_simple_data, rcdev);
+}
+
+static int reset_simple_update(struct reset_controller_dev *rcdev,
+ unsigned long id, bool assert)
+{
+ struct reset_simple_data *data = to_reset_simple_data(rcdev);
+ int reg_width = sizeof(u32);
+ int bank = id / (reg_width * BITS_PER_BYTE);
+ int offset = id % (reg_width * BITS_PER_BYTE);
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(&data->lock, flags);
+
+ reg = readl(data->membase + (bank * reg_width));
+ if (assert ^ data->active_low)
+ reg |= BIT(offset);
+ else
+ reg &= ~BIT(offset);
+ writel(reg, data->membase + (bank * reg_width));
+
+ spin_unlock_irqrestore(&data->lock, flags);
+
+ return 0;
+}
+
+static int reset_simple_assert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ return reset_simple_update(rcdev, id, true);
+}
+
+static int reset_simple_deassert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ return reset_simple_update(rcdev, id, false);
+}
+
+static int reset_simple_status(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct reset_simple_data *data = to_reset_simple_data(rcdev);
+ int reg_width = sizeof(u32);
+ int bank = id / (reg_width * BITS_PER_BYTE);
+ int offset = id % (reg_width * BITS_PER_BYTE);
+ u32 reg;
+
+ reg = readl(data->membase + (bank * reg_width));
+
+ return !(reg & BIT(offset)) ^ !data->status_active_low;
+}
+
+const struct reset_control_ops reset_simple_ops = {
+ .assert = reset_simple_assert,
+ .deassert = reset_simple_deassert,
+ .status = reset_simple_status,
+};
+
+/**
+ * struct reset_simple_devdata - simple reset controller properties
+ * @reg_offset: offset between base address and first reset register.
+ * @nr_resets: number of resets. If not set, defaults to resource size in bits.
+ * @active_low: if true, bits are cleared to assert the reset. Otherwise, bits
+ * are set to assert the reset.
+ * @status_active_low: if true, bits read back as cleared while the reset is
+ * asserted. Otherwise, bits read back as set while the
+ * reset is asserted.
+ */
+struct reset_simple_devdata {
+ u32 reg_offset;
+ u32 nr_resets;
+ bool active_low;
+ bool status_active_low;
+};
+
+#define SOCFPGA_NR_BANKS 8
+
+static const struct reset_simple_devdata reset_simple_socfpga = {
+ .reg_offset = 0x10,
+ .nr_resets = SOCFPGA_NR_BANKS * 32,
+ .status_active_low = true,
+};
+
+static const struct reset_simple_devdata reset_simple_active_low = {
+ .active_low = true,
+ .status_active_low = true,
+};
+
+static const struct of_device_id reset_simple_dt_ids[] = {
+ { .compatible = "altr,rst-mgr", .data = &reset_simple_socfpga },
+ { .compatible = "st,stm32-rcc", },
+ { .compatible = "allwinner,sun6i-a31-clock-reset",
+ .data = &reset_simple_active_low },
+ { .compatible = "zte,zx296718-reset",
+ .data = &reset_simple_active_low },
+ { /* sentinel */ },
+};
+
+static int reset_simple_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ const struct reset_simple_devdata *devdata;
+ struct reset_simple_data *data;
+ void __iomem *membase;
+ struct resource *res;
+ u32 reg_offset = 0;
+
+ devdata = of_device_get_match_data(dev);
+
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ membase = devm_ioremap_resource(dev, res);
+ if (IS_ERR(membase))
+ return PTR_ERR(membase);
+
+ spin_lock_init(&data->lock);
+ data->membase = membase;
+ data->rcdev.owner = THIS_MODULE;
+ data->rcdev.nr_resets = resource_size(res) * BITS_PER_BYTE;
+ data->rcdev.ops = &reset_simple_ops;
+ data->rcdev.of_node = dev->of_node;
+
+ if (devdata) {
+ reg_offset = devdata->reg_offset;
+ if (devdata->nr_resets)
+ data->rcdev.nr_resets = devdata->nr_resets;
+ data->active_low = devdata->active_low;
+ data->status_active_low = devdata->status_active_low;
+ }
+
+ if (of_device_is_compatible(dev->of_node, "altr,rst-mgr") &&
+ of_property_read_u32(dev->of_node, "altr,modrst-offset",
+				 &reg_offset)) {
+ dev_warn(dev,
+ "missing altr,modrst-offset property, assuming 0x%x!\n",
+ reg_offset);
+ }
+
+ data->membase += reg_offset;
+
+ return devm_reset_controller_register(dev, &data->rcdev);
+}
+
+static struct platform_driver reset_simple_driver = {
+ .probe = reset_simple_probe,
+ .driver = {
+ .name = "simple-reset",
+ .of_match_table = reset_simple_dt_ids,
+ },
+};
+builtin_platform_driver(reset_simple_driver);
--- /dev/null
+/*
+ * Simple Reset Controller ops
+ *
+ * Based on Allwinner SoCs Reset Controller driver
+ *
+ * Copyright 2013 Maxime Ripard
+ *
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __RESET_SIMPLE_H__
+#define __RESET_SIMPLE_H__
+
+#include <linux/io.h>
+#include <linux/reset-controller.h>
+#include <linux/spinlock.h>
+
+/**
+ * struct reset_simple_data - driver data for simple reset controllers
+ * @lock: spinlock to protect registers during read-modify-write cycles
+ * @membase: memory mapped I/O register range
+ * @rcdev: reset controller device base structure
+ * @active_low: if true, bits are cleared to assert the reset. Otherwise, bits
+ * are set to assert the reset. Note that this says nothing about
+ * the voltage level of the actual reset line.
+ * @status_active_low: if true, bits read back as cleared while the reset is
+ * asserted. Otherwise, bits read back as set while the
+ * reset is asserted.
+ */
+struct reset_simple_data {
+ spinlock_t lock;
+ void __iomem *membase;
+ struct reset_controller_dev rcdev;
+ bool active_low;
+ bool status_active_low;
+};
+
+extern const struct reset_control_ops reset_simple_ops;
+
+#endif /* __RESET_SIMPLE_H__ */
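A sketch of how another reset driver can reuse reset_simple_ops by embedding struct reset_simple_data, which is exactly the shape of the sunxi conversion further down (the helper name and the active-low choice are illustrative):

    #include <linux/device.h>
    #include <linux/module.h>
    #include <linux/slab.h>
    #include "reset-simple.h"

    static int example_register_resets(struct device *dev, void __iomem *base,
                                       unsigned int nr_lines)
    {
        struct reset_simple_data *data;

        data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
        if (!data)
            return -ENOMEM;

        spin_lock_init(&data->lock);
        data->membase = base;
        data->active_low = true;          /* bits clear to assert */
        data->rcdev.owner = THIS_MODULE;
        data->rcdev.nr_resets = nr_lines;
        data->rcdev.ops = &reset_simple_ops;
        data->rcdev.of_node = dev->of_node;

        return devm_reset_controller_register(dev, &data->rcdev);
    }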
+++ /dev/null
-/*
- * Socfpga Reset Controller Driver
- *
- * Copyright 2014 Steffen Trumtrar <s.trumtrar@pengutronix.de>
- *
- * based on
- * Allwinner SoCs Reset Controller driver
- *
- * Copyright 2013 Maxime Ripard
- *
- * Maxime Ripard <maxime.ripard@free-electrons.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/err.h>
-#include <linux/io.h>
-#include <linux/init.h>
-#include <linux/of.h>
-#include <linux/platform_device.h>
-#include <linux/reset-controller.h>
-#include <linux/spinlock.h>
-#include <linux/types.h>
-
-#define BANK_INCREMENT 4
-#define NR_BANKS 8
-
-struct socfpga_reset_data {
- spinlock_t lock;
- void __iomem *membase;
- struct reset_controller_dev rcdev;
-};
-
-static int socfpga_reset_assert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct socfpga_reset_data *data = container_of(rcdev,
- struct socfpga_reset_data,
- rcdev);
- int bank = id / BITS_PER_LONG;
- int offset = id % BITS_PER_LONG;
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(&data->lock, flags);
-
- reg = readl(data->membase + (bank * BANK_INCREMENT));
- writel(reg | BIT(offset), data->membase + (bank * BANK_INCREMENT));
- spin_unlock_irqrestore(&data->lock, flags);
-
- return 0;
-}
-
-static int socfpga_reset_deassert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct socfpga_reset_data *data = container_of(rcdev,
- struct socfpga_reset_data,
- rcdev);
-
- int bank = id / BITS_PER_LONG;
- int offset = id % BITS_PER_LONG;
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(&data->lock, flags);
-
- reg = readl(data->membase + (bank * BANK_INCREMENT));
- writel(reg & ~BIT(offset), data->membase + (bank * BANK_INCREMENT));
-
- spin_unlock_irqrestore(&data->lock, flags);
-
- return 0;
-}
-
-static int socfpga_reset_status(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct socfpga_reset_data *data = container_of(rcdev,
- struct socfpga_reset_data, rcdev);
- int bank = id / BITS_PER_LONG;
- int offset = id % BITS_PER_LONG;
- u32 reg;
-
- reg = readl(data->membase + (bank * BANK_INCREMENT));
-
- return !(reg & BIT(offset));
-}
-
-static const struct reset_control_ops socfpga_reset_ops = {
- .assert = socfpga_reset_assert,
- .deassert = socfpga_reset_deassert,
- .status = socfpga_reset_status,
-};
-
-static int socfpga_reset_probe(struct platform_device *pdev)
-{
- struct socfpga_reset_data *data;
- struct resource *res;
- struct device *dev = &pdev->dev;
- struct device_node *np = dev->of_node;
- u32 modrst_offset;
-
- /*
- * The binding was mainlined without the required property.
- * Do not continue, when we encounter an old DT.
- */
- if (!of_find_property(pdev->dev.of_node, "#reset-cells", NULL)) {
- dev_err(&pdev->dev, "%pOF missing #reset-cells property\n",
- pdev->dev.of_node);
- return -EINVAL;
- }
-
- data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
- if (!data)
- return -ENOMEM;
-
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- data->membase = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(data->membase))
- return PTR_ERR(data->membase);
-
- if (of_property_read_u32(np, "altr,modrst-offset", &modrst_offset)) {
- dev_warn(dev, "missing altr,modrst-offset property, assuming 0x10!\n");
- modrst_offset = 0x10;
- }
- data->membase += modrst_offset;
-
- spin_lock_init(&data->lock);
-
- data->rcdev.owner = THIS_MODULE;
- data->rcdev.nr_resets = NR_BANKS * BITS_PER_LONG;
- data->rcdev.ops = &socfpga_reset_ops;
- data->rcdev.of_node = pdev->dev.of_node;
-
- return devm_reset_controller_register(dev, &data->rcdev);
-}
-
-static const struct of_device_id socfpga_reset_dt_ids[] = {
- { .compatible = "altr,rst-mgr", },
- { /* sentinel */ },
-};
-
-static struct platform_driver socfpga_reset_driver = {
- .probe = socfpga_reset_probe,
- .driver = {
- .name = "socfpga-reset",
- .of_match_table = socfpga_reset_dt_ids,
- },
-};
-builtin_platform_driver(socfpga_reset_driver);
+++ /dev/null
-/*
- * Copyright (C) Maxime Coquelin 2015
- * Author: Maxime Coquelin <mcoquelin.stm32@gmail.com>
- * License terms: GNU General Public License (GPL), version 2
- *
- * Heavily based on sunxi driver from Maxime Ripard.
- */
-
-#include <linux/err.h>
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/platform_device.h>
-#include <linux/reset-controller.h>
-#include <linux/slab.h>
-#include <linux/spinlock.h>
-#include <linux/types.h>
-
-struct stm32_reset_data {
- spinlock_t lock;
- void __iomem *membase;
- struct reset_controller_dev rcdev;
-};
-
-static int stm32_reset_assert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct stm32_reset_data *data = container_of(rcdev,
- struct stm32_reset_data,
- rcdev);
- int bank = id / BITS_PER_LONG;
- int offset = id % BITS_PER_LONG;
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(&data->lock, flags);
-
- reg = readl(data->membase + (bank * 4));
- writel(reg | BIT(offset), data->membase + (bank * 4));
-
- spin_unlock_irqrestore(&data->lock, flags);
-
- return 0;
-}
-
-static int stm32_reset_deassert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct stm32_reset_data *data = container_of(rcdev,
- struct stm32_reset_data,
- rcdev);
- int bank = id / BITS_PER_LONG;
- int offset = id % BITS_PER_LONG;
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(&data->lock, flags);
-
- reg = readl(data->membase + (bank * 4));
- writel(reg & ~BIT(offset), data->membase + (bank * 4));
-
- spin_unlock_irqrestore(&data->lock, flags);
-
- return 0;
-}
-
-static const struct reset_control_ops stm32_reset_ops = {
- .assert = stm32_reset_assert,
- .deassert = stm32_reset_deassert,
-};
-
-static const struct of_device_id stm32_reset_dt_ids[] = {
- { .compatible = "st,stm32-rcc", },
- { /* sentinel */ },
-};
-
-static int stm32_reset_probe(struct platform_device *pdev)
-{
- struct stm32_reset_data *data;
- struct resource *res;
-
- data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
- if (!data)
- return -ENOMEM;
-
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- data->membase = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(data->membase))
- return PTR_ERR(data->membase);
-
- spin_lock_init(&data->lock);
-
- data->rcdev.owner = THIS_MODULE;
- data->rcdev.nr_resets = resource_size(res) * 8;
- data->rcdev.ops = &stm32_reset_ops;
- data->rcdev.of_node = pdev->dev.of_node;
-
- return devm_reset_controller_register(&pdev->dev, &data->rcdev);
-}
-
-static struct platform_driver stm32_reset_driver = {
- .probe = stm32_reset_probe,
- .driver = {
- .name = "stm32-rcc-reset",
- .of_match_table = stm32_reset_dt_ids,
- },
-};
-builtin_platform_driver(stm32_reset_driver);
#include <linux/spinlock.h>
#include <linux/types.h>
-struct sunxi_reset_data {
- spinlock_t lock;
- void __iomem *membase;
- struct reset_controller_dev rcdev;
-};
-
-static int sunxi_reset_assert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct sunxi_reset_data *data = container_of(rcdev,
- struct sunxi_reset_data,
- rcdev);
- int reg_width = sizeof(u32);
- int bank = id / (reg_width * BITS_PER_BYTE);
- int offset = id % (reg_width * BITS_PER_BYTE);
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(&data->lock, flags);
-
- reg = readl(data->membase + (bank * reg_width));
- writel(reg & ~BIT(offset), data->membase + (bank * reg_width));
-
- spin_unlock_irqrestore(&data->lock, flags);
-
- return 0;
-}
-
-static int sunxi_reset_deassert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct sunxi_reset_data *data = container_of(rcdev,
- struct sunxi_reset_data,
- rcdev);
- int reg_width = sizeof(u32);
- int bank = id / (reg_width * BITS_PER_BYTE);
- int offset = id % (reg_width * BITS_PER_BYTE);
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(&data->lock, flags);
-
- reg = readl(data->membase + (bank * reg_width));
- writel(reg | BIT(offset), data->membase + (bank * reg_width));
-
- spin_unlock_irqrestore(&data->lock, flags);
-
- return 0;
-}
-
-static const struct reset_control_ops sunxi_reset_ops = {
- .assert = sunxi_reset_assert,
- .deassert = sunxi_reset_deassert,
-};
+#include "reset-simple.h"
static int sunxi_reset_init(struct device_node *np)
{
- struct sunxi_reset_data *data;
+ struct reset_simple_data *data;
struct resource res;
resource_size_t size;
int ret;
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = size * 8;
- data->rcdev.ops = &sunxi_reset_ops;
+ data->rcdev.ops = &reset_simple_ops;
data->rcdev.of_node = np;
+ data->active_low = true;
return reset_controller_register(&data->rcdev);
* These are the reset controller we need to initialize early on in
* our system, before we can even think of using a regular device
* driver for it.
+ * The controllers that we can register through the regular device
+ * model are handled by the simple reset driver directly.
*/
static const struct of_device_id sunxi_early_reset_dt_ids[] __initconst = {
{ .compatible = "allwinner,sun6i-a31-ahb1-reset", },
for_each_matching_node(np, sunxi_early_reset_dt_ids)
sunxi_reset_init(np);
}
-
-/*
- * And these are the controllers we can register through the regular
- * device model.
- */
-static const struct of_device_id sunxi_reset_dt_ids[] = {
- { .compatible = "allwinner,sun6i-a31-clock-reset", },
- { /* sentinel */ },
-};
-
-static int sunxi_reset_probe(struct platform_device *pdev)
-{
- struct sunxi_reset_data *data;
- struct resource *res;
-
- data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
- if (!data)
- return -ENOMEM;
-
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- data->membase = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(data->membase))
- return PTR_ERR(data->membase);
-
- spin_lock_init(&data->lock);
-
- data->rcdev.owner = THIS_MODULE;
- data->rcdev.nr_resets = resource_size(res) * 8;
- data->rcdev.ops = &sunxi_reset_ops;
- data->rcdev.of_node = pdev->dev.of_node;
-
- return devm_reset_controller_register(&pdev->dev, &data->rcdev);
-}
-
-static struct platform_driver sunxi_reset_driver = {
- .probe = sunxi_reset_probe,
- .driver = {
- .name = "sunxi-reset",
- .of_match_table = sunxi_reset_dt_ids,
- },
-};
-builtin_platform_driver(sunxi_reset_driver);
static const struct uniphier_reset_data uniphier_pro4_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x2000, 2), /* NAND */
+ UNIPHIER_RESETX(6, 0x2000, 12), /* Ether */
UNIPHIER_RESETX(8, 0x2000, 10), /* STDMAC (HSC, MIO, RLE) */
UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (Ether, SATA, USB3) */
UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */
static const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x2000, 2), /* NAND */
+ UNIPHIER_RESETX(6, 0x2000, 12), /* Ether */
UNIPHIER_RESETX(8, 0x2000, 10), /* STDMAC (HSC, RLE) */
UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */
UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */
static const struct uniphier_reset_data uniphier_ld11_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */
UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
+ UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */
UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC, MIO) */
UNIPHIER_RESETX(40, 0x2008, 0), /* AIO */
UNIPHIER_RESETX(41, 0x2008, 1), /* EVEA */
static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */
UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
+ UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */
UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC) */
UNIPHIER_RESETX(12, 0x200c, 5), /* GIO (PCIe, USB3) */
UNIPHIER_RESETX(16, 0x200c, 12), /* USB30-PHY0 */
UNIPHIER_RESET_END,
};
+static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = {
+ UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */
+ UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
+ UNIPHIER_RESETX(8, 0x200c, 12), /* STDMAC */
+ UNIPHIER_RESETX(12, 0x200c, 4), /* USB30 link (GIO0) */
+ UNIPHIER_RESETX(13, 0x200c, 5), /* USB31 link (GIO1) */
+ UNIPHIER_RESETX(16, 0x200c, 16), /* USB30-PHY0 */
+ UNIPHIER_RESETX(17, 0x200c, 18), /* USB30-PHY1 */
+ UNIPHIER_RESETX(18, 0x200c, 20), /* USB30-PHY2 */
+ UNIPHIER_RESETX(20, 0x200c, 17), /* USB31-PHY0 */
+ UNIPHIER_RESETX(21, 0x200c, 19), /* USB31-PHY1 */
+ UNIPHIER_RESET_END,
+};
+
/* Media I/O reset data */
#define UNIPHIER_MIO_RESET_SD(id, ch) \
UNIPHIER_RESETX((id), 0x110 + 0x200 * (ch), 0)
.compatible = "socionext,uniphier-ld20-reset",
.data = uniphier_ld20_sys_reset_data,
},
+ {
+ .compatible = "socionext,uniphier-pxs3-reset",
+ .data = uniphier_pxs3_sys_reset_data,
+ },
/* Media I/O reset, SD reset */
{
.compatible = "socionext,uniphier-ld4-mio-reset",
.compatible = "socionext,uniphier-ld20-sd-reset",
.data = uniphier_pro5_sd_reset_data,
},
+ {
+ .compatible = "socionext,uniphier-pxs3-sd-reset",
+ .data = uniphier_pro5_sd_reset_data,
+ },
/* Peripheral reset */
{
.compatible = "socionext,uniphier-ld4-peri-reset",
.compatible = "socionext,uniphier-ld20-peri-reset",
.data = uniphier_pro4_peri_reset_data,
},
+ {
+ .compatible = "socionext,uniphier-pxs3-peri-reset",
+ .data = uniphier_pro4_peri_reset_data,
+ },
/* Analog signal amplifiers reset */
{
.compatible = "socionext,uniphier-ld11-adamv-reset",
+++ /dev/null
-/*
- * ZTE's zx2967 family reset controller driver
- *
- * Copyright (C) 2017 ZTE Ltd.
- *
- * Author: Baoyou Xie <baoyou.xie@linaro.org>
- *
- * License terms: GNU General Public License (GPL) version 2
- */
-
-#include <linux/of_address.h>
-#include <linux/platform_device.h>
-#include <linux/reset-controller.h>
-
-struct zx2967_reset {
- void __iomem *reg_base;
- spinlock_t lock;
- struct reset_controller_dev rcdev;
-};
-
-static int zx2967_reset_act(struct reset_controller_dev *rcdev,
- unsigned long id, bool assert)
-{
- struct zx2967_reset *reset = NULL;
- int bank = id / 32;
- int offset = id % 32;
- u32 reg;
- unsigned long flags;
-
- reset = container_of(rcdev, struct zx2967_reset, rcdev);
-
- spin_lock_irqsave(&reset->lock, flags);
-
- reg = readl_relaxed(reset->reg_base + (bank * 4));
- if (assert)
- reg &= ~BIT(offset);
- else
- reg |= BIT(offset);
- writel_relaxed(reg, reset->reg_base + (bank * 4));
-
- spin_unlock_irqrestore(&reset->lock, flags);
-
- return 0;
-}
-
-static int zx2967_reset_assert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- return zx2967_reset_act(rcdev, id, true);
-}
-
-static int zx2967_reset_deassert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- return zx2967_reset_act(rcdev, id, false);
-}
-
-static const struct reset_control_ops zx2967_reset_ops = {
- .assert = zx2967_reset_assert,
- .deassert = zx2967_reset_deassert,
-};
-
-static int zx2967_reset_probe(struct platform_device *pdev)
-{
- struct zx2967_reset *reset;
- struct resource *res;
-
- reset = devm_kzalloc(&pdev->dev, sizeof(*reset), GFP_KERNEL);
- if (!reset)
- return -ENOMEM;
-
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- reset->reg_base = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(reset->reg_base))
- return PTR_ERR(reset->reg_base);
-
- spin_lock_init(&reset->lock);
-
- reset->rcdev.owner = THIS_MODULE;
- reset->rcdev.nr_resets = resource_size(res) * 8;
- reset->rcdev.ops = &zx2967_reset_ops;
- reset->rcdev.of_node = pdev->dev.of_node;
-
- return devm_reset_controller_register(&pdev->dev, &reset->rcdev);
-}
-
-static const struct of_device_id zx2967_reset_dt_ids[] = {
- { .compatible = "zte,zx296718-reset", },
- {},
-};
-
-static struct platform_driver zx2967_reset_driver = {
- .probe = zx2967_reset_probe,
- .driver = {
- .name = "zx2967-reset",
- .of_match_table = zx2967_reset_dt_ids,
- },
-};
-builtin_platform_driver(zx2967_reset_driver);
dasd_schedule_device_bh(device);
if (device->block) {
dasd_schedule_block_bh(device->block);
- blk_mq_run_hw_queues(device->block->request_queue, true);
+ if (device->block->request_queue)
+ blk_mq_run_hw_queues(device->block->request_queue,
+ true);
}
}
EXPORT_SYMBOL_GPL(dasd_generic_handle_state_change);
dasd_schedule_device_bh(device);
if (device->block) {
dasd_schedule_block_bh(device->block);
- blk_mq_run_hw_queues(device->block->request_queue, true);
+ if (device->block->request_queue)
+ blk_mq_run_hw_queues(device->block->request_queue,
+ true);
}
if (!device->stopped)
if (device->block) {
dasd_schedule_block_bh(device->block);
- blk_mq_run_hw_queues(device->block->request_queue, true);
+ if (device->block->request_queue)
+ blk_mq_run_hw_queues(device->block->request_queue,
+ true);
}
clear_bit(DASD_FLAG_SUSPENDED, &device->flags);
static void scm_request_finish(struct scm_request *scmrq)
{
struct scm_blk_dev *bdev = scmrq->bdev;
- int *error;
+ blk_status_t *error;
int i;
for (i = 0; i < nr_requests_per_io && scmrq->request[i]; i++) {
static void scm_blk_request_done(struct request *req)
{
- int *error = blk_mq_rq_to_pdu(req);
+ blk_status_t *error = blk_mq_rq_to_pdu(req);
blk_mq_end_request(req, *error);
}
atomic_set(&bdev->queued_reqs, 0);
bdev->tag_set.ops = &scm_mq_ops;
- bdev->tag_set.cmd_size = sizeof(int);
+ bdev->tag_set.cmd_size = sizeof(blk_status_t);
bdev->tag_set.nr_hw_queues = nr_requests;
bdev->tag_set.queue_depth = nr_requests_per_io * nr_requests;
bdev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
static int recovery_check(struct device *dev, void *data)
{
struct ccw_device *cdev = to_ccwdev(dev);
+ struct subchannel *sch;
int *redo = data;
spin_lock_irq(cdev->ccwlock);
switch (cdev->private->state) {
+ case DEV_STATE_ONLINE:
+ sch = to_subchannel(cdev->dev.parent);
+ if ((sch->schib.pmcw.pam & sch->opm) == sch->vpm)
+ break;
+ /* fall through */
case DEV_STATE_DISCONNECTED:
CIO_MSG_EVENT(3, "recovery: trigger 0.%x.%04x\n",
cdev->private->dev_id.ssid,
}
spin_unlock_irq(&recovery_lock);
} else
- CIO_MSG_EVENT(4, "recovery: end\n");
+ CIO_MSG_EVENT(3, "recovery: end\n");
}
static DECLARE_WORK(recovery_work, recovery_work_func);
schedule_work(&recovery_work);
}
-static void ccw_device_schedule_recovery(void)
+void ccw_device_schedule_recovery(void)
{
unsigned long flags;
- CIO_MSG_EVENT(4, "recovery: schedule\n");
+ CIO_MSG_EVENT(3, "recovery: schedule\n");
spin_lock_irqsave(&recovery_lock, flags);
if (!timer_pending(&recovery_timer) || (recovery_phase != 0)) {
recovery_phase = 0;
void ccw_device_set_notoper(struct ccw_device *cdev);
void ccw_device_set_timeout(struct ccw_device *, int);
+void ccw_device_schedule_recovery(void);
/* Channel measurement facility related */
void retry_set_schib(struct ccw_device *cdev);
}
}
+static void ccw_device_handle_broken_paths(struct ccw_device *cdev)
+{
+ struct subchannel *sch = to_subchannel(cdev->dev.parent);
+ u8 broken_paths = (sch->schib.pmcw.pam & sch->opm) ^ sch->vpm;
+
+ if (broken_paths && (cdev->private->path_broken_mask != broken_paths))
+ ccw_device_schedule_recovery();
+
+ cdev->private->path_broken_mask = broken_paths;
+}
+
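A worked example of the mask arithmetic in ccw_device_handle_broken_paths(): pam & opm selects the paths that should be usable, and the XOR with vpm (the paths actually verified) leaves those that should work but do not. Standalone illustration with invented mask values:

    #include <assert.h>

    int main(void)
    {
        unsigned char pam = 0xc0, opm = 0xff, vpm = 0x80;
        unsigned char broken = (pam & opm) ^ vpm;

        assert(broken == 0x40); /* one expected-usable path unverified */
        return 0;
    }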
void ccw_device_verify_done(struct ccw_device *cdev, int err)
{
struct subchannel *sch;
memset(&cdev->private->irb, 0, sizeof(struct irb));
}
ccw_device_report_path_events(cdev);
+ ccw_device_handle_broken_paths(cdev);
break;
case -ETIME:
case -EUSERS:
not operable */
u8 path_gone_mask; /* mask of paths, that became unavailable */
u8 path_new_mask; /* mask of paths, that became available */
+ u8 path_broken_mask; /* mask of paths, which were found to be
+ unusable */
struct {
unsigned int fast:1; /* post with "channel end" */
unsigned int repall:1; /* report every interrupt status */
int status;
dresp = (struct aac_mount *) fib_data(fibptr);
- if (!(fibptr->dev->supplement_adapter_info.supported_options2 &
- AAC_OPTION_VARIABLE_BLOCK_SIZE))
+ if (!aac_supports_2T(fibptr->dev)) {
dresp->mnt[0].capacityhigh = 0;
- if ((le32_to_cpu(dresp->status) != ST_OK) ||
- (le32_to_cpu(dresp->mnt[0].vol) != CT_NONE)) {
- _aac_probe_container2(context, fibptr);
- return;
+ if ((le32_to_cpu(dresp->status) == ST_OK) &&
+ (le32_to_cpu(dresp->mnt[0].vol) != CT_NONE)) {
+ _aac_probe_container2(context, fibptr);
+ return;
+ }
}
scsicmd = (struct scsi_cmnd *) context;
return 0;
}
+static inline int aac_supports_2T(struct aac_dev *dev)
+{
+ return (dev->adapter_info.options & AAC_OPT_NEW_COMM_64);
+}
+
char * get_container_type(unsigned type);
extern int numacb;
extern char aac_driver_version[];
bus = aac_logical_to_phys(scmd_channel(cmd));
cid = scmd_id(cmd);
- info = &aac->hba_map[bus][cid];
- if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS ||
- info->devtype != AAC_DEVTYPE_NATIVE_RAW)
+
+ if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS)
return FAILED;
- if (info->reset_state > 0)
+ info = &aac->hba_map[bus][cid];
+
+ if (info->devtype != AAC_DEVTYPE_NATIVE_RAW &&
+ info->reset_state > 0)
return FAILED;
pr_err("%s: Host adapter reset request. SCSI hang ?\n",
bus = aac_logical_to_phys(scmd_channel(cmd));
cid = scmd_id(cmd);
- info = &aac->hba_map[bus][cid];
- if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS ||
- info->devtype != AAC_DEVTYPE_NATIVE_RAW)
+
+ if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS)
return FAILED;
- if (info->reset_state > 0)
+ info = &aac->hba_map[bus][cid];
+
+ if (info->devtype != AAC_DEVTYPE_NATIVE_RAW &&
+ info->reset_state > 0)
return FAILED;
pr_err("%s: Host adapter reset request. SCSI hang ?\n",
aac_set_intx_mode(dev);
src_writel(dev, MUnit.IDR, IOP_SRC_RESET_MASK);
+
+ msleep(5000);
}
static void aac_send_hardware_soft_reset(struct aac_dev *dev)
* Params : SCpnt - command causing reset
* Returns : one of SCSI_RESET_ macros
*/
-int acornscsi_host_reset(struct Scsi_Host *shpnt)
+int acornscsi_host_reset(struct scsi_cmnd *SCpnt)
{
- AS_Host *host = (AS_Host *)shpnt->hostdata;
+ AS_Host *host = (AS_Host *)SCpnt->device->host->hostdata;
struct scsi_cmnd *SCptr;
host->stats.resets += 1;
printk(KERN_WARNING "acornscsi_reset: ");
print_sbic_status(asr, ssr, host->scsi.phase);
- for (devidx = 0; devidx < 9; devidx ++) {
+ for (devidx = 0; devidx < 9; devidx++)
acornscsi_dumplog(host, devidx);
}
#endif
*/
if ((vscsi->flags & (CLIENT_FAILED | RESPONSE_Q_DOWN))) {
pr_err("write_pending failed since: %d\n", vscsi->flags);
- return 0;
+ return -EIO;
}
rc = srp_transfer_data(cmd, &vio_iu(iue)->srp.cmd, ibmvscsis_rdma,
/**
* iscsi_session_teardown - destroy session, host, and cls_session
* @cls_session: iscsi session
- *
- * The driver must have called iscsi_remove_session before
- * calling this.
*/
void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
{
iscsi_pool_free(&session->cmdpool);
+ iscsi_remove_session(cls_session);
+
kfree(session->password);
kfree(session->password_in);
kfree(session->username);
kfree(session->portal_type);
kfree(session->discovery_parent_type);
- iscsi_destroy_session(cls_session);
+ iscsi_free_session(cls_session);
+
iscsi_host_dec_session_cnt(shost);
module_put(owner);
}
"Extents and RPI headers enabled.\n");
}
mempool_free(mboxq, phba->mbox_mem_pool);
+ rc = -EIO;
goto out_free_bsmbx;
}
wcqe->total_data_placed);
nCmd->transferred_length = 0;
nCmd->rcv_rsplen = 0;
- nCmd->status = NVME_SC_FC_TRANSPORT_ERROR;
+ nCmd->status = NVME_SC_INTERNAL;
}
}
goto rel;
if (unlikely(res == QLA_FUNCTION_FAILED))
- fd->status = NVME_SC_FC_TRANSPORT_ERROR;
+ fd->status = NVME_SC_INTERNAL;
else
fd->status = 0;
if (sshdr.asc == 0x20 || /* Invalid command operation code */
sshdr.asc == 0x21 || /* Logical block address out of range */
sshdr.asc == 0x24 || /* Invalid field in cdb */
- sshdr.asc == 0x26) { /* Parameter value invalid */
+ sshdr.asc == 0x26 || /* Parameter value invalid */
+ sshdr.asc == 0x27) { /* Write protected */
set_host_byte(scmd, DID_TARGET_FAILURE);
}
return SUCCESS;
if (*bflags & BLIST_NO_DIF)
sdev->no_dif = 1;
+ if (*bflags & BLIST_UNMAP_LIMIT_WS)
+ sdev->unmap_limit_for_ws = 1;
+
sdev->eh_timeout = SCSI_DEFAULT_EH_TIMEOUT;
if (*bflags & BLIST_TRY_VPD_PAGES)
list_for_each_entry(rport, &fc_host->rports, peers) {
- if ((rport->port_state == FC_PORTSTATE_BLOCKED) &&
+ if ((rport->port_state == FC_PORTSTATE_BLOCKED ||
+ rport->port_state == FC_PORTSTATE_NOTPRESENT) &&
(rport->channel == channel)) {
switch (fc_host->tgtid_bind_type) {
memcpy(&rport->port_name, &ids->port_name,
sizeof(rport->port_name));
rport->port_id = ids->port_id;
- rport->roles = ids->roles;
rport->port_state = FC_PORTSTATE_ONLINE;
rport->flags &= ~FC_RPORT_FAST_FAIL_TIMEDOUT;
fci->f->dd_fcrport_size);
spin_unlock_irqrestore(shost->host_lock, flags);
- if (ids->roles & FC_PORT_ROLE_FCP_TARGET) {
- scsi_target_unblock(&rport->dev, SDEV_RUNNING);
-
- /* initiate a scan of the target */
- spin_lock_irqsave(shost->host_lock, flags);
- rport->flags |= FC_RPORT_SCAN_PENDING;
- scsi_queue_work(shost, &rport->scan_work);
- spin_unlock_irqrestore(shost->host_lock, flags);
- }
+ fc_remote_port_rolechg(rport, ids->roles);
return rport;
}
}
static enum blk_eh_timer_return
fc_bsg_job_timeout(struct request *req)
{
- struct bsg_job *job = (void *) req->special;
+ struct bsg_job *job = blk_mq_rq_to_pdu(req);
struct Scsi_Host *shost = fc_bsg_to_shost(job);
struct fc_rport *rport = fc_bsg_to_rport(job);
struct fc_internal *i = to_fc_internal(shost->transportt);
}
EXPORT_SYMBOL_GPL(iscsi_free_session);
-/**
- * iscsi_destroy_session - destroy iscsi session
- * @session: iscsi_session
- *
- * Can be called by a LLD or iscsi_transport. There must not be
- * any running connections.
- */
-int iscsi_destroy_session(struct iscsi_cls_session *session)
-{
- iscsi_remove_session(session);
- ISCSI_DBG_TRANS_SESSION(session, "Completing session destruction\n");
- iscsi_free_session(session);
- return 0;
-}
-EXPORT_SYMBOL_GPL(iscsi_destroy_session);
-
/**
* iscsi_create_conn - create iscsi class connection
* @session: iscsi cls session
uint32_t group;
nlh = nlmsg_hdr(skb);
- if (nlh->nlmsg_len < sizeof(*nlh) ||
+ if (nlh->nlmsg_len < sizeof(*nlh) + sizeof(*ev) ||
skb->len < nlh->nlmsg_len) {
break;
}
break;
case SD_LBP_WS16:
- max_blocks = min_not_zero(sdkp->max_ws_blocks,
- (u32)SD_MAX_WS16_BLOCKS);
+ if (sdkp->device->unmap_limit_for_ws)
+ max_blocks = sdkp->max_unmap_blocks;
+ else
+ max_blocks = sdkp->max_ws_blocks;
+
+ max_blocks = min_not_zero(max_blocks, (u32)SD_MAX_WS16_BLOCKS);
break;
case SD_LBP_WS10:
- max_blocks = min_not_zero(sdkp->max_ws_blocks,
- (u32)SD_MAX_WS10_BLOCKS);
+ if (sdkp->device->unmap_limit_for_ws)
+ max_blocks = sdkp->max_unmap_blocks;
+ else
+ max_blocks = sdkp->max_ws_blocks;
+
+ max_blocks = min_not_zero(max_blocks, (u32)SD_MAX_WS10_BLOCKS);
break;
case SD_LBP_ZERO:
sd_config_discard(sdkp, SD_LBP_WS16);
else if (sdkp->lbpws10)
sd_config_discard(sdkp, SD_LBP_WS10);
- else if (sdkp->lbpu && sdkp->max_unmap_blocks)
- sd_config_discard(sdkp, SD_LBP_UNMAP);
else
sd_config_discard(sdkp, SD_LBP_DISABLE);
}
sd_read_security(sdkp, buffer);
}
- sdkp->first_scan = 0;
-
/*
* We now have all cache related info, determine how we deal
* with flush requests.
q->limits.max_dev_sectors = logical_to_sectors(sdp, dev_max);
/*
- * Use the device's preferred I/O size for reads and writes
+ * Determine the device's preferred I/O size for reads and writes
* unless the reported value is unreasonably small, large, or
* garbage.
*/
rw_max = min_not_zero(logical_to_sectors(sdp, dev_max),
(sector_t)BLK_DEF_MAX_SECTORS);
- /* Combine with controller limits */
- q->limits.max_sectors = min(rw_max, queue_max_hw_sectors(q));
+ /* Do not exceed controller limit */
+ rw_max = min(rw_max, queue_max_hw_sectors(q));
+
+ /*
+ * Only update max_sectors if previously unset or if the current value
+ * exceeds the capabilities of the hardware.
+ */
+ if (sdkp->first_scan ||
+ q->limits.max_sectors > q->limits.max_dev_sectors ||
+ q->limits.max_sectors > q->limits.max_hw_sectors)
+ q->limits.max_sectors = rw_max;
+
+ sdkp->first_scan = 0;
set_capacity(disk, logical_to_sectors(sdp, sdkp->capacity));
sd_config_write_same(sdkp);
return max_sectors << 9;
}
+static void
+sg_fill_request_table(Sg_fd *sfp, sg_req_info_t *rinfo)
+{
+ Sg_request *srp;
+ int val;
+ unsigned int ms;
+
+ val = 0;
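+	/* The caller provides a zeroed table; fill one slot per queued request. */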
+ list_for_each_entry(srp, &sfp->rq_list, entry) {
+		if (val >= SG_MAX_QUEUE)
+ break;
+ rinfo[val].req_state = srp->done + 1;
+ rinfo[val].problem =
+ srp->header.masked_status &
+ srp->header.host_status &
+ srp->header.driver_status;
+ if (srp->done)
+ rinfo[val].duration =
+ srp->header.duration;
+ else {
+ ms = jiffies_to_msecs(jiffies);
+ rinfo[val].duration =
+ (ms > srp->header.duration) ?
+ (ms - srp->header.duration) : 0;
+ }
+ rinfo[val].orphan = srp->orphan;
+ rinfo[val].sg_io_owned = srp->sg_io_owned;
+ rinfo[val].pack_id = srp->header.pack_id;
+ rinfo[val].usr_ptr = srp->header.usr_ptr;
+ val++;
+ }
+}
+
static long
sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg)
{
return -EFAULT;
else {
sg_req_info_t *rinfo;
- unsigned int ms;
- rinfo = kmalloc(SZ_SG_REQ_INFO * SG_MAX_QUEUE,
- GFP_KERNEL);
+ rinfo = kzalloc(SZ_SG_REQ_INFO * SG_MAX_QUEUE,
+ GFP_KERNEL);
if (!rinfo)
return -ENOMEM;
read_lock_irqsave(&sfp->rq_list_lock, iflags);
- val = 0;
- list_for_each_entry(srp, &sfp->rq_list, entry) {
- if (val >= SG_MAX_QUEUE)
- break;
- memset(&rinfo[val], 0, SZ_SG_REQ_INFO);
- rinfo[val].req_state = srp->done + 1;
- rinfo[val].problem =
- srp->header.masked_status &
- srp->header.host_status &
- srp->header.driver_status;
- if (srp->done)
- rinfo[val].duration =
- srp->header.duration;
- else {
- ms = jiffies_to_msecs(jiffies);
- rinfo[val].duration =
- (ms > srp->header.duration) ?
- (ms - srp->header.duration) : 0;
- }
- rinfo[val].orphan = srp->orphan;
- rinfo[val].sg_io_owned = srp->sg_io_owned;
- rinfo[val].pack_id = srp->header.pack_id;
- rinfo[val].usr_ptr = srp->header.usr_ptr;
- val++;
- }
+ sg_fill_request_table(sfp, rinfo);
read_unlock_irqrestore(&sfp->rq_list_lock, iflags);
result = __copy_to_user(p, rinfo,
SZ_SG_REQ_INFO * SG_MAX_QUEUE);
obj-y += fsl/
obj-$(CONFIG_ARCH_MXC) += imx/
obj-$(CONFIG_SOC_XWAY) += lantiq/
-obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/
+obj-y += mediatek/
obj-$(CONFIG_ARCH_MESON) += amlogic/
obj-$(CONFIG_ARCH_QCOM) += qcom/
obj-y += renesas/
Say yes to support decoding of Amlogic Meson GX SoC family
information about the type, package and version.
+config MESON_GX_PM_DOMAINS
+ bool "Amlogic Meson GX Power Domains driver"
+ depends on ARCH_MESON || COMPILE_TEST
+ depends on PM && OF
+ default ARCH_MESON
+ select PM_GENERIC_DOMAINS
+ select PM_GENERIC_DOMAINS_OF
+ help
+ Say yes to expose Amlogic Meson GX Power Domains as
+ Generic Power Domains.
+
+config MESON_MX_SOCINFO
+ bool "Amlogic Meson MX SoC Information driver"
+ depends on ARCH_MESON || COMPILE_TEST
+ default ARCH_MESON
+ select SOC_BUS
+ help
+ Say yes to support decoding of Amlogic Meson6, Meson8,
+ Meson8b and Meson8m2 SoC family information about the type
+ and version.
+
endmenu
obj-$(CONFIG_MESON_GX_SOCINFO) += meson-gx-socinfo.o
+obj-$(CONFIG_MESON_GX_PM_DOMAINS) += meson-gx-pwrc-vpu.o
+obj-$(CONFIG_MESON_MX_SOCINFO) += meson-mx-socinfo.o
--- /dev/null
+/*
+ * Copyright (c) 2017 BayLibre, SAS
+ * Author: Neil Armstrong <narmstrong@baylibre.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0+
+ */
+
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
+#include <linux/bitfield.h>
+#include <linux/regmap.h>
+#include <linux/mfd/syscon.h>
+#include <linux/reset.h>
+#include <linux/clk.h>
+
+/* AO Offsets */
+
+#define AO_RTI_GEN_PWR_SLEEP0 (0x3a << 2)
+
+#define GEN_PWR_VPU_HDMI BIT(8)
+#define GEN_PWR_VPU_HDMI_ISO BIT(9)
+
+/* HHI Offsets */
+
+#define HHI_MEM_PD_REG0 (0x40 << 2)
+#define HHI_VPU_MEM_PD_REG0 (0x41 << 2)
+#define HHI_VPU_MEM_PD_REG1 (0x42 << 2)
+
+struct meson_gx_pwrc_vpu {
+ struct generic_pm_domain genpd;
+ struct regmap *regmap_ao;
+ struct regmap *regmap_hhi;
+ struct reset_control *rstc;
+ struct clk *vpu_clk;
+ struct clk *vapb_clk;
+ bool powered;
+};
+
+static inline
+struct meson_gx_pwrc_vpu *genpd_to_pd(struct generic_pm_domain *d)
+{
+ return container_of(d, struct meson_gx_pwrc_vpu, genpd);
+}
+
+static int meson_gx_pwrc_vpu_power_off(struct generic_pm_domain *genpd)
+{
+ struct meson_gx_pwrc_vpu *pd = genpd_to_pd(genpd);
+ int i;
+
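+	/* Raise the isolation barrier before removing power from the domain. */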
+ regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
+ GEN_PWR_VPU_HDMI_ISO, GEN_PWR_VPU_HDMI_ISO);
+ udelay(20);
+
+ /* Power Down Memories */
+ for (i = 0; i < 32; i += 2) {
+ regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG0,
+ 0x2 << i, 0x3 << i);
+ udelay(5);
+ }
+ for (i = 0; i < 32; i += 2) {
+ regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG1,
+ 0x2 << i, 0x3 << i);
+ udelay(5);
+ }
+ for (i = 8; i < 16; i++) {
+ regmap_update_bits(pd->regmap_hhi, HHI_MEM_PD_REG0,
+ BIT(i), BIT(i));
+ udelay(5);
+ }
+ udelay(20);
+
+ regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
+ GEN_PWR_VPU_HDMI, GEN_PWR_VPU_HDMI);
+
+ msleep(20);
+
+ clk_disable_unprepare(pd->vpu_clk);
+ clk_disable_unprepare(pd->vapb_clk);
+
+ pd->powered = false;
+
+ return 0;
+}
+
+static int meson_gx_pwrc_vpu_setup_clk(struct meson_gx_pwrc_vpu *pd)
+{
+ int ret;
+
+ ret = clk_prepare_enable(pd->vpu_clk);
+ if (ret)
+ return ret;
+
+ return clk_prepare_enable(pd->vapb_clk);
+}
+
+static int meson_gx_pwrc_vpu_power_on(struct generic_pm_domain *genpd)
+{
+ struct meson_gx_pwrc_vpu *pd = genpd_to_pd(genpd);
+ int ret;
+ int i;
+
+ regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
+ GEN_PWR_VPU_HDMI, 0);
+ udelay(20);
+
+ /* Power Up Memories */
+ for (i = 0; i < 32; i += 2) {
+ regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG0,
+ 0x2 << i, 0);
+ udelay(5);
+ }
+
+ for (i = 0; i < 32; i += 2) {
+ regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG1,
+ 0x2 << i, 0);
+ udelay(5);
+ }
+
+ for (i = 8; i < 16; i++) {
+ regmap_update_bits(pd->regmap_hhi, HHI_MEM_PD_REG0,
+ BIT(i), 0);
+ udelay(5);
+ }
+ udelay(20);
+
+ ret = reset_control_assert(pd->rstc);
+ if (ret)
+ return ret;
+
+ regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
+ GEN_PWR_VPU_HDMI_ISO, 0);
+
+ ret = reset_control_deassert(pd->rstc);
+ if (ret)
+ return ret;
+
+ ret = meson_gx_pwrc_vpu_setup_clk(pd);
+ if (ret)
+ return ret;
+
+ pd->powered = true;
+
+ return 0;
+}
+
+static bool meson_gx_pwrc_vpu_get_power(struct meson_gx_pwrc_vpu *pd)
+{
+ u32 reg;
+
+	regmap_read(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0, &reg);
+
+ return (reg & GEN_PWR_VPU_HDMI);
+}
+
+static struct meson_gx_pwrc_vpu vpu_hdmi_pd = {
+ .genpd = {
+ .name = "vpu_hdmi",
+ .power_off = meson_gx_pwrc_vpu_power_off,
+ .power_on = meson_gx_pwrc_vpu_power_on,
+ },
+};
+
+static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev)
+{
+ struct regmap *regmap_ao, *regmap_hhi;
+ struct reset_control *rstc;
+ struct clk *vpu_clk;
+ struct clk *vapb_clk;
+
+ regmap_ao = syscon_node_to_regmap(of_get_parent(pdev->dev.of_node));
+ if (IS_ERR(regmap_ao)) {
+ dev_err(&pdev->dev, "failed to get regmap\n");
+ return PTR_ERR(regmap_ao);
+ }
+
+ regmap_hhi = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ "amlogic,hhi-sysctrl");
+ if (IS_ERR(regmap_hhi)) {
+ dev_err(&pdev->dev, "failed to get HHI regmap\n");
+ return PTR_ERR(regmap_hhi);
+ }
+
+ rstc = devm_reset_control_array_get(&pdev->dev, false, false);
+ if (IS_ERR(rstc)) {
+ dev_err(&pdev->dev, "failed to get reset lines\n");
+ return PTR_ERR(rstc);
+ }
+
+ vpu_clk = devm_clk_get(&pdev->dev, "vpu");
+ if (IS_ERR(vpu_clk)) {
+ dev_err(&pdev->dev, "vpu clock request failed\n");
+ return PTR_ERR(vpu_clk);
+ }
+
+ vapb_clk = devm_clk_get(&pdev->dev, "vapb");
+ if (IS_ERR(vapb_clk)) {
+ dev_err(&pdev->dev, "vapb clock request failed\n");
+ return PTR_ERR(vapb_clk);
+ }
+
+ vpu_hdmi_pd.regmap_ao = regmap_ao;
+ vpu_hdmi_pd.regmap_hhi = regmap_hhi;
+ vpu_hdmi_pd.rstc = rstc;
+ vpu_hdmi_pd.vpu_clk = vpu_clk;
+ vpu_hdmi_pd.vapb_clk = vapb_clk;
+
+ pm_genpd_init(&vpu_hdmi_pd.genpd, &simple_qos_governor,
+ meson_gx_pwrc_vpu_get_power(&vpu_hdmi_pd));
+
+ return of_genpd_add_provider_simple(pdev->dev.of_node,
+ &vpu_hdmi_pd.genpd);
+}
+
+static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev)
+{
+ if (vpu_hdmi_pd.powered)
+ meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd);
+}
+
+static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = {
+ { .compatible = "amlogic,meson-gx-pwrc-vpu" },
+ { /* sentinel */ }
+};
+
+static struct platform_driver meson_gx_pwrc_vpu_driver = {
+ .probe = meson_gx_pwrc_vpu_probe,
+ .shutdown = meson_gx_pwrc_vpu_shutdown,
+ .driver = {
+ .name = "meson_gx_pwrc_vpu",
+ .of_match_table = meson_gx_pwrc_vpu_match_table,
+ },
+};
+builtin_platform_driver(meson_gx_pwrc_vpu_driver);
--- /dev/null
+/*
+ * Copyright (c) 2017 Martin Blumenstingl <martin.blumenstingl@googlemail.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0+
+ */
+
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/sys_soc.h>
+#include <linux/bitfield.h>
+#include <linux/regmap.h>
+#include <linux/mfd/syscon.h>
+
+#define MESON_SOCINFO_MAJOR_VER_MESON6 0x16
+#define MESON_SOCINFO_MAJOR_VER_MESON8 0x19
+#define MESON_SOCINFO_MAJOR_VER_MESON8B 0x1b
+
+#define MESON_MX_ASSIST_HW_REV 0x14c
+
+#define MESON_MX_ANALOG_TOP_METAL_REVISION 0x0
+
+#define MESON_MX_BOOTROM_MISC_VER 0x4
+
+static const char *meson_mx_socinfo_revision(unsigned int major_ver,
+ unsigned int misc_ver,
+ unsigned int metal_rev)
+{
+ unsigned int minor_ver;
+
+ switch (major_ver) {
+ case MESON_SOCINFO_MAJOR_VER_MESON6:
+ minor_ver = 0xa;
+ break;
+
+ case MESON_SOCINFO_MAJOR_VER_MESON8:
+ if (metal_rev == 0x11111112)
+ major_ver = 0x1d;
+
+ if (metal_rev == 0x11111111 || metal_rev == 0x11111112)
+ minor_ver = 0xa;
+ else if (metal_rev == 0x11111113)
+ minor_ver = 0xb;
+ else if (metal_rev == 0x11111133)
+ minor_ver = 0xc;
+ else
+ minor_ver = 0xd;
+
+ break;
+
+ case MESON_SOCINFO_MAJOR_VER_MESON8B:
+ if (metal_rev == 0x11111111)
+ minor_ver = 0xa;
+ else
+ minor_ver = 0xb;
+
+ break;
+
+ default:
+ minor_ver = 0x0;
+ break;
+ }
+
+ return kasprintf(GFP_KERNEL, "Rev%X (%x - 0:%X)", minor_ver, major_ver,
+ misc_ver);
+}
+
+static const char *meson_mx_socinfo_soc_id(unsigned int major_ver,
+ unsigned int metal_rev)
+{
+ const char *soc_id;
+
+ switch (major_ver) {
+ case MESON_SOCINFO_MAJOR_VER_MESON6:
+ soc_id = "Meson6 (AML8726-MX)";
+ break;
+
+ case MESON_SOCINFO_MAJOR_VER_MESON8:
+ if (metal_rev == 0x11111112)
+ soc_id = "Meson8m2 (S812)";
+ else
+ soc_id = "Meson8 (S802)";
+
+ break;
+
+ case MESON_SOCINFO_MAJOR_VER_MESON8B:
+ soc_id = "Meson8b (S805)";
+ break;
+
+ default:
+ soc_id = "Unknown";
+ break;
+ }
+
+ return kstrdup_const(soc_id, GFP_KERNEL);
+}
+
+static const struct of_device_id meson_mx_socinfo_analog_top_ids[] = {
+ { .compatible = "amlogic,meson8-analog-top", },
+ { .compatible = "amlogic,meson8b-analog-top", },
+ { /* sentinel */ }
+};
+
+int __init meson_mx_socinfo_init(void)
+{
+ struct soc_device_attribute *soc_dev_attr;
+ struct soc_device *soc_dev;
+ struct device_node *np;
+ struct regmap *assist_regmap, *bootrom_regmap, *analog_top_regmap;
+ unsigned int major_ver, misc_ver, metal_rev = 0;
+ int ret;
+
+ assist_regmap =
+ syscon_regmap_lookup_by_compatible("amlogic,meson-mx-assist");
+ if (IS_ERR(assist_regmap))
+ return PTR_ERR(assist_regmap);
+
+ bootrom_regmap =
+ syscon_regmap_lookup_by_compatible("amlogic,meson-mx-bootrom");
+ if (IS_ERR(bootrom_regmap))
+ return PTR_ERR(bootrom_regmap);
+
+ np = of_find_matching_node(NULL, meson_mx_socinfo_analog_top_ids);
+ if (np) {
+ analog_top_regmap = syscon_node_to_regmap(np);
+ if (IS_ERR(analog_top_regmap))
+ return PTR_ERR(analog_top_regmap);
+
+ ret = regmap_read(analog_top_regmap,
+ MESON_MX_ANALOG_TOP_METAL_REVISION,
+ &metal_rev);
+ if (ret)
+ return ret;
+ }
+
+ ret = regmap_read(assist_regmap, MESON_MX_ASSIST_HW_REV, &major_ver);
+ if (ret < 0)
+ return ret;
+
+ ret = regmap_read(bootrom_regmap, MESON_MX_BOOTROM_MISC_VER,
+ &misc_ver);
+ if (ret < 0)
+ return ret;
+
+ soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
+ if (!soc_dev_attr)
+ return -ENODEV;
+
+ soc_dev_attr->family = "Amlogic Meson";
+
+ np = of_find_node_by_path("/");
+ of_property_read_string(np, "model", &soc_dev_attr->machine);
+ of_node_put(np);
+
+ soc_dev_attr->revision = meson_mx_socinfo_revision(major_ver, misc_ver,
+ metal_rev);
+ soc_dev_attr->soc_id = meson_mx_socinfo_soc_id(major_ver, metal_rev);
+
+ soc_dev = soc_device_register(soc_dev_attr);
+ if (IS_ERR(soc_dev)) {
+ kfree_const(soc_dev_attr->revision);
+ kfree_const(soc_dev_attr->soc_id);
+ kfree(soc_dev_attr);
+ return PTR_ERR(soc_dev);
+ }
+
+ dev_info(soc_device_to_device(soc_dev), "Amlogic %s %s detected\n",
+ soc_dev_attr->soc_id, soc_dev_attr->revision);
+
+ return 0;
+}
+device_initcall(meson_mx_socinfo_init);
"sama5d21", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D22CU_EXID_MATCH,
"sama5d22", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D225C_D1M_EXID_MATCH,
+ "sama5d225c 16MiB SiP", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D23CU_EXID_MATCH,
"sama5d23", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D24CX_EXID_MATCH,
"sama5d27", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27CN_EXID_MATCH,
"sama5d27", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D1G_EXID_MATCH,
+ "sama5d27c 128MiB SiP", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D5M_EXID_MATCH,
+ "sama5d27c 64MiB SiP", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CU_EXID_MATCH,
"sama5d28", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CN_EXID_MATCH,
"sama5d28", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28C_D1G_EXID_MATCH,
+ "sama5d28c 128MiB SiP", "sama5d2"),
AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D31_EXID_MATCH,
"sama5d31", "sama5d3"),
AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D33_EXID_MATCH,
#define SAMA5D2_CIDR_MATCH 0x0a5c08c0
#define SAMA5D21CU_EXID_MATCH 0x0000005a
+#define SAMA5D225C_D1M_EXID_MATCH 0x00000053
#define SAMA5D22CU_EXID_MATCH 0x00000059
#define SAMA5D22CN_EXID_MATCH 0x00000069
#define SAMA5D23CU_EXID_MATCH 0x00000058
#define SAMA5D24CX_EXID_MATCH 0x00000004
#define SAMA5D24CU_EXID_MATCH 0x00000014
#define SAMA5D26CU_EXID_MATCH 0x00000012
+#define SAMA5D27C_D1G_EXID_MATCH 0x00000033
+#define SAMA5D27C_D5M_EXID_MATCH 0x00000032
#define SAMA5D27CU_EXID_MATCH 0x00000011
#define SAMA5D27CN_EXID_MATCH 0x00000021
+#define SAMA5D28C_D1G_EXID_MATCH 0x00000013
#define SAMA5D28CU_EXID_MATCH 0x00000010
#define SAMA5D28CN_EXID_MATCH 0x00000020
If unsure, say N.
+source "drivers/soc/bcm/brcmstb/Kconfig"
+
endmenu
--- /dev/null
+if SOC_BRCMSTB
+
+config BRCMSTB_PM
+ bool "Support suspend/resume for STB platforms"
+ default y
+ depends on PM
+ depends on ARCH_BRCMSTB || BMIPS_GENERIC
+ select ARM_CPU_SUSPEND if ARM
+
+endif # SOC_BRCMSTB
obj-y += common.o biuctrl.o
+obj-$(CONFIG_BRCMSTB_PM) += pm/
--- /dev/null
+obj-$(CONFIG_ARM) += s2-arm.o pm-arm.o
+AFLAGS_s2-arm.o := -march=armv7-a
+obj-$(CONFIG_BMIPS_GENERIC) += s2-mips.o s3-mips.o pm-mips.o
--- /dev/null
+/*
+ * Always ON (AON) register interface between bootloader and Linux
+ *
+ * Copyright © 2014-2017 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __BRCMSTB_AON_DEFS_H__
+#define __BRCMSTB_AON_DEFS_H__
+
+#include <linux/compiler.h>
+
+/* Magic number in upper 16-bits */
+#define BRCMSTB_S3_MAGIC_MASK 0xffff0000
+#define BRCMSTB_S3_MAGIC_SHORT 0x5AFE0000
+
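+/* Flags handed to the bootloader in the low bits of AON_REG_MAGIC_FLAGS. */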
+enum {
+ /* Restore random key for AES memory verification (off = fixed key) */
+ S3_FLAG_LOAD_RANDKEY = (1 << 0),
+
+ /* Scratch buffer page table is present */
+ S3_FLAG_SCRATCH_BUFFER_TABLE = (1 << 1),
+
+ /* Skip all memory verification */
+ S3_FLAG_NO_MEM_VERIFY = (1 << 2),
+
+ /*
+ * Modification of this bit reserved for bootloader only.
+ * 1=PSCI started Linux, 0=Direct jump to Linux.
+ */
+ S3_FLAG_PSCI_BOOT = (1 << 3),
+
+ /*
+ * Modification of this bit reserved for bootloader only.
+ * 1=64 bit boot, 0=32 bit boot.
+ */
+ S3_FLAG_BOOTED64 = (1 << 4),
+};
+
+#define BRCMSTB_HASH_LEN (128 / 8) /* 128-bit hash */
+
+#define AON_REG_MAGIC_FLAGS 0x00
+#define AON_REG_CONTROL_LOW 0x04
+#define AON_REG_CONTROL_HIGH 0x08
+#define AON_REG_S3_HASH 0x0c /* hash of S3 params */
+#define AON_REG_CONTROL_HASH_LEN 0x1c
+#define AON_REG_PANIC 0x20
+
+#define BRCMSTB_S3_MAGIC 0x5AFEB007
+#define BRCMSTB_PANIC_MAGIC 0x512E115E
+#define BOOTLOADER_SCRATCH_SIZE 64
+#define BRCMSTB_DTU_STATE_MAP_ENTRIES (8*1024)
+#define BRCMSTB_DTU_CONFIG_ENTRIES (512)
+#define BRCMSTB_DTU_COUNT (2)
+
+#define IMAGE_DESCRIPTORS_BUFSIZE (2 * 1024)
+#define S3_BOOTLOADER_RESERVED (S3_FLAG_PSCI_BOOT | S3_FLAG_BOOTED64)
+
+struct brcmstb_bootloader_dtu_table {
+ uint32_t dtu_state_map[BRCMSTB_DTU_STATE_MAP_ENTRIES];
+ uint32_t dtu_config[BRCMSTB_DTU_CONFIG_ENTRIES];
+};
+
+/*
+ * Bootloader utilizes a custom parameter block left in DRAM for handling S3
+ * warm resume
+ */
+struct brcmstb_s3_params {
+ /* scratch memory for bootloader */
+ uint8_t scratch[BOOTLOADER_SCRATCH_SIZE];
+
+ uint32_t magic; /* BRCMSTB_S3_MAGIC */
+ uint64_t reentry; /* PA */
+
+ /* descriptors */
+ uint32_t hash[BRCMSTB_HASH_LEN / 4];
+
+ /*
+ * If 0, then ignore this parameter (there is only one set of
+ * descriptors)
+ *
+ * If non-0, then a second set of descriptors is stored at:
+ *
+ * descriptors + desc_offset_2
+ *
+ * The MAC result of both descriptors is XOR'd and stored in @hash
+ */
+ uint32_t desc_offset_2;
+
+ /*
+ * (Physical) address of a brcmstb_bootloader_scratch_table, for
+ * providing a large DRAM buffer to the bootloader
+ */
+ uint64_t buffer_table;
+
+ uint32_t spare[70];
+
+ uint8_t descriptors[IMAGE_DESCRIPTORS_BUFSIZE];
+ /*
+ * Must be last member of struct. See brcmstb_pm_s3_finish() for reason.
+ */
+ struct brcmstb_bootloader_dtu_table dtu[BRCMSTB_DTU_COUNT];
+} __packed;
+
+#endif /* __BRCMSTB_AON_DEFS_H__ */
--- /dev/null
+/*
+ * ARM-specific support for Broadcom STB S2/S3/S5 power management
+ *
+ * S2: clock gate CPUs and as many peripherals as possible
+ * S3: power off all of the chip except the Always ON (AON) island; keep DDR in
+ *     self-refresh
+ * S5: (a.k.a. S3 cold boot) much like S3, except DDR is powered down, so we
+ * treat this mode like a soft power-off, with wakeup allowed from AON
+ *
+ * Copyright © 2014-2017 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt) "brcmstb-pm: " fmt
+
+#include <linux/bitops.h>
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/ioport.h>
+#include <linux/kconfig.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/module.h>
+#include <linux/notifier.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/pm.h>
+#include <linux/printk.h>
+#include <linux/proc_fs.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/sort.h>
+#include <linux/suspend.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <linux/soc/brcmstb/brcmstb.h>
+
+#include <asm/fncpy.h>
+#include <asm/setup.h>
+#include <asm/suspend.h>
+
+#include "pm.h"
+#include "aon_defs.h"
+
+#define SHIMPHY_DDR_PAD_CNTRL 0x8c
+
+/* Method #0 */
+#define SHIMPHY_PAD_PLL_SEQUENCE BIT(8)
+#define SHIMPHY_PAD_GATE_PLL_S3 BIT(9)
+
+/* Method #1 */
+#define PWRDWN_SEQ_NO_SEQUENCING 0
+#define PWRDWN_SEQ_HOLD_CHANNEL 1
+#define PWRDWN_SEQ_RESET_PLL 2
+#define PWRDWN_SEQ_POWERDOWN_PLL 3
+
+#define SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK 0x00f00000
+#define SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT 20
+
+#define DDR_FORCE_CKE_RST_N BIT(3)
+#define DDR_PHY_RST_N BIT(2)
+#define DDR_PHY_CKE BIT(1)
+
+#define DDR_PHY_NO_CHANNEL 0xffffffff
+
+#define MAX_NUM_MEMC 3
+
+struct brcmstb_memc {
+ void __iomem *ddr_phy_base;
+ void __iomem *ddr_shimphy_base;
+ void __iomem *ddr_ctrl;
+};
+
+struct brcmstb_pm_control {
+ void __iomem *aon_ctrl_base;
+ void __iomem *aon_sram;
+ struct brcmstb_memc memcs[MAX_NUM_MEMC];
+
+ void __iomem *boot_sram;
+ size_t boot_sram_len;
+
+ bool support_warm_boot;
+ size_t pll_status_offset;
+ int num_memc;
+
+ struct brcmstb_s3_params *s3_params;
+ dma_addr_t s3_params_pa;
+ int s3entry_method;
+ u32 warm_boot_offset;
+ u32 phy_a_standby_ctrl_offs;
+ u32 phy_b_standby_ctrl_offs;
+ bool needs_ddr_pad;
+ struct platform_device *pdev;
+};
+
+enum bsp_initiate_command {
+ BSP_CLOCK_STOP = 0x00,
+ BSP_GEN_RANDOM_KEY = 0x4A,
+ BSP_RESTORE_RANDOM_KEY = 0x55,
+ BSP_GEN_FIXED_KEY = 0x63,
+};
+
+#define PM_INITIATE 0x01
+#define PM_INITIATE_SUCCESS 0x00
+#define PM_INITIATE_FAIL 0xfe
+
+static struct brcmstb_pm_control ctrl;
+
+static int (*brcmstb_pm_do_s2_sram)(void __iomem *aon_ctrl_base,
+ void __iomem *ddr_phy_pll_status);
+
+static int brcmstb_init_sram(struct device_node *dn)
+{
+ void __iomem *sram;
+ struct resource res;
+ int ret;
+
+ ret = of_address_to_resource(dn, 0, &res);
+ if (ret)
+ return ret;
+
+ /* Uncached, executable remapping of SRAM */
+ sram = __arm_ioremap_exec(res.start, resource_size(&res), false);
+ if (!sram)
+ return -ENOMEM;
+
+ ctrl.boot_sram = sram;
+ ctrl.boot_sram_len = resource_size(&res);
+
+ return 0;
+}
+
+static const struct of_device_id sram_dt_ids[] = {
+ { .compatible = "mmio-sram" },
+ { /* sentinel */ }
+};
+
+static int do_bsp_initiate_command(enum bsp_initiate_command cmd)
+{
+ void __iomem *base = ctrl.aon_ctrl_base;
+ int ret;
+ int timeo = 1000 * 1000; /* 1 second */
+
+ writel_relaxed(0, base + AON_CTRL_PM_INITIATE);
+ (void)readl_relaxed(base + AON_CTRL_PM_INITIATE);
+
+ /* Go! */
+ writel_relaxed((cmd << 1) | PM_INITIATE, base + AON_CTRL_PM_INITIATE);
+
+ /*
+ * If firmware doesn't support the 'ack', then just assume it's done
+	 * after 10ms. Note that this only works for command 0, BSP_CLOCK_STOP.
+ */
+ if (of_machine_is_compatible("brcm,bcm74371a0")) {
+ (void)readl_relaxed(base + AON_CTRL_PM_INITIATE);
+ mdelay(10);
+ return 0;
+ }
+
+ for (;;) {
+ ret = readl_relaxed(base + AON_CTRL_PM_INITIATE);
+ if (!(ret & PM_INITIATE))
+ break;
+ if (timeo <= 0) {
+ pr_err("error: timeout waiting for BSP (%x)\n", ret);
+ break;
+ }
+ timeo -= 50;
+ udelay(50);
+ }
+
+ return (ret & 0xff) != PM_INITIATE_SUCCESS;
+}
+
+static int brcmstb_pm_handshake(void)
+{
+ void __iomem *base = ctrl.aon_ctrl_base;
+ u32 tmp;
+ int ret;
+
+ /* BSP power handshake, v1 */
+ tmp = readl_relaxed(base + AON_CTRL_HOST_MISC_CMDS);
+ tmp &= ~1UL;
+ writel_relaxed(tmp, base + AON_CTRL_HOST_MISC_CMDS);
+ (void)readl_relaxed(base + AON_CTRL_HOST_MISC_CMDS);
+
+ ret = do_bsp_initiate_command(BSP_CLOCK_STOP);
+ if (ret)
+ pr_err("BSP handshake failed\n");
+
+ /*
+ * HACK: BSP may have internal race on the CLOCK_STOP command.
+ * Avoid touching the BSP for a few milliseconds.
+ */
+ mdelay(3);
+
+ return ret;
+}
+
+static inline void shimphy_set(u32 value, u32 mask)
+{
+ int i;
+
+ if (!ctrl.needs_ddr_pad)
+ return;
+
+ for (i = 0; i < ctrl.num_memc; i++) {
+ u32 tmp;
+
+ tmp = readl_relaxed(ctrl.memcs[i].ddr_shimphy_base +
+ SHIMPHY_DDR_PAD_CNTRL);
+ tmp = value | (tmp & mask);
+ writel_relaxed(tmp, ctrl.memcs[i].ddr_shimphy_base +
+ SHIMPHY_DDR_PAD_CNTRL);
+ }
+ wmb(); /* Complete sequence in order. */
+}
+
+static inline void ddr_ctrl_set(bool warmboot)
+{
+ int i;
+
+ for (i = 0; i < ctrl.num_memc; i++) {
+ u32 tmp;
+
+ tmp = readl_relaxed(ctrl.memcs[i].ddr_ctrl +
+ ctrl.warm_boot_offset);
+ if (warmboot)
+ tmp |= 1;
+ else
+ tmp &= ~1; /* Cold boot */
+ writel_relaxed(tmp, ctrl.memcs[i].ddr_ctrl +
+ ctrl.warm_boot_offset);
+ }
+ /* Complete sequence in order */
+ wmb();
+}
+
+static inline void s3entry_method0(void)
+{
+ shimphy_set(SHIMPHY_PAD_GATE_PLL_S3 | SHIMPHY_PAD_PLL_SEQUENCE,
+ 0xffffffff);
+}
+
+static inline void s3entry_method1(void)
+{
+ /*
+ * S3 Entry Sequence
+ * -----------------
+ * Step 1: SHIMPHY_ADDR_CNTL_0_DDR_PAD_CNTRL [ S3_PWRDWN_SEQ ] = 3
+ * Step 2: MEMC_DDR_0_WARM_BOOT [ WARM_BOOT ] = 1
+ */
+ shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL <<
+ SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
+ ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
+
+ ddr_ctrl_set(true);
+}
+
+static inline void s5entry_method1(void)
+{
+ int i;
+
+ /*
+ * S5 Entry Sequence
+ * -----------------
+ * Step 1: SHIMPHY_ADDR_CNTL_0_DDR_PAD_CNTRL [ S3_PWRDWN_SEQ ] = 3
+ * Step 2: MEMC_DDR_0_WARM_BOOT [ WARM_BOOT ] = 0
+ * Step 3: DDR_PHY_CONTROL_REGS_[AB]_0_STANDBY_CONTROL[ CKE ] = 0
+ * DDR_PHY_CONTROL_REGS_[AB]_0_STANDBY_CONTROL[ RST_N ] = 0
+ */
+ shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL <<
+ SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
+ ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
+
+ ddr_ctrl_set(false);
+
+ for (i = 0; i < ctrl.num_memc; i++) {
+ u32 tmp;
+
+ /* Step 3: Channel A (RST_N = CKE = 0) */
+ tmp = readl_relaxed(ctrl.memcs[i].ddr_phy_base +
+ ctrl.phy_a_standby_ctrl_offs);
+		tmp &= ~(DDR_PHY_RST_N | DDR_PHY_CKE);
+ writel_relaxed(tmp, ctrl.memcs[i].ddr_phy_base +
+ ctrl.phy_a_standby_ctrl_offs);
+
+ /* Step 3: Channel B? */
+ if (ctrl.phy_b_standby_ctrl_offs != DDR_PHY_NO_CHANNEL) {
+ tmp = readl_relaxed(ctrl.memcs[i].ddr_phy_base +
+ ctrl.phy_b_standby_ctrl_offs);
+			tmp &= ~(DDR_PHY_RST_N | DDR_PHY_CKE);
+ writel_relaxed(tmp, ctrl.memcs[i].ddr_phy_base +
+ ctrl.phy_b_standby_ctrl_offs);
+ }
+ }
+ /* Must complete */
+ wmb();
+}
+
+/*
+ * Run a Power Management State Machine (PMSM) shutdown command and put the CPU
+ * into a low-power mode
+ */
+static void brcmstb_do_pmsm_power_down(unsigned long base_cmd, bool onewrite)
+{
+ void __iomem *base = ctrl.aon_ctrl_base;
+
+ if ((ctrl.s3entry_method == 1) && (base_cmd == PM_COLD_CONFIG))
+ s5entry_method1();
+
+ /* pm_start_pwrdn transition 0->1 */
+ writel_relaxed(base_cmd, base + AON_CTRL_PM_CTRL);
+
+ if (!onewrite) {
+ (void)readl_relaxed(base + AON_CTRL_PM_CTRL);
+
+ writel_relaxed(base_cmd | PM_PWR_DOWN, base + AON_CTRL_PM_CTRL);
+ (void)readl_relaxed(base + AON_CTRL_PM_CTRL);
+ }
+ wfi();
+}
+
+/* Support S5 cold boot out of "poweroff" */
+static void brcmstb_pm_poweroff(void)
+{
+ brcmstb_pm_handshake();
+
+ /* Clear magic S3 warm-boot value */
+ writel_relaxed(0, ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
+ (void)readl_relaxed(ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
+
+ /* Skip wait-for-interrupt signal; just use a countdown */
+ writel_relaxed(0x10, ctrl.aon_ctrl_base + AON_CTRL_PM_CPU_WAIT_COUNT);
+ (void)readl_relaxed(ctrl.aon_ctrl_base + AON_CTRL_PM_CPU_WAIT_COUNT);
+
+ if (ctrl.s3entry_method == 1) {
+ shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL <<
+ SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
+ ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
+ ddr_ctrl_set(false);
+ brcmstb_do_pmsm_power_down(M1_PM_COLD_CONFIG, true);
+ return; /* We should never actually get here */
+ }
+
+ brcmstb_do_pmsm_power_down(PM_COLD_CONFIG, false);
+}
+
+static void *brcmstb_pm_copy_to_sram(void *fn, size_t len)
+{
+ unsigned int size = ALIGN(len, FNCPY_ALIGN);
+
+ if (ctrl.boot_sram_len < size) {
+ pr_err("standby code will not fit in SRAM\n");
+ return NULL;
+ }
+
+ return fncpy(ctrl.boot_sram, fn, size);
+}
+
+/*
+ * S2 suspend/resume picks up where we left off, so we must execute carefully
+ * from SRAM, in order to allow DDR to come back up safely before we continue.
+ */
+static int brcmstb_pm_s2(void)
+{
+	/* A previous S3 may have left settings hazardous to S2, so reset them. */
+ if (ctrl.s3entry_method == 1) {
+ shimphy_set((PWRDWN_SEQ_NO_SEQUENCING <<
+ SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
+ ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
+ ddr_ctrl_set(false);
+ }
+
+ brcmstb_pm_do_s2_sram = brcmstb_pm_copy_to_sram(&brcmstb_pm_do_s2,
+ brcmstb_pm_do_s2_sz);
+ if (!brcmstb_pm_do_s2_sram)
+ return -EINVAL;
+
+ return brcmstb_pm_do_s2_sram(ctrl.aon_ctrl_base,
+ ctrl.memcs[0].ddr_phy_base +
+ ctrl.pll_status_offset);
+}
+
+/*
+ * This function is called on a new stack, so don't allow inlining (which will
+ * generate stack references on the old stack). It cannot be made static
+ * because it is referenced from the inline asm in brcmstb_pm_do_s3().
+ */
+noinline int brcmstb_pm_s3_finish(void)
+{
+ struct brcmstb_s3_params *params = ctrl.s3_params;
+ dma_addr_t params_pa = ctrl.s3_params_pa;
+ phys_addr_t reentry = virt_to_phys(&cpu_resume);
+ enum bsp_initiate_command cmd;
+ u32 flags;
+
+ /*
+ * Clear parameter structure, but not DTU area, which has already been
+	 * filled in. We know the DTU is at the end, so we can just subtract its
+ * size.
+ */
+ memset(params, 0, sizeof(*params) - sizeof(params->dtu));
+
+ flags = readl_relaxed(ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
+
+ flags &= S3_BOOTLOADER_RESERVED;
+ flags |= S3_FLAG_NO_MEM_VERIFY;
+ flags |= S3_FLAG_LOAD_RANDKEY;
+
+ /* Load random / fixed key */
+ if (flags & S3_FLAG_LOAD_RANDKEY)
+ cmd = BSP_GEN_RANDOM_KEY;
+ else
+ cmd = BSP_GEN_FIXED_KEY;
+ if (do_bsp_initiate_command(cmd)) {
+ pr_info("key loading failed\n");
+ return -EIO;
+ }
+
+ params->magic = BRCMSTB_S3_MAGIC;
+ params->reentry = reentry;
+
+ /* No more writes to DRAM */
+ flush_cache_all();
+
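+	/* The magic occupies the upper 16 bits; the flags sit below it. */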
+ flags |= BRCMSTB_S3_MAGIC_SHORT;
+
+ writel_relaxed(flags, ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
+ writel_relaxed(lower_32_bits(params_pa),
+ ctrl.aon_sram + AON_REG_CONTROL_LOW);
+ writel_relaxed(upper_32_bits(params_pa),
+ ctrl.aon_sram + AON_REG_CONTROL_HIGH);
+
+ switch (ctrl.s3entry_method) {
+ case 0:
+ s3entry_method0();
+ brcmstb_do_pmsm_power_down(PM_WARM_CONFIG, false);
+ break;
+ case 1:
+ s3entry_method1();
+ brcmstb_do_pmsm_power_down(M1_PM_WARM_CONFIG, true);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* Must have been interrupted from wfi()? */
+ return -EINTR;
+}
+
+static int brcmstb_pm_do_s3(unsigned long sp)
+{
+ unsigned long save_sp;
+ int ret;
+
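+	/*
+	 * Run brcmstb_pm_s3_finish() on the SRAM stack passed in 'sp',
+	 * since DRAM is about to become unavailable.
+	 */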
+ asm volatile (
+ "mov %[save], sp\n"
+ "mov sp, %[new]\n"
+ "bl brcmstb_pm_s3_finish\n"
+ "mov %[ret], r0\n"
+ "mov %[new], sp\n"
+ "mov sp, %[save]\n"
+ : [save] "=&r" (save_sp), [ret] "=&r" (ret)
+ : [new] "r" (sp)
+ );
+
+ return ret;
+}
+
+static int brcmstb_pm_s3(void)
+{
+ void __iomem *sp = ctrl.boot_sram + ctrl.boot_sram_len;
+
+ return cpu_suspend((unsigned long)sp, brcmstb_pm_do_s3);
+}
+
+static int brcmstb_pm_standby(bool deep_standby)
+{
+ int ret;
+
+ if (brcmstb_pm_handshake())
+ return -EIO;
+
+ if (deep_standby)
+ ret = brcmstb_pm_s3();
+ else
+ ret = brcmstb_pm_s2();
+ if (ret)
+ pr_err("%s: standby failed\n", __func__);
+
+ return ret;
+}
+
+static int brcmstb_pm_enter(suspend_state_t state)
+{
+ int ret = -EINVAL;
+
+ switch (state) {
+ case PM_SUSPEND_STANDBY:
+ ret = brcmstb_pm_standby(false);
+ break;
+ case PM_SUSPEND_MEM:
+ ret = brcmstb_pm_standby(true);
+ break;
+ }
+
+ return ret;
+}
+
+static int brcmstb_pm_valid(suspend_state_t state)
+{
+ switch (state) {
+ case PM_SUSPEND_STANDBY:
+ return true;
+ case PM_SUSPEND_MEM:
+ return ctrl.support_warm_boot;
+ default:
+ return false;
+ }
+}
+
+static const struct platform_suspend_ops brcmstb_pm_ops = {
+ .enter = brcmstb_pm_enter,
+ .valid = brcmstb_pm_valid,
+};
+
+static const struct of_device_id aon_ctrl_dt_ids[] = {
+ { .compatible = "brcm,brcmstb-aon-ctrl" },
+ {}
+};
+
+struct ddr_phy_ofdata {
+ bool supports_warm_boot;
+ size_t pll_status_offset;
+ int s3entry_method;
+ u32 warm_boot_offset;
+ u32 phy_a_standby_ctrl_offs;
+ u32 phy_b_standby_ctrl_offs;
+};
+
+static struct ddr_phy_ofdata ddr_phy_71_1 = {
+ .supports_warm_boot = true,
+ .pll_status_offset = 0x0c,
+ .s3entry_method = 1,
+ .warm_boot_offset = 0x2c,
+ .phy_a_standby_ctrl_offs = 0x198,
+ .phy_b_standby_ctrl_offs = DDR_PHY_NO_CHANNEL
+};
+
+static struct ddr_phy_ofdata ddr_phy_72_0 = {
+ .supports_warm_boot = true,
+ .pll_status_offset = 0x10,
+ .s3entry_method = 1,
+ .warm_boot_offset = 0x40,
+ .phy_a_standby_ctrl_offs = 0x2a4,
+ .phy_b_standby_ctrl_offs = 0x8a4
+};
+
+static struct ddr_phy_ofdata ddr_phy_225_1 = {
+ .supports_warm_boot = false,
+ .pll_status_offset = 0x4,
+ .s3entry_method = 0
+};
+
+static struct ddr_phy_ofdata ddr_phy_240_1 = {
+ .supports_warm_boot = true,
+ .pll_status_offset = 0x4,
+ .s3entry_method = 0
+};
+
+static const struct of_device_id ddr_phy_dt_ids[] = {
+ {
+ .compatible = "brcm,brcmstb-ddr-phy-v71.1",
+ .data = &ddr_phy_71_1,
+ },
+ {
+ .compatible = "brcm,brcmstb-ddr-phy-v72.0",
+ .data = &ddr_phy_72_0,
+ },
+ {
+ .compatible = "brcm,brcmstb-ddr-phy-v225.1",
+ .data = &ddr_phy_225_1,
+ },
+ {
+ .compatible = "brcm,brcmstb-ddr-phy-v240.1",
+ .data = &ddr_phy_240_1,
+ },
+ {
+ /* Same as v240.1, for the registers we care about */
+ .compatible = "brcm,brcmstb-ddr-phy-v240.2",
+ .data = &ddr_phy_240_1,
+ },
+ {}
+};
+
+struct ddr_seq_ofdata {
+ bool needs_ddr_pad;
+ u32 warm_boot_offset;
+};
+
+static const struct ddr_seq_ofdata ddr_seq_b22 = {
+ .needs_ddr_pad = false,
+ .warm_boot_offset = 0x2c,
+};
+
+static const struct ddr_seq_ofdata ddr_seq = {
+ .needs_ddr_pad = true,
+};
+
+static const struct of_device_id ddr_shimphy_dt_ids[] = {
+ { .compatible = "brcm,brcmstb-ddr-shimphy-v1.0" },
+ {}
+};
+
+static const struct of_device_id brcmstb_memc_of_match[] = {
+ {
+ .compatible = "brcm,brcmstb-memc-ddr-rev-b.2.2",
+ .data = &ddr_seq_b22,
+ },
+ {
+ .compatible = "brcm,brcmstb-memc-ddr",
+ .data = &ddr_seq,
+ },
+ {},
+};
+
+static void __iomem *brcmstb_ioremap_match(const struct of_device_id *matches,
+ int index, const void **ofdata)
+{
+ struct device_node *dn;
+ const struct of_device_id *match;
+
+ dn = of_find_matching_node_and_match(NULL, matches, &match);
+ if (!dn)
+ return ERR_PTR(-EINVAL);
+
+ if (ofdata)
+ *ofdata = match->data;
+
+ return of_io_request_and_map(dn, index, dn->full_name);
+}
+
+static int brcmstb_pm_panic_notify(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ writel_relaxed(BRCMSTB_PANIC_MAGIC, ctrl.aon_sram + AON_REG_PANIC);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block brcmstb_pm_panic_nb = {
+ .notifier_call = brcmstb_pm_panic_notify,
+};
+
+static int brcmstb_pm_probe(struct platform_device *pdev)
+{
+ const struct ddr_phy_ofdata *ddr_phy_data;
+ const struct ddr_seq_ofdata *ddr_seq_data;
+ const struct of_device_id *of_id = NULL;
+ struct device_node *dn;
+ void __iomem *base;
+ int ret, i;
+
+ /* AON ctrl registers */
+ base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL);
+ if (IS_ERR(base)) {
+ pr_err("error mapping AON_CTRL\n");
+ return PTR_ERR(base);
+ }
+ ctrl.aon_ctrl_base = base;
+
+ /* AON SRAM registers */
+ base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 1, NULL);
+ if (IS_ERR(base)) {
+ /* Assume standard offset */
+ ctrl.aon_sram = ctrl.aon_ctrl_base +
+ AON_CTRL_SYSTEM_DATA_RAM_OFS;
+ } else {
+ ctrl.aon_sram = base;
+ }
+
+ writel_relaxed(0, ctrl.aon_sram + AON_REG_PANIC);
+
+ /* DDR PHY registers */
+ base = brcmstb_ioremap_match(ddr_phy_dt_ids, 0,
+ (const void **)&ddr_phy_data);
+ if (IS_ERR(base)) {
+ pr_err("error mapping DDR PHY\n");
+ return PTR_ERR(base);
+ }
+ ctrl.support_warm_boot = ddr_phy_data->supports_warm_boot;
+ ctrl.pll_status_offset = ddr_phy_data->pll_status_offset;
+ /* Only need DDR PHY 0 for now? */
+ ctrl.memcs[0].ddr_phy_base = base;
+ ctrl.s3entry_method = ddr_phy_data->s3entry_method;
+ ctrl.phy_a_standby_ctrl_offs = ddr_phy_data->phy_a_standby_ctrl_offs;
+ ctrl.phy_b_standby_ctrl_offs = ddr_phy_data->phy_b_standby_ctrl_offs;
+ /*
+	 * Slightly gross to use the PHY version to get a MEMC
+	 * offset, but that is the only versioned thing so far
+	 * we can test for.
+ */
+ ctrl.warm_boot_offset = ddr_phy_data->warm_boot_offset;
+
+ /* DDR SHIM-PHY registers */
+ for_each_matching_node(dn, ddr_shimphy_dt_ids) {
+ i = ctrl.num_memc;
+ if (i >= MAX_NUM_MEMC) {
+ pr_warn("too many MEMCs (max %d)\n", MAX_NUM_MEMC);
+ break;
+ }
+
+ base = of_io_request_and_map(dn, 0, dn->full_name);
+ if (IS_ERR(base)) {
+ if (!ctrl.support_warm_boot)
+ break;
+
+ pr_err("error mapping DDR SHIMPHY %d\n", i);
+ return PTR_ERR(base);
+ }
+ ctrl.memcs[i].ddr_shimphy_base = base;
+ ctrl.num_memc++;
+ }
+
+ /* Sequencer DRAM Param and Control Registers */
+ i = 0;
+ for_each_matching_node(dn, brcmstb_memc_of_match) {
+ base = of_iomap(dn, 0);
+ if (!base) {
+ pr_err("error mapping DDR Sequencer %d\n", i);
+ return -ENOMEM;
+ }
+
+ of_id = of_match_node(brcmstb_memc_of_match, dn);
+ if (!of_id) {
+ iounmap(base);
+ return -EINVAL;
+ }
+
+ ddr_seq_data = of_id->data;
+ ctrl.needs_ddr_pad = ddr_seq_data->needs_ddr_pad;
+ /* Adjust warm boot offset based on the DDR sequencer */
+ if (ddr_seq_data->warm_boot_offset)
+ ctrl.warm_boot_offset = ddr_seq_data->warm_boot_offset;
+
+ ctrl.memcs[i].ddr_ctrl = base;
+ i++;
+ }
+
+ pr_debug("PM: supports warm boot:%d, method:%d, wboffs:%x\n",
+ ctrl.support_warm_boot, ctrl.s3entry_method,
+ ctrl.warm_boot_offset);
+
+ dn = of_find_matching_node(NULL, sram_dt_ids);
+ if (!dn) {
+ pr_err("SRAM not found\n");
+ return -EINVAL;
+ }
+
+ ret = brcmstb_init_sram(dn);
+ if (ret) {
+ pr_err("error setting up SRAM for PM\n");
+ return ret;
+ }
+
+ ctrl.pdev = pdev;
+
+ ctrl.s3_params = kmalloc(sizeof(*ctrl.s3_params), GFP_KERNEL);
+ if (!ctrl.s3_params)
+ return -ENOMEM;
+ ctrl.s3_params_pa = dma_map_single(&pdev->dev, ctrl.s3_params,
+ sizeof(*ctrl.s3_params),
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, ctrl.s3_params_pa)) {
+ pr_err("error mapping DMA memory\n");
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &brcmstb_pm_panic_nb);
+
+ pm_power_off = brcmstb_pm_poweroff;
+ suspend_set_ops(&brcmstb_pm_ops);
+
+ return 0;
+
+out:
+ kfree(ctrl.s3_params);
+
+ pr_warn("PM: initialization failed with code %d\n", ret);
+
+ return ret;
+}
+
+static struct platform_driver brcmstb_pm_driver = {
+ .driver = {
+ .name = "brcmstb-pm",
+ .of_match_table = aon_ctrl_dt_ids,
+ },
+};
+
+static int __init brcmstb_pm_init(void)
+{
+ return platform_driver_probe(&brcmstb_pm_driver,
+ brcmstb_pm_probe);
+}
+module_init(brcmstb_pm_init);
--- /dev/null
+/*
+ * MIPS-specific support for Broadcom STB S2/S3/S5 power management
+ *
+ * Copyright (C) 2016-2017 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/delay.h>
+#include <linux/suspend.h>
+#include <asm/bmips.h>
+#include <asm/tlbflush.h>
+
+#include "pm.h"
+
+#define S2_NUM_PARAMS 6
+#define MAX_NUM_MEMC 3
+
+/* S3 constants */
+#define MAX_GP_REGS 16
+#define MAX_CP0_REGS 32
+#define NUM_MEMC_CLIENTS 128
+#define AON_CTRL_RAM_SIZE 128
+#define BRCMSTB_S3_MAGIC 0x5AFEB007
+
+#define CLEAR_RESET_MASK 0x01
+
+/* Index each CP0 register that needs to be saved */
+#define CONTEXT 0
+#define USER_LOCAL 1
+#define PGMK 2
+#define HWRENA 3
+#define COMPARE 4
+#define STATUS 5
+#define CONFIG 6
+#define MODE 7
+#define EDSP 8
+#define BOOT_VEC 9
+#define EBASE 10
+
+struct brcmstb_memc {
+ void __iomem *ddr_phy_base;
+ void __iomem *arb_base;
+};
+
+struct brcmstb_pm_control {
+ void __iomem *aon_ctrl_base;
+ void __iomem *aon_sram_base;
+ void __iomem *timers_base;
+ struct brcmstb_memc memcs[MAX_NUM_MEMC];
+ int num_memc;
+};
+
+struct brcm_pm_s3_context {
+ u32 cp0_regs[MAX_CP0_REGS];
+ u32 memc0_rts[NUM_MEMC_CLIENTS];
+ u32 sc_boot_vec;
+};
+
+struct brcmstb_mem_transfer;
+
+struct brcmstb_mem_transfer {
+ struct brcmstb_mem_transfer *next;
+ void *src;
+ void *dst;
+ dma_addr_t pa_src;
+ dma_addr_t pa_dst;
+ u32 len;
+ u8 key;
+ u8 mode;
+ u8 src_remapped;
+ u8 dst_remapped;
+ u8 src_dst_remapped;
+};
+
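+/* Store a 32-bit word at index 'idx' of the AON system data RAM. */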
+#define AON_SAVE_SRAM(base, idx, val) \
+ __raw_writel(val, base + (idx << 2))
+
+/* Used for saving registers in asm */
+u32 gp_regs[MAX_GP_REGS];
+
+#define BSP_CLOCK_STOP 0x00
+#define PM_INITIATE 0x01
+
+static struct brcmstb_pm_control ctrl;
+
+static void brcm_pm_save_cp0_context(struct brcm_pm_s3_context *ctx)
+{
+ /* Generic MIPS */
+ ctx->cp0_regs[CONTEXT] = read_c0_context();
+ ctx->cp0_regs[USER_LOCAL] = read_c0_userlocal();
+ ctx->cp0_regs[PGMK] = read_c0_pagemask();
+ ctx->cp0_regs[HWRENA] = read_c0_cache();
+ ctx->cp0_regs[COMPARE] = read_c0_compare();
+ ctx->cp0_regs[STATUS] = read_c0_status();
+
+ /* Broadcom specific */
+ ctx->cp0_regs[CONFIG] = read_c0_brcm_config();
+ ctx->cp0_regs[MODE] = read_c0_brcm_mode();
+ ctx->cp0_regs[EDSP] = read_c0_brcm_edsp();
+ ctx->cp0_regs[BOOT_VEC] = read_c0_brcm_bootvec();
+ ctx->cp0_regs[EBASE] = read_c0_ebase();
+
+ ctx->sc_boot_vec = bmips_read_zscm_reg(0xa0);
+}
+
+static void brcm_pm_restore_cp0_context(struct brcm_pm_s3_context *ctx)
+{
+ /* Restore cp0 state */
+ bmips_write_zscm_reg(0xa0, ctx->sc_boot_vec);
+
+ /* Generic MIPS */
+ write_c0_context(ctx->cp0_regs[CONTEXT]);
+ write_c0_userlocal(ctx->cp0_regs[USER_LOCAL]);
+ write_c0_pagemask(ctx->cp0_regs[PGMK]);
+ write_c0_cache(ctx->cp0_regs[HWRENA]);
+ write_c0_compare(ctx->cp0_regs[COMPARE]);
+ write_c0_status(ctx->cp0_regs[STATUS]);
+
+ /* Broadcom specific */
+ write_c0_brcm_config(ctx->cp0_regs[CONFIG]);
+ write_c0_brcm_mode(ctx->cp0_regs[MODE]);
+ write_c0_brcm_edsp(ctx->cp0_regs[EDSP]);
+ write_c0_brcm_bootvec(ctx->cp0_regs[BOOT_VEC]);
+ write_c0_ebase(ctx->cp0_regs[EBASE]);
+}
+
+static void brcmstb_pm_handshake(void)
+{
+ void __iomem *base = ctrl.aon_ctrl_base;
+ u32 tmp;
+
+ /* BSP power handshake, v1 */
+ tmp = __raw_readl(base + AON_CTRL_HOST_MISC_CMDS);
+ tmp &= ~1UL;
+ __raw_writel(tmp, base + AON_CTRL_HOST_MISC_CMDS);
+ (void)__raw_readl(base + AON_CTRL_HOST_MISC_CMDS);
+
+ __raw_writel(0, base + AON_CTRL_PM_INITIATE);
+ (void)__raw_readl(base + AON_CTRL_PM_INITIATE);
+ __raw_writel(BSP_CLOCK_STOP | PM_INITIATE,
+ base + AON_CTRL_PM_INITIATE);
+ /*
+ * HACK: BSP may have internal race on the CLOCK_STOP command.
+ * Avoid touching the BSP for a few milliseconds.
+ */
+ mdelay(3);
+}
+
+static void brcmstb_pm_s5(void)
+{
+ void __iomem *base = ctrl.aon_ctrl_base;
+
+ brcmstb_pm_handshake();
+
+ /* Clear magic s3 warm-boot value */
+ AON_SAVE_SRAM(ctrl.aon_sram_base, 0, 0);
+
+ /* Set the countdown */
+ __raw_writel(0x10, base + AON_CTRL_PM_CPU_WAIT_COUNT);
+ (void)__raw_readl(base + AON_CTRL_PM_CPU_WAIT_COUNT);
+
+ /* Prepare to S5 cold boot */
+ __raw_writel(PM_COLD_CONFIG, base + AON_CTRL_PM_CTRL);
+ (void)__raw_readl(base + AON_CTRL_PM_CTRL);
+
+ __raw_writel((PM_COLD_CONFIG | PM_PWR_DOWN), base +
+ AON_CTRL_PM_CTRL);
+ (void)__raw_readl(base + AON_CTRL_PM_CTRL);
+
+ __asm__ __volatile__(
+ " wait\n"
+ : : : "memory");
+}
+
+static int brcmstb_pm_s3(void)
+{
+ struct brcm_pm_s3_context s3_context;
+ void __iomem *memc_arb_base;
+ unsigned long flags;
+ u32 tmp;
+ int i;
+
+ /* Prepare for s3 */
+ AON_SAVE_SRAM(ctrl.aon_sram_base, 0, BRCMSTB_S3_MAGIC);
+ AON_SAVE_SRAM(ctrl.aon_sram_base, 1, (u32)&s3_reentry);
+ AON_SAVE_SRAM(ctrl.aon_sram_base, 2, 0);
+
+ /* Clear RESET_HISTORY */
+ tmp = __raw_readl(ctrl.aon_ctrl_base + AON_CTRL_RESET_CTRL);
+ tmp &= ~CLEAR_RESET_MASK;
+ __raw_writel(tmp, ctrl.aon_ctrl_base + AON_CTRL_RESET_CTRL);
+
+ local_irq_save(flags);
+
+	/* Inhibit DDR_RSTb pulse for both MEMCs */
+ for (i = 0; i < ctrl.num_memc; i++) {
+ tmp = __raw_readl(ctrl.memcs[i].ddr_phy_base +
+ DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL);
+
+ tmp &= ~0x0f;
+ __raw_writel(tmp, ctrl.memcs[i].ddr_phy_base +
+ DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL);
+ tmp |= (0x05 | BIT(5));
+ __raw_writel(tmp, ctrl.memcs[i].ddr_phy_base +
+ DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL);
+ }
+
+ /* Save CP0 context */
+ brcm_pm_save_cp0_context(&s3_context);
+
+	/* Save RTS (skip debug register) */
+ memc_arb_base = ctrl.memcs[0].arb_base + 4;
+ for (i = 0; i < NUM_MEMC_CLIENTS; i++) {
+ s3_context.memc0_rts[i] = __raw_readl(memc_arb_base);
+ memc_arb_base += 4;
+ }
+
+ /* Save I/O context */
+ local_flush_tlb_all();
+ _dma_cache_wback_inv(0, ~0);
+
+ brcm_pm_do_s3(ctrl.aon_ctrl_base, current_cpu_data.dcache.linesz);
+
+ /* CPU reconfiguration */
+ local_flush_tlb_all();
+ bmips_cpu_setup();
+ cpumask_clear(&bmips_booted_mask);
+
+ /* Restore RTS (skip debug register) */
+ memc_arb_base = ctrl.memcs[0].arb_base + 4;
+ for (i = 0; i < NUM_MEMC_CLIENTS; i++) {
+ __raw_writel(s3_context.memc0_rts[i], memc_arb_base);
+ memc_arb_base += 4;
+ }
+
+	/* Restore CP0 context */
+ brcm_pm_restore_cp0_context(&s3_context);
+
+ local_irq_restore(flags);
+
+ return 0;
+}
+
+static int brcmstb_pm_s2(void)
+{
+ /*
+	 * We need to pass 6 arguments to an assembly function. Let's avoid the
+	 * stack and pass the arguments in an explicit array of 4-byte values.
+	 * The assembly code assumes all arguments are 4 bytes and are ordered
+	 * like so:
+	 *
+	 * 0: AON_CTRL base register
+	 * 1: DDR_PHY base register
+	 * 2: TIMERS base register
+ * 3: I-Cache line size
+ * 4: Restart vector address
+ * 5: Restart vector size
+ */
+	u32 s2_params[S2_NUM_PARAMS];
+
+ /* Prepare s2 parameters */
+ s2_params[0] = (u32)ctrl.aon_ctrl_base;
+ s2_params[1] = (u32)ctrl.memcs[0].ddr_phy_base;
+ s2_params[2] = (u32)ctrl.timers_base;
+ s2_params[3] = (u32)current_cpu_data.icache.linesz;
+ s2_params[4] = (u32)BMIPS_WARM_RESTART_VEC;
+ s2_params[5] = (u32)(bmips_smp_int_vec_end -
+ bmips_smp_int_vec);
+
+ /* Drop to standby */
+ brcm_pm_do_s2(s2_params);
+
+ return 0;
+}
+
+static int brcmstb_pm_standby(bool deep_standby)
+{
+ brcmstb_pm_handshake();
+
+ /* Send IRQs to BMIPS_WARM_RESTART_VEC */
+ clear_c0_cause(CAUSEF_IV);
+ irq_disable_hazard();
+ set_c0_status(ST0_BEV);
+ irq_disable_hazard();
+
+ if (deep_standby)
+ brcmstb_pm_s3();
+ else
+ brcmstb_pm_s2();
+
+ /* Send IRQs to normal runtime vectors */
+ clear_c0_status(ST0_BEV);
+ irq_disable_hazard();
+ set_c0_cause(CAUSEF_IV);
+ irq_disable_hazard();
+
+ return 0;
+}
+
+static int brcmstb_pm_enter(suspend_state_t state)
+{
+ int ret = -EINVAL;
+
+ switch (state) {
+ case PM_SUSPEND_STANDBY:
+ ret = brcmstb_pm_standby(false);
+ break;
+ case PM_SUSPEND_MEM:
+ ret = brcmstb_pm_standby(true);
+ break;
+ }
+
+ return ret;
+}
+
+static int brcmstb_pm_valid(suspend_state_t state)
+{
+ switch (state) {
+ case PM_SUSPEND_STANDBY:
+ return true;
+ case PM_SUSPEND_MEM:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static const struct platform_suspend_ops brcmstb_pm_ops = {
+ .enter = brcmstb_pm_enter,
+ .valid = brcmstb_pm_valid,
+};
+
+static const struct of_device_id aon_ctrl_dt_ids[] = {
+ { .compatible = "brcm,brcmstb-aon-ctrl" },
+ { /* sentinel */ }
+};
+
+static const struct of_device_id ddr_phy_dt_ids[] = {
+ { .compatible = "brcm,brcmstb-ddr-phy" },
+ { /* sentinel */ }
+};
+
+static const struct of_device_id arb_dt_ids[] = {
+ { .compatible = "brcm,brcmstb-memc-arb" },
+ { /* sentinel */ }
+};
+
+static const struct of_device_id timers_ids[] = {
+ { .compatible = "brcm,brcmstb-timers" },
+ { /* sentinel */ }
+};
+
+static inline void __iomem *brcmstb_ioremap_node(struct device_node *dn,
+ int index)
+{
+ return of_io_request_and_map(dn, index, dn->full_name);
+}
+
+static void __iomem *brcmstb_ioremap_match(const struct of_device_id *matches,
+ int index, const void **ofdata)
+{
+ struct device_node *dn;
+ const struct of_device_id *match;
+
+ dn = of_find_matching_node_and_match(NULL, matches, &match);
+ if (!dn)
+ return ERR_PTR(-EINVAL);
+
+ if (ofdata)
+ *ofdata = match->data;
+
+ return brcmstb_ioremap_node(dn, index);
+}
+
+static int brcmstb_pm_init(void)
+{
+ struct device_node *dn;
+ void __iomem *base;
+ int i;
+
+ /* AON ctrl registers */
+ base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL);
+ if (IS_ERR(base)) {
+ pr_err("error mapping AON_CTRL\n");
+ goto aon_err;
+ }
+ ctrl.aon_ctrl_base = base;
+
+ /* AON SRAM registers */
+ base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 1, NULL);
+ if (IS_ERR(base)) {
+ pr_err("error mapping AON_SRAM\n");
+ goto sram_err;
+ }
+ ctrl.aon_sram_base = base;
+
+ ctrl.num_memc = 0;
+ /* Map MEMC DDR PHY registers */
+ for_each_matching_node(dn, ddr_phy_dt_ids) {
+ i = ctrl.num_memc;
+ if (i >= MAX_NUM_MEMC) {
+ pr_warn("Too many MEMCs (max %d)\n", MAX_NUM_MEMC);
+ break;
+ }
+ base = brcmstb_ioremap_node(dn, 0);
+ if (IS_ERR(base))
+ goto ddr_err;
+
+ ctrl.memcs[i].ddr_phy_base = base;
+ ctrl.num_memc++;
+ }
+
+ /* MEMC ARB registers */
+ base = brcmstb_ioremap_match(arb_dt_ids, 0, NULL);
+ if (IS_ERR(base)) {
+ pr_err("error mapping MEMC ARB\n");
+ goto ddr_err;
+ }
+ ctrl.memcs[0].arb_base = base;
+
+ /* Timer registers */
+ base = brcmstb_ioremap_match(timers_ids, 0, NULL);
+ if (IS_ERR(base)) {
+ pr_err("error mapping timers\n");
+ goto tmr_err;
+ }
+ ctrl.timers_base = base;
+
+ /* s3 cold boot aka s5 */
+ pm_power_off = brcmstb_pm_s5;
+
+ suspend_set_ops(&brcmstb_pm_ops);
+
+ return 0;
+
+tmr_err:
+ iounmap(ctrl.memcs[0].arb_base);
+ddr_err:
+ for (i = 0; i < ctrl.num_memc; i++)
+ iounmap(ctrl.memcs[i].ddr_phy_base);
+
+ iounmap(ctrl.aon_sram_base);
+sram_err:
+ iounmap(ctrl.aon_ctrl_base);
+aon_err:
+ return PTR_ERR(base);
+}
+arch_initcall(brcmstb_pm_init);
--- /dev/null
+/*
+ * Definitions for Broadcom STB power management / Always ON (AON) block
+ *
+ * Copyright © 2016-2017 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __BRCMSTB_PM_H__
+#define __BRCMSTB_PM_H__
+
+#define AON_CTRL_RESET_CTRL 0x00
+#define AON_CTRL_PM_CTRL 0x04
+#define AON_CTRL_PM_STATUS 0x08
+#define AON_CTRL_PM_CPU_WAIT_COUNT 0x10
+#define AON_CTRL_PM_INITIATE 0x88
+#define AON_CTRL_HOST_MISC_CMDS 0x8c
+#define AON_CTRL_SYSTEM_DATA_RAM_OFS 0x200
+
+/* MIPS PM constants */
+/* MEMC0 offsets */
+#define DDR40_PHY_CONTROL_REGS_0_PLL_STATUS 0x10
+#define DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL 0xa4
+
+/* TIMER offsets */
+#define TIMER_TIMER1_CTRL 0x0c
+#define TIMER_TIMER1_STAT 0x1c
+
+/* TIMER defines */
+#define RESET_TIMER 0x0
+#define START_TIMER 0xbfffffff
+#define TIMER_MASK 0x3fffffff
+
+/* PM_CTRL bitfield (Method #0) */
+#define PM_FAST_PWRDOWN (1 << 6)
+#define PM_WARM_BOOT (1 << 5)
+#define PM_DEEP_STANDBY (1 << 4)
+#define PM_CPU_PWR (1 << 3)
+#define PM_USE_CPU_RDY (1 << 2)
+#define PM_PLL_PWRDOWN (1 << 1)
+#define PM_PWR_DOWN (1 << 0)
+
+/* PM_CTRL bitfield (Method #1) */
+#define PM_DPHY_STANDBY_CLEAR (1 << 20)
+#define PM_MIN_S3_WIDTH_TIMER_BYPASS (1 << 7)
+
+#define PM_S2_COMMAND (PM_PLL_PWRDOWN | PM_USE_CPU_RDY | PM_PWR_DOWN)
+
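For a concrete sense of the Method #0 encoding: PM_S2_COMMAND above works out
to PM_PLL_PWRDOWN (bit 1) | PM_USE_CPU_RDY (bit 2) | PM_PWR_DOWN (bit 0). A
worked expansion (illustrative only, not part of the header):

	/* PM_S2_COMMAND == (1 << 1) | (1 << 2) | (1 << 0) == 0x7 */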
+/* Method 0 bitmasks */
+#define PM_COLD_CONFIG (PM_PLL_PWRDOWN | PM_DEEP_STANDBY)
+#define PM_WARM_CONFIG (PM_COLD_CONFIG | PM_USE_CPU_RDY | PM_WARM_BOOT)
+
+/* Method 1 bitmask */
+#define M1_PM_WARM_CONFIG (PM_DPHY_STANDBY_CLEAR | \
+ PM_MIN_S3_WIDTH_TIMER_BYPASS | \
+ PM_WARM_BOOT | PM_DEEP_STANDBY | \
+ PM_PLL_PWRDOWN | PM_PWR_DOWN)
+
+#define M1_PM_COLD_CONFIG (PM_DPHY_STANDBY_CLEAR | \
+ PM_MIN_S3_WIDTH_TIMER_BYPASS | \
+ PM_DEEP_STANDBY | \
+ PM_PLL_PWRDOWN | PM_PWR_DOWN)
+
+#ifndef __ASSEMBLY__
+
+#ifndef CONFIG_MIPS
+extern const unsigned long brcmstb_pm_do_s2_sz;
+extern asmlinkage int brcmstb_pm_do_s2(void __iomem *aon_ctrl_base,
+ void __iomem *ddr_phy_pll_status);
+#else
+/* s2 asm */
+extern asmlinkage int brcm_pm_do_s2(u32 *s2_params);
+
+/* s3 asm */
+extern asmlinkage int brcm_pm_do_s3(void __iomem *aon_ctrl_base,
+ int dcache_linesz);
+extern int s3_reentry;
+#endif /* CONFIG_MIPS */
+
+#endif
+
+#endif /* __BRCMSTB_PM_H__ */
--- /dev/null
+/*
+ * Copyright © 2014-2017 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+#include "pm.h"
+
+ .text
+ .align 3
+
+#define AON_CTRL_REG r10
+#define DDR_PHY_STATUS_REG r11
+
+/*
+ * r0: AON_CTRL base address
+ * r1: DDR PHY PLL status register address
+ */
+ENTRY(brcmstb_pm_do_s2)
+ stmfd sp!, {r4-r11, lr}
+ mov AON_CTRL_REG, r0
+ mov DDR_PHY_STATUS_REG, r1
+
+ /* Flush memory transactions */
+ dsb
+
+ /* Cache DDR_PHY_STATUS_REG translation */
+ ldr r0, [DDR_PHY_STATUS_REG]
+
+ /* power down request */
+ ldr r0, =PM_S2_COMMAND
+ ldr r1, =0
+ str r1, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
+ ldr r1, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
+ str r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
+ ldr r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
+
+ /* Wait for interrupt */
+ wfi
+ nop
+
+ /* Bring MEMC back up */
+1: ldr r0, [DDR_PHY_STATUS_REG]
+ ands r0, #1
+ beq 1b
+
+ /* Power-up handshake */
+ ldr r0, =1
+ str r0, [AON_CTRL_REG, #AON_CTRL_HOST_MISC_CMDS]
+ ldr r0, [AON_CTRL_REG, #AON_CTRL_HOST_MISC_CMDS]
+
+ ldr r0, =0
+ str r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
+ ldr r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
+
+ /* Return to caller */
+ ldr r0, =0
+ ldmfd sp!, {r4-r11, pc}
+
+ ENDPROC(brcmstb_pm_do_s2)
+
+ /* Place literal pool here */
+ .ltorg
+
+/*
+ * Export the size of the S2 routine so that C code can treat it as a
+ * relocatable blob (e.g. for copying into on-chip SRAM).
+ */
+ENTRY(brcmstb_pm_do_s2_sz)
+	.word	. - brcmstb_pm_do_s2
--- /dev/null
+/*
+ * Copyright (C) 2016 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/asm.h>
+#include <asm/regdef.h>
+#include <asm/mipsregs.h>
+#include <asm/stackframe.h>
+
+#include "pm.h"
+
+ .text
+ .set noreorder
+ .align 5
+
+/*
+ * a0: u32 params array
+ */
+LEAF(brcm_pm_do_s2)
+
+ subu sp, 64
+ sw ra, 0(sp)
+ sw s0, 4(sp)
+ sw s1, 8(sp)
+ sw s2, 12(sp)
+ sw s3, 16(sp)
+ sw s4, 20(sp)
+ sw s5, 24(sp)
+ sw s6, 28(sp)
+ sw s7, 32(sp)
+
+ /*
+ * Dereference the params array
+ * s0: AON_CTRL base register
+ * s1: DDR_PHY base register
+ * s2: TIMERS base register
+ * s3: I-Cache line size
+ * s4: Restart vector address
+ * s5: Restart vector size
+ */
+ move t0, a0
+
+ lw s0, 0(t0)
+ lw s1, 4(t0)
+ lw s2, 8(t0)
+ lw s3, 12(t0)
+ lw s4, 16(t0)
+ lw s5, 20(t0)
+
+ /* Lock this asm section into the I-cache */
+ addiu t1, s3, -1
+ not t1
+
+ la t0, brcm_pm_do_s2
+ and t0, t1
+
+ la t2, asm_end
+ and t2, t1
+
+1:	cache	0x1c, 0(t0)	/* 0x1c: Fetch and Lock, I-cache */
+ bne t0, t2, 1b
+ addu t0, s3
+
+ /* Lock the interrupt vector into the I-cache */
+ move t0, zero
+
+2: move t1, s4
+ cache 0x1c, 0(t1)
+ addu t1, s3
+ addu t0, s3
+ ble t0, s5, 2b
+ nop
+
+ sync
+
+ /* Power down request */
+ li t0, PM_S2_COMMAND
+ sw zero, AON_CTRL_PM_CTRL(s0)
+ lw zero, AON_CTRL_PM_CTRL(s0)
+ sw t0, AON_CTRL_PM_CTRL(s0)
+ lw t0, AON_CTRL_PM_CTRL(s0)
+
+ /* Enable CP0 interrupt 2 and wait for interrupt */
+ mfc0 t0, CP0_STATUS
+ /* Save cp0 sr for restoring later */
+ move s6, t0
+
+ li t1, ~(ST0_IM | ST0_IE)
+ and t0, t1
+ ori t0, STATUSF_IP2
+ mtc0 t0, CP0_STATUS
+ nop
+ nop
+ nop
+ ori t0, ST0_IE
+ mtc0 t0, CP0_STATUS
+
+ /* Wait for interrupt */
+ wait
+ nop
+
+ /* Wait for memc0 */
+1: lw t0, DDR40_PHY_CONTROL_REGS_0_PLL_STATUS(s1)
+ andi t0, 1
+ beqz t0, 1b
+ nop
+
+ /* 1ms delay needed for stable recovery */
+ /* Use TIMER1 to count 1 ms */
+ li t0, RESET_TIMER
+ sw t0, TIMER_TIMER1_CTRL(s2)
+ lw t0, TIMER_TIMER1_CTRL(s2)
+
+ li t0, START_TIMER
+ sw t0, TIMER_TIMER1_CTRL(s2)
+ lw t0, TIMER_TIMER1_CTRL(s2)
+
+ /* Prepare delay */
+ li t0, TIMER_MASK
+ lw t1, TIMER_TIMER1_STAT(s2)
+ and t1, t0
+	/* 1 ms delay: 27000 timer ticks, i.e. a 27 MHz timer clock */
+ addi t1, 27000
+
+ /* Wait for the timer value to exceed t1 */
+1: lw t0, TIMER_TIMER1_STAT(s2)
+ sgtu t2, t1, t0
+ bnez t2, 1b
+ nop
+
+ /* Power back up */
+ li t1, 1
+ sw t1, AON_CTRL_HOST_MISC_CMDS(s0)
+ lw t1, AON_CTRL_HOST_MISC_CMDS(s0)
+
+ sw zero, AON_CTRL_PM_CTRL(s0)
+ lw zero, AON_CTRL_PM_CTRL(s0)
+
+ /* Unlock I-cache */
+ addiu t1, s3, -1
+ not t1
+
+ la t0, brcm_pm_do_s2
+ and t0, t1
+
+ la t2, asm_end
+ and t2, t1
+
+1:	cache	0x00, 0(t0)	/* 0x00: Index Invalidate I-cache; drops the lock */
+ bne t0, t2, 1b
+ addu t0, s3
+
+ /* Unlock interrupt vector */
+ move t0, zero
+
+2: move t1, s4
+ cache 0x00, 0(t1)
+ addu t1, s3
+ addu t0, s3
+ ble t0, s5, 2b
+ nop
+
+ /* Restore cp0 sr */
+ sync
+ nop
+ mtc0 s6, CP0_STATUS
+ nop
+
+ /* Set return value to success */
+ li v0, 0
+
+ /* Return to caller */
+ lw s7, 32(sp)
+ lw s6, 28(sp)
+ lw s5, 24(sp)
+ lw s4, 20(sp)
+ lw s3, 16(sp)
+ lw s2, 12(sp)
+ lw s1, 8(sp)
+ lw s0, 4(sp)
+ lw ra, 0(sp)
+ addiu sp, 64
+
+ jr ra
+ nop
+END(brcm_pm_do_s2)
+
+ .globl asm_end
+asm_end:
+ nop
--- /dev/null
+/*
+ * Copyright (C) 2016 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/asm.h>
+#include <asm/regdef.h>
+#include <asm/mipsregs.h>
+#include <asm/bmips.h>
+
+#include "pm.h"
+
+ .text
+ .set noreorder
+ .align 5
+ .global s3_reentry
+
+/*
+ * a0: AON_CTRL base register
+ * a1: D-Cache line size
+ */
+LEAF(brcm_pm_do_s3)
+
+ /* Get the address of s3_context */
+ la t0, gp_regs
+ sw ra, 0(t0)
+ sw s0, 4(t0)
+ sw s1, 8(t0)
+ sw s2, 12(t0)
+ sw s3, 16(t0)
+ sw s4, 20(t0)
+ sw s5, 24(t0)
+ sw s6, 28(t0)
+ sw s7, 32(t0)
+ sw gp, 36(t0)
+ sw sp, 40(t0)
+ sw fp, 44(t0)
+
+ /* Save CP0 Status */
+ mfc0 t1, CP0_STATUS
+ sw t1, 48(t0)
+
+ /* Write-back gp registers - cache will be gone */
+ addiu t1, a1, -1
+ not t1
+ and t0, t1
+
+ /* Flush at least 64 bytes */
+ addiu t2, t0, 64
+ and t2, t1
+
+1: cache 0x17, 0(t0)
+ bne t0, t2, 1b
+ addu t0, a1
+
+ /* Drop to deep standby */
+ li t1, PM_WARM_CONFIG
+ sw zero, AON_CTRL_PM_CTRL(a0)
+ lw zero, AON_CTRL_PM_CTRL(a0)
+ sw t1, AON_CTRL_PM_CTRL(a0)
+ lw t1, AON_CTRL_PM_CTRL(a0)
+
+ li t1, (PM_WARM_CONFIG | PM_PWR_DOWN)
+ sw t1, AON_CTRL_PM_CTRL(a0)
+ lw t1, AON_CTRL_PM_CTRL(a0)
+
+ /* Enable CP0 interrupt 2 and wait for interrupt */
+ mfc0 t0, CP0_STATUS
+
+ li t1, ~(ST0_IM | ST0_IE)
+ and t0, t1
+ ori t0, STATUSF_IP2
+ mtc0 t0, CP0_STATUS
+ nop
+ nop
+ nop
+ ori t0, ST0_IE
+ mtc0 t0, CP0_STATUS
+
+ /* Wait for interrupt */
+ wait
+ nop
+
+s3_reentry:
+
+ /* Clear call/return stack */
+ li t0, (0x06 << 16)
+ mtc0 t0, $22, 2
+ ssnop
+ ssnop
+ ssnop
+
+ /* Clear jump target buffer */
+ li t0, (0x04 << 16)
+ mtc0 t0, $22, 2
+ ssnop
+ ssnop
+ ssnop
+
+ sync
+ nop
+
+ /* Setup mmu defaults */
+ mtc0 zero, CP0_WIRED
+ mtc0 zero, CP0_ENTRYHI
+ li k0, PM_DEFAULT_MASK
+ mtc0 k0, CP0_PAGEMASK
+
+ li sp, BMIPS_WARM_RESTART_VEC
+ la k0, plat_wired_tlb_setup
+ jalr k0
+ nop
+
+ /* Restore general purpose registers */
+ la t0, gp_regs
+ lw fp, 44(t0)
+ lw sp, 40(t0)
+ lw gp, 36(t0)
+ lw s7, 32(t0)
+ lw s6, 28(t0)
+ lw s5, 24(t0)
+ lw s4, 20(t0)
+ lw s3, 16(t0)
+ lw s2, 12(t0)
+ lw s1, 8(t0)
+ lw s0, 4(t0)
+ lw ra, 0(t0)
+
+ /* Restore CP0 status */
+ lw t1, 48(t0)
+ mtc0 t1, CP0_STATUS
+
+ /* Return to caller */
+ li v0, 0
+ jr ra
+ nop
+
+END(brcm_pm_do_s3)
{ .compatible = "fsl,ls1021a-dcfg", },
{ .compatible = "fsl,ls1043a-dcfg", },
{ .compatible = "fsl,ls2080a-dcfg", },
+ { .compatible = "fsl,ls1088a-dcfg", },
{}
};
MODULE_DEVICE_TABLE(of, fsl_guts_of_match);
menuconfig FSL_DPAA
bool "Freescale DPAA 1.x support"
- depends on FSL_SOC_BOOKE
+ depends on (FSL_SOC_BOOKE || ARCH_LAYERSCAPE)
select GENERIC_ALLOCATOR
help
The Freescale Data Path Acceleration Architecture (DPAA) is a set of
obj-$(CONFIG_FSL_DPAA) += bman_ccsr.o qman_ccsr.o \
bman_portal.o qman_portal.o \
- bman.o qman.o
+ bman.o qman.o dpaa_sys.o
obj-$(CONFIG_FSL_BMAN_TEST) += bman-test.o
bman-test-y = bman_test.o
/* Portal register assists */
+#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
+/* Cache-inhibited register offsets */
+#define BM_REG_RCR_PI_CINH 0x3000
+#define BM_REG_RCR_CI_CINH 0x3100
+#define BM_REG_RCR_ITR 0x3200
+#define BM_REG_CFG 0x3300
+#define BM_REG_SCN(n) (0x3400 + ((n) << 6))
+#define BM_REG_ISR 0x3e00
+#define BM_REG_IER 0x3e40
+#define BM_REG_ISDR 0x3e80
+#define BM_REG_IIR 0x3ec0
+
+/* Cache-enabled register offsets */
+#define BM_CL_CR 0x0000
+#define BM_CL_RR0 0x0100
+#define BM_CL_RR1 0x0140
+#define BM_CL_RCR 0x1000
+#define BM_CL_RCR_PI_CENA 0x3000
+#define BM_CL_RCR_CI_CENA 0x3100
+
+#else
/* Cache-inhibited register offsets */
#define BM_REG_RCR_PI_CINH 0x0000
#define BM_REG_RCR_CI_CINH 0x0004
#define BM_CL_RCR 0x1000
#define BM_CL_RCR_PI_CENA 0x3000
#define BM_CL_RCR_CI_CENA 0x3100
+#endif
/*
* Portal modes.
};
struct bm_addr {
- void __iomem *ce; /* cache-enabled */
+ void *ce; /* cache-enabled */
+ __be32 *ce_be; /* Same as above but for direct access */
void __iomem *ci; /* cache-inhibited */
};
/* Cache-inhibited register access. */
static inline u32 bm_in(struct bm_portal *p, u32 offset)
{
- return be32_to_cpu(__raw_readl(p->addr.ci + offset));
+ return ioread32be(p->addr.ci + offset);
}
static inline void bm_out(struct bm_portal *p, u32 offset, u32 val)
{
- __raw_writel(cpu_to_be32(val), p->addr.ci + offset);
+ iowrite32be(val, p->addr.ci + offset);
}
/* Cache Enabled Portal Access */
static inline u32 bm_ce_in(struct bm_portal *p, u32 offset)
{
- return be32_to_cpu(__raw_readl(p->addr.ce + offset));
+ return be32_to_cpu(*(p->addr.ce_be + (offset/4)));
}
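The ci/ce split above is the crux of this change: the cache-inhibited region
stays a true MMIO mapping and keeps the ioread32be()/iowrite32be() accessors,
while the cache-enabled region is now ordinary (memremap'd) memory that may be
dereferenced directly as big-endian words. A minimal sketch of the same
pattern, using hypothetical demo_* names rather than the driver's own:

	#include <linux/io.h>

	struct demo_portal {
		void __iomem *ci;	/* cache-inhibited: MMIO accessors only */
		__be32 *ce_be;		/* cache-enabled: plain big-endian loads */
	};

	static u32 demo_ci_read(struct demo_portal *p, u32 offset)
	{
		/* MMIO read; the accessor orders the access vs. the device */
		return ioread32be(p->ci + offset);
	}

	static u32 demo_ce_read(struct demo_portal *p, u32 offset)
	{
		/* Normal memory: a plain load plus a byte swap suffices */
		return be32_to_cpu(p->ce_be[offset / 4]);
	}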
struct bman_portal {
mc->cr = portal->addr.ce + BM_CL_CR;
mc->rr = portal->addr.ce + BM_CL_RR0;
- mc->rridx = (__raw_readb(&mc->cr->_ncw_verb) & BM_MCC_VERB_VBIT) ?
+ mc->rridx = (mc->cr->_ncw_verb & BM_MCC_VERB_VBIT) ?
0 : 1;
mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
#ifdef CONFIG_FSL_DPAA_CHECKING
* its command is submitted and completed. This includes the valid-bit,
* in case you were wondering...
*/
- if (!__raw_readb(&rr->verb)) {
+ if (!rr->verb) {
dpaa_invalidate_touch_ro(rr);
return NULL;
}
* config, everything that follows depends on it and "config" is more
* for (de)reference...
*/
- p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
- p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+ p->addr.ce = c->addr_virt_ce;
+ p->addr.ce_be = c->addr_virt_ce;
+ p->addr.ci = c->addr_virt_ci;
if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
dev_err(c->dev, "RCR initialisation failed\n");
goto fail_rcr;
unsigned long irqflags;
local_irq_save(irqflags);
- set_bits(bits & BM_PIRQ_VISIBLE, &p->irq_sources);
+ p->irq_sources |= bits & BM_PIRQ_VISIBLE;
bm_out(&p->p, BM_REG_IER, p->irq_sources);
local_irq_restore(irqflags);
return 0;
return -ENODEV;
}
+ /*
+	 * If FBPR memory wasn't defined using the qbman compatible string,
+	 * try using the of_reserved_mem_device method.
+ */
+ if (!fbpr_a) {
+ ret = qbman_init_private_mem(dev, 0, &fbpr_a, &fbpr_sz);
+ if (ret) {
+ dev_err(dev, "qbman_init_private_mem() failed 0x%x\n",
+ ret);
+ return -ENODEV;
+ }
+ }
+
+ dev_dbg(dev, "Allocated FBPR 0x%llx 0x%zx\n", fbpr_a, fbpr_sz);
+
bm_set_memory(fbpr_a, fbpr_sz);
err_irq = platform_get_irq(pdev, 0);
struct device_node *node = dev->of_node;
struct bm_portal_config *pcfg;
struct resource *addr_phys[2];
- void __iomem *va;
int irq, cpu;
pcfg = devm_kmalloc(dev, sizeof(*pcfg), GFP_KERNEL);
}
pcfg->irq = irq;
- va = ioremap_prot(addr_phys[0]->start, resource_size(addr_phys[0]), 0);
- if (!va) {
- dev_err(dev, "ioremap::CE failed\n");
+ pcfg->addr_virt_ce = memremap(addr_phys[0]->start,
+ resource_size(addr_phys[0]),
+ QBMAN_MEMREMAP_ATTR);
+ if (!pcfg->addr_virt_ce) {
+ dev_err(dev, "memremap::CE failed\n");
goto err_ioremap1;
}
- pcfg->addr_virt[DPAA_PORTAL_CE] = va;
-
- va = ioremap_prot(addr_phys[1]->start, resource_size(addr_phys[1]),
- _PAGE_GUARDED | _PAGE_NO_CACHE);
- if (!va) {
+ pcfg->addr_virt_ci = ioremap(addr_phys[1]->start,
+ resource_size(addr_phys[1]));
+ if (!pcfg->addr_virt_ci) {
dev_err(dev, "ioremap::CI failed\n");
goto err_ioremap2;
}
- pcfg->addr_virt[DPAA_PORTAL_CI] = va;
-
spin_lock(&bman_lock);
cpu = cpumask_next_zero(-1, &portal_cpus);
if (cpu >= nr_cpu_ids) {
return 0;
err_portal_init:
- iounmap(pcfg->addr_virt[DPAA_PORTAL_CI]);
+ iounmap(pcfg->addr_virt_ci);
err_ioremap2:
- iounmap(pcfg->addr_virt[DPAA_PORTAL_CE]);
+ memunmap(pcfg->addr_virt_ce);
err_ioremap1:
return -ENXIO;
}
extern struct gen_pool *bm_bpalloc;
struct bm_portal_config {
- /*
- * Corenet portal addresses;
- * [0]==cache-enabled, [1]==cache-inhibited.
- */
- void __iomem *addr_virt[2];
+ /* Portal addresses */
+ void *addr_virt_ce;
+ void __iomem *addr_virt_ci;
/* Allow these to be joined in lists */
struct list_head list;
struct device *dev;
--- /dev/null
+/* Copyright 2017 NXP Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of NXP Semiconductor nor the
+ * names of its contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY NXP Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL NXP Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/dma-mapping.h>
+#include "dpaa_sys.h"
+
+/*
+ * Initialize a device's private memory region
+ */
+int qbman_init_private_mem(struct device *dev, int idx, dma_addr_t *addr,
+ size_t *size)
+{
+ int ret;
+ struct device_node *mem_node;
+ u64 size64;
+
+ ret = of_reserved_mem_device_init_by_idx(dev, dev->of_node, idx);
+ if (ret) {
+ dev_err(dev,
+ "of_reserved_mem_device_init_by_idx(%d) failed 0x%x\n",
+ idx, ret);
+ return -ENODEV;
+ }
+ mem_node = of_parse_phandle(dev->of_node, "memory-region", 0);
+ if (mem_node) {
+ ret = of_property_read_u64(mem_node, "size", &size64);
+ if (ret) {
+			dev_err(dev, "of_property_read_u64 fails 0x%x\n",
+ ret);
+ return -ENODEV;
+ }
+ *size = size64;
+ } else {
+ dev_err(dev, "No memory-region found for index %d\n", idx);
+ return -ENODEV;
+ }
+
+ if (!dma_zalloc_coherent(dev, *size, addr, 0)) {
+ dev_err(dev, "DMA Alloc memory failed\n");
+ return -ENODEV;
+ }
+
+ /*
+ * Disassociate the reserved memory area from the device
+ * because a device can only have one DMA memory area. This
+ * should be fine since the memory is allocated and initialized
+	 * and only ever accessed by the QBMan device from now on.
+ */
+ of_reserved_mem_device_release(dev);
+ return 0;
+}
#include <linux/prefetch.h>
#include <linux/genalloc.h>
#include <asm/cacheflush.h>
+#include <linux/io.h>
+#include <linux/delay.h>
/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
#define DPAA_PORTAL_CE 0
#define DPAA_PORTAL_CI 1
-#if (L1_CACHE_BYTES != 32) && (L1_CACHE_BYTES != 64)
-#error "Unsupported Cacheline Size"
-#endif
-
static inline void dpaa_flush(void *p)
{
+ /*
+	 * Only PPC needs to flush the cache currently; on ARM the mapping
+	 * is non-cacheable.
+ */
#ifdef CONFIG_PPC
flush_dcache_range((unsigned long)p, (unsigned long)p+64);
-#elif defined(CONFIG_ARM32)
- __cpuc_flush_dcache_area(p, 64);
-#elif defined(CONFIG_ARM64)
- __flush_dcache_area(p, 64);
#endif
}
/* Offset applied to genalloc pools due to zero being an error return */
#define DPAA_GENALLOC_OFF 0x80000000
+/* Initialize the device's private memory region */
+int qbman_init_private_mem(struct device *dev, int idx, dma_addr_t *addr,
+ size_t *size);
+
+/* memremap() attributes for different platforms */
+#ifdef CONFIG_PPC
+#define QBMAN_MEMREMAP_ATTR MEMREMAP_WB
+#else
+#define QBMAN_MEMREMAP_ATTR MEMREMAP_WC
+#endif
+
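With these attributes the same memremap() call yields a write-back cacheable
CE mapping on PPC (which is why dpaa_flush() above still flushes there) and a
write-combining one elsewhere. A short sketch of the intended call, assuming
res describes a portal's cache-enabled register window:

	void *ce = memremap(res->start, resource_size(res),
			    QBMAN_MEMREMAP_ATTR);
	if (!ce)
		return -ENOMEM;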
#endif /* __DPAA_SYS_H */
/* Portal register assists */
+#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
+/* Cache-inhibited register offsets */
+#define QM_REG_EQCR_PI_CINH 0x3000
+#define QM_REG_EQCR_CI_CINH 0x3040
+#define QM_REG_EQCR_ITR 0x3080
+#define QM_REG_DQRR_PI_CINH 0x3100
+#define QM_REG_DQRR_CI_CINH 0x3140
+#define QM_REG_DQRR_ITR 0x3180
+#define QM_REG_DQRR_DCAP 0x31C0
+#define QM_REG_DQRR_SDQCR 0x3200
+#define QM_REG_DQRR_VDQCR 0x3240
+#define QM_REG_DQRR_PDQCR 0x3280
+#define QM_REG_MR_PI_CINH 0x3300
+#define QM_REG_MR_CI_CINH 0x3340
+#define QM_REG_MR_ITR 0x3380
+#define QM_REG_CFG 0x3500
+#define QM_REG_ISR 0x3600
+#define QM_REG_IER 0x3640
+#define QM_REG_ISDR 0x3680
+#define QM_REG_IIR 0x36C0
+#define QM_REG_ITPR 0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR 0x0000
+#define QM_CL_DQRR 0x1000
+#define QM_CL_MR 0x2000
+#define QM_CL_EQCR_PI_CENA 0x3000
+#define QM_CL_EQCR_CI_CENA 0x3040
+#define QM_CL_DQRR_PI_CENA 0x3100
+#define QM_CL_DQRR_CI_CENA 0x3140
+#define QM_CL_MR_PI_CENA 0x3300
+#define QM_CL_MR_CI_CENA 0x3340
+#define QM_CL_CR 0x3800
+#define QM_CL_RR0 0x3900
+#define QM_CL_RR1 0x3940
+
+#else
/* Cache-inhibited register offsets */
#define QM_REG_EQCR_PI_CINH 0x0000
#define QM_REG_EQCR_CI_CINH 0x0004
#define QM_CL_CR 0x3800
#define QM_CL_RR0 0x3900
#define QM_CL_RR1 0x3940
+#endif
/*
* BTW, the drivers (and h/w programming model) already obtain the required
};
struct qm_addr {
- void __iomem *ce; /* cache-enabled */
+ void *ce; /* cache-enabled */
+ __be32 *ce_be; /* same value as above but for direct access */
void __iomem *ci; /* cache-inhibited */
};
/* Cache-inhibited register access. */
static inline u32 qm_in(struct qm_portal *p, u32 offset)
{
- return be32_to_cpu(__raw_readl(p->addr.ci + offset));
+ return ioread32be(p->addr.ci + offset);
}
static inline void qm_out(struct qm_portal *p, u32 offset, u32 val)
{
- __raw_writel(cpu_to_be32(val), p->addr.ci + offset);
+ iowrite32be(val, p->addr.ci + offset);
}
/* Cache Enabled Portal Access */
static inline u32 qm_ce_in(struct qm_portal *p, u32 offset)
{
- return be32_to_cpu(__raw_readl(p->addr.ce + offset));
+ return be32_to_cpu(*(p->addr.ce_be + (offset/4)));
}
/* --- EQCR API --- */
*/
dpaa_invalidate_touch_ro(res);
#endif
- /*
- * when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+ if ((res->verb & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
if (!dqrr->pi)
dqrr->vbit ^= QM_DQRR_VERB_VBIT;
union qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
DPAA_ASSERT(mr->pmode == qm_mr_pvb);
- /*
- * when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+
+ if ((res->verb & QM_MR_VERB_VBIT) == mr->vbit) {
mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
if (!mr->pi)
mr->vbit ^= QM_MR_VERB_VBIT;
mc->cr = portal->addr.ce + QM_CL_CR;
mc->rr = portal->addr.ce + QM_CL_RR0;
- mc->rridx = (__raw_readb(&mc->cr->_ncw_verb) & QM_MCC_VERB_VBIT)
+ mc->rridx = (mc->cr->_ncw_verb & QM_MCC_VERB_VBIT)
? 0 : 1;
mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
#ifdef CONFIG_FSL_DPAA_CHECKING
* its command is submitted and completed. This includes the valid-bit,
* in case you were wondering...
*/
- if (!__raw_readb(&rr->verb)) {
+ if (!rr->verb) {
dpaa_invalidate_touch_ro(rr);
return NULL;
}
static inline void fq_set(struct qman_fq *fq, u32 mask)
{
- set_bits(mask, &fq->flags);
+ fq->flags |= mask;
}
static inline void fq_clear(struct qman_fq *fq, u32 mask)
{
- clear_bits(mask, &fq->flags);
+ fq->flags &= ~mask;
}
static inline int fq_isset(struct qman_fq *fq, u32 mask)
* entries well before the ring has been fully consumed, so
* we're being *really* paranoid here.
*/
- u64 now, then = jiffies;
-
- do {
- now = jiffies;
- } while ((then + 10000) > now);
+ msleep(1);
msg = qm_mr_current(p);
if (!msg)
return 0;
* config, everything that follows depends on it and "config" is more
* for (de)reference
*/
- p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
- p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+ p->addr.ce = c->addr_virt_ce;
+ p->addr.ce_be = c->addr_virt_ce;
+ p->addr.ci = c->addr_virt_ci;
/*
* If CI-stashing is used, the current defaults use a threshold of 3,
* and stash with high-than-DQRR priority.
unsigned long irqflags;
local_irq_save(irqflags);
- set_bits(bits & QM_PIRQ_VISIBLE, &p->irq_sources);
+ p->irq_sources |= bits & QM_PIRQ_VISIBLE;
qm_out(&p->p, QM_REG_IER, p->irq_sources);
local_irq_restore(irqflags);
}
*/
local_irq_save(irqflags);
bits &= QM_PIRQ_VISIBLE;
- clear_bits(bits, &p->irq_sources);
+ p->irq_sources &= ~bits;
qm_out(&p->p, QM_REG_IER, p->irq_sources);
ier = qm_in(&p->p, QM_REG_IER);
/*
}
/*
- * Ideally we would use the DMA API to turn rmem->base into a DMA address
- * (especially if iommu translations ever get involved). Unfortunately, the
- * DMA API currently does not allow mapping anything that is not backed with
- * a struct page.
+ * QMan needs two global memory areas initialized at boot time:
+ * 1) FQD: Frame Queue Descriptors used to manage frame queues
+ * 2) PFDR: Packed Frame Queue Descriptor Records used to store frames
+ * Both areas are reserved using the device tree reserved memory framework
+ * and the addresses and sizes are initialized when the QMan device is probed.
*/
static dma_addr_t fqd_a, pfdr_a;
static size_t fqd_sz, pfdr_sz;
+#ifdef CONFIG_PPC
+/*
+ * Support for PPC Device Tree backward compatibility when the compatible
+ * string is set to fsl-qman-fqd and fsl-qman-pfdr. The region is mapped
+ * cacheable, zeroed and then flushed so that QMan-originated transactions
+ * for it can be marked non-coherent.
+ */
+static int zero_priv_mem(phys_addr_t addr, size_t sz)
+{
+ /* map as cacheable, non-guarded */
+ void __iomem *tmpp = ioremap_prot(addr, sz, 0);
+
+ if (!tmpp)
+ return -ENOMEM;
+
+ memset_io(tmpp, 0, sz);
+ flush_dcache_range((unsigned long)tmpp,
+ (unsigned long)tmpp + sz);
+ iounmap(tmpp);
+
+ return 0;
+}
+
static int qman_fqd(struct reserved_mem *rmem)
{
fqd_a = rmem->base;
fqd_sz = rmem->size;
WARN_ON(!(fqd_a && fqd_sz));
-
return 0;
}
RESERVEDMEM_OF_DECLARE(qman_fqd, "fsl,qman-fqd", qman_fqd);
}
RESERVEDMEM_OF_DECLARE(qman_pfdr, "fsl,qman-pfdr", qman_pfdr);
+#endif
+
static unsigned int qm_get_fqid_maxcnt(void)
{
return fqd_sz / 64;
}
-/*
- * Flush this memory range from data cache so that QMAN originated
- * transactions for this memory region could be marked non-coherent.
- */
-static int zero_priv_mem(struct device *dev, struct device_node *node,
- phys_addr_t addr, size_t sz)
-{
- /* map as cacheable, non-guarded */
- void __iomem *tmpp = ioremap_prot(addr, sz, 0);
-
- if (!tmpp)
- return -ENOMEM;
-
- memset_io(tmpp, 0, sz);
- flush_dcache_range((unsigned long)tmpp,
- (unsigned long)tmpp + sz);
- iounmap(tmpp);
-
- return 0;
-}
-
static void log_edata_bits(struct device *dev, u32 bit_count)
{
u32 i, j, mask = 0xffffffff;
qman_ip_rev = QMAN_REV30;
else if (major == 3 && minor == 1)
qman_ip_rev = QMAN_REV31;
+ else if (major == 3 && minor == 2)
+ qman_ip_rev = QMAN_REV32;
else {
dev_err(dev, "Unknown QMan version\n");
return -ENODEV;
qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
}
- ret = zero_priv_mem(dev, node, fqd_a, fqd_sz);
- WARN_ON(ret);
- if (ret)
- return -ENODEV;
+ if (fqd_a) {
+#ifdef CONFIG_PPC
+ /*
+		 * For PPC backward DT compatibility,
+		 * FQD memory MUST be zeroed by software.
+ */
+ zero_priv_mem(fqd_a, fqd_sz);
+#else
+		WARN(1, "Unexpected architecture using non-shared-dma-mem reservations");
+#endif
+ } else {
+ /*
+		 * The order of the memory regions is assumed to be FQD
+		 * followed by PFDR; to ensure allocations come from the
+		 * correct regions, the driver initializes and then allocates
+		 * each piece in order.
+ */
+ ret = qbman_init_private_mem(dev, 0, &fqd_a, &fqd_sz);
+ if (ret) {
+ dev_err(dev, "qbman_init_private_mem() for FQD failed 0x%x\n",
+ ret);
+ return -ENODEV;
+ }
+ }
+ dev_dbg(dev, "Allocated FQD 0x%llx 0x%zx\n", fqd_a, fqd_sz);
+
+ if (!pfdr_a) {
+ /* Setup PFDR memory */
+ ret = qbman_init_private_mem(dev, 1, &pfdr_a, &pfdr_sz);
+ if (ret) {
+ dev_err(dev, "qbman_init_private_mem() for PFDR failed 0x%x\n",
+ ret);
+ return -ENODEV;
+ }
+ }
+ dev_dbg(dev, "Allocated PFDR 0x%llx 0x%zx\n", pfdr_a, pfdr_sz);
ret = qman_init_ccsr(dev);
if (ret) {
struct device_node *node = dev->of_node;
struct qm_portal_config *pcfg;
struct resource *addr_phys[2];
- void __iomem *va;
int irq, cpu, err;
u32 val;
}
pcfg->irq = irq;
- va = ioremap_prot(addr_phys[0]->start, resource_size(addr_phys[0]), 0);
- if (!va) {
- dev_err(dev, "ioremap::CE failed\n");
+ pcfg->addr_virt_ce = memremap(addr_phys[0]->start,
+ resource_size(addr_phys[0]),
+ QBMAN_MEMREMAP_ATTR);
+ if (!pcfg->addr_virt_ce) {
+ dev_err(dev, "memremap::CE failed\n");
goto err_ioremap1;
}
- pcfg->addr_virt[DPAA_PORTAL_CE] = va;
-
- va = ioremap_prot(addr_phys[1]->start, resource_size(addr_phys[1]),
- _PAGE_GUARDED | _PAGE_NO_CACHE);
- if (!va) {
+ pcfg->addr_virt_ci = ioremap(addr_phys[1]->start,
+ resource_size(addr_phys[1]));
+ if (!pcfg->addr_virt_ci) {
dev_err(dev, "ioremap::CI failed\n");
goto err_ioremap2;
}
- pcfg->addr_virt[DPAA_PORTAL_CI] = va;
-
pcfg->pools = qm_get_pools_sdqcr();
spin_lock(&qman_lock);
return 0;
err_portal_init:
- iounmap(pcfg->addr_virt[DPAA_PORTAL_CI]);
+ iounmap(pcfg->addr_virt_ci);
err_ioremap2:
- iounmap(pcfg->addr_virt[DPAA_PORTAL_CE]);
+ memunmap(pcfg->addr_virt_ce);
err_ioremap1:
return -ENXIO;
}
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
#include "dpaa_sys.h"
#include <soc/fsl/qman.h>
void qman_init_cgr_all(void);
struct qm_portal_config {
- /*
- * Corenet portal addresses;
- * [0]==cache-enabled, [1]==cache-inhibited.
- */
- void __iomem *addr_virt[2];
+ /* Portal addresses */
+ void *addr_virt_ce;
+ void __iomem *addr_virt_ci;
struct device *dev;
struct iommu_domain *iommu_domain;
/* Allow these to be joined in lists */
#define QMAN_REV20 0x0200
#define QMAN_REV30 0x0300
#define QMAN_REV31 0x0301
+#define QMAN_REV32 0x0302
extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
#define QM_FQID_RANGE_START 1 /* FQID 0 reserved for internal use */
#include "qman_priv.h"
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
int qman_test_stash(void);
int qman_test_api(void);
#
# MediaTek SoC drivers
#
+menu "MediaTek SoC drivers"
+ depends on ARCH_MEDIATEK || COMPILE_TEST
+
config MTK_INFRACFG
bool "MediaTek INFRACFG Support"
- depends on ARCH_MEDIATEK || COMPILE_TEST
select REGMAP
help
Say yes here to add support for the MediaTek INFRACFG controller. The
config MTK_PMIC_WRAP
tristate "MediaTek PMIC Wrapper Support"
- depends on ARCH_MEDIATEK
depends on RESET_CONTROLLER
select REGMAP
help
config MTK_SCPSYS
bool "MediaTek SCPSYS Support"
- depends on ARCH_MEDIATEK || COMPILE_TEST
default ARCH_MEDIATEK
select REGMAP
select MTK_INFRACFG
help
Say yes here to add support for the MediaTek SCPSYS power domain
driver.
+
+endmenu
PWRAP_WDT_SRC_EN_HARB_STAUPD_DLE | \
PWRAP_WDT_SRC_EN_HARB_STAUPD_ALE)
+/* Group of bits used to show the slave's capabilities */
+#define PWRAP_SLV_CAP_SPI BIT(0)
+#define PWRAP_SLV_CAP_DUALIO BIT(1)
+#define PWRAP_SLV_CAP_SECURITY BIT(2)
+#define HAS_CAP(_c, _x) (((_c) & (_x)) == (_x))
+
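Note that HAS_CAP() compares the masked value against the full mask rather
than against zero, so it tests that all requested bits are present at once.
A small worked example:

	u32 caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO;

	HAS_CAP(caps, PWRAP_SLV_CAP_SPI);			/* true */
	HAS_CAP(caps, PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_SECURITY); /* false */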
/* defines for slave device wrapper registers */
enum dew_regs {
PWRAP_DEW_BASE,
PWRAP_ADC_RDATA_ADDR1,
PWRAP_ADC_RDATA_ADDR2,
+ /* MT7622 only regs */
+ PWRAP_EINT_STA0_ADR,
+ PWRAP_EINT_STA1_ADR,
+ PWRAP_STA,
+ PWRAP_CLR,
+ PWRAP_DVFS_ADR8,
+ PWRAP_DVFS_WDATA8,
+ PWRAP_DVFS_ADR9,
+ PWRAP_DVFS_WDATA9,
+ PWRAP_DVFS_ADR10,
+ PWRAP_DVFS_WDATA10,
+ PWRAP_DVFS_ADR11,
+ PWRAP_DVFS_WDATA11,
+ PWRAP_DVFS_ADR12,
+ PWRAP_DVFS_WDATA12,
+ PWRAP_DVFS_ADR13,
+ PWRAP_DVFS_WDATA13,
+ PWRAP_DVFS_ADR14,
+ PWRAP_DVFS_WDATA14,
+ PWRAP_DVFS_ADR15,
+ PWRAP_DVFS_WDATA15,
+ PWRAP_EXT_CK,
+ PWRAP_ADC_RDATA_ADDR,
+ PWRAP_GPS_STA,
+ PWRAP_SW_RST,
+ PWRAP_DVFS_STEP_CTRL0,
+ PWRAP_DVFS_STEP_CTRL1,
+ PWRAP_DVFS_STEP_CTRL2,
+ PWRAP_SPI2_CTRL,
+
/* MT8135 only regs */
PWRAP_CSHEXT,
PWRAP_EVENT_IN_EN,
[PWRAP_ADC_RDATA_ADDR2] = 0x154,
};
+static int mt7622_regs[] = {
+ [PWRAP_MUX_SEL] = 0x0,
+ [PWRAP_WRAP_EN] = 0x4,
+ [PWRAP_DIO_EN] = 0x8,
+ [PWRAP_SIDLY] = 0xC,
+ [PWRAP_RDDMY] = 0x10,
+ [PWRAP_SI_CK_CON] = 0x14,
+ [PWRAP_CSHEXT_WRITE] = 0x18,
+ [PWRAP_CSHEXT_READ] = 0x1C,
+ [PWRAP_CSLEXT_START] = 0x20,
+ [PWRAP_CSLEXT_END] = 0x24,
+ [PWRAP_STAUPD_PRD] = 0x28,
+ [PWRAP_STAUPD_GRPEN] = 0x2C,
+ [PWRAP_EINT_STA0_ADR] = 0x30,
+ [PWRAP_EINT_STA1_ADR] = 0x34,
+ [PWRAP_STA] = 0x38,
+ [PWRAP_CLR] = 0x3C,
+ [PWRAP_STAUPD_MAN_TRIG] = 0x40,
+ [PWRAP_STAUPD_STA] = 0x44,
+ [PWRAP_WRAP_STA] = 0x48,
+ [PWRAP_HARB_INIT] = 0x4C,
+ [PWRAP_HARB_HPRIO] = 0x50,
+ [PWRAP_HIPRIO_ARB_EN] = 0x54,
+ [PWRAP_HARB_STA0] = 0x58,
+ [PWRAP_HARB_STA1] = 0x5C,
+ [PWRAP_MAN_EN] = 0x60,
+ [PWRAP_MAN_CMD] = 0x64,
+ [PWRAP_MAN_RDATA] = 0x68,
+ [PWRAP_MAN_VLDCLR] = 0x6C,
+ [PWRAP_WACS0_EN] = 0x70,
+ [PWRAP_INIT_DONE0] = 0x74,
+ [PWRAP_WACS0_CMD] = 0x78,
+ [PWRAP_WACS0_RDATA] = 0x7C,
+ [PWRAP_WACS0_VLDCLR] = 0x80,
+ [PWRAP_WACS1_EN] = 0x84,
+ [PWRAP_INIT_DONE1] = 0x88,
+ [PWRAP_WACS1_CMD] = 0x8C,
+ [PWRAP_WACS1_RDATA] = 0x90,
+ [PWRAP_WACS1_VLDCLR] = 0x94,
+ [PWRAP_WACS2_EN] = 0x98,
+ [PWRAP_INIT_DONE2] = 0x9C,
+ [PWRAP_WACS2_CMD] = 0xA0,
+ [PWRAP_WACS2_RDATA] = 0xA4,
+ [PWRAP_WACS2_VLDCLR] = 0xA8,
+ [PWRAP_INT_EN] = 0xAC,
+ [PWRAP_INT_FLG_RAW] = 0xB0,
+ [PWRAP_INT_FLG] = 0xB4,
+ [PWRAP_INT_CLR] = 0xB8,
+ [PWRAP_SIG_ADR] = 0xBC,
+ [PWRAP_SIG_MODE] = 0xC0,
+ [PWRAP_SIG_VALUE] = 0xC4,
+ [PWRAP_SIG_ERRVAL] = 0xC8,
+ [PWRAP_CRC_EN] = 0xCC,
+ [PWRAP_TIMER_EN] = 0xD0,
+ [PWRAP_TIMER_STA] = 0xD4,
+ [PWRAP_WDT_UNIT] = 0xD8,
+ [PWRAP_WDT_SRC_EN] = 0xDC,
+ [PWRAP_WDT_FLG] = 0xE0,
+ [PWRAP_DEBUG_INT_SEL] = 0xE4,
+ [PWRAP_DVFS_ADR0] = 0xE8,
+ [PWRAP_DVFS_WDATA0] = 0xEC,
+ [PWRAP_DVFS_ADR1] = 0xF0,
+ [PWRAP_DVFS_WDATA1] = 0xF4,
+ [PWRAP_DVFS_ADR2] = 0xF8,
+ [PWRAP_DVFS_WDATA2] = 0xFC,
+ [PWRAP_DVFS_ADR3] = 0x100,
+ [PWRAP_DVFS_WDATA3] = 0x104,
+ [PWRAP_DVFS_ADR4] = 0x108,
+ [PWRAP_DVFS_WDATA4] = 0x10C,
+ [PWRAP_DVFS_ADR5] = 0x110,
+ [PWRAP_DVFS_WDATA5] = 0x114,
+ [PWRAP_DVFS_ADR6] = 0x118,
+ [PWRAP_DVFS_WDATA6] = 0x11C,
+ [PWRAP_DVFS_ADR7] = 0x120,
+ [PWRAP_DVFS_WDATA7] = 0x124,
+ [PWRAP_DVFS_ADR8] = 0x128,
+ [PWRAP_DVFS_WDATA8] = 0x12C,
+ [PWRAP_DVFS_ADR9] = 0x130,
+ [PWRAP_DVFS_WDATA9] = 0x134,
+ [PWRAP_DVFS_ADR10] = 0x138,
+ [PWRAP_DVFS_WDATA10] = 0x13C,
+ [PWRAP_DVFS_ADR11] = 0x140,
+ [PWRAP_DVFS_WDATA11] = 0x144,
+ [PWRAP_DVFS_ADR12] = 0x148,
+ [PWRAP_DVFS_WDATA12] = 0x14C,
+ [PWRAP_DVFS_ADR13] = 0x150,
+ [PWRAP_DVFS_WDATA13] = 0x154,
+ [PWRAP_DVFS_ADR14] = 0x158,
+ [PWRAP_DVFS_WDATA14] = 0x15C,
+ [PWRAP_DVFS_ADR15] = 0x160,
+ [PWRAP_DVFS_WDATA15] = 0x164,
+ [PWRAP_SPMINF_STA] = 0x168,
+ [PWRAP_CIPHER_KEY_SEL] = 0x16C,
+ [PWRAP_CIPHER_IV_SEL] = 0x170,
+ [PWRAP_CIPHER_EN] = 0x174,
+ [PWRAP_CIPHER_RDY] = 0x178,
+ [PWRAP_CIPHER_MODE] = 0x17C,
+ [PWRAP_CIPHER_SWRST] = 0x180,
+ [PWRAP_DCM_EN] = 0x184,
+ [PWRAP_DCM_DBC_PRD] = 0x188,
+ [PWRAP_EXT_CK] = 0x18C,
+ [PWRAP_ADC_CMD_ADDR] = 0x190,
+ [PWRAP_PWRAP_ADC_CMD] = 0x194,
+ [PWRAP_ADC_RDATA_ADDR] = 0x198,
+ [PWRAP_GPS_STA] = 0x19C,
+ [PWRAP_SW_RST] = 0x1A0,
+ [PWRAP_DVFS_STEP_CTRL0] = 0x238,
+ [PWRAP_DVFS_STEP_CTRL1] = 0x23C,
+ [PWRAP_DVFS_STEP_CTRL2] = 0x240,
+ [PWRAP_SPI2_CTRL] = 0x244,
+};
+
static int mt8173_regs[] = {
[PWRAP_MUX_SEL] = 0x0,
[PWRAP_WRAP_EN] = 0x4,
enum pmic_type {
PMIC_MT6323,
+ PMIC_MT6380,
PMIC_MT6397,
};
enum pwrap_type {
PWRAP_MT2701,
+ PWRAP_MT7622,
PWRAP_MT8135,
PWRAP_MT8173,
};
+struct pmic_wrapper;
struct pwrap_slv_type {
const u32 *dew_regs;
enum pmic_type type;
+ const struct regmap_config *regmap;
+	/* Capability flags for the target slave */
+ u32 caps;
+ /*
+	 * pwrap operations are highly associated with the PMIC types, so
+	 * these function pointers add the flexibility to select the right
+	 * implementation for the type detected through the device tree.
+ */
+ int (*pwrap_read)(struct pmic_wrapper *wrp, u32 adr, u32 *rdata);
+ int (*pwrap_write)(struct pmic_wrapper *wrp, u32 adr, u32 wdata);
};
struct pmic_wrapper {
u32 int_en_all;
u32 spi_w;
u32 wdt_src;
- int has_bridge:1;
+ unsigned int has_bridge:1;
int (*init_reg_clock)(struct pmic_wrapper *wrp);
int (*init_soc_specific)(struct pmic_wrapper *wrp);
};
} while (1);
}
-static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
+static int pwrap_read16(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
{
int ret;
return ret;
}
- pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata,
- PWRAP_WACS2_CMD);
+ pwrap_writel(wrp, (adr >> 1) << 16, PWRAP_WACS2_CMD);
+
+ ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
+ if (ret)
+ return ret;
+
+ *rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA));
+
+ pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
+
+ return 0;
+}
+
+static int pwrap_read32(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
+{
+ int ret, msb;
+
+ *rdata = 0;
+ for (msb = 0; msb < 2; msb++) {
+ ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
+ if (ret) {
+ pwrap_leave_fsm_vldclr(wrp);
+ return ret;
+ }
+
+ pwrap_writel(wrp, ((msb << 30) | (adr << 16)),
+ PWRAP_WACS2_CMD);
+
+ ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
+ if (ret)
+ return ret;
+
+ *rdata += (PWRAP_GET_WACS_RDATA(pwrap_readl(wrp,
+ PWRAP_WACS2_RDATA)) << (16 * msb));
+
+ pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
+ }
return 0;
}
static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
+{
+ return wrp->slave->pwrap_read(wrp, adr, rdata);
+}
+
+static int pwrap_write16(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
{
int ret;
return ret;
}
- pwrap_writel(wrp, (adr >> 1) << 16, PWRAP_WACS2_CMD);
+ pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata,
+ PWRAP_WACS2_CMD);
- ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
- if (ret)
- return ret;
+ return 0;
+}
- *rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA));
+static int pwrap_write32(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
+{
+ int ret, msb, rdata;
- pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
+ for (msb = 0; msb < 2; msb++) {
+ ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
+ if (ret) {
+ pwrap_leave_fsm_vldclr(wrp);
+ return ret;
+ }
+
+ pwrap_writel(wrp, (1 << 31) | (msb << 30) | (adr << 16) |
+ ((wdata >> (msb * 16)) & 0xffff),
+ PWRAP_WACS2_CMD);
+
+ /*
+		 * The hardware requires a pwrap_read operation to
+		 * synchronize two successive 16-bit pwrap_writel operations
+		 * that together compose one 32-bit bus write. Without it,
+		 * the write of the lower 16 bits fails.
+ */
+ if (!msb)
+ pwrap_read(wrp, adr, &rdata);
+ }
return 0;
}
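As the paths above show, every access travels through a single WACS2 command
word: bit 31 is the write flag, bit 30 selects the 16-bit half for the 32-bit
slaves, the address sits in the bits below that (pre-shifted by one for the
16-bit slaves), and the low 16 bits carry write data. A hedged helper that
composes the word the way the 32-bit paths do (demo_* naming is hypothetical,
not part of the driver):

	static u32 demo_wacs2_cmd(u32 write, u32 msb, u32 adr, u32 wdata)
	{
		return (write << 31) | (msb << 30) | (adr << 16) |
		       (wdata & 0xffff);
	}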
+static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
+{
+ return wrp->slave->pwrap_write(wrp, adr, wdata);
+}
+
static int pwrap_regmap_read(void *context, u32 adr, u32 *rdata)
{
return pwrap_read(context, adr, rdata);
return 0;
}
-static int pwrap_mt8135_init_reg_clock(struct pmic_wrapper *wrp)
+static int pwrap_init_dual_io(struct pmic_wrapper *wrp)
{
- pwrap_writel(wrp, 0x4, PWRAP_CSHEXT);
- pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE);
- pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ);
- pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START);
- pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END);
+ int ret;
+ u32 rdata;
+
+ /* Enable dual IO mode */
+ pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1);
+
+ /* Check IDLE & INIT_DONE in advance */
+ ret = pwrap_wait_for_state(wrp,
+ pwrap_is_fsm_idle_and_sync_idle);
+ if (ret) {
+ dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret);
+ return ret;
+ }
+
+ pwrap_writel(wrp, 1, PWRAP_DIO_EN);
+
+ /* Read Test */
+ pwrap_read(wrp,
+ wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], &rdata);
+ if (rdata != PWRAP_DEW_READ_TEST_VAL) {
+ dev_err(wrp->dev,
+ "Read failed on DIO mode: 0x%04x!=0x%04x\n",
+ PWRAP_DEW_READ_TEST_VAL, rdata);
+ return -EFAULT;
+ }
return 0;
}
-static int pwrap_mt8173_init_reg_clock(struct pmic_wrapper *wrp)
+/*
+ * pwrap_init_chip_select_ext is used to configure CS extension time for each
+ * phase during data transactions on the pwrap bus.
+ */
+static void pwrap_init_chip_select_ext(struct pmic_wrapper *wrp, u8 hext_write,
+ u8 hext_read, u8 lext_start,
+ u8 lext_end)
{
- pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE);
- pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ);
- pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START);
- pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END);
+ /*
+	 * After finishing a write or read transaction, extend the CS high
+	 * time to be at least xT of BUS CLK, as hext_write and hext_read
+	 * specify respectively.
+ */
+ pwrap_writel(wrp, hext_write, PWRAP_CSHEXT_WRITE);
+ pwrap_writel(wrp, hext_read, PWRAP_CSHEXT_READ);
+
+ /*
+	 * Extend the CS low time after the CSL and before the CSH command
+	 * to be at least xT of BUS CLK, as lext_start and lext_end
+	 * specify respectively.
+ */
+ pwrap_writel(wrp, lext_start, PWRAP_CSLEXT_START);
+ pwrap_writel(wrp, lext_end, PWRAP_CSLEXT_END);
+}
+
+static int pwrap_common_init_reg_clock(struct pmic_wrapper *wrp)
+{
+ switch (wrp->master->type) {
+ case PWRAP_MT8173:
+ pwrap_init_chip_select_ext(wrp, 0, 4, 2, 2);
+ break;
+ case PWRAP_MT8135:
+ pwrap_writel(wrp, 0x4, PWRAP_CSHEXT);
+ pwrap_init_chip_select_ext(wrp, 0, 4, 0, 0);
+ break;
+ default:
+ break;
+ }
return 0;
}
switch (wrp->slave->type) {
case PMIC_MT6397:
pwrap_writel(wrp, 0xc, PWRAP_RDDMY);
- pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_WRITE);
- pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_READ);
- pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START);
- pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END);
+ pwrap_init_chip_select_ext(wrp, 4, 0, 2, 2);
break;
case PMIC_MT6323:
pwrap_writel(wrp, 0x8, PWRAP_RDDMY);
pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_RDDMY_NO],
0x8);
- pwrap_writel(wrp, 0x5, PWRAP_CSHEXT_WRITE);
- pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_READ);
- pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START);
- pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END);
+ pwrap_init_chip_select_ext(wrp, 5, 0, 2, 2);
+ break;
+ default:
break;
}
case PWRAP_MT8173:
pwrap_writel(wrp, 1, PWRAP_CIPHER_EN);
break;
+ case PWRAP_MT7622:
+ pwrap_writel(wrp, 0, PWRAP_CIPHER_EN);
+ break;
}
/* Config cipher mode @PMIC */
pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_EN],
0x1);
break;
+ default:
+ break;
}
/* wait for cipher data ready@AP */
/* wait for cipher data ready@PMIC */
ret = pwrap_wait_for_state(wrp, pwrap_is_pmic_cipher_ready);
if (ret) {
- dev_err(wrp->dev, "timeout waiting for cipher data ready@PMIC\n");
+ dev_err(wrp->dev,
+ "timeout waiting for cipher data ready@PMIC\n");
return ret;
}
return 0;
}
+static int pwrap_init_security(struct pmic_wrapper *wrp)
+{
+ int ret;
+
+ /* Enable encryption */
+ ret = pwrap_init_cipher(wrp);
+ if (ret)
+ return ret;
+
+ /* Signature checking - using CRC */
+ if (pwrap_write(wrp,
+ wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1))
+ return -EFAULT;
+
+ pwrap_writel(wrp, 0x1, PWRAP_CRC_EN);
+ pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE);
+ pwrap_writel(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL],
+ PWRAP_SIG_ADR);
+ pwrap_writel(wrp,
+ wrp->master->arb_en_all, PWRAP_HIPRIO_ARB_EN);
+
+ return 0;
+}
+
static int pwrap_mt8135_init_soc_specific(struct pmic_wrapper *wrp)
{
/* enable pwrap events and pwrap bridge in AP side */
return 0;
}
+static int pwrap_mt7622_init_soc_specific(struct pmic_wrapper *wrp)
+{
+ pwrap_writel(wrp, 0, PWRAP_STAUPD_PRD);
+ /* enable 2wire SPI master */
+ pwrap_writel(wrp, 0x8000000, PWRAP_SPI2_CTRL);
+
+ return 0;
+}
+
static int pwrap_init(struct pmic_wrapper *wrp)
{
int ret;
- u32 rdata;
reset_control_reset(wrp->rstc);
if (wrp->rstc_bridge)
pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD);
}
- /* Reset SPI slave */
- ret = pwrap_reset_spislave(wrp);
- if (ret)
- return ret;
+ if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SPI)) {
+ /* Reset SPI slave */
+ ret = pwrap_reset_spislave(wrp);
+ if (ret)
+ return ret;
+ }
pwrap_writel(wrp, 1, PWRAP_WRAP_EN);
if (ret)
return ret;
- /* Setup serial input delay */
- ret = pwrap_init_sidly(wrp);
- if (ret)
- return ret;
-
- /* Enable dual IO mode */
- pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1);
-
- /* Check IDLE & INIT_DONE in advance */
- ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle);
- if (ret) {
- dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret);
- return ret;
+ if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SPI)) {
+ /* Setup serial input delay */
+ ret = pwrap_init_sidly(wrp);
+ if (ret)
+ return ret;
}
- pwrap_writel(wrp, 1, PWRAP_DIO_EN);
-
- /* Read Test */
- pwrap_read(wrp, wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], &rdata);
- if (rdata != PWRAP_DEW_READ_TEST_VAL) {
- dev_err(wrp->dev, "Read test failed after switch to DIO mode: 0x%04x != 0x%04x\n",
- PWRAP_DEW_READ_TEST_VAL, rdata);
- return -EFAULT;
+ if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_DUALIO)) {
+ /* Enable dual I/O mode */
+ ret = pwrap_init_dual_io(wrp);
+ if (ret)
+ return ret;
}
- /* Enable encryption */
- ret = pwrap_init_cipher(wrp);
- if (ret)
- return ret;
-
- /* Signature checking - using CRC */
- if (pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1))
- return -EFAULT;
-
- pwrap_writel(wrp, 0x1, PWRAP_CRC_EN);
- pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE);
- pwrap_writel(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL],
- PWRAP_SIG_ADR);
- pwrap_writel(wrp, wrp->master->arb_en_all, PWRAP_HIPRIO_ARB_EN);
+ if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SECURITY)) {
+ /* Enable security on bus */
+ ret = pwrap_init_security(wrp);
+ if (ret)
+ return ret;
+ }
if (wrp->master->type == PWRAP_MT8135)
pwrap_writel(wrp, 0x7, PWRAP_RRARB_EN);
return IRQ_HANDLED;
}
-static const struct regmap_config pwrap_regmap_config = {
+static const struct regmap_config pwrap_regmap_config16 = {
.reg_bits = 16,
.val_bits = 16,
.reg_stride = 2,
.max_register = 0xffff,
};
+static const struct regmap_config pwrap_regmap_config32 = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .reg_read = pwrap_regmap_read,
+ .reg_write = pwrap_regmap_write,
+ .max_register = 0xffff,
+};
+
static const struct pwrap_slv_type pmic_mt6323 = {
.dew_regs = mt6323_regs,
.type = PMIC_MT6323,
+ .regmap = &pwrap_regmap_config16,
+ .caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO |
+ PWRAP_SLV_CAP_SECURITY,
+ .pwrap_read = pwrap_read16,
+ .pwrap_write = pwrap_write16,
+};
+
+static const struct pwrap_slv_type pmic_mt6380 = {
+ .dew_regs = NULL,
+ .type = PMIC_MT6380,
+ .regmap = &pwrap_regmap_config32,
+ .caps = 0,
+ .pwrap_read = pwrap_read32,
+ .pwrap_write = pwrap_write32,
};
static const struct pwrap_slv_type pmic_mt6397 = {
.dew_regs = mt6397_regs,
.type = PMIC_MT6397,
+ .regmap = &pwrap_regmap_config16,
+ .caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO |
+ PWRAP_SLV_CAP_SECURITY,
+ .pwrap_read = pwrap_read16,
+ .pwrap_write = pwrap_write16,
};
static const struct of_device_id of_slave_match_tbl[] = {
{
.compatible = "mediatek,mt6323",
.data = &pmic_mt6323,
+ }, {
+ /* The MT6380 PMIC only implements a regulator, so we bind it
+		 * directly instead of using an MFD.
+ */
+ .compatible = "mediatek,mt6380-regulator",
+ .data = &pmic_mt6380,
}, {
.compatible = "mediatek,mt6397",
.data = &pmic_mt6397,
.init_soc_specific = pwrap_mt2701_init_soc_specific,
};
+static const struct pmic_wrapper_type pwrap_mt7622 = {
+ .regs = mt7622_regs,
+ .type = PWRAP_MT7622,
+ .arb_en_all = 0xff,
+ .int_en_all = ~(u32)BIT(31),
+ .spi_w = PWRAP_MAN_CMD_SPI_WRITE,
+ .wdt_src = PWRAP_WDT_SRC_MASK_ALL,
+ .has_bridge = 0,
+ .init_reg_clock = pwrap_common_init_reg_clock,
+ .init_soc_specific = pwrap_mt7622_init_soc_specific,
+};
+
static const struct pmic_wrapper_type pwrap_mt8135 = {
.regs = mt8135_regs,
.type = PWRAP_MT8135,
.spi_w = PWRAP_MAN_CMD_SPI_WRITE,
.wdt_src = PWRAP_WDT_SRC_MASK_ALL,
.has_bridge = 1,
- .init_reg_clock = pwrap_mt8135_init_reg_clock,
+ .init_reg_clock = pwrap_common_init_reg_clock,
.init_soc_specific = pwrap_mt8135_init_soc_specific,
};
.spi_w = PWRAP_MAN_CMD_SPI_WRITE,
.wdt_src = PWRAP_WDT_SRC_MASK_NO_STAUPD,
.has_bridge = 0,
- .init_reg_clock = pwrap_mt8173_init_reg_clock,
+ .init_reg_clock = pwrap_common_init_reg_clock,
.init_soc_specific = pwrap_mt8173_init_soc_specific,
};
{
.compatible = "mediatek,mt2701-pwrap",
.data = &pwrap_mt2701,
+ }, {
+ .compatible = "mediatek,mt7622-pwrap",
+ .data = &pwrap_mt7622,
}, {
.compatible = "mediatek,mt8135-pwrap",
.data = &pwrap_mt8135,
if (IS_ERR(wrp->bridge_base))
return PTR_ERR(wrp->bridge_base);
- wrp->rstc_bridge = devm_reset_control_get(wrp->dev, "pwrap-bridge");
+ wrp->rstc_bridge = devm_reset_control_get(wrp->dev,
+ "pwrap-bridge");
if (IS_ERR(wrp->rstc_bridge)) {
ret = PTR_ERR(wrp->rstc_bridge);
- dev_dbg(wrp->dev, "cannot get pwrap-bridge reset: %d\n", ret);
+ dev_dbg(wrp->dev,
+ "cannot get pwrap-bridge reset: %d\n", ret);
return ret;
}
}
wrp->clk_spi = devm_clk_get(wrp->dev, "spi");
if (IS_ERR(wrp->clk_spi)) {
- dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_spi));
+ dev_dbg(wrp->dev, "failed to get clock: %ld\n",
+ PTR_ERR(wrp->clk_spi));
return PTR_ERR(wrp->clk_spi);
}
wrp->clk_wrap = devm_clk_get(wrp->dev, "wrap");
if (IS_ERR(wrp->clk_wrap)) {
- dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_wrap));
+ dev_dbg(wrp->dev, "failed to get clock: %ld\n",
+ PTR_ERR(wrp->clk_wrap));
return PTR_ERR(wrp->clk_wrap);
}
pwrap_writel(wrp, wrp->master->int_en_all, PWRAP_INT_EN);
irq = platform_get_irq(pdev, 0);
- ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt, IRQF_TRIGGER_HIGH,
- "mt-pmic-pwrap", wrp);
+ ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt,
+ IRQF_TRIGGER_HIGH,
+ "mt-pmic-pwrap", wrp);
if (ret)
goto err_out2;
- wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, &pwrap_regmap_config);
+ wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, wrp->slave->regmap);
if (IS_ERR(wrp->regmap)) {
ret = PTR_ERR(wrp->regmap);
goto err_out2;
 	  modes. It interfaces with various system drivers to put the cores in
low power modes.
+config QCOM_RMTFS_MEM
+ tristate "Qualcomm Remote Filesystem memory driver"
+ depends on ARCH_QCOM
+ help
+ The Qualcomm remote filesystem memory driver is used for allocating
+ and exposing regions of shared memory with remote processors for the
+ purpose of exchanging sector-data between the remote filesystem
+ service and its clients.
+
+ Say y here if you intend to boot the modem remoteproc.
+
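For a sense of how this surfaces to userspace: the driver below registers one
character device per region (named qcom_rmtfs_mem<client-id>) plus read-only
sysfs attributes for its physical address, size and client id. A minimal,
hypothetical consumer, assuming client id 1 and a udev-created device node:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[512];
		int fd = open("/dev/qcom_rmtfs_mem1", O_RDWR);

		if (fd < 0)
			return 1;

		/* Read the first sector-sized chunk of the shared region */
		if (read(fd, buf, sizeof(buf)) < 0)
			perror("read");

		close(fd);
		return 0;
	}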
config QCOM_SMEM
tristate "Qualcomm Shared Memory Manager (SMEM)"
depends on ARCH_QCOM
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o
obj-$(CONFIG_QCOM_PM) += spm.o
+obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
--- /dev/null
+/*
+ * Copyright (c) 2017 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/cdev.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/of.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/io.h>
+#include <linux/qcom_scm.h>
+
+#define QCOM_RMTFS_MEM_DEV_MAX (MINORMASK + 1)
+
+static dev_t qcom_rmtfs_mem_major;
+
+struct qcom_rmtfs_mem {
+ struct device dev;
+ struct cdev cdev;
+
+ void *base;
+ phys_addr_t addr;
+ phys_addr_t size;
+
+ unsigned int client_id;
+};
+
+static ssize_t qcom_rmtfs_mem_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf);
+
+static DEVICE_ATTR(phys_addr, 0400, qcom_rmtfs_mem_show, NULL);
+static DEVICE_ATTR(size, 0400, qcom_rmtfs_mem_show, NULL);
+static DEVICE_ATTR(client_id, 0400, qcom_rmtfs_mem_show, NULL);
+
+static ssize_t qcom_rmtfs_mem_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct qcom_rmtfs_mem *rmtfs_mem = container_of(dev,
+ struct qcom_rmtfs_mem,
+ dev);
+
+ if (attr == &dev_attr_phys_addr)
+ return sprintf(buf, "%pa\n", &rmtfs_mem->addr);
+ if (attr == &dev_attr_size)
+ return sprintf(buf, "%pa\n", &rmtfs_mem->size);
+ if (attr == &dev_attr_client_id)
+ return sprintf(buf, "%d\n", rmtfs_mem->client_id);
+
+ return -EINVAL;
+}
+
+static struct attribute *qcom_rmtfs_mem_attrs[] = {
+ &dev_attr_phys_addr.attr,
+ &dev_attr_size.attr,
+ &dev_attr_client_id.attr,
+ NULL
+};
+ATTRIBUTE_GROUPS(qcom_rmtfs_mem);
+
+static int qcom_rmtfs_mem_open(struct inode *inode, struct file *filp)
+{
+ struct qcom_rmtfs_mem *rmtfs_mem = container_of(inode->i_cdev,
+ struct qcom_rmtfs_mem,
+ cdev);
+
+ get_device(&rmtfs_mem->dev);
+ filp->private_data = rmtfs_mem;
+
+ return 0;
+}
+static ssize_t qcom_rmtfs_mem_read(struct file *filp,
+ char __user *buf, size_t count, loff_t *f_pos)
+{
+ struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data;
+
+ if (*f_pos >= rmtfs_mem->size)
+ return 0;
+
+ if (*f_pos + count >= rmtfs_mem->size)
+ count = rmtfs_mem->size - *f_pos;
+
+ if (copy_to_user(buf, rmtfs_mem->base + *f_pos, count))
+ return -EFAULT;
+
+ *f_pos += count;
+ return count;
+}
+
+static ssize_t qcom_rmtfs_mem_write(struct file *filp,
+ const char __user *buf, size_t count,
+ loff_t *f_pos)
+{
+ struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data;
+
+ if (*f_pos >= rmtfs_mem->size)
+ return 0;
+
+ if (*f_pos + count >= rmtfs_mem->size)
+ count = rmtfs_mem->size - *f_pos;
+
+ if (copy_from_user(rmtfs_mem->base + *f_pos, buf, count))
+ return -EFAULT;
+
+ *f_pos += count;
+ return count;
+}
+
+static int qcom_rmtfs_mem_release(struct inode *inode, struct file *filp)
+{
+ struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data;
+
+ put_device(&rmtfs_mem->dev);
+
+ return 0;
+}
+
+static const struct file_operations qcom_rmtfs_mem_fops = {
+ .owner = THIS_MODULE,
+ .open = qcom_rmtfs_mem_open,
+ .read = qcom_rmtfs_mem_read,
+ .write = qcom_rmtfs_mem_write,
+ .release = qcom_rmtfs_mem_release,
+ .llseek = default_llseek,
+};
+
+static void qcom_rmtfs_mem_release_device(struct device *dev)
+{
+ struct qcom_rmtfs_mem *rmtfs_mem = container_of(dev,
+ struct qcom_rmtfs_mem,
+ dev);
+
+ kfree(rmtfs_mem);
+}
+
+static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+{
+ struct device_node *node = pdev->dev.of_node;
+ struct reserved_mem *rmem;
+ struct qcom_rmtfs_mem *rmtfs_mem;
+ u32 client_id;
+ int ret;
+
+ rmem = of_reserved_mem_lookup(node);
+ if (!rmem) {
+ dev_err(&pdev->dev, "failed to acquire memory region\n");
+ return -EINVAL;
+ }
+
+ ret = of_property_read_u32(node, "qcom,client-id", &client_id);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to parse \"qcom,client-id\"\n");
+		return ret;
+	}
+
+ rmtfs_mem = kzalloc(sizeof(*rmtfs_mem), GFP_KERNEL);
+ if (!rmtfs_mem)
+ return -ENOMEM;
+
+ rmtfs_mem->addr = rmem->base;
+ rmtfs_mem->client_id = client_id;
+ rmtfs_mem->size = rmem->size;
+
+ device_initialize(&rmtfs_mem->dev);
+ rmtfs_mem->dev.parent = &pdev->dev;
+ rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups;
+
+ rmtfs_mem->base = devm_memremap(&rmtfs_mem->dev, rmtfs_mem->addr,
+ rmtfs_mem->size, MEMREMAP_WC);
+ if (IS_ERR(rmtfs_mem->base)) {
+ dev_err(&pdev->dev, "failed to remap rmtfs_mem region\n");
+ ret = PTR_ERR(rmtfs_mem->base);
+ goto put_device;
+ }
+
+ cdev_init(&rmtfs_mem->cdev, &qcom_rmtfs_mem_fops);
+ rmtfs_mem->cdev.owner = THIS_MODULE;
+
+ dev_set_name(&rmtfs_mem->dev, "qcom_rmtfs_mem%d", client_id);
+ rmtfs_mem->dev.id = client_id;
+ rmtfs_mem->dev.devt = MKDEV(MAJOR(qcom_rmtfs_mem_major), client_id);
+
+ ret = cdev_device_add(&rmtfs_mem->cdev, &rmtfs_mem->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to add cdev: %d\n", ret);
+ goto put_device;
+ }
+
+ rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
+
+ dev_set_drvdata(&pdev->dev, rmtfs_mem);
+
+ return 0;
+
+put_device:
+ put_device(&rmtfs_mem->dev);
+
+ return ret;
+}
+
+static int qcom_rmtfs_mem_remove(struct platform_device *pdev)
+{
+ struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev);
+
+ cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev);
+ put_device(&rmtfs_mem->dev);
+
+ return 0;
+}
+
+static const struct of_device_id qcom_rmtfs_mem_of_match[] = {
+ { .compatible = "qcom,rmtfs-mem" },
+ {}
+};
+MODULE_DEVICE_TABLE(of, qcom_rmtfs_mem_of_match);
+
+static struct platform_driver qcom_rmtfs_mem_driver = {
+ .probe = qcom_rmtfs_mem_probe,
+ .remove = qcom_rmtfs_mem_remove,
+ .driver = {
+ .name = "qcom_rmtfs_mem",
+ .of_match_table = qcom_rmtfs_mem_of_match,
+ },
+};
+
+static int qcom_rmtfs_mem_init(void)
+{
+ int ret;
+
+ ret = alloc_chrdev_region(&qcom_rmtfs_mem_major, 0,
+ QCOM_RMTFS_MEM_DEV_MAX, "qcom_rmtfs_mem");
+ if (ret < 0) {
+ pr_err("qcom_rmtfs_mem: failed to allocate char dev region\n");
+ return ret;
+ }
+
+ ret = platform_driver_register(&qcom_rmtfs_mem_driver);
+ if (ret < 0) {
+ pr_err("qcom_rmtfs_mem: failed to register rmtfs_mem driver\n");
+ unregister_chrdev_region(qcom_rmtfs_mem_major,
+ QCOM_RMTFS_MEM_DEV_MAX);
+ }
+
+ return ret;
+}
+module_init(qcom_rmtfs_mem_init);
+
+static void qcom_rmtfs_mem_exit(void)
+{
+ platform_driver_unregister(&qcom_rmtfs_mem_driver);
+ unregister_chrdev_region(qcom_rmtfs_mem_major, QCOM_RMTFS_MEM_DEV_MAX);
+}
+module_exit(qcom_rmtfs_mem_exit);
*
* Items in the non-cached region are allocated from the start of the partition
* while items in the cached region are allocated from the end. The free area
- * is hence the region between the cached and non-cached offsets.
+ * is hence the region between the cached and non-cached offsets. The header of
+ * cached items comes after the data.
*
+ * Version 12 (SMEM_GLOBAL_PART_VERSION) changes the item alloc/get procedure
+ * for the global heap. A new global partition is created from the global heap
+ * region with partition type (SMEM_GLOBAL_HOST) and the max smem item count is
+ * set by the bootloader.
*
* To synchronize allocations in the shared memory heaps a remote spinlock must
* be held - currently lock number 3 of the sfpb or tcsr is used for this on all
*/
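As a rough sketch of the partition layout described above (proportions illustrative):

        +---------------------------+  <- phdr (partition base)
        | partition header          |
        +---------------------------+
        | hdr 0 | item 0 data       |  uncached: header before data,
        | hdr 1 | item 1 data       |  grows upward from the base
        +---------------------------+  <- offset_free_uncached
        |        free space         |
        +---------------------------+  <- offset_free_cached
        | item 1 data | hdr 1       |  cached: header after data,
        | item 0 data | hdr 0       |  grows downward from the end
        +---------------------------+  <- phdr + size (partition end)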
/*
- * Item 3 of the global heap contains an array of versions for the various
- * software components in the SoC. We verify that the boot loader version is
- * what the expected version (SMEM_EXPECTED_VERSION) as a sanity check.
+ * The version member of the smem header contains an array of versions for the
+ * various software components in the SoC. We verify that the boot loader
+ * version is valid as a sanity check.
*/
-#define SMEM_ITEM_VERSION 3
-#define SMEM_MASTER_SBL_VERSION_INDEX 7
-#define SMEM_EXPECTED_VERSION 11
+#define SMEM_MASTER_SBL_VERSION_INDEX 7
+#define SMEM_GLOBAL_HEAP_VERSION 11
+#define SMEM_GLOBAL_PART_VERSION 12
/*
* The first 8 items are only to be allocated by the boot loader while
/* Processor/host identifier for the application processor */
#define SMEM_HOST_APPS 0
+/* Processor/host identifier for the global partition */
+#define SMEM_GLOBAL_HOST 0xfffe
+
/* Max number of processors/hosts in a system */
-#define SMEM_HOST_COUNT 9
+#define SMEM_HOST_COUNT 10
/**
* struct smem_proc_comm - proc_comm communication struct (legacy)
* @flags: flags for the partition (currently unused)
* @host0: first processor/host with access to this partition
* @host1: second processor/host with access to this partition
+ * @cacheline: alignment for "cached" entries
* @reserved: reserved entries for later use
*/
struct smem_ptable_entry {
__le32 flags;
__le16 host0;
__le16 host1;
- __le32 reserved[8];
+ __le32 cacheline;
+ __le32 reserved[7];
};
/**
};
#define SMEM_PRIVATE_CANARY 0xa5a5
+/**
+ * struct smem_info - smem region info located after the table of contents
+ * @magic: magic number, must be SMEM_INFO_MAGIC
+ * @size: size of the smem region
+ * @base_addr: base address of the smem region
+ * @reserved: reserved for future use
+ * @num_items: highest accepted item number
+ */
+struct smem_info {
+ u8 magic[4];
+ __le32 size;
+ __le32 base_addr;
+ __le32 reserved;
+ __le16 num_items;
+};
+
+static const u8 SMEM_INFO_MAGIC[] = { 0x53, 0x49, 0x49, 0x49 }; /* SIII */
+
/**
* struct smem_region - representation of a chunk of memory used for smem
* @aux_base: identifier of aux_mem base
* struct qcom_smem - device data for the smem device
* @dev: device pointer
* @hwlock: reference to a hwspinlock
+ * @global_partition: pointer to global partition when in use
+ * @global_cacheline: cacheline size for global partition
* @partitions: list of pointers to partitions affecting the current
* processor/host
+ * @cacheline: list of cacheline sizes for each host
+ * @item_count: max accepted item number
* @num_regions: number of @regions
* @regions: list of the memory regions defining the shared memory
*/
struct hwspinlock *hwlock;
+ struct smem_partition_header *global_partition;
+ size_t global_cacheline;
struct smem_partition_header *partitions[SMEM_HOST_COUNT];
+ size_t cacheline[SMEM_HOST_COUNT];
+ u32 item_count;
unsigned num_regions;
struct smem_region regions[0];
};
static struct smem_private_entry *
-phdr_to_last_private_entry(struct smem_partition_header *phdr)
+phdr_to_last_uncached_entry(struct smem_partition_header *phdr)
{
void *p = phdr;
return p + le32_to_cpu(phdr->offset_free_uncached);
}
-static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr)
+static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr,
+ size_t cacheline)
+{
+ void *p = phdr;
+ struct smem_private_entry *e;
+
+ return p + le32_to_cpu(phdr->size) - ALIGN(sizeof(*e), cacheline);
+}
+
+static void *phdr_to_last_cached_entry(struct smem_partition_header *phdr)
{
void *p = phdr;
}
static struct smem_private_entry *
-phdr_to_first_private_entry(struct smem_partition_header *phdr)
+phdr_to_first_uncached_entry(struct smem_partition_header *phdr)
{
void *p = phdr;
}
static struct smem_private_entry *
-private_entry_next(struct smem_private_entry *e)
+uncached_entry_next(struct smem_private_entry *e)
{
void *p = e;
le32_to_cpu(e->size);
}
-static void *entry_to_item(struct smem_private_entry *e)
+static struct smem_private_entry *
+cached_entry_next(struct smem_private_entry *e, size_t cacheline)
+{
+ void *p = e;
+
+ return p - le32_to_cpu(e->size) - ALIGN(sizeof(*e), cacheline);
+}
+
+static void *uncached_entry_to_item(struct smem_private_entry *e)
{
void *p = e;
return p + sizeof(*e) + le16_to_cpu(e->padding_hdr);
}
+static void *cached_entry_to_item(struct smem_private_entry *e)
+{
+ void *p = e;
+
+ return p - le32_to_cpu(e->size);
+}
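Taken together, the cached-side helpers support a backwards walk from the partition end toward offset_free_cached; a sketch of the traversal (the same loop qcom_smem_get_private() uses below, with phdr and cacheline assumed in scope):

        struct smem_private_entry *e = phdr_to_first_cached_entry(phdr, cacheline);
        void *end = phdr_to_last_cached_entry(phdr);

        while ((void *)e > end) {
                /* cached_entry_to_item(e) is this entry's data, just below e */
                e = cached_entry_next(e, cacheline);    /* step to lower addresses */
        }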
+
/* Pointer to the one and only smem handle */
static struct qcom_smem *__smem;
#define HWSPINLOCK_TIMEOUT 1000
static int qcom_smem_alloc_private(struct qcom_smem *smem,
- unsigned host,
+ struct smem_partition_header *phdr,
unsigned item,
size_t size)
{
- struct smem_partition_header *phdr;
struct smem_private_entry *hdr, *end;
size_t alloc_size;
void *cached;
- phdr = smem->partitions[host];
- hdr = phdr_to_first_private_entry(phdr);
- end = phdr_to_last_private_entry(phdr);
- cached = phdr_to_first_cached_entry(phdr);
+ hdr = phdr_to_first_uncached_entry(phdr);
+ end = phdr_to_last_uncached_entry(phdr);
+ cached = phdr_to_last_cached_entry(phdr);
while (hdr < end) {
if (hdr->canary != SMEM_PRIVATE_CANARY) {
dev_err(smem->dev,
- "Found invalid canary in host %d partition\n",
- host);
+ "Found invalid canary in hosts %d:%d partition\n",
+ phdr->host0, phdr->host1);
return -EINVAL;
}
if (le16_to_cpu(hdr->item) == item)
return -EEXIST;
- hdr = private_entry_next(hdr);
+ hdr = uncached_entry_next(hdr);
}
/* Check that we don't grow into the cached region */
unsigned item,
size_t size)
{
- struct smem_header *header;
struct smem_global_entry *entry;
-
- if (WARN_ON(item >= SMEM_ITEM_COUNT))
- return -EINVAL;
+ struct smem_header *header;
header = smem->regions[0].virt_base;
entry = &header->toc[item];
*/
int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
{
+ struct smem_partition_header *phdr;
unsigned long flags;
int ret;
return -EINVAL;
}
+ if (WARN_ON(item >= __smem->item_count))
+ return -EINVAL;
+
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ret;
- if (host < SMEM_HOST_COUNT && __smem->partitions[host])
- ret = qcom_smem_alloc_private(__smem, host, item, size);
- else
+ if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
+ phdr = __smem->partitions[host];
+ ret = qcom_smem_alloc_private(__smem, phdr, item, size);
+ } else if (__smem->global_partition) {
+ phdr = __smem->global_partition;
+ ret = qcom_smem_alloc_private(__smem, phdr, item, size);
+ } else {
ret = qcom_smem_alloc_global(__smem, item, size);
+ }
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
u32 aux_base;
unsigned i;
- if (WARN_ON(item >= SMEM_ITEM_COUNT))
- return ERR_PTR(-EINVAL);
-
header = smem->regions[0].virt_base;
entry = &header->toc[item];
if (!entry->allocated)
}
static void *qcom_smem_get_private(struct qcom_smem *smem,
- unsigned host,
+ struct smem_partition_header *phdr,
+ size_t cacheline,
unsigned item,
size_t *size)
{
- struct smem_partition_header *phdr;
struct smem_private_entry *e, *end;
- phdr = smem->partitions[host];
- e = phdr_to_first_private_entry(phdr);
- end = phdr_to_last_private_entry(phdr);
+ e = phdr_to_first_uncached_entry(phdr);
+ end = phdr_to_last_uncached_entry(phdr);
while (e < end) {
- if (e->canary != SMEM_PRIVATE_CANARY) {
- dev_err(smem->dev,
- "Found invalid canary in host %d partition\n",
- host);
- return ERR_PTR(-EINVAL);
+ if (e->canary != SMEM_PRIVATE_CANARY)
+ goto invalid_canary;
+
+ if (le16_to_cpu(e->item) == item) {
+ if (size != NULL)
+ *size = le32_to_cpu(e->size) -
+ le16_to_cpu(e->padding_data);
+
+ return uncached_entry_to_item(e);
}
+ e = uncached_entry_next(e);
+ }
+
+ /* Item was not found in the uncached list, search the cached list */
+
+ e = phdr_to_first_cached_entry(phdr, cacheline);
+ end = phdr_to_last_cached_entry(phdr);
+
+ while (e > end) {
+ if (e->canary != SMEM_PRIVATE_CANARY)
+ goto invalid_canary;
+
if (le16_to_cpu(e->item) == item) {
if (size != NULL)
*size = le32_to_cpu(e->size) -
le16_to_cpu(e->padding_data);
- return entry_to_item(e);
+ return cached_entry_to_item(e);
}
- e = private_entry_next(e);
+ e = cached_entry_next(e, cacheline);
}
return ERR_PTR(-ENOENT);
+
+invalid_canary:
+ dev_err(smem->dev, "Found invalid canary in hosts %d:%d partition\n",
+ le16_to_cpu(phdr->host0), le16_to_cpu(phdr->host1));
+
+ return ERR_PTR(-EINVAL);
}
/**
*/
void *qcom_smem_get(unsigned host, unsigned item, size_t *size)
{
+ struct smem_partition_header *phdr;
unsigned long flags;
+ size_t cacheln;
int ret;
void *ptr = ERR_PTR(-EPROBE_DEFER);
if (!__smem)
return ptr;
+ if (WARN_ON(item >= __smem->item_count))
+ return ERR_PTR(-EINVAL);
+
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ERR_PTR(ret);
- if (host < SMEM_HOST_COUNT && __smem->partitions[host])
- ptr = qcom_smem_get_private(__smem, host, item, size);
- else
+ if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
+ phdr = __smem->partitions[host];
+ cacheln = __smem->cacheline[host];
+ ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size);
+ } else if (__smem->global_partition) {
+ phdr = __smem->global_partition;
+ cacheln = __smem->global_cacheline;
+ ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size);
+ } else {
ptr = qcom_smem_get_global(__smem, item, size);
+ }
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
phdr = __smem->partitions[host];
ret = le32_to_cpu(phdr->offset_free_cached) -
le32_to_cpu(phdr->offset_free_uncached);
+ } else if (__smem->global_partition) {
+ phdr = __smem->global_partition;
+ ret = le32_to_cpu(phdr->offset_free_cached) -
+ le32_to_cpu(phdr->offset_free_uncached);
} else {
header = __smem->regions[0].virt_base;
ret = le32_to_cpu(header->available);
static int qcom_smem_get_sbl_version(struct qcom_smem *smem)
{
+ struct smem_header *header;
__le32 *versions;
- size_t size;
- versions = qcom_smem_get_global(smem, SMEM_ITEM_VERSION, &size);
- if (IS_ERR(versions)) {
- dev_err(smem->dev, "Unable to read the version item\n");
- return -ENOENT;
- }
-
- if (size < sizeof(unsigned) * SMEM_MASTER_SBL_VERSION_INDEX) {
- dev_err(smem->dev, "Version item is too small\n");
- return -EINVAL;
- }
+ header = smem->regions[0].virt_base;
+ versions = header->version;
return le32_to_cpu(versions[SMEM_MASTER_SBL_VERSION_INDEX]);
}
-static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
- unsigned local_host)
+static struct smem_ptable *qcom_smem_get_ptable(struct qcom_smem *smem)
{
- struct smem_partition_header *header;
- struct smem_ptable_entry *entry;
struct smem_ptable *ptable;
- unsigned remote_host;
- u32 version, host0, host1;
- int i;
+ u32 version;
ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
if (memcmp(ptable->magic, SMEM_PTABLE_MAGIC, sizeof(ptable->magic)))
- return 0;
+ return ERR_PTR(-ENOENT);
version = le32_to_cpu(ptable->version);
if (version != 1) {
dev_err(smem->dev,
"Unsupported partition header version %d\n", version);
+ return ERR_PTR(-EINVAL);
+ }
+ return ptable;
+}
+
+static u32 qcom_smem_get_item_count(struct qcom_smem *smem)
+{
+ struct smem_ptable *ptable;
+ struct smem_info *info;
+
+ ptable = qcom_smem_get_ptable(smem);
+ if (IS_ERR_OR_NULL(ptable))
+ return SMEM_ITEM_COUNT;
+
+ info = (struct smem_info *)&ptable->entry[le32_to_cpu(ptable->num_entries)];
+ if (memcmp(info->magic, SMEM_INFO_MAGIC, sizeof(info->magic)))
+ return SMEM_ITEM_COUNT;
+
+ return le16_to_cpu(info->num_items);
+}
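Note the layout assumption above: struct smem_info is placed by the bootloader directly after the last element of the ptable's entry array, and whenever the ptable or the "SIII" magic is absent the code deliberately falls back to the legacy fixed SMEM_ITEM_COUNT.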
+
+static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+{
+ struct smem_partition_header *header;
+ struct smem_ptable_entry *entry = NULL;
+ struct smem_ptable *ptable;
+ u32 host0, host1, size;
+ int i;
+
+ ptable = qcom_smem_get_ptable(smem);
+ if (IS_ERR(ptable))
+ return PTR_ERR(ptable);
+
+ for (i = 0; i < le32_to_cpu(ptable->num_entries); i++) {
+ host0 = le16_to_cpu(ptable->entry[i].host0);
+ host1 = le16_to_cpu(ptable->entry[i].host1);
+
+ if (host0 == SMEM_GLOBAL_HOST && host0 == host1) {
+ entry = &ptable->entry[i];
+ break;
+ }
+ }
+
+ if (!entry) {
+ dev_err(smem->dev, "Missing entry for global partition\n");
+ return -EINVAL;
+ }
+
+ if (!le32_to_cpu(entry->offset) || !le32_to_cpu(entry->size)) {
+ dev_err(smem->dev, "Invalid entry for global partition\n");
+ return -EINVAL;
+ }
+
+ if (smem->global_partition) {
+ dev_err(smem->dev, "Already found the global partition\n");
+ return -EINVAL;
+ }
+
+ header = smem->regions[0].virt_base + le32_to_cpu(entry->offset);
+ host0 = le16_to_cpu(header->host0);
+ host1 = le16_to_cpu(header->host1);
+
+ if (memcmp(header->magic, SMEM_PART_MAGIC, sizeof(header->magic))) {
+ dev_err(smem->dev, "Global partition has invalid magic\n");
+ return -EINVAL;
+ }
+
+ if (host0 != SMEM_GLOBAL_HOST && host1 != SMEM_GLOBAL_HOST) {
+ dev_err(smem->dev, "Global partition hosts are invalid\n");
+ return -EINVAL;
+ }
+
+ if (le32_to_cpu(header->size) != le32_to_cpu(entry->size)) {
+ dev_err(smem->dev, "Global partition has invalid size\n");
return -EINVAL;
}
+ size = le32_to_cpu(header->offset_free_uncached);
+ if (size > le32_to_cpu(header->size)) {
+ dev_err(smem->dev,
+ "Global partition has invalid free pointer\n");
+ return -EINVAL;
+ }
+
+ smem->global_partition = header;
+ smem->global_cacheline = le32_to_cpu(entry->cacheline);
+
+ return 0;
+}
+
+static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
+ unsigned int local_host)
+{
+ struct smem_partition_header *header;
+ struct smem_ptable_entry *entry;
+ struct smem_ptable *ptable;
+ unsigned int remote_host;
+ u32 host0, host1;
+ int i;
+
+ ptable = qcom_smem_get_ptable(smem);
+ if (IS_ERR(ptable))
+ return PTR_ERR(ptable);
+
for (i = 0; i < le32_to_cpu(ptable->num_entries); i++) {
entry = &ptable->entry[i];
host0 = le16_to_cpu(entry->host0);
return -EINVAL;
}
- if (header->size != entry->size) {
+ if (le32_to_cpu(header->size) != le32_to_cpu(entry->size)) {
dev_err(smem->dev,
"Partition %d has invalid size\n", i);
return -EINVAL;
}
smem->partitions[remote_host] = header;
+ smem->cacheline[remote_host] = le32_to_cpu(entry->cacheline);
}
return 0;
}
version = qcom_smem_get_sbl_version(smem);
- if (version >> 16 != SMEM_EXPECTED_VERSION) {
+ switch (version >> 16) {
+ case SMEM_GLOBAL_PART_VERSION:
+ ret = qcom_smem_set_global_partition(smem);
+ if (ret < 0)
+ return ret;
+ smem->item_count = qcom_smem_get_item_count(smem);
+ break;
+ case SMEM_GLOBAL_HEAP_VERSION:
+ smem->item_count = SMEM_ITEM_COUNT;
+ break;
+ default:
dev_err(&pdev->dev, "Unsupported SMEM version 0x%x\n", version);
return -EINVAL;
}
ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS);
- if (ret < 0)
+ if (ret < 0 && ret != -ENOENT)
return ret;
hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
default y if ARCH_RENESAS
select SOC_BUS
select RST_RCAR if ARCH_RCAR_GEN1 || ARCH_RCAR_GEN2 || \
- ARCH_R8A7795 || ARCH_R8A7796 || ARCH_R8A77995
+ ARCH_R8A7795 || ARCH_R8A7796 || ARCH_R8A77970 || \
+ ARCH_R8A77995
select SYSC_R8A7743 if ARCH_R8A7743
select SYSC_R8A7745 if ARCH_R8A7745
select SYSC_R8A7779 if ARCH_R8A7779
select SYSC_R8A7794 if ARCH_R8A7794
select SYSC_R8A7795 if ARCH_R8A7795
select SYSC_R8A7796 if ARCH_R8A7796
+ select SYSC_R8A77970 if ARCH_R8A77970
select SYSC_R8A77995 if ARCH_R8A77995
if SOC_RENESAS
bool "R-Car M3-W System Controller support" if COMPILE_TEST
select SYSC_RCAR
+config SYSC_R8A77970
+ bool "R-Car V3M System Controller support" if COMPILE_TEST
+ select SYSC_RCAR
+
config SYSC_R8A77995
bool "R-Car D3 System Controller support" if COMPILE_TEST
select SYSC_RCAR
obj-$(CONFIG_SYSC_R8A7794) += r8a7794-sysc.o
obj-$(CONFIG_SYSC_R8A7795) += r8a7795-sysc.o
obj-$(CONFIG_SYSC_R8A7796) += r8a7796-sysc.o
+obj-$(CONFIG_SYSC_R8A77970) += r8a77970-sysc.o
obj-$(CONFIG_SYSC_R8A77995) += r8a77995-sysc.o
# Family
--- /dev/null
+/*
+ * Renesas R-Car V3M System Controller
+ *
+ * Copyright (C) 2017 Cogent Embedded Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/bug.h>
+#include <linux/kernel.h>
+
+#include <dt-bindings/power/r8a77970-sysc.h>
+
+#include "rcar-sysc.h"
+
+static const struct rcar_sysc_area r8a77970_areas[] __initconst = {
+ { "always-on", 0, 0, R8A77970_PD_ALWAYS_ON, -1, PD_ALWAYS_ON },
+ { "ca53-scu", 0x140, 0, R8A77970_PD_CA53_SCU, R8A77970_PD_ALWAYS_ON,
+ PD_SCU },
+ { "ca53-cpu0", 0x200, 0, R8A77970_PD_CA53_CPU0, R8A77970_PD_CA53_SCU,
+ PD_CPU_NOCR },
+ { "ca53-cpu1", 0x200, 1, R8A77970_PD_CA53_CPU1, R8A77970_PD_CA53_SCU,
+ PD_CPU_NOCR },
+ { "cr7", 0x240, 0, R8A77970_PD_CR7, R8A77970_PD_ALWAYS_ON },
+ { "a3ir", 0x180, 0, R8A77970_PD_A3IR, R8A77970_PD_ALWAYS_ON },
+ { "a2ir0", 0x400, 0, R8A77970_PD_A2IR0, R8A77970_PD_ALWAYS_ON },
+ { "a2ir1", 0x400, 1, R8A77970_PD_A2IR1, R8A77970_PD_A2IR0 },
+ { "a2ir2", 0x400, 2, R8A77970_PD_A2IR2, R8A77970_PD_A2IR0 },
+ { "a2ir3", 0x400, 3, R8A77970_PD_A2IR3, R8A77970_PD_A2IR0 },
+ { "a2sc0", 0x400, 4, R8A77970_PD_A2SC0, R8A77970_PD_ALWAYS_ON },
+ { "a2sc1", 0x400, 5, R8A77970_PD_A2SC1, R8A77970_PD_A2SC0 },
+};
+
+const struct rcar_sysc_info r8a77970_sysc_info __initconst = {
+ .areas = r8a77970_areas,
+ .num_areas = ARRAY_SIZE(r8a77970_areas),
+};
/* R-Car Gen3 is handled like R-Car Gen2 */
{ .compatible = "renesas,r8a7795-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a7796-rst", .data = &rcar_rst_gen2 },
+ { .compatible = "renesas,r8a77970-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a77995-rst", .data = &rcar_rst_gen2 },
{ /* sentinel */ }
};
#ifdef CONFIG_SYSC_R8A7796
{ .compatible = "renesas,r8a7796-sysc", .data = &r8a7796_sysc_info },
#endif
+#ifdef CONFIG_SYSC_R8A77970
+ { .compatible = "renesas,r8a77970-sysc", .data = &r8a77970_sysc_info },
+#endif
#ifdef CONFIG_SYSC_R8A77995
{ .compatible = "renesas,r8a77995-sysc", .data = &r8a77995_sysc_info },
#endif
extern const struct rcar_sysc_info r8a7794_sysc_info;
extern const struct rcar_sysc_info r8a7795_sysc_info;
extern const struct rcar_sysc_info r8a7796_sysc_info;
+extern const struct rcar_sysc_info r8a77970_sysc_info;
extern const struct rcar_sysc_info r8a77995_sysc_info;
.id = 0x52,
};
+static const struct renesas_soc soc_rcar_v3m __initconst __maybe_unused = {
+ .family = &fam_rcar_gen3,
+ .id = 0x54,
+};
+
static const struct renesas_soc soc_rcar_d3 __initconst __maybe_unused = {
.family = &fam_rcar_gen3,
.id = 0x58,
#ifdef CONFIG_ARCH_R8A7796
{ .compatible = "renesas,r8a7796", .data = &soc_rcar_m3_w },
#endif
+#ifdef CONFIG_ARCH_R8A77970
+ { .compatible = "renesas,r8a77970", .data = &soc_rcar_v3m },
+#endif
#ifdef CONFIG_ARCH_R8A77995
{ .compatible = "renesas,r8a77995", .data = &soc_rcar_d3 },
#endif
if (pmu_data->powerdown_conf_extra)
pmu_data->powerdown_conf_extra(mode);
-
- if (pmu_data->pmu_config_extra) {
- for (i = 0; pmu_data->pmu_config_extra[i].offset != PMU_TABLE_END; i++)
- pmu_raw_writel(pmu_data->pmu_config_extra[i].val[mode],
- pmu_data->pmu_config_extra[i].offset);
- }
}
/*
}, {
.compatible = "samsung,exynos4210-pmu",
.data = exynos_pmu_data_arm_ptr(exynos4210_pmu_data),
- }, {
- .compatible = "samsung,exynos4212-pmu",
- .data = exynos_pmu_data_arm_ptr(exynos4212_pmu_data),
}, {
.compatible = "samsung,exynos4412-pmu",
.data = exynos_pmu_data_arm_ptr(exynos4412_pmu_data),
struct exynos_pmu_data {
const struct exynos_pmu_conf *pmu_config;
- const struct exynos_pmu_conf *pmu_config_extra;
void (*pmu_init)(void);
void (*powerdown_conf)(enum sys_powerdown);
/* list of all exported SoC specific data */
extern const struct exynos_pmu_data exynos3250_pmu_data;
extern const struct exynos_pmu_data exynos4210_pmu_data;
-extern const struct exynos_pmu_data exynos4212_pmu_data;
extern const struct exynos_pmu_data exynos4412_pmu_data;
extern const struct exynos_pmu_data exynos5250_pmu_data;
extern const struct exynos_pmu_data exynos5420_pmu_data;
{ PMU_TABLE_END,},
};
-static const struct exynos_pmu_conf exynos4x12_pmu_config[] = {
+static const struct exynos_pmu_conf exynos4412_pmu_config[] = {
{ S5P_ARM_CORE0_LOWPWR, { 0x0, 0x0, 0x2 } },
{ S5P_DIS_IRQ_CORE0, { 0x0, 0x0, 0x0 } },
{ S5P_DIS_IRQ_CENTRAL0, { 0x0, 0x0, 0x0 } },
{ S5P_GPS_ALIVE_LOWPWR, { 0x7, 0x0, 0x0 } },
{ S5P_CMU_SYSCLK_ISP_LOWPWR, { 0x1, 0x0, 0x0 } },
{ S5P_CMU_SYSCLK_GPS_LOWPWR, { 0x1, 0x0, 0x0 } },
- { PMU_TABLE_END,},
-};
-
-static const struct exynos_pmu_conf exynos4412_pmu_config[] = {
{ S5P_ARM_CORE2_LOWPWR, { 0x0, 0x0, 0x2 } },
{ S5P_DIS_IRQ_CORE2, { 0x0, 0x0, 0x0 } },
{ S5P_DIS_IRQ_CENTRAL2, { 0x0, 0x0, 0x0 } },
.pmu_config = exynos4210_pmu_config,
};
-const struct exynos_pmu_data exynos4212_pmu_data = {
- .pmu_config = exynos4x12_pmu_config,
-};
-
const struct exynos_pmu_data exynos4412_pmu_data = {
- .pmu_config = exynos4x12_pmu_config,
- .pmu_config_extra = exynos4412_pmu_config,
+ .pmu_config = exynos4412_pmu_config,
};
{
struct mrq_pg_request request;
struct tegra_bpmp_message msg;
+ int err;
memset(&request, 0, sizeof(request));
request.cmd = CMD_PG_SET_STATE;
msg.tx.data = &request;
msg.tx.size = sizeof(request);
- return tegra_bpmp_transfer(bpmp, &msg);
+ err = tegra_bpmp_transfer(bpmp, &msg);
+ if (err < 0)
+ return err;
+ else if (msg.rx.ret < 0)
+ return -EINVAL;
+
+ return 0;
}
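Two failure layers are being distinguished here: a negative return from tegra_bpmp_transfer() means the message exchange itself failed, while msg.rx.ret < 0 means the BPMP firmware processed the request and rejected it. The same pattern is applied to the remaining MRQ_PG commands below.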
static int tegra_bpmp_powergate_get_state(struct tegra_bpmp *bpmp,
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
return PG_STATE_OFF;
+ else if (msg.rx.ret < 0)
+ return -EINVAL;
return response.get_state.state;
}
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
return err;
+ else if (msg.rx.ret < 0)
+ return -EINVAL;
return response.get_max_id.max_id;
}
msg.rx.size = sizeof(response);
err = tegra_bpmp_transfer(bpmp, &msg);
- if (err < 0)
+ if (err < 0 || msg.rx.ret < 0)
return NULL;
return kstrdup(response.get_name.name, GFP_KERNEL);
struct iio_dev *indio_dev = spi_get_drvdata(st->sd.spi);
unsigned long long scale_uv;
int i, ret, id;
- u8 ones[6];
/* reset the serial interface */
- memset(&ones, 0xFF, 6);
- ret = spi_write(st->sd.spi, &ones, 6);
+ ret = ad_sd_reset(&st->sd, 48);
if (ret < 0)
goto out;
usleep_range(500, 1000); /* Wait for at least 500us */
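The replacement is equivalent on the wire: six 0xff bytes clock out 6 * 8 = 48 consecutive high bits, which is exactly what ad_sd_reset(&st->sd, 48) requests; the helper simply centralizes the reset sequence in the ad_sigma_delta core.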
if (!wbuf)
return -ENOMEM;
- enable_read_hw_ecc = 0;
- spinand_read_page(spi_nand, page_id, 0, CACHE_BUF, wbuf);
+ enable_read_hw_ecc = 1;
+ retval = spinand_read_page(spi_nand, page_id, 0, CACHE_BUF, wbuf);
+ if (retval < 0) {
+ dev_err(&spi_nand->dev, "ecc error on read page!!!\n");
+ return retval;
+ }
for (i = offset, j = 0; i < len; i++, j++)
wbuf[i] &= buf[j];
dev_dbg(&spi->dev, "set: DIO mapping");
#endif
- // check DIO number
- if (DIONumber > 5) {
- dev_dbg(&spi->dev, "set: illegal input param");
- return -EINVAL;
- }
-
switch (DIONumber) {
case 0: mask=MASK_DIO0; shift=SHIFT_DIO0; regaddr=REG_DIOMAPPING1; break;
case 1: mask=MASK_DIO1; shift=SHIFT_DIO1; regaddr=REG_DIOMAPPING1; break;
case 3: mask=MASK_DIO3; shift=SHIFT_DIO3; regaddr=REG_DIOMAPPING1; break;
case 4: mask=MASK_DIO4; shift=SHIFT_DIO4; regaddr=REG_DIOMAPPING2; break;
case 5: mask=MASK_DIO5; shift=SHIFT_DIO5; regaddr=REG_DIOMAPPING2; break;
+ default:
+ dev_dbg(&spi->dev, "set: illegal input param");
+ return -EINVAL;
}
// read reg
void _rtw_free_mlme_priv(struct mlme_priv *pmlmepriv)
{
- rtw_free_mlme_priv_ie_data(pmlmepriv);
-
if (pmlmepriv) {
+ rtw_free_mlme_priv_ie_data(pmlmepriv);
if (pmlmepriv->free_bss_buf) {
vfree(pmlmepriv->free_bss_buf);
}
if (num < 2)
return count;
+ if (id >= TOTAL_CAM_ENTRY)
+ return -EINVAL;
if (strcmp("c", cmd) == 0) {
_clear_cam_entry(adapter, id);
if (halmac_adapter->fw_version.h2c_version < 4)
return HALMAC_RET_FW_NO_SUPPORT;
+ driver_adapter = halmac_adapter->driver_adapter;
+
HALMAC_RT_TRACE(driver_adapter, HALMAC_MSG_H2C, DBG_DMESG,
"[TRACE]%s ==========>\n", __func__);
- driver_adapter = halmac_adapter->driver_adapter;
-
HALMAC_RT_TRACE(driver_adapter, HALMAC_MSG_H2C, DBG_DMESG,
"[TRACE]%s <==========\n", __func__);
void *driver_adapter = NULL;
enum halmac_ret_status status = HALMAC_RET_SUCCESS;
+ driver_adapter = halmac_adapter->driver_adapter;
+
HALMAC_RT_TRACE(driver_adapter, HALMAC_MSG_H2C, DBG_DMESG,
"%s!!\n", __func__);
- driver_adapter = halmac_adapter->driver_adapter;
h2c_header = h2c_buff;
h2c_cmd = h2c_header + HALMAC_H2C_CMD_HDR_SIZE_88XX;
void *driver_adapter = NULL;
enum halmac_ret_status status = HALMAC_RET_SUCCESS;
+ driver_adapter = halmac_adapter->driver_adapter;
+
HALMAC_RT_TRACE(driver_adapter, HALMAC_MSG_H2C, DBG_DMESG,
"halmac_send_h2c_set_pwr_mode_88xx!!\n");
- driver_adapter = halmac_adapter->driver_adapter;
h2c_header = H2c_buff;
h2c_cmd = h2c_header + HALMAC_H2C_CMD_HDR_SIZE_88XX;
enum halmac_cmd_process_status *process_status =
&halmac_adapter->halmac_state.scan_state_set.process_status;
+ driver_adapter = halmac_adapter->driver_adapter;
+
HALMAC_RT_TRACE(driver_adapter, HALMAC_MSG_H2C, DBG_DMESG,
"halmac_ctrl_ch_switch!!\n");
- driver_adapter = halmac_adapter->driver_adapter;
halmac_api = (struct halmac_api *)halmac_adapter->halmac_api;
if (halmac_transition_scan_state_88xx(
{
u8 h2c_buff[HALMAC_H2C_CMD_SIZE_88XX] = {0};
u16 h2c_seq_mum = 0;
- void *driver_adapter = NULL;
+ void *driver_adapter = halmac_adapter->driver_adapter;
struct halmac_h2c_header_info h2c_header_info;
enum halmac_ret_status status = HALMAC_RET_SUCCESS;
HALMAC_RT_TRACE(driver_adapter, HALMAC_MSG_H2C, DBG_DMESG,
"%s!!\n", __func__);
- driver_adapter = halmac_adapter->driver_adapter;
-
UPDATE_BEACON_PARSING_INFO_SET_FUNC_EN(h2c_buff, bcn_ie_info->func_en);
UPDATE_BEACON_PARSING_INFO_SET_SIZE_TH(h2c_buff, bcn_ie_info->size_th);
UPDATE_BEACON_PARSING_INFO_SET_TIMEOUT(h2c_buff, bcn_ie_info->timeout);
static int read_all_key;
+static int in_keyboard_notifier;
+
static void start_read_all_timer(struct vc_data *vc, int command);
enum {
cursor_track = read_all_mode;
spk_reset_index_count(0);
if (get_sentence_buf(vc, 0) == -1) {
- kbd_fakekey2(vc, RA_DOWN_ARROW);
+ del_timer(&cursor_timer);
+ if (!in_keyboard_notifier)
+ speakup_fake_down_arrow();
+ start_read_all_timer(vc, RA_DOWN_ARROW);
} else {
say_sentence_num(0, 0);
synth_insert_next_index(0);
int ret = NOTIFY_OK;
static int keycode; /* to hold the current keycode */
+ in_keyboard_notifier = 1;
+
if (vc->vc_mode == KD_GRAPHICS)
- return ret;
+ goto out;
/*
* First, determine whether we are handling a fake keypress on
*/
if (speakup_fake_key_pressed())
- return ret;
+ goto out;
switch (code) {
case KBD_KEYCODE:
break;
}
}
+out:
+ in_keyboard_notifier = 0;
return ret;
}
return 0;
}
-static int init_unisys(void)
+static int __init init_unisys(void)
{
int result;
return 0;
};
-static void exit_unisys(void)
+static void __exit exit_unisys(void)
{
acpi_bus_unregister_driver(&unisys_acpi_driver);
}
if (head_bytes > actual)
head_bytes = actual;
- memcpy((char *)page_address(pages[0]) +
+ memcpy((char *)kmap(pages[0]) +
pagelist->offset,
fragments,
head_bytes);
+ kunmap(pages[0]);
}
if ((actual >= 0) && (head_bytes < actual) &&
(tail_bytes != 0)) {
- memcpy((char *)page_address(pages[num_pages - 1]) +
+ memcpy((char *)kmap(pages[num_pages - 1]) +
((pagelist->offset + actual) &
(PAGE_SIZE - 1) & ~(g_cache_line_size - 1)),
fragments + g_cache_line_size,
tail_bytes);
+ kunmap(pages[num_pages - 1]);
}
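The switch from page_address() to kmap()/kunmap() matters on highmem configurations: page_address() is only valid for permanently mapped lowmem pages and can return NULL otherwise, whereas kmap() guarantees a temporary mapping that is dropped right after each memcpy().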
down(&g_free_fragments_mutex);
obj-$(CONFIG_INTEL_PCH_THERMAL) += intel_pch_thermal.o
obj-$(CONFIG_ST_THERMAL) += st/
obj-$(CONFIG_QCOM_TSENS) += qcom/
-obj-$(CONFIG_TEGRA_SOCTHERM) += tegra/
+obj-y += tegra/
obj-$(CONFIG_HISI_THERMAL) += hisi_thermal.o
obj-$(CONFIG_MTK_THERMAL) += mtk_thermal.o
obj-$(CONFIG_GENERIC_ADC_THERMAL) += thermal-generic-adc.o
zones to manage temperatures. This option is also required for the
emergency thermal reset (thermtrip) feature to function.
+config TEGRA_BPMP_THERMAL
+ tristate "Tegra BPMP thermal sensing"
+ depends on TEGRA_BPMP || COMPILE_TEST
+ help
+ Enable this option to support sensing the system temperature of NVIDIA
+ Tegra systems-on-chip with the BPMP coprocessor (Tegra186).
+
endmenu
-obj-$(CONFIG_TEGRA_SOCTHERM) += tegra-soctherm.o
+obj-$(CONFIG_TEGRA_SOCTHERM) += tegra-soctherm.o
+obj-$(CONFIG_TEGRA_BPMP_THERMAL) += tegra-bpmp-thermal.o
tegra-soctherm-y := soctherm.o soctherm-fuse.o
tegra-soctherm-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124-soctherm.o
--- /dev/null
+/*
+ * Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Author:
+ * Mikko Perttunen <mperttunen@nvidia.com>
+ * Aapo Vienamo <avienamo@nvidia.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/thermal.h>
+#include <linux/workqueue.h>
+
+#include <soc/tegra/bpmp.h>
+#include <soc/tegra/bpmp-abi.h>
+
+struct tegra_bpmp_thermal_zone {
+ struct tegra_bpmp_thermal *tegra;
+ struct thermal_zone_device *tzd;
+ struct work_struct tz_device_update_work;
+ unsigned int idx;
+};
+
+struct tegra_bpmp_thermal {
+ struct device *dev;
+ struct tegra_bpmp *bpmp;
+ unsigned int num_zones;
+ struct tegra_bpmp_thermal_zone **zones;
+};
+
+static int tegra_bpmp_thermal_get_temp(void *data, int *out_temp)
+{
+ struct tegra_bpmp_thermal_zone *zone = data;
+ struct mrq_thermal_host_to_bpmp_request req;
+ union mrq_thermal_bpmp_to_host_response reply;
+ struct tegra_bpmp_message msg;
+ int err;
+
+ memset(&req, 0, sizeof(req));
+ req.type = CMD_THERMAL_GET_TEMP;
+ req.get_temp.zone = zone->idx;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.mrq = MRQ_THERMAL;
+ msg.tx.data = &req;
+ msg.tx.size = sizeof(req);
+ msg.rx.data = &reply;
+ msg.rx.size = sizeof(reply);
+
+ err = tegra_bpmp_transfer(zone->tegra->bpmp, &msg);
+ if (err)
+ return err;
+
+ *out_temp = reply.get_temp.temp;
+
+ return 0;
+}
+
+static int tegra_bpmp_thermal_set_trips(void *data, int low, int high)
+{
+ struct tegra_bpmp_thermal_zone *zone = data;
+ struct mrq_thermal_host_to_bpmp_request req;
+ struct tegra_bpmp_message msg;
+
+ memset(&req, 0, sizeof(req));
+ req.type = CMD_THERMAL_SET_TRIP;
+ req.set_trip.zone = zone->idx;
+ req.set_trip.enabled = true;
+ req.set_trip.low = low;
+ req.set_trip.high = high;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.mrq = MRQ_THERMAL;
+ msg.tx.data = &req;
+ msg.tx.size = sizeof(req);
+
+ return tegra_bpmp_transfer(zone->tegra->bpmp, &msg);
+}
+
+static void tz_device_update_work_fn(struct work_struct *work)
+{
+ struct tegra_bpmp_thermal_zone *zone;
+
+ zone = container_of(work, struct tegra_bpmp_thermal_zone,
+ tz_device_update_work);
+
+ thermal_zone_device_update(zone->tzd, THERMAL_TRIP_VIOLATED);
+}
+
+static void bpmp_mrq_thermal(unsigned int mrq, struct tegra_bpmp_channel *ch,
+ void *data)
+{
+ struct mrq_thermal_bpmp_to_host_request *req;
+ struct tegra_bpmp_thermal *tegra = data;
+ int i;
+
+ req = (struct mrq_thermal_bpmp_to_host_request *)ch->ib->data;
+
+ if (req->type != CMD_THERMAL_HOST_TRIP_REACHED) {
+ dev_err(tegra->dev, "%s: invalid request type: %d\n",
+ __func__, req->type);
+ tegra_bpmp_mrq_return(ch, -EINVAL, NULL, 0);
+ return;
+ }
+
+ for (i = 0; i < tegra->num_zones; ++i) {
+ if (tegra->zones[i]->idx != req->host_trip_reached.zone)
+ continue;
+
+ schedule_work(&tegra->zones[i]->tz_device_update_work);
+ tegra_bpmp_mrq_return(ch, 0, NULL, 0);
+ return;
+ }
+
+ dev_err(tegra->dev, "%s: invalid thermal zone: %d\n", __func__,
+ req->host_trip_reached.zone);
+ tegra_bpmp_mrq_return(ch, -EINVAL, NULL, 0);
+}
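Trip notifications are handed off to a workqueue here, presumably because thermal_zone_device_update() may sleep while the MRQ callback should stay short and non-blocking; the callback itself only acknowledges the request and schedules the matching zone's update.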
+
+static int tegra_bpmp_thermal_get_num_zones(struct tegra_bpmp *bpmp,
+ unsigned int *num_zones)
+{
+ struct mrq_thermal_host_to_bpmp_request req;
+ union mrq_thermal_bpmp_to_host_response reply;
+ struct tegra_bpmp_message msg;
+ int err;
+
+ memset(&req, 0, sizeof(req));
+ req.type = CMD_THERMAL_GET_NUM_ZONES;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.mrq = MRQ_THERMAL;
+ msg.tx.data = &req;
+ msg.tx.size = sizeof(req);
+ msg.rx.data = &reply;
+ msg.rx.size = sizeof(reply);
+
+ err = tegra_bpmp_transfer(bpmp, &msg);
+ if (err)
+ return err;
+
+ *num_zones = reply.get_num_zones.num;
+
+ return 0;
+}
+
+static const struct thermal_zone_of_device_ops tegra_bpmp_of_thermal_ops = {
+ .get_temp = tegra_bpmp_thermal_get_temp,
+ .set_trips = tegra_bpmp_thermal_set_trips,
+};
+
+static int tegra_bpmp_thermal_probe(struct platform_device *pdev)
+{
+ struct tegra_bpmp *bpmp = dev_get_drvdata(pdev->dev.parent);
+ struct tegra_bpmp_thermal *tegra;
+ struct thermal_zone_device *tzd;
+ unsigned int i, max_num_zones;
+ int err;
+
+ tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
+ if (!tegra)
+ return -ENOMEM;
+
+ tegra->dev = &pdev->dev;
+ tegra->bpmp = bpmp;
+
+ err = tegra_bpmp_thermal_get_num_zones(bpmp, &max_num_zones);
+ if (err) {
+ dev_err(&pdev->dev, "failed to get the number of zones: %d\n",
+ err);
+ return err;
+ }
+
+ tegra->zones = devm_kcalloc(&pdev->dev, max_num_zones,
+ sizeof(*tegra->zones), GFP_KERNEL);
+ if (!tegra->zones)
+ return -ENOMEM;
+
+ for (i = 0; i < max_num_zones; ++i) {
+ struct tegra_bpmp_thermal_zone *zone;
+ int temp;
+
+ zone = devm_kzalloc(&pdev->dev, sizeof(*zone), GFP_KERNEL);
+ if (!zone)
+ return -ENOMEM;
+
+ zone->idx = i;
+ zone->tegra = tegra;
+
+ err = tegra_bpmp_thermal_get_temp(zone, &temp);
+ if (err < 0) {
+ devm_kfree(&pdev->dev, zone);
+ continue;
+ }
+
+ tzd = devm_thermal_zone_of_sensor_register(
+ &pdev->dev, i, zone, &tegra_bpmp_of_thermal_ops);
+ if (IS_ERR(tzd)) {
+ if (PTR_ERR(tzd) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ devm_kfree(&pdev->dev, zone);
+ continue;
+ }
+
+ zone->tzd = tzd;
+ INIT_WORK(&zone->tz_device_update_work,
+ tz_device_update_work_fn);
+
+ tegra->zones[tegra->num_zones++] = zone;
+ }
+
+ err = tegra_bpmp_request_mrq(bpmp, MRQ_THERMAL, bpmp_mrq_thermal,
+ tegra);
+ if (err) {
+ dev_err(&pdev->dev, "failed to register mrq handler: %d\n",
+ err);
+ return err;
+ }
+
+ platform_set_drvdata(pdev, tegra);
+
+ return 0;
+}
+
+static int tegra_bpmp_thermal_remove(struct platform_device *pdev)
+{
+ struct tegra_bpmp_thermal *tegra = platform_get_drvdata(pdev);
+
+ tegra_bpmp_free_mrq(tegra->bpmp, MRQ_THERMAL, tegra);
+
+ return 0;
+}
+
+static const struct of_device_id tegra_bpmp_thermal_of_match[] = {
+ { .compatible = "nvidia,tegra186-bpmp-thermal" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, tegra_bpmp_thermal_of_match);
+
+static struct platform_driver tegra_bpmp_thermal_driver = {
+ .probe = tegra_bpmp_thermal_probe,
+ .remove = tegra_bpmp_thermal_remove,
+ .driver = {
+ .name = "tegra-bpmp-thermal",
+ .of_match_table = tegra_bpmp_thermal_of_match,
+ },
+};
+module_platform_driver(tegra_bpmp_thermal_driver);
+
+MODULE_AUTHOR("Mikko Perttunen <mperttunen@nvidia.com>");
+MODULE_DESCRIPTION("NVIDIA Tegra BPMP thermal sensor driver");
+MODULE_LICENSE("GPL v2");
unsigned char err_shadow;
struct async_icount icount; /* kernel counters for 4 input interrupts */
- int timeout;
+ unsigned int timeout;
int read_status_mask;
int ignore_status_mask;
- int xmit_fifo_size;
+ unsigned int xmit_fifo_size;
int xmit_head;
int xmit_tail;
int xmit_cnt;
static int mxser_set_baud(struct tty_struct *tty, long newspd)
{
struct mxser_port *info = tty->driver_data;
- int quot = 0, baud;
+ unsigned int quot = 0, baud;
unsigned char cval;
+ u64 timeout;
if (!info->ioaddr)
return -1;
quot = 0;
}
- info->timeout = ((info->xmit_fifo_size * HZ * 10 * quot) / info->baud_base);
- info->timeout += HZ / 50; /* Add .02 seconds of slop */
+ /*
+ * worst case (128 * 1000 * 10 * 18432) needs 35 bits, so divide in the
+ * u64 domain
+ */
+ timeout = (u64)info->xmit_fifo_size * HZ * 10 * quot;
+ do_div(timeout, info->baud_base);
+ info->timeout = timeout + HZ / 50; /* Add .02 seconds of slop */
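For reference, the worst case cited in the comment checks out: 128 * 1000 * 10 * 18432 = 23,592,960,000, which is roughly 2^34.5 and so overflows 32-bit arithmetic (max ~4.29e9), hence the u64 intermediate and do_div().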
if (quot) {
info->MCR |= UART_MCR_DTR;
{
unsigned int ctl, baud, quot, ier;
unsigned long flags;
+ int tries;
spin_lock_irqsave(&port->lock, flags);
+ /* Drain the hot tub fully before we power it off for the winter. */
+ for (tries = 3; !bcm_uart_tx_empty(port) && tries; tries--)
+ mdelay(10);
+
/* disable uart while changing speed */
bcm_uart_disable(port);
bcm_uart_flush(port);
static int lpuart_startup(struct uart_port *port)
{
struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
- int ret;
unsigned long flags;
unsigned char temp;
sport->rxfifo_size = 0x1 << (((temp >> UARTPFIFO_RXSIZE_OFF) &
UARTPFIFO_FIFOSIZE_MASK) + 1);
- ret = devm_request_irq(port->dev, port->irq, lpuart_int, 0,
- DRIVER_NAME, sport);
- if (ret)
- return ret;
-
spin_lock_irqsave(&sport->port.lock, flags);
lpuart_setup_watermark(sport);
static int lpuart32_startup(struct uart_port *port)
{
struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
- int ret;
unsigned long flags;
unsigned long temp;
sport->rxfifo_size = 0x1 << (((temp >> UARTFIFO_RXSIZE_OFF) &
UARTFIFO_FIFOSIZE_MASK) - 1);
- ret = devm_request_irq(port->dev, port->irq, lpuart32_int, 0,
- DRIVER_NAME, sport);
- if (ret)
- return ret;
-
spin_lock_irqsave(&sport->port.lock, flags);
lpuart32_setup_watermark(sport);
spin_unlock_irqrestore(&port->lock, flags);
- devm_free_irq(port->dev, port->irq, sport);
-
if (sport->lpuart_dma_rx_use) {
del_timer_sync(&sport->lpuart_timer);
lpuart_dma_rx_free(&sport->port);
static void lpuart32_shutdown(struct uart_port *port)
{
- struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
unsigned long temp;
unsigned long flags;
lpuart32_write(port, temp, UARTCTRL);
spin_unlock_irqrestore(&port->lock, flags);
-
- devm_free_irq(port->dev, port->irq, sport);
}
static void
platform_set_drvdata(pdev, &sport->port);
- if (lpuart_is_32(sport))
+ if (lpuart_is_32(sport)) {
lpuart_reg.cons = LPUART32_CONSOLE;
- else
+ ret = devm_request_irq(&pdev->dev, sport->port.irq, lpuart32_int, 0,
+ DRIVER_NAME, sport);
+ } else {
lpuart_reg.cons = LPUART_CONSOLE;
+ ret = devm_request_irq(&pdev->dev, sport->port.irq, lpuart_int, 0,
+ DRIVER_NAME, sport);
+ }
+
+ if (ret)
+ goto failed_irq_request;
ret = uart_add_one_port(&lpuart_reg, &sport->port);
- if (ret) {
- clk_disable_unprepare(sport->clk);
- return ret;
- }
+ if (ret)
+ goto failed_attach_port;
sport->dma_tx_chan = dma_request_slave_channel(sport->port.dev, "tx");
if (!sport->dma_tx_chan)
}
return 0;
+
+failed_attach_port:
+failed_irq_request:
+ clk_disable_unprepare(sport->clk);
+ return ret;
}
static int lpuart_remove(struct platform_device *pdev)
goto err_out;
uartclk = 0;
} else {
- clk_prepare_enable(clk);
+ ret = clk_prepare_enable(clk);
+ if (ret)
+ goto err_out;
+
+ ret = devm_add_action_or_reset(&pdev->dev,
+ (void(*)(void *))clk_disable_unprepare,
+ clk);
+ if (ret)
+ goto err_out;
+
uartclk = clk_get_rate(clk);
}
uart_unregister_driver(&s->uart);
err_out:
if (!IS_ERR(s->regulator))
- return regulator_disable(s->regulator);
+ regulator_disable(s->regulator);
return ret;
}
/*
* only set a new error if there is no previous error.
* Errors are only cleared during read/open
+ * Avoid propagating -EPIPE (stall) to userspace since it is
+ * better handled as an empty read
*/
- if (desc->rerr == 0)
+ if (desc->rerr == 0 && status != -EPIPE)
desc->rerr = status;
if (length + desc->length > desc->wMaxCommand) {
} else if (header->bDescriptorType ==
USB_DT_INTERFACE_ASSOCIATION) {
+ struct usb_interface_assoc_descriptor *d;
+
+ d = (struct usb_interface_assoc_descriptor *)header;
+ if (d->bLength < USB_DT_INTERFACE_ASSOCIATION_SIZE) {
+ dev_warn(ddev,
+ "config %d has an invalid interface association descriptor of length %d, skipping\n",
+ cfgno, d->bLength);
+ continue;
+ }
+
if (iad_num == USB_MAXIADS) {
dev_warn(ddev, "found more Interface "
"Association Descriptors "
"than allocated for in "
"configuration %d\n", cfgno);
} else {
- config->intf_assoc[iad_num] =
- (struct usb_interface_assoc_descriptor
- *)header;
+ config->intf_assoc[iad_num] = d;
iad_num++;
}
}
if (dev->quirks & USB_QUIRK_DELAY_INIT)
- msleep(100);
+ msleep(200);
result = usb_get_descriptor(dev, USB_DT_CONFIG, cfgno,
bigbuffer, length);
MODULE_PARM_DESC(usbfs_memory_mb,
"maximum MB allowed for usbfs buffers (0 = no limit)");
+/* Hard limit, necessary to avoid arithmetic overflow */
+#define USBFS_XFER_MAX (UINT_MAX / 2 - 1000000)
+
static atomic64_t usbfs_memory_usage; /* Total memory currently allocated */
/* Check whether it's okay to allocate more memory for a transfer */
USBDEVFS_URB_ZERO_PACKET |
USBDEVFS_URB_NO_INTERRUPT))
return -EINVAL;
+ if ((unsigned int)uurb->buffer_length >= USBFS_XFER_MAX)
+ return -EINVAL;
if (uurb->buffer_length > 0 && !uurb->buffer)
return -EINVAL;
if (!(uurb->type == USBDEVFS_URB_TYPE_CONTROL &&
totlen += isopkt[u].length;
}
u *= sizeof(struct usb_iso_packet_descriptor);
- uurb->buffer_length = totlen;
+ if (totlen <= uurb->buffer_length)
+ uurb->buffer_length = totlen;
+ else
+ WARN_ONCE(1, "uurb->buffer_length is too short %d vs %d",
+ totlen, uurb->buffer_length);
break;
default:
goto loop;
if (udev->quirks & USB_QUIRK_DELAY_INIT)
- msleep(1000);
+ msleep(2000);
/* consecutive bus-powered hubs aren't reliable; they can
* violate the voltage drop budget. if the new child has
elength = 1;
goto next_desc;
}
+ if ((buflen < elength) || (elength < 3)) {
+ dev_err(&intf->dev, "invalid descriptor buffer length\n");
+ break;
+ }
if (buffer[1] != USB_DT_CS_INTERFACE) {
dev_err(&intf->dev, "skipping garbage\n");
goto next_desc;
{ .compatible = "rockchip,rk3399-dwc3" },
{ .compatible = "xlnx,zynqmp-dwc3" },
{ .compatible = "cavium,octeon-7130-usb-uctl" },
+ { .compatible = "sprd,sc9860-dwc3" },
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(of, of_dwc3_simple_match);
DWC3_TRBCTL_CONTROL_DATA,
true);
+ req->trb = &dwc->ep0_trb[dep->trb_enqueue - 1];
+
/* Now prepare one extra TRB to align transfer size */
dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr,
maxpacket - rem,
DWC3_TRBCTL_CONTROL_DATA,
true);
+ req->trb = &dwc->ep0_trb[dep->trb_enqueue - 1];
+
/* Now prepare one extra TRB to align transfer size */
dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr,
0, DWC3_TRBCTL_CONTROL_DATA,
dwc3_ep0_prepare_one_trb(dep, req->request.dma,
req->request.length, DWC3_TRBCTL_CONTROL_DATA,
false);
+
+ req->trb = &dwc->ep0_trb[dep->trb_enqueue];
+
ret = dwc3_ep0_start_trans(dep);
}
static void ffs_data_get(struct ffs_data *ffs);
static void ffs_data_put(struct ffs_data *ffs);
/* Creates new ffs_data object. */
-static struct ffs_data *__must_check ffs_data_new(void) __attribute__((malloc));
+static struct ffs_data *__must_check ffs_data_new(const char *dev_name)
+ __attribute__((malloc));
/* Opened counter handling. */
static void ffs_data_opened(struct ffs_data *ffs);
struct usb_request *req)
{
struct ffs_io_data *io_data = req->context;
+ struct ffs_data *ffs = io_data->ffs;
ENTER();
INIT_WORK(&io_data->work, ffs_user_copy_worker);
- schedule_work(&io_data->work);
+ queue_work(ffs->io_completion_wq, &io_data->work);
}
static void __ffs_epfile_read_buffer_free(struct ffs_epfile *epfile)
if (unlikely(ret < 0))
return ERR_PTR(ret);
- ffs = ffs_data_new();
+ ffs = ffs_data_new(dev_name);
if (unlikely(!ffs))
return ERR_PTR(-ENOMEM);
ffs->file_perms = data.perms;
BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
waitqueue_active(&ffs->ep0req_completion.wait) ||
waitqueue_active(&ffs->wait));
+ destroy_workqueue(ffs->io_completion_wq);
kfree(ffs->dev_name);
kfree(ffs);
}
ffs_data_put(ffs);
}
-static struct ffs_data *ffs_data_new(void)
+static struct ffs_data *ffs_data_new(const char *dev_name)
{
struct ffs_data *ffs = kzalloc(sizeof *ffs, GFP_KERNEL);
if (unlikely(!ffs))
ENTER();
+ ffs->io_completion_wq = alloc_ordered_workqueue("%s", 0, dev_name);
+ if (!ffs->io_completion_wq) {
+ kfree(ffs);
+ return NULL;
+ }
+
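The move from schedule_work() to a per-instance queue is deliberate: alloc_ordered_workqueue() creates a workqueue that executes at most one work item at a time in queueing order, so I/O completions for a given ffs instance can no longer race with each other on the shared system workqueue; the "%s"/dev_name arguments merely name the queue after the device.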
refcount_set(&ffs->ref, 1);
atomic_set(&ffs->opened, 0);
ffs->state = FFS_READ_DESCRIPTORS;
struct completion thread_notifier;
struct task_struct *thread_task;
- /* Callback functions. */
- const struct fsg_operations *ops;
/* Gadget's private data. */
void *private_data;
static int fsg_main_thread(void *common_)
{
struct fsg_common *common = common_;
+ int i;
/*
* Allow the thread to be killed by a signal, but set the signal mask
common->thread_task = NULL;
spin_unlock_irq(&common->lock);
- if (!common->ops || !common->ops->thread_exits
- || common->ops->thread_exits(common) < 0) {
- int i;
+ /* Eject media from all LUNs */
- down_write(&common->filesem);
- for (i = 0; i < ARRAY_SIZE(common->luns); i++) {
- struct fsg_lun *curlun = common->luns[i];
- if (!curlun || !fsg_lun_is_open(curlun))
- continue;
+ down_write(&common->filesem);
+ for (i = 0; i < ARRAY_SIZE(common->luns); i++) {
+ struct fsg_lun *curlun = common->luns[i];
+ if (curlun && fsg_lun_is_open(curlun))
fsg_lun_close(curlun);
- curlun->unit_attention_data = SS_MEDIUM_NOT_PRESENT;
- }
- up_write(&common->filesem);
}
+ up_write(&common->filesem);
/* Let fsg_unbind() know the thread has exited */
complete_and_exit(&common->thread_notifier, 0);
}
EXPORT_SYMBOL_GPL(fsg_common_remove_luns);
-void fsg_common_set_ops(struct fsg_common *common,
- const struct fsg_operations *ops)
-{
- common->ops = ops;
-}
-EXPORT_SYMBOL_GPL(fsg_common_set_ops);
-
void fsg_common_free_buffers(struct fsg_common *common)
{
_fsg_common_free_buffers(common->buffhds, common->fsg_num_buffers);
struct fsg_common;
/* FSF callback functions */
-struct fsg_operations {
- /*
- * Callback function to call when thread exits. If no
- * callback is set or it returns value lower then zero MSF
- * will force eject all LUNs it operates on (including those
- * marked as non-removable or with prevent_medium_removal flag
- * set).
- */
- int (*thread_exits)(struct fsg_common *common);
-};
-
struct fsg_lun_opts {
struct config_group group;
struct fsg_lun *lun;
void fsg_common_remove_luns(struct fsg_common *common);
-void fsg_common_set_ops(struct fsg_common *common,
- const struct fsg_operations *ops);
-
int fsg_common_create_lun(struct fsg_common *common, struct fsg_lun_config *cfg,
unsigned int id, const char *name,
const char **name_pfx);
size_t size; /* Amount of data in a TX request. */
size_t bytes_copied = 0;
struct usb_request *req;
+ int value;
DBG(dev, "printer_write trying to send %d bytes\n", (int)len);
return -EAGAIN;
}
- if (usb_ep_queue(dev->in_ep, req, GFP_ATOMIC)) {
+ /* here, we unlock, and only unlock, to avoid deadlock. */
+ spin_unlock(&dev->lock);
+ value = usb_ep_queue(dev->in_ep, req, GFP_ATOMIC);
+ spin_lock(&dev->lock);
+ if (value) {
list_add(&req->list, &dev->tx_reqs);
spin_unlock_irqrestore(&dev->lock, flags);
mutex_unlock(&dev->lock_printer_io);
} file_perms;
struct eventfd_ctx *ffs_eventfd;
+ struct workqueue_struct *io_completion_wq;
bool no_disconnect;
struct work_struct reset_work;
#include <linux/aio.h>
#include <linux/uio.h>
#include <linux/refcount.h>
-
+#include <linux/delay.h>
#include <linux/device.h>
#include <linux/moduleparam.h>
struct dev_data {
spinlock_t lock;
refcount_t count;
+ int udc_usage;
enum ep0_state state; /* P: lock */
struct usb_gadgetfs_event event [N_EVENT];
unsigned ev_next;
INIT_WORK(&priv->work, ep_user_copy_worker);
schedule_work(&priv->work);
}
- spin_unlock(&epdata->dev->lock);
usb_ep_free_request(ep, req);
+ spin_unlock(&epdata->dev->lock);
put_ep(epdata);
}
struct usb_request *req = dev->req;
if ((retval = setup_req (ep, req, 0)) == 0) {
+ ++dev->udc_usage;
spin_unlock_irq (&dev->lock);
retval = usb_ep_queue (ep, req, GFP_KERNEL);
spin_lock_irq (&dev->lock);
+ --dev->udc_usage;
}
dev->state = STATE_DEV_CONNECTED;
retval = -EIO;
else {
len = min (len, (size_t)dev->req->actual);
-// FIXME don't call this with the spinlock held ...
+ ++dev->udc_usage;
+ spin_unlock_irq(&dev->lock);
if (copy_to_user (buf, dev->req->buf, len))
retval = -EFAULT;
else
retval = len;
+ spin_lock_irq(&dev->lock);
+ --dev->udc_usage;
clean_req (dev->gadget->ep0, dev->req);
/* NOTE userspace can't yet choose to stall */
}
retval = setup_req (dev->gadget->ep0, dev->req, len);
if (retval == 0) {
dev->state = STATE_DEV_CONNECTED;
+ ++dev->udc_usage;
spin_unlock_irq (&dev->lock);
if (copy_from_user (dev->req->buf, buf, len))
retval = -EFAULT;
GFP_KERNEL);
}
spin_lock_irq(&dev->lock);
+ --dev->udc_usage;
if (retval < 0) {
clean_req (dev->gadget->ep0, dev->req);
} else
struct usb_gadget *gadget = dev->gadget;
long ret = -ENOTTY;
- if (gadget->ops->ioctl)
+ spin_lock_irq(&dev->lock);
+ if (dev->state == STATE_DEV_OPENED ||
+ dev->state == STATE_DEV_UNBOUND) {
+ /* Not bound to a UDC */
+ } else if (gadget->ops->ioctl) {
+ ++dev->udc_usage;
+ spin_unlock_irq(&dev->lock);
+
ret = gadget->ops->ioctl (gadget, code, value);
+ spin_lock_irq(&dev->lock);
+ --dev->udc_usage;
+ }
+ spin_unlock_irq(&dev->lock);
+
return ret;
}
if (value < 0)
break;
+ ++dev->udc_usage;
spin_unlock (&dev->lock);
value = usb_ep_queue (gadget->ep0, dev->req,
GFP_KERNEL);
spin_lock (&dev->lock);
+ --dev->udc_usage;
if (value < 0) {
clean_req (gadget->ep0, dev->req);
break;
req->length = value;
req->zero = value < w_length;
+ ++dev->udc_usage;
spin_unlock (&dev->lock);
value = usb_ep_queue (gadget->ep0, req, GFP_KERNEL);
+ spin_lock(&dev->lock);
+ --dev->udc_usage;
+ spin_unlock(&dev->lock);
if (value < 0) {
DBG (dev, "ep_queue --> %d\n", value);
req->status = 0;
/* break link to FS */
ep = list_first_entry (&dev->epfiles, struct ep_data, epfiles);
list_del_init (&ep->epfiles);
+ spin_unlock_irq (&dev->lock);
+
dentry = ep->dentry;
ep->dentry = NULL;
parent = d_inode(dentry->d_parent);
/* break link to controller */
+ mutex_lock(&ep->lock);
if (ep->state == STATE_EP_ENABLED)
(void) usb_ep_disable (ep->ep);
ep->state = STATE_EP_UNBOUND;
usb_ep_free_request (ep->ep, ep->req);
ep->ep = NULL;
+ mutex_unlock(&ep->lock);
+
wake_up (&ep->wait);
put_ep (ep);
- spin_unlock_irq (&dev->lock);
-
/* break link to dcache */
inode_lock(parent);
d_delete (dentry);
spin_lock_irq (&dev->lock);
dev->state = STATE_DEV_UNBOUND;
+ while (dev->udc_usage > 0) {
+ spin_unlock_irq(&dev->lock);
+ usleep_range(1000, 2000);
+ spin_lock_irq(&dev->lock);
+ }
spin_unlock_irq (&dev->lock);
destroy_ep_files (dev);
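This open-coded drain mirrors the "emulate synchronize_irq()" loop added to dummy_hcd later in this series: udc_usage counts the sections where dev->lock is dropped around a call into the UDC, and unbind now spins until all of them have finished before the endpoint files are torn down.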
FSG_MODULE_PARAMETERS(/* no prefix */, mod_data);
-static unsigned long msg_registered;
-static void msg_cleanup(void);
-
-static int msg_thread_exits(struct fsg_common *common)
-{
- msg_cleanup();
- return 0;
-}
-
static int msg_do_config(struct usb_configuration *c)
{
struct fsg_opts *opts;
static int msg_bind(struct usb_composite_dev *cdev)
{
- static const struct fsg_operations ops = {
- .thread_exits = msg_thread_exits,
- };
struct fsg_opts *opts;
struct fsg_config config;
int status;
if (status)
goto fail;
- fsg_common_set_ops(opts->common, &ops);
-
status = fsg_common_set_cdev(opts->common, cdev, config.can_stall);
if (status)
goto fail_set_cdev;
static int __init msg_init(void)
{
- int ret;
-
- ret = usb_composite_probe(&msg_driver);
- set_bit(0, &msg_registered);
-
- return ret;
+ return usb_composite_probe(&msg_driver);
}
module_init(msg_init);
-static void msg_cleanup(void)
+static void __exit msg_cleanup(void)
{
- if (test_and_clear_bit(0, &msg_registered))
- usb_composite_unregister(&msg_driver);
+ usb_composite_unregister(&msg_driver);
}
module_exit(msg_cleanup);
config USB_SNP_UDC_PLAT
tristate "Synopsys USB 2.0 Device controller"
depends on USB_GADGET && OF && HAS_DMA
+ depends on EXTCON || EXTCON=n
select USB_GADGET_DUALSPEED
select USB_SNP_CORE
default ARCH_BCM_IPROC
#include <linux/of_gpio.h>
#include "atmel_usba_udc.h"
+#define USBA_VBUS_IRQFLAGS (IRQF_ONESHOT \
+ | IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING)
#ifdef CONFIG_USB_GADGET_DEBUG_FS
#include <linux/debugfs.h>
IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(&pdev->dev,
gpio_to_irq(udc->vbus_pin), NULL,
- usba_vbus_irq_thread, IRQF_ONESHOT,
+ usba_vbus_irq_thread, USBA_VBUS_IRQFLAGS,
"atmel_usba_udc", udc);
if (ret) {
udc->vbus_pin = -ENODEV;
udc->dev.driver = &driver->driver;
udc->gadget->dev.driver = &driver->driver;
- if (driver->max_speed < udc->gadget->max_speed)
- usb_gadget_udc_set_speed(udc, driver->max_speed);
+ usb_gadget_udc_set_speed(udc, driver->max_speed);
ret = driver->bind(udc->gadget, driver);
if (ret)
struct usb_device *udev;
struct list_head urbp_list;
+ struct urbp *next_frame_urbp;
+
u32 stream_en_ep;
u8 num_stream[30 / 2];
*/
struct dummy_ep ep[DUMMY_ENDPOINTS];
int address;
+ int callback_usage;
struct usb_gadget gadget;
struct usb_gadget_driver *driver;
struct dummy_request fifo_req;
u8 fifo_buf[FIFO_SIZE];
u16 devstatus;
+ unsigned ints_enabled:1;
unsigned udc_suspended:1;
unsigned pullup:1;
USB_PORT_STAT_CONNECTION) == 0)
dum_hcd->port_status |=
(USB_PORT_STAT_C_CONNECTION << 16);
- if ((dum_hcd->port_status &
- USB_PORT_STAT_ENABLE) == 1 &&
- (dum_hcd->port_status &
- USB_SS_PORT_LS_U0) == 1 &&
- dum_hcd->rh_state != DUMMY_RH_SUSPENDED)
+ if ((dum_hcd->port_status & USB_PORT_STAT_ENABLE) &&
+ (dum_hcd->port_status &
+ USB_PORT_STAT_LINK_STATE) == USB_SS_PORT_LS_U0 &&
+ dum_hcd->rh_state != DUMMY_RH_SUSPENDED)
dum_hcd->active = 1;
}
} else {
(~dum_hcd->old_status) & dum_hcd->port_status;
/* Report reset and disconnect events to the driver */
- if (dum->driver && (disconnect || reset)) {
+ if (dum->ints_enabled && (disconnect || reset)) {
stop_activity(dum);
+ ++dum->callback_usage;
+ spin_unlock(&dum->lock);
if (reset)
usb_gadget_udc_reset(&dum->gadget, dum->driver);
else
dum->driver->disconnect(&dum->gadget);
+ spin_lock(&dum->lock);
+ --dum->callback_usage;
}
- } else if (dum_hcd->active != dum_hcd->old_active) {
+ } else if (dum_hcd->active != dum_hcd->old_active &&
+ dum->ints_enabled) {
+ ++dum->callback_usage;
+ spin_unlock(&dum->lock);
if (dum_hcd->old_active && dum->driver->suspend)
dum->driver->suspend(&dum->gadget);
else if (!dum_hcd->old_active && dum->driver->resume)
dum->driver->resume(&dum->gadget);
+ spin_lock(&dum->lock);
+ --dum->callback_usage;
}
dum_hcd->old_status = dum_hcd->port_status;
* can't enumerate without help from the driver we're binding.
*/
+ spin_lock_irq(&dum->lock);
dum->devstatus = 0;
dum->driver = driver;
+ dum->ints_enabled = 1;
+ spin_unlock_irq(&dum->lock);
return 0;
}
struct dummy *dum = dum_hcd->dum;
spin_lock_irq(&dum->lock);
+ dum->ints_enabled = 0;
+ stop_activity(dum);
+
+ /* emulate synchronize_irq(): wait for callbacks to finish */
+ while (dum->callback_usage > 0) {
+ spin_unlock_irq(&dum->lock);
+ usleep_range(1000, 2000);
+ spin_lock_irq(&dum->lock);
+ }
+
dum->driver = NULL;
spin_unlock_irq(&dum->lock);
memzero_explicit(&dum->gadget, sizeof(struct usb_gadget));
dum->gadget.name = gadget_name;
dum->gadget.ops = &dummy_ops;
- dum->gadget.max_speed = USB_SPEED_SUPER;
+ if (mod_data.is_super_speed)
+ dum->gadget.max_speed = USB_SPEED_SUPER;
+ else if (mod_data.is_high_speed)
+ dum->gadget.max_speed = USB_SPEED_HIGH;
+ else
+ dum->gadget.max_speed = USB_SPEED_FULL;
dum->gadget.dev.parent = &pdev->dev;
init_dummy_udc_hw(dum);
list_add_tail(&urbp->urbp_list, &dum_hcd->urbp_list);
urb->hcpriv = urbp;
+ if (!dum_hcd->next_frame_urbp)
+ dum_hcd->next_frame_urbp = urbp;
if (usb_pipetype(urb->pipe) == PIPE_CONTROL)
urb->error_count = 1; /* mark as a new urb */
if (!is_active((dum->gadget.speed == USB_SPEED_SUPER ?
dum->ss_hcd : dum->hs_hcd)))
return NULL;
+ if (!dum->ints_enabled)
+ return NULL;
if ((address & ~USB_DIR_IN) == 0)
return &dum->ep[0];
for (i = 1; i < DUMMY_ENDPOINTS; i++) {
spin_unlock_irqrestore(&dum->lock, flags);
return;
}
+ dum_hcd->next_frame_urbp = NULL;
for (i = 0; i < DUMMY_ENDPOINTS; i++) {
if (!ep_info[i].name)
int type;
int status = -EINPROGRESS;
+ /* stop when we reach URBs queued after the timer interrupt */
+ if (urbp == dum_hcd->next_frame_urbp)
+ break;
+
urb = urbp->urb;
if (urb->unlinked)
goto return_urb;
* until setup() returns; no reentrancy issues etc.
*/
if (value > 0) {
+ ++dum->callback_usage;
spin_unlock(&dum->lock);
value = dum->driver->setup(&dum->gadget,
&setup);
spin_lock(&dum->lock);
+ --dum->callback_usage;
if (value >= 0) {
/* no delays (max 64KB data stage) */
.product_desc = "Dummy host controller",
.hcd_priv_size = sizeof(struct dummy_hcd),
- .flags = HCD_USB3 | HCD_SHARED,
-
.reset = dummy_setup,
.start = dummy_start,
.stop = dummy_stop,
dev_info(&pdev->dev, "%s, driver " DRIVER_VERSION "\n", driver_desc);
dum = *((void **)dev_get_platdata(&pdev->dev));
- if (!mod_data.is_super_speed)
+ if (mod_data.is_super_speed)
+ dummy_hcd.flags = HCD_USB3 | HCD_SHARED;
+ else if (mod_data.is_high_speed)
dummy_hcd.flags = HCD_USB2;
+ else
+ dummy_hcd.flags = HCD_USB11;
hs_hcd = usb_create_hcd(&dummy_hcd, &pdev->dev, dev_name(&pdev->dev));
if (!hs_hcd)
return -ENOMEM;
usb3_ep->ep.maxpacket);
u8 *buf = usb3_req->req.buf + usb3_req->req.actual;
u32 tmp = 0;
- bool is_last;
+	bool is_last = !len;
if (usb3_wait_pipe_status(usb3_ep, PX_STA_BUFSTS) < 0)
return -EBUSY;
usb3_write(usb3, tmp, fifo_reg);
}
- is_last = usb3_is_transfer_complete(usb3_ep, usb3_req);
+ if (!is_last)
+ is_last = usb3_is_transfer_complete(usb3_ep, usb3_req);
/* Send the data */
usb3_set_px_con_send(usb3_ep, len, is_last);
usb3_set_p0_con_for_ctrl_read_data(usb3);
} else {
usb3_clear_bit(usb3, P0_MOD_DIR, USB3_P0_MOD);
- usb3_set_p0_con_for_ctrl_write_data(usb3);
+ if (usb3_req->req.length)
+ usb3_set_p0_con_for_ctrl_write_data(usb3);
}
usb3_p0_xfer(usb3_ep, usb3_req);
static u32 usb3_calc_rammap_val(struct renesas_usb3_ep *usb3_ep,
const struct usb_endpoint_descriptor *desc)
{
- return usb3_ep->rammap_val | PN_RAMMAP_MPKT(usb_endpoint_maxp(desc));
+	int i;
+	static const u32 max_packet_array[] = {8, 16, 32, 64, 512};
+	u32 mpkt = PN_RAMMAP_MPKT(1024);
+
+	for (i = 0; i < ARRAY_SIZE(max_packet_array); i++) {
+		if (usb_endpoint_maxp(desc) <= max_packet_array[i]) {
+			/* use the smallest supported size that fits */
+			mpkt = PN_RAMMAP_MPKT(max_packet_array[i]);
+			break;
+		}
+	}
+
+ return usb3_ep->rammap_val | mpkt;
}
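The loop above rounds the endpoint's wMaxPacketSize up to the nearest RAM-map bucket the controller supports (8/16/32/64/512 bytes, falling back to 1024). A standalone check of that rounding rule, assuming only the bucket values visible in the patch:

#include <stdio.h>

static unsigned int round_to_bucket(unsigned int maxp)
{
	static const unsigned int buckets[] = {8, 16, 32, 64, 512};
	unsigned int i;

	for (i = 0; i < sizeof(buckets) / sizeof(buckets[0]); i++)
		if (maxp <= buckets[i])
			return buckets[i];	/* smallest bucket that fits */
	return 1024;				/* fallback, as in the patch */
}

int main(void)
{
	static const unsigned int tests[] = {8, 9, 64, 65, 512, 513, 1024};
	unsigned int i;

	for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
		printf("maxp=%4u -> bucket=%u\n", tests[i],
		       round_to_bucket(tests[i]));
	return 0;
}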
static int usb3_enable_pipe_n(struct renesas_usb3_ep *usb3_ep,
if ((value & ASMT_CONTROL_WRITE_BIT) == 0)
return 0;
- usleep_range(40, 60);
+ udelay(50);
}
dev_warn(&pdev->dev, "%s: check_write_ready timeout", __func__);
*
* Takes care of the handoff between the Pre-OS (i.e. BIOS) and the OS.
* It signals to the BIOS that the OS wants control of the host controller,
- * and then waits 5 seconds for the BIOS to hand over control.
+ * and then waits 1 second for the BIOS to hand over control.
* If we timeout, assume the BIOS is broken and take control anyway.
*/
static void quirk_usb_handoff_xhci(struct pci_dev *pdev)
if (val & XHCI_HC_BIOS_OWNED) {
writel(val | XHCI_HC_OS_OWNED, base + ext_cap_offset);
- /* Wait for 5 seconds with 10 microsecond polling interval */
+ /* Wait for 1 second with 10 microsecond polling interval */
timeout = handshake(base + ext_cap_offset, XHCI_HC_BIOS_OWNED,
- 0, 5000, 10);
+ 0, 1000000, 10);
/* Assume a buggy BIOS and take HC ownership anyway */
if (timeout) {
* operational or runtime registers. Wait 5 seconds and no more.
*/
timeout = handshake(op_reg_base + XHCI_STS_OFFSET, XHCI_STS_CNR, 0,
- 5000, 10);
+ 5000000, 10);
/* Assume a buggy HC and start HC initialization anyway */
if (timeout) {
val = readl(op_reg_base + XHCI_STS_OFFSET);
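handshake() polls a register until the BIOS-owned bit clears; with the timeouts above now expressed in microseconds, 1000000 and 5000000 correspond to the 1 and 5 seconds the comments describe. A generic sketch of such a poll loop (generic names, not necessarily the xhci helper's exact signature):

#include <linux/types.h>
#include <linux/io.h>
#include <linux/delay.h>
#include <linux/errno.h>

/*
 * Poll @ptr until (readl(ptr) & mask) == done, waiting at most
 * @wait_usec microseconds and delaying @delay_usec between reads.
 * Returns 0 on success, -ETIMEDOUT on timeout.
 */
static int poll_handshake(void __iomem *ptr, u32 mask, u32 done,
			  int wait_usec, int delay_usec)
{
	do {
		if ((readl(ptr) & mask) == done)
			return 0;
		udelay(delay_usec);
		wait_usec -= delay_usec;
	} while (wait_usec > 0);

	return -ETIMEDOUT;
}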
/* If PSI table exists, add the custom speed attributes from it */
if (usb3_1 && xhci->usb3_rhub.psi_count) {
- u32 ssp_cap_base, bm_attrib, psi;
+ u32 ssp_cap_base, bm_attrib, psi, psi_mant, psi_exp;
int offset;
ssp_cap_base = USB_DT_BOS_SIZE + USB_DT_USB_SS_CAP_SIZE;
for (i = 0; i < xhci->usb3_rhub.psi_count; i++) {
psi = xhci->usb3_rhub.psi[i];
psi &= ~USB_SSP_SUBLINK_SPEED_RSVD;
+ psi_exp = XHCI_EXT_PORT_PSIE(psi);
+ psi_mant = XHCI_EXT_PORT_PSIM(psi);
+
+			/* Shift to Gbps and set SSP Link BIT(14) if 10 Gbps */
+ for (; psi_exp < 3; psi_exp++)
+ psi_mant /= 1000;
+ if (psi_mant >= 10)
+ psi |= BIT(14);
+
if ((psi & PLT_MASK) == PLT_SYM) {
/* Symmetric, create SSA RX and TX from one PSI entry */
put_unaligned_le32(psi, &buf[offset]);
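The PSI exponent encodes the bit-rate unit (0 = bps, 1 = Kb/s, 2 = Mb/s, 3 = Gb/s), so the loop above divides the mantissa by 1000 until it is expressed in Gbps before testing for the 10 Gbps SuperSpeedPlus case. A standalone illustration of the conversion:

#include <stdio.h>

/* Normalize a PSI mantissa/exponent pair to Gbps, as the loop above does. */
static unsigned int psi_to_gbps(unsigned int mant, unsigned int exp)
{
	for (; exp < 3; exp++)	/* 0 = bps, 1 = Kb/s, 2 = Mb/s, 3 = Gb/s */
		mant /= 1000;
	return mant;
}

int main(void)
{
	/* 5 Gbps stored as 5000 Mb/s (exp=2), 10 Gbps stored as exp=3 */
	printf("%u Gbps\n", psi_to_gbps(5000, 2));	/* -> 5 */
	printf("%u Gbps\n", psi_to_gbps(10, 3));	/* -> 10 */
	return 0;
}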
t2 |= PORT_WKOC_E | PORT_WKCONN_E;
t2 &= ~PORT_WKDISC_E;
}
- if ((xhci->quirks & XHCI_U2_DISABLE_WAKE) &&
- (hcd->speed < HCD_USB3))
- t2 &= ~PORT_WAKE_BITS;
} else
t2 &= ~PORT_WAKE_BITS;
#define PCI_DEVICE_ID_INTEL_APL_XHCI 0x5aa8
#define PCI_DEVICE_ID_INTEL_DNV_XHCI 0x19d0
-#define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
-#define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
-#define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb
-#define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc
-
#define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
static const char hcd_name[] = "xhci_hcd";
if (pdev->vendor == PCI_VENDOR_ID_AMD)
xhci->quirks |= XHCI_TRUST_TX_LENGTH;
- if ((pdev->vendor == PCI_VENDOR_ID_AMD) &&
- ((pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4) ||
- (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_3) ||
- (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_2) ||
- (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_1)))
- xhci->quirks |= XHCI_U2_DISABLE_WAKE;
-
if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
xhci->quirks |= XHCI_LPM_SUPPORT;
xhci->quirks |= XHCI_INTEL_HOST;
* 2. xhci_plat is child of a device from firmware (dwc3-plat)
* 3. xhci_plat is grandchild of a pci device (dwc3-pci)
*/
- sysdev = &pdev->dev;
- if (sysdev->parent && !sysdev->of_node && sysdev->parent->of_node)
- sysdev = sysdev->parent;
+ for (sysdev = &pdev->dev; sysdev; sysdev = sysdev->parent) {
+ if (is_of_node(sysdev->fwnode) ||
+ is_acpi_device_node(sysdev->fwnode))
+ break;
#ifdef CONFIG_PCI
- else if (sysdev->parent && sysdev->parent->parent &&
- sysdev->parent->parent->bus == &pci_bus_type)
- sysdev = sysdev->parent->parent;
+ else if (sysdev->bus == &pci_bus_type)
+ break;
#endif
+ }
+
+ if (!sysdev)
+ sysdev = &pdev->dev;
/* Try to set 64-bit DMA first */
if (WARN_ON(!sysdev->dma_mask))
if (xhci->quirks & XHCI_MTK_HOST) {
ret = xhci_mtk_add_ep_quirk(hcd, udev, ep);
if (ret < 0) {
- xhci_free_endpoint_ring(xhci, virt_dev, ep_index);
+ xhci_ring_free(xhci, virt_dev->eps[ep_index].new_ring);
+ virt_dev->eps[ep_index].new_ring = NULL;
return ret;
}
}
#define EP_MAXPSTREAMS(p) (((p) << 10) & EP_MAXPSTREAMS_MASK)
/* Endpoint is set up with a Linear Stream Array (vs. Secondary Stream Array) */
#define EP_HAS_LSA (1 << 15)
+/* hosts with LEC=1 use bits 31:24 as ESIT high bits. */
+#define CTX_TO_MAX_ESIT_PAYLOAD_HI(p) (((p) >> 24) & 0xff)
/* ep_info2 bitmasks */
/*
static inline unsigned int hcd_index(struct usb_hcd *hcd)
{
- if (hcd->speed == HCD_USB3)
+ if (hcd->speed >= HCD_USB3)
return 0;
else
return 1;
/* For controller with a broken Port Disable implementation */
#define XHCI_BROKEN_PORT_PED (1 << 25)
#define XHCI_LIMIT_ENDPOINT_INTERVAL_7 (1 << 26)
-#define XHCI_U2_DISABLE_WAKE (1 << 27)
+/* Reserved. It was XHCI_U2_DISABLE_WAKE */
#define XHCI_ASMEDIA_MODIFY_FLOWCONTROL (1 << 28)
unsigned int num_active_eps;
u8 lsa;
u8 hid;
- esit = EP_MAX_ESIT_PAYLOAD_HI(info) << 16 |
- EP_MAX_ESIT_PAYLOAD_LO(tx_info);
+ esit = CTX_TO_MAX_ESIT_PAYLOAD_HI(info) << 16 |
+ CTX_TO_MAX_ESIT_PAYLOAD(tx_info);
ep_state = info & EP_STATE_MASK;
max_pstr = info & EP_MAXPSTREAMS_MASK;
struct usbhs_fifo *fifo)
{
struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+ int ret = 0;
- if (!usbhs_pipe_is_dcp(pipe))
- usbhsf_fifo_barrier(priv, fifo);
+ if (!usbhs_pipe_is_dcp(pipe)) {
+ /*
+ * This driver checks the pipe condition first to avoid -EBUSY
+ * from usbhsf_fifo_barrier() with about 10 msec delay in
+ * the interrupt handler if the pipe is RX direction and empty.
+ */
+ if (usbhs_pipe_is_dir_in(pipe))
+ ret = usbhs_pipe_is_accessible(pipe);
+ if (!ret)
+ ret = usbhsf_fifo_barrier(priv, fifo);
+ }
- usbhs_write(priv, fifo->ctr, BCLR);
+ /*
+	 * If the pipe is non-DCP, this driver should set BCLR when
+ * usbhsf_fifo_barrier() returns 0.
+ */
+ if (!ret)
+ usbhs_write(priv, fifo->ctr, BCLR);
}
static int usbhsf_fifo_rcv_len(struct usbhs_priv *priv,
if (result == USB_STOR_TRANSPORT_GOOD) {
srb->result = SAM_STAT_GOOD;
srb->sense_buffer[0] = 0x0;
+ }
+
+ /*
+ * ATA-passthru commands use sense data to report
+ * the command completion status, and often devices
+ * return Check Condition status when nothing is
+ * wrong.
+ */
+ else if (srb->cmnd[0] == ATA_16 ||
+ srb->cmnd[0] == ATA_12) {
+ /* leave the data alone */
+ }
/*
* If there was a problem, report an unspecified
* hardware error to prevent the higher layers from
* entering an infinite retry loop.
*/
- } else {
+ else {
srb->result = DID_ERROR << 16;
if ((sshdr.response_code & 0x72) == 0x72)
srb->sense_buffer[1] = HARDWARE_ERROR;
intf->desc.bInterfaceProtocol == USB_PR_UAS);
}
-static int uas_find_uas_alt_setting(struct usb_interface *intf)
+static struct usb_host_interface *uas_find_uas_alt_setting(
+ struct usb_interface *intf)
{
int i;
struct usb_host_interface *alt = &intf->altsetting[i];
if (uas_is_interface(alt))
- return alt->desc.bAlternateSetting;
+ return alt;
}
- return -ENODEV;
+ return NULL;
}
static int uas_find_endpoints(struct usb_host_interface *alt,
struct usb_device *udev = interface_to_usbdev(intf);
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
unsigned long flags = id->driver_info;
- int r, alt;
-
+ struct usb_host_interface *alt;
+ int r;
alt = uas_find_uas_alt_setting(intf);
- if (alt < 0)
+ if (!alt)
return 0;
- r = uas_find_endpoints(&intf->altsetting[alt], eps);
+ r = uas_find_endpoints(alt, eps);
if (r < 0)
return 0;
static int uas_switch_interface(struct usb_device *udev,
struct usb_interface *intf)
{
- int alt;
+ struct usb_host_interface *alt;
alt = uas_find_uas_alt_setting(intf);
- if (alt < 0)
- return alt;
+ if (!alt)
+ return -ENODEV;
- return usb_set_interface(udev,
- intf->altsetting[0].desc.bInterfaceNumber, alt);
+ return usb_set_interface(udev, alt->desc.bInterfaceNumber,
+ alt->desc.bAlternateSetting);
}
static int uas_configure_endpoints(struct uas_dev_info *devinfo)
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_SANE_SENSE ),
+/* Reported by Kris Lindgren <kris.lindgren@gmail.com> */
+UNUSUAL_DEV( 0x0bc2, 0x3332, 0x0000, 0x9999,
+ "Seagate",
+ "External",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_WP_DETECT ),
+
UNUSUAL_DEV( 0x0d49, 0x7310, 0x0000, 0x9999,
"Maxtor",
"USB to SATA",
if (iface->cur_altsetting->desc.bNumEndpoints < 1)
return -ENODEV;
+ if (!usb_endpoint_xfer_int(&iface->cur_altsetting->endpoint[0].desc))
+ return -ENODEV;
result = -ENOMEM;
uwb_rc = uwb_rc_alloc();
/** Start the UWB daemon */
void uwbd_start(struct uwb_rc *rc)
{
- rc->uwbd.task = kthread_run(uwbd, rc, "uwbd");
- if (rc->uwbd.task == NULL)
+ struct task_struct *task = kthread_run(uwbd, rc, "uwbd");
+ if (IS_ERR(task)) {
+ rc->uwbd.task = NULL;
printk(KERN_ERR "UWB: Cannot start management daemon; "
"UWB won't work\n");
- else
+ } else {
+ rc->uwbd.task = task;
rc->uwbd.pid = rc->uwbd.task->pid;
+ }
}
/* Stop the UWB daemon and free any unprocessed events */
void uwbd_stop(struct uwb_rc *rc)
{
- kthread_stop(rc->uwbd.task);
+ if (rc->uwbd.task)
+ kthread_stop(rc->uwbd.task);
uwbd_flush(rc);
}
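kthread_run() returns ERR_PTR() on failure, never NULL, which is the bug the hunk above fixes; kthread_stop() in turn must only be called on a thread that actually started. The correct pattern, reduced to a sketch (thread_fn, priv and the daemon name are placeholders):

#include <linux/kthread.h>
#include <linux/err.h>

static struct task_struct *start_daemon(int (*thread_fn)(void *), void *priv)
{
	struct task_struct *task = kthread_run(thread_fn, priv, "mydaemon");

	if (IS_ERR(task))	/* kthread_run() never returns NULL */
		return NULL;	/* caller treats NULL as "not running" */
	return task;
}

static void stop_daemon(struct task_struct *task)
{
	if (task)		/* only stop a thread that actually started */
		kthread_stop(task);
}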
static int bar_write(struct pci_dev *dev, int offset, u32 value, void *data)
{
struct pci_bar_info *bar = data;
+ unsigned int pos = (offset - PCI_BASE_ADDRESS_0) / 4;
+ const struct resource *res = dev->resource;
+ u32 mask;
if (unlikely(!bar)) {
pr_warn(DRV_NAME ": driver data not found for %s\n",
/* A write to obtain the length must happen as a 32-bit write.
* This does not (yet) support writing individual bytes
*/
- if (value == ~0)
+ if (res[pos].flags & IORESOURCE_IO)
+ mask = ~PCI_BASE_ADDRESS_IO_MASK;
+ else if (pos && (res[pos - 1].flags & IORESOURCE_MEM_64))
+ mask = 0;
+ else
+ mask = ~PCI_BASE_ADDRESS_MEM_MASK;
+ if ((value | mask) == ~0U)
bar->which = 1;
else {
u32 tmpval;
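The mask computed above mirrors how PCI BAR sizing works: the guest writes all-ones, and the device returns zeros in the size-granularity bits while the flag bits (low 4 bits of a memory BAR, low 2 of an I/O BAR, none in the upper half of a 64-bit BAR) stay readable. A hedged userspace sketch of recovering a size from such a read-back value:

#include <stdio.h>
#include <stdint.h>

#define PCI_BASE_ADDRESS_MEM_MASK	(~0x0fUL)	/* low 4 bits are flags */

/* Given the value read back after writing ~0 to a 32-bit memory BAR,
 * recover the BAR size (sketch for illustration only). */
static uint32_t bar_size_from_readback(uint32_t readback)
{
	uint32_t size_bits = readback & PCI_BASE_ADDRESS_MEM_MASK;

	return size_bits ? ~size_bits + 1 : 0;	/* two's complement */
}

int main(void)
{
	/* a 1 MiB memory BAR reads back 0xfff00000 plus its flag bits */
	printf("size = 0x%x\n",
	       (unsigned int)bar_size_from_readback(0xfff00000));
	return 0;
}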
return err;
}
-static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
- grant_ref_t *gnt_refs,
- unsigned int nr_grefs,
- void **vaddr)
-{
- struct xenbus_map_node *node;
- struct vm_struct *area;
- pte_t *ptes[XENBUS_MAX_RING_GRANTS];
- phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
- int err = GNTST_okay;
- int i;
- bool leaked;
-
- *vaddr = NULL;
-
- if (nr_grefs > XENBUS_MAX_RING_GRANTS)
- return -EINVAL;
-
- node = kzalloc(sizeof(*node), GFP_KERNEL);
- if (!node)
- return -ENOMEM;
-
- area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, ptes);
- if (!area) {
- kfree(node);
- return -ENOMEM;
- }
-
- for (i = 0; i < nr_grefs; i++)
- phys_addrs[i] = arbitrary_virt_to_machine(ptes[i]).maddr;
-
- err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,
- phys_addrs,
- GNTMAP_host_map | GNTMAP_contains_pte,
- &leaked);
- if (err)
- goto failed;
-
- node->nr_handles = nr_grefs;
- node->pv.area = area;
-
- spin_lock(&xenbus_valloc_lock);
- list_add(&node->next, &xenbus_valloc_pages);
- spin_unlock(&xenbus_valloc_lock);
-
- *vaddr = area->addr;
- return 0;
-
-failed:
- if (!leaked)
- free_vm_area(area);
- else
- pr_alert("leaking VM area %p size %u page(s)", area, nr_grefs);
-
- kfree(node);
- return err;
-}
-
struct map_ring_valloc_hvm
{
unsigned int idx;
}
EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
+#ifdef CONFIG_XEN_PV
+static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
+ grant_ref_t *gnt_refs,
+ unsigned int nr_grefs,
+ void **vaddr)
+{
+ struct xenbus_map_node *node;
+ struct vm_struct *area;
+ pte_t *ptes[XENBUS_MAX_RING_GRANTS];
+ phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
+ int err = GNTST_okay;
+ int i;
+ bool leaked;
+
+ *vaddr = NULL;
+
+ if (nr_grefs > XENBUS_MAX_RING_GRANTS)
+ return -EINVAL;
+
+ node = kzalloc(sizeof(*node), GFP_KERNEL);
+ if (!node)
+ return -ENOMEM;
+
+ area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, ptes);
+ if (!area) {
+ kfree(node);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < nr_grefs; i++)
+ phys_addrs[i] = arbitrary_virt_to_machine(ptes[i]).maddr;
+
+ err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,
+ phys_addrs,
+ GNTMAP_host_map | GNTMAP_contains_pte,
+ &leaked);
+ if (err)
+ goto failed;
+
+ node->nr_handles = nr_grefs;
+ node->pv.area = area;
+
+ spin_lock(&xenbus_valloc_lock);
+ list_add(&node->next, &xenbus_valloc_pages);
+ spin_unlock(&xenbus_valloc_lock);
+
+ *vaddr = area->addr;
+ return 0;
+
+failed:
+ if (!leaked)
+ free_vm_area(area);
+ else
+ pr_alert("leaking VM area %p size %u page(s)", area, nr_grefs);
+
+ kfree(node);
+ return err;
+}
+
static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)
{
struct xenbus_map_node *node;
return err;
}
+static const struct xenbus_ring_ops ring_ops_pv = {
+ .map = xenbus_map_ring_valloc_pv,
+ .unmap = xenbus_unmap_ring_vfree_pv,
+};
+#endif
+
struct unmap_ring_vfree_hvm
{
unsigned int idx;
}
EXPORT_SYMBOL_GPL(xenbus_read_driver_state);
-static const struct xenbus_ring_ops ring_ops_pv = {
- .map = xenbus_map_ring_valloc_pv,
- .unmap = xenbus_unmap_ring_vfree_pv,
-};
-
static const struct xenbus_ring_ops ring_ops_hvm = {
.map = xenbus_map_ring_valloc_hvm,
.unmap = xenbus_unmap_ring_vfree_hvm,
void __init xenbus_ring_ops_init(void)
{
+#ifdef CONFIG_XEN_PV
if (!xen_feature(XENFEAT_auto_translated_physmap))
ring_ops = &ring_ops_pv;
else
+#endif
ring_ops = &ring_ops_hvm;
}
int size; /* size of magic/mask */
char *magic; /* magic or filename extension */
char *mask; /* mask, NULL for exact match */
- char *interpreter; /* filename of interpreter */
+ const char *interpreter; /* filename of interpreter */
char *name;
struct dentry *dentry;
struct file *interp_file;
{
Node *fmt;
struct file *interp_file = NULL;
- char iname[BINPRM_BUF_SIZE];
- const char *iname_addr = iname;
int retval;
int fd_binary = -1;
retval = -ENOEXEC;
if (!enabled)
- goto ret;
+ return retval;
/* to keep locking time low, we copy the interpreter string */
read_lock(&entries_lock);
fmt = check_file(bprm);
if (fmt)
- strlcpy(iname, fmt->interpreter, BINPRM_BUF_SIZE);
+ dget(fmt->dentry);
read_unlock(&entries_lock);
if (!fmt)
- goto ret;
+ return retval;
/* Need to be able to load the file after exec */
+ retval = -ENOENT;
if (bprm->interp_flags & BINPRM_FLAGS_PATH_INACCESSIBLE)
- return -ENOENT;
+ goto ret;
if (!(fmt->flags & MISC_FMT_PRESERVE_ARGV0)) {
retval = remove_arg_zero(bprm);
bprm->argc++;
/* add the interp as argv[0] */
- retval = copy_strings_kernel(1, &iname_addr, bprm);
+ retval = copy_strings_kernel(1, &fmt->interpreter, bprm);
if (retval < 0)
goto error;
bprm->argc++;
/* Update interp in case binfmt_script needs it. */
- retval = bprm_change_interp(iname, bprm);
+ retval = bprm_change_interp(fmt->interpreter, bprm);
if (retval < 0)
goto error;
- if (fmt->flags & MISC_FMT_OPEN_FILE && fmt->interp_file) {
+ if (fmt->flags & MISC_FMT_OPEN_FILE) {
interp_file = filp_clone_open(fmt->interp_file);
if (!IS_ERR(interp_file))
deny_write_access(interp_file);
} else {
- interp_file = open_exec(iname);
+ interp_file = open_exec(fmt->interpreter);
}
retval = PTR_ERR(interp_file);
if (IS_ERR(interp_file))
goto error;
ret:
+ dput(fmt->dentry);
return retval;
error:
if (fd_binary > 0)
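The dget()/dput() pair above pins the Node through its dentry across the unlocked section, so fmt->interpreter can no longer be freed by a concurrent entry removal while the strings are copied, and the old on-stack iname buffer becomes unnecessary. The lifetime rule, sketched with the file's own Node/check_file/entries_lock and a hypothetical consumer:

#include <linux/binfmts.h>
#include <linux/dcache.h>
#include <linux/errno.h>

static int use_interpreter(struct linux_binprm *bprm)
{
	Node *fmt;

	read_lock(&entries_lock);
	fmt = check_file(bprm);		/* entry may be removed once we unlock */
	if (fmt)
		dget(fmt->dentry);	/* pin the Node via its dentry */
	read_unlock(&entries_lock);

	if (!fmt)
		return -ENOEXEC;

	/* fmt->interpreter is stable here; no local copy is needed */
	do_something_with(fmt->interpreter);	/* hypothetical consumer */

	dput(fmt->dentry);		/* the Node may now be freed */
	return 0;
}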
static void bm_evict_inode(struct inode *inode)
{
+ Node *e = inode->i_private;
+
+ if (e->flags & MISC_FMT_OPEN_FILE)
+ filp_close(e->interp_file, NULL);
+
clear_inode(inode);
- kfree(inode->i_private);
+ kfree(e);
}
static void kill_node(Node *e)
struct dentry *dentry;
write_lock(&entries_lock);
- dentry = e->dentry;
- if (dentry) {
- list_del_init(&e->list);
- e->dentry = NULL;
- }
+ list_del_init(&e->list);
write_unlock(&entries_lock);
- if ((e->flags & MISC_FMT_OPEN_FILE) && e->interp_file) {
- filp_close(e->interp_file, NULL);
- e->interp_file = NULL;
- }
-
- if (dentry) {
- drop_nlink(d_inode(dentry));
- d_drop(dentry);
- dput(dentry);
- simple_release_fs(&bm_mnt, &entry_count);
- }
+ dentry = e->dentry;
+ drop_nlink(d_inode(dentry));
+ d_drop(dentry);
+ dput(dentry);
+ simple_release_fs(&bm_mnt, &entry_count);
}
/* /<entry> */
root = file_inode(file)->i_sb->s_root;
inode_lock(d_inode(root));
- kill_node(e);
+ if (!list_empty(&e->list))
+ kill_node(e);
inode_unlock(d_inode(root));
break;
inode_lock(d_inode(root));
while (!list_empty(&entries))
- kill_node(list_entry(entries.next, Node, list));
+ kill_node(list_first_entry(&entries, Node, list));
inode_unlock(d_inode(root));
break;
const char *i_arg, *i_name;
char *cp;
struct file *file;
- char interp[BINPRM_BUF_SIZE];
int retval;
if ((bprm->buf[0] != '#') || (bprm->buf[1] != '!'))
break;
}
for (cp = bprm->buf+2; (*cp == ' ') || (*cp == '\t'); cp++);
- if (*cp == '\0')
+ if (*cp == '\0')
return -ENOEXEC; /* No interpreter name found */
i_name = cp;
i_arg = NULL;
*cp++ = '\0';
if (*cp)
i_arg = cp;
- strcpy (interp, i_name);
/*
* OK, we've parsed out the interpreter name and
* (optional) argument.
if (retval)
return retval;
retval = copy_strings_kernel(1, &bprm->interp, bprm);
- if (retval < 0) return retval;
+ if (retval < 0)
+ return retval;
bprm->argc++;
if (i_arg) {
retval = copy_strings_kernel(1, &i_arg, bprm);
- if (retval < 0) return retval;
+ if (retval < 0)
+ return retval;
bprm->argc++;
}
retval = copy_strings_kernel(1, &i_name, bprm);
- if (retval) return retval;
+ if (retval)
+ return retval;
bprm->argc++;
- retval = bprm_change_interp(interp, bprm);
+ retval = bprm_change_interp(i_name, bprm);
if (retval < 0)
return retval;
/*
* OK, now restart the process with the interpreter's dentry.
*/
- file = open_exec(interp);
+ file = open_exec(i_name);
if (IS_ERR(file))
return PTR_ERR(file);
struct inode *inode;
struct page *page;
unsigned long index;
- int ret;
+ unsigned int mirror = btrfs_io_bio(bio)->mirror_num;
+ int ret = 0;
if (bio->bi_status)
cb->errors = 1;
if (!refcount_dec_and_test(&cb->pending_bios))
goto out;
+ /*
+ * Record the correct mirror_num in cb->orig_bio so that
+ * read-repair can work properly.
+ */
+ ASSERT(btrfs_io_bio(cb->orig_bio));
+ btrfs_io_bio(cb->orig_bio)->mirror_num = mirror;
+ cb->mirror_num = mirror;
+
+ /*
+	 * Some IO in this cb has failed, just skip the checksum as there
+	 * is no way it could be correct.
+ */
+ if (cb->errors == 1)
+ goto csum_failed;
+
inode = cb->inode;
ret = check_compressed_csum(BTRFS_I(inode), cb,
(u64)bio->bi_iter.bi_sector << 9);
#define BTRFS_FS_OPEN 5
#define BTRFS_FS_QUOTA_ENABLED 6
#define BTRFS_FS_QUOTA_ENABLING 7
-#define BTRFS_FS_QUOTA_DISABLING 8
#define BTRFS_FS_UPDATE_UUID_TREE_GEN 9
#define BTRFS_FS_CREATING_FREE_SPACE_TREE 10
#define BTRFS_FS_BTREE_ERR 11
* Indicate that a whole-filesystem exclusive operation is running
* (device replace, resize, device add/delete, balance)
*/
-#define BTRFS_FS_EXCL_OP 14
+#define BTRFS_FS_EXCL_OP 16
struct btrfs_fs_info {
u8 fsid[BTRFS_FSID_SIZE];
u64 flags;
do_barriers = !btrfs_test_opt(fs_info, NOBARRIER);
- backup_super_roots(fs_info);
+
+ /*
+ * max_mirrors == 0 indicates we're from commit_transaction,
+ * not from fsync where the tree roots in fs_info have not
+ * been consistent on disk.
+ */
+ if (max_mirrors == 0)
+ backup_super_roots(fs_info);
sb = fs_info->super_for_commit;
dev_item = &sb->dev_item;
}
}
- bio = btrfs_bio_alloc(bdev, sector << 9);
+ bio = btrfs_bio_alloc(bdev, (u64)sector << 9);
bio_add_page(bio, page, page_size, offset);
bio->bi_end_io = end_io_func;
bio->bi_private = tree;
unsigned int write_flags = 0;
unsigned long nr_written = 0;
- if (wbc->sync_mode == WB_SYNC_ALL)
- write_flags = REQ_SYNC;
+ write_flags = wbc_to_write_flags(wbc);
trace___extent_writepage(page, inode, wbc);
unsigned long i, num_pages;
unsigned long bio_flags = 0;
unsigned long start, end;
- unsigned int write_flags = (epd->sync_io ? REQ_SYNC : 0) | REQ_META;
+ unsigned int write_flags = wbc_to_write_flags(wbc) | REQ_META;
int ret = 0;
clear_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags);
if (epd->bio) {
int ret;
- bio_set_op_attrs(epd->bio, REQ_OP_WRITE,
- epd->sync_io ? REQ_SYNC : 0);
-
ret = submit_one_bio(epd->bio, 0, epd->bio_flags);
BUG_ON(ret < 0); /* -ENOMEM */
epd->bio = NULL;
const u64 offset,
const u64 bytes)
{
+ unsigned long index = offset >> PAGE_SHIFT;
+ unsigned long end_index = (offset + bytes - 1) >> PAGE_SHIFT;
+ struct page *page;
+
+ while (index <= end_index) {
+ page = find_get_page(inode->i_mapping, index);
+ index++;
+ if (!page)
+ continue;
+ ClearPagePrivate2(page);
+ put_page(page);
+ }
return __endio_write_update_ordered(inode, offset + PAGE_SIZE,
bytes - PAGE_SIZE, false);
}
struct btrfs_io_bio *io_bio = btrfs_io_bio(bio);
blk_status_t err = bio->bi_status;
- if (dip->flags & BTRFS_DIO_ORIG_BIO_SUBMITTED) {
+ if (dip->flags & BTRFS_DIO_ORIG_BIO_SUBMITTED)
err = btrfs_subio_endio_read(inode, io_bio, err);
- if (!err)
- bio->bi_status = 0;
- }
unlock_extent(&BTRFS_I(inode)->io_tree, dip->logical_offset,
dip->logical_offset + dip->bytes - 1);
kfree(dip);
- dio_bio->bi_status = bio->bi_status;
+ dio_bio->bi_status = err;
dio_end_io(dio_bio);
if (io_bio->end_io)
btrfs_work_func_t func;
u64 ordered_offset = offset;
u64 ordered_bytes = bytes;
+ u64 last_offset;
int ret;
if (btrfs_is_free_space_inode(BTRFS_I(inode))) {
}
again:
+ last_offset = ordered_offset;
ret = btrfs_dec_test_first_ordered_pending(inode, &ordered,
&ordered_offset,
ordered_bytes,
btrfs_init_work(&ordered->work, func, finish_ordered_fn, NULL, NULL);
btrfs_queue_work(wq, &ordered->work);
out_test:
+ /*
+	 * If btrfs_dec_test_first_ordered_pending() does not find any
+	 * ordered extent in the range, we can exit.
+ */
+ if (ordered_offset == last_offset)
+ return;
/*
* our bio might span multiple ordered extents. If we haven't
* completed the accounting for the whole dio, go back and try again
}
mutex_unlock(&fs_devices->device_list_mutex);
- fi_args->nodesize = fs_info->super_copy->nodesize;
- fi_args->sectorsize = fs_info->super_copy->sectorsize;
- fi_args->clone_alignment = fs_info->super_copy->sectorsize;
+ fi_args->nodesize = fs_info->nodesize;
+ fi_args->sectorsize = fs_info->sectorsize;
+ fi_args->clone_alignment = fs_info->sectorsize;
if (copy_to_user(arg, fi_args, sizeof(*fi_args)))
ret = -EFAULT;
out:
if (ret)
btrfs_cmp_data_free(cmp);
- return 0;
+ return ret;
}
static int btrfs_cmp_data(u64 len, struct cmp_pages *cmp)
ret = PTR_ERR(new_root);
goto out;
}
+ if (!is_fstree(new_root->objectid)) {
+ ret = -ENOENT;
+ goto out;
+ }
path = btrfs_alloc_path();
if (!path) {
}
ret = 0;
out:
- set_bit(BTRFS_FS_QUOTA_DISABLING, &root->fs_info->flags);
btrfs_free_path(path);
return ret;
}
if (!fs_info->quota_root)
goto out;
clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
- set_bit(BTRFS_FS_QUOTA_DISABLING, &fs_info->flags);
btrfs_qgroup_wait_for_completion(fs_info, false);
spin_lock(&fs_info->qgroup_lock);
quota_root = fs_info->quota_root;
}
}
ret = del_qgroup_item(trans, quota_root, qgroupid);
+ if (ret && ret != -ENOENT)
+ goto out;
while (!list_empty(&qgroup->groups)) {
list = list_first_entry(&qgroup->groups,
if (test_and_clear_bit(BTRFS_FS_QUOTA_ENABLING, &fs_info->flags))
set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
- if (test_and_clear_bit(BTRFS_FS_QUOTA_DISABLING, &fs_info->flags))
- clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
spin_lock(&fs_info->qgroup_lock);
while (!list_empty(&fs_info->dirty_qgroups)) {
while (!list_empty(list)) {
reloc_root = list_entry(list->next, struct btrfs_root,
root_list);
+ __del_reloc_root(reloc_root);
free_extent_buffer(reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
reloc_root->node = NULL;
reloc_root->commit_root = NULL;
- __del_reloc_root(reloc_root);
}
}
} else {
btrfs_warn(sctx->send_root->fs_info, "unexpected inode type %o",
(int)(mode & S_IFMT));
- ret = -ENOTSUPP;
+ ret = -EOPNOTSUPP;
goto out;
}
struct extent_map *em, *n;
struct list_head extents;
struct extent_map_tree *tree = &inode->extent_tree;
+ u64 logged_start, logged_end;
u64 test_gen;
int ret = 0;
int num = 0;
down_write(&inode->dio_sem);
write_lock(&tree->lock);
test_gen = root->fs_info->last_trans_committed;
+ logged_start = start;
+ logged_end = end;
list_for_each_entry_safe(em, n, &tree->modified_extents, list) {
list_del_init(&em->list);
-
/*
* Just an arbitrary number, this can be really CPU intensive
* once we start getting a lot of extents, and really once we
if (em->generation <= test_gen)
continue;
+
+ if (em->start < logged_start)
+ logged_start = em->start;
+ if ((em->start + em->len - 1) > logged_end)
+ logged_end = em->start + em->len - 1;
+
/* Need a ref to keep it from getting evicted from cache */
refcount_inc(&em->refs);
set_bit(EXTENT_FLAG_LOGGING, &em->flags);
}
list_sort(NULL, &extents, extent_cmp);
- btrfs_get_logged_extents(inode, logged_list, start, end);
+ btrfs_get_logged_extents(inode, logged_list, logged_start, logged_end);
/*
* Some ordered extents started by fsync might have completed
* before we could collect them into the list logged_list, which
map_length = length;
btrfs_bio_counter_inc_blocked(fs_info);
- ret = __btrfs_map_block(fs_info, bio_op(bio), logical,
+ ret = __btrfs_map_block(fs_info, btrfs_op(bio), logical,
&map_length, &bbio, mirror_num, 1);
if (ret) {
btrfs_bio_counter_dec(fs_info);
#include <linux/sched.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
-#include <linux/utsname.h>
#include <linux/ratelimit.h>
#include "super.h"
inode = req->r_inode;
ihold(inode);
} else {
- /* req->r_dentry is non-null for LSSNAP request.
- * fall-thru */
- WARN_ON_ONCE(!req->r_dentry);
+ /* req->r_dentry is non-null for LSSNAP request */
+ rcu_read_lock();
+ inode = get_nonsnap_parent(req->r_dentry);
+ rcu_read_unlock();
+ dout("__choose_mds using snapdir's parent %p\n", inode);
}
- }
- if (!inode && req->r_dentry) {
+ } else if (req->r_dentry) {
/* ignore race with rename; old or new d_parent is okay */
struct dentry *parent;
struct inode *dir;
void *p;
const char* metadata[][2] = {
- {"hostname", utsname()->nodename},
- {"kernel_version", utsname()->release},
+ {"hostname", mdsc->nodename},
+ {"kernel_version", init_utsname()->release},
{"entity_id", opt->name ? : ""},
{"root", fsopt->server_path ? : "/"},
{NULL, NULL}
init_rwsem(&mdsc->pool_perm_rwsem);
mdsc->pool_perm_tree = RB_ROOT;
+ strncpy(mdsc->nodename, utsname()->nodename,
+ sizeof(mdsc->nodename) - 1);
return 0;
}
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/refcount.h>
+#include <linux/utsname.h>
#include <linux/ceph/types.h>
#include <linux/ceph/messenger.h>
struct rw_semaphore pool_perm_rwsem;
struct rb_root pool_perm_tree;
+
+ char nodename[__NEW_UTS_LEN + 1];
};
extern const char *ceph_mds_op_name(int op);
realm->ino, realm, snapc, snapc->seq,
(unsigned int) snapc->num_snaps);
- if (realm->cached_context) {
- ceph_put_snap_context(realm->cached_context);
- /* queue realm for cap_snap creation */
- list_add_tail(&realm->dirty_item, dirty_realms);
- }
+ ceph_put_snap_context(realm->cached_context);
realm->cached_context = snapc;
+ /* queue realm for cap_snap creation */
+ list_add_tail(&realm->dirty_item, dirty_realms);
return 0;
fail:
if ((ses->serverDomain == NULL) ||
(ses->serverOS == NULL) ||
(ses->serverNOS == NULL)) {
- seq_printf(m, "\n%d) entry for %s not fully "
- "displayed\n\t", i, ses->serverName);
+ seq_printf(m, "\n%d) Name: %s Uses: %d Capability: 0x%x\tSession Status: %d\t",
+ i, ses->serverName, ses->ses_count,
+ ses->capabilities, ses->status);
+ if (ses->session_flags & SMB2_SESSION_FLAG_IS_GUEST)
+ seq_printf(m, "Guest\t");
+ else if (ses->session_flags & SMB2_SESSION_FLAG_IS_NULL)
+ seq_printf(m, "Anonymous\t");
} else {
seq_printf(m,
"\n%d) Name: %s Domain: %s Uses: %d OS:"
seq_puts(s, ",nocase");
if (tcon->retry)
seq_puts(s, ",hard");
+ else
+ seq_puts(s, ",soft");
if (tcon->use_persistent)
seq_puts(s, ",persistenthandles");
else if (tcon->use_resilient)
exit_cifs_idmap();
#endif
#ifdef CONFIG_CIFS_UPCALL
- unregister_key_type(&cifs_spnego_key_type);
+ exit_cifs_spnego();
#endif
cifs_destroy_request_bufs();
cifs_destroy_mids();
extern const struct export_operations cifs_export_ops;
#endif /* CONFIG_CIFS_NFSD_EXPORT */
-#define CIFS_VERSION "2.09"
+#define CIFS_VERSION "2.10"
#endif /* _CIFSFS_H */
#ifdef CONFIG_CIFS_SMB311
Smb_311,
#endif /* SMB311 */
+ Smb_3any,
+ Smb_default,
Smb_version_err
};
#define SMB21_VERSION_STRING "2.1"
extern struct smb_version_operations smb21_operations;
extern struct smb_version_values smb21_values;
+#define SMBDEFAULT_VERSION_STRING "default"
+extern struct smb_version_values smbdefault_values;
+#define SMB3ANY_VERSION_STRING "3"
+extern struct smb_version_values smb3any_values;
#define SMB30_VERSION_STRING "3.0"
extern struct smb_version_operations smb30_operations;
extern struct smb_version_values smb30_values;
{ Smb_311, SMB311_VERSION_STRING },
{ Smb_311, ALT_SMB311_VERSION_STRING },
#endif /* SMB311 */
+ { Smb_3any, SMB3ANY_VERSION_STRING },
+ { Smb_default, SMBDEFAULT_VERSION_STRING },
{ Smb_version_err, NULL }
};
vol->vals = &smb311_values;
break;
#endif /* SMB311 */
+ case Smb_3any:
+		vol->ops = &smb30_operations; /* currently identical to 3.0 */
+ vol->vals = &smb3any_values;
+ break;
+ case Smb_default:
+		vol->ops = &smb30_operations; /* currently identical to 3.0 */
+ vol->vals = &smbdefault_values;
+ break;
default:
cifs_dbg(VFS, "Unknown vers= option specified: %s\n", value);
return 1;
vol->actimeo = CIFS_DEF_ACTIMEO;
- /* FIXME: add autonegotiation for SMB3 or later rather than just SMB3 */
- vol->ops = &smb30_operations; /* both secure and accepted widely */
- vol->vals = &smb30_values;
+ /* offer SMB2.1 and later (SMB3 etc). Secure and widely accepted */
+ vol->ops = &smb30_operations;
+ vol->vals = &smbdefault_values;
vol->echo_interval = SMB_ECHO_INTERVAL_DEFAULT;
if (got_version == false)
pr_warn("No dialect specified on mount. Default has changed to "
- "a more secure dialect, SMB3 (vers=3.0), from CIFS "
+ "a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS "
"(SMB1). To use the less secure SMB1 dialect to access "
- "old servers which do not support SMB3 specify vers=1.0"
- " on mount. For somewhat newer servers such as Windows "
- "7 try vers=2.1.\n");
+ "old servers which do not support SMB3 (or SMB2.1) specify vers=1.0"
+ " on mount.\n");
kfree(mountdata_copy);
return 0;
if (vol->nosharesock)
return 0;
+ /* BB update this for smb3any and default case */
if ((server->vals != vol->vals) || (server->ops != vol->ops))
return 0;
cifs_dbg(FYI, "Security Mode: 0x%x Capabilities: 0x%x TimeAdjust: %d\n",
server->sec_mode, server->capabilities, server->timeAdj);
+ if (ses->auth_key.response) {
+ cifs_dbg(VFS, "Free previous auth_key.response = %p\n",
+ ses->auth_key.response);
+ kfree(ses->auth_key.response);
+ ses->auth_key.response = NULL;
+ ses->auth_key.len = 0;
+ }
+
if (server->ops->sess_setup)
rc = server->ops->sess_setup(xid, ses, nls_info);
if (backup_cred(cifs_sb))
create_options |= CREATE_OPEN_BACKUP_INTENT;
+ /* O_SYNC also has bit for O_DSYNC so following check picks up either */
+ if (f_flags & O_SYNC)
+ create_options |= CREATE_WRITE_THROUGH;
+
+ if (f_flags & O_DIRECT)
+ create_options |= CREATE_NO_BUFFER;
+
oparms.tcon = tcon;
oparms.cifs_sb = cifs_sb;
oparms.desired_access = desired_access;
struct cifs_tcon *tcon;
unsigned int num, max_num, max_buf;
LOCKING_ANDX_RANGE *buf, *cur;
- int types[] = {LOCKING_ANDX_LARGE_FILES,
- LOCKING_ANDX_SHARED_LOCK | LOCKING_ANDX_LARGE_FILES};
+ static const int types[] = {
+ LOCKING_ANDX_LARGE_FILES,
+ LOCKING_ANDX_SHARED_LOCK | LOCKING_ANDX_LARGE_FILES
+ };
int i;
xid = get_xid();
unsigned int xid)
{
int rc = 0, stored_rc;
- int types[] = {LOCKING_ANDX_LARGE_FILES,
- LOCKING_ANDX_SHARED_LOCK | LOCKING_ANDX_LARGE_FILES};
+ static const int types[] = {
+ LOCKING_ANDX_LARGE_FILES,
+ LOCKING_ANDX_SHARED_LOCK | LOCKING_ANDX_LARGE_FILES
+ };
unsigned int i;
unsigned int max_num, num, max_buf;
LOCKING_ANDX_RANGE *buf, *cur;
fattr->cf_atime = cifs_NTtimeToUnix(info->LastAccessTime);
fattr->cf_mtime = cifs_NTtimeToUnix(info->LastModificationTime);
fattr->cf_ctime = cifs_NTtimeToUnix(info->LastStatusChange);
+ /* old POSIX extensions don't get create time */
+
fattr->cf_mode = le64_to_cpu(info->Permissions);
/*
stat->blksize = CIFS_MAX_MSGSIZE;
stat->ino = CIFS_I(inode)->uniqueid;
+	/* old CIFS Unix Extensions don't return create time */
+ if (CIFS_I(inode)->createtime) {
+ stat->result_mask |= STATX_BTIME;
+ stat->btime =
+ cifs_NTtimeToUnix(cpu_to_le64(CIFS_I(inode)->createtime));
+ }
+
+ stat->attributes_mask |= (STATX_ATTR_COMPRESSED | STATX_ATTR_ENCRYPTED);
+ if (CIFS_I(inode)->cifsAttrs & FILE_ATTRIBUTE_COMPRESSED)
+ stat->attributes |= STATX_ATTR_COMPRESSED;
+ if (CIFS_I(inode)->cifsAttrs & FILE_ATTRIBUTE_ENCRYPTED)
+ stat->attributes |= STATX_ATTR_ENCRYPTED;
+
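With this change a client can retrieve the server-reported creation time through statx(2) when it asks for STATX_BTIME, and the compressed/encrypted bits through stx_attributes. A userspace sketch (statx() is the standard Linux API, available since glibc 2.28):

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct statx stx;

	if (argc < 2)
		return 1;
	if (statx(AT_FDCWD, argv[1], 0, STATX_BTIME, &stx) != 0) {
		perror("statx");
		return 1;
	}
	if (stx.stx_mask & STATX_BTIME)	/* filesystem reported a birth time */
		printf("btime: %lld\n", (long long)stx.stx_btime.tv_sec);
	else
		printf("no birth time reported\n");
	return 0;
}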
/*
* If on a multiuser mount without unix extensions or cifsacl being
* enabled, and the admin hasn't overridden them, set the ownership
return rc;
}
+#ifdef CONFIG_CIFS_XATTR
static ssize_t
move_smb2_ea_to_cifs(char *dst, size_t dst_size,
struct smb2_file_full_ea_info *src, size_t src_size,
return rc;
}
+#endif
static bool
smb2_can_echo(struct TCP_Server_Info *server)
.create_lease_size = sizeof(struct create_lease),
};
+struct smb_version_values smb3any_values = {
+ .version_string = SMB3ANY_VERSION_STRING,
+ .protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+ .req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
+ .large_lock_type = 0,
+ .exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ .shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+ .unlock_lock_type = SMB2_LOCKFLAG_UNLOCK,
+ .header_size = sizeof(struct smb2_hdr),
+ .max_header_size = MAX_SMB2_HDR_SIZE,
+ .read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
+ .lock_cmd = SMB2_LOCK,
+ .cap_unix = 0,
+ .cap_nt_find = SMB2_NT_FIND,
+ .cap_large_files = SMB2_LARGE_FILES,
+ .signing_enabled = SMB2_NEGOTIATE_SIGNING_ENABLED | SMB2_NEGOTIATE_SIGNING_REQUIRED,
+ .signing_required = SMB2_NEGOTIATE_SIGNING_REQUIRED,
+ .create_lease_size = sizeof(struct create_lease_v2),
+};
+
+struct smb_version_values smbdefault_values = {
+ .version_string = SMBDEFAULT_VERSION_STRING,
+ .protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+ .req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
+ .large_lock_type = 0,
+ .exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ .shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+ .unlock_lock_type = SMB2_LOCKFLAG_UNLOCK,
+ .header_size = sizeof(struct smb2_hdr),
+ .max_header_size = MAX_SMB2_HDR_SIZE,
+ .read_rsp_size = sizeof(struct smb2_read_rsp) - 1,
+ .lock_cmd = SMB2_LOCK,
+ .cap_unix = 0,
+ .cap_nt_find = SMB2_NT_FIND,
+ .cap_large_files = SMB2_LARGE_FILES,
+ .signing_enabled = SMB2_NEGOTIATE_SIGNING_ENABLED | SMB2_NEGOTIATE_SIGNING_REQUIRED,
+ .signing_required = SMB2_NEGOTIATE_SIGNING_REQUIRED,
+ .create_lease_size = sizeof(struct create_lease_v2),
+};
+
struct smb_version_values smb30_values = {
.version_string = SMB30_VERSION_STRING,
.protocol_id = SMB30_PROT_ID,
build_encrypt_ctxt((struct smb2_encryption_neg_context *)pneg_ctxt);
req->NegotiateContextOffset = cpu_to_le32(OFFSET_OF_NEG_CONTEXT);
req->NegotiateContextCount = cpu_to_le16(2);
- inc_rfc1001_len(req, 4 + sizeof(struct smb2_preauth_neg_context) + 2
+ inc_rfc1001_len(req, 4 + sizeof(struct smb2_preauth_neg_context)
+ sizeof(struct smb2_encryption_neg_context)); /* calculate hash */
}
#else
req->hdr.sync_hdr.SessionId = 0;
- req->Dialects[0] = cpu_to_le16(ses->server->vals->protocol_id);
-
- req->DialectCount = cpu_to_le16(1); /* One vers= at a time for now */
- inc_rfc1001_len(req, 2);
+ if (strcmp(ses->server->vals->version_string,
+ SMB3ANY_VERSION_STRING) == 0) {
+ req->Dialects[0] = cpu_to_le16(SMB30_PROT_ID);
+ req->Dialects[1] = cpu_to_le16(SMB302_PROT_ID);
+ req->DialectCount = cpu_to_le16(2);
+ inc_rfc1001_len(req, 4);
+ } else if (strcmp(ses->server->vals->version_string,
+ SMBDEFAULT_VERSION_STRING) == 0) {
+ req->Dialects[0] = cpu_to_le16(SMB21_PROT_ID);
+ req->Dialects[1] = cpu_to_le16(SMB30_PROT_ID);
+ req->Dialects[2] = cpu_to_le16(SMB302_PROT_ID);
+ req->DialectCount = cpu_to_le16(3);
+ inc_rfc1001_len(req, 6);
+ } else {
+ /* otherwise send specific dialect */
+ req->Dialects[0] = cpu_to_le16(ses->server->vals->protocol_id);
+ req->DialectCount = cpu_to_le16(1);
+ inc_rfc1001_len(req, 2);
+ }
/* only one of SMB2 signing flags may be set in SMB2 request */
if (ses->sign)
*/
if (rc == -EOPNOTSUPP) {
cifs_dbg(VFS, "Dialect not supported by server. Consider "
- "specifying vers=1.0 or vers=2.1 on mount for accessing"
+ "specifying vers=1.0 or vers=2.0 on mount for accessing"
" older servers\n");
goto neg_exit;
} else if (rc != 0)
goto neg_exit;
+ if (strcmp(ses->server->vals->version_string,
+ SMB3ANY_VERSION_STRING) == 0) {
+ if (rsp->DialectRevision == cpu_to_le16(SMB20_PROT_ID)) {
+ cifs_dbg(VFS,
+ "SMB2 dialect returned but not requested\n");
+ return -EIO;
+ } else if (rsp->DialectRevision == cpu_to_le16(SMB21_PROT_ID)) {
+ cifs_dbg(VFS,
+ "SMB2.1 dialect returned but not requested\n");
+ return -EIO;
+ }
+ } else if (strcmp(ses->server->vals->version_string,
+ SMBDEFAULT_VERSION_STRING) == 0) {
+ if (rsp->DialectRevision == cpu_to_le16(SMB20_PROT_ID)) {
+ cifs_dbg(VFS,
+ "SMB2 dialect returned but not requested\n");
+ return -EIO;
+ } else if (rsp->DialectRevision == cpu_to_le16(SMB21_PROT_ID)) {
+			/* ops default to SMB 3.0, so update them for the 2.1 dialect */
+ ses->server->ops = &smb21_operations;
+ }
+ } else if (le16_to_cpu(rsp->DialectRevision) !=
+ ses->server->vals->protocol_id) {
+ /* if requested single dialect ensure returned dialect matched */
+ cifs_dbg(VFS, "Illegal 0x%x dialect returned: not requested\n",
+ le16_to_cpu(rsp->DialectRevision));
+ return -EIO;
+ }
+
cifs_dbg(FYI, "mode 0x%x\n", rsp->SecurityMode);
- /* BB we may eventually want to match the negotiated vs. requested
- dialect, even though we are only requesting one at a time */
if (rsp->DialectRevision == cpu_to_le16(SMB20_PROT_ID))
cifs_dbg(FYI, "negotiated smb2.0 dialect\n");
else if (rsp->DialectRevision == cpu_to_le16(SMB21_PROT_ID))
}
server->dialect = le16_to_cpu(rsp->DialectRevision);
+ /* BB: add check that dialect was valid given dialect(s) we asked for */
+
/* SMB2 only has an extended negflavor */
server->negflavor = CIFS_NEGFLAVOR_EXTENDED;
/* set it to the maximum buffer size value we can send with 1 credit */
struct validate_negotiate_info_req vneg_inbuf;
struct validate_negotiate_info_rsp *pneg_rsp;
u32 rsplen;
+ u32 inbuflen; /* max of 4 dialects */
cifs_dbg(FYI, "validate negotiate\n");
/*
* validation ioctl must be signed, so no point sending this if we
- * can not sign it. We could eventually change this to selectively
+	 * cannot sign it (i.e. we are not a known user). Even if signing is not
+	 * required (enabled but not negotiated), in those cases we selectively
* sign just this, the first and only signed request on a connection.
- * This is good enough for now since a user who wants better security
- * would also enable signing on the mount. Having validation of
- * negotiate info for signed connections helps reduce attack vectors
+ * Having validation of negotiate info helps reduce attack vectors.
*/
- if (tcon->ses->server->sign == false)
+ if (tcon->ses->session_flags & SMB2_SESSION_FLAG_IS_GUEST)
return 0; /* validation requires signing */
+ if (tcon->ses->user_name == NULL) {
+ cifs_dbg(FYI, "Can't validate negotiate: null user mount\n");
+ return 0; /* validation requires signing */
+ }
+
+ if (tcon->ses->session_flags & SMB2_SESSION_FLAG_IS_NULL)
+ cifs_dbg(VFS, "Unexpected null user (anonymous) auth flag sent by server\n");
+
vneg_inbuf.Capabilities =
cpu_to_le32(tcon->ses->server->vals->req_capabilities);
memcpy(vneg_inbuf.Guid, tcon->ses->server->client_guid,
else
vneg_inbuf.SecurityMode = 0;
- vneg_inbuf.DialectCount = cpu_to_le16(1);
- vneg_inbuf.Dialects[0] =
- cpu_to_le16(tcon->ses->server->vals->protocol_id);
+
+ if (strcmp(tcon->ses->server->vals->version_string,
+ SMB3ANY_VERSION_STRING) == 0) {
+ vneg_inbuf.Dialects[0] = cpu_to_le16(SMB30_PROT_ID);
+ vneg_inbuf.Dialects[1] = cpu_to_le16(SMB302_PROT_ID);
+ vneg_inbuf.DialectCount = cpu_to_le16(2);
+ /* structure is big enough for 3 dialects, sending only 2 */
+ inbuflen = sizeof(struct validate_negotiate_info_req) - 2;
+ } else if (strcmp(tcon->ses->server->vals->version_string,
+ SMBDEFAULT_VERSION_STRING) == 0) {
+ vneg_inbuf.Dialects[0] = cpu_to_le16(SMB21_PROT_ID);
+ vneg_inbuf.Dialects[1] = cpu_to_le16(SMB30_PROT_ID);
+ vneg_inbuf.Dialects[2] = cpu_to_le16(SMB302_PROT_ID);
+ vneg_inbuf.DialectCount = cpu_to_le16(3);
+ /* structure is big enough for 3 dialects */
+ inbuflen = sizeof(struct validate_negotiate_info_req);
+ } else {
+ /* otherwise specific dialect was requested */
+ vneg_inbuf.Dialects[0] =
+ cpu_to_le16(tcon->ses->server->vals->protocol_id);
+ vneg_inbuf.DialectCount = cpu_to_le16(1);
+ /* structure is big enough for 3 dialects, sending only 1 */
+ inbuflen = sizeof(struct validate_negotiate_info_req) - 4;
+ }
rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
while (sess_data->func)
sess_data->func(sess_data);
+ if ((ses->session_flags & SMB2_SESSION_FLAG_IS_GUEST) && (ses->sign))
+ cifs_dbg(VFS, "signing requested but authenticated as guest\n");
rc = sess_data->result;
out:
kfree(sess_data);
struct cifs_tcon *tcon = oparms->tcon;
struct cifs_ses *ses = tcon->ses;
struct kvec iov[4];
- struct kvec rsp_iov;
+ struct kvec rsp_iov = {NULL, 0};
int resp_buftype;
int uni_path_len;
__le16 *copy_path = NULL;
if (rc != 0) {
cifs_stats_fail_inc(tcon, SMB2_CREATE_HE);
- if (err_buf)
+ if (err_buf && rsp)
*err_buf = kmemdup(rsp, get_rfc1002_length(rsp) + 4,
GFP_KERNEL);
goto creat_exit;
__u8 Guid[SMB2_CLIENT_GUID_SIZE];
__le16 SecurityMode;
__le16 DialectCount;
- __le16 Dialects[1]; /* dialect (someday maybe list) client asked for */
+ __le16 Dialects[3]; /* BB expand this if autonegotiate > 3 dialects */
} __packed;
struct validate_negotiate_info_rsp {
{
loff_t offset = dio->iocb->ki_pos;
ssize_t transferred = 0;
+ int err;
/*
* AIO submission can race with bio completion to get here while
if (ret == 0)
ret = transferred;
+ /*
+ * Try again to invalidate clean pages which might have been cached by
+ * non-direct readahead, or faulted in by get_user_pages() if the source
+ * of the write was an mmap'ed region of the file we're writing. Either
+ * one is a pretty crazy thing to do, so we don't support it 100%. If
+ * this invalidation fails, tough, the write still worked...
+ */
+ if (ret > 0 && dio->op == REQ_OP_WRITE &&
+ dio->inode->i_mapping->nrpages) {
+ err = invalidate_inode_pages2_range(dio->inode->i_mapping,
+ offset >> PAGE_SHIFT,
+ (offset + ret - 1) >> PAGE_SHIFT);
+ WARN_ON_ONCE(err);
+ }
+
if (dio->end_io) {
- int err;
// XXX: ki_pos??
err = dio->end_io(dio->iocb, offset, ret, dio->private);
struct dio *dio = bio->bi_private;
unsigned long remaining;
unsigned long flags;
+ bool defer_completion = false;
/* cleanup the bio */
dio_bio_complete(dio, bio);
spin_unlock_irqrestore(&dio->bio_lock, flags);
if (remaining == 0) {
- if (dio->result && dio->defer_completion) {
+ /*
+ * Defer completion when defer_completion is set or
+		 * when the inode has pages mapped and this is an AIO write.
+		 * We need to invalidate those pages because there is a
+		 * chance they contain stale data if buffered IO went into
+		 * the same region between AIO submission and completion.
+ */
+ if (dio->result)
+ defer_completion = dio->defer_completion ||
+ (dio->op == REQ_OP_WRITE &&
+ dio->inode->i_mapping->nrpages);
+ if (defer_completion) {
INIT_WORK(&dio->complete_work, dio_aio_complete_work);
queue_work(dio->inode->i_sb->s_dio_done_wq,
&dio->complete_work);
* For AIO O_(D)SYNC writes we need to defer completions to a workqueue
* so that we can call ->fsync.
*/
- if (dio->is_async && iov_iter_rw(iter) == WRITE &&
- ((iocb->ki_filp->f_flags & O_DSYNC) ||
- IS_SYNC(iocb->ki_filp->f_mapping->host))) {
- retval = dio_set_defer_completion(dio);
+ if (dio->is_async && iov_iter_rw(iter) == WRITE) {
+ retval = 0;
+ if ((iocb->ki_filp->f_flags & O_DSYNC) ||
+ IS_SYNC(iocb->ki_filp->f_mapping->host))
+ retval = dio_set_defer_completion(dio);
+ else if (!dio->inode->i_sb->s_dio_done_wq) {
+ /*
+ * In case of AIO write racing with buffered read we
+ * need to defer completion. We can't decide this now,
+ * however the workqueue needs to be initialized here.
+ */
+ retval = sb_init_dio_done_wq(dio->inode->i_sb);
+ }
if (retval) {
/*
* We grab i_mutex only for reads so we don't have
kfree(bprm);
}
-int bprm_change_interp(char *interp, struct linux_binprm *bprm)
+int bprm_change_interp(const char *interp, struct linux_binprm *bprm)
{
/* If a binfmt changed the interp, free it first. */
if (bprm->interp != bprm->filename)
* specific si_codes. In that case use SI_SIGIO instead
* to remove the ambiguity.
*/
- if (sig_specific_sicodes(signum))
+ if ((signum != SIGPOLL) && sig_specific_sicodes(signum))
si.si_code = SI_SIGIO;
/* Make sure we are called with one of the POLL_*
{
struct gfs2_glock_iter *gi = seq->private;
loff_t n = *pos;
- int ret;
-
- if (gi->last_pos <= *pos)
- n = (*pos - gi->last_pos);
- ret = rhashtable_walk_start(&gi->hti);
- if (ret)
+ rhashtable_walk_enter(&gl_hash_table, &gi->hti);
+ if (rhashtable_walk_start(&gi->hti) != 0)
return NULL;
do {
} while (gi->gl && n--);
gi->last_pos = *pos;
+
return gi->gl;
}
(*pos)++;
gi->last_pos = *pos;
gfs2_glock_iter_next(gi);
+
return gi->gl;
}
gi->gl = NULL;
rhashtable_walk_stop(&gi->hti);
+ rhashtable_walk_exit(&gi->hti);
}
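The walker is now entered and exited around each traversal instead of living for the lifetime of the open seq file, which matches the intended pairing of the rhashtable walk API. The lifecycle, sketched (gl_hash_table is as in this file; the other names are hypothetical, and -EAGAIN restarts are glossed over):

#include <linux/rhashtable.h>
#include <linux/err.h>

/* Advance a fresh walk by @n objects; the caller later stops and exits it. */
static void *walk_to_pos(struct rhashtable *ht, struct rhashtable_iter *iter,
			 loff_t n)
{
	void *obj;

	rhashtable_walk_enter(ht, iter);	/* attach the walker */
	if (rhashtable_walk_start(iter) != 0)	/* begin iteration, may fail */
		return NULL;

	do {
		obj = rhashtable_walk_next(iter);
	} while (!IS_ERR_OR_NULL(obj) && n--);

	/* pair with rhashtable_walk_stop(iter) + rhashtable_walk_exit(iter) */
	return IS_ERR_OR_NULL(obj) ? NULL : obj;
}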
static int gfs2_glock_seq_show(struct seq_file *seq, void *iter_ptr)
struct gfs2_glock_iter *gi = seq->private;
gi->sdp = inode->i_private;
- gi->last_pos = 0;
seq->buf = kmalloc(GFS2_SEQ_GOODSIZE, GFP_KERNEL | __GFP_NOWARN);
if (seq->buf)
seq->size = GFS2_SEQ_GOODSIZE;
gi->gl = NULL;
- rhashtable_walk_enter(&gl_hash_table, &gi->hti);
}
return ret;
}
struct gfs2_glock_iter *gi = seq->private;
gi->gl = NULL;
- rhashtable_walk_exit(&gi->hti);
return seq_release_private(inode, file);
}
static ssize_t iomap_dio_complete(struct iomap_dio *dio)
{
struct kiocb *iocb = dio->iocb;
+ struct inode *inode = file_inode(iocb->ki_filp);
ssize_t ret;
+ /*
+ * Try again to invalidate clean pages which might have been cached by
+ * non-direct readahead, or faulted in by get_user_pages() if the source
+ * of the write was an mmap'ed region of the file we're writing. Either
+ * one is a pretty crazy thing to do, so we don't support it 100%. If
+ * this invalidation fails, tough, the write still worked...
+ */
+ if (!dio->error &&
+ (dio->flags & IOMAP_DIO_WRITE) && inode->i_mapping->nrpages) {
+ ret = invalidate_inode_pages2_range(inode->i_mapping,
+ iocb->ki_pos >> PAGE_SHIFT,
+ (iocb->ki_pos + dio->size - 1) >> PAGE_SHIFT);
+ WARN_ON_ONCE(ret);
+ }
+
if (dio->end_io) {
ret = dio->end_io(iocb,
dio->error ? dio->error : dio->size,
WARN_ON_ONCE(ret);
ret = 0;
+ if (iov_iter_rw(iter) == WRITE && !is_sync_kiocb(iocb) &&
+ !inode->i_sb->s_dio_done_wq) {
+ ret = sb_init_dio_done_wq(inode->i_sb);
+ if (ret < 0)
+ goto out_free_dio;
+ }
+
inode_dio_begin(inode);
blk_start_plug(&plug);
if (ret < 0)
iomap_dio_set_error(dio, ret);
- if (ret >= 0 && iov_iter_rw(iter) == WRITE && !is_sync_kiocb(iocb) &&
- !inode->i_sb->s_dio_done_wq) {
- ret = sb_init_dio_done_wq(inode->i_sb);
- if (ret < 0)
- iomap_dio_set_error(dio, ret);
- }
-
if (!atomic_dec_and_test(&dio->ref)) {
if (!is_sync_kiocb(iocb))
return -EIOCBQUEUED;
ret = iomap_dio_complete(dio);
- /*
- * Try again to invalidate clean pages which might have been cached by
- * non-direct readahead, or faulted in by get_user_pages() if the source
- * of the write was an mmap'ed region of the file we're writing. Either
- * one is a pretty crazy thing to do, so we don't support it 100%. If
- * this invalidation fails, tough, the write still worked...
- */
- if (iov_iter_rw(iter) == WRITE) {
- int err = invalidate_inode_pages2_range(mapping,
- start >> PAGE_SHIFT, end >> PAGE_SHIFT);
- WARN_ON_ONCE(err);
- }
-
return ret;
out_free_dio:
if (sbi->s_fmode != ISOFS_INVALID_MODE)
seq_printf(m, ",fmode=%o", sbi->s_fmode);
+#ifdef CONFIG_JOLIET
if (sbi->s_nls_iocharset &&
strcmp(sbi->s_nls_iocharset->charset, CONFIG_NLS_DEFAULT) != 0)
seq_printf(m, ",iocharset=%s", sbi->s_nls_iocharset->charset);
+#endif
return 0;
}
/* File refers to upper, writable layer? */
upperdentry = d_real(dentry, NULL, 0, D_REAL_UPPER);
- if (upperdentry && file_inode(file) == d_inode(upperdentry))
+ if (upperdentry &&
+ (file_inode(file) == d_inode(upperdentry) ||
+ file_inode(file) == d_inode(dentry)))
return 0;
/* Lower layer: can't write to real file, sorry... */
c->tmpfile = true;
err = ovl_copy_up_locked(c);
} else {
- err = -EIO;
- if (lock_rename(c->workdir, c->destdir) != NULL) {
- pr_err("overlayfs: failed to lock workdir+upperdir\n");
- } else {
+ err = ovl_lock_rename_workdir(c->workdir, c->destdir);
+ if (!err) {
err = ovl_copy_up_locked(c);
unlock_rename(c->workdir, c->destdir);
}
return err;
}
-static int ovl_lock_rename_workdir(struct dentry *workdir,
- struct dentry *upperdir)
-{
- /* Workdir should not be the same as upperdir */
- if (workdir == upperdir)
- goto err;
-
- /* Workdir should not be subdir of upperdir and vice versa */
- if (lock_rename(workdir, upperdir) != NULL)
- goto err_unlock;
-
- return 0;
-
-err_unlock:
- unlock_rename(workdir, upperdir);
-err:
- pr_err("overlayfs: failed to lock workdir+upperdir\n");
- return -EIO;
-}
-
static struct dentry *ovl_clear_empty(struct dentry *dentry,
struct list_head *list)
{
index = lookup_one_len_unlocked(name.name, ofs->indexdir, name.len);
if (IS_ERR(index)) {
+ err = PTR_ERR(index);
pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%*s, err=%i);\n"
"overlayfs: mount with '-o index=off' to disable inodes index.\n",
d_inode(origin)->i_ino, name.len, name.name,
void ovl_inuse_unlock(struct dentry *dentry);
int ovl_nlink_start(struct dentry *dentry, bool *locked);
void ovl_nlink_end(struct dentry *dentry, bool locked);
+int ovl_lock_rename_workdir(struct dentry *workdir, struct dentry *upperdir);
static inline bool ovl_is_impuredir(struct dentry *dentry)
{
bool noxattr;
/* sb common to all layers */
struct super_block *same_sb;
+ /* Did we take the inuse lock? */
+ bool upperdir_locked;
+ bool workdir_locked;
};
/* private information held for every overlayfs dentry */
struct path *lowerstack, unsigned int numlower)
{
int err;
+ struct dentry *index = NULL;
struct inode *dir = dentry->d_inode;
struct path path = { .mnt = mnt, .dentry = dentry };
LIST_HEAD(list);
inode_lock_nested(dir, I_MUTEX_PARENT);
list_for_each_entry(p, &list, l_node) {
- struct dentry *index;
-
if (p->name[0] == '.') {
if (p->len == 1)
continue;
index = lookup_one_len(p->name, dentry, p->len);
if (IS_ERR(index)) {
err = PTR_ERR(index);
+ index = NULL;
break;
}
err = ovl_verify_index(index, lowerstack, numlower);
break;
}
dput(index);
+ index = NULL;
}
+ dput(index);
inode_unlock(dir);
out:
ovl_cache_free(&list);
dput(ufs->indexdir);
dput(ufs->workdir);
- ovl_inuse_unlock(ufs->workbasedir);
+ if (ufs->workdir_locked)
+ ovl_inuse_unlock(ufs->workbasedir);
dput(ufs->workbasedir);
- if (ufs->upper_mnt)
+ if (ufs->upper_mnt && ufs->upperdir_locked)
ovl_inuse_unlock(ufs->upper_mnt->mnt_root);
mntput(ufs->upper_mnt);
for (i = 0; i < ufs->numlower; i++)
goto out_put_upperpath;
err = -EBUSY;
- if (!ovl_inuse_trylock(upperpath.dentry)) {
- pr_err("overlayfs: upperdir is in-use by another mount\n");
+ if (ovl_inuse_trylock(upperpath.dentry)) {
+ ufs->upperdir_locked = true;
+ } else if (ufs->config.index) {
+ pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
goto out_put_upperpath;
+ } else {
+ pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
}
err = ovl_mount_dir(ufs->config.workdir, &workpath);
}
err = -EBUSY;
- if (!ovl_inuse_trylock(workpath.dentry)) {
- pr_err("overlayfs: workdir is in-use by another mount\n");
+ if (ovl_inuse_trylock(workpath.dentry)) {
+ ufs->workdir_locked = true;
+ } else if (ufs->config.index) {
+ pr_err("overlayfs: workdir is in-use by another mount, mount with '-o index=off' to override exclusive workdir protection.\n");
goto out_put_workpath;
+ } else {
+ pr_warn("overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
}
ufs->workbasedir = workpath.dentry;
out_free_lowertmp:
kfree(lowertmp);
out_unlock_workdentry:
- ovl_inuse_unlock(workpath.dentry);
+ if (ufs->workdir_locked)
+ ovl_inuse_unlock(workpath.dentry);
out_put_workpath:
path_put(&workpath);
out_unlock_upperdentry:
- ovl_inuse_unlock(upperpath.dentry);
+ if (ufs->upperdir_locked)
+ ovl_inuse_unlock(upperpath.dentry);
out_put_upperpath:
path_put(&upperpath);
out_free_config:
}
}
-/* Called must hold OVL_I(inode)->oi_lock */
+/* Caller must hold OVL_I(inode)->lock */
static void ovl_cleanup_index(struct dentry *dentry)
{
struct inode *dir = ovl_indexdir(dentry->d_sb)->d_inode;
err = PTR_ERR(index);
if (!IS_ERR(index))
err = ovl_cleanup(dir, index);
+ else
+ index = NULL;
+
inode_unlock(dir);
if (err)
goto fail;
mutex_unlock(&OVL_I(d_inode(dentry))->lock);
}
}
+
+int ovl_lock_rename_workdir(struct dentry *workdir, struct dentry *upperdir)
+{
+ /* Workdir should not be the same as upperdir */
+ if (workdir == upperdir)
+ goto err;
+
+ /* Workdir should not be subdir of upperdir and vice versa */
+ if (lock_rename(workdir, upperdir) != NULL)
+ goto err_unlock;
+
+ return 0;
+
+err_unlock:
+ unlock_rename(workdir, upperdir);
+err:
+ pr_err("overlayfs: failed to lock workdir+upperdir\n");
+ return -EIO;
+}
#include <linux/mman.h>
#include <linux/sched/mm.h>
#include <linux/sched/numa_balancing.h>
+#include <linux/sched/task_stack.h>
#include <linux/sched/task.h>
#include <linux/sched/cputime.h>
#include <linux/proc_fs.h>
* simple bit tests.
*/
static const char * const task_state_array[] = {
- "R (running)", /* 0 */
- "S (sleeping)", /* 1 */
- "D (disk sleep)", /* 2 */
- "T (stopped)", /* 4 */
- "t (tracing stop)", /* 8 */
- "X (dead)", /* 16 */
- "Z (zombie)", /* 32 */
+
+ /* states in TASK_REPORT: */
+ "R (running)", /* 0x00 */
+ "S (sleeping)", /* 0x01 */
+ "D (disk sleep)", /* 0x02 */
+ "T (stopped)", /* 0x04 */
+ "t (tracing stop)", /* 0x08 */
+ "X (dead)", /* 0x10 */
+ "Z (zombie)", /* 0x20 */
+ "P (parked)", /* 0x40 */
+
+ /* states beyond TASK_REPORT: */
+ "I (idle)", /* 0x80 */
};
static inline const char *get_task_state(struct task_struct *tsk)
{
- unsigned int state = (tsk->state | tsk->exit_state) & TASK_REPORT;
-
- /*
- * Parked tasks do not run; they sit in __kthread_parkme().
- * Without this check, we would report them as running, which is
- * clearly wrong, so we report them as sleeping instead.
- */
- if (tsk->state == TASK_PARKED)
- state = TASK_INTERRUPTIBLE;
-
- BUILD_BUG_ON(1 + ilog2(TASK_REPORT) != ARRAY_SIZE(task_state_array)-1);
-
- return task_state_array[fls(state)];
+ BUILD_BUG_ON(1 + ilog2(TASK_REPORT_MAX) != ARRAY_SIZE(task_state_array));
+ return task_state_array[__get_task_state(tsk)];
}
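The rewritten helper leans on __get_task_state() collapsing the state bitmask into an array index, conceptually fls() of the reported bits with 0 meaning running, which is what the BUILD_BUG_ON now checks against TASK_REPORT_MAX. A standalone illustration of that mapping (the table mirrors the one above):

#include <stdio.h>

/* Index 0 is "running"; otherwise 1 + log2 of the highest reported bit. */
static const char *state_name(unsigned int state)
{
	static const char * const names[] = {
		"R (running)",		/* 0x00 */
		"S (sleeping)",		/* 0x01 */
		"D (disk sleep)",	/* 0x02 */
		"T (stopped)",		/* 0x04 */
		"t (tracing stop)",	/* 0x08 */
		"X (dead)",		/* 0x10 */
		"Z (zombie)",		/* 0x20 */
		"P (parked)",		/* 0x40 */
		"I (idle)",		/* 0x80 */
	};
	unsigned int idx = 0;

	while (state) {			/* fls(): position of highest set bit */
		state >>= 1;
		idx++;
	}
	return names[idx];
}

int main(void)
{
	printf("%s\n", state_name(0x00));	/* R (running) */
	printf("%s\n", state_name(0x40));	/* P (parked) */
	return 0;
}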
static inline int get_task_umask(struct task_struct *tsk)
* esp and eip are intentionally zeroed out. There is no
* non-racy way to read them without freezing the task.
* Programs that need reliable values can use ptrace(2).
+ *
+ * The only exception is if the task is core dumping because
+ * a program is not able to use ptrace(2) in that case. It is
+ * safe because the task has stopped executing permanently.
*/
+ if (permitted && (task->flags & PF_DUMPCORE)) {
+ eip = KSTK_EIP(task);
+ esp = KSTK_ESP(task);
+ }
}
get_task_comm(tcomm, task);
ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space, 0,
&warn_to[cnt]);
if (ret) {
+ spin_lock(&transfer_to[cnt]->dq_dqb_lock);
dquot_decr_inodes(transfer_to[cnt], inode_usage);
+ spin_unlock(&transfer_to[cnt]->dq_dqb_lock);
goto over_quota;
}
}
if (!dquot->dq_off) {
alloc = true;
down_write(&dqopt->dqio_sem);
+ } else {
+ down_read(&dqopt->dqio_sem);
}
ret = qtree_write_dquot(
sb_dqinfo(dquot->dq_sb, dquot->dq_id.type)->dqi_priv,
dquot);
if (alloc)
up_write(&dqopt->dqio_sem);
+ else
+ up_read(&dqopt->dqio_sem);
return ret;
}
* In the generic case the entire file is data, so as long as
* offset isn't at the end of the file then the offset is data.
*/
- if (offset >= eof)
+ if ((unsigned long long)offset >= eof)
return -ENXIO;
break;
case SEEK_HOLE:
* There is a virtual hole at the end of the file, so as long as
* offset isn't i_size or larger, return i_size.
*/
- if (offset >= eof)
+ if ((unsigned long long)offset >= eof)
return -ENXIO;
offset = eof;
break;
break;
if (ACCESS_ONCE(ctx->released) ||
fatal_signal_pending(current)) {
+ /*
+ * &ewq->wq may be queued in fork_event, but
+ * __remove_wait_queue ignores the head
+ * parameter. It would be a problem if it
+ * didn't.
+ */
__remove_wait_queue(&ctx->event_wqh, &ewq->wq);
if (ewq->msg.event == UFFD_EVENT_FORK) {
struct userfaultfd_ctx *new;
(unsigned long)
uwq->msg.arg.reserved.reserved1;
list_move(&uwq->wq.entry, &fork_event);
+ /*
+ * fork_nctx can be freed as soon as
+ * we drop the lock, unless we take a
+ * reference on it.
+ */
+ userfaultfd_ctx_get(fork_nctx);
spin_unlock(&ctx->event_wqh.lock);
ret = 0;
break;
if (!ret && msg->event == UFFD_EVENT_FORK) {
ret = resolve_userfault_fork(ctx, fork_nctx, msg);
+ spin_lock(&ctx->event_wqh.lock);
+ if (!list_empty(&fork_event)) {
+ /*
+ * The fork thread didn't abort, so we can
+ * drop the temporary refcount.
+ */
+ userfaultfd_ctx_put(fork_nctx);
+
+ uwq = list_first_entry(&fork_event,
+ typeof(*uwq),
+ wq.entry);
+ /*
+ * If fork_event list wasn't empty and in turn
+ * the event wasn't already released by fork
+ * (the event is allocated on fork kernel
+ * stack), put the event back to its place in
+ * the event_wq. fork_event head will be freed
+ * as soon as we return so the event cannot
+ * stay queued there no matter the current
+ * "ret" value.
+ */
+ list_del(&uwq->wq.entry);
+ __add_wait_queue(&ctx->event_wqh, &uwq->wq);
- if (!ret) {
- spin_lock(&ctx->event_wqh.lock);
- if (!list_empty(&fork_event)) {
- uwq = list_first_entry(&fork_event,
- typeof(*uwq),
- wq.entry);
- list_del(&uwq->wq.entry);
- __add_wait_queue(&ctx->event_wqh, &uwq->wq);
+ /*
+ * Leave the event in the waitqueue and report
+ * error to userland if we failed to resolve
+ * the userfault fork.
+ */
+ if (likely(!ret))
userfaultfd_event_complete(ctx, uwq);
- }
- spin_unlock(&ctx->event_wqh.lock);
+ } else {
+ /*
+ * Here the fork thread aborted and the
+ * refcount from the fork thread on fork_nctx
+ * has already been released. We still hold
+ * the reference we took before releasing the
+ * lock above. If resolve_userfault_fork
+ * failed we have to drop it because the
+ * fork_nctx has to be freed in such case. If
+ * it succeeded we'll hold it because the new
+ * uffd references it.
+ */
+ if (ret)
+ userfaultfd_ctx_put(fork_nctx);
}
+ spin_unlock(&ctx->event_wqh.lock);
}
return ret;
}
memcpy(value, buffer, len);
out:
- security_release_secctx(buffer, len);
+ kfree(buffer);
out_noalloc:
return len;
}
trace_xfs_ag_resv_free(pag, type, 0);
resv = xfs_perag_resv(pag, type);
- pag->pag_mount->m_ag_max_usable += resv->ar_asked;
+ if (pag->pag_agno == 0)
+ pag->pag_mount->m_ag_max_usable += resv->ar_asked;
/*
* AGFL blocks are always considered "free", so whatever
* was reserved at mount time must be given back at umount.
return error;
}
- mp->m_ag_max_usable -= ask;
+ /*
+ * Reduce the maximum per-AG allocation length by however much we're
+ * trying to reserve for an AG. Since this is a filesystem-wide
+ * counter, we only make the adjustment for AG 0. This assumes that
+ * there aren't any AGs hungrier for per-AG reservation than AG 0.
+ */
+ if (pag->pag_agno == 0)
+ mp->m_ag_max_usable -= ask;
resv = xfs_perag_resv(pag, type);
resv->ar_asked = ask;
#include "xfs_rmap.h"
#include "xfs_ag_resv.h"
#include "xfs_refcount.h"
-#include "xfs_rmap_btree.h"
#include "xfs_icache.h"
int maxrecs; /* maximum record count at this level */
xfs_mount_t *mp; /* mount structure */
xfs_filblks_t rval; /* return value */
- xfs_filblks_t orig_len;
mp = ip->i_mount;
-
- /* Calculate the worst-case size of the bmbt. */
- orig_len = len;
maxrecs = mp->m_bmap_dmxr[0];
for (level = 0, rval = 0;
level < XFS_BM_MAXLEVELS(mp, XFS_DATA_FORK);
len += maxrecs - 1;
do_div(len, maxrecs);
rval += len;
- if (len == 1) {
- rval += XFS_BM_MAXLEVELS(mp, XFS_DATA_FORK) -
+ if (len == 1)
+ return rval + XFS_BM_MAXLEVELS(mp, XFS_DATA_FORK) -
level - 1;
- break;
- }
if (level == 0)
maxrecs = mp->m_bmap_dmxr[1];
}
-
- /* Calculate the worst-case size of the rmapbt. */
- if (xfs_sb_version_hasrmapbt(&mp->m_sb))
- rval += 1 + xfs_rmapbt_calc_size(mp, orig_len) +
- mp->m_rmap_maxlevels;
-
return rval;
}
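A worked pass through the loop with made-up geometry (the real m_bmap_dmxr values depend on block size):

/*
 * Suppose m_bmap_dmxr[0] == 8 leaf records per block and
 * m_bmap_dmxr[1] == 4 node pointers per block, with len == 100:
 *
 *   level 0: len = ceil(100 / 8) = 13, rval = 13
 *   level 1: len = ceil(13 / 4)  =  4, rval = 17
 *   level 2: len = ceil(4 / 4)   =  1, rval = 18
 *     -> return 18 + XFS_BM_MAXLEVELS(mp, XFS_DATA_FORK) - 3
 */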
error = xfs_reflink_end_cow(ip, offset, size);
break;
case XFS_IO_UNWRITTEN:
- error = xfs_iomap_write_unwritten(ip, offset, size);
+ /* writeback should never update isize */
+ error = xfs_iomap_write_unwritten(ip, offset, size, false);
break;
default:
ASSERT(!xfs_ioend_is_append(ioend) || ioend->io_append_trans);
return error;
/*
- * The extent shiting code works on extent granularity. So, if
+ * Clean out anything hanging around in the cow fork now that
+ * we've flushed all the dirty data out to disk to avoid having
+ * CoW extents at the wrong offsets.
+ */
+ if (xfs_is_reflink_inode(ip)) {
+ error = xfs_reflink_cancel_cow_range(ip, offset, NULLFILEOFF,
+ true);
+ if (error)
+ return error;
+ }
+
+ /*
+ * The extent shifting code works on extent granularity. So, if
* stop_fsb is not the starting block of extent, we need to split
* the extent at stop_fsb.
*/
ip->i_d.di_flags2 |= tip->i_d.di_flags2 & XFS_DIFLAG2_REFLINK;
tip->i_d.di_flags2 &= ~XFS_DIFLAG2_REFLINK;
tip->i_d.di_flags2 |= f & XFS_DIFLAG2_REFLINK;
+ }
+
+ /* Swap the cow forks. */
+ if (xfs_sb_version_hasreflink(&mp->m_sb)) {
+ xfs_extnum_t extnum;
+
+ ASSERT(ip->i_cformat == XFS_DINODE_FMT_EXTENTS);
+ ASSERT(tip->i_cformat == XFS_DINODE_FMT_EXTENTS);
+
+ extnum = ip->i_cnextents;
+ ip->i_cnextents = tip->i_cnextents;
+ tip->i_cnextents = extnum;
+
cowfp = ip->i_cowfp;
ip->i_cowfp = tip->i_cowfp;
tip->i_cowfp = cowfp;
- xfs_inode_set_cowblocks_tag(ip);
- xfs_inode_set_cowblocks_tag(tip);
+
+ if (ip->i_cowfp && ip->i_cnextents)
+ xfs_inode_set_cowblocks_tag(ip);
+ else
+ xfs_inode_clear_cowblocks_tag(ip);
+ if (tip->i_cowfp && tip->i_cnextents)
+ xfs_inode_set_cowblocks_tag(tip);
+ else
+ xfs_inode_clear_cowblocks_tag(tip);
}
xfs_trans_log_inode(tp, ip, src_log_flags);
int size;
int offset;
- total_nr_pages = bp->b_page_count;
-
/* skip the pages in the buffer before the start offset */
page_index = 0;
offset = *buf_offset;
{
struct xfs_mount *mp = bp->b_target->bt_mount;
- xfs_alert(mp, "Metadata %s detected at %pF, %s block 0x%llx",
+ xfs_alert(mp, "Metadata %s detected at %pS, %s block 0x%llx",
bp->b_error == -EFSBADCRC ? "CRC error" : "corruption",
__return_address, bp->b_ops->name, bp->b_bn);
xfs_off_t count,
bool *did_zero)
{
- return iomap_zero_range(VFS_I(ip), pos, count, NULL, &xfs_iomap_ops);
+ return iomap_zero_range(VFS_I(ip), pos, count, did_zero, &xfs_iomap_ops);
}
int
*/
spin_lock(&ip->i_flags_lock);
if (iocb->ki_pos > i_size_read(inode)) {
- bool zero = false;
-
spin_unlock(&ip->i_flags_lock);
if (!drained_dio) {
if (*iolock == XFS_IOLOCK_SHARED) {
drained_dio = true;
goto restart;
}
- error = xfs_zero_eof(ip, iocb->ki_pos, i_size_read(inode), &zero);
+ error = xfs_zero_eof(ip, iocb->ki_pos, i_size_read(inode), NULL);
if (error)
return error;
} else
struct inode *inode = file_inode(iocb->ki_filp);
struct xfs_inode *ip = XFS_I(inode);
loff_t offset = iocb->ki_pos;
- bool update_size = false;
int error = 0;
trace_xfs_end_io_direct_write(ip, offset, size);
if (size <= 0)
return size;
+ if (flags & IOMAP_DIO_COW) {
+ error = xfs_reflink_end_cow(ip, offset, size);
+ if (error)
+ return error;
+ }
+
+ /*
+ * Unwritten conversion updates the in-core isize after extent
+ * conversion but before updating the on-disk size. Updating isize any
+ * earlier allows a racing dio read to find unwritten extents before
+ * they are converted.
+ */
+ if (flags & IOMAP_DIO_UNWRITTEN)
+ return xfs_iomap_write_unwritten(ip, offset, size, true);
+
/*
* We need to update the in-core inode size here so that we don't end up
* with the on-disk inode size being outside the in-core inode size. We
spin_lock(&ip->i_flags_lock);
if (offset + size > i_size_read(inode)) {
i_size_write(inode, offset + size);
- update_size = true;
- }
- spin_unlock(&ip->i_flags_lock);
-
- if (flags & IOMAP_DIO_COW) {
- error = xfs_reflink_end_cow(ip, offset, size);
- if (error)
- return error;
- }
-
- if (flags & IOMAP_DIO_UNWRITTEN)
- error = xfs_iomap_write_unwritten(ip, offset, size);
- else if (update_size)
+ spin_unlock(&ip->i_flags_lock);
error = xfs_setfilesize(ip, offset, size);
+ } else {
+ spin_unlock(&ip->i_flags_lock);
+ }
return error;
}
goto out;
/*
- * Clear the reflink flag if we truncated everything.
+ * Clear the reflink flag if there are no data fork blocks and
+ * there are no extents staged in the cow fork.
*/
- if (ip->i_d.di_nblocks == 0 && xfs_is_reflink_inode(ip)) {
- ip->i_d.di_flags2 &= ~XFS_DIFLAG2_REFLINK;
+ if (xfs_is_reflink_inode(ip) && ip->i_cnextents == 0) {
+ if (ip->i_d.di_nblocks == 0)
+ ip->i_d.di_flags2 &= ~XFS_DIFLAG2_REFLINK;
xfs_inode_clear_cowblocks_tag(ip);
}
*/
iip = INODE_ITEM(blip);
if ((iip->ili_logged && blip->li_lsn == iip->ili_flush_lsn) ||
- lip->li_flags & XFS_LI_FAILED)
+ (blip->li_flags & XFS_LI_FAILED))
need_ail++;
blip = next;
int *join_flags)
{
struct inode *inode = VFS_I(ip);
+ struct super_block *sb = inode->i_sb;
int error;
*join_flags = 0;
if (fa->fsx_xflags & FS_XFLAG_DAX) {
if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode)))
return -EINVAL;
- if (ip->i_mount->m_sb.sb_blocksize != PAGE_SIZE)
+ if (bdev_dax_supported(sb, sb->s_blocksize) < 0)
return -EINVAL;
}
xfs_iomap_write_unwritten(
xfs_inode_t *ip,
xfs_off_t offset,
- xfs_off_t count)
+ xfs_off_t count,
+ bool update_isize)
{
xfs_mount_t *mp = ip->i_mount;
xfs_fileoff_t offset_fsb;
xfs_trans_t *tp;
xfs_bmbt_irec_t imap;
struct xfs_defer_ops dfops;
+ struct inode *inode = VFS_I(ip);
xfs_fsize_t i_size;
uint resblks;
int error;
i_size = XFS_FSB_TO_B(mp, offset_fsb + count_fsb);
if (i_size > offset + count)
i_size = offset + count;
-
+ if (update_isize && i_size > i_size_read(inode))
+ i_size_write(inode, i_size);
i_size = xfs_new_eof(ip, i_size);
if (i_size) {
ip->i_d.di_size = i_size;
struct xfs_bmbt_irec *, int);
int xfs_iomap_write_allocate(struct xfs_inode *, int, xfs_off_t,
struct xfs_bmbt_irec *);
-int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t);
+int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
void xfs_bmbt_to_iomap(struct xfs_inode *, struct iomap *,
struct xfs_bmbt_irec *);
(end - 1) >> PAGE_SHIFT);
WARN_ON_ONCE(error);
- error = xfs_iomap_write_unwritten(ip, start, length);
+ error = xfs_iomap_write_unwritten(ip, start, length, false);
if (error)
goto out_drop_iolock;
}
/* If there is a hole at end_fsb - 1 go to the previous extent */
if (!xfs_iext_lookup_extent(ip, ifp, end_fsb - 1, &idx, &got) ||
got.br_startoff > end_fsb) {
- ASSERT(idx > 0);
+ /*
+ * In case of racing, overlapping AIO writes, no COW extents
+ * might be left by the time I/O completes for the loser of
+ * the race. In that case we are done.
+ */
+ if (idx <= 0)
+ goto out_cancel;
xfs_iext_get_extent(ifp, --idx, &got);
}
out_defer:
xfs_defer_cancel(&dfops);
+out_cancel:
xfs_trans_cancel(tp);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
out:
"DAX and reflink have not been tested together!");
}
+ if (mp->m_flags & XFS_MOUNT_DISCARD) {
+ struct request_queue *q = bdev_get_queue(sb->s_bdev);
+
+ if (!blk_queue_discard(q)) {
+ xfs_warn(mp, "mounting with \"discard\" option, but "
+ "the device does not support discard");
+ mp->m_flags &= ~XFS_MOUNT_DISCARD;
+ }
+ }
+
if (xfs_sb_version_hasrmapbt(&mp->m_sb)) {
if (mp->m_sb.sb_rblocks) {
xfs_alert(mp,
extern const struct fwnode_operations acpi_data_fwnode_ops;
extern const struct fwnode_operations acpi_static_fwnode_ops;
+bool is_acpi_device_node(const struct fwnode_handle *fwnode);
+bool is_acpi_data_node(const struct fwnode_handle *fwnode);
+
static inline bool is_acpi_node(const struct fwnode_handle *fwnode)
{
- return !IS_ERR_OR_NULL(fwnode) &&
- (fwnode->ops == &acpi_device_fwnode_ops
- || fwnode->ops == &acpi_data_fwnode_ops);
-}
-
-static inline bool is_acpi_device_node(const struct fwnode_handle *fwnode)
-{
- return !IS_ERR_OR_NULL(fwnode) &&
- fwnode->ops == &acpi_device_fwnode_ops;
+ return (is_acpi_device_node(fwnode) || is_acpi_data_node(fwnode));
}
#define to_acpi_device_node(__fwnode) \
NULL; \
})
-static inline bool is_acpi_data_node(const struct fwnode_handle *fwnode)
-{
- return !IS_ERR_OR_NULL(fwnode) && fwnode->ops == &acpi_data_fwnode_ops;
-}
-
#define to_acpi_data_node(__fwnode) \
({ \
typeof(__fwnode) __to_acpi_data_node_fwnode = __fwnode; \
(__ret); \
})
-#define this_cpu_generic_read(pcp) \
+#define __this_cpu_generic_read_nopreempt(pcp) \
({ \
typeof(pcp) __ret; \
preempt_disable_notrace(); \
- __ret = raw_cpu_generic_read(pcp); \
+ __ret = READ_ONCE(*raw_cpu_ptr(&(pcp))); \
preempt_enable_notrace(); \
__ret; \
})
+#define __this_cpu_generic_read_noirq(pcp) \
+({ \
+ typeof(pcp) __ret; \
+ unsigned long __flags; \
+ raw_local_irq_save(__flags); \
+ __ret = raw_cpu_generic_read(pcp); \
+ raw_local_irq_restore(__flags); \
+ __ret; \
+})
+
+#define this_cpu_generic_read(pcp) \
+({ \
+ typeof(pcp) __ret; \
+ if (__native_word(pcp)) \
+ __ret = __this_cpu_generic_read_nopreempt(pcp); \
+ else \
+ __ret = __this_cpu_generic_read_noirq(pcp); \
+ __ret; \
+})
+
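The split exists because a native-word read is single-copy atomic, so disabling preemption keeps it consistent, while a wider access could be torn by an interrupt updating the variable between the two halves. A usage sketch with hypothetical per-CPU variables:

DEFINE_PER_CPU(u32, demo_hits);		/* native word on all arches */
DEFINE_PER_CPU(u64, demo_bytes);	/* not native on 32-bit arches */

static u64 demo_read_bytes(void)
{
	/*
	 * On a 32-bit arch this u64 read fails __native_word() and
	 * takes the IRQ-off path; reading demo_hits would take the
	 * cheaper preempt-disable path everywhere.
	 */
	return this_cpu_read(demo_bytes);
}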
#define this_cpu_generic_to_op(pcp, val, op) \
do { \
unsigned long __flags; \
#define TEGRA210_CLK_BLINK 280
/* 281 */
#define TEGRA210_CLK_SOR1_SRC 282
+#define TEGRA210_CLK_SOR1_OUT 282
/* 283 */
#define TEGRA210_CLK_XUSB_HOST_SRC 284
#define TEGRA210_CLK_XUSB_FALCON_SRC 285
--- /dev/null
+/*
+ * Copyright (c) 2017 MediaTek Inc.
+ * Author: Sean Wang <sean.wang@mediatek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _DT_BINDINGS_RESET_CONTROLLER_MT7622
+#define _DT_BINDINGS_RESET_CONTROLLER_MT7622
+
+/* INFRACFG resets */
+#define MT7622_INFRA_EMI_REG_RST 0
+#define MT7622_INFRA_DRAMC0_A0_RST 1
+#define MT7622_INFRA_APCIRQ_EINT_RST 3
+#define MT7622_INFRA_APXGPT_RST 4
+#define MT7622_INFRA_SCPSYS_RST 5
+#define MT7622_INFRA_PMIC_WRAP_RST 7
+#define MT7622_INFRA_IRRX_RST 9
+#define MT7622_INFRA_EMI_RST 16
+#define MT7622_INFRA_WED0_RST 17
+#define MT7622_INFRA_DRAMC_RST 18
+#define MT7622_INFRA_CCI_INTF_RST 19
+#define MT7622_INFRA_TRNG_RST 21
+#define MT7622_INFRA_SYSIRQ_RST 22
+#define MT7622_INFRA_WED1_RST 25
+
+/* PERICFG Subsystem resets */
+#define MT7622_PERI_UART0_SW_RST 0
+#define MT7622_PERI_UART1_SW_RST 1
+#define MT7622_PERI_UART2_SW_RST 2
+#define MT7622_PERI_UART3_SW_RST 3
+#define MT7622_PERI_UART4_SW_RST 4
+#define MT7622_PERI_BTIF_SW_RST 6
+#define MT7622_PERI_PWM_SW_RST 8
+#define MT7622_PERI_AUXADC_SW_RST 10
+#define MT7622_PERI_DMA_SW_RST 11
+#define MT7622_PERI_IRTX_SW_RST 13
+#define MT7622_PERI_NFI_SW_RST 14
+#define MT7622_PERI_THERM_SW_RST 16
+#define MT7622_PERI_MSDC0_SW_RST 19
+#define MT7622_PERI_MSDC1_SW_RST 20
+#define MT7622_PERI_I2C0_SW_RST 22
+#define MT7622_PERI_I2C1_SW_RST 23
+#define MT7622_PERI_I2C2_SW_RST 24
+#define MT7622_PERI_SPI0_SW_RST 33
+#define MT7622_PERI_SPI1_SW_RST 34
+#define MT7622_PERI_FLASHIF_SW_RST 36
+
+/* TOPRGU resets */
+#define MT7622_TOPRGU_INFRA_RST 0
+#define MT7622_TOPRGU_ETHDMA_RST 1
+#define MT7622_TOPRGU_DDRPHY_RST 6
+#define MT7622_TOPRGU_INFRA_AO_RST 8
+#define MT7622_TOPRGU_CONN_RST 9
+#define MT7622_TOPRGU_APMIXED_RST 10
+#define MT7622_TOPRGU_CONN_MCU_RST 12
+
+/* PCIe/SATA Subsystem resets */
+#define MT7622_SATA_PHY_REG_RST 12
+#define MT7622_SATA_PHY_SW_RST 13
+#define MT7622_SATA_AXI_BUS_RST 15
+#define MT7622_PCIE1_CORE_RST 19
+#define MT7622_PCIE1_MMIO_RST 20
+#define MT7622_PCIE1_HRST 21
+#define MT7622_PCIE1_USER_RST 22
+#define MT7622_PCIE1_PIPE_RST 23
+#define MT7622_PCIE0_CORE_RST 27
+#define MT7622_PCIE0_MMIO_RST 28
+#define MT7622_PCIE0_HRST 29
+#define MT7622_PCIE0_USER_RST 30
+#define MT7622_PCIE0_PIPE_RST 31
+
+/* SSUSB Subsystem resets */
+#define MT7622_SSUSB_PHY_PWR_RST 3
+#define MT7622_SSUSB_MAC_PWR_RST 4
+
+/* ETHSYS Subsystem resets */
+#define MT7622_ETHSYS_SYS_RST 0
+#define MT7622_ETHSYS_MCM_RST 2
+#define MT7622_ETHSYS_HSDMA_RST 5
+#define MT7622_ETHSYS_FE_RST 6
+#define MT7622_ETHSYS_GMAC_RST 23
+#define MT7622_ETHSYS_EPHY_RST 24
+#define MT7622_ETHSYS_CRYPTO_RST 29
+#define MT7622_ETHSYS_PPE_RST 31
+
+#endif /* _DT_BINDINGS_RESET_CONTROLLER_MT7622 */
--- /dev/null
+/**
+ * This header provides index for the HSDK reset controller.
+ */
+#ifndef _DT_BINDINGS_RESET_CONTROLLER_SNPS_HSDK
+#define _DT_BINDINGS_RESET_CONTROLLER_SNPS_HSDK
+
+#define HSDK_APB_RESET 0
+#define HSDK_AXI_RESET 1
+#define HSDK_ETH_RESET 2
+#define HSDK_USB_RESET 3
+#define HSDK_SDIO_RESET 4
+#define HSDK_HDMI_RESET 5
+#define HSDK_GFX_RESET 6
+#define HSDK_DMAC_RESET 7
+#define HSDK_EBI_RESET 8
+
+#endif /*_DT_BINDINGS_RESET_CONTROLLER_SNPS_HSDK*/
+++ /dev/null
-/**
- * This header provides index for the HSDK v1 reset controller.
- */
-#ifndef _DT_BINDINGS_RESET_CONTROLLER_HSDK_V1
-#define _DT_BINDINGS_RESET_CONTROLLER_HSDK_V1
-
-#define HSDK_V1_APB_RESET 0
-#define HSDK_V1_AXI_RESET 1
-#define HSDK_V1_ETH_RESET 2
-#define HSDK_V1_USB_RESET 3
-#define HSDK_V1_SDIO_RESET 4
-#define HSDK_V1_HDMI_RESET 5
-#define HSDK_V1_GFX_RESET 6
-#define HSDK_V1_DMAC_RESET 7
-#define HSDK_V1_EBI_RESET 8
-
-#endif /*_DT_BINDINGS_RESET_CONTROLLER_HSDK_V1*/
--- /dev/null
+/*
+ * This header provides constants for binding nvidia,tegra186-bpmp-thermal.
+ */
+
+#ifndef _DT_BINDINGS_THERMAL_TEGRA186_BPMP_THERMAL_H
+#define _DT_BINDINGS_THERMAL_TEGRA186_BPMP_THERMAL_H
+
+#define TEGRA186_BPMP_THERMAL_ZONE_CPU 2
+#define TEGRA186_BPMP_THERMAL_ZONE_GPU 3
+#define TEGRA186_BPMP_THERMAL_ZONE_AUX 4
+#define TEGRA186_BPMP_THERMAL_ZONE_PLLX 5
+#define TEGRA186_BPMP_THERMAL_ZONE_AO 6
+
+#endif
static inline void audit_seccomp(unsigned long syscall, long signr, int code)
{
- if (!audit_enabled)
- return;
-
- /* Force a record to be reported if a signal was delivered. */
- if (signr || unlikely(!audit_dummy_context()))
+ if (audit_enabled && unlikely(!audit_dummy_context()))
__audit_seccomp(syscall, signr, code);
}
int executable_stack);
extern int transfer_args_to_stack(struct linux_binprm *bprm,
unsigned long *sp_location);
-extern int bprm_change_interp(char *interp, struct linux_binprm *bprm);
+extern int bprm_change_interp(const char *interp, struct linux_binprm *bprm);
extern int copy_strings_kernel(int argc, const char *const *argv,
struct linux_binprm *bprm);
extern int prepare_bprm_creds(struct linux_binprm *bprm);
/**
* FIELD_GET() - extract a bitfield element
* @_mask: shifted mask defining the field's length and position
- * @_reg: 32bit value of entire bitfield
+ * @_reg: value of entire bitfield
*
* FIELD_GET() extracts the field specified by @_mask from the
* bitfield passed in as @_reg by masking and shifting it down.
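For illustration, a sketch with a made-up register layout (the mask and names are hypothetical):

#include <linux/bitfield.h>
#include <linux/bits.h>

#define DEMO_STAT_STATE	GENMASK(11, 8)	/* hypothetical 4-bit field */

static u8 demo_decode_state(u32 status)
{
	/* e.g. status == 0x500: (0x500 & 0xf00) >> 8 == 5 */
	return FIELD_GET(DEMO_STAT_STATE, status);
}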
int node;
#ifdef CONFIG_BLK_DEV_IO_TRACE
struct blk_trace *blk_trace;
+ struct mutex blk_trace_mutex;
#endif
/*
* for flush operations
#include <linux/types.h>
+/*
+ * CPU-up                       CPU-down
+ *
+ * BP           AP              BP              AP
+ *
+ * OFFLINE                      OFFLINE
+ *   |                            ^
+ *   v                            |
+ * BRINGUP_CPU->AP_OFFLINE      BRINGUP_CPU <- AP_IDLE_DEAD (idle thread/play_dead)
+ *              |                                   AP_OFFLINE
+ *              v (IRQ-off)      ,---------------^
+ *            AP_ONLINE          | (stop_machine)
+ *              |               TEARDOWN_CPU <- AP_ONLINE_IDLE
+ *              |                                  ^
+ *              v                                  |
+ *           AP_ACTIVE                          AP_ACTIVE
+ */
+
enum cpuhp_state {
- CPUHP_OFFLINE,
+ CPUHP_INVALID = -1,
+ CPUHP_OFFLINE = 0,
CPUHP_CREATE_THREADS,
CPUHP_PERF_PREPARE,
CPUHP_PERF_X86_PREPARE,
size_t count);
};
-#define DRIVER_ATTR(_name, _mode, _show, _store) \
- struct driver_attribute driver_attr_##_name = __ATTR(_name, _mode, _show, _store)
#define DRIVER_ATTR_RW(_name) \
struct driver_attribute driver_attr_##_name = __ATTR_RW(_name)
#define DRIVER_ATTR_RO(_name) \
* @driver_data: Private pointer for driver specific info.
* @links: Links to suppliers and consumers of this device.
* @power: For device power management.
- * See Documentation/power/admin-guide/devices.rst for details.
+ * See Documentation/driver-api/pm/devices.rst for details.
* @pm_domain: Provide callbacks that are executed during system suspend,
* hibernation, system resume and during runtime PM transitions
* along with subsystem-level and driver-level callbacks.
unsigned long flags; /* error bits */
spinlock_t private_lock; /* for use by the address_space */
gfp_t gfp_mask; /* implicit gfp mask for allocations */
- struct list_head private_list; /* ditto */
+ struct list_head private_list; /* for use by the address_space */
void *private_data; /* ditto */
errseq_t wb_err;
} __attribute__((aligned(sizeof(long)))) __randomize_layout;
int ad_sd_read_reg(struct ad_sigma_delta *sigma_delta, unsigned int reg,
unsigned int size, unsigned int *val);
+int ad_sd_reset(struct ad_sigma_delta *sigma_delta,
+ unsigned int reset_length);
+
int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
const struct iio_chan_spec *chan, int *val);
int ad_sd_calibrate_all(struct ad_sigma_delta *sigma_delta,
int input_ff_upload(struct input_dev *dev, struct ff_effect *effect, struct file *file);
int input_ff_erase(struct input_dev *dev, int effect_id, struct file *file);
+int input_ff_flush(struct input_dev *dev, struct file *file);
int input_ff_create_memless(struct input_dev *dev, void *data,
int (*play_effect)(struct input_dev *, void *, struct ff_effect *));
* @map: map a physically contiguous memory region to an iommu domain
* @unmap: unmap a physically contiguous memory region from an iommu domain
* @map_sg: map a scatter-gather list of physically contiguous memory chunks
+ * to an iommu domain
* @flush_tlb_all: Synchronously flush all hardware TLBs for this domain
* @tlb_range_add: Add a given iova range to the flush queue for this domain
* @tlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
* queue
- * to an iommu domain
* @iova_to_phys: translate iova to physical address
* @add_device: add device to iommu grouping
* @remove_device: remove device from iommu grouping
static inline
struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
{
- if (!cpumask_empty(d->common->effective_affinity))
- return d->common->effective_affinity;
-
- return d->common->affinity;
+ return d->common->effective_affinity;
}
static inline void irq_data_update_effective_affinity(struct irq_data *d,
const struct cpumask *m)
#define KEY_FLAG_BUILTIN 8 /* set if key is built in to the kernel */
#define KEY_FLAG_ROOT_CAN_INVAL 9 /* set if key can be invalidated by root without permission */
#define KEY_FLAG_KEEP 10 /* set if key should not be removed */
+#define KEY_FLAG_UID_KEYRING 11 /* set if key is a user or user session keyring */
/* the key type and key description string
* - the desc is used to match a key against search criteria
#define KEY_ALLOC_NOT_IN_QUOTA 0x0002 /* not in quota */
#define KEY_ALLOC_BUILT_IN 0x0004 /* Key is built into kernel */
#define KEY_ALLOC_BYPASS_RESTRICTION 0x0008 /* Override the check on restricted keyrings */
+#define KEY_ALLOC_UID_KEYRING 0x0010 /* allocating a user or user session keyring */
extern void key_revoke(struct key *key);
extern void key_invalidate(struct key *key);
MLX5_CAP_RESERVED,
MLX5_CAP_VECTOR_CALC,
MLX5_CAP_QOS,
- MLX5_CAP_FPGA,
/* NUM OF CAP Types */
MLX5_CAP_NUM
};
MLX5_GET(mcam_reg, (mdev)->caps.mcam, mng_feature_cap_mask.enhanced_features.fld)
#define MLX5_CAP_FPGA(mdev, cap) \
- MLX5_GET(fpga_cap, (mdev)->caps.hca_cur[MLX5_CAP_FPGA], cap)
+ MLX5_GET(fpga_cap, (mdev)->caps.fpga, cap)
#define MLX5_CAP64_FPGA(mdev, cap) \
- MLX5_GET64(fpga_cap, (mdev)->caps.hca_cur[MLX5_CAP_FPGA], cap)
+ MLX5_GET64(fpga_cap, (mdev)->caps.fpga, cap)
enum {
MLX5_CMD_STAT_OK = 0x0,
u32 hca_max[MLX5_CAP_NUM][MLX5_UN_SZ_DW(hca_cap_union)];
u32 pcam[MLX5_ST_SZ_DW(pcam_reg)];
u32 mcam[MLX5_ST_SZ_DW(mcam_reg)];
+ u32 fpga[MLX5_ST_SZ_DW(fpga_cap)];
} caps;
phys_addr_t iseg_base;
struct mlx5_init_seg __iomem *iseg;
u8 reserved_at_80[0x18];
u8 log_max_destination[0x8];
- u8 reserved_at_a0[0x18];
+ u8 log_max_flow_counter[0x8];
+ u8 reserved_at_a8[0x10];
u8 log_max_flow[0x8];
u8 reserved_at_c0[0x40];
#if defined(CONFIG_X86_INTEL_MPX)
/* MPX specific bounds table or bounds directory */
-# define VM_MPX VM_HIGH_ARCH_BIT_4
+# define VM_MPX VM_HIGH_ARCH_4
#else
# define VM_MPX VM_NONE
#endif
#define MMC_CAP_UHS_SDR50 (1 << 18) /* Host supports UHS SDR50 mode */
#define MMC_CAP_UHS_SDR104 (1 << 19) /* Host supports UHS SDR104 mode */
#define MMC_CAP_UHS_DDR50 (1 << 20) /* Host supports UHS DDR50 mode */
-#define MMC_CAP_NO_BOUNCE_BUFF (1 << 21) /* Disable bounce buffers on host */
+/* (1 << 21) is free for reuse */
#define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */
#define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */
#define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */
#else /* CONFIG_MMU_NOTIFIER */
+static inline int mm_has_notifiers(struct mm_struct *mm)
+{
+ return 0;
+}
+
static inline void mmu_notifier_release(struct mm_struct *mm)
{
}
#error Allocator MAX_ORDER exceeds SECTION_SIZE
#endif
-#define pfn_to_section_nr(pfn) ((pfn) >> PFN_SECTION_SHIFT)
-#define section_nr_to_pfn(sec) ((sec) << PFN_SECTION_SHIFT)
+static inline unsigned long pfn_to_section_nr(unsigned long pfn)
+{
+ return pfn >> PFN_SECTION_SHIFT;
+}
+static inline unsigned long section_nr_to_pfn(unsigned long sec)
+{
+ return sec << PFN_SECTION_SHIFT;
+}
#define SECTION_ALIGN_UP(pfn) (((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)
#define SECTION_ALIGN_DOWN(pfn) ((pfn) & PAGE_SECTION_MASK)
#ifdef CONFIG_LOCKUP_DETECTOR
void lockup_detector_init(void);
+void lockup_detector_soft_poweroff(void);
+void lockup_detector_cleanup(void);
+bool is_hardlockup(void);
+
+extern int watchdog_user_enabled;
+extern int nmi_watchdog_user_enabled;
+extern int soft_watchdog_user_enabled;
+extern int watchdog_thresh;
+extern unsigned long watchdog_enabled;
+
+extern struct cpumask watchdog_cpumask;
+extern unsigned long *watchdog_cpumask_bits;
+#ifdef CONFIG_SMP
+extern int sysctl_softlockup_all_cpu_backtrace;
+extern int sysctl_hardlockup_all_cpu_backtrace;
#else
-static inline void lockup_detector_init(void)
-{
-}
-#endif
+#define sysctl_softlockup_all_cpu_backtrace 0
+#define sysctl_hardlockup_all_cpu_backtrace 0
+#endif /* !CONFIG_SMP */
+
+#else /* CONFIG_LOCKUP_DETECTOR */
+static inline void lockup_detector_init(void) { }
+static inline void lockup_detector_soft_poweroff(void) { }
+static inline void lockup_detector_cleanup(void) { }
+#endif /* !CONFIG_LOCKUP_DETECTOR */
#ifdef CONFIG_SOFTLOCKUP_DETECTOR
extern void touch_softlockup_watchdog_sched(void);
extern void touch_softlockup_watchdog_sync(void);
extern void touch_all_softlockup_watchdogs(void);
extern unsigned int softlockup_panic;
-extern int soft_watchdog_enabled;
-extern atomic_t watchdog_park_in_progress;
#else
-static inline void touch_softlockup_watchdog_sched(void)
-{
-}
-static inline void touch_softlockup_watchdog(void)
-{
-}
-static inline void touch_softlockup_watchdog_sync(void)
-{
-}
-static inline void touch_all_softlockup_watchdogs(void)
-{
-}
+static inline void touch_softlockup_watchdog_sched(void) { }
+static inline void touch_softlockup_watchdog(void) { }
+static inline void touch_softlockup_watchdog_sync(void) { }
+static inline void touch_all_softlockup_watchdogs(void) { }
#endif
#ifdef CONFIG_DETECT_HUNG_TASK
void reset_hung_task_detector(void);
#else
-static inline void reset_hung_task_detector(void)
-{
-}
+static inline void reset_hung_task_detector(void) { }
#endif
/*
* 'watchdog_enabled' variable. Each lockup detector has its dedicated bit -
* bit 0 for the hard lockup detector and bit 1 for the soft lockup detector.
*
- * 'watchdog_user_enabled', 'nmi_watchdog_enabled' and 'soft_watchdog_enabled'
- * are variables that are only used as an 'interface' between the parameters
- * in /proc/sys/kernel and the internal state bits in 'watchdog_enabled'. The
- * 'watchdog_thresh' variable is handled differently because its value is not
- * boolean, and the lockup detectors are 'suspended' while 'watchdog_thresh'
- * is equal zero.
+ * 'watchdog_user_enabled', 'nmi_watchdog_user_enabled' and
+ * 'soft_watchdog_user_enabled' are variables that are only used as an
+ * 'interface' between the parameters in /proc/sys/kernel and the internal
+ * state bits in 'watchdog_enabled'. The 'watchdog_thresh' variable is
+ * handled differently because its value is not boolean, and the lockup
+ * detectors are 'suspended' while 'watchdog_thresh' is equal zero.
*/
#define NMI_WATCHDOG_ENABLED_BIT 0
#define SOFT_WATCHDOG_ENABLED_BIT 1
static inline void hardlockup_detector_disable(void) {}
#endif
+#if defined(CONFIG_HAVE_NMI_WATCHDOG) || defined(CONFIG_HARDLOCKUP_DETECTOR)
+# define NMI_WATCHDOG_SYSCTL_PERM 0644
+#else
+# define NMI_WATCHDOG_SYSCTL_PERM 0444
+#endif
+
#if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
extern void arch_touch_nmi_watchdog(void);
+extern void hardlockup_detector_perf_stop(void);
+extern void hardlockup_detector_perf_restart(void);
+extern void hardlockup_detector_perf_disable(void);
+extern void hardlockup_detector_perf_enable(void);
+extern void hardlockup_detector_perf_cleanup(void);
+extern int hardlockup_detector_perf_init(void);
#else
-#if !defined(CONFIG_HAVE_NMI_WATCHDOG)
+static inline void hardlockup_detector_perf_stop(void) { }
+static inline void hardlockup_detector_perf_restart(void) { }
+static inline void hardlockup_detector_perf_disable(void) { }
+static inline void hardlockup_detector_perf_enable(void) { }
+static inline void hardlockup_detector_perf_cleanup(void) { }
+# if !defined(CONFIG_HAVE_NMI_WATCHDOG)
+static inline int hardlockup_detector_perf_init(void) { return -ENODEV; }
static inline void arch_touch_nmi_watchdog(void) {}
+# else
+static inline int hardlockup_detector_perf_init(void) { return 0; }
+# endif
#endif
-#endif
+
+void watchdog_nmi_stop(void);
+void watchdog_nmi_start(void);
+int watchdog_nmi_probe(void);
/**
* touch_nmi_watchdog - restart NMI watchdog timeout.
- *
+ *
* If the architecture supports the NMI watchdog, touch_nmi_watchdog()
* may be used to reset the timeout - for code which intentionally
* disables interrupts for a long time. This call is stateless.
u64 hw_nmi_get_sample_period(int watchdog_thresh);
#endif
-#ifdef CONFIG_LOCKUP_DETECTOR
-extern int nmi_watchdog_enabled;
-extern int watchdog_user_enabled;
-extern int watchdog_thresh;
-extern unsigned long watchdog_enabled;
-extern struct cpumask watchdog_cpumask;
-extern unsigned long *watchdog_cpumask_bits;
-extern int __read_mostly watchdog_suspended;
-#ifdef CONFIG_SMP
-extern int sysctl_softlockup_all_cpu_backtrace;
-extern int sysctl_hardlockup_all_cpu_backtrace;
-#else
-#define sysctl_softlockup_all_cpu_backtrace 0
-#define sysctl_hardlockup_all_cpu_backtrace 0
-#endif
-
#if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \
defined(CONFIG_HARDLOCKUP_DETECTOR)
void watchdog_update_hrtimer_threshold(u64 period);
static inline void watchdog_update_hrtimer_threshold(u64 period) { }
#endif
-extern bool is_hardlockup(void);
struct ctl_table;
extern int proc_watchdog(struct ctl_table *, int ,
void __user *, size_t *, loff_t *);
void __user *, size_t *, loff_t *);
extern int proc_watchdog_cpumask(struct ctl_table *, int,
void __user *, size_t *, loff_t *);
-extern int lockup_detector_suspend(void);
-extern void lockup_detector_resume(void);
-#else
-static inline int lockup_detector_suspend(void)
-{
- return 0;
-}
-
-static inline void lockup_detector_resume(void)
-{
-}
-#endif
#ifdef CONFIG_HAVE_ACPI_APEI_NMI
#include <asm/nmi.h>
* indicating an FC transport Aborted status.
* Entrypoint is Mandatory.
*
- * @defer_rcv: Called by the transport to signal the LLLD that it has
- * begun processing of a previously received NVME CMD IU. The LLDD
- * is now free to re-use the rcv buffer associated with the
- * nvmefc_tgt_fcp_req.
- *
* @max_hw_queues: indicates the maximum number of hw queues the LLDD
* supports for cpu affinitization.
* Value is Mandatory. Must be at least 1.
* outstanding operation (if there was one) to complete, then will
* call the fcp_req_release() callback to return the command's
* exchange context back to the LLDD.
+ * Entrypoint is Mandatory.
*
* @fcp_req_release: Called by the transport to return a nvmefc_tgt_fcp_req
* to the LLDD after all operations on the fcp operation are complete.
* This may be due to the command completing or upon completion of
* abort cleanup.
+ * Entrypoint is Mandatory.
+ *
+ * @defer_rcv: Called by the transport to signal the LLDD that it has
+ * begun processing of a previously received NVME CMD IU. The LLDD
+ * is now free to re-use the rcv buffer associated with the
+ * nvmefc_tgt_fcp_req.
+ * Entrypoint is Optional.
*
* @max_hw_queues: indicates the maximum number of hw queues the LLDD
* supports for cpu affinitization.
*
* @NVME_SGL_FMT_ADDRESS: absolute address of the data block
* @NVME_SGL_FMT_OFFSET: relative offset of the in-capsule data block
+ * @NVME_SGL_FMT_TRANSPORT_A: transport defined format, value 0xA
* @NVME_SGL_FMT_INVALIDATE: RDMA transport specific remote invalidation
* request subtype
*/
enum {
NVME_SGL_FMT_ADDRESS = 0x00,
NVME_SGL_FMT_OFFSET = 0x01,
+ NVME_SGL_FMT_TRANSPORT_A = 0x0A,
NVME_SGL_FMT_INVALIDATE = 0x0f,
};
*
* For struct nvme_keyed_sgl_desc:
* @NVME_KEY_SGL_FMT_DATA_DESC: keyed data block descriptor
+ *
+ * Transport-specific SGL types:
+ * @NVME_TRANSPORT_SGL_DATA_DESC: Transport SGL data block descriptor
*/
enum {
NVME_SGL_FMT_DATA_DESC = 0x00,
NVME_SGL_FMT_SEG_DESC = 0x02,
NVME_SGL_FMT_LAST_SEG_DESC = 0x03,
NVME_KEY_SGL_FMT_DATA_DESC = 0x04,
+ NVME_TRANSPORT_SGL_DATA_DESC = 0x05,
};
struct nvme_sgl_desc {
NVME_SC_UNWRITTEN_BLOCK = 0x287,
NVME_SC_DNR = 0x4000,
-
-
- /*
- * FC Transport-specific error status values for NVME commands
- *
- * Transport-specific status code values must be in the range 0xB0..0xBF
- */
-
- /* Generic FC failure - catchall */
- NVME_SC_FC_TRANSPORT_ERROR = 0x00B0,
-
- /* I/O failure due to FC ABTS'd */
- NVME_SC_FC_TRANSPORT_ABORTED = 0x00B1,
};
struct nvme_completion {
extern struct platform_device *of_device_alloc(struct device_node *np,
const char *bus_id,
struct device *parent);
+#ifdef CONFIG_OF
extern struct platform_device *of_find_device_by_node(struct device_node *np);
+#else
+static inline struct platform_device *of_find_device_by_node(struct device_node *np)
+{
+ return NULL;
+}
+#endif
/* Platform devices and busses creation */
extern struct platform_device *of_platform_device_create(struct device_node *np,
void fdt_init_reserved_mem(void);
void fdt_reserved_mem_save_node(unsigned long node, const char *uname,
phys_addr_t base, phys_addr_t size);
+struct reserved_mem *of_reserved_mem_lookup(struct device_node *np);
#else
static inline int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx)
static inline void fdt_init_reserved_mem(void) { }
static inline void fdt_reserved_mem_save_node(unsigned long node,
const char *uname, phys_addr_t base, phys_addr_t size) { }
+static inline struct reserved_mem *of_reserved_mem_lookup(struct device_node *np)
+{
+ return NULL;
+}
#endif
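A hedged sketch of a consumer of the new lookup (the property name and the print are illustrative):

static int demo_find_carveout(struct device *dev)
{
	struct device_node *np;
	struct reserved_mem *rmem;

	np = of_parse_phandle(dev->of_node, "memory-region", 0);
	if (!np)
		return -ENODEV;

	rmem = of_reserved_mem_lookup(np);
	of_node_put(np);
	if (!rmem)
		return -EINVAL;

	dev_info(dev, "carveout %pa+%pa\n", &rmem->base, &rmem->size);
	return 0;
}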
/**
#define dev_is_pci(d) (false)
#define dev_is_pf(d) (false)
+static inline bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags)
+{ return false; }
#endif /* CONFIG_PCI */
/* Include architecture-dependent settings and functions */
extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare);
extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size);
extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare);
+extern int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val);
+extern int qcom_scm_io_writel(phys_addr_t addr, unsigned int val);
#else
static inline
int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
static inline int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) { return -ENODEV; }
static inline int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size) { return -ENODEV; }
static inline int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare) { return -ENODEV; }
+static inline int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) { return -ENODEV; }
+static inline int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) { return -ENODEV; }
#endif
#endif
*/
/* Used in tsk->state: */
-#define TASK_RUNNING 0
-#define TASK_INTERRUPTIBLE 1
-#define TASK_UNINTERRUPTIBLE 2
-#define __TASK_STOPPED 4
-#define __TASK_TRACED 8
+#define TASK_RUNNING 0x0000
+#define TASK_INTERRUPTIBLE 0x0001
+#define TASK_UNINTERRUPTIBLE 0x0002
+#define __TASK_STOPPED 0x0004
+#define __TASK_TRACED 0x0008
/* Used in tsk->exit_state: */
-#define EXIT_DEAD 16
-#define EXIT_ZOMBIE 32
+#define EXIT_DEAD 0x0010
+#define EXIT_ZOMBIE 0x0020
#define EXIT_TRACE (EXIT_ZOMBIE | EXIT_DEAD)
/* Used in tsk->state again: */
-#define TASK_DEAD 64
-#define TASK_WAKEKILL 128
-#define TASK_WAKING 256
-#define TASK_PARKED 512
-#define TASK_NOLOAD 1024
-#define TASK_NEW 2048
-#define TASK_STATE_MAX 4096
-
-#define TASK_STATE_TO_CHAR_STR "RSDTtXZxKWPNn"
+#define TASK_PARKED 0x0040
+#define TASK_DEAD 0x0080
+#define TASK_WAKEKILL 0x0100
+#define TASK_WAKING 0x0200
+#define TASK_NOLOAD 0x0400
+#define TASK_NEW 0x0800
+#define TASK_STATE_MAX 0x1000
/* Convenience macros for the sake of set_current_state: */
#define TASK_KILLABLE (TASK_WAKEKILL | TASK_UNINTERRUPTIBLE)
/* get_task_state(): */
#define TASK_REPORT (TASK_RUNNING | TASK_INTERRUPTIBLE | \
TASK_UNINTERRUPTIBLE | __TASK_STOPPED | \
- __TASK_TRACED | EXIT_ZOMBIE | EXIT_DEAD)
+ __TASK_TRACED | EXIT_DEAD | EXIT_ZOMBIE | \
+ TASK_PARKED)
#define task_is_traced(task) ((task->state & __TASK_TRACED) != 0)
return task_pgrp_nr_ns(tsk, &init_pid_ns);
}
-static inline char task_state_to_char(struct task_struct *task)
+#define TASK_REPORT_IDLE (TASK_REPORT + 1)
+#define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
+
+static inline unsigned int __get_task_state(struct task_struct *tsk)
+{
+ unsigned int tsk_state = READ_ONCE(tsk->state);
+ unsigned int state = (tsk_state | tsk->exit_state) & TASK_REPORT;
+
+ BUILD_BUG_ON_NOT_POWER_OF_2(TASK_REPORT_MAX);
+
+ if (tsk_state == TASK_IDLE)
+ state = TASK_REPORT_IDLE;
+
+ return fls(state);
+}
+
+static inline char __task_state_to_char(unsigned int state)
{
- const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
- unsigned long state = task->state;
+ static const char state_char[] = "RSDTtXZPI";
- state = state ? __ffs(state) + 1 : 0;
+ BUILD_BUG_ON(1 + ilog2(TASK_REPORT_MAX) != sizeof(state_char) - 1);
- /* Make sure the string lines up properly with the number of task states: */
- BUILD_BUG_ON(sizeof(TASK_STATE_TO_CHAR_STR)-1 != ilog2(TASK_STATE_MAX)+1);
+ return state_char[state];
+}
- return state < sizeof(stat_nam) - 1 ? stat_nam[state] : '?';
+static inline char task_state_to_char(struct task_struct *tsk)
+{
+ return __task_state_to_char(__get_task_state(tsk));
}
/**
/* mmput gets rid of the mappings and all user-space */
extern void mmput(struct mm_struct *);
+#ifdef CONFIG_MMU
+/* Same as above, but performs the slow path from async context. It can
+ * be called from atomic context as well.
+ */
+void mmput_async(struct mm_struct *);
+#endif
/* Grab a reference to a task's mm, if it is not already going away */
extern struct mm_struct *get_task_mm(struct task_struct *task);
#include <uapi/linux/seccomp.h>
-#define SECCOMP_FILTER_FLAG_MASK (SECCOMP_FILTER_FLAG_TSYNC)
+#define SECCOMP_FILTER_FLAG_MASK (SECCOMP_FILTER_FLAG_TSYNC | \
+ SECCOMP_FILTER_FLAG_LOG)
#ifdef CONFIG_SECCOMP
}
void smpboot_unregister_percpu_thread(struct smp_hotplug_thread *plug_thread);
-int smpboot_update_cpumask_percpu_thread(struct smp_hotplug_thread *plug_thread,
- const struct cpumask *);
+void smpboot_update_cpumask_percpu_thread(struct smp_hotplug_thread *plug_thread,
+ const struct cpumask *);
#endif
} \
static inline long SYSC##name(__MAP(x,__SC_DECL,__VA_ARGS__))
-#ifdef TIF_FSCHECK
/*
* Called before coming back to user-mode. Returning to user-mode with an
* address limit different than USER_DS can allow to overwrite kernel memory.
*/
static inline void addr_limit_user_check(void)
{
-
+#ifdef TIF_FSCHECK
if (!test_thread_flag(TIF_FSCHECK))
return;
+#endif
- BUG_ON(!segment_eq(get_fs(), USER_DS));
+ if (CHECK_DATA_CORRUPTION(!segment_eq(get_fs(), USER_DS),
+ "Invalid address limit on user-mode return"))
+ force_sig(SIGKILL, current);
+
+#ifdef TIF_FSCHECK
clear_thread_flag(TIF_FSCHECK);
-}
#endif
+}
asmlinkage long sys32_quotactl(unsigned int cmd, const char __user *special,
qid_t id, void __user *addr);
#define setup_pinned_deferrable_timer_on_stack(timer, fn, data) \
__setup_timer_on_stack((timer), (fn), (data), TIMER_DEFERRABLE | TIMER_PINNED)
+#define TIMER_DATA_TYPE unsigned long
+#define TIMER_FUNC_TYPE void (*)(TIMER_DATA_TYPE)
+
+static inline void timer_setup(struct timer_list *timer,
+ void (*callback)(struct timer_list *),
+ unsigned int flags)
+{
+ __setup_timer(timer, (TIMER_FUNC_TYPE)callback,
+ (TIMER_DATA_TYPE)timer, flags);
+}
+
+#define from_timer(var, callback_timer, timer_fieldname) \
+ container_of(callback_timer, typeof(*var), timer_fieldname)
+
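A sketch of the conversion pattern the new helpers enable (struct and function names are hypothetical): the callback receives the timer_list pointer and recovers its container with from_timer() instead of casting an unsigned long data cookie.

struct demo_port {
	struct timer_list poll_timer;
	bool poll_pending;
};

static void demo_poll_timeout(struct timer_list *t)
{
	struct demo_port *port = from_timer(port, t, poll_timer);

	port->poll_pending = false;
}

static void demo_port_init(struct demo_port *port)
{
	timer_setup(&port->poll_timer, demo_poll_timeout, 0);
}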
/**
* timer_pending - is a timer pending?
* @timer: the timer in question
int perf_refcount;
struct hlist_head __percpu *perf_events;
struct bpf_prog *prog;
+ struct perf_event *bpf_prog_owner;
int (*perf_perm)(struct trace_event_call *,
struct perf_event *);
--- /dev/null
+/*
+ * Copyright (c) 2016 - Savoir-faire Linux
+ * Author: Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#ifndef _TS_NBUS_H
+#define _TS_NBUS_H
+
+struct ts_nbus;
+
+extern int ts_nbus_read(struct ts_nbus *ts_nbus, u8 adr, u16 *val);
+extern int ts_nbus_write(struct ts_nbus *ts_nbus, u8 adr, u16 val);
+
+#endif /* _TS_NBUS_H */
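A minimal consumer sketch for the API above (the read-modify-write logic is made up):

static int demo_nbus_set_bits(struct ts_nbus *ts_nbus, u8 adr, u16 set)
{
	u16 val;
	int ret;

	ret = ts_nbus_read(ts_nbus, adr, &val);
	if (ret)
		return ret;

	return ts_nbus_write(ts_nbus, adr, val | set);
}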
static inline struct dst_entry *dst_clone(struct dst_entry *dst)
{
if (dst)
- atomic_inc(&dst->__refcnt);
+ dst_hold(dst);
return dst;
}
__skb_dst_copy(nskb, oskb->_skb_refdst);
}
-/**
- * skb_dst_force - makes sure skb dst is refcounted
- * @skb: buffer
- *
- * If dst is not yet refcounted, let's do it
- */
-static inline void skb_dst_force(struct sk_buff *skb)
-{
- if (skb_dst_is_noref(skb)) {
- WARN_ON(!rcu_read_lock_held());
- skb->_skb_refdst &= ~SKB_DST_NOREF;
- dst_clone(skb_dst(skb));
- }
-}
-
/**
* dst_hold_safe - Take a reference on a dst if possible
* @dst: pointer to dst entry
}
/**
- * skb_dst_force_safe - makes sure skb dst is refcounted
+ * skb_dst_force - makes sure skb dst is refcounted
* @skb: buffer
*
* If dst is not yet refcounted and not destroyed, grab a ref on it.
*/
-static inline void skb_dst_force_safe(struct sk_buff *skb)
+static inline void skb_dst_force(struct sk_buff *skb)
{
if (skb_dst_is_noref(skb)) {
struct dst_entry *dst = skb_dst(skb);
+ WARN_ON(!rcu_read_lock_held());
if (!dst_hold_safe(dst))
dst = NULL;
*/
static inline int nla_put_u8(struct sk_buff *skb, int attrtype, u8 value)
{
- return nla_put(skb, attrtype, sizeof(u8), &value);
+ /* temporary variables to work around GCC PR81715 with asan-stack=1 */
+ u8 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(u8), &tmp);
}
/**
*/
static inline int nla_put_u16(struct sk_buff *skb, int attrtype, u16 value)
{
- return nla_put(skb, attrtype, sizeof(u16), &value);
+ u16 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(u16), &tmp);
}
/**
*/
static inline int nla_put_be16(struct sk_buff *skb, int attrtype, __be16 value)
{
- return nla_put(skb, attrtype, sizeof(__be16), &value);
+ __be16 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(__be16), &tmp);
}
/**
*/
static inline int nla_put_net16(struct sk_buff *skb, int attrtype, __be16 value)
{
- return nla_put_be16(skb, attrtype | NLA_F_NET_BYTEORDER, value);
+ __be16 tmp = value;
+
+ return nla_put_be16(skb, attrtype | NLA_F_NET_BYTEORDER, tmp);
}
/**
*/
static inline int nla_put_le16(struct sk_buff *skb, int attrtype, __le16 value)
{
- return nla_put(skb, attrtype, sizeof(__le16), &value);
+ __le16 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(__le16), &tmp);
}
/**
*/
static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
{
- return nla_put(skb, attrtype, sizeof(u32), &value);
+ u32 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(u32), &tmp);
}
/**
*/
static inline int nla_put_be32(struct sk_buff *skb, int attrtype, __be32 value)
{
- return nla_put(skb, attrtype, sizeof(__be32), &value);
+ __be32 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(__be32), &tmp);
}
/**
*/
static inline int nla_put_net32(struct sk_buff *skb, int attrtype, __be32 value)
{
- return nla_put_be32(skb, attrtype | NLA_F_NET_BYTEORDER, value);
+ __be32 tmp = value;
+
+ return nla_put_be32(skb, attrtype | NLA_F_NET_BYTEORDER, tmp);
}
/**
*/
static inline int nla_put_le32(struct sk_buff *skb, int attrtype, __le32 value)
{
- return nla_put(skb, attrtype, sizeof(__le32), &value);
+ __le32 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(__le32), &tmp);
}
/**
static inline int nla_put_u64_64bit(struct sk_buff *skb, int attrtype,
u64 value, int padattr)
{
- return nla_put_64bit(skb, attrtype, sizeof(u64), &value, padattr);
+ u64 tmp = value;
+
+ return nla_put_64bit(skb, attrtype, sizeof(u64), &tmp, padattr);
}
/**
static inline int nla_put_be64(struct sk_buff *skb, int attrtype, __be64 value,
int padattr)
{
- return nla_put_64bit(skb, attrtype, sizeof(__be64), &value, padattr);
+ __be64 tmp = value;
+
+ return nla_put_64bit(skb, attrtype, sizeof(__be64), &tmp, padattr);
}
/**
static inline int nla_put_net64(struct sk_buff *skb, int attrtype, __be64 value,
int padattr)
{
- return nla_put_be64(skb, attrtype | NLA_F_NET_BYTEORDER, value,
+ __be64 tmp = value;
+
+ return nla_put_be64(skb, attrtype | NLA_F_NET_BYTEORDER, tmp,
padattr);
}
static inline int nla_put_le64(struct sk_buff *skb, int attrtype, __le64 value,
int padattr)
{
- return nla_put_64bit(skb, attrtype, sizeof(__le64), &value, padattr);
+ __le64 tmp = value;
+
+ return nla_put_64bit(skb, attrtype, sizeof(__le64), &tmp, padattr);
}
/**
*/
static inline int nla_put_s8(struct sk_buff *skb, int attrtype, s8 value)
{
- return nla_put(skb, attrtype, sizeof(s8), &value);
+ s8 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(s8), &tmp);
}
/**
*/
static inline int nla_put_s16(struct sk_buff *skb, int attrtype, s16 value)
{
- return nla_put(skb, attrtype, sizeof(s16), &value);
+ s16 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(s16), &tmp);
}
/**
*/
static inline int nla_put_s32(struct sk_buff *skb, int attrtype, s32 value)
{
- return nla_put(skb, attrtype, sizeof(s32), &value);
+ s32 tmp = value;
+
+ return nla_put(skb, attrtype, sizeof(s32), &tmp);
}
/**
static inline int nla_put_s64(struct sk_buff *skb, int attrtype, s64 value,
int padattr)
{
- return nla_put_64bit(skb, attrtype, sizeof(s64), &value, padattr);
+ s64 tmp = value;
+
+ return nla_put_64bit(skb, attrtype, sizeof(s64), &tmp, padattr);
}
/**
static inline int nla_put_in_addr(struct sk_buff *skb, int attrtype,
__be32 addr)
{
- return nla_put_be32(skb, attrtype, addr);
+ __be32 tmp = addr;
+
+ return nla_put_be32(skb, attrtype, tmp);
}
/**
/* This is used to register protocols. */
struct net_protocol {
- void (*early_demux)(struct sk_buff *skb);
- void (*early_demux_handler)(struct sk_buff *skb);
+ int (*early_demux)(struct sk_buff *skb);
+ int (*early_demux_handler)(struct sk_buff *skb);
int (*handler)(struct sk_buff *skb);
void (*err_handler)(struct sk_buff *skb, u32 info);
unsigned int no_policy:1,
fl4->fl4_gre_key = gre_key;
return ip_route_output_key(net, fl4);
}
-
+int ip_mc_validate_source(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ u8 tos, struct net_device *dev,
+ struct in_device *in_dev, u32 *itag);
int ip_route_input_noref(struct sk_buff *skb, __be32 dst, __be32 src,
u8 tos, struct net_device *devin);
int ip_route_input_rcu(struct sk_buff *skb, __be32 dst, __be32 src,
rcu_read_lock();
err = ip_route_input_noref(skb, dst, src, tos, devin);
if (!err) {
- skb_dst_force_safe(skb);
+ skb_dst_force(skb);
if (!skb_dst(skb))
err = -EINVAL;
}
static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
{
	/* make sure skb dst is refcounted, we are going to leave rcu lock */
- skb_dst_force_safe(skb);
+ skb_dst_force(skb);
if (!sk->sk_backlog.tail)
sk->sk_backlog.head = skb;
void tcp_shutdown(struct sock *sk, int how);
-void tcp_v4_early_demux(struct sk_buff *skb);
+int tcp_v4_early_demux(struct sk_buff *skb);
int tcp_v4_rcv(struct sk_buff *skb);
int tcp_v4_tw_remember_stamp(struct inet_timewait_sock *tw);
int min_tso_segs);
void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
int nonagle);
-bool tcp_may_send_now(struct sock *sk);
int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs);
int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs);
void tcp_retransmit_timer(struct sock *sk);
return __skb_recv_udp(sk, flags, noblock, &peeked, &off, err);
}
-void udp_v4_early_demux(struct sk_buff *skb);
+int udp_v4_early_demux(struct sk_buff *skb);
bool udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst);
int udp_get_port(struct sock *sk, unsigned short snum,
int (*saddr_cmp)(const struct sock *,
IB_TM_CAP_RC = 1 << 0,
};
-struct ib_xrq_caps {
+struct ib_tm_caps {
/* Max size of RNDV header */
u32 max_rndv_hdr_size;
/* Max number of entries in tag matching list */
struct ib_rss_caps rss_caps;
u32 max_wq_type_rq;
u32 raw_packet_caps; /* Use ib_raw_packet_caps enum */
- struct ib_xrq_caps xrq_caps;
+ struct ib_tm_caps tm_caps;
};
enum ib_mtu {
u32 lkey;
u32 rkey;
u64 iova;
- u32 length;
+ u64 length;
unsigned int page_size;
bool need_inval;
union {
unsigned no_dif:1; /* T10 PI (DIF) should be disabled */
unsigned broken_fua:1; /* Don't set FUA bit */
unsigned lun_in_cdb:1; /* Store LUN bits in CDB[1] */
+ unsigned unmap_limit_for_ws:1; /* Use the UNMAP limit for WRITE SAME */
atomic_t disk_events_disable_depth; /* disable depth for disk events */
#define BLIST_TRY_VPD_PAGES 0x10000000 /* Attempt to read VPD pages */
#define BLIST_NO_RSOC 0x20000000 /* don't try to issue RSOC */
#define BLIST_MAX_1024 0x40000000 /* maximum 1024 sector cdb length */
+#define BLIST_UNMAP_LIMIT_WS 0x80000000 /* Use UNMAP limit for WRITE SAME */
#endif
unsigned int target_id);
extern void iscsi_remove_session(struct iscsi_cls_session *session);
extern void iscsi_free_session(struct iscsi_cls_session *session);
-extern int iscsi_destroy_session(struct iscsi_cls_session *session);
extern struct iscsi_cls_conn *iscsi_create_conn(struct iscsi_cls_session *sess,
int dd_size, uint32_t cid);
extern int iscsi_destroy_conn(struct iscsi_cls_conn *conn);
struct reset_controller_dev rstc;
struct genpd_onecell_data genpd;
-};
-struct tegra_bpmp *tegra_bpmp_get(struct device *dev);
-void tegra_bpmp_put(struct tegra_bpmp *bpmp);
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *debugfs_mirror;
+#endif
+};
struct tegra_bpmp_message {
unsigned int mrq;
struct {
void *data;
size_t size;
+ int ret;
} rx;
};
+#if IS_ENABLED(CONFIG_TEGRA_BPMP)
+struct tegra_bpmp *tegra_bpmp_get(struct device *dev);
+void tegra_bpmp_put(struct tegra_bpmp *bpmp);
int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg);
int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg);
+void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel, int code,
+ const void *data, size_t size);
int tegra_bpmp_request_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
tegra_bpmp_mrq_handler_t handler, void *data);
void tegra_bpmp_free_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
void *data);
+#else
+static inline struct tegra_bpmp *tegra_bpmp_get(struct device *dev)
+{
+ return ERR_PTR(-ENOTSUPP);
+}
+static inline void tegra_bpmp_put(struct tegra_bpmp *bpmp)
+{
+}
+static inline int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
+ struct tegra_bpmp_message *msg)
+{
+ return -ENOTSUPP;
+}
+static inline int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
+ struct tegra_bpmp_message *msg)
+{
+ return -ENOTSUPP;
+}
+static inline void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel,
+ int code, const void *data,
+ size_t size)
+{
+}
+
+static inline int tegra_bpmp_request_mrq(struct tegra_bpmp *bpmp,
+ unsigned int mrq,
+ tegra_bpmp_mrq_handler_t handler,
+ void *data)
+{
+ return -ENOTSUPP;
+}
+static inline void tegra_bpmp_free_mrq(struct tegra_bpmp *bpmp,
+ unsigned int mrq, void *data)
+{
+}
+#endif
#if IS_ENABLED(CONFIG_CLK_TEGRA_BPMP)
int tegra_bpmp_init_clocks(struct tegra_bpmp *bpmp);
}
#endif
+#if IS_ENABLED(CONFIG_DEBUG_FS)
+int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp);
+#else
+static inline int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
+{
+ return 0;
+}
+#endif
+
+
#endif /* __SOC_TEGRA_BPMP_H */
#define AC_VERB_SET_EAPD_BTLENABLE 0x70c
#define AC_VERB_SET_DIGI_CONVERT_1 0x70d
#define AC_VERB_SET_DIGI_CONVERT_2 0x70e
+#define AC_VERB_SET_DIGI_CONVERT_3 0x73e
#define AC_VERB_SET_VOLUME_KNOB_CONTROL 0x70f
#define AC_VERB_SET_GPIO_DATA 0x715
#define AC_VERB_SET_GPIO_MASK 0x716
* Preemption ignores task state, therefore preempted tasks are always
* RUNNING (we will not have dequeued if state != RUNNING).
*/
- return preempt ? TASK_RUNNING | TASK_STATE_MAX : p->state;
+ if (preempt)
+ return TASK_STATE_MAX;
+
+ return __get_task_state(p);
}
#endif /* CREATE_TRACE_POINTS */
TP_printk("prev_comm=%s prev_pid=%d prev_prio=%d prev_state=%s%s ==> next_comm=%s next_pid=%d next_prio=%d",
__entry->prev_comm, __entry->prev_pid, __entry->prev_prio,
- __entry->prev_state & (TASK_STATE_MAX-1) ?
- __print_flags(__entry->prev_state & (TASK_STATE_MAX-1), "|",
- { 1, "S"} , { 2, "D" }, { 4, "T" }, { 8, "t" },
- { 16, "Z" }, { 32, "X" }, { 64, "x" },
- { 128, "K" }, { 256, "W" }, { 512, "P" },
- { 1024, "N" }) : "R",
+
+ (__entry->prev_state & (TASK_REPORT_MAX - 1)) ?
+ __print_flags(__entry->prev_state & (TASK_REPORT_MAX - 1), "|",
+ { 0x01, "S" }, { 0x02, "D" }, { 0x04, "T" },
+ { 0x08, "t" }, { 0x10, "X" }, { 0x20, "Z" },
+ { 0x40, "P" }, { 0x80, "I" }) :
+ "R",
+
__entry->prev_state & TASK_STATE_MAX ? "+" : "",
__entry->next_comm, __entry->next_pid, __entry->next_prio)
);
* jump into another BPF program
* @ctx: context pointer passed to next program
* @prog_array_map: pointer to map which type is BPF_MAP_TYPE_PROG_ARRAY
- * @index: index inside array that selects specific program to run
+ * @index: 32-bit index inside array that selects specific program to run
* Return: 0 on success or negative error
*
* int bpf_clone_redirect(skb, ifindex, flags)
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
-#define DM_VERSION_MINOR 36
+#define DM_VERSION_MINOR 37
#define DM_VERSION_PATCHLEVEL 0
-#define DM_VERSION_EXTRA "-ioctl (2017-06-09)"
+#define DM_VERSION_EXTRA "-ioctl (2017-09-20)"
/* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */
* %ethtool_link_mode_bit_indices for the link modes, and other
* link features that the link partner advertised through
* autonegotiation; 0 if unknown or not applicable. Read-only.
+ * @transceiver: Used to distinguish different possible PHY types,
+ * reported consistently by PHYLIB. Read-only.
*
* If autonegotiation is disabled, the speed and @duplex represent the
* fixed link mode and are writable if the driver supports multiple
__u8 eth_tp_mdix;
__u8 eth_tp_mdix_ctrl;
__s8 link_mode_masks_nwords;
- __u32 reserved[8];
+ __u8 transceiver;
+ __u8 reserved1[3];
+ __u32 reserved[7];
__u32 link_mode_masks[0];
/* layout of link_mode_masks fields:
* __u32 map_supported[link_mode_masks_nwords];
#ifndef KFD_IOCTL_H_INCLUDED
#define KFD_IOCTL_H_INCLUDED
-#include <linux/types.h>
+#include <drm/drm.h>
#include <linux/ioctl.h>
#define KFD_IOCTL_MAJOR_VERSION 1
#define KFD_IOCTL_MINOR_VERSION 1
struct kfd_ioctl_get_version_args {
- uint32_t major_version; /* from KFD */
- uint32_t minor_version; /* from KFD */
+ __u32 major_version; /* from KFD */
+ __u32 minor_version; /* from KFD */
};
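
The switch from uint32_t/uint64_t to __u32/__u64 keeps this uapi header self-contained for userspace, which may not have <stdint.h> types in scope. A hedged userspace sketch of the version query (AMDKFD_IOC_GET_VERSION is assumed to be the ioctl number generated from this struct elsewhere in the header):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int example_query_kfd_version(void)
{
	struct kfd_ioctl_get_version_args args = {0};
	int fd = open("/dev/kfd", O_RDWR);

	if (fd < 0)
		return -1;
	ioctl(fd, AMDKFD_IOC_GET_VERSION, &args);
	/* args.major_version / args.minor_version are filled in by KFD */
	close(fd);
	return 0;
}
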
/* For kfd_ioctl_create_queue_args.queue_type. */
#define KFD_MAX_QUEUE_PRIORITY 15
struct kfd_ioctl_create_queue_args {
- uint64_t ring_base_address; /* to KFD */
- uint64_t write_pointer_address; /* from KFD */
- uint64_t read_pointer_address; /* from KFD */
- uint64_t doorbell_offset; /* from KFD */
-
- uint32_t ring_size; /* to KFD */
- uint32_t gpu_id; /* to KFD */
- uint32_t queue_type; /* to KFD */
- uint32_t queue_percentage; /* to KFD */
- uint32_t queue_priority; /* to KFD */
- uint32_t queue_id; /* from KFD */
-
- uint64_t eop_buffer_address; /* to KFD */
- uint64_t eop_buffer_size; /* to KFD */
- uint64_t ctx_save_restore_address; /* to KFD */
- uint64_t ctx_save_restore_size; /* to KFD */
+ __u64 ring_base_address; /* to KFD */
+ __u64 write_pointer_address; /* from KFD */
+ __u64 read_pointer_address; /* from KFD */
+ __u64 doorbell_offset; /* from KFD */
+
+ __u32 ring_size; /* to KFD */
+ __u32 gpu_id; /* to KFD */
+ __u32 queue_type; /* to KFD */
+ __u32 queue_percentage; /* to KFD */
+ __u32 queue_priority; /* to KFD */
+ __u32 queue_id; /* from KFD */
+
+ __u64 eop_buffer_address; /* to KFD */
+ __u64 eop_buffer_size; /* to KFD */
+ __u64 ctx_save_restore_address; /* to KFD */
+ __u64 ctx_save_restore_size; /* to KFD */
};
struct kfd_ioctl_destroy_queue_args {
- uint32_t queue_id; /* to KFD */
- uint32_t pad;
+ __u32 queue_id; /* to KFD */
+ __u32 pad;
};
struct kfd_ioctl_update_queue_args {
- uint64_t ring_base_address; /* to KFD */
+ __u64 ring_base_address; /* to KFD */
- uint32_t queue_id; /* to KFD */
- uint32_t ring_size; /* to KFD */
- uint32_t queue_percentage; /* to KFD */
- uint32_t queue_priority; /* to KFD */
+ __u32 queue_id; /* to KFD */
+ __u32 ring_size; /* to KFD */
+ __u32 queue_percentage; /* to KFD */
+ __u32 queue_priority; /* to KFD */
};
/* For kfd_ioctl_set_memory_policy_args.default_policy and alternate_policy */
#define KFD_IOC_CACHE_POLICY_NONCOHERENT 1
struct kfd_ioctl_set_memory_policy_args {
- uint64_t alternate_aperture_base; /* to KFD */
- uint64_t alternate_aperture_size; /* to KFD */
+ __u64 alternate_aperture_base; /* to KFD */
+ __u64 alternate_aperture_size; /* to KFD */
- uint32_t gpu_id; /* to KFD */
- uint32_t default_policy; /* to KFD */
- uint32_t alternate_policy; /* to KFD */
- uint32_t pad;
+ __u32 gpu_id; /* to KFD */
+ __u32 default_policy; /* to KFD */
+ __u32 alternate_policy; /* to KFD */
+ __u32 pad;
};
/*
*/
struct kfd_ioctl_get_clock_counters_args {
- uint64_t gpu_clock_counter; /* from KFD */
- uint64_t cpu_clock_counter; /* from KFD */
- uint64_t system_clock_counter; /* from KFD */
- uint64_t system_clock_freq; /* from KFD */
+ __u64 gpu_clock_counter; /* from KFD */
+ __u64 cpu_clock_counter; /* from KFD */
+ __u64 system_clock_counter; /* from KFD */
+ __u64 system_clock_freq; /* from KFD */
- uint32_t gpu_id; /* to KFD */
- uint32_t pad;
+ __u32 gpu_id; /* to KFD */
+ __u32 pad;
};
#define NUM_OF_SUPPORTED_GPUS 7
struct kfd_process_device_apertures {
- uint64_t lds_base; /* from KFD */
- uint64_t lds_limit; /* from KFD */
- uint64_t scratch_base; /* from KFD */
- uint64_t scratch_limit; /* from KFD */
- uint64_t gpuvm_base; /* from KFD */
- uint64_t gpuvm_limit; /* from KFD */
- uint32_t gpu_id; /* from KFD */
- uint32_t pad;
+ __u64 lds_base; /* from KFD */
+ __u64 lds_limit; /* from KFD */
+ __u64 scratch_base; /* from KFD */
+ __u64 scratch_limit; /* from KFD */
+ __u64 gpuvm_base; /* from KFD */
+ __u64 gpuvm_limit; /* from KFD */
+ __u32 gpu_id; /* from KFD */
+ __u32 pad;
};
struct kfd_ioctl_get_process_apertures_args {
process_apertures[NUM_OF_SUPPORTED_GPUS];/* from KFD */
/* from KFD, should be in the range [1 - NUM_OF_SUPPORTED_GPUS] */
- uint32_t num_of_nodes;
- uint32_t pad;
+ __u32 num_of_nodes;
+ __u32 pad;
};
#define MAX_ALLOWED_NUM_POINTS 100
#define MAX_ALLOWED_WAC_BUFF_SIZE 128
struct kfd_ioctl_dbg_register_args {
- uint32_t gpu_id; /* to KFD */
- uint32_t pad;
+ __u32 gpu_id; /* to KFD */
+ __u32 pad;
};
struct kfd_ioctl_dbg_unregister_args {
- uint32_t gpu_id; /* to KFD */
- uint32_t pad;
+ __u32 gpu_id; /* to KFD */
+ __u32 pad;
};
struct kfd_ioctl_dbg_address_watch_args {
- uint64_t content_ptr; /* a pointer to the actual content */
- uint32_t gpu_id; /* to KFD */
- uint32_t buf_size_in_bytes; /*including gpu_id and buf_size */
+ __u64 content_ptr; /* a pointer to the actual content */
+ __u32 gpu_id; /* to KFD */
+ __u32 buf_size_in_bytes; /*including gpu_id and buf_size */
};
struct kfd_ioctl_dbg_wave_control_args {
- uint64_t content_ptr; /* a pointer to the actual content */
- uint32_t gpu_id; /* to KFD */
- uint32_t buf_size_in_bytes; /*including gpu_id and buf_size */
+ __u64 content_ptr; /* a pointer to the actual content */
+ __u32 gpu_id; /* to KFD */
+ __u32 buf_size_in_bytes; /*including gpu_id and buf_size */
};
/* Matching HSA_EVENTTYPE */
#define KFD_SIGNAL_EVENT_LIMIT 256
struct kfd_ioctl_create_event_args {
- uint64_t event_page_offset; /* from KFD */
- uint32_t event_trigger_data; /* from KFD - signal events only */
- uint32_t event_type; /* to KFD */
- uint32_t auto_reset; /* to KFD */
- uint32_t node_id; /* to KFD - only valid for certain
+ __u64 event_page_offset; /* from KFD */
+ __u32 event_trigger_data; /* from KFD - signal events only */
+ __u32 event_type; /* to KFD */
+ __u32 auto_reset; /* to KFD */
+ __u32 node_id; /* to KFD - only valid for certain
event types */
- uint32_t event_id; /* from KFD */
- uint32_t event_slot_index; /* from KFD */
+ __u32 event_id; /* from KFD */
+ __u32 event_slot_index; /* from KFD */
};
struct kfd_ioctl_destroy_event_args {
- uint32_t event_id; /* to KFD */
- uint32_t pad;
+ __u32 event_id; /* to KFD */
+ __u32 pad;
};
struct kfd_ioctl_set_event_args {
- uint32_t event_id; /* to KFD */
- uint32_t pad;
+ __u32 event_id; /* to KFD */
+ __u32 pad;
};
struct kfd_ioctl_reset_event_args {
- uint32_t event_id; /* to KFD */
- uint32_t pad;
+ __u32 event_id; /* to KFD */
+ __u32 pad;
};
struct kfd_memory_exception_failure {
- uint32_t NotPresent; /* Page not present or supervisor privilege */
- uint32_t ReadOnly; /* Write access to a read-only page */
- uint32_t NoExecute; /* Execute access to a page marked NX */
- uint32_t pad;
+ __u32 NotPresent; /* Page not present or supervisor privilege */
+ __u32 ReadOnly; /* Write access to a read-only page */
+ __u32 NoExecute; /* Execute access to a page marked NX */
+ __u32 pad;
};
/* memory exception data*/
struct kfd_hsa_memory_exception_data {
struct kfd_memory_exception_failure failure;
- uint64_t va;
- uint32_t gpu_id;
- uint32_t pad;
+ __u64 va;
+ __u32 gpu_id;
+ __u32 pad;
};
/* Event data*/
union {
struct kfd_hsa_memory_exception_data memory_exception_data;
}; /* From KFD */
- uint64_t kfd_event_data_ext; /* pointer to an extension structure
+ __u64 kfd_event_data_ext; /* pointer to an extension structure
for future exception types */
- uint32_t event_id; /* to KFD */
- uint32_t pad;
+ __u32 event_id; /* to KFD */
+ __u32 pad;
};
struct kfd_ioctl_wait_events_args {
- uint64_t events_ptr; /* pointed to struct
+ __u64 events_ptr; /* pointed to struct
kfd_event_data array, to KFD */
- uint32_t num_events; /* to KFD */
- uint32_t wait_for_all; /* to KFD */
- uint32_t timeout; /* to KFD */
- uint32_t wait_result; /* from KFD */
+ __u32 num_events; /* to KFD */
+ __u32 wait_for_all; /* to KFD */
+ __u32 timeout; /* to KFD */
+ __u32 wait_result; /* from KFD */
};
struct kfd_ioctl_set_scratch_backing_va_args {
#define SECCOMP_MODE_FILTER 2 /* uses user-supplied filter. */
/* Valid operations for seccomp syscall. */
-#define SECCOMP_SET_MODE_STRICT 0
-#define SECCOMP_SET_MODE_FILTER 1
+#define SECCOMP_SET_MODE_STRICT 0
+#define SECCOMP_SET_MODE_FILTER 1
+#define SECCOMP_GET_ACTION_AVAIL 2
/* Valid flags for SECCOMP_SET_MODE_FILTER */
#define SECCOMP_FILTER_FLAG_TSYNC 1
+#define SECCOMP_FILTER_FLAG_LOG 2
/*
* All BPF programs must return a 32-bit value.
* The bottom 16-bits are for optional return data.
- * The upper 16-bits are ordered from least permissive values to most.
+ * The upper 16-bits are ordered from least permissive values to most,
+ * as a signed value (so 0x80000000 is negative).
*
* The ordering ensures that a min_t() over composed return values always
* selects the least permissive choice.
*/
-#define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */
-#define SECCOMP_RET_TRAP 0x00030000U /* disallow and force a SIGSYS */
-#define SECCOMP_RET_ERRNO 0x00050000U /* returns an errno */
-#define SECCOMP_RET_TRACE 0x7ff00000U /* pass to a tracer or disallow */
-#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */
+#define SECCOMP_RET_KILL_PROCESS 0x80000000U /* kill the process */
+#define SECCOMP_RET_KILL_THREAD 0x00000000U /* kill the thread */
+#define SECCOMP_RET_KILL SECCOMP_RET_KILL_THREAD
+#define SECCOMP_RET_TRAP 0x00030000U /* disallow and force a SIGSYS */
+#define SECCOMP_RET_ERRNO 0x00050000U /* returns an errno */
+#define SECCOMP_RET_TRACE 0x7ff00000U /* pass to a tracer or disallow */
+#define SECCOMP_RET_LOG 0x7ffc0000U /* allow after logging */
+#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */
/* Masks for the return value sections. */
+#define SECCOMP_RET_ACTION_FULL 0xffff0000U
#define SECCOMP_RET_ACTION 0x7fff0000U
#define SECCOMP_RET_DATA 0x0000ffffU
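
A hedged userspace sketch of how an action word is composed: SECCOMP_RET_ERRNO in the upper 16 bits with the errno value in SECCOMP_RET_DATA. The filter below denies every syscall with EPERM (purely illustrative) and opts in to logging via the new SECCOMP_FILTER_FLAG_LOG; the caller needs no_new_privs or CAP_SYS_ADMIN:

#include <errno.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/syscall.h>
#include <unistd.h>

static int example_install_filter(void)
{
	struct sock_filter insns[] = {
		BPF_STMT(BPF_RET | BPF_K,
			 SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};

	return syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
		       SECCOMP_FILTER_FLAG_LOG, &prog);
}
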
__u8 iFunction;
} __attribute__ ((packed));
+#define USB_DT_INTERFACE_ASSOCIATION_SIZE 8
/*-------------------------------------------------------------------------*/
struct ib_uverbs_rss_caps rss_caps;
__u32 max_wq_type_rq;
__u32 raw_packet_caps;
- struct ib_uverbs_tm_caps xrq_caps;
+ struct ib_uverbs_tm_caps tm_caps;
};
struct ib_uverbs_query_port {
BUG();
}
-/* TODO: this shouldn't be here but it is because the frontend drivers
- * are using it (its rolled in headers) even though we won't hit the code path.
- * So for right now just punt with this.
- */
-static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
-{
- BUG();
- return NULL;
-}
-
extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
struct gnttab_map_grant_ref *kmap_ops,
struct page **pages, unsigned int count);
info.shm_swp = ip->shm_swp;
info.swap_attempts = ip->swap_attempts;
info.swap_successes = ip->swap_successes;
- return copy_to_user(up, &info, sizeof(info));
+ return copy_to_user(uip, &info, sizeof(info));
}
static int copy_compat_shmid_to_user(void __user *buf, struct shmid64_ds *in,
err = shmctl_stat(ns, shmid, cmd, &sem64);
if (err < 0)
return err;
- if (copy_compat_shmid_to_user(&sem64, uptr, version))
+ if (copy_compat_shmid_to_user(uptr, &sem64, version))
err = -EFAULT;
return err;
struct bpf_map *map = (struct bpf_map *) (unsigned long) BPF_R2;
struct bpf_array *array = container_of(map, struct bpf_array, map);
struct bpf_prog *prog;
- u64 index = BPF_R3;
+ u32 index = BPF_R3;
if (unlikely(index >= array->map.max_entries))
goto out;
static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
{
struct bpf_dtab *dtab;
+ int err = -EINVAL;
u64 cost;
- int err;
/* check sanity of attributes */
if (attr->max_entries == 0 || attr->key_size != 4 ||
if (err)
goto free_dtab;
+ err = -ENOMEM;
+
/* A per cpu bitfield with a bit per possible net device */
dtab->flush_needed = __alloc_percpu(dev_map_bitmap_size(attr),
__alignof__(unsigned long));
free_dtab:
free_percpu(dtab->flush_needed);
kfree(dtab);
- return ERR_PTR(-ENOMEM);
+ return ERR_PTR(err);
}
static void dev_map_free(struct bpf_map *map)
static void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock)
{
+ unsigned long flags;
+
if (do_idr_lock)
- spin_lock_bh(&map_idr_lock);
+ spin_lock_irqsave(&map_idr_lock, flags);
else
__acquire(&map_idr_lock);
idr_remove(&map_idr, map->id);
if (do_idr_lock)
- spin_unlock_bh(&map_idr_lock);
+ spin_unlock_irqrestore(&map_idr_lock, flags);
else
__release(&map_idr_lock);
}
}
if (insn->imm == BPF_FUNC_redirect_map) {
- u64 addr = (unsigned long)prog;
+ /* Note, we cannot use prog directly as imm as subsequent
+ * rewrites would still change the prog pointer. The only
+ * stable address we can use is aux, which also works with
+ * prog clones during blinding.
+ */
+ u64 addr = (unsigned long)prog->aux;
struct bpf_insn r4_ld[] = {
BPF_LD_IMM64(BPF_REG_4, addr),
*insn,
list_del_init(&cset->mg_node);
}
spin_unlock_irq(&css_set_lock);
+
+ /*
+ * Re-initialize the cgroup_taskset structure in case it is reused
+ * again in another cgroup_migrate_add_task()/cgroup_migrate_execute()
+ * iteration.
+ */
+ tset->nr_tasks = 0;
+ tset->csets = &tset->src_csets;
return ret;
}
#include <linux/lockdep.h>
#include <linux/tick.h>
#include <linux/irq.h>
+#include <linux/nmi.h>
#include <linux/smpboot.h>
#include <linux/relay.h>
#include <linux/slab.h>
* @bringup: Single callback bringup or teardown selector
* @cb_state: The state for a single callback (install/uninstall)
* @result: Result of the operation
- * @done: Signal completion to the issuer of the task
+ * @done_up: Signal completion to the issuer of the task for cpu-up
+ * @done_down: Signal completion to the issuer of the task for cpu-down
*/
struct cpuhp_cpu_state {
enum cpuhp_state state;
enum cpuhp_state target;
+ enum cpuhp_state fail;
#ifdef CONFIG_SMP
struct task_struct *thread;
bool should_run;
bool single;
bool bringup;
struct hlist_node *node;
+ struct hlist_node *last;
enum cpuhp_state cb_state;
int result;
- struct completion done;
+ struct completion done_up;
+ struct completion done_down;
#endif
};
-static DEFINE_PER_CPU(struct cpuhp_cpu_state, cpuhp_state);
+static DEFINE_PER_CPU(struct cpuhp_cpu_state, cpuhp_state) = {
+ .fail = CPUHP_INVALID,
+};
#if defined(CONFIG_LOCKDEP) && defined(CONFIG_SMP)
-static struct lock_class_key cpuhp_state_key;
-static struct lockdep_map cpuhp_state_lock_map =
- STATIC_LOCKDEP_MAP_INIT("cpuhp_state", &cpuhp_state_key);
+static struct lockdep_map cpuhp_state_up_map =
+ STATIC_LOCKDEP_MAP_INIT("cpuhp_state-up", &cpuhp_state_up_map);
+static struct lockdep_map cpuhp_state_down_map =
+ STATIC_LOCKDEP_MAP_INIT("cpuhp_state-down", &cpuhp_state_down_map);
+
+static inline void cpuhp_lock_acquire(bool bringup)
+{
+ lock_map_acquire(bringup ? &cpuhp_state_up_map : &cpuhp_state_down_map);
+}
+
+static inline void cpuhp_lock_release(bool bringup)
+{
+ lock_map_release(bringup ? &cpuhp_state_up_map : &cpuhp_state_down_map);
+}
+#else
+
+static inline void cpuhp_lock_acquire(bool bringup) { }
+static inline void cpuhp_lock_release(bool bringup) { }
+
#endif
/**
/**
* cpuhp_invoke_callback _ Invoke the callbacks for a given state
* @cpu: The cpu for which the callback should be invoked
- * @step: The step in the state machine
+ * @state: The state to do callbacks for
* @bringup: True if the bringup callback should be invoked
+ * @node: For multi-instance, do a single entry callback for install/remove
+ * @lastp: For multi-instance rollback, remember how far we got
*
* Called from cpu hotplug and from the state register machinery.
*/
static int cpuhp_invoke_callback(unsigned int cpu, enum cpuhp_state state,
- bool bringup, struct hlist_node *node)
+ bool bringup, struct hlist_node *node,
+ struct hlist_node **lastp)
{
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
struct cpuhp_step *step = cpuhp_get_step(state);
int (*cb)(unsigned int cpu);
int ret, cnt;
+ if (st->fail == state) {
+ st->fail = CPUHP_INVALID;
+
+ if (!(bringup ? step->startup.single : step->teardown.single))
+ return 0;
+
+ return -EAGAIN;
+ }
+
if (!step->multi_instance) {
+ WARN_ON_ONCE(lastp && *lastp);
cb = bringup ? step->startup.single : step->teardown.single;
if (!cb)
return 0;
/* Single invocation for instance add/remove */
if (node) {
+ WARN_ON_ONCE(lastp && *lastp);
trace_cpuhp_multi_enter(cpu, st->target, state, cbm, node);
ret = cbm(cpu, node);
trace_cpuhp_exit(cpu, st->state, state, ret);
/* State transition. Invoke on all instances */
cnt = 0;
hlist_for_each(node, &step->list) {
+ if (lastp && node == *lastp)
+ break;
+
trace_cpuhp_multi_enter(cpu, st->target, state, cbm, node);
ret = cbm(cpu, node);
trace_cpuhp_exit(cpu, st->state, state, ret);
- if (ret)
- goto err;
+ if (ret) {
+ if (!lastp)
+ goto err;
+
+ *lastp = node;
+ return ret;
+ }
cnt++;
}
+ if (lastp)
+ *lastp = NULL;
return 0;
err:
/* Rollback the instances if one failed */
hlist_for_each(node, &step->list) {
if (!cnt--)
break;
- cbm(cpu, node);
+
+ trace_cpuhp_multi_enter(cpu, st->target, state, cbm, node);
+ ret = cbm(cpu, node);
+ trace_cpuhp_exit(cpu, st->state, state, ret);
+ /*
+ * Rollback must not fail.
+ */
+ WARN_ON_ONCE(ret);
}
return ret;
}
#ifdef CONFIG_SMP
+static inline void wait_for_ap_thread(struct cpuhp_cpu_state *st, bool bringup)
+{
+ struct completion *done = bringup ? &st->done_up : &st->done_down;
+ wait_for_completion(done);
+}
+
+static inline void complete_ap_thread(struct cpuhp_cpu_state *st, bool bringup)
+{
+ struct completion *done = bringup ? &st->done_up : &st->done_down;
+ complete(done);
+}
+
+/*
+ * The former STARTING/DYING states, ran with IRQs disabled and must not fail.
+ */
+static bool cpuhp_is_atomic_state(enum cpuhp_state state)
+{
+ return CPUHP_AP_IDLE_DEAD <= state && state < CPUHP_AP_ONLINE;
+}
+
/* Serializes the updates to cpu_online_mask, cpu_present_mask */
static DEFINE_MUTEX(cpu_add_remove_lock);
bool cpuhp_tasks_frozen;
EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
#endif /* CONFIG_HOTPLUG_CPU */
-static void __cpuhp_kick_ap_work(struct cpuhp_cpu_state *st);
+static inline enum cpuhp_state
+cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target)
+{
+ enum cpuhp_state prev_state = st->state;
+
+ st->rollback = false;
+ st->last = NULL;
+
+ st->target = target;
+ st->single = false;
+ st->bringup = st->state < target;
+
+ return prev_state;
+}
+
+static inline void
+cpuhp_reset_state(struct cpuhp_cpu_state *st, enum cpuhp_state prev_state)
+{
+ st->rollback = true;
+
+ /*
+ * If we have st->last we need to undo partial multi_instance of this
+ * state first. Otherwise start undo at the previous state.
+ */
+ if (!st->last) {
+ if (st->bringup)
+ st->state--;
+ else
+ st->state++;
+ }
+
+ st->target = prev_state;
+ st->bringup = !st->bringup;
+}
+
+/* Regular hotplug invocation of the AP hotplug thread */
+static void __cpuhp_kick_ap(struct cpuhp_cpu_state *st)
+{
+ if (!st->single && st->state == st->target)
+ return;
+
+ st->result = 0;
+ /*
+ * Make sure the above stores are visible before should_run becomes
+ * true. Paired with the mb() above in cpuhp_thread_fun()
+ */
+ smp_mb();
+ st->should_run = true;
+ wake_up_process(st->thread);
+ wait_for_ap_thread(st, st->bringup);
+}
+
+static int cpuhp_kick_ap(struct cpuhp_cpu_state *st, enum cpuhp_state target)
+{
+ enum cpuhp_state prev_state;
+ int ret;
+
+ prev_state = cpuhp_set_state(st, target);
+ __cpuhp_kick_ap(st);
+ if ((ret = st->result)) {
+ cpuhp_reset_state(st, prev_state);
+ __cpuhp_kick_ap(st);
+ }
+
+ return ret;
+}
static int bringup_wait_for_ap(unsigned int cpu)
{
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
/* Wait for the CPU to reach CPUHP_AP_ONLINE_IDLE */
- wait_for_completion(&st->done);
+ wait_for_ap_thread(st, true);
if (WARN_ON_ONCE((!cpu_online(cpu))))
return -ECANCELED;
stop_machine_unpark(cpu);
kthread_unpark(st->thread);
- /* Should we go further up ? */
- if (st->target > CPUHP_AP_ONLINE_IDLE) {
- __cpuhp_kick_ap_work(st);
- wait_for_completion(&st->done);
- }
- return st->result;
+ if (st->target <= CPUHP_AP_ONLINE_IDLE)
+ return 0;
+
+ return cpuhp_kick_ap(st, st->target);
}
static int bringup_cpu(unsigned int cpu)
/*
* Hotplug state machine related functions
*/
-static void undo_cpu_down(unsigned int cpu, struct cpuhp_cpu_state *st)
-{
- for (st->state++; st->state < st->target; st->state++) {
- struct cpuhp_step *step = cpuhp_get_step(st->state);
-
- if (!step->skip_onerr)
- cpuhp_invoke_callback(cpu, st->state, true, NULL);
- }
-}
-
-static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
- enum cpuhp_state target)
-{
- enum cpuhp_state prev_state = st->state;
- int ret = 0;
-
- for (; st->state > target; st->state--) {
- ret = cpuhp_invoke_callback(cpu, st->state, false, NULL);
- if (ret) {
- st->target = prev_state;
- undo_cpu_down(cpu, st);
- break;
- }
- }
- return ret;
-}
static void undo_cpu_up(unsigned int cpu, struct cpuhp_cpu_state *st)
{
struct cpuhp_step *step = cpuhp_get_step(st->state);
if (!step->skip_onerr)
- cpuhp_invoke_callback(cpu, st->state, false, NULL);
+ cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
}
}
while (st->state < target) {
st->state++;
- ret = cpuhp_invoke_callback(cpu, st->state, true, NULL);
+ ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
if (ret) {
st->target = prev_state;
undo_cpu_up(cpu, st);
{
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
- init_completion(&st->done);
+ init_completion(&st->done_up);
+ init_completion(&st->done_down);
}
static int cpuhp_should_run(unsigned int cpu)
return st->should_run;
}
-/* Execute the teardown callbacks. Used to be CPU_DOWN_PREPARE */
-static int cpuhp_ap_offline(unsigned int cpu, struct cpuhp_cpu_state *st)
-{
- enum cpuhp_state target = max((int)st->target, CPUHP_TEARDOWN_CPU);
-
- return cpuhp_down_callbacks(cpu, st, target);
-}
-
-/* Execute the online startup callbacks. Used to be CPU_ONLINE */
-static int cpuhp_ap_online(unsigned int cpu, struct cpuhp_cpu_state *st)
-{
- return cpuhp_up_callbacks(cpu, st, st->target);
-}
-
/*
* Execute teardown/startup callbacks on the plugged cpu. Also used to invoke
* callbacks when a state gets [un]installed at runtime.
+ *
+ * Each invocation of this function by the smpboot thread does a single AP
+ * state callback.
+ *
+ * It has 3 modes of operation:
+ * - single: runs st->cb_state
+ * - up: runs ++st->state, while st->state < st->target
+ * - down: runs st->state--, while st->state > st->target
+ *
+ * When complete or on error, should_run is cleared and the completion is fired.
*/
static void cpuhp_thread_fun(unsigned int cpu)
{
struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
- int ret = 0;
+ bool bringup = st->bringup;
+ enum cpuhp_state state;
/*
- * Paired with the mb() in cpuhp_kick_ap_work and
- * cpuhp_invoke_ap_callback, so the work set is consistent visible.
+ * ACQUIRE for the cpuhp_should_run() load of ->should_run. Ensures
+ * that if we see ->should_run we also see the rest of the state.
*/
smp_mb();
- if (!st->should_run)
+
+ if (WARN_ON_ONCE(!st->should_run))
return;
- st->should_run = false;
+ cpuhp_lock_acquire(bringup);
- lock_map_acquire(&cpuhp_state_lock_map);
- /* Single callback invocation for [un]install ? */
if (st->single) {
- if (st->cb_state < CPUHP_AP_ONLINE) {
- local_irq_disable();
- ret = cpuhp_invoke_callback(cpu, st->cb_state,
- st->bringup, st->node);
- local_irq_enable();
+ state = st->cb_state;
+ st->should_run = false;
+ } else {
+ if (bringup) {
+ st->state++;
+ state = st->state;
+ st->should_run = (st->state < st->target);
+ WARN_ON_ONCE(st->state > st->target);
} else {
- ret = cpuhp_invoke_callback(cpu, st->cb_state,
- st->bringup, st->node);
+ state = st->state;
+ st->state--;
+ st->should_run = (st->state > st->target);
+ WARN_ON_ONCE(st->state < st->target);
}
- } else if (st->rollback) {
- BUG_ON(st->state < CPUHP_AP_ONLINE_IDLE);
+ }
+
+ WARN_ON_ONCE(!cpuhp_is_ap_state(state));
+
+ if (st->rollback) {
+ struct cpuhp_step *step = cpuhp_get_step(state);
+ if (step->skip_onerr)
+ goto next;
+ }
+
+ if (cpuhp_is_atomic_state(state)) {
+ local_irq_disable();
+ st->result = cpuhp_invoke_callback(cpu, state, bringup, st->node, &st->last);
+ local_irq_enable();
- undo_cpu_down(cpu, st);
- st->rollback = false;
+ /*
+ * STARTING/DYING must not fail!
+ */
+ WARN_ON_ONCE(st->result);
} else {
- /* Cannot happen .... */
- BUG_ON(st->state < CPUHP_AP_ONLINE_IDLE);
-
- /* Regular hotplug work */
- if (st->state < st->target)
- ret = cpuhp_ap_online(cpu, st);
- else if (st->state > st->target)
- ret = cpuhp_ap_offline(cpu, st);
+ st->result = cpuhp_invoke_callback(cpu, state, bringup, st->node, &st->last);
+ }
+
+ if (st->result) {
+ /*
+ * If we fail on a rollback, we're up a creek without a
+ * paddle, no way forward, no way back. We lose, thanks for
+ * playing.
+ */
+ WARN_ON_ONCE(st->rollback);
+ st->should_run = false;
}
- lock_map_release(&cpuhp_state_lock_map);
- st->result = ret;
- complete(&st->done);
+
+next:
+ cpuhp_lock_release(bringup);
+
+ if (!st->should_run)
+ complete_ap_thread(st, bringup);
}
/* Invoke a single callback on a remote cpu */
struct hlist_node *node)
{
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+ int ret;
if (!cpu_online(cpu))
return 0;
- lock_map_acquire(&cpuhp_state_lock_map);
- lock_map_release(&cpuhp_state_lock_map);
+ cpuhp_lock_acquire(false);
+ cpuhp_lock_release(false);
+
+ cpuhp_lock_acquire(true);
+ cpuhp_lock_release(true);
/*
* If we are up and running, use the hotplug thread. For early calls
* we invoke the thread function directly.
*/
if (!st->thread)
- return cpuhp_invoke_callback(cpu, state, bringup, node);
+ return cpuhp_invoke_callback(cpu, state, bringup, node, NULL);
+
+ st->rollback = false;
+ st->last = NULL;
+ st->node = node;
+ st->bringup = bringup;
st->cb_state = state;
st->single = true;
- st->bringup = bringup;
- st->node = node;
- /*
- * Make sure the above stores are visible before should_run becomes
- * true. Paired with the mb() above in cpuhp_thread_fun()
- */
- smp_mb();
- st->should_run = true;
- wake_up_process(st->thread);
- wait_for_completion(&st->done);
- return st->result;
-}
+ __cpuhp_kick_ap(st);
-/* Regular hotplug invocation of the AP hotplug thread */
-static void __cpuhp_kick_ap_work(struct cpuhp_cpu_state *st)
-{
- st->result = 0;
- st->single = false;
/*
- * Make sure the above stores are visible before should_run becomes
- * true. Paired with the mb() above in cpuhp_thread_fun()
+ * If we failed and did a partial, do a rollback.
*/
- smp_mb();
- st->should_run = true;
- wake_up_process(st->thread);
+ if ((ret = st->result) && st->last) {
+ st->rollback = true;
+ st->bringup = !bringup;
+
+ __cpuhp_kick_ap(st);
+ }
+
+ return ret;
}
static int cpuhp_kick_ap_work(unsigned int cpu)
{
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
- enum cpuhp_state state = st->state;
+ enum cpuhp_state prev_state = st->state;
+ int ret;
+
+ cpuhp_lock_acquire(false);
+ cpuhp_lock_release(false);
- trace_cpuhp_enter(cpu, st->target, state, cpuhp_kick_ap_work);
- lock_map_acquire(&cpuhp_state_lock_map);
- lock_map_release(&cpuhp_state_lock_map);
- __cpuhp_kick_ap_work(st);
- wait_for_completion(&st->done);
- trace_cpuhp_exit(cpu, st->state, state, st->result);
- return st->result;
+ cpuhp_lock_acquire(true);
+ cpuhp_lock_release(true);
+
+ trace_cpuhp_enter(cpu, st->target, prev_state, cpuhp_kick_ap_work);
+ ret = cpuhp_kick_ap(st, st->target);
+ trace_cpuhp_exit(cpu, st->state, prev_state, ret);
+
+ return ret;
}
static struct smp_hotplug_thread cpuhp_threads = {
struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
enum cpuhp_state target = max((int)st->target, CPUHP_AP_OFFLINE);
int err, cpu = smp_processor_id();
+ int ret;
/* Ensure this CPU doesn't handle any more interrupts. */
err = __cpu_disable();
WARN_ON(st->state != CPUHP_TEARDOWN_CPU);
st->state--;
/* Invoke the former CPU_DYING callbacks */
- for (; st->state > target; st->state--)
- cpuhp_invoke_callback(cpu, st->state, false, NULL);
+ for (; st->state > target; st->state--) {
+ ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ /*
+ * DYING must not fail!
+ */
+ WARN_ON_ONCE(ret);
+ }
/* Give up timekeeping duties */
tick_handover_do_timer();
*
* Wait for the stop thread to go away.
*/
- wait_for_completion(&st->done);
+ wait_for_ap_thread(st, false);
BUG_ON(st->state != CPUHP_AP_IDLE_DEAD);
/* Interrupts are moved away from the dying cpu, reenable alloc/free */
{
struct cpuhp_cpu_state *st = arg;
- complete(&st->done);
+ complete_ap_thread(st, false);
}
void cpuhp_report_idle_dead(void)
cpuhp_complete_idle_dead, st, 0);
}
-#else
-#define takedown_cpu NULL
-#endif
+static void undo_cpu_down(unsigned int cpu, struct cpuhp_cpu_state *st)
+{
+ for (st->state++; st->state < st->target; st->state++) {
+ struct cpuhp_step *step = cpuhp_get_step(st->state);
-#ifdef CONFIG_HOTPLUG_CPU
+ if (!step->skip_onerr)
+ cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+ }
+}
+
+static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ enum cpuhp_state target)
+{
+ enum cpuhp_state prev_state = st->state;
+ int ret = 0;
+
+ for (; st->state > target; st->state--) {
+ ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ if (ret) {
+ st->target = prev_state;
+ undo_cpu_down(cpu, st);
+ break;
+ }
+ }
+ return ret;
+}
/* Requires cpu_add_remove_lock to be held */
static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
cpuhp_tasks_frozen = tasks_frozen;
- prev_state = st->state;
- st->target = target;
+ prev_state = cpuhp_set_state(st, target);
/*
* If the current CPU state is in the range of the AP hotplug thread,
* then we need to kick the thread.
*/
if (st->state > CPUHP_TEARDOWN_CPU) {
+ st->target = max((int)target, CPUHP_TEARDOWN_CPU);
ret = cpuhp_kick_ap_work(cpu);
/*
* The AP side has done the error rollback already. Just
*/
if (st->state > CPUHP_TEARDOWN_CPU)
goto out;
+
+ st->target = target;
}
/*
* The AP brought itself down to CPUHP_TEARDOWN_CPU. So we need
*/
ret = cpuhp_down_callbacks(cpu, st, target);
if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
- st->target = prev_state;
- st->rollback = true;
- cpuhp_kick_ap_work(cpu);
+ cpuhp_reset_state(st, prev_state);
+ __cpuhp_kick_ap(st);
}
out:
cpus_write_unlock();
+ /*
+ * Do post unplug cleanup. This is still protected against
+ * concurrent CPU hotplug via cpu_add_remove_lock.
+ */
+ lockup_detector_cleanup();
return ret;
}
cpu_maps_update_done();
return err;
}
+
int cpu_down(unsigned int cpu)
{
return do_cpu_down(cpu, CPUHP_OFFLINE);
}
EXPORT_SYMBOL(cpu_down);
+
+#else
+#define takedown_cpu NULL
#endif /*CONFIG_HOTPLUG_CPU*/
/**
{
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
enum cpuhp_state target = min((int)st->target, CPUHP_AP_ONLINE);
+ int ret;
rcu_cpu_starting(cpu); /* Enables RCU usage on this CPU. */
while (st->state < target) {
st->state++;
- cpuhp_invoke_callback(cpu, st->state, true, NULL);
+ ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+ /*
+ * STARTING must not fail!
+ */
+ WARN_ON_ONCE(ret);
}
}
return;
st->state = CPUHP_AP_ONLINE_IDLE;
- complete(&st->done);
+ complete_ap_thread(st, true);
}
/* Requires cpu_add_remove_lock to be held */
cpuhp_tasks_frozen = tasks_frozen;
- st->target = target;
+ cpuhp_set_state(st, target);
/*
* If the current CPU state is in the range of the AP hotplug thread,
* then we need to kick the thread once more.
struct cpuhp_step *sp = cpuhp_get_step(state);
int ret;
+ /*
+ * If there's nothing to do, we're done.
+ * Relies on the union for multi_instance.
+ */
if ((bringup && !sp->startup.single) ||
(!bringup && !sp->teardown.single))
return 0;
if (cpuhp_is_ap_state(state))
ret = cpuhp_invoke_ap_callback(cpu, state, bringup, node);
else
- ret = cpuhp_invoke_callback(cpu, state, bringup, node);
+ ret = cpuhp_invoke_callback(cpu, state, bringup, node, NULL);
#else
- ret = cpuhp_invoke_callback(cpu, state, bringup, node);
+ ret = cpuhp_invoke_callback(cpu, state, bringup, node, NULL);
#endif
BUG_ON(ret && !bringup);
return ret;
}
static DEVICE_ATTR(target, 0644, show_cpuhp_target, write_cpuhp_target);
+
+static ssize_t write_cpuhp_fail(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, dev->id);
+ struct cpuhp_step *sp;
+ int fail, ret;
+
+ ret = kstrtoint(buf, 10, &fail);
+ if (ret)
+ return ret;
+
+ /*
+ * Cannot fail STARTING/DYING callbacks.
+ */
+ if (cpuhp_is_atomic_state(fail))
+ return -EINVAL;
+
+ /*
+ * Cannot fail anything that doesn't have callbacks.
+ */
+ mutex_lock(&cpuhp_state_mutex);
+ sp = cpuhp_get_step(fail);
+ if (!sp->startup.single && !sp->teardown.single)
+ ret = -EINVAL;
+ mutex_unlock(&cpuhp_state_mutex);
+ if (ret)
+ return ret;
+
+ st->fail = fail;
+
+ return count;
+}
+
+static ssize_t show_cpuhp_fail(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, dev->id);
+
+ return sprintf(buf, "%d\n", st->fail);
+}
+
+static DEVICE_ATTR(fail, 0644, show_cpuhp_fail, write_cpuhp_fail);
+
static struct attribute *cpuhp_cpu_attrs[] = {
&dev_attr_state.attr,
&dev_attr_target.attr,
+ &dev_attr_fail.attr,
NULL
};
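
A hedged test-harness sketch of the new attribute: arm a failure injection for CPU 1 by writing a state number into "fail", then attempt to offline the CPU and observe the rollback. The sysfs path assumes the existing cpuN/hotplug layout:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void example_inject_cpuhp_failure(int state)
{
	int fd = open("/sys/devices/system/cpu/cpu1/hotplug/fail", O_WRONLY);

	if (fd < 0)
		return;
	/* Callbacks for this state will report -EAGAIN (see st->fail above). */
	dprintf(fd, "%d\n", state);
	close(fd);
}
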
}
}
event->tp_event->prog = prog;
+ event->tp_event->bpf_prog_owner = event;
return 0;
}
return;
prog = event->tp_event->prog;
- if (prog) {
+ if (prog && event->tp_event->bpf_prog_owner == event) {
event->tp_event->prog = NULL;
bpf_prog_put(prog);
}
return NULL;
}
+static __always_inline bool rb_need_aux_wakeup(struct ring_buffer *rb)
+{
+ if (rb->aux_overwrite)
+ return false;
+
+ if (rb->aux_head - rb->aux_wakeup >= rb->aux_watermark) {
+ rb->aux_wakeup = rounddown(rb->aux_head, rb->aux_watermark);
+ return true;
+ }
+
+ return false;
+}
+
/*
* Commit the data written by hardware into the ring buffer by adjusting
* aux_head and posting a PERF_RECORD_AUX into the perf buffer. It is the
}
rb->user_page->aux_head = rb->aux_head;
- if (rb->aux_head - rb->aux_wakeup >= rb->aux_watermark) {
+ if (rb_need_aux_wakeup(rb))
wakeup = true;
- rb->aux_wakeup = rounddown(rb->aux_head, rb->aux_watermark);
- }
if (wakeup) {
if (handle->aux_flags & PERF_AUX_FLAG_TRUNCATED)
rb->aux_head += size;
rb->user_page->aux_head = rb->aux_head;
- if (rb->aux_head - rb->aux_wakeup >= rb->aux_watermark) {
+ if (rb_need_aux_wakeup(rb)) {
perf_output_wakeup(handle);
- rb->aux_wakeup = rounddown(rb->aux_head, rb->aux_watermark);
handle->wakeup = rb->aux_wakeup + rb->aux_watermark;
}
struct waitid_info info = {.status = 0};
long err = kernel_waitid(which, upid, &info, options, ru ? &r : NULL);
int signo = 0;
+
if (err > 0) {
signo = SIGCHLD;
err = 0;
- }
-
- if (!err) {
if (ru && copy_to_user(ru, &r, sizeof(struct rusage)))
return -EFAULT;
}
if (err > 0) {
signo = SIGCHLD;
err = 0;
- }
-
- if (!err && uru) {
- /* kernel_waitid() overwrites everything in ru */
- if (COMPAT_USE_64BIT_TIME)
- err = copy_to_user(uru, &ru, sizeof(ru));
- else
- err = put_compat_rusage(&ru, uru);
- if (err)
- return -EFAULT;
+ if (uru) {
+ /* kernel_waitid() overwrites everything in ru */
+ if (COMPAT_USE_64BIT_TIME)
+ err = copy_to_user(uru, &ru, sizeof(ru));
+ else
+ err = put_compat_rusage(&ru, uru);
+ if (err)
+ return -EFAULT;
+ }
}
if (!infop)
int __kernel_text_address(unsigned long addr)
{
- if (core_kernel_text(addr))
- return 1;
- if (is_module_text_address(addr))
- return 1;
- if (is_ftrace_trampoline(addr))
- return 1;
- if (is_kprobe_optinsn_slot(addr) || is_kprobe_insn_slot(addr))
- return 1;
- if (is_bpf_text_address(addr))
+ if (kernel_text_address(addr))
return 1;
/*
* There might be init symbols in saved stacktraces.
int kernel_text_address(unsigned long addr)
{
+ bool no_rcu;
+ int ret = 1;
+
if (core_kernel_text(addr))
return 1;
+
+ /*
+ * If a stack dump happens while RCU is not watching, then
+ * RCU needs to be notified that it must start watching
+ * again. This can happen either through tracing that
+ * triggers a stack trace, through a WARN() while coming
+ * back from idle, or while a CPU is being brought on- or
+ * offline.
+ *
+ * is_module_text_address() as well as the kprobe slots
+ * and is_bpf_text_address() require RCU to be watching.
+ */
+ no_rcu = !rcu_is_watching();
+
+ /* Treat this like an NMI as it can happen anywhere */
+ if (no_rcu)
+ rcu_nmi_enter();
+
if (is_module_text_address(addr))
- return 1;
+ goto out;
if (is_ftrace_trampoline(addr))
- return 1;
+ goto out;
if (is_kprobe_optinsn_slot(addr) || is_kprobe_insn_slot(addr))
- return 1;
+ goto out;
if (is_bpf_text_address(addr))
- return 1;
- return 0;
+ goto out;
+ ret = 0;
+out:
+ if (no_rcu)
+ rcu_nmi_exit();
+
+ return ret;
}
/*
}
EXPORT_SYMBOL_GPL(mmput);
+#ifdef CONFIG_MMU
+static void mmput_async_fn(struct work_struct *work)
+{
+ struct mm_struct *mm = container_of(work, struct mm_struct,
+ async_put_work);
+
+ __mmput(mm);
+}
+
+void mmput_async(struct mm_struct *mm)
+{
+ if (atomic_dec_and_test(&mm->mm_users)) {
+ INIT_WORK(&mm->async_put_work, mmput_async_fn);
+ schedule_work(&mm->async_put_work);
+ }
+}
+#endif
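
A hedged caller-side sketch of the new helper: drop an mm reference from a context that must not block on address-space teardown; if this was the last reference, __mmput() runs later from a workqueue instead of in the caller:

static void example_drop_mm_nonblocking(struct mm_struct *mm)
{
	mmput_async(mm);
}
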
+
/**
* set_mm_exe_file - change a reference to the mm's executable file
*
/*
* Drops a reference to the pi_state object and frees or caches it
* when the last reference is gone.
- *
- * Must be called with the hb lock held.
*/
static void put_pi_state(struct futex_pi_state *pi_state)
{
* and has cleaned up the pi_state already
*/
if (pi_state->owner) {
- raw_spin_lock_irq(&pi_state->owner->pi_lock);
- list_del_init(&pi_state->list);
- raw_spin_unlock_irq(&pi_state->owner->pi_lock);
+ struct task_struct *owner;
- rt_mutex_proxy_unlock(&pi_state->pi_mutex, pi_state->owner);
+ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+ owner = pi_state->owner;
+ if (owner) {
+ raw_spin_lock(&owner->pi_lock);
+ list_del_init(&pi_state->list);
+ raw_spin_unlock(&owner->pi_lock);
+ }
+ rt_mutex_proxy_unlock(&pi_state->pi_mutex, owner);
+ raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
}
- if (current->pi_state_cache)
+ if (current->pi_state_cache) {
kfree(pi_state);
- else {
+ } else {
/*
* pi_state->list is already empty.
* clear pi_state->owner.
raw_spin_unlock_irq(&curr->pi_lock);
spin_lock(&hb->lock);
-
- raw_spin_lock_irq(&curr->pi_lock);
+ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+ raw_spin_lock(&curr->pi_lock);
/*
* We dropped the pi-lock, so re-check whether this
* task still owns the PI-state:
*/
if (head->next != next) {
+ raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
spin_unlock(&hb->lock);
continue;
}
WARN_ON(list_empty(&pi_state->list));
list_del_init(&pi_state->list);
pi_state->owner = NULL;
- raw_spin_unlock_irq(&curr->pi_lock);
+ raw_spin_unlock(&curr->pi_lock);
get_pi_state(pi_state);
+ raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
spin_unlock(&hb->lock);
rt_mutex_futex_unlock(&pi_state->pi_mutex);
WARN_ON(!list_empty(&pi_state->list));
list_add(&pi_state->list, &p->pi_state_list);
+ /*
+ * Assignment without holding pi_state->pi_mutex.wait_lock is safe
+ * because there is no concurrency as the object is not published yet.
+ */
pi_state->owner = p;
raw_spin_unlock_irq(&p->pi_lock);
raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
spin_unlock(&hb->lock);
+ /* drops pi_state->pi_mutex.wait_lock */
ret = wake_futex_pi(uaddr, uval, pi_state);
put_pi_state(pi_state);
irqd_clr_managed_shutdown(d);
- if (cpumask_any_and(aff, cpu_online_mask) > nr_cpu_ids) {
+ if (cpumask_any_and(aff, cpu_online_mask) >= nr_cpu_ids) {
/*
* Catch code which fiddles with enable_irq() on a managed
* and potentially shutdown IRQ. Chained interrupt
/* Calc pointer to the next generic chip */
tmp += sizeof(*gc) + num_ct * sizeof(struct irq_chip_type);
}
- d->name = name;
return 0;
}
EXPORT_SYMBOL_GPL(__irq_alloc_domain_generic_chips);
struct irq_desc *desc;
struct irq_domain *domain;
struct radix_tree_iter iter;
- void **slot;
+ void __rcu **slot;
int i;
seq_printf(m, " %-16s %-6s %-10s %-10s %s\n",
/* The irq_data was moved, fix the revmap to refer to the new location */
static void irq_domain_fix_revmap(struct irq_data *d)
{
- void **slot;
+ void __rcu **slot;
if (d->hwirq < d->domain->revmap_size)
return; /* Not using radix tree. */
#endif
action = __free_irq(irq, dev_id);
+
+ if (!action)
+ return NULL;
+
devname = action->name;
kfree(action);
return devname;
if (filp_epoll) {
filp_tgt = get_epoll_tfile_raw_ptr(filp_epoll, slot.tfd, slot.toff);
fput(filp_epoll);
- } else
+ }
if (IS_ERR(filp_tgt))
return PTR_ERR(filp_tgt);
unsigned long flags;
DEFINE_WAKE_Q(wake_q);
+ /*
+ * __rwsem_down_write_failed_common(sem)
+ * rwsem_optimistic_spin(sem)
+ * osq_unlock(sem->osq)
+ * ...
+ * atomic_long_add_return(&sem->count)
+ *
+ * - VS -
+ *
+ * __up_write()
+ * if (atomic_long_sub_return_release(&sem->count) < 0)
+ * rwsem_wake(sem)
+ * osq_is_locked(&sem->osq)
+ *
+ * And __up_write() must observe !osq_is_locked() when it observes the
+ * atomic_long_add_return() in order to not miss a wakeup.
+ *
+ * This boils down to:
+ *
+ *	[S.rel] X = 1		[RmW] r0 = (Y += 0)
+ *	        MB		      RMB
+ *	[RmW]   Y += 1		[L]   r1 = X
+ *
+ * exists (r0=1 /\ r1=0)
+ */
+ smp_rmb();
+
/*
* If a spinner is present, it is not necessary to do the wakeup.
* Try to do wakeup only if the trylock succeeds to minimize
pgprot_t pgprot = PAGE_KERNEL;
struct dev_pagemap *pgmap;
struct page_map *page_map;
- int error, nid, is_ram;
+ int error, nid, is_ram, i = 0;
align_start = res->start & ~(SECTION_SIZE - 1);
align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
list_del(&page->lru);
page->pgmap = pgmap;
percpu_ref_get(ref);
+ if (!(++i % 1024))
+ cond_resched();
}
devres_add(dev, page_map);
return __va(res->start);
} \
int param_get_##name(char *buffer, const struct kernel_param *kp) \
{ \
- return scnprintf(buffer, PAGE_SIZE, format, \
+ return scnprintf(buffer, PAGE_SIZE, format "\n", \
*((type *)kp->arg)); \
} \
const struct kernel_param_ops param_ops_##name = { \
EXPORT_SYMBOL(param_ops_##name)
-STANDARD_PARAM_DEF(byte, unsigned char, "%hhu", kstrtou8);
-STANDARD_PARAM_DEF(short, short, "%hi", kstrtos16);
-STANDARD_PARAM_DEF(ushort, unsigned short, "%hu", kstrtou16);
-STANDARD_PARAM_DEF(int, int, "%i", kstrtoint);
-STANDARD_PARAM_DEF(uint, unsigned int, "%u", kstrtouint);
-STANDARD_PARAM_DEF(long, long, "%li", kstrtol);
-STANDARD_PARAM_DEF(ulong, unsigned long, "%lu", kstrtoul);
-STANDARD_PARAM_DEF(ullong, unsigned long long, "%llu", kstrtoull);
+STANDARD_PARAM_DEF(byte, unsigned char, "%hhu", kstrtou8);
+STANDARD_PARAM_DEF(short, short, "%hi", kstrtos16);
+STANDARD_PARAM_DEF(ushort, unsigned short, "%hu", kstrtou16);
+STANDARD_PARAM_DEF(int, int, "%i", kstrtoint);
+STANDARD_PARAM_DEF(uint, unsigned int, "%u", kstrtouint);
+STANDARD_PARAM_DEF(long, long, "%li", kstrtol);
+STANDARD_PARAM_DEF(ulong, unsigned long, "%lu", kstrtoul);
+STANDARD_PARAM_DEF(ullong, unsigned long long, "%llu", kstrtoull);
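
With the "\n" folded into each format string, a parameter's sysfs read now ends in a newline produced by the getter itself rather than being appended in param_attr_show() (removed below). A hedged sketch of a parameter this affects:

/* Hedged sketch, assuming <linux/moduleparam.h>: after this change,
 * reading /sys/module/<mod>/parameters/threshold yields "42\n" straight
 * from param_get_int(). */
static int threshold = 42;
module_param(threshold, int, 0644);
MODULE_PARM_DESC(threshold, "example threshold value");
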
int param_set_charp(const char *val, const struct kernel_param *kp)
{
int param_get_charp(char *buffer, const struct kernel_param *kp)
{
- return scnprintf(buffer, PAGE_SIZE, "%s", *((char **)kp->arg));
+ return scnprintf(buffer, PAGE_SIZE, "%s\n", *((char **)kp->arg));
}
EXPORT_SYMBOL(param_get_charp);
int param_get_bool(char *buffer, const struct kernel_param *kp)
{
/* Y and N chosen as being relatively non-coder friendly */
- return sprintf(buffer, "%c", *(bool *)kp->arg ? 'Y' : 'N');
+ return sprintf(buffer, "%c\n", *(bool *)kp->arg ? 'Y' : 'N');
}
EXPORT_SYMBOL(param_get_bool);
int param_get_invbool(char *buffer, const struct kernel_param *kp)
{
- return sprintf(buffer, "%c", (*(bool *)kp->arg) ? 'N' : 'Y');
+ return sprintf(buffer, "%c\n", (*(bool *)kp->arg) ? 'N' : 'Y');
}
EXPORT_SYMBOL(param_get_invbool);
struct kernel_param p = *kp;
for (i = off = 0; i < (arr->num ? *arr->num : arr->max); i++) {
+ /* Replace \n with comma */
if (i)
- buffer[off++] = ',';
+ buffer[off - 1] = ',';
p.arg = arr->elem + arr->elemsize * i;
check_kparam_locked(p.mod);
ret = arr->ops->get(buffer + off, &p);
int param_get_string(char *buffer, const struct kernel_param *kp)
{
const struct kparam_string *kps = kp->str;
- return strlcpy(buffer, kps->string, kps->maxlen);
+ return scnprintf(buffer, PAGE_SIZE, "%s\n", kps->string);
}
EXPORT_SYMBOL(param_get_string);
kernel_param_lock(mk->mod);
count = attribute->param->ops->get(buf, attribute->param);
kernel_param_unlock(mk->mod);
- if (count > 0) {
- strcat(buf, "\n");
- ++count;
- }
return count;
}
/*
* add_sysfs_param - add a parameter to sysfs
* @mk: struct module_kobject
- * @kparam: the actual parameter definition to add to sysfs
+ * @kp: the actual parameter definition to add to sysfs
* @name: name of parameter
*
* Create a kobject if for a (per-module) parameter if mp NULL, and
* frozen processes + suspended devices + idle processors.
* Thus s2idle_enter() should be called right after
* all devices have been suspended.
+ *
+ * Wakeups during the noirq suspend of devices may be spurious,
+ * so prevent them from terminating the loop right away.
*/
error = dpm_noirq_suspend_devices(PMSG_SUSPEND);
if (!error)
s2idle_enter();
+ else if (error == -EBUSY && pm_wakeup_pending())
+ error = 0;
- dpm_noirq_resume_devices(PMSG_RESUME);
- if (error && (error != -EBUSY || !pm_wakeup_pending())) {
- dpm_noirq_end();
- break;
- }
-
- if (s2idle_ops && s2idle_ops->wake)
+ if (!error && s2idle_ops && s2idle_ops->wake)
s2idle_ops->wake();
+ dpm_noirq_resume_devices(PMSG_RESUME);
+
dpm_noirq_end();
+ if (error)
+ break;
+
if (s2idle_ops && s2idle_ops->sync)
s2idle_ops->sync();
RCU_LOCKDEP_WARN(!irqs_disabled(), "rcu_irq_exit() invoked with irqs enabled!!!");
rdtp = this_cpu_ptr(&rcu_dynticks);
+
+ /* Page faults can happen in NMI handlers, so check... */
+ if (rdtp->dynticks_nmi_nesting)
+ return;
+
WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
rdtp->dynticks_nesting < 1);
if (rdtp->dynticks_nesting <= 1) {
RCU_LOCKDEP_WARN(!irqs_disabled(), "rcu_irq_enter() invoked with irqs enabled!!!");
rdtp = this_cpu_ptr(&rcu_dynticks);
+
+ /* Page faults can happen in NMI handlers, so check... */
+ if (rdtp->dynticks_nmi_nesting)
+ return;
+
oldval = rdtp->dynticks_nesting;
rdtp->dynticks_nesting++;
WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
put_task_stack(p);
}
+static inline bool
+state_filter_match(unsigned long state_filter, struct task_struct *p)
+{
+ /* no filter, everything matches */
+ if (!state_filter)
+ return true;
+
+ /* filter, but doesn't match */
+ if (!(p->state & state_filter))
+ return false;
+
+ /*
+ * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
+ * TASK_KILLABLE).
+ */
+ if (state_filter == TASK_UNINTERRUPTIBLE && p->state == TASK_IDLE)
+ return false;
+
+ return true;
+}
+
void show_state_filter(unsigned long state_filter)
{
struct task_struct *g, *p;
*/
touch_nmi_watchdog();
touch_all_softlockup_watchdogs();
- if (!state_filter || (p->state & state_filter))
+ if (state_filter_match(state_filter, p))
sched_show_task(p);
}
}
#endif
-static const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
-
static void
print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
{
#include <linux/audit.h>
#include <linux/compat.h>
#include <linux/coredump.h>
+#include <linux/kmemleak.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/seccomp.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
+#include <linux/sysctl.h>
#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
#include <asm/syscall.h>
* get/put helpers should be used when accessing an instance
* outside of a lifetime-guarded section. In general, this
* is only needed for handling filters shared across tasks.
+ * @log: true if all actions except for SECCOMP_RET_ALLOW should be logged
* @prev: points to a previously installed, or inherited, filter
* @prog: the BPF program to evaluate
*
*/
struct seccomp_filter {
refcount_t usage;
+ bool log;
struct seccomp_filter *prev;
struct bpf_prog *prog;
};
/**
* seccomp_run_filters - evaluates all seccomp filters against @sd
* @sd: optional seccomp data to be passed to filters
+ * @match: stores struct seccomp_filter that resulted in the return value,
+ * unless filter returned SECCOMP_RET_ALLOW, in which case it will
+ * be unchanged.
*
* Returns valid seccomp BPF response codes.
*/
-static u32 seccomp_run_filters(const struct seccomp_data *sd)
+#define ACTION_ONLY(ret) ((s32)((ret) & (SECCOMP_RET_ACTION_FULL)))
+static u32 seccomp_run_filters(const struct seccomp_data *sd,
+ struct seccomp_filter **match)
{
struct seccomp_data sd_local;
u32 ret = SECCOMP_RET_ALLOW;
/* Ensure unexpected behavior doesn't result in failing open. */
if (unlikely(WARN_ON(f == NULL)))
- return SECCOMP_RET_KILL;
+ return SECCOMP_RET_KILL_PROCESS;
if (!sd) {
populate_seccomp_data(&sd_local);
for (; f; f = f->prev) {
u32 cur_ret = BPF_PROG_RUN(f->prog, sd);
- if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
+ if (ACTION_ONLY(cur_ret) < ACTION_ONLY(ret)) {
ret = cur_ret;
+ *match = f;
+ }
}
return ret;
}
return ret;
}
+ /* Set log flag, if present. */
+ if (flags & SECCOMP_FILTER_FLAG_LOG)
+ filter->log = true;
+
/*
* If there is an existing filter, make it the prev and don't drop its
* task reference.
return 0;
}
+void __get_seccomp_filter(struct seccomp_filter *filter)
+{
+ /* Reference count is bounded by the number of total processes. */
+ refcount_inc(&filter->usage);
+}
+
/* get_seccomp_filter - increments the reference count of the filter on @tsk */
void get_seccomp_filter(struct task_struct *tsk)
{
struct seccomp_filter *orig = tsk->seccomp.filter;
if (!orig)
return;
- /* Reference count is bounded by the number of total processes. */
- refcount_inc(&orig->usage);
+ __get_seccomp_filter(orig);
}
static inline void seccomp_filter_free(struct seccomp_filter *filter)
}
}
-/* put_seccomp_filter - decrements the ref count of tsk->seccomp.filter */
-void put_seccomp_filter(struct task_struct *tsk)
+static void __put_seccomp_filter(struct seccomp_filter *orig)
{
- struct seccomp_filter *orig = tsk->seccomp.filter;
/* Clean up single-reference branches iteratively. */
while (orig && refcount_dec_and_test(&orig->usage)) {
struct seccomp_filter *freeme = orig;
}
}
+/* put_seccomp_filter - decrements the ref count of tsk->seccomp.filter */
+void put_seccomp_filter(struct task_struct *tsk)
+{
+ __put_seccomp_filter(tsk->seccomp.filter);
+}
+
static void seccomp_init_siginfo(siginfo_t *info, int syscall, int reason)
{
memset(info, 0, sizeof(*info));
}
#endif /* CONFIG_SECCOMP_FILTER */
+/* For use with seccomp_actions_logged */
+#define SECCOMP_LOG_KILL_PROCESS (1 << 0)
+#define SECCOMP_LOG_KILL_THREAD (1 << 1)
+#define SECCOMP_LOG_TRAP (1 << 2)
+#define SECCOMP_LOG_ERRNO (1 << 3)
+#define SECCOMP_LOG_TRACE (1 << 4)
+#define SECCOMP_LOG_LOG (1 << 5)
+#define SECCOMP_LOG_ALLOW (1 << 6)
+
+static u32 seccomp_actions_logged = SECCOMP_LOG_KILL_PROCESS |
+ SECCOMP_LOG_KILL_THREAD |
+ SECCOMP_LOG_TRAP |
+ SECCOMP_LOG_ERRNO |
+ SECCOMP_LOG_TRACE |
+ SECCOMP_LOG_LOG;
+
+static inline void seccomp_log(unsigned long syscall, long signr, u32 action,
+ bool requested)
+{
+ bool log = false;
+
+ switch (action) {
+ case SECCOMP_RET_ALLOW:
+ break;
+ case SECCOMP_RET_TRAP:
+ log = requested && seccomp_actions_logged & SECCOMP_LOG_TRAP;
+ break;
+ case SECCOMP_RET_ERRNO:
+ log = requested && seccomp_actions_logged & SECCOMP_LOG_ERRNO;
+ break;
+ case SECCOMP_RET_TRACE:
+ log = requested && seccomp_actions_logged & SECCOMP_LOG_TRACE;
+ break;
+ case SECCOMP_RET_LOG:
+ log = seccomp_actions_logged & SECCOMP_LOG_LOG;
+ break;
+ case SECCOMP_RET_KILL_THREAD:
+ log = seccomp_actions_logged & SECCOMP_LOG_KILL_THREAD;
+ break;
+ case SECCOMP_RET_KILL_PROCESS:
+ default:
+ log = seccomp_actions_logged & SECCOMP_LOG_KILL_PROCESS;
+ }
+
+ /*
+ * Force an audit message to be emitted when the action is RET_KILL_*,
+ * RET_LOG, or the FILTER_FLAG_LOG bit was set and the action is
+ * allowed to be logged by the admin.
+ */
+ if (log)
+ return __audit_seccomp(syscall, signr, action);
+
+ /*
+ * Let the audit subsystem decide if the action should be audited based
+ * on whether the current task itself is being audited.
+ */
+ return audit_seccomp(syscall, signr, action);
+}
+
/*
* Secure computing mode 1 allows only read/write/exit/sigreturn.
* To be fully secure this must be combined with rlimit
#ifdef SECCOMP_DEBUG
dump_stack();
#endif
- audit_seccomp(this_syscall, SIGKILL, SECCOMP_RET_KILL);
+ seccomp_log(this_syscall, SIGKILL, SECCOMP_RET_KILL_THREAD, true);
do_exit(SIGKILL);
}
const bool recheck_after_trace)
{
u32 filter_ret, action;
+ struct seccomp_filter *match = NULL;
int data;
/*
*/
rmb();
- filter_ret = seccomp_run_filters(sd);
+ filter_ret = seccomp_run_filters(sd, &match);
data = filter_ret & SECCOMP_RET_DATA;
- action = filter_ret & SECCOMP_RET_ACTION;
+ action = filter_ret & SECCOMP_RET_ACTION_FULL;
switch (action) {
case SECCOMP_RET_ERRNO:
return 0;
+ case SECCOMP_RET_LOG:
+ seccomp_log(this_syscall, 0, action, true);
+ return 0;
+
case SECCOMP_RET_ALLOW:
+ /*
+ * Note that the "match" filter will always be NULL for
+ * this action since SECCOMP_RET_ALLOW is the starting
+ * state in seccomp_run_filters().
+ */
return 0;
- case SECCOMP_RET_KILL:
+ case SECCOMP_RET_KILL_THREAD:
+ case SECCOMP_RET_KILL_PROCESS:
default:
- audit_seccomp(this_syscall, SIGSYS, action);
+ seccomp_log(this_syscall, SIGSYS, action, true);
/* Dump core only if this is the last remaining thread. */
- if (get_nr_threads(current) == 1) {
+ if (action == SECCOMP_RET_KILL_PROCESS ||
+ get_nr_threads(current) == 1) {
siginfo_t info;
/* Show the original registers in the dump. */
seccomp_init_siginfo(&info, this_syscall, data);
do_coredump(&info);
}
- do_exit(SIGSYS);
+ if (action == SECCOMP_RET_KILL_PROCESS)
+ do_group_exit(SIGSYS);
+ else
+ do_exit(SIGSYS);
}
unreachable();
skip:
- audit_seccomp(this_syscall, 0, action);
+ seccomp_log(this_syscall, 0, action, match ? match->log : false);
return -1;
}
#else
}
#endif
+static long seccomp_get_action_avail(const char __user *uaction)
+{
+ u32 action;
+
+ if (copy_from_user(&action, uaction, sizeof(action)))
+ return -EFAULT;
+
+ switch (action) {
+ case SECCOMP_RET_KILL_PROCESS:
+ case SECCOMP_RET_KILL_THREAD:
+ case SECCOMP_RET_TRAP:
+ case SECCOMP_RET_ERRNO:
+ case SECCOMP_RET_TRACE:
+ case SECCOMP_RET_LOG:
+ case SECCOMP_RET_ALLOW:
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
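A hedged userspace sketch of the new operation: probe whether this kernel understands a given filter action (e.g. SECCOMP_RET_LOG) before installing a filter that uses it:

#include <linux/seccomp.h>
#include <sys/syscall.h>
#include <unistd.h>

static int example_action_supported(unsigned int action)
{
	return syscall(__NR_seccomp, SECCOMP_GET_ACTION_AVAIL, 0, &action) == 0;
}
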
/* Common entry point for both prctl and syscall. */
static long do_seccomp(unsigned int op, unsigned int flags,
const char __user *uargs)
return seccomp_set_mode_strict();
case SECCOMP_SET_MODE_FILTER:
return seccomp_set_mode_filter(flags, uargs);
+ case SECCOMP_GET_ACTION_AVAIL:
+ if (flags != 0)
+ return -EINVAL;
+
+ return seccomp_get_action_avail(uargs);
default:
return -EINVAL;
}
if (!data)
goto out;
- get_seccomp_filter(task);
+ __get_seccomp_filter(filter);
spin_unlock_irq(&task->sighand->siglock);
if (copy_to_user(data, fprog->filter, bpf_classic_proglen(fprog)))
ret = -EFAULT;
- put_seccomp_filter(task);
+ __put_seccomp_filter(filter);
return ret;
out:
return ret;
}
#endif
+
+#ifdef CONFIG_SYSCTL
+
+/* Human readable action names for friendly sysctl interaction */
+#define SECCOMP_RET_KILL_PROCESS_NAME "kill_process"
+#define SECCOMP_RET_KILL_THREAD_NAME "kill_thread"
+#define SECCOMP_RET_TRAP_NAME "trap"
+#define SECCOMP_RET_ERRNO_NAME "errno"
+#define SECCOMP_RET_TRACE_NAME "trace"
+#define SECCOMP_RET_LOG_NAME "log"
+#define SECCOMP_RET_ALLOW_NAME "allow"
+
+static const char seccomp_actions_avail[] =
+ SECCOMP_RET_KILL_PROCESS_NAME " "
+ SECCOMP_RET_KILL_THREAD_NAME " "
+ SECCOMP_RET_TRAP_NAME " "
+ SECCOMP_RET_ERRNO_NAME " "
+ SECCOMP_RET_TRACE_NAME " "
+ SECCOMP_RET_LOG_NAME " "
+ SECCOMP_RET_ALLOW_NAME;
+
+struct seccomp_log_name {
+ u32 log;
+ const char *name;
+};
+
+static const struct seccomp_log_name seccomp_log_names[] = {
+ { SECCOMP_LOG_KILL_PROCESS, SECCOMP_RET_KILL_PROCESS_NAME },
+ { SECCOMP_LOG_KILL_THREAD, SECCOMP_RET_KILL_THREAD_NAME },
+ { SECCOMP_LOG_TRAP, SECCOMP_RET_TRAP_NAME },
+ { SECCOMP_LOG_ERRNO, SECCOMP_RET_ERRNO_NAME },
+ { SECCOMP_LOG_TRACE, SECCOMP_RET_TRACE_NAME },
+ { SECCOMP_LOG_LOG, SECCOMP_RET_LOG_NAME },
+ { SECCOMP_LOG_ALLOW, SECCOMP_RET_ALLOW_NAME },
+ { }
+};
+
+static bool seccomp_names_from_actions_logged(char *names, size_t size,
+ u32 actions_logged)
+{
+ const struct seccomp_log_name *cur;
+ bool append_space = false;
+
+ for (cur = seccomp_log_names; cur->name && size; cur++) {
+ ssize_t ret;
+
+ if (!(actions_logged & cur->log))
+ continue;
+
+ if (append_space) {
+ ret = strscpy(names, " ", size);
+ if (ret < 0)
+ return false;
+
+ names += ret;
+ size -= ret;
+ } else
+ append_space = true;
+
+ ret = strscpy(names, cur->name, size);
+ if (ret < 0)
+ return false;
+
+ names += ret;
+ size -= ret;
+ }
+
+ return true;
+}
+
+static bool seccomp_action_logged_from_name(u32 *action_logged,
+ const char *name)
+{
+ const struct seccomp_log_name *cur;
+
+ for (cur = seccomp_log_names; cur->name; cur++) {
+ if (!strcmp(cur->name, name)) {
+ *action_logged = cur->log;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+static bool seccomp_actions_logged_from_names(u32 *actions_logged, char *names)
+{
+ char *name;
+
+ *actions_logged = 0;
+ while ((name = strsep(&names, " ")) && *name) {
+ u32 action_logged = 0;
+
+ if (!seccomp_action_logged_from_name(&action_logged, name))
+ return false;
+
+ *actions_logged |= action_logged;
+ }
+
+ return true;
+}
+
+static int seccomp_actions_logged_handler(struct ctl_table *ro_table, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos)
+{
+ char names[sizeof(seccomp_actions_avail)];
+ struct ctl_table table;
+ int ret;
+
+ if (write && !capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ memset(names, 0, sizeof(names));
+
+ if (!write) {
+ if (!seccomp_names_from_actions_logged(names, sizeof(names),
+ seccomp_actions_logged))
+ return -EINVAL;
+ }
+
+ table = *ro_table;
+ table.data = names;
+ table.maxlen = sizeof(names);
+ ret = proc_dostring(&table, write, buffer, lenp, ppos);
+ if (ret)
+ return ret;
+
+ if (write) {
+ u32 actions_logged;
+
+ if (!seccomp_actions_logged_from_names(&actions_logged,
+ table.data))
+ return -EINVAL;
+
+ if (actions_logged & SECCOMP_LOG_ALLOW)
+ return -EINVAL;
+
+ seccomp_actions_logged = actions_logged;
+ }
+
+ return 0;
+}
+
+static struct ctl_path seccomp_sysctl_path[] = {
+ { .procname = "kernel", },
+ { .procname = "seccomp", },
+ { }
+};
+
+static struct ctl_table seccomp_sysctl_table[] = {
+ {
+ .procname = "actions_avail",
+ .data = (void *) &seccomp_actions_avail,
+ .maxlen = sizeof(seccomp_actions_avail),
+ .mode = 0444,
+ .proc_handler = proc_dostring,
+ },
+ {
+ .procname = "actions_logged",
+ .mode = 0644,
+ .proc_handler = seccomp_actions_logged_handler,
+ },
+ { }
+};
+
+static int __init seccomp_sysctl_init(void)
+{
+ struct ctl_table_header *hdr;
+
+ hdr = register_sysctl_paths(seccomp_sysctl_path, seccomp_sysctl_table);
+ if (!hdr)
+ pr_warn("seccomp: sysctl registration failed\n");
+ else
+ kmemleak_not_leak(hdr);
+
+ return 0;
+}
+
+device_initcall(seccomp_sysctl_init)
+
+#endif /* CONFIG_SYSCTL */
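
Once registered, /proc/sys/kernel/seccomp/actions_avail reads back the
space-separated list built above ("kill_process kill_thread trap errno trace
log allow"), and actions_logged accepts any subset of those names that does
not include "allow" (the handler above rejects it with -EINVAL). A minimal
sketch of a privileged writer:

	#include <stdio.h>

	int main(void)
	{
		/* Needs CAP_SYS_ADMIN; "allow" must not appear in the list. */
		FILE *f = fopen("/proc/sys/kernel/seccomp/actions_logged", "w");

		if (!f)
			return 1;
		fputs("kill_process kill_thread trap errno trace log", f);
		return fclose(f) ? 1 : 0;
	}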
* by the client, but only by calling this function.
* This function can only be called on a registered smp_hotplug_thread.
*/
-int smpboot_update_cpumask_percpu_thread(struct smp_hotplug_thread *plug_thread,
- const struct cpumask *new)
+void smpboot_update_cpumask_percpu_thread(struct smp_hotplug_thread *plug_thread,
+ const struct cpumask *new)
{
struct cpumask *old = plug_thread->cpumask;
- cpumask_var_t tmp;
+ static struct cpumask tmp;
unsigned int cpu;
- if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
- return -ENOMEM;
-
- get_online_cpus();
+ lockdep_assert_cpus_held();
mutex_lock(&smpboot_threads_lock);
/* Park threads that were exclusively enabled on the old mask. */
- cpumask_andnot(tmp, old, new);
- for_each_cpu_and(cpu, tmp, cpu_online_mask)
+ cpumask_andnot(&tmp, old, new);
+ for_each_cpu_and(cpu, &tmp, cpu_online_mask)
smpboot_park_thread(plug_thread, cpu);
/* Unpark threads that are exclusively enabled on the new mask. */
- cpumask_andnot(tmp, new, old);
- for_each_cpu_and(cpu, tmp, cpu_online_mask)
+ cpumask_andnot(&tmp, new, old);
+ for_each_cpu_and(cpu, &tmp, cpu_online_mask)
smpboot_unpark_thread(plug_thread, cpu);
cpumask_copy(old, new);
mutex_unlock(&smpboot_threads_lock);
- put_online_cpus();
-
- free_cpumask_var(tmp);
-
- return 0;
}
-EXPORT_SYMBOL_GPL(smpboot_update_cpumask_percpu_thread);
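
With the allocation gone the function can no longer fail, and the
lockdep_assert_cpus_held() means callers must now take the hotplug lock
themselves, roughly like this (a sketch; my_threads and new_mask are
placeholders):

	cpus_read_lock();
	smpboot_update_cpumask_percpu_thread(&my_threads, new_mask);
	cpus_read_unlock();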
static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
.data = &sysctl_sched_time_avg,
.maxlen = sizeof(unsigned int),
.mode = 0644,
- .proc_handler = proc_dointvec,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &one,
},
#ifdef CONFIG_SCHEDSTATS
{
#if defined(CONFIG_LOCKUP_DETECTOR)
{
.procname = "watchdog",
- .data = &watchdog_user_enabled,
- .maxlen = sizeof (int),
- .mode = 0644,
+ .data = &watchdog_user_enabled,
+ .maxlen = sizeof(int),
+ .mode = 0644,
.proc_handler = proc_watchdog,
.extra1 = &zero,
.extra2 = &one,
},
{
.procname = "nmi_watchdog",
- .data = &nmi_watchdog_enabled,
- .maxlen = sizeof (int),
- .mode = 0644,
+ .data = &nmi_watchdog_user_enabled,
+ .maxlen = sizeof(int),
+ .mode = NMI_WATCHDOG_SYSCTL_PERM,
.proc_handler = proc_nmi_watchdog,
.extra1 = &zero,
-#if defined(CONFIG_HAVE_NMI_WATCHDOG) || defined(CONFIG_HARDLOCKUP_DETECTOR)
.extra2 = &one,
-#else
- .extra2 = &zero,
-#endif
},
{
.procname = "watchdog_cpumask",
#ifdef CONFIG_SOFTLOCKUP_DETECTOR
{
.procname = "soft_watchdog",
- .data = &soft_watchdog_enabled,
- .maxlen = sizeof (int),
- .mode = 0644,
+ .data = &soft_watchdog_user_enabled,
+ .maxlen = sizeof(int),
+ .mode = 0644,
.proc_handler = proc_soft_watchdog,
.extra1 = &zero,
.extra2 = &one,
int write, void *data)
{
if (write) {
- if (*lvalp > UINT_MAX)
- return -EINVAL;
if (*lvalp > UINT_MAX)
return -EINVAL;
*valp = *lvalp;
}
EXPORT_SYMBOL_GPL(blk_trace_startstop);
+/*
+ * When reading or writing the blktrace sysfs files, the references to the
+ * opened sysfs or device files should prevent the underlying block device
+ * from being removed. So no further delete protection is really needed.
+ */
+
/**
* blk_trace_ioctl: - handle the ioctls associated with tracing
* @bdev: the block device
if (!q)
return -ENXIO;
- mutex_lock(&bdev->bd_mutex);
+ mutex_lock(&q->blk_trace_mutex);
switch (cmd) {
case BLKTRACESETUP:
break;
}
- mutex_unlock(&bdev->bd_mutex);
+ mutex_unlock(&q->blk_trace_mutex);
return ret;
}
if (q == NULL)
goto out_bdput;
- mutex_lock(&bdev->bd_mutex);
+ mutex_lock(&q->blk_trace_mutex);
if (attr == &dev_attr_enable) {
ret = sprintf(buf, "%u\n", !!q->blk_trace);
ret = sprintf(buf, "%llu\n", q->blk_trace->end_lba);
out_unlock_bdev:
- mutex_unlock(&bdev->bd_mutex);
+ mutex_unlock(&q->blk_trace_mutex);
out_bdput:
bdput(bdev);
out:
if (q == NULL)
goto out_bdput;
- mutex_lock(&bdev->bd_mutex);
+ mutex_lock(&q->blk_trace_mutex);
if (attr == &dev_attr_enable) {
if (value)
}
out_unlock_bdev:
- mutex_unlock(&bdev->bd_mutex);
+ mutex_unlock(&q->blk_trace_mutex);
out_bdput:
bdput(bdev);
out:
static char ftrace_graph_notrace_buf[FTRACE_FILTER_SIZE] __initdata;
static int ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer);
-static unsigned long save_global_trampoline;
-static unsigned long save_global_flags;
-
static int __init set_graph_function(char *str)
{
strlcpy(ftrace_graph_buf, str, FTRACE_FILTER_SIZE);
unregister_pm_notifier(&ftrace_suspend_notifier);
unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
-#ifdef CONFIG_DYNAMIC_FTRACE
- /*
- * Function graph does not allocate the trampoline, but
- * other global_ops do. We need to reset the ALLOC_TRAMP flag
- * if one was used.
- */
- global_ops.trampoline = save_global_trampoline;
- if (save_global_flags & FTRACE_OPS_FL_ALLOC_TRAMP)
- global_ops.flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
-#endif
-
out:
mutex_unlock(&ftrace_lock);
}
/* If this file was open for write, then erase contents */
if ((file->f_mode & FMODE_WRITE) && (file->f_flags & O_TRUNC)) {
int cpu = tracing_get_cpu(inode);
+ struct trace_buffer *trace_buf = &tr->trace_buffer;
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+ if (tr->current_trace->print_max)
+ trace_buf = &tr->max_buffer;
+#endif
if (cpu == RING_BUFFER_ALL_CPUS)
- tracing_reset_online_cpus(&tr->trace_buffer);
+ tracing_reset_online_cpus(trace_buf);
else
- tracing_reset(&tr->trace_buffer, cpu);
+ tracing_reset(trace_buf, cpu);
}
if (file->f_mode & FMODE_READ) {
if (t == tr->current_trace)
goto out;
+	/* Some tracers cannot be enabled from the kernel command line */
+ if (system_state < SYSTEM_RUNNING && t->noboot) {
+ pr_warn("Tracer '%s' is not allowed on command line, ignored\n",
+ t->name);
+ goto out;
+ }
+
/* Some tracers are only allowed for the top level buffer */
if (!trace_ok_for_array(t, tr)) {
ret = -EINVAL;
*
* iter->pos will be 0 if we haven't read anything.
*/
- if (!tracing_is_on() && iter->pos)
+ if (!tracer_tracing_is_on(iter->tr) && iter->pos)
break;
mutex_unlock(&iter->mutex);
#ifdef CONFIG_TRACER_MAX_TRACE
bool use_max_tr;
#endif
+	/* True if the tracer cannot be enabled on the kernel command line */
+ bool noboot;
};
.close = mmio_close,
.read = mmio_read,
.print_line = mmio_print_line,
+ .noboot = true,
};
__init static int init_mmio_trace(void)
return !trace_seq_has_overflowed(s);
}
-static const char state_to_char[] = TASK_STATE_TO_CHAR_STR;
-
-static int task_state_char(unsigned long state)
-{
- int bit = state ? __ffs(state) + 1 : 0;
-
- return bit < sizeof(state_to_char) - 1 ? state_to_char[bit] : '?';
-}
-
/**
* ftrace_find_event - find a registered event
* @type: the type of event to look for
trace_assign_type(field, iter->ent);
- T = task_state_char(field->next_state);
- S = task_state_char(field->prev_state);
+ T = __task_state_to_char(field->next_state);
+ S = __task_state_to_char(field->prev_state);
trace_find_cmdline(field->next_pid, comm);
trace_seq_printf(&iter->seq,
" %5d:%3d:%c %s [%03d] %5d:%3d:%c %s\n",
trace_assign_type(field, iter->ent);
if (!S)
- S = task_state_char(field->prev_state);
- T = task_state_char(field->next_state);
+ S = __task_state_to_char(field->prev_state);
+ T = __task_state_to_char(field->next_state);
trace_seq_printf(&iter->seq, "%d %d %c %d %d %d %c\n",
field->prev_pid,
field->prev_prio,
trace_assign_type(field, iter->ent);
if (!S)
- S = task_state_char(field->prev_state);
- T = task_state_char(field->next_state);
+ S = __task_state_to_char(field->prev_state);
+ T = __task_state_to_char(field->next_state);
SEQ_PUT_HEX_FIELD(s, field->prev_pid);
SEQ_PUT_HEX_FIELD(s, field->prev_prio);
entry = ring_buffer_event_data(event);
entry->prev_pid = prev->pid;
entry->prev_prio = prev->prio;
- entry->prev_state = prev->state;
+ entry->prev_state = __get_task_state(prev);
entry->next_pid = next->pid;
entry->next_prio = next->prio;
- entry->next_state = next->state;
+ entry->next_state = __get_task_state(next);
entry->next_cpu = task_cpu(next);
if (!call_filter_check_discard(call, entry, buffer, event))
entry = ring_buffer_event_data(event);
entry->prev_pid = curr->pid;
entry->prev_prio = curr->prio;
- entry->prev_state = curr->state;
+ entry->prev_state = __get_task_state(curr);
entry->next_pid = wakee->pid;
entry->next_prio = wakee->prio;
- entry->next_state = wakee->state;
+ entry->next_state = __get_task_state(wakee);
entry->next_cpu = task_cpu(wakee);
if (!call_filter_check_discard(call, entry, buffer, event))
if (in_nmi())
return;
- /*
- * There's a slight chance that we are tracing inside the
- * RCU infrastructure, and rcu_irq_enter() will not work
- * as expected.
- */
- if (unlikely(rcu_irq_enter_disabled()))
- return;
-
local_irq_save(flags);
arch_spin_lock(&stack_trace_max_lock);
- /*
- * RCU may not be watching, make it see us.
- * The stack trace code uses rcu_sched.
- */
- rcu_irq_enter();
-
/* In case another CPU set the tracer_frame on us */
if (unlikely(!frame_size))
this_size -= tracer_frame;
}
out:
- rcu_irq_exit();
arch_spin_unlock(&stack_trace_max_lock);
local_irq_restore(flags);
}
#include <linux/kvm_para.h>
#include <linux/kthread.h>
-/* Watchdog configuration */
-static DEFINE_MUTEX(watchdog_proc_mutex);
-
-int __read_mostly nmi_watchdog_enabled;
+static DEFINE_MUTEX(watchdog_mutex);
#if defined(CONFIG_HARDLOCKUP_DETECTOR) || defined(CONFIG_HAVE_NMI_WATCHDOG)
-unsigned long __read_mostly watchdog_enabled = SOFT_WATCHDOG_ENABLED |
- NMI_WATCHDOG_ENABLED;
+# define WATCHDOG_DEFAULT (SOFT_WATCHDOG_ENABLED | NMI_WATCHDOG_ENABLED)
+# define NMI_WATCHDOG_DEFAULT 1
#else
-unsigned long __read_mostly watchdog_enabled = SOFT_WATCHDOG_ENABLED;
+# define WATCHDOG_DEFAULT (SOFT_WATCHDOG_ENABLED)
+# define NMI_WATCHDOG_DEFAULT 0
#endif
+unsigned long __read_mostly watchdog_enabled;
+int __read_mostly watchdog_user_enabled = 1;
+int __read_mostly nmi_watchdog_user_enabled = NMI_WATCHDOG_DEFAULT;
+int __read_mostly soft_watchdog_user_enabled = 1;
+int __read_mostly watchdog_thresh = 10;
+int __read_mostly nmi_watchdog_available;
+
+struct cpumask watchdog_allowed_mask __read_mostly;
+
+struct cpumask watchdog_cpumask __read_mostly;
+unsigned long *watchdog_cpumask_bits = cpumask_bits(&watchdog_cpumask);
+
#ifdef CONFIG_HARDLOCKUP_DETECTOR
-/* boot commands */
/*
* Should we panic when a soft-lockup or hard-lockup occurs:
*/
* kernel command line parameters are parsed, because otherwise it is not
* possible to override this in hardlockup_panic_setup().
*/
-void hardlockup_detector_disable(void)
+void __init hardlockup_detector_disable(void)
{
- watchdog_enabled &= ~NMI_WATCHDOG_ENABLED;
+ nmi_watchdog_user_enabled = 0;
}
static int __init hardlockup_panic_setup(char *str)
else if (!strncmp(str, "nopanic", 7))
hardlockup_panic = 0;
else if (!strncmp(str, "0", 1))
- watchdog_enabled &= ~NMI_WATCHDOG_ENABLED;
+ nmi_watchdog_user_enabled = 0;
else if (!strncmp(str, "1", 1))
- watchdog_enabled |= NMI_WATCHDOG_ENABLED;
+ nmi_watchdog_user_enabled = 1;
return 1;
}
__setup("nmi_watchdog=", hardlockup_panic_setup);
-#endif
-
-#ifdef CONFIG_SOFTLOCKUP_DETECTOR
-int __read_mostly soft_watchdog_enabled;
-#endif
-
-int __read_mostly watchdog_user_enabled;
-int __read_mostly watchdog_thresh = 10;
-
-#ifdef CONFIG_SMP
-int __read_mostly sysctl_softlockup_all_cpu_backtrace;
+# ifdef CONFIG_SMP
int __read_mostly sysctl_hardlockup_all_cpu_backtrace;
-#endif
-struct cpumask watchdog_cpumask __read_mostly;
-unsigned long *watchdog_cpumask_bits = cpumask_bits(&watchdog_cpumask);
-/*
- * The 'watchdog_running' variable is set to 1 when the watchdog threads
- * are registered/started and is set to 0 when the watchdog threads are
- * unregistered/stopped, so it is an indicator whether the threads exist.
- */
-static int __read_mostly watchdog_running;
-/*
- * If a subsystem has a need to deactivate the watchdog temporarily, it
- * can use the suspend/resume interface to achieve this. The content of
- * the 'watchdog_suspended' variable reflects this state. Existing threads
- * are parked/unparked by the lockup_detector_{suspend|resume} functions
- * (see comment blocks pertaining to those functions for further details).
- *
- * 'watchdog_suspended' also prevents threads from being registered/started
- * or unregistered/stopped via parameters in /proc/sys/kernel, so the state
- * of 'watchdog_running' cannot change while the watchdog is deactivated
- * temporarily (see related code in 'proc' handlers).
- */
-int __read_mostly watchdog_suspended;
+static int __init hardlockup_all_cpu_backtrace_setup(char *str)
+{
+ sysctl_hardlockup_all_cpu_backtrace = !!simple_strtol(str, NULL, 0);
+ return 1;
+}
+__setup("hardlockup_all_cpu_backtrace=", hardlockup_all_cpu_backtrace_setup);
+# endif /* CONFIG_SMP */
+#endif /* CONFIG_HARDLOCKUP_DETECTOR */
/*
* These functions can be overridden if an architecture implements its
*/
int __weak watchdog_nmi_enable(unsigned int cpu)
{
+ hardlockup_detector_perf_enable();
return 0;
}
+
void __weak watchdog_nmi_disable(unsigned int cpu)
{
+ hardlockup_detector_perf_disable();
}
-/*
- * watchdog_nmi_reconfigure can be implemented to be notified after any
- * watchdog configuration change. The arch hardlockup watchdog should
- * respond to the following variables:
- * - nmi_watchdog_enabled
+/* Return 0 if an NMI watchdog is available, an error code otherwise */
+int __weak __init watchdog_nmi_probe(void)
+{
+ return hardlockup_detector_perf_init();
+}
+
+/**
+ * watchdog_nmi_stop - Stop the watchdog for reconfiguration
+ *
+ * The reconfiguration steps are:
+ * watchdog_nmi_stop();
+ * update_variables();
+ * watchdog_nmi_start();
+ */
+void __weak watchdog_nmi_stop(void) { }
+
+/**
+ * watchdog_nmi_start - Start the watchdog after reconfiguration
+ *
+ * Counterpart to watchdog_nmi_stop().
+ *
+ * The following variables have been updated in update_variables() and
+ * contain the currently valid configuration:
+ * - watchdog_enabled
* - watchdog_thresh
* - watchdog_cpumask
- * - sysctl_hardlockup_all_cpu_backtrace
- * - hardlockup_panic
- * - watchdog_suspended
*/
-void __weak watchdog_nmi_reconfigure(void)
+void __weak watchdog_nmi_start(void) { }
+
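An architecture overriding these weak hooks would follow the same
stop/update/start protocol, for instance (a hypothetical sketch; the
arch_nmi_watchdog_* helpers are placeholders, not existing APIs):

	void watchdog_nmi_stop(void)
	{
		/* Quiesce the arch NMI source before the variables change. */
		arch_nmi_watchdog_quiesce();			/* hypothetical */
	}

	void watchdog_nmi_start(void)
	{
		/* Re-arm based on the freshly updated configuration. */
		if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
			arch_nmi_watchdog_arm(watchdog_thresh);	/* hypothetical */
	}
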
+/**
+ * lockup_detector_update_enable - Update the sysctl enable bit
+ *
+ * Caller needs to make sure that the NMI/perf watchdogs are off, so this
+ * can't race with watchdog_nmi_disable().
+ */
+static void lockup_detector_update_enable(void)
{
+ watchdog_enabled = 0;
+ if (!watchdog_user_enabled)
+ return;
+ if (nmi_watchdog_available && nmi_watchdog_user_enabled)
+ watchdog_enabled |= NMI_WATCHDOG_ENABLED;
+ if (soft_watchdog_user_enabled)
+ watchdog_enabled |= SOFT_WATCHDOG_ENABLED;
}
-
#ifdef CONFIG_SOFTLOCKUP_DETECTOR
-/* Helper for online, unparked cpus. */
-#define for_each_watchdog_cpu(cpu) \
- for_each_cpu_and((cpu), cpu_online_mask, &watchdog_cpumask)
-
-atomic_t watchdog_park_in_progress = ATOMIC_INIT(0);
+/* Global variables, exported for sysctl */
+unsigned int __read_mostly softlockup_panic =
+ CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE;
+static bool softlockup_threads_initialized __read_mostly;
static u64 __read_mostly sample_period;
static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
static unsigned long soft_lockup_nmi_warn;
-unsigned int __read_mostly softlockup_panic =
- CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE;
-
static int __init softlockup_panic_setup(char *str)
{
softlockup_panic = simple_strtoul(str, NULL, 0);
-
return 1;
}
__setup("softlockup_panic=", softlockup_panic_setup);
static int __init nowatchdog_setup(char *str)
{
- watchdog_enabled = 0;
+ watchdog_user_enabled = 0;
return 1;
}
__setup("nowatchdog", nowatchdog_setup);
static int __init nosoftlockup_setup(char *str)
{
- watchdog_enabled &= ~SOFT_WATCHDOG_ENABLED;
+ soft_watchdog_user_enabled = 0;
return 1;
}
__setup("nosoftlockup", nosoftlockup_setup);
#ifdef CONFIG_SMP
+int __read_mostly sysctl_softlockup_all_cpu_backtrace;
+
static int __init softlockup_all_cpu_backtrace_setup(char *str)
{
- sysctl_softlockup_all_cpu_backtrace =
- !!simple_strtol(str, NULL, 0);
+ sysctl_softlockup_all_cpu_backtrace = !!simple_strtol(str, NULL, 0);
return 1;
}
__setup("softlockup_all_cpu_backtrace=", softlockup_all_cpu_backtrace_setup);
-#ifdef CONFIG_HARDLOCKUP_DETECTOR
-static int __init hardlockup_all_cpu_backtrace_setup(char *str)
-{
- sysctl_hardlockup_all_cpu_backtrace =
- !!simple_strtol(str, NULL, 0);
- return 1;
-}
-__setup("hardlockup_all_cpu_backtrace=", hardlockup_all_cpu_backtrace_setup);
-#endif
#endif
+static void __lockup_detector_cleanup(void);
+
/*
* Hard-lockup warnings should be triggered after just a few seconds. Soft-
* lockups can have false positives under extreme conditions. So we generally
int cpu;
/*
- * this is done lockless
- * do we care if a 0 races with a timestamp?
- * all it means is the softlock check starts one cycle later
+	 * watchdog_mutex cannot be taken here, as this might be called
+	 * from (soft)interrupt context, so the access to
+	 * watchdog_allowed_mask might race with a concurrent update.
+	 *
+	 * The watchdog time stamp can race against a concurrent update
+	 * as well; the only side effect might be a cycle of delay for
+	 * the softlockup check.
*/
- for_each_watchdog_cpu(cpu)
+ for_each_cpu(cpu, &watchdog_allowed_mask)
per_cpu(watchdog_touch_ts, cpu) = 0;
wq_watchdog_touch(-1);
}
__this_cpu_inc(hrtimer_interrupts);
}
-static int watchdog_enable_all_cpus(void);
-static void watchdog_disable_all_cpus(void);
-
/* watchdog kicker functions */
static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
{
int duration;
int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace;
- if (atomic_read(&watchdog_park_in_progress) != 0)
+ if (!watchdog_enabled)
return HRTIMER_NORESTART;
/* kick the hardlockup detector */
static void watchdog_enable(unsigned int cpu)
{
- struct hrtimer *hrtimer = raw_cpu_ptr(&watchdog_hrtimer);
+ struct hrtimer *hrtimer = this_cpu_ptr(&watchdog_hrtimer);
- /* kick off the timer for the hardlockup detector */
+ /*
+	 * Start the timer first to prevent the NMI watchdog from triggering
+ * before the timer has a chance to fire.
+ */
hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
hrtimer->function = watchdog_timer_fn;
-
- /* Enable the perf event */
- watchdog_nmi_enable(cpu);
-
- /* done here because hrtimer_start can only pin to smp_processor_id() */
hrtimer_start(hrtimer, ns_to_ktime(sample_period),
HRTIMER_MODE_REL_PINNED);
- /* initialize timestamp */
- watchdog_set_prio(SCHED_FIFO, MAX_RT_PRIO - 1);
+ /* Initialize timestamp */
__touch_watchdog();
+ /* Enable the perf event */
+ if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
+ watchdog_nmi_enable(cpu);
+
+ watchdog_set_prio(SCHED_FIFO, MAX_RT_PRIO - 1);
}
static void watchdog_disable(unsigned int cpu)
{
- struct hrtimer *hrtimer = raw_cpu_ptr(&watchdog_hrtimer);
+ struct hrtimer *hrtimer = this_cpu_ptr(&watchdog_hrtimer);
watchdog_set_prio(SCHED_NORMAL, 0);
- hrtimer_cancel(hrtimer);
- /* disable the perf event */
+ /*
+	 * Disable the perf event first. This prevents a large delay between
+	 * disabling the timer and disabling the perf event from causing the
+	 * perf NMI to detect a false positive.
+ */
watchdog_nmi_disable(cpu);
+ hrtimer_cancel(hrtimer);
}
static void watchdog_cleanup(unsigned int cpu, bool online)
__this_cpu_write(soft_lockup_hrtimer_cnt,
__this_cpu_read(hrtimer_interrupts));
__touch_watchdog();
-
- /*
- * watchdog_nmi_enable() clears the NMI_WATCHDOG_ENABLED bit in the
- * failure path. Check for failures that can occur asynchronously -
- * for example, when CPUs are on-lined - and shut down the hardware
- * perf event on each CPU accordingly.
- *
- * The only non-obvious place this bit can be cleared is through
- * watchdog_nmi_enable(), so a pr_info() is placed there. Placing a
- * pr_info here would be too noisy as it would result in a message
- * every few seconds if the hardlockup was disabled but the softlockup
- * enabled.
- */
- if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
- watchdog_nmi_disable(cpu);
}
static struct smp_hotplug_thread watchdog_threads = {
.unpark = watchdog_enable,
};
-/*
- * park all watchdog threads that are specified in 'watchdog_cpumask'
- *
- * This function returns an error if kthread_park() of a watchdog thread
- * fails. In this situation, the watchdog threads of some CPUs can already
- * be parked and the watchdog threads of other CPUs can still be runnable.
- * Callers are expected to handle this special condition as appropriate in
- * their context.
- *
- * This function may only be called in a context that is protected against
- * races with CPU hotplug - for example, via get_online_cpus().
- */
-static int watchdog_park_threads(void)
+static void softlockup_update_smpboot_threads(void)
{
- int cpu, ret = 0;
+ lockdep_assert_held(&watchdog_mutex);
- atomic_set(&watchdog_park_in_progress, 1);
+ if (!softlockup_threads_initialized)
+ return;
- for_each_watchdog_cpu(cpu) {
- ret = kthread_park(per_cpu(softlockup_watchdog, cpu));
- if (ret)
- break;
- }
-
- atomic_set(&watchdog_park_in_progress, 0);
-
- return ret;
+ smpboot_update_cpumask_percpu_thread(&watchdog_threads,
+ &watchdog_allowed_mask);
}
-/*
- * unpark all watchdog threads that are specified in 'watchdog_cpumask'
- *
- * This function may only be called in a context that is protected against
- * races with CPU hotplug - for example, via get_online_cpus().
- */
-static void watchdog_unpark_threads(void)
+/* Temporarily park all watchdog threads */
+static void softlockup_park_all_threads(void)
{
- int cpu;
-
- for_each_watchdog_cpu(cpu)
- kthread_unpark(per_cpu(softlockup_watchdog, cpu));
+ cpumask_clear(&watchdog_allowed_mask);
+ softlockup_update_smpboot_threads();
}
-static int update_watchdog_all_cpus(void)
+/* Unpark enabled threads */
+static void softlockup_unpark_threads(void)
{
- int ret;
-
- ret = watchdog_park_threads();
- if (ret)
- return ret;
-
- watchdog_unpark_threads();
-
- return 0;
+ cpumask_copy(&watchdog_allowed_mask, &watchdog_cpumask);
+ softlockup_update_smpboot_threads();
}
-static int watchdog_enable_all_cpus(void)
+static void lockup_detector_reconfigure(void)
{
- int err = 0;
-
- if (!watchdog_running) {
- err = smpboot_register_percpu_thread_cpumask(&watchdog_threads,
- &watchdog_cpumask);
- if (err)
- pr_err("Failed to create watchdog threads, disabled\n");
- else
- watchdog_running = 1;
- } else {
- /*
- * Enable/disable the lockup detectors or
- * change the sample period 'on the fly'.
- */
- err = update_watchdog_all_cpus();
-
- if (err) {
- watchdog_disable_all_cpus();
- pr_err("Failed to update lockup detectors, disabled\n");
- }
- }
-
- if (err)
- watchdog_enabled = 0;
-
- return err;
+ cpus_read_lock();
+ watchdog_nmi_stop();
+ softlockup_park_all_threads();
+ set_sample_period();
+ lockup_detector_update_enable();
+ if (watchdog_enabled && watchdog_thresh)
+ softlockup_unpark_threads();
+ watchdog_nmi_start();
+ cpus_read_unlock();
+ /*
+ * Must be called outside the cpus locked section to prevent
+ * recursive locking in the perf code.
+ */
+ __lockup_detector_cleanup();
}
-static void watchdog_disable_all_cpus(void)
+/*
+ * Create the watchdog thread infrastructure and configure the detector(s).
+ *
+ * The threads are not unparked as watchdog_allowed_mask is empty. When
+ * the threads are successfully initialized, take the proper locks and
+ * unpark the threads in the watchdog_cpumask if the watchdog is enabled.
+ */
+static __init void lockup_detector_setup(void)
{
- if (watchdog_running) {
- watchdog_running = 0;
- smpboot_unregister_percpu_thread(&watchdog_threads);
- }
-}
+ int ret;
-#ifdef CONFIG_SYSCTL
-static int watchdog_update_cpus(void)
-{
- return smpboot_update_cpumask_percpu_thread(
- &watchdog_threads, &watchdog_cpumask);
-}
-#endif
+ /*
+ * If sysctl is off and watchdog got disabled on the command line,
+ * nothing to do here.
+ */
+ lockup_detector_update_enable();
-#else /* SOFTLOCKUP */
-static int watchdog_park_threads(void)
-{
- return 0;
-}
+ if (!IS_ENABLED(CONFIG_SYSCTL) &&
+ !(watchdog_enabled && watchdog_thresh))
+ return;
-static void watchdog_unpark_threads(void)
-{
-}
+ ret = smpboot_register_percpu_thread_cpumask(&watchdog_threads,
+ &watchdog_allowed_mask);
+ if (ret) {
+ pr_err("Failed to initialize soft lockup detector threads\n");
+ return;
+ }
-static int watchdog_enable_all_cpus(void)
-{
- return 0;
+ mutex_lock(&watchdog_mutex);
+ softlockup_threads_initialized = true;
+ lockup_detector_reconfigure();
+ mutex_unlock(&watchdog_mutex);
}
-static void watchdog_disable_all_cpus(void)
+#else /* CONFIG_SOFTLOCKUP_DETECTOR */
+static inline int watchdog_park_threads(void) { return 0; }
+static inline void watchdog_unpark_threads(void) { }
+static inline int watchdog_enable_all_cpus(void) { return 0; }
+static inline void watchdog_disable_all_cpus(void) { }
+static void lockup_detector_reconfigure(void)
{
+ cpus_read_lock();
+ watchdog_nmi_stop();
+ lockup_detector_update_enable();
+ watchdog_nmi_start();
+ cpus_read_unlock();
}
-
-#ifdef CONFIG_SYSCTL
-static int watchdog_update_cpus(void)
+static inline void lockup_detector_setup(void)
{
- return 0;
+ lockup_detector_reconfigure();
}
-#endif
+#endif /* !CONFIG_SOFTLOCKUP_DETECTOR */
-static void set_sample_period(void)
+static void __lockup_detector_cleanup(void)
{
+ lockdep_assert_held(&watchdog_mutex);
+ hardlockup_detector_perf_cleanup();
}
-#endif /* SOFTLOCKUP */
-/*
- * Suspend the hard and soft lockup detector by parking the watchdog threads.
+/**
+ * lockup_detector_cleanup - Cleanup after cpu hotplug or sysctl changes
+ *
+ * Caller must not hold the cpu hotplug rwsem.
*/
-int lockup_detector_suspend(void)
+void lockup_detector_cleanup(void)
{
- int ret = 0;
-
- get_online_cpus();
- mutex_lock(&watchdog_proc_mutex);
- /*
- * Multiple suspend requests can be active in parallel (counted by
- * the 'watchdog_suspended' variable). If the watchdog threads are
- * running, the first caller takes care that they will be parked.
- * The state of 'watchdog_running' cannot change while a suspend
- * request is active (see related code in 'proc' handlers).
- */
- if (watchdog_running && !watchdog_suspended)
- ret = watchdog_park_threads();
-
- if (ret == 0)
- watchdog_suspended++;
- else {
- watchdog_disable_all_cpus();
- pr_err("Failed to suspend lockup detectors, disabled\n");
- watchdog_enabled = 0;
- }
-
- watchdog_nmi_reconfigure();
-
- mutex_unlock(&watchdog_proc_mutex);
-
- return ret;
+ mutex_lock(&watchdog_mutex);
+ __lockup_detector_cleanup();
+ mutex_unlock(&watchdog_mutex);
}
-/*
- * Resume the hard and soft lockup detector by unparking the watchdog threads.
+/**
+ * lockup_detector_soft_poweroff - Interface to stop lockup detector(s)
+ *
+ * Special interface for parisc. It prevents lockup detector warnings from
+ * the default pm_poweroff() function, which busy-loops forever.
*/
-void lockup_detector_resume(void)
+void lockup_detector_soft_poweroff(void)
{
- mutex_lock(&watchdog_proc_mutex);
-
- watchdog_suspended--;
- /*
- * The watchdog threads are unparked if they were previously running
- * and if there is no more active suspend request.
- */
- if (watchdog_running && !watchdog_suspended)
- watchdog_unpark_threads();
-
- watchdog_nmi_reconfigure();
-
- mutex_unlock(&watchdog_proc_mutex);
- put_online_cpus();
+ watchdog_enabled = 0;
}
#ifdef CONFIG_SYSCTL
-/*
- * Update the run state of the lockup detectors.
- */
-static int proc_watchdog_update(void)
+/* Propagate any changes to the watchdog threads */
+static void proc_watchdog_update(void)
{
- int err = 0;
-
- /*
- * Watchdog threads won't be started if they are already active.
- * The 'watchdog_running' variable in watchdog_*_all_cpus() takes
- * care of this. If those threads are already active, the sample
- * period will be updated and the lockup detectors will be enabled
- * or disabled 'on the fly'.
- */
- if (watchdog_enabled && watchdog_thresh)
- err = watchdog_enable_all_cpus();
- else
- watchdog_disable_all_cpus();
-
- watchdog_nmi_reconfigure();
-
- return err;
-
+ /* Remove impossible cpus to keep sysctl output clean. */
+ cpumask_and(&watchdog_cpumask, &watchdog_cpumask, cpu_possible_mask);
+ lockup_detector_reconfigure();
}
/*
* common function for watchdog, nmi_watchdog and soft_watchdog parameter
*
- * caller | table->data points to | 'which' contains the flag(s)
- * -------------------|-----------------------|-----------------------------
- * proc_watchdog | watchdog_user_enabled | NMI_WATCHDOG_ENABLED or'ed
- * | | with SOFT_WATCHDOG_ENABLED
- * -------------------|-----------------------|-----------------------------
- * proc_nmi_watchdog | nmi_watchdog_enabled | NMI_WATCHDOG_ENABLED
- * -------------------|-----------------------|-----------------------------
- * proc_soft_watchdog | soft_watchdog_enabled | SOFT_WATCHDOG_ENABLED
+ * caller | table->data points to | 'which'
+ * -------------------|----------------------------|--------------------------
+ * proc_watchdog | watchdog_user_enabled | NMI_WATCHDOG_ENABLED |
+ * | | SOFT_WATCHDOG_ENABLED
+ * -------------------|----------------------------|--------------------------
+ * proc_nmi_watchdog | nmi_watchdog_user_enabled | NMI_WATCHDOG_ENABLED
+ * -------------------|----------------------------|--------------------------
+ * proc_soft_watchdog | soft_watchdog_user_enabled | SOFT_WATCHDOG_ENABLED
*/
static int proc_watchdog_common(int which, struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos)
{
- int err, old, new;
- int *watchdog_param = (int *)table->data;
+ int err, old, *param = table->data;
- get_online_cpus();
- mutex_lock(&watchdog_proc_mutex);
+ mutex_lock(&watchdog_mutex);
- if (watchdog_suspended) {
- /* no parameter changes allowed while watchdog is suspended */
- err = -EAGAIN;
- goto out;
- }
-
- /*
- * If the parameter is being read return the state of the corresponding
- * bit(s) in 'watchdog_enabled', else update 'watchdog_enabled' and the
- * run state of the lockup detectors.
- */
if (!write) {
- *watchdog_param = (watchdog_enabled & which) != 0;
+ /*
+		 * On read, synchronize the userspace interface. This is a
+ * racy snapshot.
+ */
+ *param = (watchdog_enabled & which) != 0;
err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
} else {
+ old = READ_ONCE(*param);
err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
- if (err)
- goto out;
-
- /*
- * There is a race window between fetching the current value
- * from 'watchdog_enabled' and storing the new value. During
- * this race window, watchdog_nmi_enable() can sneak in and
- * clear the NMI_WATCHDOG_ENABLED bit in 'watchdog_enabled'.
- * The 'cmpxchg' detects this race and the loop retries.
- */
- do {
- old = watchdog_enabled;
- /*
- * If the parameter value is not zero set the
- * corresponding bit(s), else clear it(them).
- */
- if (*watchdog_param)
- new = old | which;
- else
- new = old & ~which;
- } while (cmpxchg(&watchdog_enabled, old, new) != old);
-
- /*
- * Update the run state of the lockup detectors. There is _no_
- * need to check the value returned by proc_watchdog_update()
- * and to restore the previous value of 'watchdog_enabled' as
- * both lockup detectors are disabled if proc_watchdog_update()
- * returns an error.
- */
- if (old == new)
- goto out;
-
- err = proc_watchdog_update();
+ if (!err && old != READ_ONCE(*param))
+ proc_watchdog_update();
}
-out:
- mutex_unlock(&watchdog_proc_mutex);
- put_online_cpus();
+ mutex_unlock(&watchdog_mutex);
return err;
}
int proc_nmi_watchdog(struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos)
{
+ if (!nmi_watchdog_available && write)
+ return -ENOTSUPP;
return proc_watchdog_common(NMI_WATCHDOG_ENABLED,
table, write, buffer, lenp, ppos);
}
int proc_watchdog_thresh(struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos)
{
- int err, old, new;
-
- get_online_cpus();
- mutex_lock(&watchdog_proc_mutex);
+ int err, old;
- if (watchdog_suspended) {
- /* no parameter changes allowed while watchdog is suspended */
- err = -EAGAIN;
- goto out;
- }
+ mutex_lock(&watchdog_mutex);
- old = ACCESS_ONCE(watchdog_thresh);
+ old = READ_ONCE(watchdog_thresh);
err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
- if (err || !write)
- goto out;
-
- /*
- * Update the sample period. Restore on failure.
- */
- new = ACCESS_ONCE(watchdog_thresh);
- if (old == new)
- goto out;
+ if (!err && write && old != READ_ONCE(watchdog_thresh))
+ proc_watchdog_update();
- set_sample_period();
- err = proc_watchdog_update();
- if (err) {
- watchdog_thresh = old;
- set_sample_period();
- }
-out:
- mutex_unlock(&watchdog_proc_mutex);
- put_online_cpus();
+ mutex_unlock(&watchdog_mutex);
return err;
}
{
int err;
- get_online_cpus();
- mutex_lock(&watchdog_proc_mutex);
-
- if (watchdog_suspended) {
- /* no parameter changes allowed while watchdog is suspended */
- err = -EAGAIN;
- goto out;
- }
+ mutex_lock(&watchdog_mutex);
err = proc_do_large_bitmap(table, write, buffer, lenp, ppos);
- if (!err && write) {
- /* Remove impossible cpus to keep sysctl output cleaner. */
- cpumask_and(&watchdog_cpumask, &watchdog_cpumask,
- cpu_possible_mask);
-
- if (watchdog_running) {
- /*
- * Failure would be due to being unable to allocate
- * a temporary cpumask, so we are likely not in a
- * position to do much else to make things better.
- */
- if (watchdog_update_cpus() != 0)
- pr_err("cpumask update failed\n");
- }
+ if (!err && write)
+ proc_watchdog_update();
- watchdog_nmi_reconfigure();
- }
-out:
- mutex_unlock(&watchdog_proc_mutex);
- put_online_cpus();
+ mutex_unlock(&watchdog_mutex);
return err;
}
-
#endif /* CONFIG_SYSCTL */
void __init lockup_detector_init(void)
{
- set_sample_period();
-
#ifdef CONFIG_NO_HZ_FULL
if (tick_nohz_full_enabled()) {
pr_info("Disabling watchdog on nohz_full cores by default\n");
cpumask_copy(&watchdog_cpumask, cpu_possible_mask);
#endif
- if (watchdog_enabled)
- watchdog_enable_all_cpus();
+ if (!watchdog_nmi_probe())
+ nmi_watchdog_available = true;
+ lockup_detector_setup();
}
static DEFINE_PER_CPU(bool, hard_watchdog_warn);
static DEFINE_PER_CPU(bool, watchdog_nmi_touch);
static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
+static struct cpumask dead_events_mask;
static unsigned long hardlockup_allcpu_dumped;
+static unsigned int watchdog_cpus;
void arch_touch_nmi_watchdog(void)
{
/* Callback function for perf event subsystem */
static void watchdog_overflow_callback(struct perf_event *event,
- struct perf_sample_data *data,
- struct pt_regs *regs)
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
{
/* Ensure the watchdog never gets throttled */
event->hw.interrupts = 0;
- if (atomic_read(&watchdog_park_in_progress) != 0)
- return;
-
if (__this_cpu_read(watchdog_nmi_touch) == true) {
__this_cpu_write(watchdog_nmi_touch, false);
return;
return;
}
-/*
- * People like the simple clean cpu node info on boot.
- * Reduce the watchdog noise by only printing messages
- * that are different from what cpu0 displayed.
- */
-static unsigned long firstcpu_err;
-static atomic_t watchdog_cpus;
-
-int watchdog_nmi_enable(unsigned int cpu)
+static int hardlockup_detector_event_create(void)
{
+ unsigned int cpu = smp_processor_id();
struct perf_event_attr *wd_attr;
- struct perf_event *event = per_cpu(watchdog_ev, cpu);
- int firstcpu = 0;
-
- /* nothing to do if the hard lockup detector is disabled */
- if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
- goto out;
-
- /* is it already setup and enabled? */
- if (event && event->state > PERF_EVENT_STATE_OFF)
- goto out;
-
- /* it is setup but not enabled */
- if (event != NULL)
- goto out_enable;
-
- if (atomic_inc_return(&watchdog_cpus) == 1)
- firstcpu = 1;
+ struct perf_event *evt;
wd_attr = &wd_hw_attr;
wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
/* Try to register using hardware perf events */
- event = perf_event_create_kernel_counter(wd_attr, cpu, NULL, watchdog_overflow_callback, NULL);
+ evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL,
+ watchdog_overflow_callback, NULL);
+ if (IS_ERR(evt)) {
+ pr_info("Perf event create on CPU %d failed with %ld\n", cpu,
+ PTR_ERR(evt));
+ return PTR_ERR(evt);
+ }
+ this_cpu_write(watchdog_ev, evt);
+ return 0;
+}
- /* save the first cpu's error for future comparision */
- if (firstcpu && IS_ERR(event))
- firstcpu_err = PTR_ERR(event);
+/**
+ * hardlockup_detector_perf_enable - Enable the local event
+ */
+void hardlockup_detector_perf_enable(void)
+{
+ if (hardlockup_detector_event_create())
+ return;
- if (!IS_ERR(event)) {
- /* only print for the first cpu initialized */
- if (firstcpu || firstcpu_err)
- pr_info("enabled on all CPUs, permanently consumes one hw-PMU counter.\n");
- goto out_save;
- }
+ if (!watchdog_cpus++)
+ pr_info("Enabled. Permanently consumes one hw-PMU counter.\n");
- /*
- * Disable the hard lockup detector if _any_ CPU fails to set up
- * set up the hardware perf event. The watchdog() function checks
- * the NMI_WATCHDOG_ENABLED bit periodically.
- *
- * The barriers are for syncing up watchdog_enabled across all the
- * cpus, as clear_bit() does not use barriers.
- */
- smp_mb__before_atomic();
- clear_bit(NMI_WATCHDOG_ENABLED_BIT, &watchdog_enabled);
- smp_mb__after_atomic();
-
- /* skip displaying the same error again */
- if (!firstcpu && (PTR_ERR(event) == firstcpu_err))
- return PTR_ERR(event);
-
- /* vary the KERN level based on the returned errno */
- if (PTR_ERR(event) == -EOPNOTSUPP)
- pr_info("disabled (cpu%i): not supported (no LAPIC?)\n", cpu);
- else if (PTR_ERR(event) == -ENOENT)
- pr_warn("disabled (cpu%i): hardware events not enabled\n",
- cpu);
- else
- pr_err("disabled (cpu%i): unable to create perf event: %ld\n",
- cpu, PTR_ERR(event));
-
- pr_info("Shutting down hard lockup detector on all cpus\n");
-
- return PTR_ERR(event);
-
- /* success path */
-out_save:
- per_cpu(watchdog_ev, cpu) = event;
-out_enable:
- perf_event_enable(per_cpu(watchdog_ev, cpu));
-out:
- return 0;
+ perf_event_enable(this_cpu_read(watchdog_ev));
}
-void watchdog_nmi_disable(unsigned int cpu)
+/**
+ * hardlockup_detector_perf_disable - Disable the local event
+ */
+void hardlockup_detector_perf_disable(void)
{
- struct perf_event *event = per_cpu(watchdog_ev, cpu);
+ struct perf_event *event = this_cpu_read(watchdog_ev);
if (event) {
perf_event_disable(event);
+ cpumask_set_cpu(smp_processor_id(), &dead_events_mask);
+ watchdog_cpus--;
+ }
+}
+
+/**
+ * hardlockup_detector_perf_cleanup - Cleanup disabled events and destroy them
+ *
+ * Called from lockup_detector_cleanup(). Serialized by the caller.
+ */
+void hardlockup_detector_perf_cleanup(void)
+{
+ int cpu;
+
+ for_each_cpu(cpu, &dead_events_mask) {
+ struct perf_event *event = per_cpu(watchdog_ev, cpu);
+
+ /*
+		 * Required because for_each_cpu() unconditionally reports
+		 * CPU0 as set on UP kernels. Sigh.
+ */
+ if (event)
+ perf_event_release_kernel(event);
per_cpu(watchdog_ev, cpu) = NULL;
+ }
+ cpumask_clear(&dead_events_mask);
+}
+
+/**
+ * hardlockup_detector_perf_stop - Globally stop watchdog events
+ *
+ * Special interface for x86 to handle the perf HT bug.
+ */
+void __init hardlockup_detector_perf_stop(void)
+{
+ int cpu;
+
+ lockdep_assert_cpus_held();
+
+ for_each_online_cpu(cpu) {
+ struct perf_event *event = per_cpu(watchdog_ev, cpu);
+
+ if (event)
+ perf_event_disable(event);
+ }
+}
- /* should be in cleanup, but blocks oprofile */
- perf_event_release_kernel(event);
+/**
+ * hardlockup_detector_perf_restart - Globally restart watchdog events
+ *
+ * Special interface for x86 to handle the perf HT bug.
+ */
+void __init hardlockup_detector_perf_restart(void)
+{
+ int cpu;
+
+ lockdep_assert_cpus_held();
+
+ if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
+ return;
+
+ for_each_online_cpu(cpu) {
+ struct perf_event *event = per_cpu(watchdog_ev, cpu);
+
+ if (event)
+ perf_event_enable(event);
+ }
+}
+
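The intended call pattern around such a PMU fixup is then (a sketch; per the
lockdep assertions above, the caller must already hold the hotplug lock):

	cpus_read_lock();
	hardlockup_detector_perf_stop();
	/* ... reprogram the affected perf/PMU state ... */
	hardlockup_detector_perf_restart();
	cpus_read_unlock();
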
+/**
+ * hardlockup_detector_perf_init - Probe whether NMI event is available at all
+ */
+int __init hardlockup_detector_perf_init(void)
+{
+ int ret = hardlockup_detector_event_create();
- /* watchdog_nmi_enable() expects this to be zero initially. */
- if (atomic_dec_and_test(&watchdog_cpus))
- firstcpu_err = 0;
+ if (ret) {
+ pr_info("Perf NMI watchdog permanently disabled\n");
+ } else {
+ perf_event_release_kernel(this_cpu_read(watchdog_ev));
+ this_cpu_write(watchdog_ev, NULL);
}
+ return ret;
}
range 0 8192
default 0 if KASAN
default 2048 if GCC_PLUGIN_LATENT_ENTROPY
- default 1024 if !64BIT
+ default 1280 if (!64BIT && PARISC)
+ default 1024 if (!64BIT && !PARISC)
default 2048 if 64BIT
help
Tell gcc to warn at build time for stack frames larger than this.
* idr_alloc() and idr_remove() (as long as the ID being removed is not
* the one being replaced!).
*
- * Returns: 0 on success. %-ENOENT indicates that @id was not found.
- * %-EINVAL indicates that @id or @ptr were not valid.
+ * Returns: the old value on success. %-ENOENT indicates that @id was not
+ * found. %-EINVAL indicates that @id or @ptr were not valid.
*/
void *idr_replace(struct idr *idr, void *ptr, int id)
{
static inline bool page_copy_sane(struct page *page, size_t offset, size_t n)
{
- size_t v = n + offset;
- if (likely(n <= v && v <= (PAGE_SIZE << compound_order(page))))
+ struct page *head = compound_head(page);
+ size_t v = n + offset + page_address(page) - page_address(head);
+
+ if (likely(n <= v && v <= (PAGE_SIZE << compound_order(head))))
return true;
WARN_ON(1);
return false;
}
#endif
+static void zap_modalias_env(struct kobj_uevent_env *env)
+{
+ static const char modalias_prefix[] = "MODALIAS=";
+ int i;
+
+	/* Compact envp in place; after a removal, re-check the entry now at i. */
+	for (i = 0; i < env->envp_idx;) {
+ if (strncmp(env->envp[i], modalias_prefix,
+ sizeof(modalias_prefix) - 1)) {
+ i++;
+ continue;
+ }
+
+ if (i != env->envp_idx - 1)
+			memmove(&env->envp[i], &env->envp[i + 1],
+				sizeof(env->envp[i]) * (env->envp_idx - i - 1));
+
+ env->envp_idx--;
+ }
+}
+
/**
* kobject_uevent_env - send an uevent with environmental data
*
}
}
- /*
- * Mark "add" and "remove" events in the object to ensure proper
- * events to userspace during automatic cleanup. If the object did
- * send an "add" event, "remove" will automatically generated by
- * the core, if not already done by the caller.
- */
- if (action == KOBJ_ADD)
+ switch (action) {
+ case KOBJ_ADD:
+ /*
+ * Mark "add" event so we can make sure we deliver "remove"
+ * event to userspace during automatic cleanup. If
+ * the object did send an "add" event, "remove" will
+ * automatically generated by the core, if not already done
+ * by the caller.
+ */
kobj->state_add_uevent_sent = 1;
- else if (action == KOBJ_REMOVE)
+ break;
+
+ case KOBJ_REMOVE:
kobj->state_remove_uevent_sent = 1;
+ break;
+
+ case KOBJ_UNBIND:
+ zap_modalias_env(env);
+ break;
+
+ default:
+ break;
+ }
mutex_lock(&uevent_sock_mutex);
/* we will send an event, so request a new sequence number */
const BYTE * const lowLimit = lowPrefix - dictSize;
const BYTE * const dictEnd = (const BYTE *)dictStart + dictSize;
- const unsigned int dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };
- const int dec64table[] = { 0, 0, 0, -1, 0, 1, 2, 3 };
+ static const unsigned int dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };
+ static const int dec64table[] = { 0, 0, 0, -1, 0, 1, 2, 3 };
const int safeDecode = (endOnInput == endOnInputSize);
const int checkOffset = ((safeDecode) && (dictSize < (int)(64 * KB)));
if (time_is_before_jiffies(rs->begin + rs->interval)) {
if (rs->missed) {
if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE)) {
- pr_warn("%s: %d callbacks suppressed\n", func, rs->missed);
+ printk_deferred(KERN_WARNING
+ "%s: %d callbacks suppressed\n",
+ func, rs->missed);
rs->missed = 0;
}
}
* rhashtable_walk_start - Start a hash table walk
* @iter: Hash table iterator
*
- * Start a hash table walk. Note that we take the RCU lock in all
- * cases including when we return an error. So you must always call
- * rhashtable_walk_stop to clean up.
+ * Start a hash table walk at the current iterator position. Note that we take
+ * the RCU lock in all cases including when we return an error. So you must
+ * always call rhashtable_walk_stop to clean up.
*
* Returns zero if successful.
*
* rhashtable_walk_stop - Finish a hash table walk
* @iter: Hash table iterator
*
- * Finish a hash table walk.
+ * Finish a hash table walk. Does not reset the iterator to the start of the
+ * hash table.
*/
void rhashtable_walk_stop(struct rhashtable_iter *iter)
__releases(RCU)
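
Taken together, the documented contract permits a walker along these lines (a
minimal sketch; ht and struct my_obj are assumed to be set up elsewhere):

	struct rhashtable_iter iter;
	struct my_obj *obj;

	rhashtable_walk_enter(&ht, &iter);
	rhashtable_walk_start(&iter);	/* takes the RCU read lock */

	while ((obj = rhashtable_walk_next(&iter)) != NULL) {
		if (IS_ERR(obj)) {
			if (PTR_ERR(obj) == -EAGAIN)
				continue;	/* resize; entries may repeat */
			break;
		}
		/* inspect obj under RCU */
	}

	rhashtable_walk_stop(&iter);	/* drops RCU, keeps the position */
	rhashtable_walk_exit(&iter);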
if (pgdat->kcompactd_max_order < order)
pgdat->kcompactd_max_order = order;
- /*
- * Pairs with implicit barrier in wait_event_freezable()
- * such that wakeups are not missed in the lockless
- * waitqueue_active() call.
- */
- smp_acquire__after_ctrl_dep();
-
if (pgdat->kcompactd_classzone_idx > classzone_idx)
pgdat->kcompactd_classzone_idx = classzone_idx;
- if (!waitqueue_active(&pgdat->kcompactd_wait))
+ /*
+ * Pairs with implicit barrier in wait_event_freezable()
+ * such that wakeups are not missed.
+ */
+ if (!wq_has_sleeper(&pgdat->kcompactd_wait))
return;
if (!kcompactd_node_suitable(pgdat))
trace_file_check_and_advance_wb_err(file, old);
spin_unlock(&file->f_lock);
}
+
+ /*
+	 * We're mostly using this function as a drop-in replacement for
+ * filemap_check_errors. Clear AS_EIO/AS_ENOSPC to emulate the effect
+ * that the legacy code would have had on these flags.
+ */
+ clear_bit(AS_EIO, &mapping->flags);
+ clear_bit(AS_ENOSPC, &mapping->flags);
return err;
}
EXPORT_SYMBOL(file_check_and_advance_wb_err);
* we're writing. Either one is a pretty crazy thing to do,
* so we don't support it 100%. If this invalidation
* fails, tough, the write still worked...
+ *
+ * Most of the time we do not need this since dio_complete() will do
+	 * the invalidation for us. However, there are some file systems that
+	 * do not end up with dio_complete() being called, so let's not break
+	 * them by removing it completely.
*/
- invalidate_inode_pages2_range(mapping,
- pos >> PAGE_SHIFT, end);
+ if (mapping->nrpages)
+ invalidate_inode_pages2_range(mapping,
+ pos >> PAGE_SHIFT, end);
if (written > 0) {
pos += written;
*/
static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
{
+ struct mm_struct *mm = rmap_item->mm;
struct rmap_item *tree_rmap_item;
struct page *tree_page = NULL;
struct stable_node *stable_node;
if (ksm_use_zero_pages && (checksum == zero_checksum)) {
struct vm_area_struct *vma;
- vma = find_mergeable_vma(rmap_item->mm, rmap_item->address);
+ down_read(&mm->mmap_sem);
+ vma = find_mergeable_vma(mm, rmap_item->address);
err = try_to_merge_one_page(vma, page,
ZERO_PAGE(rmap_item->address));
+ up_read(&mm->mmap_sem);
/*
* In case of failure, the page was not really empty, so we
* need to continue. Otherwise we're done.
{
int size = memcg_nr_cache_ids;
- nlru->memcg_lrus = kmalloc(size * sizeof(void *), GFP_KERNEL);
+ nlru->memcg_lrus = kvmalloc(size * sizeof(void *), GFP_KERNEL);
if (!nlru->memcg_lrus)
return -ENOMEM;
if (__memcg_init_list_lru_node(nlru->memcg_lrus, 0, size)) {
- kfree(nlru->memcg_lrus);
+ kvfree(nlru->memcg_lrus);
return -ENOMEM;
}
static void memcg_destroy_list_lru_node(struct list_lru_node *nlru)
{
__memcg_destroy_list_lru_node(nlru->memcg_lrus, 0, memcg_nr_cache_ids);
- kfree(nlru->memcg_lrus);
+ kvfree(nlru->memcg_lrus);
}
static int memcg_update_list_lru_node(struct list_lru_node *nlru,
BUG_ON(old_size > new_size);
old = nlru->memcg_lrus;
- new = kmalloc(new_size * sizeof(void *), GFP_KERNEL);
+ new = kvmalloc(new_size * sizeof(void *), GFP_KERNEL);
if (!new)
return -ENOMEM;
if (__memcg_init_list_lru_node(new, old_size, new_size)) {
- kfree(new);
+ kvfree(new);
return -ENOMEM;
}
nlru->memcg_lrus = new;
spin_unlock_irq(&nlru->lock);
- kfree(old);
+ kvfree(old);
return 0;
}
{
struct page *page;
struct zone *zone;
+ unsigned int order;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- for (; start < end; start += PAGE_SIZE <<
- compound_order(compound_head(page))) {
+
+ for (; start < end; start += PAGE_SIZE << order) {
int ret;
ret = get_user_pages_fast(start, 1, 0, &page);
if (ret != 1)
return ret;
+ /*
+ * When soft offlining hugepages, after migrating the page
+		 * we dissolve it, so on the next iteration of the loop
+		 * "page" will no longer be a compound page and order will
+		 * be 0.
+ */
+ order = compound_order(compound_head(page));
+
if (PageHWPoison(page)) {
put_page(page);
continue;
struct memcg_stock_pcp *stock;
unsigned long flags;
+ /*
+ * The only protection from memory hotplug vs. drain_stock races is
+	 * that we always operate on the local CPU stock here with IRQs disabled.
+ */
local_irq_save(flags);
stock = this_cpu_ptr(&memcg_stock);
/* If someone's already draining, avoid adding running more workers. */
if (!mutex_trylock(&percpu_charge_mutex))
return;
- /* Notify other cpus that system-wide "drain" is running */
- get_online_cpus();
+ /*
+	 * Notify other cpus that a system-wide "drain" is running.
+	 * We do not care about races with cpu hotplug because both cpu-down
+	 * and the workers on this path always operate on local per-cpu data.
+	 * CPU-up doesn't touch memcg_stock at all.
+ */
curcpu = get_cpu();
for_each_online_cpu(cpu) {
struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
struct mem_cgroup *memcg;
memcg = stock->cached;
- if (!memcg || !stock->nr_pages)
+ if (!memcg || !stock->nr_pages || !css_tryget(&memcg->css))
continue;
- if (!mem_cgroup_is_descendant(memcg, root_memcg))
+ if (!mem_cgroup_is_descendant(memcg, root_memcg)) {
+ css_put(&memcg->css);
continue;
+ }
if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
if (cpu == curcpu)
drain_local_stock(&stock->work);
else
schedule_work_on(cpu, &stock->work);
}
+ css_put(&memcg->css);
}
put_cpu();
- put_online_cpus();
mutex_unlock(&percpu_charge_mutex);
}
static void uncharge_page(struct page *page, struct uncharge_gather *ug)
{
VM_BUG_ON_PAGE(PageLRU(page), page);
- VM_BUG_ON_PAGE(!PageHWPoison(page) && page_count(page), page);
+ VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) &&
+		       !PageHWPoison(page), page);
if (!page->mem_cgroup)
return;
* vm_normal_page() so that we do not have to special case all
* call site of vm_normal_page().
*/
- if (likely(pfn < highest_memmap_pfn)) {
+ if (likely(pfn <= highest_memmap_pfn)) {
struct page *page = pfn_to_page(pfn);
if (is_device_public_page(page)) {
if (err && (err != -EEXIST))
break;
err = 0;
+ cond_resched();
}
vmemmap_populate_print_last();
out:
#ifdef CONFIG_MEMORY_HOTREMOVE
/* find the smallest valid pfn in the range [start_pfn, end_pfn) */
-static int find_smallest_section_pfn(int nid, struct zone *zone,
+static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
unsigned long start_pfn,
unsigned long end_pfn)
{
}
/* find the biggest valid pfn in the range [start_pfn, end_pfn). */
-static int find_biggest_section_pfn(int nid, struct zone *zone,
+static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
unsigned long start_pfn,
unsigned long end_pfn)
{
return ret;
scn_nr = __section_nr(ms);
- start_pfn = section_nr_to_pfn(scn_nr);
+ start_pfn = section_nr_to_pfn((unsigned long)scn_nr);
__remove_zone(zone, start_pfn);
sparse_remove_one_section(zone, ms, map_offset);
#include <linux/ratelimit.h>
#include <linux/kthread.h>
#include <linux/init.h>
+#include <linux/mmu_notifier.h>
#include <asm/tlb.h>
#include "internal.h"
goto unlock_oom;
}
+ /*
+ * If the mm has notifiers then we would need to invalidate them around
+ * unmap_page_range and that is risky because notifiers can sleep and
+	 * what they do is basically nondeterministic. So let's have a short
+	 * sleep to give the oom victim some more time.
+	 * TODO: we really want to get rid of this ugly hack and make sure that
+	 * notifiers cannot block for an unbounded amount of time and add
+ * mmu_notifier_invalidate_range_{start,end} around unmap_page_range
+ */
+ if (mm_has_notifiers(mm)) {
+ up_read(&mm->mmap_sem);
+ schedule_timeout_idle(HZ);
+ goto unlock_oom;
+ }
+
/*
* MMF_OOM_SKIP is set by exit_mmap when the OOM reaper can't
* work on the mm anymore. The check for MMF_OOM_SKIP must run
}
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-static void init_reserved_page(unsigned long pfn)
+static void __meminit init_reserved_page(unsigned long pfn)
{
pg_data_t *pgdat;
int nid, zid;
__init_single_page(page, pfn, zone, nid);
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+ cond_resched();
} else {
__init_single_pfn(pfn, zone, nid);
}
last_alloc + 1 : 0;
as_len = 0;
- start = chunk->start_offset;
+ start = chunk->start_offset / PCPU_MIN_ALLOC_SIZE;
/*
* If a bit is set in the allocation map, the bound_map identifies
block->contig_hint_start);
return;
}
+ /* reset to satisfy the second predicate above */
+ block_off = 0;
*bits = block->right_free;
*bit_off = (i + 1) * PCPU_BITMAP_BLOCK_BITS - block->right_free;
*bit_off = pcpu_block_off_to_off(i, block->first_free);
return;
}
+ /* reset to satisfy the second predicate above */
+ block_off = 0;
*bit_off = ALIGN(PCPU_BITMAP_BLOCK_BITS - block->right_free,
align);
#include <linux/uaccess.h>
#include <asm/sections.h>
-const int rodata_test_data = 0xC3;
+static const int rodata_test_data = 0xC3;
void rodata_test(void)
{
if (!memcg_nr_cache_ids)
return 0;
- arr = kzalloc(sizeof(struct memcg_cache_array) +
- memcg_nr_cache_ids * sizeof(void *),
- GFP_KERNEL);
+ arr = kvzalloc(sizeof(struct memcg_cache_array) +
+ memcg_nr_cache_ids * sizeof(void *),
+ GFP_KERNEL);
if (!arr)
return -ENOMEM;
static void destroy_memcg_params(struct kmem_cache *s)
{
if (is_root_cache(s))
- kfree(rcu_access_pointer(s->memcg_params.memcg_caches));
+ kvfree(rcu_access_pointer(s->memcg_params.memcg_caches));
+}
+
+static void free_memcg_params(struct rcu_head *rcu)
+{
+ struct memcg_cache_array *old;
+
+	old = container_of(rcu, struct memcg_cache_array, rcu);
+	/* kvfree() copes with both kmalloc'ed and vmalloc'ed arrays */
+	kvfree(old);
}
static int update_memcg_params(struct kmem_cache *s, int new_array_size)
{
struct memcg_cache_array *old, *new;
- new = kzalloc(sizeof(struct memcg_cache_array) +
- new_array_size * sizeof(void *), GFP_KERNEL);
+ new = kvzalloc(sizeof(struct memcg_cache_array) +
+ new_array_size * sizeof(void *), GFP_KERNEL);
if (!new)
return -ENOMEM;
rcu_assign_pointer(s->memcg_params.memcg_caches, new);
if (old)
- kfree_rcu(old, rcu);
+ call_rcu(&old->rcu, free_memcg_params);
return 0;
}
void *arg)
{
if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
- !PageUnevictable(page)) {
+ !PageSwapCache(page) && !PageUnevictable(page)) {
bool active = PageActive(page);
del_page_from_lru_list(page, lruvec,
void mark_page_lazyfree(struct page *page)
{
if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
- !PageUnevictable(page)) {
+ !PageSwapCache(page) && !PageUnevictable(page)) {
struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
get_page(page);
* clear SWAP_HAS_CACHE flag.
*/
goto fail;
+ /*
+ * Normally the page will be dirtied in unmap because its pte should be
+	 * dirty. A special case is an MADV_FREE page. The page's pte could
+	 * have the dirty bit cleared while the page's SwapBacked bit is still
+	 * set, because clearing the dirty bit and the SwapBacked bit is not
+	 * protected by a lock. For such a page, unmap will not set the dirty
+	 * bit, so page reclaim will not write the page out. This can cause
+	 * data corruption when the page is swapped in later. Always setting
+	 * the dirty bit for the page solves the problem.
+ */
+ set_page_dirty(page);
return 1;
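
The comment above spells out the MADV_FREE lifecycle that makes the
set_page_dirty() call necessary. For reference, a minimal user-space
sketch of how a caller exercises MADV_FREE (available since Linux 4.5);
the buffer size and fill pattern are arbitrary:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1 << 20;
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;
        memset(buf, 0xab, len);         /* dirty the pages */

        /* Lazily free: the kernel may reclaim these pages without
         * writing them to swap, unless they are written again. */
        if (madvise(buf, len, MADV_FREE))
            perror("madvise");

        /* A later store cancels the lazy free for that page; the fix
         * above makes reclaim treat such a page as dirty so it is
         * written out rather than dropped. */
        buf[0] = 1;
        printf("%d\n", buf[0]);
        return munmap(buf, len);
    }
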
WARN_ON(!list_empty(&zhdr->buddy));
set_bit(PAGE_STALE, &page->private);
+ clear_bit(NEEDS_COMPACTING, &page->private);
spin_lock(&pool->lock);
if (!list_empty(&page->lru))
list_del(&page->lru);
list_del(&zhdr->buddy);
if (WARN_ON(!test_bit(PAGE_STALE, &page->private)))
continue;
- clear_bit(NEEDS_COMPACTING, &page->private);
spin_unlock(&pool->stale_lock);
cancel_work_sync(&zhdr->work);
free_z3fold_page(page);
* stale pages list. cancel_work_sync() can sleep so we must make
* sure it won't be called in case we're in atomic context.
*/
- if (zhdr && (can_sleep || !work_pending(&zhdr->work) ||
- !unlikely(work_busy(&zhdr->work)))) {
+ if (zhdr && (can_sleep || !work_pending(&zhdr->work))) {
list_del(&zhdr->buddy);
- clear_bit(NEEDS_COMPACTING, &page->private);
spin_unlock(&pool->stale_lock);
if (can_sleep)
cancel_work_sync(&zhdr->work);
goto next;
}
next:
+ spin_lock(&pool->lock);
if (test_bit(PAGE_HEADLESS, &page->private)) {
if (ret == 0) {
+ spin_unlock(&pool->lock);
free_z3fold_page(page);
return 0;
}
} else if (kref_put(&zhdr->refcount, release_z3fold_page)) {
atomic64_dec(&pool->pages_nr);
+ spin_unlock(&pool->lock);
return 0;
}
- spin_lock(&pool->lock);
/*
* Add to the beginning of LRU.
if (unlikely(!skb))
return false;
+ if (unlikely(!(vlan_dev->flags & IFF_UP))) {
+ kfree_skb(skb);
+ *skbp = NULL;
+ return false;
+ }
+
skb->dev = vlan_dev;
if (unlikely(skb->pkt_type == PACKET_OTHERHOST)) {
/* Our lower layer thinks this is not local, let's make sure.
Provide extensive information about internal Bluetooth states
in debugfs.
-config BT_LEGACY_IOCTL
- bool "Enable legacy ioctl interfaces"
- depends on BT && BT_BREDR
- default y
- help
- Enable support for legacy ioctl interfaces. This is only needed
- for old and deprecated applications using direct ioctl calls for
- controller management. Since Linux 3.4 all configuration and
- setup is done via mgmt interface and this is no longer needed.
-
source "drivers/bluetooth/Kconfig"
return 0;
}
-#ifdef CONFIG_BT_LEGACY_IOCTL
static int hci_sock_blacklist_add(struct hci_dev *hdev, void __user *arg)
{
bdaddr_t bdaddr;
release_sock(sk);
return err;
}
-#endif
static int hci_sock_bind(struct socket *sock, struct sockaddr *addr,
int addr_len)
.getname = hci_sock_getname,
.sendmsg = hci_sock_sendmsg,
.recvmsg = hci_sock_recvmsg,
-#ifdef CONFIG_BT_LEGACY_IOCTL
.ioctl = hci_sock_ioctl,
-#else
- .ioctl = sock_no_ioctl,
-#endif
.poll = datagram_poll,
.listen = sock_no_listen,
.shutdown = sock_no_shutdown,
pg = lookup_pg_mapping(&osdmap->pg_upmap_items, pgid);
if (pg) {
- for (i = 0; i < raw->size; i++) {
- for (j = 0; j < pg->pg_upmap_items.len; j++) {
- int from = pg->pg_upmap_items.from_to[j][0];
- int to = pg->pg_upmap_items.from_to[j][1];
-
- if (from == raw->osds[i]) {
- if (!(to != CRUSH_ITEM_NONE &&
- to < osdmap->max_osd &&
- osdmap->osd_weight[to] == 0))
- raw->osds[i] = to;
+ /*
+ * Note: this approach does not allow a bidirectional swap; e.g.,
+ * [[1,2],[2,1]] applied to [0,1,2] no longer swaps to [0,2,1].
+ */
+ for (i = 0; i < pg->pg_upmap_items.len; i++) {
+ int from = pg->pg_upmap_items.from_to[i][0];
+ int to = pg->pg_upmap_items.from_to[i][1];
+ int pos = -1;
+ bool exists = false;
+
+ /* make sure replacement doesn't already appear */
+ for (j = 0; j < raw->size; j++) {
+ int osd = raw->osds[j];
+
+ if (osd == to) {
+ exists = true;
break;
}
+ /* ignore mapping if target is marked out */
+ if (osd == from && pos < 0 &&
+ !(to != CRUSH_ITEM_NONE &&
+ to < osdmap->max_osd &&
+ osdmap->osd_weight[to] == 0)) {
+ pos = j;
+ }
}
+ if (!exists && pos >= 0)
+ raw->osds[pos] = to;
}
}
}
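
The rewritten loop iterates over the from/to pairs rather than over the
OSD vector and skips a replacement whenever the target already appears
in the set, which is what forbids the bidirectional swap. A stand-alone
sketch of the same existence check (sample data hypothetical; the
osd_weight "marked out" test is elided):

    #include <stdbool.h>
    #include <stdio.h>

    static void apply_upmap(int *osds, int size,
                            const int (*pairs)[2], int len)
    {
        for (int i = 0; i < len; i++) {
            int from = pairs[i][0], to = pairs[i][1];
            int pos = -1;
            bool exists = false;

            for (int j = 0; j < size; j++) {
                if (osds[j] == to) {        /* target already mapped */
                    exists = true;
                    break;
                }
                if (osds[j] == from && pos < 0)
                    pos = j;                /* remember first match */
            }
            if (!exists && pos >= 0)
                osds[pos] = to;
        }
    }

    int main(void)
    {
        int a[] = { 0, 1, 2 };
        int swap[][2] = { { 1, 2 }, { 2, 1 } };
        int single[][2] = { { 1, 3 } };

        apply_upmap(a, 3, swap, 2);
        printf("%d %d %d\n", a[0], a[1], a[2]);  /* 0 1 2: swap rejected */
        apply_upmap(a, 3, single, 1);
        printf("%d %d %d\n", a[0], a[1], a[2]);  /* 0 3 2: plain remap */
        return 0;
    }
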
ucmsg = cmsg_compat_nxthdr(kmsg, ucmsg, ucmlen);
}
+ /*
+ * check that the length of the messages copied in matches what
+ * we computed in the first loop
+ */
+ if ((char *)kcmsg - (char *)kcmsg_base != kcmlen)
+ goto Einval;
+
/* Ok, looks like we made it. Hook it up and return success. */
kmsg->msg_control = kcmsg_base;
kmsg->msg_controllen = kcmlen;
goto again;
}
out_unlock:
- if (pt_prev)
- pt_prev->func(skb2, skb->dev, pt_prev, skb->dev);
+ if (pt_prev) {
+ if (!skb_orphan_frags_rx(skb2, GFP_ATOMIC))
+ pt_prev->func(skb2, skb->dev, pt_prev, skb->dev);
+ else
+ kfree_skb(skb2);
+ }
rcu_read_unlock();
}
EXPORT_SYMBOL_GPL(dev_queue_xmit_nit);
__skb_pull(skb, off);
else if (off < 0)
__skb_push(skb, -off);
+ skb->mac_header += off;
switch (act) {
case XDP_REDIRECT:
= link_ksettings->base.eth_tp_mdix;
legacy_settings->eth_tp_mdix_ctrl
= link_ksettings->base.eth_tp_mdix_ctrl;
+ legacy_settings->transceiver
+ = link_ksettings->base.transceiver;
return retval;
}
bool sk_filter_charge(struct sock *sk, struct sk_filter *fp)
{
- bool ret = __sk_filter_charge(sk, fp);
- if (ret)
- refcount_inc(&fp->refcnt);
- return ret;
+ if (!refcount_inc_not_zero(&fp->refcnt))
+ return false;
+
+ if (!__sk_filter_charge(sk, fp)) {
+ sk_filter_release(fp);
+ return false;
+ }
+ return true;
}
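
The ordering matters: taking the reference first via
refcount_inc_not_zero() guarantees that a filter whose count has
already dropped to zero is never resurrected while clone and release
race. A user-space sketch of the same acquire-only-if-live idiom in
C11 atomics (struct and helper names hypothetical):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdlib.h>

    struct obj {
        atomic_int refcnt;
        /* payload */
    };

    /* Mirror of refcount_inc_not_zero(): succeed only while the
     * object is still live; once the count hits zero the object is
     * being freed and must not be revived. */
    static bool obj_get_not_zero(struct obj *o)
    {
        int c = atomic_load(&o->refcnt);

        while (c != 0)
            if (atomic_compare_exchange_weak(&o->refcnt, &c, c + 1))
                return true;
        return false;
    }

    static void obj_put(struct obj *o)
    {
        if (atomic_fetch_sub(&o->refcnt, 1) == 1)
            free(o);        /* last reference dropped */
    }
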
static struct bpf_prog *bpf_migrate_filter(struct bpf_prog *fp)
u32 flags;
struct bpf_map *map;
struct bpf_map *map_to_flush;
- const struct bpf_prog *map_owner;
+ unsigned long map_owner;
};
static DEFINE_PER_CPU(struct redirect_info, redirect_info);
}
EXPORT_SYMBOL_GPL(xdp_do_flush_map);
+static inline bool xdp_map_invalid(const struct bpf_prog *xdp_prog,
+ unsigned long aux)
+{
+ return (unsigned long)xdp_prog->aux != aux;
+}
+
static int xdp_do_redirect_map(struct net_device *dev, struct xdp_buff *xdp,
struct bpf_prog *xdp_prog)
{
struct redirect_info *ri = this_cpu_ptr(&redirect_info);
- const struct bpf_prog *map_owner = ri->map_owner;
+ unsigned long map_owner = ri->map_owner;
struct bpf_map *map = ri->map;
struct net_device *fwd = NULL;
u32 index = ri->ifindex;
ri->ifindex = 0;
ri->map = NULL;
- ri->map_owner = NULL;
+ ri->map_owner = 0;
- if (unlikely(map_owner != xdp_prog)) {
+ if (unlikely(xdp_map_invalid(xdp_prog, map_owner))) {
err = -EFAULT;
map = NULL;
goto err;
struct bpf_prog *xdp_prog)
{
struct redirect_info *ri = this_cpu_ptr(&redirect_info);
- const struct bpf_prog *map_owner = ri->map_owner;
+ unsigned long map_owner = ri->map_owner;
struct bpf_map *map = ri->map;
struct net_device *fwd = NULL;
u32 index = ri->ifindex;
ri->ifindex = 0;
ri->map = NULL;
- ri->map_owner = NULL;
+ ri->map_owner = 0;
if (map) {
- if (unlikely(map_owner != xdp_prog)) {
+ if (unlikely(xdp_map_invalid(xdp_prog, map_owner))) {
err = -EFAULT;
map = NULL;
goto err;
ri->ifindex = ifindex;
ri->flags = flags;
ri->map = NULL;
- ri->map_owner = NULL;
+ ri->map_owner = 0;
return XDP_REDIRECT;
}
};
BPF_CALL_4(bpf_xdp_redirect_map, struct bpf_map *, map, u32, ifindex, u64, flags,
- const struct bpf_prog *, map_owner)
+ unsigned long, map_owner)
{
struct redirect_info *ri = this_cpu_ptr(&redirect_info);
return -EMSGSIZE;
ifsm = nlmsg_data(nlh);
+ ifsm->family = PF_UNSPEC;
+ ifsm->pad1 = 0;
+ ifsm->pad2 = 0;
ifsm->ifindex = dev->ifindex;
ifsm->filter_mask = filter_mask;
sock_copy(newsk, sk);
+ newsk->sk_prot_creator = sk->sk_prot;
+
/* SANITY */
if (likely(newsk->sk_net_refcnt))
get_net(sock_net(newsk));
sock_reset_flag(newsk, SOCK_DONE);
- filter = rcu_dereference_protected(newsk->sk_filter, 1);
+ rcu_read_lock();
+ filter = rcu_dereference(sk->sk_filter);
if (filter != NULL)
/* though it's an empty new sock, the charging may fail
* if sysctl_optmem_max was changed between creation of
* original socket and cloning
*/
is_charged = sk_filter_charge(newsk, filter);
+ RCU_INIT_POINTER(newsk->sk_filter, filter);
+ rcu_read_unlock();
if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk, sk))) {
/* We need to make sure that we don't uncharge the new
p->old_duplex = -1;
port->netdev = slave_dev;
- ret = register_netdev(slave_dev);
- if (ret) {
- netdev_err(master, "error %d registering interface %s\n",
- ret, slave_dev->name);
- port->netdev = NULL;
- free_percpu(p->stats64);
- free_netdev(slave_dev);
- return ret;
- }
netif_carrier_off(slave_dev);
ret = dsa_slave_phy_setup(p, slave_dev);
if (ret) {
netdev_err(master, "error %d setting up slave phy\n", ret);
- unregister_netdev(slave_dev);
- free_percpu(p->stats64);
- free_netdev(slave_dev);
- return ret;
+ goto out_free;
+ }
+
+ ret = register_netdev(slave_dev);
+ if (ret) {
+ netdev_err(master, "error %d registering interface %s\n",
+ ret, slave_dev->name);
+ goto out_phy;
}
return 0;
+
+out_phy:
+ phy_disconnect(p->phy);
+ if (of_phy_is_fixed_link(p->dp->dn))
+ of_phy_deregister_fixed_link(p->dp->dn);
+out_free:
+ free_percpu(p->stats64);
+ free_netdev(slave_dev);
+ port->netdev = NULL;
+ return ret;
}
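
With register_netdev() moved last, each failure label now unwinds
exactly the steps completed so far, newest first. A compact sketch of
the goto-unwind ordering (setup/teardown helpers hypothetical):

    #include <stdio.h>

    static int setup_a(void) { return 0; }   /* e.g. allocate stats  */
    static int setup_b(void) { return 0; }   /* e.g. PHY setup       */
    static int setup_c(void) { return -1; }  /* e.g. register_netdev */
    static void teardown_b(void) { puts("undo b"); }
    static void teardown_a(void) { puts("undo a"); }

    static int create(void)
    {
        int err;

        err = setup_a();
        if (err)
            return err;
        err = setup_b();
        if (err)
            goto out_a;
        err = setup_c();    /* register last: nothing can reach the */
        if (err)            /* device before it is fully set up     */
            goto out_b;
        return 0;

    out_b:
        teardown_b();
    out_a:
        teardown_a();
        return err;
    }

    int main(void)
    {
        return create() ? 1 : 0;
    }
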
void dsa_slave_destroy(struct net_device *slave_dev)
#if IS_ENABLED(CONFIG_IPV6)
if (tb->fast_sk_family == AF_INET6)
return ipv6_rcv_saddr_equal(&tb->fast_v6_rcv_saddr,
- &sk->sk_v6_rcv_saddr,
+ inet6_rcv_saddr(sk),
tb->fast_rcv_saddr,
sk->sk_rcv_saddr,
tb->fast_ipv6_only,
goto fail_unlock;
}
success:
- if (!hlist_empty(&tb->owners)) {
+ if (hlist_empty(&tb->owners)) {
tb->fastreuse = reuse;
if (sk->sk_reuseport) {
tb->fastreuseport = FASTREUSEPORT_ANY;
tb->fastuid = uid;
tb->fast_rcv_saddr = sk->sk_rcv_saddr;
tb->fast_ipv6_only = ipv6_only_sock(sk);
+ tb->fast_sk_family = sk->sk_family;
#if IS_ENABLED(CONFIG_IPV6)
tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
#endif
tb->fastuid = uid;
tb->fast_rcv_saddr = sk->sk_rcv_saddr;
tb->fast_ipv6_only = ipv6_only_sock(sk);
+ tb->fast_sk_family = sk->sk_family;
#if IS_ENABLED(CONFIG_IPV6)
tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
#endif
break;
}
if (cmp == -1)
- pp = &(*pp)->rb_left;
+ pp = &next->rb_left;
else
- pp = &(*pp)->rb_right;
+ pp = &next->rb_right;
}
*parent_p = parent;
*pp_p = pp;
struct ip_tunnel *tunnel;
struct erspanhdr *ershdr;
const struct iphdr *iph;
- __be32 session_id;
__be32 index;
int len;
/* The original GRE header does not have key field,
* Use ERSPAN 10-bit session ID as key.
*/
- session_id = cpu_to_be32(ntohs(ershdr->session_id));
- tpi->key = session_id;
+ tpi->key = cpu_to_be32(ntohs(ershdr->session_id) & ID_MASK);
index = ershdr->md.index;
tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex,
tpi->flags | TUNNEL_KEY,
if (skb_cow_head(skb, dev->needed_headroom))
goto free_skb;
- if (skb->len > dev->mtu) {
+ if (skb->len - dev->hard_header_len > dev->mtu) {
pskb_trim(skb, dev->mtu);
truncate = true;
}
{
__gre_tunnel_init(dev);
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ netif_keep_dst(dev);
return ip_tunnel_init(dev);
}
tunnel->tun_hlen = 8;
tunnel->parms.iph.protocol = IPPROTO_GRE;
- t_hlen = tunnel->hlen + sizeof(struct iphdr) + sizeof(struct erspanhdr);
+ tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen +
+ sizeof(struct erspanhdr);
+ t_hlen = tunnel->hlen + sizeof(struct iphdr);
dev->needed_headroom = LL_MAX_HEADER + t_hlen + 4;
dev->mtu = ETH_DATA_LEN - t_hlen - 4;
dev->features |= GRE_FEATURES;
dev->hw_features |= GRE_FEATURES;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ netif_keep_dst(dev);
return ip_tunnel_init(dev);
}
static int ip_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
{
const struct iphdr *iph = ip_hdr(skb);
- struct rtable *rt;
+ int (*edemux)(struct sk_buff *skb);
struct net_device *dev = skb->dev;
- void (*edemux)(struct sk_buff *skb);
+ struct rtable *rt;
+ int err;
/* if ingress device is enslaved to an L3 master device pass the
* skb to its handler for processing
ipprot = rcu_dereference(inet_protos[protocol]);
if (ipprot && (edemux = READ_ONCE(ipprot->early_demux))) {
- edemux(skb);
+ err = edemux(skb);
+ if (unlikely(err))
+ goto drop_error;
/* must reload iph, skb->head might have changed */
iph = ip_hdr(skb);
}
* how the packet travels inside Linux networking.
*/
if (!skb_valid_dst(skb)) {
- int err = ip_route_input_noref(skb, iph->daddr, iph->saddr,
- iph->tos, dev);
- if (unlikely(err)) {
- if (err == -EXDEV)
- __NET_INC_STATS(net, LINUX_MIB_IPRPFILTER);
- goto drop;
- }
+ err = ip_route_input_noref(skb, iph->daddr, iph->saddr,
+ iph->tos, dev);
+ if (unlikely(err))
+ goto drop_error;
}
#ifdef CONFIG_IP_ROUTE_CLASSID
drop:
kfree_skb(skb);
return NET_RX_DROP;
+
+drop_error:
+ if (err == -EXDEV)
+ __NET_INC_STATS(net, LINUX_MIB_IPRPFILTER);
+ goto drop;
}
/*
struct ip_tunnel_parm *parms = &tunnel->parms;
struct dst_entry *dst = skb_dst(skb);
struct net_device *tdev; /* Device to other host */
+ int pkt_len = skb->len;
int err;
int mtu;
err = dst_output(tunnel->net, skb->sk, skb);
if (net_xmit_eval(err) == 0)
- err = skb->len;
+ err = pkt_len;
iptunnel_xmit_stats(dev, err);
return NETDEV_TX_OK;
EXPORT_SYMBOL(rt_dst_alloc);
/* called in rcu_read_lock() section */
-static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
- u8 tos, struct net_device *dev, int our)
+int ip_mc_validate_source(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ u8 tos, struct net_device *dev,
+ struct in_device *in_dev, u32 *itag)
{
- struct rtable *rth;
- struct in_device *in_dev = __in_dev_get_rcu(dev);
- unsigned int flags = RTCF_MULTICAST;
- u32 itag = 0;
int err;
/* Primary sanity checks. */
-
if (!in_dev)
return -EINVAL;
if (ipv4_is_multicast(saddr) || ipv4_is_lbcast(saddr) ||
skb->protocol != htons(ETH_P_IP))
- goto e_inval;
+ return -EINVAL;
if (ipv4_is_loopback(saddr) && !IN_DEV_ROUTE_LOCALNET(in_dev))
- goto e_inval;
+ return -EINVAL;
if (ipv4_is_zeronet(saddr)) {
if (!ipv4_is_local_multicast(daddr))
- goto e_inval;
+ return -EINVAL;
} else {
err = fib_validate_source(skb, saddr, 0, tos, 0, dev,
- in_dev, &itag);
+ in_dev, itag);
if (err < 0)
- goto e_err;
+ return err;
}
+ return 0;
+}
+
+/* called in rcu_read_lock() section */
+static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ u8 tos, struct net_device *dev, int our)
+{
+ struct in_device *in_dev = __in_dev_get_rcu(dev);
+ unsigned int flags = RTCF_MULTICAST;
+ struct rtable *rth;
+ u32 itag = 0;
+ int err;
+
+ err = ip_mc_validate_source(skb, daddr, saddr, tos, dev, in_dev, &itag);
+ if (err)
+ return err;
+
if (our)
flags |= RTCF_LOCAL;
rth = rt_dst_alloc(dev_net(dev)->loopback_dev, flags, RTN_MULTICAST,
IN_DEV_CONF_GET(in_dev, NOPOLICY), false, false);
if (!rth)
- goto e_nobufs;
+ return -ENOBUFS;
#ifdef CONFIG_IP_ROUTE_CLASSID
rth->dst.tclassid = itag;
skb_dst_set(skb, &rth->dst);
return 0;
-
-e_nobufs:
- return -ENOBUFS;
-e_inval:
- return -EINVAL;
-e_err:
- return err;
}
}
EXPORT_SYMBOL(tcp_v4_do_rcv);
-void tcp_v4_early_demux(struct sk_buff *skb)
+int tcp_v4_early_demux(struct sk_buff *skb)
{
const struct iphdr *iph;
const struct tcphdr *th;
struct sock *sk;
if (skb->pkt_type != PACKET_HOST)
- return;
+ return 0;
if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct tcphdr)))
- return;
+ return 0;
iph = ip_hdr(skb);
th = tcp_hdr(skb);
if (th->doff < sizeof(struct tcphdr) / 4)
- return;
+ return 0;
sk = __inet_lookup_established(dev_net(skb->dev), &tcp_hashinfo,
iph->saddr, th->source,
skb_dst_set_noref(skb, dst);
}
}
+ return 0;
}
bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
return !after(end_seq, tcp_wnd_end(tp));
}
-/* This checks if the data bearing packet SKB (usually tcp_send_head(sk))
- * should be put on the wire right now. If so, it returns the number of
- * packets allowed by the congestion window.
- */
-static unsigned int tcp_snd_test(const struct sock *sk, struct sk_buff *skb,
- unsigned int cur_mss, int nonagle)
-{
- const struct tcp_sock *tp = tcp_sk(sk);
- unsigned int cwnd_quota;
-
- tcp_init_tso_segs(skb, cur_mss);
-
- if (!tcp_nagle_test(tp, skb, cur_mss, nonagle))
- return 0;
-
- cwnd_quota = tcp_cwnd_test(tp, skb);
- if (cwnd_quota && !tcp_snd_wnd_test(tp, skb, cur_mss))
- cwnd_quota = 0;
-
- return cwnd_quota;
-}
-
-/* Test if sending is allowed right now. */
-bool tcp_may_send_now(struct sock *sk)
-{
- const struct tcp_sock *tp = tcp_sk(sk);
- struct sk_buff *skb = tcp_send_head(sk);
-
- return skb &&
- tcp_snd_test(sk, skb, tcp_current_mss(sk),
- (tcp_skb_is_last(sk, skb) ?
- tp->nonagle : TCP_NAGLE_PUSH));
-}
-
/* Trim TSO SKB to LEN bytes, put the remaining data into a new packet
* which is put after SKB on the list. It is very much like
* tcp_fragment() except that it may make several kinds of assumptions
goto done;
}
+ /* data was not sent, this is our new send_head */
+ sk->sk_send_head = syn_data;
+ tp->packets_out -= tcp_skb_pcount(syn_data);
+
fallback:
/* Send a regular SYN with Fast Open cookie request option */
if (fo->cookie.len > 0)
*/
tp->snd_nxt = tp->write_seq;
tp->pushed_seq = tp->write_seq;
+ buff = tcp_send_head(sk);
+ if (unlikely(buff)) {
+ tp->snd_nxt = TCP_SKB_CB(buff)->seq;
+ tp->pushed_seq = TCP_SKB_CB(buff)->seq;
+ }
TCP_INC_STATS(sock_net(sk), TCP_MIB_ACTIVEOPENS);
/* Timer for repeating the SYN until an answer. */
return NULL;
}
-void udp_v4_early_demux(struct sk_buff *skb)
+int udp_v4_early_demux(struct sk_buff *skb)
{
struct net *net = dev_net(skb->dev);
+ struct in_device *in_dev = NULL;
const struct iphdr *iph;
const struct udphdr *uh;
struct sock *sk = NULL;
/* validate the packet */
if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr)))
- return;
+ return 0;
iph = ip_hdr(skb);
uh = udp_hdr(skb);
if (skb->pkt_type == PACKET_BROADCAST ||
skb->pkt_type == PACKET_MULTICAST) {
- struct in_device *in_dev = __in_dev_get_rcu(skb->dev);
+ in_dev = __in_dev_get_rcu(skb->dev);
if (!in_dev)
- return;
+ return 0;
/* we are supposed to accept bcast packets */
if (skb->pkt_type == PACKET_MULTICAST) {
ours = ip_check_mc_rcu(in_dev, iph->daddr, iph->saddr,
iph->protocol);
if (!ours)
- return;
+ return 0;
}
sk = __udp4_lib_mcast_demux_lookup(net, uh->dest, iph->daddr,
}
if (!sk || !refcount_inc_not_zero(&sk->sk_refcnt))
- return;
+ return 0;
skb->sk = sk;
skb->destructor = sock_efree;
if (dst)
dst = dst_check(dst, 0);
if (dst) {
+ u32 itag = 0;
+
/* set noref for now.
* any place which wants to hold dst has to call
* dst_hold_safe()
*/
skb_dst_set_noref(skb, dst);
+
+ /* for unconnected multicast sockets we need to validate
+ * the source on each packet
+ */
+ if (!inet_sk(sk)->inet_daddr && in_dev)
+ return ip_mc_validate_source(skb, iph->daddr,
+ iph->saddr, iph->tos,
+ skb->dev, in_dev, &itag);
}
+ return 0;
}
int udp_rcv(struct sk_buff *skb)
return 0;
}
-static inline bool ipv6_use_optimistic_addr(struct inet6_dev *idev)
+static bool ipv6_use_optimistic_addr(struct net *net,
+ struct inet6_dev *idev)
{
#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
- return idev && idev->cnf.optimistic_dad && idev->cnf.use_optimistic;
+ if (!idev)
+ return false;
+ if (!net->ipv6.devconf_all->optimistic_dad && !idev->cnf.optimistic_dad)
+ return false;
+ if (!net->ipv6.devconf_all->use_optimistic && !idev->cnf.use_optimistic)
+ return false;
+
+ return true;
#else
return false;
#endif
/* Rule 3: Avoid deprecated and optimistic addresses */
u8 avoid = IFA_F_DEPRECATED;
- if (!ipv6_use_optimistic_addr(score->ifa->idev))
+ if (!ipv6_use_optimistic_addr(net, score->ifa->idev))
avoid |= IFA_F_OPTIMISTIC;
ret = ipv6_saddr_preferred(score->addr_type) ||
!(score->ifa->flags & avoid);
int max_addresses = in6_dev->cnf.max_addresses;
#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
- if (in6_dev->cnf.optimistic_dad &&
+ if ((net->ipv6.devconf_all->optimistic_dad ||
+ in6_dev->cnf.optimistic_dad) &&
!net->ipv6.devconf_all->forwarding && sllao)
addr_flags |= IFA_F_OPTIMISTIC;
#endif
u32 addr_flags = flags | IFA_F_PERMANENT;
#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
- if (idev->cnf.optimistic_dad &&
+ if ((dev_net(idev->dev)->ipv6.devconf_all->optimistic_dad ||
+ idev->cnf.optimistic_dad) &&
!dev_net(idev->dev)->ipv6.devconf_all->forwarding)
addr_flags |= IFA_F_OPTIMISTIC;
#endif
goto out;
if (dev->flags&(IFF_NOARP|IFF_LOOPBACK) ||
+ dev_net(dev)->ipv6.devconf_all->accept_dad < 1 ||
idev->cnf.accept_dad < 1 ||
!(ifp->flags&IFA_F_TENTATIVE) ||
ifp->flags & IFA_F_NODAD) {
*/
if (ifp->flags & IFA_F_OPTIMISTIC) {
ip6_ins_rt(ifp->rt);
- if (ipv6_use_optimistic_addr(idev)) {
+ if (ipv6_use_optimistic_addr(dev_net(dev), idev)) {
/* Because optimistic nodes can use this address,
* notify listeners. If DAD fails, RTM_DELADDR is sent.
*/
action = DAD_ABORT;
ifp->state = INET6_IFADDR_STATE_POSTDAD;
- if (idev->cnf.accept_dad > 1 && !idev->cnf.disable_ipv6 &&
+ if ((dev_net(idev->dev)->ipv6.devconf_all->accept_dad > 1 ||
+ idev->cnf.accept_dad > 1) &&
+ !idev->cnf.disable_ipv6 &&
!(ifp->flags & IFA_F_STABLE_PRIVACY)) {
struct in6_addr addr;
/* Don't send DELADDR notification for TENTATIVE address,
* since NEWADDR notification is sent only after removing
- * TENTATIVE flag.
+ * TENTATIVE flag, if DAD has not failed.
*/
- if (ifa->flags & IFA_F_TENTATIVE && event == RTM_DELADDR)
+ if (ifa->flags & IFA_F_TENTATIVE && !(ifa->flags & IFA_F_DADFAILED) &&
+ event == RTM_DELADDR)
return;
skb = nlmsg_new(inet6_ifaddr_msgsize(), GFP_ATOMIC);
}
static int ip6gre_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type,
- const void *daddr, const void *saddr, unsigned int len)
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len)
{
struct ip6_tnl *t = netdev_priv(dev);
- struct ipv6hdr *ipv6h = skb_push(skb, t->hlen);
- __be16 *p = (__be16 *)(ipv6h+1);
+ struct ipv6hdr *ipv6h;
+ __be16 *p;
- ip6_flow_hdr(ipv6h, 0,
- ip6_make_flowlabel(dev_net(dev), skb,
- t->fl.u.ip6.flowlabel, true,
- &t->fl.u.ip6));
+ ipv6h = skb_push(skb, t->hlen + sizeof(*ipv6h));
+ ip6_flow_hdr(ipv6h, 0, ip6_make_flowlabel(dev_net(dev), skb,
+ t->fl.u.ip6.flowlabel,
+ true, &t->fl.u.ip6));
ipv6h->hop_limit = t->parms.hop_limit;
ipv6h->nexthdr = NEXTHDR_GRE;
ipv6h->saddr = t->parms.laddr;
ipv6h->daddr = t->parms.raddr;
- p[0] = t->parms.o_flags;
- p[1] = htons(type);
+ p = (__be16 *)(ipv6h + 1);
+ p[0] = t->parms.o_flags;
+ p[1] = htons(type);
/*
* Set the source hardware address.
dev->features |= NETIF_F_NETNS_LOCAL;
dev->priv_flags &= ~IFF_TX_SKB_SHARING;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ netif_keep_dst(dev);
}
static bool ip6gre_netlink_encap_parms(struct nlattr *data[],
struct dst_entry *dst = NULL, *ndst = NULL;
struct net_device *tdev;
int mtu;
+ unsigned int eth_hlen = t->dev->type == ARPHRD_ETHER ? ETH_HLEN : 0;
unsigned int psh_hlen = sizeof(struct ipv6hdr) + t->encap_hlen;
unsigned int max_headroom = psh_hlen;
bool use_cache = false;
t->parms.name);
goto tx_err_dst_release;
}
- mtu = dst_mtu(dst) - psh_hlen - t->tun_hlen;
+ mtu = dst_mtu(dst) - eth_hlen - psh_hlen - t->tun_hlen;
if (encap_limit >= 0) {
max_headroom += 8;
mtu -= 8;
mtu = IPV6_MIN_MTU;
if (skb_dst(skb) && !t->parms.collect_md)
skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
- if (skb->len - t->tun_hlen > mtu && !skb_is_gso(skb)) {
+ if (skb->len - t->tun_hlen - eth_hlen > mtu && !skb_is_gso(skb)) {
*pmtu = mtu;
err = -EMSGSIZE;
goto tx_err_dst_release;
{
int err;
+ if (!ipv6_mod_enabled())
+ return -EOPNOTSUPP;
+
err = register_pernet_device(&ip6_tnl_net_ops);
if (err < 0)
goto out_pernet;
struct dst_entry *dst = skb_dst(skb);
struct net_device *tdev;
struct xfrm_state *x;
+ int pkt_len = skb->len;
int err = -1;
int mtu;
struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
u64_stats_update_begin(&tstats->syncp);
- tstats->tx_bytes += skb->len;
+ tstats->tx_bytes += pkt_len;
tstats->tx_packets++;
u64_stats_update_end(&tstats->syncp);
} else {
*/
offset = skb_transport_offset(skb);
skb->csum = skb_checksum(skb, offset, skb->len - offset, 0);
+ csum = skb->csum;
skb->ip_summed = CHECKSUM_NONE;
hlist_del_init(&session->hlist);
+ if (test_and_set_bit(0, &session->dead))
+ goto again;
+
if (session->ref != NULL)
(*session->ref)(session);
/* This function is used by the netlink TUNNEL_DELETE command.
*/
-int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
+void l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
{
- l2tp_tunnel_inc_refcount(tunnel);
- if (false == queue_work(l2tp_wq, &tunnel->del_work)) {
- l2tp_tunnel_dec_refcount(tunnel);
- return 1;
+ if (!test_and_set_bit(0, &tunnel->dead)) {
+ l2tp_tunnel_inc_refcount(tunnel);
+ queue_work(l2tp_wq, &tunnel->del_work);
}
- return 0;
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
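
Both deletion paths now race on a single dead bit: whichever caller
sets it first owns the teardown and every later caller returns at once.
A user-space sketch of the one-shot idiom with a C11 atomic flag (the
kernel code uses test_and_set_bit() on an unsigned long):

    #include <stdatomic.h>
    #include <stdio.h>

    struct tunnel {
        atomic_flag dead;
    };

    /* Only the first caller performs teardown; later, possibly
     * concurrent, callers find the flag already set and return. */
    static void tunnel_delete(struct tunnel *t)
    {
        if (atomic_flag_test_and_set(&t->dead))
            return;
        puts("tearing down (exactly once)");
        /* take a reference, queue the del_work, etc. */
    }

    int main(void)
    {
        struct tunnel t = { .dead = ATOMIC_FLAG_INIT };

        tunnel_delete(&t);
        tunnel_delete(&t);  /* no-op */
        return 0;
    }
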
*/
int l2tp_session_delete(struct l2tp_session *session)
{
+ if (test_and_set_bit(0, &session->dead))
+ return 0;
+
if (session->ref)
(*session->ref)(session);
__l2tp_session_unhash(session);
struct l2tp_session {
int magic; /* should be
* L2TP_SESSION_MAGIC */
+ long dead;
struct l2tp_tunnel *tunnel; /* back pointer to tunnel
* context */
struct l2tp_tunnel {
int magic; /* Should be L2TP_TUNNEL_MAGIC */
+
+ unsigned long dead;
+
struct rcu_head rcu;
rwlock_t hlist_lock; /* protect session_hlist */
bool acpt_newsess; /* Indicates whether this
u32 peer_tunnel_id, struct l2tp_tunnel_cfg *cfg,
struct l2tp_tunnel **tunnelp);
void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
-int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel);
+void l2tp_tunnel_delete(struct l2tp_tunnel *tunnel);
struct l2tp_session *l2tp_session_create(int priv_size,
struct l2tp_tunnel *tunnel,
u32 session_id, u32 peer_session_id,
struct net_device *dev;
struct sock *tunnel_sock;
struct l2tp_session *session;
- struct list_head list;
atomic_long_t tx_bytes;
atomic_long_t tx_packets;
atomic_long_t tx_dropped;
struct net_device *dev;
};
-/* per-net private data for this module */
-static unsigned int l2tp_eth_net_id;
-struct l2tp_eth_net {
- struct list_head l2tp_eth_dev_list;
- spinlock_t l2tp_eth_lock;
-};
-
-static inline struct l2tp_eth_net *l2tp_eth_pernet(struct net *net)
-{
- return net_generic(net, l2tp_eth_net_id);
-}
static int l2tp_eth_dev_init(struct net_device *dev)
{
static void l2tp_eth_dev_uninit(struct net_device *dev)
{
- struct l2tp_eth *priv = netdev_priv(dev);
- struct l2tp_eth_net *pn = l2tp_eth_pernet(dev_net(dev));
-
- spin_lock(&pn->l2tp_eth_lock);
- list_del_init(&priv->list);
- spin_unlock(&pn->l2tp_eth_lock);
dev_put(dev);
}
struct l2tp_eth *priv;
struct l2tp_eth_sess *spriv;
int rc;
- struct l2tp_eth_net *pn;
if (cfg->ifname) {
strlcpy(name, cfg->ifname, IFNAMSIZ);
priv = netdev_priv(dev);
priv->dev = dev;
priv->session = session;
- INIT_LIST_HEAD(&priv->list);
priv->tunnel_sock = tunnel->sock;
session->recv_skb = l2tp_eth_dev_recv;
strlcpy(session->ifname, dev->name, IFNAMSIZ);
dev_hold(dev);
- pn = l2tp_eth_pernet(dev_net(dev));
- spin_lock(&pn->l2tp_eth_lock);
- list_add(&priv->list, &pn->l2tp_eth_dev_list);
- spin_unlock(&pn->l2tp_eth_lock);
return 0;
return rc;
}
-static __net_init int l2tp_eth_init_net(struct net *net)
-{
- struct l2tp_eth_net *pn = net_generic(net, l2tp_eth_net_id);
-
- INIT_LIST_HEAD(&pn->l2tp_eth_dev_list);
- spin_lock_init(&pn->l2tp_eth_lock);
-
- return 0;
-}
-
-static struct pernet_operations l2tp_eth_net_ops = {
- .init = l2tp_eth_init_net,
- .id = &l2tp_eth_net_id,
- .size = sizeof(struct l2tp_eth_net),
-};
-
static const struct l2tp_nl_cmd_ops l2tp_eth_nl_cmd_ops = {
.session_create = l2tp_eth_create,
err = l2tp_nl_register_ops(L2TP_PWTYPE_ETH, &l2tp_eth_nl_cmd_ops);
if (err)
- goto out;
-
- err = register_pernet_device(&l2tp_eth_net_ops);
- if (err)
- goto out_unreg;
+ goto err;
pr_info("L2TP ethernet pseudowire support (L2TPv3)\n");
return 0;
-out_unreg:
- l2tp_nl_unregister_ops(L2TP_PWTYPE_ETH);
-out:
+err:
return err;
}
static void __exit l2tp_eth_exit(void)
{
- unregister_pernet_device(&l2tp_eth_net_ops);
l2tp_nl_unregister_ops(L2TP_PWTYPE_ETH);
}
BUG_ON(session->magic != L2TP_SESSION_MAGIC);
- if (sock) {
+ if (sock)
inet_shutdown(sock, SEND_SHUTDOWN);
- /* Don't let the session go away before our socket does */
- l2tp_session_inc_refcount(session);
- }
+
+ /* Don't let the session go away before our socket does */
+ l2tp_session_inc_refcount(session);
}
/* Really kill the session socket. (Called from sock_put() if
static int
mtype_head(struct ip_set *set, struct sk_buff *skb)
{
- const struct htype *h = set->data;
+ struct htype *h = set->data;
const struct htable *t;
struct nlattr *nested;
size_t memsize;
u8 htable_bits;
+ /* If any members have expired, set->elements will be wrong;
+ * the mtype_expire function will update it with the right count.
+ * We do not hold set->lock here, so grab it first.
+ * set->elements can still be incorrect in the case of a huge set,
+ * because elements might time out during the listing.
+ */
+ if (SET_WITH_TIMEOUT(set)) {
+ spin_lock_bh(&set->lock);
+ mtype_expire(set, h);
+ spin_unlock_bh(&set->lock);
+ }
+
rcu_read_lock_bh();
t = rcu_dereference_bh_nfnl(h->table);
memsize = mtype_ahash_memsize(h, t) + set->ext_size;
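
The pattern here is expire-then-report: take the lock, drop timed-out
members, read the counter, and accept that the unlocked listing that
follows can still drift on a huge set. A trivial user-space sketch of
the idiom (names hypothetical):

    #include <pthread.h>

    struct set {
        pthread_mutex_t lock;
        unsigned int elements;
    };

    static void expire_stale(struct set *s)
    {
        /* hypothetical: drop timed-out members, decrement elements */
    }

    /* Report the count only after expiring stale members under the
     * lock; further timeouts during an unlocked listing may still
     * make it stale again. */
    static unsigned int set_count(struct set *s)
    {
        unsigned int n;

        pthread_mutex_lock(&s->lock);
        expire_stale(s);
        n = s->elements;
        pthread_mutex_unlock(&s->lock);
        return n;
    }
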
srchash = hash_by_src(net,
&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
- lock = &nf_nat_locks[srchash % ARRAY_SIZE(nf_nat_locks)];
+ lock = &nf_nat_locks[srchash % CONNTRACK_LOCKS];
spin_lock_bh(lock);
hlist_add_head_rcu(&ct->nat_bysource,
&nf_nat_bysource[srchash]);
unsigned int h;
h = hash_by_src(nf_ct_net(ct), &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
- spin_lock_bh(&nf_nat_locks[h % ARRAY_SIZE(nf_nat_locks)]);
+ spin_lock_bh(&nf_nat_locks[h % CONNTRACK_LOCKS]);
hlist_del_rcu(&ct->nat_bysource);
- spin_unlock_bh(&nf_nat_locks[h % ARRAY_SIZE(nf_nat_locks)]);
+ spin_unlock_bh(&nf_nat_locks[h % CONNTRACK_LOCKS]);
}
static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
/* Leave them the same for the moment. */
nf_nat_htable_size = nf_conntrack_htable_size;
- if (nf_nat_htable_size < ARRAY_SIZE(nf_nat_locks))
- nf_nat_htable_size = ARRAY_SIZE(nf_nat_locks);
+ if (nf_nat_htable_size < CONNTRACK_LOCKS)
+ nf_nat_htable_size = CONNTRACK_LOCKS;
nf_nat_bysource = nf_ct_alloc_hashtable(&nf_nat_htable_size, 0);
if (!nf_nat_bysource)
return ret;
}
- for (i = 0; i < ARRAY_SIZE(nf_nat_locks); i++)
+ for (i = 0; i < CONNTRACK_LOCKS; i++)
spin_lock_init(&nf_nat_locks[i]);
nf_ct_helper_expectfn_register(&follow_master_nat);
mutex_unlock(nlk->cb_mutex);
+ ret = 0;
if (cb->start)
- cb->start(cb);
+ ret = cb->start(cb);
+
+ if (!ret)
+ ret = netlink_dump(sk);
- ret = netlink_dump(sk);
sock_put(sk);
if (ret)
mutex_lock(&fanout_mutex);
- err = -EINVAL;
- if (!po->running)
- goto out;
-
err = -EALREADY;
if (po->fanout)
goto out;
list_add(&match->list, &fanout_list);
}
err = -EINVAL;
- if (match->type == type &&
+
+ spin_lock(&po->bind_lock);
+ if (po->running &&
+ match->type == type &&
match->prot_hook.type == po->prot_hook.type &&
match->prot_hook.dev == po->prot_hook.dev) {
err = -ENOSPC;
err = 0;
}
}
+ spin_unlock(&po->bind_lock);
+
+ if (err && !refcount_read(&match->sk_ref)) {
+ list_del(&match->list);
+ kfree(match);
+ }
+
out:
if (err && rollover) {
kfree(rollover);
struct virtio_net_hdr vnet_hdr = { 0 };
int offset = 0;
struct packet_sock *po = pkt_sk(sk);
+ bool has_vnet_hdr = false;
int hlen, tlen, linear;
int extra_len = 0;
err = packet_snd_vnet_parse(msg, &len, &vnet_hdr);
if (err)
goto out_unlock;
+ has_vnet_hdr = true;
}
if (unlikely(sock_flag(sk, SOCK_NOFCS))) {
skb->priority = sk->sk_priority;
skb->mark = sockc.mark;
- if (po->has_vnet_hdr) {
+ if (has_vnet_hdr) {
err = virtio_net_hdr_to_skb(skb, &vnet_hdr, vio_le());
if (err)
goto out_free;
int ret = 0;
bool unlisted = false;
- if (po->fanout)
- return -EINVAL;
-
lock_sock(sk);
spin_lock(&po->bind_lock);
rcu_read_lock();
+ if (po->fanout) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
if (name) {
dev = dev_get_by_name_rcu(sock_net(sk), name);
if (!dev) {
if (!tc_flags_valid(fnew->flags)) {
err = -EINVAL;
- goto errout;
+ goto errout_idr;
}
}
err = fl_set_parms(net, tp, fnew, &mask, base, tb, tca[TCA_RATE], ovr);
if (err)
- goto errout;
+ goto errout_idr;
err = fl_check_assign_mask(head, &mask);
if (err)
- goto errout;
+ goto errout_idr;
if (!tc_skip_sw(fnew->flags)) {
if (!fold && fl_lookup(head, &fnew->mkey)) {
err = -EEXIST;
- goto errout;
+ goto errout_idr;
}
err = rhashtable_insert_fast(&head->ht, &fnew->ht_node,
head->ht_params);
if (err)
- goto errout;
+ goto errout_idr;
}
if (!tc_skip_hw(fnew->flags)) {
&mask.key,
fnew);
if (err)
- goto errout;
+ goto errout_idr;
}
if (!tc_in_hw(fnew->flags))
kfree(tb);
return 0;
+errout_idr:
+ if (fnew->handle)
+ idr_remove_ext(&head->handle_idr, fnew->handle);
errout:
tcf_exts_destroy(&fnew->exts);
kfree(fnew);
if (tc_skip_sw(head->flags))
return -1;
+ *res = head->res;
return tcf_exts_exec(skb, &head->exts, res);
}
qdisc->gso_skb = NULL;
}
qdisc->q.qlen = 0;
+ qdisc->qstats.backlog = 0;
}
EXPORT_SYMBOL(qdisc_reset);
}
if (cl != NULL) {
+ int old_flags;
+
if (parentid) {
if (cl->cl_parent &&
cl->cl_parent->cl_common.classid != parentid)
}
sch_tree_lock(sch);
+ old_flags = cl->cl_flags;
+
if (rsc != NULL)
hfsc_change_rsc(cl, rsc, cur_time);
if (fsc != NULL)
hfsc_change_usc(cl, usc, cur_time);
if (cl->qdisc->q.qlen != 0) {
- if (cl->cl_flags & HFSC_RSC)
- update_ed(cl, qdisc_peek_len(cl->qdisc));
- if (cl->cl_flags & HFSC_FSC)
- update_vf(cl, 0, cur_time);
+ int len = qdisc_peek_len(cl->qdisc);
+
+ if (cl->cl_flags & HFSC_RSC) {
+ if (old_flags & HFSC_RSC)
+ update_ed(cl, len);
+ else
+ init_ed(cl, len);
+ }
+
+ if (cl->cl_flags & HFSC_FSC) {
+ if (old_flags & HFSC_FSC)
+ update_vf(cl, 0, cur_time);
+ else
+ init_vf(cl, len);
+ }
}
sch_tree_unlock(sch);
.r = r,
.net_admin = netlink_net_capable(cb->skb, CAP_NET_ADMIN),
};
+ int pos = cb->args[2];
/* eps hashtable dumps
* args:
goto done;
sctp_for_each_transport(sctp_sock_filter, sctp_sock_dump,
- net, (int *)&cb->args[2], &commp);
+ net, &pos, &commp);
+ cb->args[2] = pos;
done:
cb->args[1] = cb->args[4];
__be32 *subnet, u8 *prefix_len)
{
struct dst_entry *dst = sk_dst_get(clcsock->sk);
+ struct in_device *in_dev;
struct sockaddr_in addr;
int rc = -ENOENT;
int len;
/* get address to which the internal TCP socket is bound */
kernel_getsockname(clcsock, (struct sockaddr *)&addr, &len);
/* analyze IPv4 specific data of net_device belonging to TCP socket */
- for_ifa(dst->dev->ip_ptr) {
- if (ifa->ifa_address != addr.sin_addr.s_addr)
+ rcu_read_lock();
+ in_dev = __in_dev_get_rcu(dst->dev);
+ for_ifa(in_dev) {
+ if (!inet_ifa_match(addr.sin_addr.s_addr, ifa))
continue;
*prefix_len = inet_mask_len(ifa->ifa_mask);
*subnet = ifa->ifa_address & ifa->ifa_mask;
rc = 0;
break;
- } endfor_ifa(dst->dev->ip_ptr);
+ } endfor_ifa(in_dev);
+ rcu_read_unlock();
out_rel:
dst_release(dst);
/* RDMA setup failed, switch back to TCP */
smc->use_fallback = true;
if (reason_code && (reason_code != SMC_CLC_DECL_REPLY)) {
- rc = smc_clc_send_decline(smc, reason_code, 0);
+ rc = smc_clc_send_decline(smc, reason_code);
if (rc < sizeof(struct smc_clc_msg_decline))
goto out_err;
}
rc = local_contact;
if (rc == -ENOMEM)
reason_code = SMC_CLC_DECL_MEM;/* insufficient memory*/
- else if (rc == -ENOLINK)
- reason_code = SMC_CLC_DECL_SYNCERR; /* synchr. error */
goto decline_rdma;
}
link = &new_smc->conn.lgr->lnk[SMC_SINGLE_LINK];
smc_conn_free(&new_smc->conn);
new_smc->use_fallback = true;
if (reason_code && (reason_code != SMC_CLC_DECL_REPLY)) {
- rc = smc_clc_send_decline(new_smc, reason_code, 0);
+ rc = smc_clc_send_decline(new_smc, reason_code);
if (rc < sizeof(struct smc_clc_msg_decline))
goto out_err;
}
atomic_t sndbuf_space; /* remaining space in sndbuf */
u16 tx_cdc_seq; /* sequence # for CDC send */
spinlock_t send_lock; /* protect wr_sends */
- struct work_struct tx_work; /* retry of smc_cdc_msg_send */
+ struct delayed_work tx_work; /* retry of smc_cdc_msg_send */
struct smc_host_cdc_msg local_rx_ctrl; /* filled during event_handl.
* .prod cf. TCP rcv_nxt
}
if (clcm->type == SMC_CLC_DECLINE) {
reason_code = SMC_CLC_DECL_REPLY;
- if (ntohl(((struct smc_clc_msg_decline *)buf)->peer_diagnosis)
- == SMC_CLC_DECL_SYNCERR)
+ if (((struct smc_clc_msg_decline *)buf)->hdr.flag) {
smc->conn.lgr->sync_err = true;
+ smc_lgr_terminate(smc->conn.lgr);
+ }
}
out:
}
/* send CLC DECLINE message across internal TCP socket */
-int smc_clc_send_decline(struct smc_sock *smc, u32 peer_diag_info,
- u8 out_of_sync)
+int smc_clc_send_decline(struct smc_sock *smc, u32 peer_diag_info)
{
struct smc_clc_msg_decline dclc;
struct msghdr msg;
dclc.hdr.type = SMC_CLC_DECLINE;
dclc.hdr.length = htons(sizeof(struct smc_clc_msg_decline));
dclc.hdr.version = SMC_CLC_V1;
- dclc.hdr.flag = out_of_sync ? 1 : 0;
+ dclc.hdr.flag = (peer_diag_info == SMC_CLC_DECL_SYNCERR) ? 1 : 0;
memcpy(dclc.id_for_peer, local_systemid, sizeof(local_systemid));
dclc.peer_diagnosis = htonl(peer_diag_info);
memcpy(dclc.trl.eyecatcher, SMC_EYECATCHER, sizeof(SMC_EYECATCHER));
int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
u8 expected_type);
-int smc_clc_send_decline(struct smc_sock *smc, u32 peer_diag_info,
- u8 out_of_sync);
+int smc_clc_send_decline(struct smc_sock *smc, u32 peer_diag_info);
int smc_clc_send_proposal(struct smc_sock *smc, struct smc_ib_device *smcibdev,
u8 ibport);
int smc_clc_send_confirm(struct smc_sock *smc);
{
struct smc_cdc_conn_state_flags *txflags =
&smc->conn.local_tx_ctrl.conn_state_flags;
- long timeout = SMC_MAX_STREAM_WAIT_TIMEOUT;
struct smc_connection *conn = &smc->conn;
struct sock *sk = &smc->sk;
int old_state;
+ long timeout;
int rc = 0;
- if (sock_flag(sk, SOCK_LINGER) &&
- !(current->flags & PF_EXITING))
- timeout = sk->sk_lingertime;
+ timeout = current->flags & PF_EXITING ?
+ 0 : sock_flag(sk, SOCK_LINGER) ?
+ sk->sk_lingertime : SMC_MAX_STREAM_WAIT_TIMEOUT;
again:
old_state = sk->sk_state;
case SMC_ACTIVE:
smc_close_stream_wait(smc, timeout);
release_sock(sk);
- cancel_work_sync(&conn->tx_work);
+ cancel_delayed_work_sync(&conn->tx_work);
lock_sock(sk);
if (sk->sk_state == SMC_ACTIVE) {
/* send close request */
if (!smc_cdc_rxed_any_close(conn))
smc_close_stream_wait(smc, timeout);
release_sock(sk);
- cancel_work_sync(&conn->tx_work);
+ cancel_delayed_work_sync(&conn->tx_work);
lock_sock(sk);
if (sk->sk_err != ECONNABORTED) {
/* confirm close from peer */
/* peer sending PeerConnectionClosed will cause transition */
break;
case SMC_PROCESSABORT:
- cancel_work_sync(&conn->tx_work);
+ release_sock(sk);
+ cancel_delayed_work_sync(&conn->tx_work);
+ lock_sock(sk);
smc_close_abort(conn);
sk->sk_state = SMC_CLOSED;
smc_close_wait_tx_pends(smc);
int smc_close_shutdown_write(struct smc_sock *smc)
{
struct smc_connection *conn = &smc->conn;
- long timeout = SMC_MAX_STREAM_WAIT_TIMEOUT;
struct sock *sk = &smc->sk;
int old_state;
+ long timeout;
int rc = 0;
- if (sock_flag(sk, SOCK_LINGER))
- timeout = sk->sk_lingertime;
+ timeout = current->flags & PF_EXITING ?
+ 0 : sock_flag(sk, SOCK_LINGER) ?
+ sk->sk_lingertime : SMC_MAX_STREAM_WAIT_TIMEOUT;
again:
old_state = sk->sk_state;
case SMC_ACTIVE:
smc_close_stream_wait(smc, timeout);
release_sock(sk);
- cancel_work_sync(&conn->tx_work);
+ cancel_delayed_work_sync(&conn->tx_work);
lock_sock(sk);
/* send close wr request */
rc = smc_close_wr(conn);
if (!smc_cdc_rxed_any_close(conn))
smc_close_stream_wait(smc, timeout);
release_sock(sk);
- cancel_work_sync(&conn->tx_work);
+ cancel_delayed_work_sync(&conn->tx_work);
lock_sock(sk);
/* confirm close from peer */
rc = smc_close_wr(conn);
#include "smc_cdc.h"
#include "smc_close.h"
-#define SMC_LGR_NUM_INCR 256
-#define SMC_LGR_FREE_DELAY (600 * HZ)
+#define SMC_LGR_NUM_INCR 256
+#define SMC_LGR_FREE_DELAY_SERV (600 * HZ)
+#define SMC_LGR_FREE_DELAY_CLNT (SMC_LGR_FREE_DELAY_SERV + 10)
static u32 smc_lgr_num; /* unique link group number */
__smc_lgr_unregister_conn(conn);
}
write_unlock_bh(&lgr->conns_lock);
- if (reduced && !lgr->conns_num)
- schedule_delayed_work(&lgr->free_work, SMC_LGR_FREE_DELAY);
+ if (!reduced || lgr->conns_num)
+ return;
+ /* client link group creation always follows the server link group
+ * creation. For the client, use a somewhat higher removal delay;
+ * otherwise there is a risk of out-of-sync link groups.
+ */
+ mod_delayed_work(system_wq, &lgr->free_work,
+ lgr->role == SMC_CLNT ? SMC_LGR_FREE_DELAY_CLNT :
+ SMC_LGR_FREE_DELAY_SERV);
}
static void smc_lgr_free_work(struct work_struct *work)
ndev = smcibdev->ibdev->get_netdev(smcibdev->ibdev, ibport);
if (ndev) {
memcpy(&smcibdev->mac, ndev->dev_addr, ETH_ALEN);
+ dev_put(ndev);
} else if (!rc) {
memcpy(&smcibdev->mac[ibport - 1][0],
&smcibdev->gid[ibport - 1].raw[8], 3);
sizeof(new_pnetelem->ndev->name)) ||
smc_pnet_same_ibname(pnetelem,
new_pnetelem->smcibdev->ibdev->name,
- new_pnetelem->ib_port))
+ new_pnetelem->ib_port)) {
+ dev_put(pnetelem->ndev);
goto found;
+ }
}
list_add_tail(&new_pnetelem->list, &smc_pnettable.pnetlist);
rc = 0;
read_done = sock_intr_errno(timeo);
break;
}
+ if (!timeo)
+ return -EAGAIN;
}
if (!atomic_read(&conn->bytes_to_rcv)) {
#include "smc_cdc.h"
#include "smc_tx.h"
+#define SMC_TX_WORK_DELAY HZ
+
/***************************** sndbuf producer *******************************/
/* callback implementation for sk.sk_write_space()
goto out_unlock;
}
rc = 0;
- schedule_work(&conn->tx_work);
+ schedule_delayed_work(&conn->tx_work,
+ SMC_TX_WORK_DELAY);
}
goto out_unlock;
}
*/
static void smc_tx_work(struct work_struct *work)
{
- struct smc_connection *conn = container_of(work,
+ struct smc_connection *conn = container_of(to_delayed_work(work),
struct smc_connection,
tx_work);
struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
if (!rc)
rc = smc_cdc_msg_send(conn, wr_buf, pend);
if (rc < 0) {
- schedule_work(&conn->tx_work);
+ schedule_delayed_work(&conn->tx_work,
+ SMC_TX_WORK_DELAY);
return;
}
smc_curs_write(&conn->rx_curs_confirmed,
void smc_tx_init(struct smc_sock *smc)
{
smc->sk.sk_write_space = smc_tx_write_space;
- INIT_WORK(&smc->conn.tx_work, smc_tx_work);
+ INIT_DELAYED_WORK(&smc->conn.tx_work, smc_tx_work);
spin_lock_init(&smc->conn.send_lock);
}
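
The conversion from work_struct to delayed_work lets a failed send back
off for SMC_TX_WORK_DELAY instead of requeueing immediately, and every
cancel site switches to cancel_delayed_work_sync() to match. A minimal
kernel-style sketch of the pattern (conn layout and try_send()
hypothetical; the workqueue calls are the stock API):

    #include <linux/workqueue.h>

    #define RETRY_DELAY	HZ		/* back off before retrying */

    struct conn {
        struct delayed_work tx_work;	/* was: struct work_struct */
    };

    static int try_send(struct conn *c)	/* hypothetical transmit */
    {
        return 0;
    }

    static void conn_tx_work(struct work_struct *work)
    {
        struct conn *c = container_of(to_delayed_work(work),
                                      struct conn, tx_work);

        if (try_send(c) < 0)		/* transient failure: retry */
            schedule_delayed_work(&c->tx_work, RETRY_DELAY);
    }

    static void conn_init(struct conn *c)
    {
        INIT_DELAYED_WORK(&c->tx_work, conn_tx_work);
    }

    /* Teardown must use cancel_delayed_work_sync(&c->tx_work). */
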
int rc;
ib_req_notify_cq(link->smcibdev->roce_cq_send,
- IB_CQ_SOLICITED_MASK | IB_CQ_REPORT_MISSED_EVENTS);
+ IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS);
pend = container_of(priv, struct smc_wr_tx_pend, priv);
rc = ib_post_send(link->roce_qp, &link->wr_tx_ibs[pend->idx],
&failed_wr);
if (unlikely(n != mw->mw_nents))
goto out_mapmr_err;
- dprintk("RPC: %s: Using frmr %p to map %u segments (%u bytes)\n",
+ dprintk("RPC: %s: Using frmr %p to map %u segments (%llu bytes)\n",
__func__, frmr, mw->mw_nents, mr->length);
key = (u8)(mr->rkey & 0x000000FF);
return false;
if (msg_errcode(msg))
return false;
- *err = -TIPC_ERR_NO_NAME;
+ *err = TIPC_ERR_NO_NAME;
if (skb_linearize(skb))
return false;
msg = buf_msg(skb);
if (err)
return err;
+ if (!setup.chandef.chan)
+ return -EINVAL;
+
err = validate_beacon_tx_rate(rdev, setup.chandef.chan->band,
&setup.beacon_rate);
if (err)
if (err)
return err;
+ if (!tb[NL80211_REKEY_DATA_REPLAY_CTR] || !tb[NL80211_REKEY_DATA_KEK] ||
+ !tb[NL80211_REKEY_DATA_KCK])
+ return -EINVAL;
if (nla_len(tb[NL80211_REKEY_DATA_REPLAY_CTR]) != NL80211_REPLAY_CTR_LEN)
return -ERANGE;
if (nla_len(tb[NL80211_REKEY_DATA_KEK]) != NL80211_KEK_LEN)
endif
ifdef CONFIG_GCOV_KERNEL
objtool_args += --no-unreachable
+else
+objtool_args += $(call cc-ifversion, -lt, 0405, --no-unreachable)
endif
# 'OBJECT_FILES_NON_STANDARD := y': skip objtool checking for a directory
exit(0);
}
- if (!$is_patch && $file !~ /cover-letter\.patch$/) {
+ if (!$is_patch && $filename !~ /cover-letter\.patch$/) {
ERROR("NOT_UNIFIED_DIFF",
"Does not appear to be a unified-diff format patch\n");
}
compile_to_dts() {
dtx="$1"
+ dtc_include="$2"
if [ -d "${dtx}" ] ; then
# ----- input is DTS (source)
if ( cpp ${cpp_flags} -x assembler-with-cpp ${dtx} \
- | ${DTC} -I dts ) ; then
+ | ${DTC} ${dtc_include} -I dts ) ; then
return
fi
cpp_flags="\
-nostdinc \
- -I${srctree}/arch/${ARCH}/boot/dts \
-I${srctree}/scripts/dtc/include-prefixes \
- -I${srctree}/drivers/of/testcase-data \
-undef -D__DTS__"
-dtc_flags="\
- -i ${srctree}/arch/${ARCH}/boot/dts/ \
- -i ${srctree}/kernel/dts \
- ${dtx_path_1_dtc_include} \
- ${dtx_path_2_dtc_include}"
-
-DTC="${DTC} ${dtc_flags} -O dts -qq -f ${dtc_sort} -o -"
+DTC="\
+ ${DTC} \
+ -i ${srctree}/scripts/dtc/include-prefixes \
+ -O dts -qq -f ${dtc_sort} -o -"
# ----- do the diff or decompile
if (( ${cmd_diff} )) ; then
diff ${diff_flags} --label "${dtx_file_1}" --label "${dtx_file_2}" \
- <(compile_to_dts "${dtx_file_1}") \
- <(compile_to_dts "${dtx_file_2}")
+ <(compile_to_dts "${dtx_file_1}" "${dtx_path_1_dtc_include}") \
+ <(compile_to_dts "${dtx_file_2}" "${dtx_path_2_dtc_include}")
else
- compile_to_dts "${dtx_file_1}"
+ compile_to_dts "${dtx_file_1}" "${dtx_path_1_dtc_include}"
fi
+++ /dev/null
-if [ ! -f .version ]
-then
- echo 1
-else
- expr 0`cat .version` + 1
-fi
$(MAKE) clean
$(CONFIG_SHELL) $(MKSPEC) >$(objtree)/kernel.spec
$(call cmd,src_tar,$(KERNELPATH),kernel.spec)
- $(CONFIG_SHELL) $(srctree)/scripts/mkversion > $(objtree)/.tmp_version
- mv -f $(objtree)/.tmp_version $(objtree)/.version
rpmbuild $(RPMOPTS) --target $(UTS_MACHINE) -ta $(KERNELPATH).tar.gz
rm $(KERNELPATH).tar.gz kernel.spec
binrpm-pkg: FORCE
$(MAKE) KBUILD_SRC=
$(CONFIG_SHELL) $(MKSPEC) prebuilt > $(objtree)/binkernel.spec
- $(CONFIG_SHELL) $(srctree)/scripts/mkversion > $(objtree)/.tmp_version
- mv -f $(objtree)/.tmp_version $(objtree)/.version
-
rpmbuild $(RPMOPTS) --define "_builddir $(objtree)" --target \
$(UTS_MACHINE) -bb $(objtree)/binkernel.spec
rm binkernel.spec
fi
sourcename=$KDEB_SOURCENAME
tmpdir="$objtree/debian/tmp"
-fwdir="$objtree/debian/fwtmp"
kernel_headers_dir="$objtree/debian/hdrtmp"
libc_headers_dir="$objtree/debian/headertmp"
dbg_dir="$objtree/debian/dbgtmp"
packagename=linux-image-$version
-fwpackagename=linux-firmware-image-$version
kernel_headers_packagename=linux-headers-$version
libc_headers_packagename=linux-libc-dev
dbg_packagename=$packagename-dbg
BUILD_DEBUG="$(grep -s '^CONFIG_DEBUG_INFO=y' $KCONFIG_CONFIG || true)"
# Setup the directory structure
-rm -rf "$tmpdir" "$fwdir" "$kernel_headers_dir" "$libc_headers_dir" "$dbg_dir" $objtree/debian/files
+rm -rf "$tmpdir" "$kernel_headers_dir" "$libc_headers_dir" "$dbg_dir" $objtree/debian/files
mkdir -m 755 -p "$tmpdir/DEBIAN"
mkdir -p "$tmpdir/lib" "$tmpdir/boot"
-mkdir -p "$fwdir/lib/firmware/$version/"
mkdir -p "$kernel_headers_dir/lib/modules/$version/"
# Build and install the kernel
cat <<EOF >> debian/control
Package: $packagename
-Suggests: $fwpackagename
Architecture: any
Description: Linux kernel, version $version
This package contains the Linux kernel, modules and corresponding other
This is useful for people who need to build external modules
EOF
-# Do we have firmware? Move it out of the way and build it into a package.
-if [ -e "$tmpdir/lib/firmware" ]; then
- mv "$tmpdir/lib/firmware"/* "$fwdir/lib/firmware/$version/"
- rmdir "$tmpdir/lib/firmware"
-
- cat <<EOF >> debian/control
-
-Package: $fwpackagename
-Architecture: all
-Description: Linux kernel firmware, version $version
- This package contains firmware from the Linux kernel, version $version.
-EOF
-
- create_package "$fwpackagename" "$fwdir"
-fi
-
cat <<EOF >> debian/control
Package: $libc_headers_packagename
echo "Name: kernel"
echo "Summary: The Linux Kernel"
echo "Version: $__KERNELRELEASE"
-# we need to determine the NEXT version number so that uname and
-# rpm -q will agree
-echo "Release: `. $srctree/scripts/mkversion`"
+echo "Release: $(cat .version 2>/dev/null || echo 1)"
echo "License: GPL"
echo "Group: System Environment/Kernel"
echo "Vendor: The Linux Community"
echo "%build"
if ! $PREBUILT; then
-echo "make clean && make %{?_smp_mflags}"
+echo "make clean && make %{?_smp_mflags} KBUILD_BUILD_VERSION=%{release}"
echo ""
fi
echo "%else"
echo 'mkdir -p $RPM_BUILD_ROOT/boot $RPM_BUILD_ROOT/lib/modules'
echo "%endif"
-echo 'mkdir -p $RPM_BUILD_ROOT'"/lib/firmware/$KERNELRELEASE"
-echo 'INSTALL_MOD_PATH=$RPM_BUILD_ROOT make %{?_smp_mflags} KBUILD_SRC= mod-fw= modules_install'
-echo 'INSTALL_FW_PATH=$RPM_BUILD_ROOT'"/lib/firmware/$KERNELRELEASE"
-echo 'make INSTALL_FW_PATH=$INSTALL_FW_PATH' firmware_install
+echo 'INSTALL_MOD_PATH=$RPM_BUILD_ROOT make %{?_smp_mflags} KBUILD_SRC= modules_install'
echo "%ifarch ia64"
echo 'cp $KBUILD_IMAGE $RPM_BUILD_ROOT'"/boot/efi/vmlinuz-$KERNELRELEASE"
echo 'ln -s '"efi/vmlinuz-$KERNELRELEASE" '$RPM_BUILD_ROOT'"/boot/"
echo 'rm -f $RPM_BUILD_ROOT'"/lib/modules/$KERNELRELEASE/build"
echo 'rm -f $RPM_BUILD_ROOT'"/lib/modules/$KERNELRELEASE/source"
echo "mkdir -p "'$RPM_BUILD_ROOT'"/usr/src/kernels/$KERNELRELEASE"
-echo "EXCLUDES=\"$RCS_TAR_IGNORE --exclude .tmp_versions --exclude=*vmlinux* --exclude=*.o --exclude=*.ko --exclude=*.cmd --exclude=Documentation --exclude=firmware --exclude .config.old --exclude .missing-syscalls.d\""
+echo "EXCLUDES=\"$RCS_TAR_IGNORE --exclude .tmp_versions --exclude=*vmlinux* --exclude=*.o --exclude=*.ko --exclude=*.cmd --exclude=Documentation --exclude .config.old --exclude .missing-syscalls.d\""
echo "tar "'$EXCLUDES'" -cf- . | (cd "'$RPM_BUILD_ROOT'"/usr/src/kernels/$KERNELRELEASE;tar xvf -)"
echo 'cd $RPM_BUILD_ROOT'"/lib/modules/$KERNELRELEASE"
echo "ln -sf /usr/src/kernels/$KERNELRELEASE build"
echo "/lib/modules/$KERNELRELEASE"
echo "%exclude /lib/modules/$KERNELRELEASE/build"
echo "%exclude /lib/modules/$KERNELRELEASE/source"
-echo "/lib/firmware/$KERNELRELEASE"
echo "/boot/*"
echo ""
echo "%files headers"
adapater||adapter
addional||additional
additionaly||additionally
+additonal||additional
addres||address
adddress||address
addreses||addresses
afecting||affecting
againt||against
agaist||against
+aggreataon||aggregation
+aggreation||aggregation
albumns||albums
alegorical||allegorical
algined||aligned
alignement||alignment
allign||align
alligned||aligned
+alllocate||allocate
+alloated||allocated
allocatote||allocate
allocatrd||allocated
allocte||allocate
availavility||availability
availble||available
availiable||available
+availible||available
avalable||available
avaliable||available
aysnc||async
cacluated||calculated
caculation||calculation
calender||calendar
+calescing||coalescing
calle||called
callibration||calibration
calucate||calculate
cancelation||cancellation
cancle||cancel
capabilites||capabilities
+capabilty||capability
capabitilies||capabilities
capatibilities||capabilities
capapbilities||capabilities
contaisn||contains
contant||contact
contence||contents
+continious||continuous
continous||continuous
continously||continuously
continueing||continuing
diffrent||different
diffrentiate||differentiate
difinition||definition
+dimesions||dimensions
diplay||display
direectly||directly
disassocation||disassociation
equivelant||equivalent
equivilant||equivalent
eror||error
+errorr||error
estbalishment||establishment
etsablishment||establishment
etsbalishment||establishment
faillure||failure
failue||failure
failuer||failure
+failng||failing
faireness||fairness
falied||failed
faliure||failure
fetaures||features
fileystem||filesystem
fimware||firmware
+firware||firmware
finanize||finalize
findn||find
finilizes||finalizes
followign||following
followings||following
follwing||following
+fonud||found
forseeable||foreseeable
forse||force
fortan||fortran
grahical||graphical
grahpical||graphical
grapic||graphic
+grranted||granted
guage||gauge
guarenteed||guaranteed
guarentee||guarantee
harware||hardware
heirarchically||hierarchically
helpfull||helpful
+hybernate||hibernate
hierachy||hierarchy
hierarchie||hierarchy
howver||however
implementaiton||implementation
implementated||implemented
implemention||implementation
+implementd||implemented
implemetation||implementation
implemntation||implementation
implentation||implementation
implmentation||implementation
implmenting||implementing
+incative||inactive
incomming||incoming
incompatabilities||incompatibilities
incompatable||incompatible
inconsistant||inconsistent
increas||increase
+incremeted||incremented
incrment||increment
indendation||indentation
indended||intended
intermittant||intermittent
internel||internal
interoprability||interoperability
+interuupt||interrupt
interrface||interface
interrrupt||interrupt
interrup||interrupt
intterrupt||interrupt
intuative||intuitive
invaid||invalid
+invald||invalid
invalde||invalid
invalide||invalid
+invalidiate||invalidate
invalud||invalid
invididual||individual
invokation||invocation
mispelled||misspelled
mispelt||misspelt
mising||missing
+mismactch||mismatch
missmanaged||mismanaged
missmatch||mismatch
miximum||maximum
multple||multiple
mumber||number
muticast||multicast
+mutilcast||multicast
mutiple||multiple
mutli||multi
nams||names
positon||position
possibilites||possibilities
powerfull||powerful
+preample||preamble
preapre||prepare
preceeded||preceded
preceeding||preceding
subdirectoires||subdirectories
suble||subtle
substract||subtract
+submition||submission
succesfully||successfully
succesful||successful
successed||succeeded
suppported||supported
suppport||support
supress||suppress
+surpressed||suppressed
surpresses||suppresses
susbsystem||subsystem
suspeneded||suspended
swithcing||switching
swithed||switched
swithing||switching
+swtich||switch
symetric||symmetric
synax||syntax
synchonized||synchronized
thier||their
threds||threads
threshhold||threshold
+thresold||threshold
throught||through
+troughput||throughput
thses||these
tiggered||triggered
tipically||typically
torerable||tolerable
tramsmitted||transmitted
tramsmit||transmit
+tranasction||transaction
tranfer||transfer
transciever||transceiver
transferd||transferred
treshold||threshold
trigerring||triggering
trun||turn
+tunning||tuning
ture||true
tyep||type
udpate||update
vitual||virtual
wakeus||wakeups
wating||waiting
+wiat||wait
wether||whether
whataver||whatever
whcih||which
#
# Generated include files
#
+net_names.h
capability_names.h
rlim_names.h
apparmor-y := apparmorfs.o audit.o capability.o context.o ipc.o lib.o match.o \
path.o domain.o policy.o policy_unpack.o procattr.o lsm.o \
- resource.o secid.o file.o policy_ns.o label.o
+ resource.o secid.o file.o policy_ns.o label.o mount.o net.o
apparmor-$(CONFIG_SECURITY_APPARMOR_HASH) += crypto.o
-clean-files := capability_names.h rlim_names.h
+clean-files := capability_names.h rlim_names.h net_names.h
+# Build a lower case string table of address family names
+# Transform lines from
+# #define AF_LOCAL 1 /* POSIX name for AF_UNIX */
+# #define AF_INET 2 /* Internet IP Protocol */
+# to
+# [1] = "local",
+# [2] = "inet",
+#
+# and build the securityfs entries for the mapping.
+# Transforms lines from
+# #define AF_INET 2 /* Internet IP Protocol */
+# to
+# #define AA_SFS_AF_MASK "local inet"
+quiet_cmd_make-af = GEN $@
+cmd_make-af = echo "static const char *address_family_names[] = {" > $@ ;\
+ sed $< >>$@ -r -n -e "/AF_MAX/d" -e "/AF_LOCAL/d" -e "/AF_ROUTE/d" -e \
+ 's/^\#define[ \t]+AF_([A-Z0-9_]+)[ \t]+([0-9]+)(.*)/[\2] = "\L\1",/p';\
+ echo "};" >> $@ ;\
+ printf '%s' '\#define AA_SFS_AF_MASK "' >> $@ ;\
+ sed -r -n -e "/AF_MAX/d" -e "/AF_LOCAL/d" -e "/AF_ROUTE/d" -e \
+ 's/^\#define[ \t]+AF_([A-Z0-9_]+)[ \t]+([0-9]+)(.*)/\L\1/p'\
+ $< | tr '\n' ' ' | sed -e 's/ $$/"\n/' >> $@
+
+# Build a lower case string table of sock type names
+# Transform lines from
+# SOCK_STREAM = 1,
+# to
+# [1] = "stream",
+quiet_cmd_make-sock = GEN $@
+cmd_make-sock = echo "static const char *sock_type_names[] = {" >> $@ ;\
+ sed $^ >>$@ -r -n \
+ -e 's/^\tSOCK_([A-Z0-9_]+)[\t]+=[ \t]+([0-9]+)(.*)/[\2] = "\L\1",/p';\
+ echo "};" >> $@
# Build a lower case string table of capability names
# Transforms lines from
tr '\n' ' ' | sed -e 's/ $$/"\n/' >> $@
$(obj)/capability.o : $(obj)/capability_names.h
+$(obj)/net.o : $(obj)/net_names.h
$(obj)/resource.o : $(obj)/rlim_names.h
$(obj)/capability_names.h : $(srctree)/include/uapi/linux/capability.h \
$(src)/Makefile
$(obj)/rlim_names.h : $(srctree)/include/uapi/asm-generic/resource.h \
$(src)/Makefile
$(call cmd,make-rlim)
+$(obj)/net_names.h : $(srctree)/include/linux/socket.h \
+ $(srctree)/include/linux/net.h \
+ $(src)/Makefile
+ $(call cmd,make-af)
+ $(call cmd,make-sock)
#include "include/audit.h"
#include "include/context.h"
#include "include/crypto.h"
+#include "include/ipc.h"
#include "include/policy_ns.h"
#include "include/label.h"
#include "include/policy.h"
inode_lock(dir);
dentry = lookup_one_len(name, parent, strlen(name));
- if (IS_ERR(dentry))
+ if (IS_ERR(dentry)) {
+ error = PTR_ERR(dentry);
goto fail_lock;
+ }
if (d_really_is_positive(dentry)) {
error = -EEXIST;
{
int i;
+ AA_BUG(!old);
+ AA_BUG(!new);
+ AA_BUG(!mutex_is_locked(&profiles_ns(old)->lock));
+
for (i = 0; i < AAFS_PROF_SIZEOF; i++) {
new->dents[i] = old->dents[i];
if (new->dents[i])
struct dentry *dent = NULL, *dir;
int error;
+ AA_BUG(!profile);
+ AA_BUG(!mutex_is_locked(&profiles_ns(profile)->lock));
+
if (!parent) {
struct aa_profile *p;
p = aa_deref_parent(profile);
if (!ns)
return;
+ AA_BUG(!mutex_is_locked(&ns->lock));
list_for_each_entry(child, &ns->base.profiles, base.list)
__aafs_profile_rmdir(child);
{
struct aa_ns *parent, *next;
+ AA_BUG(!root);
+ AA_BUG(!ns);
+ AA_BUG(ns != root && !mutex_is_locked(&ns->parent->lock));
+
/* is next namespace a child */
if (!list_empty(&ns->sub_ns)) {
next = list_first_entry(&ns->sub_ns, typeof(*ns), base.list);
static struct aa_profile *__first_profile(struct aa_ns *root,
struct aa_ns *ns)
{
+ AA_BUG(!root);
+ AA_BUG(ns && !mutex_is_locked(&ns->lock));
+
for (; ns; ns = __next_ns(root, ns)) {
if (!list_empty(&ns->base.profiles))
return list_first_entry(&ns->base.profiles,
struct aa_profile *parent;
struct aa_ns *ns = p->ns;
+ AA_BUG(!mutex_is_locked(&profiles_ns(p)->lock));
+
/* is next profile a child */
if (!list_empty(&p->base.profiles))
return list_first_entry(&p->base.profiles, typeof(*p),
{ }
};
+static struct aa_sfs_entry aa_sfs_entry_signal[] = {
+ AA_SFS_FILE_STRING("mask", AA_SFS_SIG_MASK),
+ { }
+};
+
static struct aa_sfs_entry aa_sfs_entry_domain[] = {
AA_SFS_FILE_BOOLEAN("change_hat", 1),
AA_SFS_FILE_BOOLEAN("change_hatv", 1),
{ }
};
+static struct aa_sfs_entry aa_sfs_entry_mount[] = {
+ AA_SFS_FILE_STRING("mask", "mount umount pivot_root"),
+ { }
+};
+
static struct aa_sfs_entry aa_sfs_entry_ns[] = {
AA_SFS_FILE_BOOLEAN("profile", 1),
- AA_SFS_FILE_BOOLEAN("pivot_root", 1),
+ AA_SFS_FILE_BOOLEAN("pivot_root", 0),
{ }
};
AA_SFS_DIR("policy", aa_sfs_entry_policy),
AA_SFS_DIR("domain", aa_sfs_entry_domain),
AA_SFS_DIR("file", aa_sfs_entry_file),
+ AA_SFS_DIR("network", aa_sfs_entry_network),
+ AA_SFS_DIR("mount", aa_sfs_entry_mount),
AA_SFS_DIR("namespaces", aa_sfs_entry_ns),
AA_SFS_FILE_U64("capability", VFS_CAP_FLAGS_MASK),
AA_SFS_DIR("rlimit", aa_sfs_entry_rlimit),
AA_SFS_DIR("caps", aa_sfs_entry_caps),
AA_SFS_DIR("ptrace", aa_sfs_entry_ptrace),
+ AA_SFS_DIR("signal", aa_sfs_entry_signal),
AA_SFS_DIR("query", aa_sfs_entry_query),
{ }
};
static struct aa_sfs_entry aa_sfs_entry_apparmor[] = {
- AA_SFS_FILE_FOPS(".access", 0640, &aa_sfs_access),
+ AA_SFS_FILE_FOPS(".access", 0666, &aa_sfs_access),
AA_SFS_FILE_FOPS(".stacked", 0444, &seq_ns_stacked_fops),
AA_SFS_FILE_FOPS(".ns_stacked", 0444, &seq_ns_nsstacked_fops),
- AA_SFS_FILE_FOPS(".ns_level", 0666, &seq_ns_level_fops),
- AA_SFS_FILE_FOPS(".ns_name", 0640, &seq_ns_name_fops),
- AA_SFS_FILE_FOPS("profiles", 0440, &aa_sfs_profiles_fops),
+ AA_SFS_FILE_FOPS(".ns_level", 0444, &seq_ns_level_fops),
+ AA_SFS_FILE_FOPS(".ns_name", 0444, &seq_ns_name_fops),
+ AA_SFS_FILE_FOPS("profiles", 0444, &aa_sfs_profiles_fops),
AA_SFS_DIR("features", aa_sfs_entry_features),
{ }
};
*
* Returns: refcounted label, or NULL on failure (MAYBE NULL)
*/
-static struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
- const char **name)
+struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ const char **name)
{
struct aa_label *label = NULL;
u32 xtype = xindex & AA_X_TYPE_MASK;
#include "include/context.h"
#include "include/file.h"
#include "include/match.h"
+#include "include/net.h"
#include "include/path.h"
#include "include/policy.h"
#include "include/label.h"
return error;
}
+static int __file_sock_perm(const char *op, struct aa_label *label,
+ struct aa_label *flabel, struct file *file,
+ u32 request, u32 denied)
+{
+ struct socket *sock = (struct socket *) file->private_data;
+ int error;
+
+ AA_BUG(!sock);
+
+ /* revalidation due to label out of date. No revocation at this time */
+ if (!denied && aa_label_is_subset(flabel, label))
+ return 0;
+
+ /* TODO: improve to skip profiles cached in flabel */
+ error = aa_sock_file_perm(label, op, request, sock);
+ if (denied) {
+ /* TODO: improve to skip profiles checked above */
+		/* check every profile in the file label so it is cached */
+ last_error(error, aa_sock_file_perm(flabel, op, request, sock));
+ }
+ if (!error)
+ update_file_ctx(file_ctx(file), label, request);
+
+ return error;
+}
+
/**
* aa_file_perm - do permission revalidation check & audit for @file
* @op: operation being checked
error = __file_path_perm(op, label, flabel, file, request,
denied);
+ else if (S_ISSOCK(file_inode(file)->i_mode))
+ error = __file_sock_perm(op, label, flabel, file, request,
+ denied);
done:
rcu_read_unlock();
#define AA_CLASS_NET 4
#define AA_CLASS_RLIMITS 5
#define AA_CLASS_DOMAIN 6
+#define AA_CLASS_MOUNT 7
#define AA_CLASS_PTRACE 9
+#define AA_CLASS_SIGNAL 10
#define AA_CLASS_LABEL 16
#define AA_CLASS_LAST AA_CLASS_LABEL
#define OP_FMPROT "file_mprotect"
#define OP_INHERIT "file_inherit"
+#define OP_PIVOTROOT "pivotroot"
+#define OP_MOUNT "mount"
+#define OP_UMOUNT "umount"
+
#define OP_CREATE "create"
#define OP_POST_CREATE "post_create"
#define OP_BIND "bind"
#define OP_SHUTDOWN "socket_shutdown"
#define OP_PTRACE "ptrace"
+#define OP_SIGNAL "signal"
#define OP_EXEC "exec"
/* these entries require a custom callback fn */
struct {
struct aa_label *peer;
- struct {
- const char *target;
- kuid_t ouid;
- } fs;
+ union {
+ struct {
+ kuid_t ouid;
+ const char *target;
+ } fs;
+ struct {
+ int type, protocol;
+ struct sock *peer_sk;
+ void *addr;
+ int addrlen;
+ } net;
+ int signal;
+ struct {
+ int rlim;
+ unsigned long max;
+ } rlim;
+ };
};
struct {
- const char *name;
- long pos;
+ struct aa_profile *profile;
const char *ns;
+ long pos;
} iface;
struct {
- int rlim;
- unsigned long max;
- } rlim;
+ const char *src_name;
+ const char *type;
+ const char *trans;
+ const char *data;
+ unsigned long flags;
+ } mnt;
};
};
#include <linux/binfmts.h>
#include <linux/types.h>
+#include "label.h"
+
#ifndef __AA_DOMAIN_H
#define __AA_DOMAIN_H
#define AA_CHANGE_ONEXEC 4
#define AA_CHANGE_STACK 8
+struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ const char **name);
+
int apparmor_bprm_set_creds(struct linux_binprm *bprm);
void aa_free_domain_entries(struct aa_domain *domain);
#define AA_PTRACE_PERM_MASK (AA_PTRACE_READ | AA_PTRACE_TRACE | \
AA_MAY_BE_READ | AA_MAY_BE_TRACED)
+#define AA_SIGNAL_PERM_MASK (MAY_READ | MAY_WRITE)
+
+#define AA_SFS_SIG_MASK "hup int quit ill trap abrt bus fpe kill usr1 " \
+ "segv usr2 pipe alrm term stkflt chld cont stop stp ttin ttou urg " \
+ "xcpu xfsz vtalrm prof winch io pwr sys emt lost"
int aa_may_ptrace(struct aa_label *tracer, struct aa_label *tracee,
u32 request);
+int aa_may_signal(struct aa_label *sender, struct aa_label *target, int sig);
#endif /* __AA_IPC_H */
#define FLAG_SHOW_MODE 1
#define FLAG_VIEW_SUBNS 2
#define FLAG_HIDDEN_UNCONFINED 4
+#define FLAG_ABS_ROOT 8
int aa_label_snxprint(char *str, size_t size, struct aa_ns *view,
struct aa_label *label, int flags);
int aa_label_asxprint(char **strp, struct aa_ns *ns, struct aa_label *label,
--- /dev/null
+/*
+ * AppArmor security module
+ *
+ * This file contains AppArmor mount mediation function definitions.
+ *
+ * Copyright 2017 Canonical Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ */
+
+#ifndef __AA_MOUNT_H
+#define __AA_MOUNT_H
+
+#include <linux/fs.h>
+#include <linux/path.h>
+
+#include "domain.h"
+#include "policy.h"
+
+/* mount perms */
+#define AA_MAY_PIVOTROOT 0x01
+#define AA_MAY_MOUNT 0x02
+#define AA_MAY_UMOUNT 0x04
+#define AA_AUDIT_DATA 0x40
+#define AA_MNT_CONT_MATCH 0x40
+
+#define AA_MS_IGNORE_MASK (MS_KERNMOUNT | MS_NOSEC | MS_ACTIVE | MS_BORN)
+
+int aa_remount(struct aa_label *label, const struct path *path,
+ unsigned long flags, void *data);
+
+int aa_bind_mount(struct aa_label *label, const struct path *path,
+ const char *old_name, unsigned long flags);
+
+
+int aa_mount_change_type(struct aa_label *label, const struct path *path,
+ unsigned long flags);
+
+int aa_move_mount(struct aa_label *label, const struct path *path,
+ const char *old_name);
+
+int aa_new_mount(struct aa_label *label, const char *dev_name,
+ const struct path *path, const char *type, unsigned long flags,
+ void *data);
+
+int aa_umount(struct aa_label *label, struct vfsmount *mnt, int flags);
+
+int aa_pivotroot(struct aa_label *label, const struct path *old_path,
+ const struct path *new_path);
+
+#endif /* __AA_MOUNT_H */
--- /dev/null
+/*
+ * AppArmor security module
+ *
+ * This file contains AppArmor network mediation definitions.
+ *
+ * Copyright (C) 1998-2008 Novell/SUSE
+ * Copyright 2009-2017 Canonical Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ */
+
+#ifndef __AA_NET_H
+#define __AA_NET_H
+
+#include <net/sock.h>
+#include <linux/path.h>
+
+#include "apparmorfs.h"
+#include "label.h"
+#include "perms.h"
+#include "policy.h"
+
+#define AA_MAY_SEND AA_MAY_WRITE
+#define AA_MAY_RECEIVE AA_MAY_READ
+
+#define AA_MAY_SHUTDOWN AA_MAY_DELETE
+
+#define AA_MAY_CONNECT AA_MAY_OPEN
+#define AA_MAY_ACCEPT 0x00100000
+
+#define AA_MAY_BIND 0x00200000
+#define AA_MAY_LISTEN 0x00400000
+
+#define AA_MAY_SETOPT 0x01000000
+#define AA_MAY_GETOPT 0x02000000
+
+#define NET_PERMS_MASK (AA_MAY_SEND | AA_MAY_RECEIVE | AA_MAY_CREATE | \
+ AA_MAY_SHUTDOWN | AA_MAY_BIND | AA_MAY_LISTEN | \
+ AA_MAY_CONNECT | AA_MAY_ACCEPT | AA_MAY_SETATTR | \
+ AA_MAY_GETATTR | AA_MAY_SETOPT | AA_MAY_GETOPT)
+
+#define NET_FS_PERMS (AA_MAY_SEND | AA_MAY_RECEIVE | AA_MAY_CREATE | \
+ AA_MAY_SHUTDOWN | AA_MAY_CONNECT | AA_MAY_RENAME |\
+ AA_MAY_SETATTR | AA_MAY_GETATTR | AA_MAY_CHMOD | \
+ AA_MAY_CHOWN | AA_MAY_CHGRP | AA_MAY_LOCK | \
+ AA_MAY_MPROT)
+
+#define NET_PEER_MASK (AA_MAY_SEND | AA_MAY_RECEIVE | AA_MAY_CONNECT | \
+ AA_MAY_ACCEPT)
+struct aa_sk_ctx {
+ struct aa_label *label;
+ struct aa_label *peer;
+ struct path path;
+};
+
+#define SK_CTX(X) ((X)->sk_security)
+#define SOCK_ctx(X) SOCK_INODE(X)->i_security
+#define DEFINE_AUDIT_NET(NAME, OP, SK, F, T, P) \
+ struct lsm_network_audit NAME ## _net = { .sk = (SK), \
+ .family = (F)}; \
+ DEFINE_AUDIT_DATA(NAME, \
+ ((SK) && (F) != AF_UNIX) ? LSM_AUDIT_DATA_NET : \
+ LSM_AUDIT_DATA_NONE, \
+ OP); \
+ NAME.u.net = &(NAME ## _net); \
+ aad(&NAME)->net.type = (T); \
+ aad(&NAME)->net.protocol = (P)
+
+#define DEFINE_AUDIT_SK(NAME, OP, SK) \
+ DEFINE_AUDIT_NET(NAME, OP, SK, (SK)->sk_family, (SK)->sk_type, \
+ (SK)->sk_protocol)
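A hedged sketch of how these macros are intended to be used (mirroring aa_label_sk_perm() later in this patch; the surrounding names are illustrative):

/* declares a common_audit_data 'sa' plus an lsm_network_audit 'sa_net',
 * wires sa.u.net to &sa_net, and records sk's family, type and protocol */
DEFINE_AUDIT_SK(sa, OP_BIND, sk);
int error = aa_profile_af_sk_perm(profile, &sa, AA_MAY_BIND, sk);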
+
+/* struct aa_net - network confinement data
+ * @allow: basic network families permissions
+ * @audit: which network permissions to force audit
+ * @quiet: which network permissions to quiet rejects
+ */
+struct aa_net {
+ u16 allow[AF_MAX];
+ u16 audit[AF_MAX];
+ u16 quiet[AF_MAX];
+};
+
+
+extern struct aa_sfs_entry aa_sfs_entry_network[];
+
+void audit_net_cb(struct audit_buffer *ab, void *va);
+int aa_profile_af_perm(struct aa_profile *profile, struct common_audit_data *sa,
+ u32 request, u16 family, int type);
+int aa_af_perm(struct aa_label *label, const char *op, u32 request, u16 family,
+ int type, int protocol);
+static inline int aa_profile_af_sk_perm(struct aa_profile *profile,
+ struct common_audit_data *sa,
+ u32 request,
+ struct sock *sk)
+{
+ return aa_profile_af_perm(profile, sa, request, sk->sk_family,
+ sk->sk_type);
+}
+int aa_sk_perm(const char *op, u32 request, struct sock *sk);
+
+int aa_sock_file_perm(struct aa_label *label, const char *op, u32 request,
+ struct socket *sock);
+
+
+static inline void aa_free_net_rules(struct aa_net *new)
+{
+ /* NOP */
+}
+
+#endif /* __AA_NET_H */
void aa_perm_mask_to_str(char *str, const char *chrs, u32 mask);
-void aa_audit_perm_names(struct audit_buffer *ab, const char **names, u32 mask);
+void aa_audit_perm_names(struct audit_buffer *ab, const char * const *names,
+ u32 mask);
void aa_audit_perm_mask(struct audit_buffer *ab, u32 mask, const char *chrs,
- u32 chrsmask, const char **names, u32 namesmask);
+ u32 chrsmask, const char * const *names, u32 namesmask);
void aa_apply_modes_to_perms(struct aa_profile *profile,
struct aa_perms *perms);
void aa_compute_perms(struct aa_dfa *dfa, unsigned int state,
#include "file.h"
#include "lib.h"
#include "label.h"
+#include "net.h"
#include "perms.h"
#include "resource.h"
* @policy: general match rules governing policy
* @file: The set of rules governing basic file access and domain transitions
* @caps: capabilities for the profile
+ * @net: network controls for the profile
* @rlimits: rlimits for the profile
*
* @dents: dentries for the profiles file entries in apparmorfs
struct aa_policydb policy;
struct aa_file_rules file;
struct aa_caps caps;
+ struct aa_net net;
struct aa_rlimit rlimits;
struct aa_loaddata *rawdata;
return 0;
}
+static inline unsigned int PROFILE_MEDIATES_AF(struct aa_profile *profile,
+ u16 AF) {
+ unsigned int state = PROFILE_MEDIATES(profile, AA_CLASS_NET);
+ u16 be_af = cpu_to_be16(AF);
+
+ if (!state)
+ return 0;
+ return aa_dfa_match_len(profile->policy.dfa, state, (char *) &be_af, 2);
+}
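To make the two-byte match concrete: the family is fed to the DFA in network byte order, so AF_INET (2) is matched as the bytes 0x00 0x02. A hand-unrolled equivalent, for illustration only:

/* illustrative: by-hand equivalent of PROFILE_MEDIATES_AF(profile, AF_INET),
 * minus the !state early return */
u16 be_af = cpu_to_be16(AF_INET);	/* 0x0002 -> bytes 00 02 */
unsigned int state = aa_dfa_match_len(profile->policy.dfa,
				      PROFILE_MEDIATES(profile, AA_CLASS_NET),
				      (char *) &be_af, 2);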
+
/**
* aa_get_profile - increment refcount on profile @p
* @p: profile (MAYBE NULL)
--- /dev/null
+#include <linux/signal.h>
+
+#define SIGUNKNOWN 0
+#define MAXMAPPED_SIG 35
+/* provide a mapping of arch signal to internal signal # for mediation
+ * signals that are always an alias (SIGCLD for SIGCHLD, SIGPOLL for SIGIO)
+ * map to the same entry; those that may or may not exist get a separate entry
+ */
+static const int sig_map[MAXMAPPED_SIG] = {
+ [0] = MAXMAPPED_SIG, /* existence test */
+ [SIGHUP] = 1,
+ [SIGINT] = 2,
+ [SIGQUIT] = 3,
+ [SIGILL] = 4,
+ [SIGTRAP] = 5, /* -, 5, - */
+ [SIGABRT] = 6, /* SIGIOT: -, 6, - */
+ [SIGBUS] = 7, /* 10, 7, 10 */
+ [SIGFPE] = 8,
+ [SIGKILL] = 9,
+ [SIGUSR1] = 10, /* 30, 10, 16 */
+ [SIGSEGV] = 11,
+ [SIGUSR2] = 12, /* 31, 12, 17 */
+ [SIGPIPE] = 13,
+ [SIGALRM] = 14,
+ [SIGTERM] = 15,
+#ifdef SIGSTKFLT
+ [SIGSTKFLT] = 16, /* -, 16, - */
+#endif
+ [SIGCHLD] = 17, /* 20, 17, 18. SIGCHLD -, -, 18 */
+ [SIGCONT] = 18, /* 19, 18, 25 */
+ [SIGSTOP] = 19, /* 17, 19, 23 */
+ [SIGTSTP] = 20, /* 18, 20, 24 */
+ [SIGTTIN] = 21, /* 21, 21, 26 */
+ [SIGTTOU] = 22, /* 22, 22, 27 */
+ [SIGURG] = 23, /* 16, 23, 21 */
+ [SIGXCPU] = 24, /* 24, 24, 30 */
+ [SIGXFSZ] = 25, /* 25, 25, 31 */
+ [SIGVTALRM] = 26, /* 26, 26, 28 */
+ [SIGPROF] = 27, /* 27, 27, 29 */
+ [SIGWINCH] = 28, /* 28, 28, 20 */
+ [SIGIO] = 29, /* SIGPOLL: 23, 29, 22 */
+ [SIGPWR] = 30, /* 29, 30, 19. SIGINFO 29, -, - */
+#ifdef SIGSYS
+ [SIGSYS] = 31, /* 12, 31, 12. often SIG LOST/UNUSED */
+#endif
+#ifdef SIGEMT
+ [SIGEMT] = 32, /* 7, - , 7 */
+#endif
+#if defined(SIGLOST) && SIGPWR != SIGLOST /* sparc */
+ [SIGLOST] = 33, /* unused on Linux */
+#endif
+#if defined(SIGUNUSED) && \
+ defined(SIGLOST) && defined(SIGSYS) && SIGLOST != SIGSYS
+ [SIGUNUSED] = 34, /* -, 31, - */
+#endif
+};
+
+/* this table is ordered by the mapped signal numbers from sig_map[sig] */
+static const char *const sig_names[MAXMAPPED_SIG + 1] = {
+ "unknown",
+ "hup",
+ "int",
+ "quit",
+ "ill",
+ "trap",
+ "abrt",
+ "bus",
+ "fpe",
+ "kill",
+ "usr1",
+ "segv",
+ "usr2",
+ "pipe",
+ "alrm",
+ "term",
+ "stkflt",
+ "chld",
+ "cont",
+ "stop",
+ "stp",
+ "ttin",
+ "ttou",
+ "urg",
+ "xcpu",
+ "xfsz",
+ "vtalrm",
+ "prof",
+ "winch",
+ "io",
+ "pwr",
+ "sys",
+ "emt",
+ "lost",
+ "unused",
+
+ "exists", /* always last existence test mapped to MAXMAPPED_SIG */
+};
+
#include "include/context.h"
#include "include/policy.h"
#include "include/ipc.h"
+#include "include/sig_names.h"
/**
* audit_ptrace_mask - convert mask to permission string
}
+static inline int map_signal_num(int sig)
+{
+ if (sig > SIGRTMAX)
+ return SIGUNKNOWN;
+ else if (sig >= SIGRTMIN)
+ return sig - SIGRTMIN + 128; /* rt sigs mapped to 128 */
+ else if (sig <= MAXMAPPED_SIG)
+ return sig_map[sig];
+ return SIGUNKNOWN;
+}
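Worked examples of the mapping above, assuming the usual arch signal values (shown for illustration):

/* map_signal_num(SIGKILL)    ->   9  (via sig_map[])           */
/* map_signal_num(SIGRTMIN)   -> 128  (rt signals start at 128) */
/* map_signal_num(SIGRTMIN+3) -> 131                            */
/* map_signal_num(SIGRTMAX+1) -> SIGUNKNOWN                     */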
+
+/**
+ * audit_signal_mask - convert mask to permission string
+ * @ab: audit buffer to write the string to (NOT NULL)
+ * @mask: permission mask to convert
+ */
+static void audit_signal_mask(struct audit_buffer *ab, u32 mask)
+{
+ if (mask & MAY_READ)
+ audit_log_string(ab, "receive");
+ if (mask & MAY_WRITE)
+ audit_log_string(ab, "send");
+}
+
+/**
+ * audit_signal_cb - call back for signal specific audit fields
+ * @ab: audit_buffer (NOT NULL)
+ * @va: audit struct to audit values of (NOT NULL)
+ */
+static void audit_signal_cb(struct audit_buffer *ab, void *va)
+{
+ struct common_audit_data *sa = va;
+
+ if (aad(sa)->request & AA_SIGNAL_PERM_MASK) {
+ audit_log_format(ab, " requested_mask=");
+ audit_signal_mask(ab, aad(sa)->request);
+ if (aad(sa)->denied & AA_SIGNAL_PERM_MASK) {
+ audit_log_format(ab, " denied_mask=");
+ audit_signal_mask(ab, aad(sa)->denied);
+ }
+ }
+ if (aad(sa)->signal <= MAXMAPPED_SIG)
+ audit_log_format(ab, " signal=%s", sig_names[aad(sa)->signal]);
+ else
+ audit_log_format(ab, " signal=rtmin+%d",
+ aad(sa)->signal - 128);
+ audit_log_format(ab, " peer=");
+ aa_label_xaudit(ab, labels_ns(aad(sa)->label), aad(sa)->peer,
+ FLAGS_NONE, GFP_ATOMIC);
+}
+
+/* TODO: update to handle compound name&name2, conditionals */
+static void profile_match_signal(struct aa_profile *profile, const char *label,
+ int signal, struct aa_perms *perms)
+{
+ unsigned int state;
+
+ /* TODO: secondary cache check <profile, profile, perm> */
+ state = aa_dfa_next(profile->policy.dfa,
+ profile->policy.start[AA_CLASS_SIGNAL],
+ signal);
+ state = aa_dfa_match(profile->policy.dfa, state, label);
+ aa_compute_perms(profile->policy.dfa, state, perms);
+}
+
+static int profile_signal_perm(struct aa_profile *profile,
+ struct aa_profile *peer, u32 request,
+ struct common_audit_data *sa)
+{
+ struct aa_perms perms;
+
+ if (profile_unconfined(profile) ||
+ !PROFILE_MEDIATES(profile, AA_CLASS_SIGNAL))
+ return 0;
+
+ aad(sa)->peer = &peer->label;
+ profile_match_signal(profile, peer->base.hname, aad(sa)->signal,
+ &perms);
+ aa_apply_modes_to_perms(profile, &perms);
+ return aa_check_perms(profile, &perms, request, sa, audit_signal_cb);
+}
+
+static int aa_signal_cross_perm(struct aa_profile *sender,
+ struct aa_profile *target,
+ struct common_audit_data *sa)
+{
+ return xcheck(profile_signal_perm(sender, target, MAY_WRITE, sa),
+ profile_signal_perm(target, sender, MAY_READ, sa));
+}
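As a reading aid, the cross check pairs the sender's "send" permission against the target's "receive" permission:

/* sender needs MAY_WRITE ("send") to the target's label,
 * target needs MAY_READ ("receive") from the sender's label;
 * xcheck() combines both results, so an error from either side denies */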
+
+int aa_may_signal(struct aa_label *sender, struct aa_label *target, int sig)
+{
+ DEFINE_AUDIT_DATA(sa, LSM_AUDIT_DATA_NONE, OP_SIGNAL);
+
+ aad(&sa)->signal = map_signal_num(sig);
+ return xcheck_labels_profiles(sender, target, aa_signal_cross_perm,
+ &sa);
+}
	/* p->label will not be updated any more as p is dead */
aa_put_label(rcu_dereference_protected(proxy->label, true));
memset(proxy, 0, sizeof(*proxy));
- proxy->label = (struct aa_label *) PROXY_POISON;
+ RCU_INIT_POINTER(proxy->label, (struct aa_label *)PROXY_POISON);
kfree(proxy);
}
}
* cached label name is present and visible
 * @label->hname only exists if label is namespace hierarchical
*/
-static inline bool use_label_hname(struct aa_ns *ns, struct aa_label *label)
+static inline bool use_label_hname(struct aa_ns *ns, struct aa_label *label,
+ int flags)
{
- if (label->hname && labels_ns(label) == ns)
+ if (label->hname && (!ns || labels_ns(label) == ns) &&
+ !(flags & ~FLAG_SHOW_MODE))
return true;
return false;
view = profiles_ns(profile);
if (view != profile->ns &&
- (!prev_ns || (prev_ns && *prev_ns != profile->ns))) {
+ (!prev_ns || (*prev_ns != profile->ns))) {
if (prev_ns)
*prev_ns = profile->ns;
ns_name = aa_ns_name(view, profile->ns,
AA_BUG(!str && size != 0);
AA_BUG(!label);
- if (!ns)
+ if (flags & FLAG_ABS_ROOT) {
+ ns = root_ns;
+ len = snprintf(str, size, "=");
+ update_for_len(total, len, size, str);
+ } else if (!ns) {
ns = labels_ns(label);
+ }
label_for_each(i, label, profile) {
if (aa_ns_visible(ns, profile->ns, flags & FLAG_VIEW_SUBNS)) {
AA_BUG(!ab);
AA_BUG(!label);
- if (!ns)
- ns = labels_ns(label);
-
- if (!use_label_hname(ns, label) || display_mode(ns, label, flags)) {
+ if (!use_label_hname(ns, label, flags) ||
+ display_mode(ns, label, flags)) {
len = aa_label_asxprint(&name, ns, label, flags, gfp);
if (len == -1) {
AA_DEBUG("label print error");
AA_BUG(!f);
AA_BUG(!label);
- if (!ns)
- ns = labels_ns(label);
-
- if (!use_label_hname(ns, label)) {
+ if (!use_label_hname(ns, label, flags)) {
char *str;
int len;
{
AA_BUG(!label);
- if (!ns)
- ns = labels_ns(label);
-
- if (!use_label_hname(ns, label)) {
+ if (!use_label_hname(ns, label, flags)) {
char *str;
int len;
if (*str == '&')
str++;
}
+ if (*str == '=')
+ base = &root_ns->unconfined->label;
+
error = vec_setup(profile, vec, len, gfp);
if (error)
return ERR_PTR(error);
*str = '\0';
}
-void aa_audit_perm_names(struct audit_buffer *ab, const char **names, u32 mask)
+void aa_audit_perm_names(struct audit_buffer *ab, const char * const *names,
+ u32 mask)
{
const char *fmt = "%s";
unsigned int i, perm = 1;
}
void aa_audit_perm_mask(struct audit_buffer *ab, u32 mask, const char *chrs,
- u32 chrsmask, const char **names, u32 namesmask)
+ u32 chrsmask, const char * const *names, u32 namesmask)
{
char str[33];
#include "include/context.h"
#include "include/file.h"
#include "include/ipc.h"
+#include "include/net.h"
#include "include/path.h"
#include "include/label.h"
#include "include/policy.h"
#include "include/policy_ns.h"
#include "include/procattr.h"
+#include "include/mount.h"
/* Flag indicating whether initialization completed */
int apparmor_initialized;
!(vma->vm_flags & VM_SHARED) ? MAP_PRIVATE : 0);
}
+static int apparmor_sb_mount(const char *dev_name, const struct path *path,
+ const char *type, unsigned long flags, void *data)
+{
+ struct aa_label *label;
+ int error = 0;
+
+ /* Discard magic */
+ if ((flags & MS_MGC_MSK) == MS_MGC_VAL)
+ flags &= ~MS_MGC_MSK;
+
+ flags &= ~AA_MS_IGNORE_MASK;
+
+ label = __begin_current_label_crit_section();
+ if (!unconfined(label)) {
+ if (flags & MS_REMOUNT)
+ error = aa_remount(label, path, flags, data);
+ else if (flags & MS_BIND)
+ error = aa_bind_mount(label, path, dev_name, flags);
+ else if (flags & (MS_SHARED | MS_PRIVATE | MS_SLAVE |
+ MS_UNBINDABLE))
+ error = aa_mount_change_type(label, path, flags);
+ else if (flags & MS_MOVE)
+ error = aa_move_mount(label, path, dev_name);
+ else
+ error = aa_new_mount(label, dev_name, path, type,
+ flags, data);
+ }
+ __end_current_label_crit_section(label);
+
+ return error;
+}
+
+static int apparmor_sb_umount(struct vfsmount *mnt, int flags)
+{
+ struct aa_label *label;
+ int error = 0;
+
+ label = __begin_current_label_crit_section();
+ if (!unconfined(label))
+ error = aa_umount(label, mnt, flags);
+ __end_current_label_crit_section(label);
+
+ return error;
+}
+
+static int apparmor_sb_pivotroot(const struct path *old_path,
+ const struct path *new_path)
+{
+ struct aa_label *label;
+ int error = 0;
+
+ label = aa_get_current_label();
+ if (!unconfined(label))
+ error = aa_pivotroot(label, old_path, new_path);
+ aa_put_label(label);
+
+ return error;
+}
+
static int apparmor_getprocattr(struct task_struct *task, char *name,
char **value)
{
return error;
}
+static int apparmor_task_kill(struct task_struct *target, struct siginfo *info,
+ int sig, u32 secid)
+{
+ struct aa_label *cl, *tl;
+ int error;
+
+ if (secid)
+ /* TODO: after secid to label mapping is done.
+ * Dealing with USB IO specific behavior
+ */
+ return 0;
+ cl = __begin_current_label_crit_section();
+ tl = aa_get_task_label(target);
+ error = aa_may_signal(cl, tl, sig);
+ aa_put_label(tl);
+ __end_current_label_crit_section(cl);
+
+ return error;
+}
+
+/**
+ * apparmor_sk_alloc_security - allocate and attach the sk_security field
+ */
+static int apparmor_sk_alloc_security(struct sock *sk, int family, gfp_t flags)
+{
+ struct aa_sk_ctx *ctx;
+
+ ctx = kzalloc(sizeof(*ctx), flags);
+ if (!ctx)
+ return -ENOMEM;
+
+ SK_CTX(sk) = ctx;
+
+ return 0;
+}
+
+/**
+ * apparmor_sk_free_security - free the sk_security field
+ */
+static void apparmor_sk_free_security(struct sock *sk)
+{
+ struct aa_sk_ctx *ctx = SK_CTX(sk);
+
+ SK_CTX(sk) = NULL;
+ aa_put_label(ctx->label);
+ aa_put_label(ctx->peer);
+ path_put(&ctx->path);
+ kfree(ctx);
+}
+
+/**
+ * apparmor_sk_clone_security - clone the sk_security field
+ */
+static void apparmor_sk_clone_security(const struct sock *sk,
+ struct sock *newsk)
+{
+ struct aa_sk_ctx *ctx = SK_CTX(sk);
+ struct aa_sk_ctx *new = SK_CTX(newsk);
+
+ new->label = aa_get_label(ctx->label);
+ new->peer = aa_get_label(ctx->peer);
+ new->path = ctx->path;
+ path_get(&new->path);
+}
+
+static int aa_sock_create_perm(struct aa_label *label, int family, int type,
+ int protocol)
+{
+ AA_BUG(!label);
+ AA_BUG(in_interrupt());
+
+ return aa_af_perm(label, OP_CREATE, AA_MAY_CREATE, family, type,
+ protocol);
+}
+
+
+/**
+ * apparmor_socket_create - check perms before creating a new socket
+ */
+static int apparmor_socket_create(int family, int type, int protocol, int kern)
+{
+ struct aa_label *label;
+ int error = 0;
+
+ label = begin_current_label_crit_section();
+ if (!(kern || unconfined(label)))
+ error = aa_sock_create_perm(label, family, type, protocol);
+ end_current_label_crit_section(label);
+
+ return error;
+}
+
+/**
+ * apparmor_socket_post_create - setup the per-socket security struct
+ *
+ * Note:
+ * - kernel sockets currently labeled unconfined but we may want to
+ * move to a special kernel label
+ * - socket may not have sk here if created with sock_create_lite or
+ * sock_alloc. These should be accept cases which will be handled in
+ * sock_graft.
+ */
+static int apparmor_socket_post_create(struct socket *sock, int family,
+ int type, int protocol, int kern)
+{
+ struct aa_label *label;
+
+ if (kern) {
+ struct aa_ns *ns = aa_get_current_ns();
+
+ label = aa_get_label(ns_unconfined(ns));
+ aa_put_ns(ns);
+ } else
+ label = aa_get_current_label();
+
+ if (sock->sk) {
+ struct aa_sk_ctx *ctx = SK_CTX(sock->sk);
+
+ aa_put_label(ctx->label);
+ ctx->label = aa_get_label(label);
+ }
+ aa_put_label(label);
+
+ return 0;
+}
+
+/**
+ * apparmor_socket_bind - check perms before bind addr to socket
+ */
+static int apparmor_socket_bind(struct socket *sock,
+ struct sockaddr *address, int addrlen)
+{
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+ AA_BUG(!address);
+ AA_BUG(in_interrupt());
+
+ return aa_sk_perm(OP_BIND, AA_MAY_BIND, sock->sk);
+}
+
+/**
+ * apparmor_socket_connect - check perms before connecting @sock to @address
+ */
+static int apparmor_socket_connect(struct socket *sock,
+ struct sockaddr *address, int addrlen)
+{
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+ AA_BUG(!address);
+ AA_BUG(in_interrupt());
+
+ return aa_sk_perm(OP_CONNECT, AA_MAY_CONNECT, sock->sk);
+}
+
+/**
+ * apparmor_socket_listen - check perms before allowing listen
+ */
+static int apparmor_socket_listen(struct socket *sock, int backlog)
+{
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+ AA_BUG(in_interrupt());
+
+ return aa_sk_perm(OP_LISTEN, AA_MAY_LISTEN, sock->sk);
+}
+
+/**
+ * apparmor_socket_accept - check perms before accepting a new connection.
+ *
+ * Note: while @newsock is created and has some information, the accept
+ * has not been done.
+ */
+static int apparmor_socket_accept(struct socket *sock, struct socket *newsock)
+{
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+ AA_BUG(!newsock);
+ AA_BUG(in_interrupt());
+
+ return aa_sk_perm(OP_ACCEPT, AA_MAY_ACCEPT, sock->sk);
+}
+
+static int aa_sock_msg_perm(const char *op, u32 request, struct socket *sock,
+ struct msghdr *msg, int size)
+{
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+ AA_BUG(!msg);
+ AA_BUG(in_interrupt());
+
+ return aa_sk_perm(op, request, sock->sk);
+}
+
+/**
+ * apparmor_socket_sendmsg - check perms before sending msg to another socket
+ */
+static int apparmor_socket_sendmsg(struct socket *sock,
+ struct msghdr *msg, int size)
+{
+ return aa_sock_msg_perm(OP_SENDMSG, AA_MAY_SEND, sock, msg, size);
+}
+
+/**
+ * apparmor_socket_recvmsg - check perms before receiving a message
+ */
+static int apparmor_socket_recvmsg(struct socket *sock,
+ struct msghdr *msg, int size, int flags)
+{
+ return aa_sock_msg_perm(OP_RECVMSG, AA_MAY_RECEIVE, sock, msg, size);
+}
+
+/* revalidation, get/set attr, shutdown */
+static int aa_sock_perm(const char *op, u32 request, struct socket *sock)
+{
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+ AA_BUG(in_interrupt());
+
+ return aa_sk_perm(op, request, sock->sk);
+}
+
+/**
+ * apparmor_socket_getsockname - check perms before getting the local address
+ */
+static int apparmor_socket_getsockname(struct socket *sock)
+{
+ return aa_sock_perm(OP_GETSOCKNAME, AA_MAY_GETATTR, sock);
+}
+
+/**
+ * apparmor_socket_getpeername - check perms before getting remote address
+ */
+static int apparmor_socket_getpeername(struct socket *sock)
+{
+ return aa_sock_perm(OP_GETPEERNAME, AA_MAY_GETATTR, sock);
+}
+
+/* revalidation, get/set attr, opt */
+static int aa_sock_opt_perm(const char *op, u32 request, struct socket *sock,
+ int level, int optname)
+{
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+ AA_BUG(in_interrupt());
+
+ return aa_sk_perm(op, request, sock->sk);
+}
+
+/**
+ * apparmor_socket_getsockopt - check perms before getting socket options
+ */
+static int apparmor_socket_getsockopt(struct socket *sock, int level,
+ int optname)
+{
+ return aa_sock_opt_perm(OP_GETSOCKOPT, AA_MAY_GETOPT, sock,
+ level, optname);
+}
+
+/**
+ * apparmor_socket_setsockopt - check perms before setting socket options
+ */
+static int apparmor_socket_setsockopt(struct socket *sock, int level,
+ int optname)
+{
+ return aa_sock_opt_perm(OP_SETSOCKOPT, AA_MAY_SETOPT, sock,
+ level, optname);
+}
+
+/**
+ * apparmor_socket_shutdown - check perms before shutting down @sock conn
+ */
+static int apparmor_socket_shutdown(struct socket *sock, int how)
+{
+ return aa_sock_perm(OP_SHUTDOWN, AA_MAY_SHUTDOWN, sock);
+}
+
+/**
+ * apparmor_socket_sock_rcv_skb - check perms before associating skb to sk
+ *
+ * Note: cannot sleep; may be called with locks held
+ *
+ * We don't want the protocol specific hook in __skb_recv_datagram(),
+ * socket_sock_rcv_skb(), to deny an incoming connection.
+ */
+static int apparmor_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+{
+ return 0;
+}
+
+
+static struct aa_label *sk_peer_label(struct sock *sk)
+{
+ struct aa_sk_ctx *ctx = SK_CTX(sk);
+
+ if (ctx->peer)
+ return ctx->peer;
+
+ return ERR_PTR(-ENOPROTOOPT);
+}
+
+/**
+ * apparmor_socket_getpeersec_stream - get security context of peer
+ *
+ * Note: for TCP this is only valid if using IPsec or CIPSO on the LAN
+ */
+static int apparmor_socket_getpeersec_stream(struct socket *sock,
+ char __user *optval,
+ int __user *optlen,
+ unsigned int len)
+{
+ char *name;
+ int slen, error = 0;
+ struct aa_label *label;
+ struct aa_label *peer;
+
+ label = begin_current_label_crit_section();
+ peer = sk_peer_label(sock->sk);
+ if (IS_ERR(peer)) {
+ error = PTR_ERR(peer);
+ goto done;
+ }
+ slen = aa_label_asxprint(&name, labels_ns(label), peer,
+ FLAG_SHOW_MODE | FLAG_VIEW_SUBNS |
+ FLAG_HIDDEN_UNCONFINED, GFP_KERNEL);
+ /* don't include terminating \0 in slen, it breaks some apps */
+ if (slen < 0) {
+ error = -ENOMEM;
+ } else {
+ if (slen > len) {
+ error = -ERANGE;
+ } else if (copy_to_user(optval, name, slen)) {
+ error = -EFAULT;
+ goto out;
+ }
+ if (put_user(slen, optlen))
+ error = -EFAULT;
+out:
+ kfree(name);
+
+ }
+
+done:
+ end_current_label_crit_section(label);
+
+ return error;
+}
+
+/**
+ * apparmor_socket_getpeersec_dgram - get security label of packet
+ * @sock: the peer socket
+ * @skb: packet data
+ * @secid: pointer to where to put the secid of the packet
+ *
+ * Sets the netlabel socket state on sk from parent
+ */
+static int apparmor_socket_getpeersec_dgram(struct socket *sock,
+ struct sk_buff *skb, u32 *secid)
+
+{
+ /* TODO: requires secid support */
+ return -ENOPROTOOPT;
+}
+
+/**
+ * apparmor_sock_graft - Initialize newly created socket
+ * @sk: child sock
+ * @parent: parent socket
+ *
+ * Note: we could derive the label from SOCK_CTX(parent), but that requires
+ * tracking the inode; instead just set the sk security information from the
+ * label of the current (creating) process.
+ * Labeling of sk for the accept case should probably be sock based instead
+ * of task based, because of the case where an implicitly labeled socket is
+ * shared by different tasks.
+ */
+static void apparmor_sock_graft(struct sock *sk, struct socket *parent)
+{
+ struct aa_sk_ctx *ctx = SK_CTX(sk);
+
+ if (!ctx->label)
+ ctx->label = aa_get_current_label();
+}
+
static struct security_hook_list apparmor_hooks[] __lsm_ro_after_init = {
LSM_HOOK_INIT(ptrace_access_check, apparmor_ptrace_access_check),
LSM_HOOK_INIT(ptrace_traceme, apparmor_ptrace_traceme),
LSM_HOOK_INIT(capget, apparmor_capget),
LSM_HOOK_INIT(capable, apparmor_capable),
+ LSM_HOOK_INIT(sb_mount, apparmor_sb_mount),
+ LSM_HOOK_INIT(sb_umount, apparmor_sb_umount),
+ LSM_HOOK_INIT(sb_pivotroot, apparmor_sb_pivotroot),
+
LSM_HOOK_INIT(path_link, apparmor_path_link),
LSM_HOOK_INIT(path_unlink, apparmor_path_unlink),
LSM_HOOK_INIT(path_symlink, apparmor_path_symlink),
LSM_HOOK_INIT(getprocattr, apparmor_getprocattr),
LSM_HOOK_INIT(setprocattr, apparmor_setprocattr),
+ LSM_HOOK_INIT(sk_alloc_security, apparmor_sk_alloc_security),
+ LSM_HOOK_INIT(sk_free_security, apparmor_sk_free_security),
+ LSM_HOOK_INIT(sk_clone_security, apparmor_sk_clone_security),
+
+ LSM_HOOK_INIT(socket_create, apparmor_socket_create),
+ LSM_HOOK_INIT(socket_post_create, apparmor_socket_post_create),
+ LSM_HOOK_INIT(socket_bind, apparmor_socket_bind),
+ LSM_HOOK_INIT(socket_connect, apparmor_socket_connect),
+ LSM_HOOK_INIT(socket_listen, apparmor_socket_listen),
+ LSM_HOOK_INIT(socket_accept, apparmor_socket_accept),
+ LSM_HOOK_INIT(socket_sendmsg, apparmor_socket_sendmsg),
+ LSM_HOOK_INIT(socket_recvmsg, apparmor_socket_recvmsg),
+ LSM_HOOK_INIT(socket_getsockname, apparmor_socket_getsockname),
+ LSM_HOOK_INIT(socket_getpeername, apparmor_socket_getpeername),
+ LSM_HOOK_INIT(socket_getsockopt, apparmor_socket_getsockopt),
+ LSM_HOOK_INIT(socket_setsockopt, apparmor_socket_setsockopt),
+ LSM_HOOK_INIT(socket_shutdown, apparmor_socket_shutdown),
+ LSM_HOOK_INIT(socket_sock_rcv_skb, apparmor_socket_sock_rcv_skb),
+ LSM_HOOK_INIT(socket_getpeersec_stream,
+ apparmor_socket_getpeersec_stream),
+ LSM_HOOK_INIT(socket_getpeersec_dgram,
+ apparmor_socket_getpeersec_dgram),
+ LSM_HOOK_INIT(sock_graft, apparmor_sock_graft),
+
LSM_HOOK_INIT(cred_alloc_blank, apparmor_cred_alloc_blank),
LSM_HOOK_INIT(cred_free, apparmor_cred_free),
LSM_HOOK_INIT(cred_prepare, apparmor_cred_prepare),
LSM_HOOK_INIT(bprm_committed_creds, apparmor_bprm_committed_creds),
LSM_HOOK_INIT(task_setrlimit, apparmor_task_setrlimit),
+ LSM_HOOK_INIT(task_kill, apparmor_task_kill),
};
/*
--- /dev/null
+/*
+ * AppArmor security module
+ *
+ * This file contains AppArmor mediation of files
+ *
+ * Copyright (C) 1998-2008 Novell/SUSE
+ * Copyright 2009-2017 Canonical Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ */
+
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/namei.h>
+
+#include "include/apparmor.h"
+#include "include/audit.h"
+#include "include/context.h"
+#include "include/domain.h"
+#include "include/file.h"
+#include "include/match.h"
+#include "include/mount.h"
+#include "include/path.h"
+#include "include/policy.h"
+
+
+static void audit_mnt_flags(struct audit_buffer *ab, unsigned long flags)
+{
+ if (flags & MS_RDONLY)
+ audit_log_format(ab, "ro");
+ else
+ audit_log_format(ab, "rw");
+ if (flags & MS_NOSUID)
+ audit_log_format(ab, ", nosuid");
+ if (flags & MS_NODEV)
+ audit_log_format(ab, ", nodev");
+ if (flags & MS_NOEXEC)
+ audit_log_format(ab, ", noexec");
+ if (flags & MS_SYNCHRONOUS)
+ audit_log_format(ab, ", sync");
+ if (flags & MS_REMOUNT)
+ audit_log_format(ab, ", remount");
+ if (flags & MS_MANDLOCK)
+ audit_log_format(ab, ", mand");
+ if (flags & MS_DIRSYNC)
+ audit_log_format(ab, ", dirsync");
+ if (flags & MS_NOATIME)
+ audit_log_format(ab, ", noatime");
+ if (flags & MS_NODIRATIME)
+ audit_log_format(ab, ", nodiratime");
+ if (flags & MS_BIND)
+ audit_log_format(ab, flags & MS_REC ? ", rbind" : ", bind");
+ if (flags & MS_MOVE)
+ audit_log_format(ab, ", move");
+ if (flags & MS_SILENT)
+ audit_log_format(ab, ", silent");
+ if (flags & MS_POSIXACL)
+ audit_log_format(ab, ", acl");
+ if (flags & MS_UNBINDABLE)
+ audit_log_format(ab, flags & MS_REC ? ", runbindable" :
+ ", unbindable");
+ if (flags & MS_PRIVATE)
+ audit_log_format(ab, flags & MS_REC ? ", rprivate" :
+ ", private");
+ if (flags & MS_SLAVE)
+ audit_log_format(ab, flags & MS_REC ? ", rslave" :
+ ", slave");
+ if (flags & MS_SHARED)
+ audit_log_format(ab, flags & MS_REC ? ", rshared" :
+ ", shared");
+ if (flags & MS_RELATIME)
+ audit_log_format(ab, ", relatime");
+ if (flags & MS_I_VERSION)
+ audit_log_format(ab, ", iversion");
+ if (flags & MS_STRICTATIME)
+ audit_log_format(ab, ", strictatime");
+ if (flags & MS_NOUSER)
+ audit_log_format(ab, ", nouser");
+}
+
+/**
+ * audit_cb - call back for mount specific audit fields
+ * @ab: audit_buffer (NOT NULL)
+ * @va: audit struct to audit values of (NOT NULL)
+ */
+static void audit_cb(struct audit_buffer *ab, void *va)
+{
+ struct common_audit_data *sa = va;
+
+ if (aad(sa)->mnt.type) {
+ audit_log_format(ab, " fstype=");
+ audit_log_untrustedstring(ab, aad(sa)->mnt.type);
+ }
+ if (aad(sa)->mnt.src_name) {
+ audit_log_format(ab, " srcname=");
+ audit_log_untrustedstring(ab, aad(sa)->mnt.src_name);
+ }
+ if (aad(sa)->mnt.trans) {
+ audit_log_format(ab, " trans=");
+ audit_log_untrustedstring(ab, aad(sa)->mnt.trans);
+ }
+ if (aad(sa)->mnt.flags) {
+ audit_log_format(ab, " flags=\"");
+ audit_mnt_flags(ab, aad(sa)->mnt.flags);
+ audit_log_format(ab, "\"");
+ }
+ if (aad(sa)->mnt.data) {
+ audit_log_format(ab, " options=");
+ audit_log_untrustedstring(ab, aad(sa)->mnt.data);
+ }
+}
+
+/**
+ * audit_mount - handle the auditing of mount operations
+ * @profile: the profile being enforced (NOT NULL)
+ * @op: operation being mediated (NOT NULL)
+ * @name: name of object being mediated (MAYBE NULL)
+ * @src_name: src_name of object being mediated (MAYBE NULL)
+ * @type: type of filesystem (MAYBE NULL)
+ * @trans: name of trans (MAYBE NULL)
+ * @flags: filesystem independent mount flags
+ * @data: filesystem specific mount data (MAYBE NULL)
+ * @request: permissions requested
+ * @perms: the permissions computed for the request (NOT NULL)
+ * @info: extra information message (MAYBE NULL)
+ * @error: 0 if operation allowed else failure error code
+ *
+ * Returns: %0 or error on failure
+ */
+static int audit_mount(struct aa_profile *profile, const char *op,
+ const char *name, const char *src_name,
+ const char *type, const char *trans,
+ unsigned long flags, const void *data, u32 request,
+ struct aa_perms *perms, const char *info, int error)
+{
+ int audit_type = AUDIT_APPARMOR_AUTO;
+ DEFINE_AUDIT_DATA(sa, LSM_AUDIT_DATA_NONE, op);
+
+ if (likely(!error)) {
+ u32 mask = perms->audit;
+
+ if (unlikely(AUDIT_MODE(profile) == AUDIT_ALL))
+ mask = 0xffff;
+
+ /* mask off perms that are not being force audited */
+ request &= mask;
+
+ if (likely(!request))
+ return 0;
+ audit_type = AUDIT_APPARMOR_AUDIT;
+ } else {
+ /* only report permissions that were denied */
+ request = request & ~perms->allow;
+
+ if (request & perms->kill)
+ audit_type = AUDIT_APPARMOR_KILL;
+
+ /* quiet known rejects, assumes quiet and kill do not overlap */
+ if ((request & perms->quiet) &&
+ AUDIT_MODE(profile) != AUDIT_NOQUIET &&
+ AUDIT_MODE(profile) != AUDIT_ALL)
+ request &= ~perms->quiet;
+
+ if (!request)
+ return error;
+ }
+
+ aad(&sa)->name = name;
+ aad(&sa)->mnt.src_name = src_name;
+ aad(&sa)->mnt.type = type;
+ aad(&sa)->mnt.trans = trans;
+ aad(&sa)->mnt.flags = flags;
+ if (data && (perms->audit & AA_AUDIT_DATA))
+ aad(&sa)->mnt.data = data;
+ aad(&sa)->info = info;
+ aad(&sa)->error = error;
+
+ return aa_audit(audit_type, profile, &sa, audit_cb);
+}
+
+/**
+ * match_mnt_flags - Do an ordered match on mount flags
+ * @dfa: dfa to match against
+ * @state: state to start in
+ * @flags: mount flags to match against
+ *
+ * Mount flags are encoded as an ordered match. This is done instead of
+ * checking against a simple bitmask, to allow for logical operations
+ * on the flags.
+ *
+ * Returns: next state after flags match
+ */
+static unsigned int match_mnt_flags(struct aa_dfa *dfa, unsigned int state,
+ unsigned long flags)
+{
+ unsigned int i;
+
+ for (i = 0; i <= 31 ; ++i) {
+ if ((1 << i) & flags)
+ state = aa_dfa_next(dfa, state, i + 1);
+ }
+
+ return state;
+}
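To make the ordered encoding concrete: flag bit n is fed to the DFA as input symbol n + 1, in ascending bit order. A sketch of the trace for flags = MS_RDONLY | MS_NOSUID (bits 0 and 1):

/* illustrative trace of match_mnt_flags() */
state = aa_dfa_next(dfa, state, 1);	/* MS_RDONLY, bit 0 */
state = aa_dfa_next(dfa, state, 2);	/* MS_NOSUID, bit 1 */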
+
+/**
+ * compute_mnt_perms - compute mount permission associated with @state
+ * @dfa: dfa to match against (NOT NULL)
+ * @state: state match finished in
+ *
+ * Returns: mount permissions
+ */
+static struct aa_perms compute_mnt_perms(struct aa_dfa *dfa,
+ unsigned int state)
+{
+ struct aa_perms perms;
+
+ perms.kill = 0;
+ perms.allow = dfa_user_allow(dfa, state);
+ perms.audit = dfa_user_audit(dfa, state);
+ perms.quiet = dfa_user_quiet(dfa, state);
+ perms.xindex = dfa_user_xindex(dfa, state);
+
+ return perms;
+}
+
+static const char * const mnt_info_table[] = {
+ "match succeeded",
+ "failed mntpnt match",
+ "failed srcname match",
+ "failed type match",
+ "failed flags match",
+ "failed data match"
+};
+
+/*
+ * Returns: 0 on success, else the index of the element the match failed on;
+ * this is an index into the mnt_info_table above
+ */
+static int do_match_mnt(struct aa_dfa *dfa, unsigned int start,
+ const char *mntpnt, const char *devname,
+ const char *type, unsigned long flags,
+ void *data, bool binary, struct aa_perms *perms)
+{
+ unsigned int state;
+
+ AA_BUG(!dfa);
+ AA_BUG(!perms);
+
+ state = aa_dfa_match(dfa, start, mntpnt);
+ state = aa_dfa_null_transition(dfa, state);
+ if (!state)
+ return 1;
+
+ if (devname)
+ state = aa_dfa_match(dfa, state, devname);
+ state = aa_dfa_null_transition(dfa, state);
+ if (!state)
+ return 2;
+
+ if (type)
+ state = aa_dfa_match(dfa, state, type);
+ state = aa_dfa_null_transition(dfa, state);
+ if (!state)
+ return 3;
+
+ state = match_mnt_flags(dfa, state, flags);
+ if (!state)
+ return 4;
+ *perms = compute_mnt_perms(dfa, state);
+ if (perms->allow & AA_MAY_MOUNT)
+ return 0;
+
+	/* only match data if not binary and the DFA flags that a data match is expected */
+ if (data && !binary && (perms->allow & AA_MNT_CONT_MATCH)) {
+ state = aa_dfa_null_transition(dfa, state);
+ if (!state)
+ return 4;
+
+ state = aa_dfa_match(dfa, state, data);
+ if (!state)
+ return 5;
+ *perms = compute_mnt_perms(dfa, state);
+ if (perms->allow & AA_MAY_MOUNT)
+ return 0;
+ }
+
+ /* failed at end of flags match */
+ return 4;
+}
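Summarizing the order above: a mount rule is consulted as one ordered DFA input, with NUL transitions separating the components (a sketch of the layout, not the exact compiled encoding):

/* mntpnt \0 devname \0 type \0 <flag symbols> [ \0 data ]
 * the function reports the first component at which the walk dies */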
+
+
+static int path_flags(struct aa_profile *profile, const struct path *path)
+{
+ AA_BUG(!profile);
+ AA_BUG(!path);
+
+ return profile->path_flags |
+ (S_ISDIR(path->dentry->d_inode->i_mode) ? PATH_IS_DIR : 0);
+}
+
+/**
+ * match_mnt_path_str - handle path matching for mount
+ * @profile: the confining profile
+ * @mntpath: path of the mount point (NOT NULL)
+ * @buffer: buffer to be used to lookup mntpath
+ * @devname: string for the devname/src_name (MAY BE NULL OR ERRPTR)
+ * @type: string for the dev type (MAYBE NULL)
+ * @flags: mount flags to match
+ * @data: fs mount data (MAYBE NULL)
+ * @binary: whether @data is binary
+ * @devinfo: error str if (IS_ERR(@devname))
+ *
+ * Returns: 0 on success else error
+ */
+static int match_mnt_path_str(struct aa_profile *profile,
+ const struct path *mntpath, char *buffer,
+ const char *devname, const char *type,
+ unsigned long flags, void *data, bool binary,
+ const char *devinfo)
+{
+ struct aa_perms perms = { };
+ const char *mntpnt = NULL, *info = NULL;
+ int pos, error;
+
+ AA_BUG(!profile);
+ AA_BUG(!mntpath);
+ AA_BUG(!buffer);
+
+ error = aa_path_name(mntpath, path_flags(profile, mntpath), buffer,
+ &mntpnt, &info, profile->disconnected);
+ if (error)
+ goto audit;
+ if (IS_ERR(devname)) {
+ error = PTR_ERR(devname);
+ devname = NULL;
+ info = devinfo;
+ goto audit;
+ }
+
+ error = -EACCES;
+ pos = do_match_mnt(profile->policy.dfa,
+ profile->policy.start[AA_CLASS_MOUNT],
+ mntpnt, devname, type, flags, data, binary, &perms);
+ if (pos) {
+ info = mnt_info_table[pos];
+ goto audit;
+ }
+ error = 0;
+
+audit:
+ return audit_mount(profile, OP_MOUNT, mntpnt, devname, type, NULL,
+ flags, data, AA_MAY_MOUNT, &perms, info, error);
+}
+
+/**
+ * match_mnt - handle path matching for mount
+ * @profile: the confining profile
+ * @path: path of the mount point (NOT NULL)
+ * @buffer: buffer to be used to lookup @path
+ * @devpath: path devname/src_name (MAYBE NULL)
+ * @devbuffer: buffer to be used to lookup devname/src_name
+ * @type: string for the dev type (MAYBE NULL)
+ * @flags: mount flags to match
+ * @data: fs mount data (MAYBE NULL)
+ * @binary: whether @data is binary
+ *
+ * Returns: 0 on success else error
+ */
+static int match_mnt(struct aa_profile *profile, const struct path *path,
+ char *buffer, struct path *devpath, char *devbuffer,
+ const char *type, unsigned long flags, void *data,
+ bool binary)
+{
+ const char *devname = NULL, *info = NULL;
+ int error = -EACCES;
+
+ AA_BUG(!profile);
+ AA_BUG(devpath && !devbuffer);
+
+ if (devpath) {
+ error = aa_path_name(devpath, path_flags(profile, devpath),
+ devbuffer, &devname, &info,
+ profile->disconnected);
+ if (error)
+ devname = ERR_PTR(error);
+ }
+
+ return match_mnt_path_str(profile, path, buffer, devname, type, flags,
+ data, binary, info);
+}
+
+int aa_remount(struct aa_label *label, const struct path *path,
+ unsigned long flags, void *data)
+{
+ struct aa_profile *profile;
+ char *buffer = NULL;
+ bool binary;
+ int error;
+
+ AA_BUG(!label);
+ AA_BUG(!path);
+
+ binary = path->dentry->d_sb->s_type->fs_flags & FS_BINARY_MOUNTDATA;
+
+ get_buffers(buffer);
+ error = fn_for_each_confined(label, profile,
+ match_mnt(profile, path, buffer, NULL, NULL, NULL,
+ flags, data, binary));
+ put_buffers(buffer);
+
+ return error;
+}
+
+int aa_bind_mount(struct aa_label *label, const struct path *path,
+ const char *dev_name, unsigned long flags)
+{
+ struct aa_profile *profile;
+ char *buffer = NULL, *old_buffer = NULL;
+ struct path old_path;
+ int error;
+
+ AA_BUG(!label);
+ AA_BUG(!path);
+
+ if (!dev_name || !*dev_name)
+ return -EINVAL;
+
+ flags &= MS_REC | MS_BIND;
+
+ error = kern_path(dev_name, LOOKUP_FOLLOW|LOOKUP_AUTOMOUNT, &old_path);
+ if (error)
+ return error;
+
+ get_buffers(buffer, old_buffer);
+ error = fn_for_each_confined(label, profile,
+ match_mnt(profile, path, buffer, &old_path, old_buffer,
+ NULL, flags, NULL, false));
+ put_buffers(buffer, old_buffer);
+ path_put(&old_path);
+
+ return error;
+}
+
+int aa_mount_change_type(struct aa_label *label, const struct path *path,
+ unsigned long flags)
+{
+ struct aa_profile *profile;
+ char *buffer = NULL;
+ int error;
+
+ AA_BUG(!label);
+ AA_BUG(!path);
+
+ /* These are the flags allowed by do_change_type() */
+ flags &= (MS_REC | MS_SILENT | MS_SHARED | MS_PRIVATE | MS_SLAVE |
+ MS_UNBINDABLE);
+
+ get_buffers(buffer);
+ error = fn_for_each_confined(label, profile,
+ match_mnt(profile, path, buffer, NULL, NULL, NULL,
+ flags, NULL, false));
+ put_buffers(buffer);
+
+ return error;
+}
+
+int aa_move_mount(struct aa_label *label, const struct path *path,
+ const char *orig_name)
+{
+ struct aa_profile *profile;
+ char *buffer = NULL, *old_buffer = NULL;
+ struct path old_path;
+ int error;
+
+ AA_BUG(!label);
+ AA_BUG(!path);
+
+ if (!orig_name || !*orig_name)
+ return -EINVAL;
+
+ error = kern_path(orig_name, LOOKUP_FOLLOW, &old_path);
+ if (error)
+ return error;
+
+ get_buffers(buffer, old_buffer);
+ error = fn_for_each_confined(label, profile,
+ match_mnt(profile, path, buffer, &old_path, old_buffer,
+ NULL, MS_MOVE, NULL, false));
+ put_buffers(buffer, old_buffer);
+ path_put(&old_path);
+
+ return error;
+}
+
+int aa_new_mount(struct aa_label *label, const char *dev_name,
+ const struct path *path, const char *type, unsigned long flags,
+ void *data)
+{
+ struct aa_profile *profile;
+ char *buffer = NULL, *dev_buffer = NULL;
+ bool binary = true;
+ int error;
+ int requires_dev = 0;
+ struct path tmp_path, *dev_path = NULL;
+
+ AA_BUG(!label);
+ AA_BUG(!path);
+
+ if (type) {
+ struct file_system_type *fstype;
+
+ fstype = get_fs_type(type);
+ if (!fstype)
+ return -ENODEV;
+ binary = fstype->fs_flags & FS_BINARY_MOUNTDATA;
+ requires_dev = fstype->fs_flags & FS_REQUIRES_DEV;
+ put_filesystem(fstype);
+
+ if (requires_dev) {
+ if (!dev_name || !*dev_name)
+ return -ENOENT;
+
+ error = kern_path(dev_name, LOOKUP_FOLLOW, &tmp_path);
+ if (error)
+ return error;
+ dev_path = &tmp_path;
+ }
+ }
+
+ get_buffers(buffer, dev_buffer);
+ if (dev_path) {
+ error = fn_for_each_confined(label, profile,
+ match_mnt(profile, path, buffer, dev_path, dev_buffer,
+ type, flags, data, binary));
+ } else {
+ error = fn_for_each_confined(label, profile,
+ match_mnt_path_str(profile, path, buffer, dev_name,
+ type, flags, data, binary, NULL));
+ }
+ put_buffers(buffer, dev_buffer);
+ if (dev_path)
+ path_put(dev_path);
+
+ return error;
+}
+
+static int profile_umount(struct aa_profile *profile, struct path *path,
+ char *buffer)
+{
+ struct aa_perms perms = { };
+ const char *name = NULL, *info = NULL;
+ unsigned int state;
+ int error;
+
+ AA_BUG(!profile);
+ AA_BUG(!path);
+
+ error = aa_path_name(path, path_flags(profile, path), buffer, &name,
+ &info, profile->disconnected);
+ if (error)
+ goto audit;
+
+ state = aa_dfa_match(profile->policy.dfa,
+ profile->policy.start[AA_CLASS_MOUNT],
+ name);
+ perms = compute_mnt_perms(profile->policy.dfa, state);
+ if (AA_MAY_UMOUNT & ~perms.allow)
+ error = -EACCES;
+
+audit:
+ return audit_mount(profile, OP_UMOUNT, name, NULL, NULL, NULL, 0, NULL,
+ AA_MAY_UMOUNT, &perms, info, error);
+}
+
+int aa_umount(struct aa_label *label, struct vfsmount *mnt, int flags)
+{
+ struct aa_profile *profile;
+ char *buffer = NULL;
+ int error;
+ struct path path = { .mnt = mnt, .dentry = mnt->mnt_root };
+
+ AA_BUG(!label);
+ AA_BUG(!mnt);
+
+ get_buffers(buffer);
+ error = fn_for_each_confined(label, profile,
+ profile_umount(profile, &path, buffer));
+ put_buffers(buffer);
+
+ return error;
+}
+
+/* helper fn for transition on pivotroot
+ *
+ * Returns: label for transition or ERR_PTR. Does not return NULL
+ */
+static struct aa_label *build_pivotroot(struct aa_profile *profile,
+ const struct path *new_path,
+ char *new_buffer,
+ const struct path *old_path,
+ char *old_buffer)
+{
+ const char *old_name, *new_name = NULL, *info = NULL;
+ const char *trans_name = NULL;
+ struct aa_perms perms = { };
+ unsigned int state;
+ int error;
+
+ AA_BUG(!profile);
+ AA_BUG(!new_path);
+ AA_BUG(!old_path);
+
+ if (profile_unconfined(profile))
+ return aa_get_newest_label(&profile->label);
+
+ error = aa_path_name(old_path, path_flags(profile, old_path),
+ old_buffer, &old_name, &info,
+ profile->disconnected);
+ if (error)
+ goto audit;
+ error = aa_path_name(new_path, path_flags(profile, new_path),
+ new_buffer, &new_name, &info,
+ profile->disconnected);
+ if (error)
+ goto audit;
+
+ error = -EACCES;
+ state = aa_dfa_match(profile->policy.dfa,
+ profile->policy.start[AA_CLASS_MOUNT],
+ new_name);
+ state = aa_dfa_null_transition(profile->policy.dfa, state);
+ state = aa_dfa_match(profile->policy.dfa, state, old_name);
+ perms = compute_mnt_perms(profile->policy.dfa, state);
+
+ if (AA_MAY_PIVOTROOT & perms.allow)
+ error = 0;
+
+audit:
+ error = audit_mount(profile, OP_PIVOTROOT, new_name, old_name,
+ NULL, trans_name, 0, NULL, AA_MAY_PIVOTROOT,
+ &perms, info, error);
+ if (error)
+ return ERR_PTR(error);
+
+ return aa_get_newest_label(&profile->label);
+}
+
+int aa_pivotroot(struct aa_label *label, const struct path *old_path,
+ const struct path *new_path)
+{
+ struct aa_profile *profile;
+ struct aa_label *target = NULL;
+ char *old_buffer = NULL, *new_buffer = NULL, *info = NULL;
+ int error;
+
+ AA_BUG(!label);
+ AA_BUG(!old_path);
+ AA_BUG(!new_path);
+
+ get_buffers(old_buffer, new_buffer);
+ target = fn_label_build(label, profile, GFP_ATOMIC,
+ build_pivotroot(profile, new_path, new_buffer,
+ old_path, old_buffer));
+ if (!target) {
+ info = "label build failed";
+ error = -ENOMEM;
+ goto fail;
+ } else if (!IS_ERR(target)) {
+ error = aa_replace_current_label(target);
+ if (error) {
+ /* TODO: audit target */
+ aa_put_label(target);
+ goto out;
+ }
+ } else
+ /* already audited error */
+ error = PTR_ERR(target);
+out:
+ put_buffers(old_buffer, new_buffer);
+
+ return error;
+
+fail:
+ /* TODO: add back in auditing of new_name and old_name */
+ error = fn_for_each(label, profile,
+ audit_mount(profile, OP_PIVOTROOT, NULL /*new_name */,
+ NULL /* old_name */,
+ NULL, NULL,
+ 0, NULL, AA_MAY_PIVOTROOT, &nullperms, info,
+ error));
+ goto out;
+}
--- /dev/null
+/*
+ * AppArmor security module
+ *
+ * This file contains AppArmor network mediation
+ *
+ * Copyright (C) 1998-2008 Novell/SUSE
+ * Copyright 2009-2017 Canonical Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ */
+
+#include "include/apparmor.h"
+#include "include/audit.h"
+#include "include/context.h"
+#include "include/label.h"
+#include "include/net.h"
+#include "include/policy.h"
+
+#include "net_names.h"
+
+
+struct aa_sfs_entry aa_sfs_entry_network[] = {
+ AA_SFS_FILE_STRING("af_mask", AA_SFS_AF_MASK),
+ { }
+};
+
+static const char * const net_mask_names[] = {
+ "unknown",
+ "send",
+ "receive",
+ "unknown",
+
+ "create",
+ "shutdown",
+ "connect",
+ "unknown",
+
+ "setattr",
+ "getattr",
+ "setcred",
+ "getcred",
+
+ "chmod",
+ "chown",
+ "chgrp",
+ "lock",
+
+ "mmap",
+ "mprot",
+ "unknown",
+ "unknown",
+
+ "accept",
+ "bind",
+ "listen",
+ "unknown",
+
+ "setopt",
+ "getopt",
+ "unknown",
+ "unknown",
+
+ "unknown",
+ "unknown",
+ "unknown",
+ "unknown",
+};
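As a reading aid: aa_audit_perm_mask() indexes this table by bit position, so entry n names permission bit (1 << n). For example:

/* AA_MAY_ACCEPT == 0x00100000 == (1 << 20) -> net_mask_names[20] == "accept" */
/* AA_MAY_BIND   == 0x00200000 == (1 << 21) -> net_mask_names[21] == "bind"   */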
+
+
+/* audit callback for net specific fields */
+void audit_net_cb(struct audit_buffer *ab, void *va)
+{
+ struct common_audit_data *sa = va;
+
+ audit_log_format(ab, " family=");
+ if (address_family_names[sa->u.net->family])
+ audit_log_string(ab, address_family_names[sa->u.net->family]);
+ else
+ audit_log_format(ab, "\"unknown(%d)\"", sa->u.net->family);
+ audit_log_format(ab, " sock_type=");
+ if (sock_type_names[aad(sa)->net.type])
+ audit_log_string(ab, sock_type_names[aad(sa)->net.type]);
+ else
+ audit_log_format(ab, "\"unknown(%d)\"", aad(sa)->net.type);
+ audit_log_format(ab, " protocol=%d", aad(sa)->net.protocol);
+
+ if (aad(sa)->request & NET_PERMS_MASK) {
+ audit_log_format(ab, " requested_mask=");
+ aa_audit_perm_mask(ab, aad(sa)->request, NULL, 0,
+ net_mask_names, NET_PERMS_MASK);
+
+ if (aad(sa)->denied & NET_PERMS_MASK) {
+ audit_log_format(ab, " denied_mask=");
+ aa_audit_perm_mask(ab, aad(sa)->denied, NULL, 0,
+ net_mask_names, NET_PERMS_MASK);
+ }
+ }
+ if (aad(sa)->peer) {
+ audit_log_format(ab, " peer=");
+ aa_label_xaudit(ab, labels_ns(aad(sa)->label), aad(sa)->peer,
+ FLAGS_NONE, GFP_ATOMIC);
+ }
+}
+
+
+/* Generic af perm */
+int aa_profile_af_perm(struct aa_profile *profile, struct common_audit_data *sa,
+ u32 request, u16 family, int type)
+{
+ struct aa_perms perms = { };
+
+ AA_BUG(family >= AF_MAX);
+ AA_BUG(type < 0 || type >= SOCK_MAX);
+
+ if (profile_unconfined(profile))
+ return 0;
+
+ perms.allow = (profile->net.allow[family] & (1 << type)) ?
+ ALL_PERMS_MASK : 0;
+ perms.audit = (profile->net.audit[family] & (1 << type)) ?
+ ALL_PERMS_MASK : 0;
+ perms.quiet = (profile->net.quiet[family] & (1 << type)) ?
+ ALL_PERMS_MASK : 0;
+ aa_apply_modes_to_perms(profile, &perms);
+
+ return aa_check_perms(profile, &perms, request, sa, audit_net_cb);
+}
+
+int aa_af_perm(struct aa_label *label, const char *op, u32 request, u16 family,
+ int type, int protocol)
+{
+ struct aa_profile *profile;
+ DEFINE_AUDIT_NET(sa, op, NULL, family, type, protocol);
+
+ return fn_for_each_confined(label, profile,
+ aa_profile_af_perm(profile, &sa, request, family,
+ type));
+}
+
+static int aa_label_sk_perm(struct aa_label *label, const char *op, u32 request,
+ struct sock *sk)
+{
+ struct aa_profile *profile;
+ DEFINE_AUDIT_SK(sa, op, sk);
+
+ AA_BUG(!label);
+ AA_BUG(!sk);
+
+ if (unconfined(label))
+ return 0;
+
+ return fn_for_each_confined(label, profile,
+ aa_profile_af_sk_perm(profile, &sa, request, sk));
+}
+
+int aa_sk_perm(const char *op, u32 request, struct sock *sk)
+{
+ struct aa_label *label;
+ int error;
+
+ AA_BUG(!sk);
+ AA_BUG(in_interrupt());
+
+ /* TODO: switch to begin_current_label ???? */
+ label = begin_current_label_crit_section();
+ error = aa_label_sk_perm(label, op, request, sk);
+ end_current_label_crit_section(label);
+
+ return error;
+}
+
+
+int aa_sock_file_perm(struct aa_label *label, const char *op, u32 request,
+ struct socket *sock)
+{
+ AA_BUG(!label);
+ AA_BUG(!sock);
+ AA_BUG(!sock->sk);
+
+ return aa_label_sk_perm(label, op, request, sock->sk);
+}
return NULL;
}
-/**
- * aa_new_null_profile - create or find a null-X learning profile
- * @parent: profile that caused this profile to be created (NOT NULL)
- * @hat: true if the null- learning profile is a hat
- * @base: name to base the null profile off of
- * @gfp: type of allocation
- *
- * Find/Create a null- complain mode profile used in learning mode. The
- * name of the profile is unique and follows the format of parent//null-XXX.
- * where XXX is based on the @name or if that fails or is not supplied
- * a unique number
- *
- * null profiles are added to the profile list but the list does not
- * hold a count on them so that they are automatically released when
- * not in use.
- *
- * Returns: new refcounted profile else NULL on failure
- */
-struct aa_profile *aa_new_null_profile(struct aa_profile *parent, bool hat,
- const char *base, gfp_t gfp)
-{
- struct aa_profile *profile;
- char *name;
-
- AA_BUG(!parent);
-
- if (base) {
- name = kmalloc(strlen(parent->base.hname) + 8 + strlen(base),
- gfp);
- if (name) {
- sprintf(name, "%s//null-%s", parent->base.hname, base);
- goto name;
- }
- /* fall through to try shorter uniq */
- }
-
- name = kmalloc(strlen(parent->base.hname) + 2 + 7 + 8, gfp);
- if (!name)
- return NULL;
- sprintf(name, "%s//null-%x", parent->base.hname,
- atomic_inc_return(&parent->ns->uniq_null));
-
-name:
- /* lookup to see if this is a dup creation */
- profile = aa_find_child(parent, basename(name));
- if (profile)
- goto out;
-
- profile = aa_alloc_profile(name, NULL, gfp);
- if (!profile)
- goto fail;
-
- profile->mode = APPARMOR_COMPLAIN;
- profile->label.flags |= FLAG_NULL;
- if (hat)
- profile->label.flags |= FLAG_HAT;
- profile->path_flags = parent->path_flags;
-
- /* released on free_profile */
- rcu_assign_pointer(profile->parent, aa_get_profile(parent));
- profile->ns = aa_get_ns(parent->ns);
- profile->file.dfa = aa_get_dfa(nulldfa);
- profile->policy.dfa = aa_get_dfa(nulldfa);
-
- mutex_lock(&profile->ns->lock);
- __add_profile(&parent->base.profiles, profile);
- mutex_unlock(&profile->ns->lock);
-
- /* refcount released by caller */
-out:
- kfree(name);
-
- return profile;
-
-fail:
- aa_free_profile(profile);
- return NULL;
-}
-
/* TODO: profile accounting - setup in remove */
/**
return profile;
}
+/**
+ * aa_new_null_profile - create or find a null-X learning profile
+ * @parent: profile that caused this profile to be created (NOT NULL)
+ * @hat: true if the null- learning profile is a hat
+ * @base: name to base the null profile off of
+ * @gfp: type of allocation
+ *
+ * Find/create a null- complain-mode profile used in learning mode. The
+ * name of the profile is unique and follows the format parent//null-XXX,
+ * where XXX is based on @base or, if that fails or @base is not supplied,
+ * a unique number.
+ *
+ * null profiles are added to the profile list but the list does not
+ * hold a count on them so that they are automatically released when
+ * not in use.
+ *
+ * Returns: new refcounted profile else NULL on failure
+ */
+struct aa_profile *aa_new_null_profile(struct aa_profile *parent, bool hat,
+ const char *base, gfp_t gfp)
+{
+ struct aa_profile *p, *profile;
+ const char *bname;
+ char *name;
+
+ AA_BUG(!parent);
+
+ if (base) {
+ name = kmalloc(strlen(parent->base.hname) + 8 + strlen(base),
+ gfp);
+ if (name) {
+ sprintf(name, "%s//null-%s", parent->base.hname, base);
+ goto name;
+ }
+ /* fall through to try shorter uniq */
+ }
+
+ name = kmalloc(strlen(parent->base.hname) + 2 + 7 + 8, gfp);
+ if (!name)
+ return NULL;
+ sprintf(name, "%s//null-%x", parent->base.hname,
+ atomic_inc_return(&parent->ns->uniq_null));
+
+name:
+ /* lookup to see if this is a dup creation */
+ bname = basename(name);
+ profile = aa_find_child(parent, bname);
+ if (profile)
+ goto out;
+
+ profile = aa_alloc_profile(name, NULL, gfp);
+ if (!profile)
+ goto fail;
+
+ profile->mode = APPARMOR_COMPLAIN;
+ profile->label.flags |= FLAG_NULL;
+ if (hat)
+ profile->label.flags |= FLAG_HAT;
+ profile->path_flags = parent->path_flags;
+
+ /* released on free_profile */
+ rcu_assign_pointer(profile->parent, aa_get_profile(parent));
+ profile->ns = aa_get_ns(parent->ns);
+ profile->file.dfa = aa_get_dfa(nulldfa);
+ profile->policy.dfa = aa_get_dfa(nulldfa);
+
+ mutex_lock(&profile->ns->lock);
+ p = __find_child(&parent->base.profiles, bname);
+ if (p) {
+ aa_free_profile(profile);
+ profile = aa_get_profile(p);
+ } else {
+ __add_profile(&parent->base.profiles, profile);
+ }
+ mutex_unlock(&profile->ns->lock);
+
+ /* refcount released by caller */
+out:
+ kfree(name);
+
+ return profile;
+
+fail:
+ aa_free_profile(profile);
+ return NULL;
+}
+
/**
* replacement_allowed - test to see if replacement is allowed
* @profile: profile to test if it can be replaced (MAYBE NULL)
ns->unconfined->label.flags |= FLAG_IX_ON_NAME_ERROR |
FLAG_IMMUTIBLE | FLAG_NS_COUNT | FLAG_UNCONFINED;
ns->unconfined->mode = APPARMOR_UNCONFINED;
+ ns->unconfined->file.dfa = aa_get_dfa(nulldfa);
+ ns->unconfined->policy.dfa = aa_get_dfa(nulldfa);
/* ns and ns->unconfined share ns->unconfined refcount */
ns->unconfined->ns = ns;
audit_log_format(ab, " ns=");
audit_log_untrustedstring(ab, aad(sa)->iface.ns);
}
- if (aad(sa)->iface.name) {
+ if (aad(sa)->name) {
audit_log_format(ab, " name=");
- audit_log_untrustedstring(ab, aad(sa)->iface.name);
+ audit_log_untrustedstring(ab, aad(sa)->name);
}
if (aad(sa)->iface.pos)
audit_log_format(ab, " offset=%ld", aad(sa)->iface.pos);
aad(&sa)->iface.pos = e->pos - e->start;
aad(&sa)->iface.ns = ns_name;
if (new)
- aad(&sa)->iface.name = new->base.hname;
+ aad(&sa)->name = new->base.hname;
else
- aad(&sa)->iface.name = name;
+ aad(&sa)->name = name;
aad(&sa)->info = info;
aad(&sa)->error = error;
return 0;
}
+static bool unpack_u16(struct aa_ext *e, u16 *data, const char *name)
+{
+ if (unpack_nameX(e, AA_U16, name)) {
+ if (!inbounds(e, sizeof(u16)))
+ return 0;
+ if (data)
+ *data = le16_to_cpu(get_unaligned((__le16 *) e->pos));
+ e->pos += sizeof(u16);
+ return 1;
+ }
+ return 0;
+}
+
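For reference, a stand-alone sketch (not part of the patch) of the byte-order handling unpack_u16() relies on: the policy blob stores the u16 little-endian, and get_unaligned()/le16_to_cpu() make the read safe for any host order and alignment.

	#include <stdio.h>
	#include <stdint.h>

	/* decode a little-endian u16 from a possibly unaligned position */
	static uint16_t le16_decode(const unsigned char *p)
	{
		return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
	}

	int main(void)
	{
		const unsigned char blob[] = { 0x34, 0x12 };

		printf("0x%04x\n", le16_decode(blob));	/* 0x1234 on any host */
		return 0;
	}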
static bool unpack_u32(struct aa_ext *e, u32 *data, const char *name)
{
if (unpack_nameX(e, AA_U32, name)) {
*/
static bool unpack_trans_table(struct aa_ext *e, struct aa_profile *profile)
{
- void *pos = e->pos;
+ void *saved_pos = e->pos;
/* exec table is optional */
if (unpack_nameX(e, AA_STRUCT, "xtable")) {
fail:
aa_free_domain_entries(&profile->file.trans);
- e->pos = pos;
+ e->pos = saved_pos;
return 0;
}
{
struct aa_profile *profile = NULL;
const char *tmpname, *tmpns = NULL, *name = NULL;
- size_t ns_len;
+ const char *info = "failed to unpack profile";
+ size_t size = 0, ns_len;
struct rhashtable_params params = { 0 };
char *key = NULL;
struct aa_data *data;
tmpname = aa_splitn_fqname(name, strlen(name), &tmpns, &ns_len);
if (tmpns) {
*ns_name = kstrndup(tmpns, ns_len, GFP_KERNEL);
- if (!*ns_name)
+ if (!*ns_name) {
+ info = "out of memory";
goto fail;
+ }
name = tmpname;
}
if (IS_ERR(profile->xmatch)) {
error = PTR_ERR(profile->xmatch);
profile->xmatch = NULL;
+ info = "bad xmatch";
goto fail;
}
/* xmatch_len is not optional if xmatch is set */
if (profile->xmatch) {
- if (!unpack_u32(e, &tmp, NULL))
+ if (!unpack_u32(e, &tmp, NULL)) {
+ info = "missing xmatch len";
goto fail;
+ }
profile->xmatch_len = tmp;
}
(void) unpack_str(e, &profile->disconnected, "disconnected");
/* per profile debug flags (complain, audit) */
- if (!unpack_nameX(e, AA_STRUCT, "flags"))
+ if (!unpack_nameX(e, AA_STRUCT, "flags")) {
+ info = "profile missing flags";
goto fail;
+ }
+ info = "failed to unpack profile flags";
if (!unpack_u32(e, &tmp, NULL))
goto fail;
if (tmp & PACKED_FLAG_HAT)
/* set a default value if path_flags field is not present */
profile->path_flags = PATH_MEDIATE_DELETED;
+ info = "failed to unpack profile capabilities";
if (!unpack_u32(e, &(profile->caps.allow.cap[0]), NULL))
goto fail;
if (!unpack_u32(e, &(profile->caps.audit.cap[0]), NULL))
if (!unpack_u32(e, &tmpcap.cap[0], NULL))
goto fail;
+ info = "failed to unpack upper profile capabilities";
if (unpack_nameX(e, AA_STRUCT, "caps64")) {
/* optional upper half of 64 bit caps */
if (!unpack_u32(e, &(profile->caps.allow.cap[1]), NULL))
goto fail;
}
+ info = "failed to unpack extended profile capabilities";
if (unpack_nameX(e, AA_STRUCT, "capsx")) {
/* optional extended caps mediation mask */
if (!unpack_u32(e, &(profile->caps.extended.cap[0]), NULL))
goto fail;
}
- if (!unpack_rlimits(e, profile))
+ if (!unpack_rlimits(e, profile)) {
+ info = "failed to unpack profile rlimits";
goto fail;
+ }
+
+ size = unpack_array(e, "net_allowed_af");
+ if (size) {
+ for (i = 0; i < size; i++) {
+ /* discard extraneous rules that this kernel will
+ * never request
+ */
+ if (i >= AF_MAX) {
+ u16 tmp;
+
+ if (!unpack_u16(e, &tmp, NULL) ||
+ !unpack_u16(e, &tmp, NULL) ||
+ !unpack_u16(e, &tmp, NULL))
+ goto fail;
+ continue;
+ }
+ if (!unpack_u16(e, &profile->net.allow[i], NULL))
+ goto fail;
+ if (!unpack_u16(e, &profile->net.audit[i], NULL))
+ goto fail;
+ if (!unpack_u16(e, &profile->net.quiet[i], NULL))
+ goto fail;
+ }
+ if (!unpack_nameX(e, AA_ARRAYEND, NULL))
+ goto fail;
+ }
+ if (VERSION_LT(e->version, v7)) {
+ /* pre v7 policy always allowed these */
+ profile->net.allow[AF_UNIX] = 0xffff;
+ profile->net.allow[AF_NETLINK] = 0xffff;
+ }
if (unpack_nameX(e, AA_STRUCT, "policydb")) {
/* generic policy dfa - optional and may be NULL */
+ info = "failed to unpack policydb";
profile->policy.dfa = unpack_dfa(e);
if (IS_ERR(profile->policy.dfa)) {
error = PTR_ERR(profile->policy.dfa);
if (IS_ERR(profile->file.dfa)) {
error = PTR_ERR(profile->file.dfa);
profile->file.dfa = NULL;
+ info = "failed to unpack profile file rules";
goto fail;
} else if (profile->file.dfa) {
if (!unpack_u32(e, &profile->file.start, "dfa_start"))
} else
profile->file.dfa = aa_get_dfa(nulldfa);
- if (!unpack_trans_table(e, profile))
+ if (!unpack_trans_table(e, profile)) {
+ info = "failed to unpack profile transition table";
goto fail;
+ }
if (unpack_nameX(e, AA_STRUCT, "data")) {
+ info = "out of memory";
profile->data = kzalloc(sizeof(*profile->data), GFP_KERNEL);
if (!profile->data)
goto fail;
params.hashfn = strhash;
params.obj_cmpfn = datacmp;
-	if (rhashtable_init(profile->data, &params))
+	if (rhashtable_init(profile->data, &params)) {
+ info = "failed to init key, value hash table";
goto fail;
+ }
while (unpack_strdup(e, &key, NULL)) {
data = kzalloc(sizeof(*data), GFP_KERNEL);
profile->data->p);
}
- if (!unpack_nameX(e, AA_STRUCTEND, NULL))
+ if (!unpack_nameX(e, AA_STRUCTEND, NULL)) {
+ info = "failed to unpack end of key, value data table";
goto fail;
+ }
}
- if (!unpack_nameX(e, AA_STRUCTEND, NULL))
+ if (!unpack_nameX(e, AA_STRUCTEND, NULL)) {
+ info = "failed to unpack end of profile";
goto fail;
+ }
return profile;
name = NULL;
else if (!name)
name = "unknown";
- audit_iface(profile, NULL, name, "failed to unpack profile", e,
- error);
+ audit_iface(profile, NULL, name, info, e, error);
aa_free_profile(profile);
return ERR_PTR(error);
* if not specified use previous version
* Mask off everything that is not kernel abi version
*/
- if (VERSION_LT(e->version, v5) && VERSION_GT(e->version, v7)) {
+ if (VERSION_LT(e->version, v5) || VERSION_GT(e->version, v7)) {
audit_iface(NULL, NULL, NULL, "unsupported interface version",
e, error);
return error;
*
* Determine if an inode having a change applied that's marked ATTR_KILL_PRIV
* affects the security markings on that inode, and if it is, should
- * inode_killpriv() be invoked or the change rejected?
+ * inode_killpriv() be invoked or the change rejected.
*
- * Returns 0 if granted; +ve if granted, but inode_killpriv() is required; and
- * -ve to deny the change.
+ * Returns 1 if security.capability has a value, meaning inode_killpriv()
+ * is required; 0 otherwise, meaning it is not.
*/
int cap_inode_need_killpriv(struct dentry *dentry)
{
bool "Large payload keys"
depends on KEYS
depends on TMPFS
- depends on (CRYPTO_ANSI_CPRNG = y || CRYPTO_DRBG = y)
select CRYPTO_AES
- select CRYPTO_ECB
- select CRYPTO_RNG
+ select CRYPTO_GCM
help
This option provides support for holding large keys within the kernel
(for example Kerberos ticket caches). The data may be stored out to
/* Large capacity key type
*
+ * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
* Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
#include <linux/shmem_fs.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
+#include <linux/random.h>
#include <keys/user-type.h>
#include <keys/big_key-type.h>
-#include <crypto/rng.h>
-#include <crypto/skcipher.h>
+#include <crypto/aead.h>
/*
* Layout of key payload words.
/*
* Key size for big_key data encryption
*/
-#define ENC_KEY_SIZE 16
+#define ENC_KEY_SIZE 32
+
+/*
+ * Authentication tag length
+ */
+#define ENC_AUTHTAG_SIZE 16
/*
* big_key defined keys take an arbitrary string as the description and an
.destroy = big_key_destroy,
.describe = big_key_describe,
.read = big_key_read,
+ /* no ->update(); don't add it without changing big_key_crypt() nonce */
};
/*
- * Crypto names for big_key data encryption
+ * Crypto names for big_key data authenticated encryption
*/
-static const char big_key_rng_name[] = "stdrng";
-static const char big_key_alg_name[] = "ecb(aes)";
+static const char big_key_alg_name[] = "gcm(aes)";
/*
- * Crypto algorithms for big_key data encryption
+ * Crypto algorithms for big_key data authenticated encryption
*/
-static struct crypto_rng *big_key_rng;
-static struct crypto_skcipher *big_key_skcipher;
+static struct crypto_aead *big_key_aead;
/*
- * Generate random key to encrypt big_key data
+ * Since changing the key affects the entire object, we need a mutex.
*/
-static inline int big_key_gen_enckey(u8 *key)
-{
- return crypto_rng_get_bytes(big_key_rng, key, ENC_KEY_SIZE);
-}
+static DEFINE_MUTEX(big_key_aead_lock);
/*
* Encrypt/decrypt big_key data
*/
static int big_key_crypt(enum big_key_op op, u8 *data, size_t datalen, u8 *key)
{
- int ret = -EINVAL;
+ int ret;
struct scatterlist sgio;
- SKCIPHER_REQUEST_ON_STACK(req, big_key_skcipher);
-
- if (crypto_skcipher_setkey(big_key_skcipher, key, ENC_KEY_SIZE)) {
+ struct aead_request *aead_req;
+ /* We always use a zero nonce. The reason we can get away with this is
+ * because we're using a different randomly generated key for every
+ * different encryption. Notably, too, key_type_big_key doesn't define
+ * an .update function, so there's no chance we'll wind up reusing the
+ * key to encrypt updated data. Simply put: one key, one encryption.
+ */
+ u8 zero_nonce[crypto_aead_ivsize(big_key_aead)];
+
+ aead_req = aead_request_alloc(big_key_aead, GFP_KERNEL);
+ if (!aead_req)
+ return -ENOMEM;
+
+ memset(zero_nonce, 0, sizeof(zero_nonce));
+ sg_init_one(&sgio, data, datalen + (op == BIG_KEY_ENC ? ENC_AUTHTAG_SIZE : 0));
+ aead_request_set_crypt(aead_req, &sgio, &sgio, datalen, zero_nonce);
+ aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
+ aead_request_set_ad(aead_req, 0);
+
+ mutex_lock(&big_key_aead_lock);
+ if (crypto_aead_setkey(big_key_aead, key, ENC_KEY_SIZE)) {
ret = -EAGAIN;
goto error;
}
-
- skcipher_request_set_tfm(req, big_key_skcipher);
- skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
- NULL, NULL);
-
- sg_init_one(&sgio, data, datalen);
- skcipher_request_set_crypt(req, &sgio, &sgio, datalen, NULL);
-
if (op == BIG_KEY_ENC)
- ret = crypto_skcipher_encrypt(req);
+ ret = crypto_aead_encrypt(aead_req);
else
- ret = crypto_skcipher_decrypt(req);
-
- skcipher_request_zero(req);
-
+ ret = crypto_aead_decrypt(aead_req);
error:
+ mutex_unlock(&big_key_aead_lock);
+ aead_request_free(aead_req);
return ret;
}
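One practical consequence of the switch to GCM, sketched here in plain user-space arithmetic (not part of the patch): the sealed buffer is the ciphertext, which has the same length as the plaintext, followed by the 16-byte authentication tag, so callers size buffers as datalen + ENC_AUTHTAG_SIZE instead of rounding up to a cipher block, as seen just below.

	#include <stdio.h>
	#include <stddef.h>

	#define ENC_AUTHTAG_SIZE 16	/* GCM tag length used above */

	int main(void)
	{
		size_t datalen = 1000;	/* illustrative payload size */
		size_t enclen = datalen + ENC_AUTHTAG_SIZE;

		/* with ecb(aes) this used to be ALIGN(datalen, blocksize) */
		printf("sealed size: %zu bytes\n", enclen);	/* 1016 */
		return 0;
	}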
*
* File content is stored encrypted with randomly generated key.
*/
- size_t enclen = ALIGN(datalen, crypto_skcipher_blocksize(big_key_skcipher));
+ size_t enclen = datalen + ENC_AUTHTAG_SIZE;
loff_t pos = 0;
- /* prepare aligned data to encrypt */
data = kmalloc(enclen, GFP_KERNEL);
if (!data)
return -ENOMEM;
-
memcpy(data, prep->data, datalen);
- memset(data + datalen, 0x00, enclen - datalen);
/* generate random key */
enckey = kmalloc(ENC_KEY_SIZE, GFP_KERNEL);
ret = -ENOMEM;
goto error;
}
-
- ret = big_key_gen_enckey(enckey);
- if (ret)
+ ret = get_random_bytes_wait(enckey, ENC_KEY_SIZE);
+ if (unlikely(ret))
goto err_enckey;
/* encrypt aligned data */
- ret = big_key_crypt(BIG_KEY_ENC, data, enclen, enckey);
+ ret = big_key_crypt(BIG_KEY_ENC, data, datalen, enckey);
if (ret)
goto err_enckey;
*path = file->f_path;
path_get(path);
fput(file);
- kfree(data);
+ kzfree(data);
} else {
/* Just store the data in a buffer */
void *data = kmalloc(datalen, GFP_KERNEL);
err_fput:
fput(file);
err_enckey:
- kfree(enckey);
+ kzfree(enckey);
error:
- kfree(data);
+ kzfree(data);
return ret;
}
path_put(path);
}
- kfree(prep->payload.data[big_key_data]);
+ kzfree(prep->payload.data[big_key_data]);
}
/*
path->mnt = NULL;
path->dentry = NULL;
}
- kfree(key->payload.data[big_key_data]);
+ kzfree(key->payload.data[big_key_data]);
key->payload.data[big_key_data] = NULL;
}
struct file *file;
u8 *data;
u8 *enckey = (u8 *)key->payload.data[big_key_data];
- size_t enclen = ALIGN(datalen, crypto_skcipher_blocksize(big_key_skcipher));
+ size_t enclen = datalen + ENC_AUTHTAG_SIZE;
loff_t pos = 0;
data = kmalloc(enclen, GFP_KERNEL);
err_fput:
fput(file);
error:
- kfree(data);
+ kzfree(data);
} else {
ret = datalen;
if (copy_to_user(buffer, key->payload.data[big_key_data],
*/
static int __init big_key_init(void)
{
- struct crypto_skcipher *cipher;
- struct crypto_rng *rng;
int ret;
- rng = crypto_alloc_rng(big_key_rng_name, 0, 0);
- if (IS_ERR(rng)) {
- pr_err("Can't alloc rng: %ld\n", PTR_ERR(rng));
- return PTR_ERR(rng);
- }
-
- big_key_rng = rng;
-
- /* seed RNG */
- ret = crypto_rng_reset(rng, NULL, crypto_rng_seedsize(rng));
- if (ret) {
- pr_err("Can't reset rng: %d\n", ret);
- goto error_rng;
- }
-
/* init block cipher */
- cipher = crypto_alloc_skcipher(big_key_alg_name, 0, CRYPTO_ALG_ASYNC);
- if (IS_ERR(cipher)) {
- ret = PTR_ERR(cipher);
+ big_key_aead = crypto_alloc_aead(big_key_alg_name, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(big_key_aead)) {
+ ret = PTR_ERR(big_key_aead);
pr_err("Can't alloc crypto: %d\n", ret);
- goto error_rng;
+ return ret;
+ }
+ ret = crypto_aead_setauthsize(big_key_aead, ENC_AUTHTAG_SIZE);
+ if (ret < 0) {
+ pr_err("Can't set crypto auth tag len: %d\n", ret);
+ goto free_aead;
}
-
- big_key_skcipher = cipher;
ret = register_key_type(&key_type_big_key);
if (ret < 0) {
pr_err("Can't register type: %d\n", ret);
- goto error_cipher;
+ goto free_aead;
}
return 0;
-error_cipher:
- crypto_free_skcipher(big_key_skcipher);
-error_rng:
- crypto_free_rng(big_key_rng);
+free_aead:
+ crypto_free_aead(big_key_aead);
return ret;
}
extern key_ref_t search_my_process_keyrings(struct keyring_search_context *ctx);
extern key_ref_t search_process_keyrings(struct keyring_search_context *ctx);
-extern struct key *find_keyring_by_name(const char *name, bool skip_perm_check);
+extern struct key *find_keyring_by_name(const char *name, bool uid_keyring);
extern int install_user_keyrings(void);
extern int install_thread_keyring_to_cred(struct cred *);
struct key_user *key_user_lookup(kuid_t uid)
{
struct key_user *candidate = NULL, *user;
- struct rb_node *parent = NULL;
- struct rb_node **p;
+ struct rb_node *parent, **p;
try_again:
+ parent = NULL;
p = &key_user_tree.rb_node;
spin_lock(&key_user_lock);
key->flags |= 1 << KEY_FLAG_IN_QUOTA;
if (flags & KEY_ALLOC_BUILT_IN)
key->flags |= 1 << KEY_FLAG_BUILTIN;
+ if (flags & KEY_ALLOC_UID_KEYRING)
+ key->flags |= 1 << KEY_FLAG_UID_KEYRING;
#ifdef KEY_DEBUGGING
key->magic = KEY_DEBUG_MAGIC;
key = key_ref_to_ptr(key_ref);
+ if (test_bit(KEY_FLAG_NEGATIVE, &key->flags)) {
+ ret = -ENOKEY;
+ goto error2;
+ }
+
/* see if we can read it directly */
ret = key_permission(key_ref, KEY_NEED_READ);
if (ret == 0)
goto can_read_key;
if (ret != -EACCES)
- goto error;
+ goto error2;
/* we can't; see if it's searchable from this process's keyrings
* - we automatically take account of the fact that it may be
}
ret = keyctl_change_reqkey_auth(authkey);
- if (ret < 0)
- goto error;
+ if (ret == 0)
+ ret = authkey->serial;
key_put(authkey);
-
- ret = authkey->serial;
error:
return ret;
}
}
struct keyring_read_iterator_context {
- size_t qty;
+ size_t buflen;
size_t count;
key_serial_t __user *buffer;
};
int ret;
kenter("{%s,%d},,{%zu/%zu}",
- key->type->name, key->serial, ctx->count, ctx->qty);
+ key->type->name, key->serial, ctx->count, ctx->buflen);
- if (ctx->count >= ctx->qty)
+ if (ctx->count >= ctx->buflen)
return 1;
ret = put_user(key->serial, ctx->buffer);
return 0;
/* Calculate how much data we could return */
- ctx.qty = nr_keys * sizeof(key_serial_t);
-
if (!buffer || !buflen)
- return ctx.qty;
-
- if (buflen > ctx.qty)
- ctx.qty = buflen;
+ return nr_keys * sizeof(key_serial_t);
/* Copy the IDs of the subscribed keys into the buffer */
ctx.buffer = (key_serial_t __user *)buffer;
+ ctx.buflen = buflen;
ctx.count = 0;
ret = assoc_array_iterate(&keyring->keys, keyring_read_iterator, &ctx);
if (ret < 0) {
/*
* Find a keyring with the specified name.
*
- * All named keyrings in the current user namespace are searched, provided they
- * grant Search permission directly to the caller (unless this check is
- * skipped). Keyrings whose usage points have reached zero or who have been
- * revoked are skipped.
+ * Only keyrings that have nonzero refcount, are not revoked, and are owned by a
+ * user in the current user namespace are considered. If @uid_keyring is %true,
+ * the keyring additionally must have been allocated as a user or user session
+ * keyring; otherwise, it must grant Search permission directly to the caller.
*
* Returns a pointer to the keyring with the keyring's refcount having been
* incremented on success. -ENOKEY is returned if a key could not be found.
*/
-struct key *find_keyring_by_name(const char *name, bool skip_perm_check)
+struct key *find_keyring_by_name(const char *name, bool uid_keyring)
{
struct key *keyring;
int bucket;
if (strcmp(keyring->description, name) != 0)
continue;
- if (!skip_perm_check &&
- key_permission(make_key_ref(keyring, 0),
- KEY_NEED_SEARCH) < 0)
- continue;
+ if (uid_keyring) {
+ if (!test_bit(KEY_FLAG_UID_KEYRING,
+ &keyring->flags))
+ continue;
+ } else {
+ if (key_permission(make_key_ref(keyring, 0),
+ KEY_NEED_SEARCH) < 0)
+ continue;
+ }
/* we've got a match but we might end up racing with
* key_cleanup() if the keyring is currently 'dead'
struct keyring_search_context ctx = {
.index_key.type = key->type,
.index_key.description = key->description,
- .cred = current_cred(),
+ .cred = m->file->f_cred,
.match_data.cmp = lookup_user_key_possessed,
.match_data.raw_data = key,
.match_data.lookup_type = KEYRING_SEARCH_LOOKUP_DIRECT,
}
}
- /* check whether the current task is allowed to view the key (assuming
- * non-possession)
- * - the caller holds a spinlock, and thus the RCU read lock, making our
- * access to __current_cred() safe
- */
+ /* check whether the current task is allowed to view the key */
rc = key_task_permission(key_ref, ctx.cred, KEY_NEED_VIEW);
if (rc < 0)
return 0;
if (IS_ERR(uid_keyring)) {
uid_keyring = keyring_alloc(buf, user->uid, INVALID_GID,
cred, user_keyring_perm,
- KEY_ALLOC_IN_QUOTA,
+ KEY_ALLOC_UID_KEYRING |
+ KEY_ALLOC_IN_QUOTA,
NULL, NULL);
if (IS_ERR(uid_keyring)) {
ret = PTR_ERR(uid_keyring);
session_keyring =
keyring_alloc(buf, user->uid, INVALID_GID,
cred, user_keyring_perm,
- KEY_ALLOC_IN_QUOTA,
+ KEY_ALLOC_UID_KEYRING |
+ KEY_ALLOC_IN_QUOTA,
NULL, NULL);
if (IS_ERR(session_keyring)) {
ret = PTR_ERR(session_keyring);
}
}
+static void free_request_key_auth(struct request_key_auth *rka)
+{
+ if (!rka)
+ return;
+ key_put(rka->target_key);
+ key_put(rka->dest_keyring);
+ if (rka->cred)
+ put_cred(rka->cred);
+ kfree(rka->callout_info);
+ kfree(rka);
+}
+
/*
* Destroy an instantiation authorisation token key.
*/
kenter("{%d}", key->serial);
- if (rka->cred) {
- put_cred(rka->cred);
- rka->cred = NULL;
- }
-
- key_put(rka->target_key);
- key_put(rka->dest_keyring);
- kfree(rka->callout_info);
- kfree(rka);
+ free_request_key_auth(rka);
}
/*
const struct cred *cred = current->cred;
struct key *authkey = NULL;
char desc[20];
- int ret;
+ int ret = -ENOMEM;
kenter("%d,", target->serial);
/* allocate an auth record */
- rka = kmalloc(sizeof(*rka), GFP_KERNEL);
- if (!rka) {
- kleave(" = -ENOMEM");
- return ERR_PTR(-ENOMEM);
- }
- rka->callout_info = kmalloc(callout_len, GFP_KERNEL);
- if (!rka->callout_info) {
- kleave(" = -ENOMEM");
- kfree(rka);
- return ERR_PTR(-ENOMEM);
- }
+ rka = kzalloc(sizeof(*rka), GFP_KERNEL);
+ if (!rka)
+ goto error;
+ rka->callout_info = kmemdup(callout_info, callout_len, GFP_KERNEL);
+ if (!rka->callout_info)
+ goto error_free_rka;
+ rka->callout_len = callout_len;
/* see if the calling process is already servicing the key request of
* another process */
/* if the auth key has been revoked, then the key we're
* servicing is already instantiated */
- if (test_bit(KEY_FLAG_REVOKED, &cred->request_key_auth->flags))
- goto auth_key_revoked;
+ if (test_bit(KEY_FLAG_REVOKED,
+ &cred->request_key_auth->flags)) {
+ up_read(&cred->request_key_auth->sem);
+ ret = -EKEYREVOKED;
+ goto error_free_rka;
+ }
irka = cred->request_key_auth->payload.data[0];
rka->cred = get_cred(irka->cred);
rka->target_key = key_get(target);
rka->dest_keyring = key_get(dest_keyring);
- memcpy(rka->callout_info, callout_info, callout_len);
- rka->callout_len = callout_len;
/* allocate the auth key */
sprintf(desc, "%x", target->serial);
KEY_USR_VIEW, KEY_ALLOC_NOT_IN_QUOTA, NULL);
if (IS_ERR(authkey)) {
ret = PTR_ERR(authkey);
- goto error_alloc;
+ goto error_free_rka;
}
/* construct the auth key */
ret = key_instantiate_and_link(authkey, rka, 0, NULL, NULL);
if (ret < 0)
- goto error_inst;
+ goto error_put_authkey;
kleave(" = {%d,%d}", authkey->serial, refcount_read(&authkey->usage));
return authkey;
-auth_key_revoked:
- up_read(&cred->request_key_auth->sem);
- kfree(rka->callout_info);
- kfree(rka);
- kleave("= -EKEYREVOKED");
- return ERR_PTR(-EKEYREVOKED);
-
-error_inst:
- key_revoke(authkey);
+error_put_authkey:
key_put(authkey);
-error_alloc:
- key_put(rka->target_key);
- key_put(rka->dest_keyring);
- kfree(rka->callout_info);
- kfree(rka);
+error_free_rka:
+ free_request_key_auth(rka);
+error:
kleave("= %d", ret);
return ERR_PTR(ret);
}
* @inode: the object
* @name: attribute name
* @buffer: where to put the result
- * @alloc: unused
+ * @alloc: whether to duplicate the value into newly allocated memory
*
* Returns the size of the attribute or an error code
*/
struct super_block *sbp;
struct inode *ip = (struct inode *)inode;
struct smack_known *isp;
- int ilen;
- int rc = 0;
- if (strcmp(name, XATTR_SMACK_SUFFIX) == 0) {
+ if (strcmp(name, XATTR_SMACK_SUFFIX) == 0)
isp = smk_of_inode(inode);
- ilen = strlen(isp->smk_known);
- *buffer = isp->smk_known;
- return ilen;
- }
+ else {
+ /*
+ * The rest of the Smack xattrs are only on sockets.
+ */
+ sbp = ip->i_sb;
+ if (sbp->s_magic != SOCKFS_MAGIC)
+ return -EOPNOTSUPP;
- /*
- * The rest of the Smack xattrs are only on sockets.
- */
- sbp = ip->i_sb;
- if (sbp->s_magic != SOCKFS_MAGIC)
- return -EOPNOTSUPP;
+ sock = SOCKET_I(ip);
+ if (sock == NULL || sock->sk == NULL)
+ return -EOPNOTSUPP;
- sock = SOCKET_I(ip);
- if (sock == NULL || sock->sk == NULL)
- return -EOPNOTSUPP;
-
- ssp = sock->sk->sk_security;
+ ssp = sock->sk->sk_security;
- if (strcmp(name, XATTR_SMACK_IPIN) == 0)
- isp = ssp->smk_in;
- else if (strcmp(name, XATTR_SMACK_IPOUT) == 0)
- isp = ssp->smk_out;
- else
- return -EOPNOTSUPP;
+ if (strcmp(name, XATTR_SMACK_IPIN) == 0)
+ isp = ssp->smk_in;
+ else if (strcmp(name, XATTR_SMACK_IPOUT) == 0)
+ isp = ssp->smk_out;
+ else
+ return -EOPNOTSUPP;
+ }
- ilen = strlen(isp->smk_known);
- if (rc == 0) {
- *buffer = isp->smk_known;
- rc = ilen;
+ if (alloc) {
+ *buffer = kstrdup(isp->smk_known, GFP_KERNEL);
+ if (*buffer == NULL)
+ return -ENOMEM;
}
- return rc;
+ return strlen(isp->smk_known);
}
static int snd_compress_dev_register(struct snd_device *device)
{
int ret = -EINVAL;
- char str[16];
struct snd_compr *compr;
if (snd_BUG_ON(!device || !device->device_data))
return -EBADFD;
compr = device->device_data;
- pr_debug("reg %s for device %s, direction %d\n", str, compr->name,
+ pr_debug("reg device %s, direction %d\n", compr->name,
compr->direction);
/* register compressed device */
ret = snd_register_device(SNDRV_DEVICE_TYPE_COMPRESS,
u32 pad2; /* alignment */
struct timespec tstamp;
s32 suspended_state;
+ s32 pad3;
struct timespec audio_tstamp;
} __packed;
void __user *puhr;
union hpi_message_buffer_v1 *hm;
union hpi_response_buffer_v1 *hr;
+ u16 msg_size;
u16 res_max_size;
u32 uncopied_bytes;
int err = 0;
}
/* Now read the message size and data from user space. */
- if (get_user(hm->h.size, (u16 __user *)puhm)) {
+ if (get_user(msg_size, (u16 __user *)puhm)) {
err = -EFAULT;
goto out;
}
- if (hm->h.size > sizeof(*hm))
- hm->h.size = sizeof(*hm);
+ if (msg_size > sizeof(*hm))
+ msg_size = sizeof(*hm);
/* printk(KERN_INFO "message size %d\n", hm->h.wSize); */
- uncopied_bytes = copy_from_user(hm, puhm, hm->h.size);
+ uncopied_bytes = copy_from_user(hm, puhm, msg_size);
if (uncopied_bytes) {
HPI_DEBUG_LOG(ERROR, "uncopied bytes %d\n", uncopied_bytes);
err = -EFAULT;
goto out;
}
+	/* Override h.size in case it changed between the two userspace fetches */
+ hm->h.size = msg_size;
+
if (get_user(res_max_size, (u16 __user *)puhr)) {
err = -EFAULT;
goto out;
chip = snd_kcontrol_chip(kcontrol);
uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+ uinfo->count = 1;
uinfo->value.integer.min = ECHOGAIN_MINOUT;
uinfo->value.integer.max = ECHOGAIN_MAXOUT;
uinfo->dimen.d[0] = num_busses_out(chip);
uinfo->dimen.d[1] = num_busses_in(chip);
- uinfo->count = uinfo->dimen.d[0] * uinfo->dimen.d[1];
return 0;
}
chip = snd_kcontrol_chip(kcontrol);
uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+ uinfo->count = 1;
uinfo->value.integer.min = ECHOGAIN_MINOUT;
uinfo->value.integer.max = ECHOGAIN_MAXOUT;
uinfo->dimen.d[0] = num_busses_out(chip);
uinfo->dimen.d[1] = num_pipes_out(chip);
- uinfo->count = uinfo->dimen.d[0] * uinfo->dimen.d[1];
return 0;
}
struct snd_ctl_elem_info *uinfo)
{
uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+ uinfo->count = 96;
uinfo->value.integer.min = ECHOGAIN_MINOUT;
uinfo->value.integer.max = 0;
#ifdef ECHOCARD_HAS_VMIXER
#endif
uinfo->dimen.d[1] = 16; /* 16 channels */
uinfo->dimen.d[2] = 2; /* 0=level, 1=peak */
- uinfo->count = uinfo->dimen.d[0] * uinfo->dimen.d[1] * uinfo->dimen.d[2];
return 0;
}
hda_nid_t pin_nid, u32 stream_tag, int format)
{
struct hdmi_spec *spec = codec->spec;
+ unsigned int param;
int err;
err = spec->ops.pin_hbr_setup(codec, pin_nid, is_hbr_format(format));
return err;
}
+ if (is_haswell_plus(codec)) {
+
+ /*
+ * on recent platforms IEC Coding Type is required for HBR
+ * support, read current Digital Converter settings and set
+ * ICT bitfield if needed.
+ */
+ param = snd_hda_codec_read(codec, cvt_nid, 0,
+ AC_VERB_GET_DIGI_CONVERT_1, 0);
+
+ param = (param >> 16) & ~(AC_DIG3_ICT);
+
+ /* on recent platforms ICT mode is required for HBR support */
+ if (is_hbr_format(format))
+ param |= 0x1;
+
+ snd_hda_codec_write(codec, cvt_nid, 0,
+ AC_VERB_SET_DIGI_CONVERT_3, param);
+ }
+
snd_hda_codec_setup_stream(codec, cvt_nid, stream_tag, 0, format);
return 0;
}
struct usb_interface_descriptor *altsd;
void *control_header;
int i, protocol;
+ int rest_bytes;
/* find audiocontrol interface */
host_iface = &usb_ifnum_to_if(dev, ctrlif)->altsetting[0];
return -EINVAL;
}
+ rest_bytes = (void *)(host_iface->extra + host_iface->extralen) -
+ control_header;
+
+ /* just to be sure -- this shouldn't hit at all */
+ if (rest_bytes <= 0) {
+ dev_err(&dev->dev, "invalid control header\n");
+ return -EINVAL;
+ }
+
switch (protocol) {
default:
dev_warn(&dev->dev,
case UAC_VERSION_1: {
struct uac1_ac_header_descriptor *h1 = control_header;
+ if (rest_bytes < sizeof(*h1)) {
+ dev_err(&dev->dev, "too short v1 buffer descriptor\n");
+ return -EINVAL;
+ }
+
if (!h1->bInCollection) {
dev_info(&dev->dev, "skipping empty audio interface (v1)\n");
return -EINVAL;
}
+ if (rest_bytes < h1->bLength) {
+ dev_err(&dev->dev, "invalid buffer length (v1)\n");
+ return -EINVAL;
+ }
+
if (h1->bLength < sizeof(*h1) + h1->bInCollection) {
dev_err(&dev->dev, "invalid UAC_HEADER (v1)\n");
return -EINVAL;
case USB_ID(0x047F, 0x02F7): /* Plantronics BT-600 */
case USB_ID(0x047F, 0x0415): /* Plantronics BT-300 */
case USB_ID(0x047F, 0xAA05): /* Plantronics DA45 */
+ case USB_ID(0x047F, 0xC022): /* Plantronics C310 */
+ case USB_ID(0x047F, 0xC036): /* Plantronics C520-M */
case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */
case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */
}
pg = get_order(read_size);
- sk->s = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP|__GFP_ZERO, pg);
+ sk->s = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP|__GFP_ZERO|
+ __GFP_NOWARN, pg);
if (!sk->s) {
snd_printk(KERN_WARNING "couldn't __get_free_pages()\n");
goto out;
pg = get_order(write_size);
sk->write_page =
- (void *)__get_free_pages(GFP_KERNEL|__GFP_COMP|__GFP_ZERO, pg);
+ (void *)__get_free_pages(GFP_KERNEL|__GFP_COMP|__GFP_ZERO|
+ __GFP_NOWARN, pg);
if (!sk->write_page) {
snd_printk(KERN_WARNING "couldn't __get_free_pages()\n");
usb_stream_free(sk);
/* kvm attributes for KVM_S390_VM_TOD */
#define KVM_S390_VM_TOD_LOW 0
#define KVM_S390_VM_TOD_HIGH 1
+#define KVM_S390_VM_TOD_EXT 2
+
+struct kvm_s390_vm_tod_clock {
+ __u8 epoch_idx;
+ __u64 tod;
+};
/* kvm attributes for KVM_S390_VM_CPU_MODEL */
/* processor related attributes are r/w */
#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
+#define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
#define X86_FEATURE_INTEL_PT ( 7*32+15) /* Intel Processor Trace */
#define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */
#define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */
#define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */
+#define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (ecx), word 16 */
#define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
# define DISABLE_K6_MTRR (1<<(X86_FEATURE_K6_MTRR & 31))
# define DISABLE_CYRIX_ARR (1<<(X86_FEATURE_CYRIX_ARR & 31))
# define DISABLE_CENTAUR_MCR (1<<(X86_FEATURE_CENTAUR_MCR & 31))
+# define DISABLE_PCID 0
#else
# define DISABLE_VME 0
# define DISABLE_K6_MTRR 0
# define DISABLE_CYRIX_ARR 0
# define DISABLE_CENTAUR_MCR 0
+# define DISABLE_PCID (1<<(X86_FEATURE_PCID & 31))
#endif /* CONFIG_X86_64 */
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
#define DISABLED_MASK1 0
#define DISABLED_MASK2 0
#define DISABLED_MASK3 (DISABLE_CYRIX_ARR|DISABLE_CENTAUR_MCR|DISABLE_K6_MTRR)
-#define DISABLED_MASK4 0
+#define DISABLED_MASK4 (DISABLE_PCID)
#define DISABLED_MASK5 0
#define DISABLED_MASK6 0
#define DISABLED_MASK7 0
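A worked sketch (not from the patch) of how a cpufeatures bit number maps to a DISABLED_MASK word: X86_FEATURE_PCID is defined as (4*32+17), i.e. word 4, bit 17, which is why the 32-bit PCID disable lands in DISABLED_MASK4 above.

	#include <stdio.h>

	#define X86_FEATURE_PCID (4*32 + 17)	/* from cpufeatures.h */

	int main(void)
	{
		printf("word %d, bit %d, mask %#x\n",
		       X86_FEATURE_PCID / 32,		/* 4, selects DISABLED_MASK4 */
		       X86_FEATURE_PCID & 31,		/* 17 */
		       1 << (X86_FEATURE_PCID & 31));	/* 0x20000 */
		return 0;
	}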
--- /dev/null
+#ifndef _ASM_GENERIC_HUGETLB_ENCODE_H_
+#define _ASM_GENERIC_HUGETLB_ENCODE_H_
+
+/*
+ * Several system calls take a flag to request "hugetlb" huge pages.
+ * Without further specification, these system calls will use the
+ * system's default huge page size. If a system supports multiple
+ * huge page sizes, the desired huge page size can be specified in
+ * bits [26:31] of the flag arguments. The value in these 6 bits
+ * will encode the log2 of the huge page size.
+ *
+ * The following definitions are associated with this huge page size
+ * encoding in flag arguments. System call specific header files
+ * that use this encoding should include this file. They can then
+ * provide definitions based on these with their own specific prefix.
+ * For example:
+ * #define MAP_HUGE_SHIFT HUGETLB_FLAG_ENCODE_SHIFT
+ */
+
+#define HUGETLB_FLAG_ENCODE_SHIFT 26
+#define HUGETLB_FLAG_ENCODE_MASK 0x3f
+
+#define HUGETLB_FLAG_ENCODE_64KB (16 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_512KB (19 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_1MB (20 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_2MB (21 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_8MB (23 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_16MB (24 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_256MB (28 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_1GB (30 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_2GB (31 << HUGETLB_FLAG_ENCODE_SHIFT)
+#define HUGETLB_FLAG_ENCODE_16GB (34 << HUGETLB_FLAG_ENCODE_SHIFT)
+
+#endif /* _ASM_GENERIC_HUGETLB_ENCODE_H_ */
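A quick arithmetic check of the encoding above (a sketch, not part of the header): a 2 MiB huge page is 2^21 bytes, so its log2 of 21 is placed in bits [26:31] of the flags word.

	#include <stdio.h>

	#define HUGETLB_FLAG_ENCODE_SHIFT 26

	int main(void)
	{
		unsigned long enc_2mb = 21UL << HUGETLB_FLAG_ENCODE_SHIFT;

		printf("%#lx\n", enc_2mb);	/* 0x54000000 == HUGETLB_FLAG_ENCODE_2MB */
		return 0;
	}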
overrides the coredump filter bits */
#define MADV_DODUMP 17 /* Clear the MADV_DONTDUMP flag */
+#define MADV_WIPEONFORK 18 /* Zero memory on fork, child only */
+#define MADV_KEEPONFORK 19 /* Undo MADV_WIPEONFORK */
+
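A hedged usage sketch for the new flag (assumes a kernel carrying this patch; the numeric fallback is only for installed headers that predate it): after fork(), the child sees the region zero-filled while the parent's copy is untouched.

	#include <sys/mman.h>
	#include <string.h>

	#ifndef MADV_WIPEONFORK
	#define MADV_WIPEONFORK 18	/* matches the definition above */
	#endif

	int main(void)
	{
		size_t len = 4096;
		void *secret = mmap(NULL, len, PROT_READ | PROT_WRITE,
				    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (secret == MAP_FAILED)
			return 1;
		memset(secret, 0xA5, len);
		/* children forked after this see zeroes in the region */
		return madvise(secret, len, MADV_WIPEONFORK) ? 1 : 0;
	}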
/* compatibility flags */
#define MAP_FILE 0
-/*
- * When MAP_HUGETLB is set bits [26:31] encode the log2 of the huge page size.
- * This gives us 6 bits, which is enough until someone invents 128 bit address
- * spaces.
- *
- * Assume these are all power of twos.
- * When 0 use the default page size.
- */
-#define MAP_HUGE_SHIFT 26
-#define MAP_HUGE_MASK 0x3f
-
#define PKEY_DISABLE_ACCESS 0x1
#define PKEY_DISABLE_WRITE 0x2
#define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS |\
struct drm_syncobj_create {
__u32 handle;
+#define DRM_SYNCOBJ_CREATE_SIGNALED (1 << 0)
__u32 flags;
};
__u32 pad;
};
+#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL (1 << 0)
+#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT (1 << 1)
+struct drm_syncobj_wait {
+ __u64 handles;
+ /* absolute timeout */
+ __s64 timeout_nsec;
+ __u32 count_handles;
+ __u32 flags;
+ __u32 first_signaled; /* only valid when not waiting all */
+ __u32 pad;
+};
+
+struct drm_syncobj_array {
+ __u64 handles;
+ __u32 count_handles;
+ __u32 pad;
+};
+
#if defined(__cplusplus)
}
#endif
#define DRM_IOCTL_SYNCOBJ_DESTROY DRM_IOWR(0xC0, struct drm_syncobj_destroy)
#define DRM_IOCTL_SYNCOBJ_HANDLE_TO_FD DRM_IOWR(0xC1, struct drm_syncobj_handle)
#define DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE DRM_IOWR(0xC2, struct drm_syncobj_handle)
+#define DRM_IOCTL_SYNCOBJ_WAIT DRM_IOWR(0xC3, struct drm_syncobj_wait)
+#define DRM_IOCTL_SYNCOBJ_RESET DRM_IOWR(0xC4, struct drm_syncobj_array)
+#define DRM_IOCTL_SYNCOBJ_SIGNAL DRM_IOWR(0xC5, struct drm_syncobj_array)
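A hedged user-space sketch of the new wait ioctl (assumes fd is an open DRM device node, handle a valid syncobj, and a drm.h that carries these definitions; the include path can differ per system):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/drm.h>

	/* wait for one syncobj to signal, with an absolute timeout in ns */
	static int wait_syncobj(int fd, uint32_t handle, int64_t deadline_ns)
	{
		struct drm_syncobj_wait wait = {
			.handles = (uintptr_t)&handle,	/* u64 pointer to a handle array */
			.timeout_nsec = deadline_ns,
			.count_handles = 1,
			.flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL,
		};

		return ioctl(fd, DRM_IOCTL_SYNCOBJ_WAIT, &wait);
	}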
/**
* Device specific ioctls should only be in their respective headers
#define DRM_I915_GEM_CONTEXT_GETPARAM 0x34
#define DRM_I915_GEM_CONTEXT_SETPARAM 0x35
#define DRM_I915_PERF_OPEN 0x36
+#define DRM_I915_PERF_ADD_CONFIG 0x37
+#define DRM_I915_PERF_REMOVE_CONFIG 0x38
#define DRM_IOCTL_I915_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t)
#define DRM_IOCTL_I915_FLUSH DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLUSH)
#define DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_GETPARAM, struct drm_i915_gem_context_param)
#define DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_SETPARAM, struct drm_i915_gem_context_param)
#define DRM_IOCTL_I915_PERF_OPEN DRM_IOW(DRM_COMMAND_BASE + DRM_I915_PERF_OPEN, struct drm_i915_perf_open_param)
+#define DRM_IOCTL_I915_PERF_ADD_CONFIG DRM_IOW(DRM_COMMAND_BASE + DRM_I915_PERF_ADD_CONFIG, struct drm_i915_perf_oa_config)
+#define DRM_IOCTL_I915_PERF_REMOVE_CONFIG DRM_IOW(DRM_COMMAND_BASE + DRM_I915_PERF_REMOVE_CONFIG, __u64)
/* Allow drivers to submit batchbuffers directly to hardware, relying
* on the security mechanisms provided by hardware.
*/
#define I915_PARAM_HAS_EXEC_BATCH_FIRST 48
+/* Query whether DRM_I915_GEM_EXECBUFFER2 supports supplying an array of
+ * drm_i915_gem_exec_fence structures. See I915_EXEC_FENCE_ARRAY.
+ */
+#define I915_PARAM_HAS_EXEC_FENCE_ARRAY 49
+
typedef struct drm_i915_getparam {
__s32 param;
/*
__u64 rsvd2;
};
+struct drm_i915_gem_exec_fence {
+ /**
+ * User's handle for a drm_syncobj to wait on or signal.
+ */
+ __u32 handle;
+
+#define I915_EXEC_FENCE_WAIT (1<<0)
+#define I915_EXEC_FENCE_SIGNAL (1<<1)
+ __u32 flags;
+};
+
struct drm_i915_gem_execbuffer2 {
/**
* List of gem_exec_object2 structs
__u32 DR1;
__u32 DR4;
__u32 num_cliprects;
- /** This is a struct drm_clip_rect *cliprects */
+ /**
+ * This is a struct drm_clip_rect *cliprects if I915_EXEC_FENCE_ARRAY
+ * is not set. If I915_EXEC_FENCE_ARRAY is set, then this is a
+ * struct drm_i915_gem_exec_fence *fences.
+ */
__u64 cliprects_ptr;
#define I915_EXEC_RING_MASK (7<<0)
#define I915_EXEC_DEFAULT (0<<0)
* element).
*/
#define I915_EXEC_BATCH_FIRST (1<<18)
-#define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_BATCH_FIRST<<1))
+
+/* Setting I915_EXEC_FENCE_ARRAY implies that num_cliprects and cliprects_ptr
+ * define an array of i915_gem_exec_fence structures which specify a set of
+ * dma fences to wait upon or signal.
+ */
+#define I915_EXEC_FENCE_ARRAY (1<<19)
+
+#define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_ARRAY<<1))
#define I915_EXEC_CONTEXT_ID_MASK (0xffffffff)
#define i915_execbuffer2_set_context_id(eb2, context) \
DRM_I915_PERF_RECORD_MAX /* non-ABI */
};
+/**
+ * Structure to upload perf dynamic configuration into the kernel.
+ */
+struct drm_i915_perf_oa_config {
+ /** String formatted like "%08x-%04x-%04x-%04x-%012x" */
+ char uuid[36];
+
+ __u32 n_mux_regs;
+ __u32 n_boolean_regs;
+ __u32 n_flex_regs;
+
+ __u64 __user mux_regs_ptr;
+ __u64 __user boolean_regs_ptr;
+ __u64 __user flex_regs_ptr;
+};
+
#if defined(__cplusplus)
}
#endif
#define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE
-enum bpf_sockmap_flags {
- BPF_SOCKMAP_UNSPEC,
- BPF_SOCKMAP_STRPARSER,
- __MAX_BPF_SOCKMAP_FLAG
-};
-
/* If BPF_F_ALLOW_OVERRIDE flag is used in BPF_PROG_ATTACH command
* to the given target_fd cgroup the descendent cgroup will be able to
* override effective bpf program that was inherited from this cgroup
* int bpf_redirect(ifindex, flags)
* redirect to another netdev
* @ifindex: ifindex of the net device
- * @flags: bit 0 - if set, redirect to ingress instead of egress
- * other bits - reserved
- * Return: TC_ACT_REDIRECT
+ * @flags:
+ * cls_bpf:
+ * bit 0 - if set, redirect to ingress instead of egress
+ * other bits - reserved
+ * xdp_bpf:
+ * all bits - reserved
+ * Return: cls_bpf: TC_ACT_REDIRECT on success or TC_ACT_SHOT on error
+ * xdp_bpf: XDP_REDIRECT on success or XDP_ABORTED on error
+ * int bpf_redirect_map(map, key, flags)
+ * redirect to endpoint in map
+ * @map: pointer to dev map
+ * @key: index in map to lookup
+ * @flags: --
+ * Return: XDP_REDIRECT on success or XDP_ABORTED on error
*
* u32 bpf_get_route_realm(skb)
* retrieve a dst's tclassid
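A hedged sketch of the new redirect_map helper in use from an XDP program (samples/bpf-style helper stub; the devmap object named tx_ports here is only a stand-in, since real BPF_MAP_TYPE_DEVMAP wiring is done by the userspace loader):

	#include <linux/bpf.h>

	/* samples/bpf-style stub resolving to the new helper id */
	static int (*bpf_redirect_map)(void *map, int key, int flags) =
		(void *) BPF_FUNC_redirect_map;

	/* placeholder for a loader-provided BPF_MAP_TYPE_DEVMAP */
	struct devmap_placeholder { int dummy; } tx_ports;

	__attribute__((section("xdp"), used))
	int xdp_redirect_prog(struct xdp_md *ctx)
	{
		/* forward every packet to the device stored at index 0 */
		return bpf_redirect_map(&tx_ports, 0, 0);
	}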
FN(skb_adjust_room), \
FN(redirect_map), \
FN(sk_redirect_map), \
- FN(sock_map_update),
+ FN(sock_map_update), \
/* integer value in 'imm' field of BPF_CALL instruction selects which helper
* function eBPF program intends to call
__u32 family;
__u32 type;
__u32 protocol;
+ __u32 mark;
+ __u32 priority;
};
#define XDP_PACKET_HEADROOM 256
/* User return codes for XDP prog type.
* A valid XDP program must return one of these defined values. All other
- * return codes are reserved for future use. Unknown return codes will result
- * in packet drop.
+ * return codes are reserved for future use. Unknown return codes will
+ * result in packet drops and a warning via bpf_warn_invalid_xdp_action().
*/
enum xdp_action {
XDP_ABORTED = 0,
XDP_DROP,
XDP_PASS,
XDP_TX,
+ XDP_REDIRECT,
};
/* user accessible metadata for XDP packet hook
struct kvm_ppc_smmu_info {
__u64 flags;
__u32 slb_size;
- __u32 pad;
+ __u16 data_keys; /* # storage keys supported for data */
+ __u16 instr_keys; /* # storage keys supported for instructions */
struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ];
};
#ifndef _UAPI_LINUX_MMAN_H
#define _UAPI_LINUX_MMAN_H
-#include <uapi/asm/mman.h>
+#include <asm/mman.h>
+#include <asm-generic/hugetlb_encode.h>
#define MREMAP_MAYMOVE 1
#define MREMAP_FIXED 2
#define OVERCOMMIT_ALWAYS 1
#define OVERCOMMIT_NEVER 2
+/*
+ * Huge page size encoding when MAP_HUGETLB is specified, and a huge page
+ * size other than the default is desired. See hugetlb_encode.h.
+ * All known huge page size encodings are provided here. It is the
+ * responsibility of the application to know which sizes are supported on
+ * the running system. See mmap(2) man page for details.
+ */
+#define MAP_HUGE_SHIFT HUGETLB_FLAG_ENCODE_SHIFT
+#define MAP_HUGE_MASK HUGETLB_FLAG_ENCODE_MASK
+
+#define MAP_HUGE_64KB HUGETLB_FLAG_ENCODE_64KB
+#define MAP_HUGE_512KB HUGETLB_FLAG_ENCODE_512KB
+#define MAP_HUGE_1MB HUGETLB_FLAG_ENCODE_1MB
+#define MAP_HUGE_2MB HUGETLB_FLAG_ENCODE_2MB
+#define MAP_HUGE_8MB HUGETLB_FLAG_ENCODE_8MB
+#define MAP_HUGE_16MB HUGETLB_FLAG_ENCODE_16MB
+#define MAP_HUGE_256MB HUGETLB_FLAG_ENCODE_256MB
+#define MAP_HUGE_1GB HUGETLB_FLAG_ENCODE_1GB
+#define MAP_HUGE_2GB HUGETLB_FLAG_ENCODE_2GB
+#define MAP_HUGE_16GB HUGETLB_FLAG_ENCODE_16GB
+
#endif /* _UAPI_LINUX_MMAN_H */
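A hedged usage sketch tying the encoding to mmap() (assumes 2 MiB huge pages are configured on the running system; the fallback define covers older installed headers):

	#include <stdio.h>
	#include <sys/mman.h>

	#ifndef MAP_HUGE_2MB
	#define MAP_HUGE_2MB (21 << 26)	/* log2(2 MiB) in bits [26:31] */
	#endif

	int main(void)
	{
		size_t len = 2 * 1024 * 1024;
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_2MB,
			       -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");	/* e.g. no 2 MiB pages reserved */
			return 1;
		}
		munmap(p, len);
		return 0;
	}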
If it's a GCC-compiled .c file, the error may be because the function
uses an inline asm() statement which has a "call" instruction. An
asm() statement with a call instruction must declare the use of the
- stack pointer in its output operand. For example, on x86_64:
+ stack pointer in its output operand. On x86_64, this means adding
+ the ASM_CALL_CONSTRAINT as an output constraint:
- register void *__sp asm("rsp");
- asm volatile("call func" : "+r" (__sp));
+ asm volatile("call func" : ASM_CALL_CONSTRAINT);
Otherwise the stack frame may not get created before the call.
break;
case 0x89:
- if (rex == 0x48 && modrm == 0xe5) {
+ if (rex_w && !rex_r && modrm_mod == 3 && modrm_reg == 4) {
- /* mov %rsp, %rbp */
+ /* mov %rsp, reg */
*type = INSN_STACK;
op->src.type = OP_SRC_REG;
op->src.reg = CFI_SP;
op->dest.type = OP_DEST_REG;
- op->dest.reg = CFI_BP;
+ op->dest.reg = op_to_cfi_reg[modrm_rm][rex_b];
break;
}
case 0x8d:
if (sib == 0x24 && rex_w && !rex_b && !rex_x) {
- /* lea disp(%rsp), reg */
*type = INSN_STACK;
- op->src.type = OP_SRC_ADD;
+ if (!insn.displacement.value) {
+ /* lea (%rsp), reg */
+ op->src.type = OP_SRC_REG;
+ } else {
+ /* lea disp(%rsp), reg */
+ op->src.type = OP_SRC_ADD;
+ op->src.offset = insn.displacement.value;
+ }
op->src.reg = CFI_SP;
- op->src.offset = insn.displacement.value;
op->dest.type = OP_DEST_REG;
op->dest.reg = op_to_cfi_reg[modrm_reg][rex_r];
switch (op->src.type) {
case OP_SRC_REG:
- if (op->src.reg == CFI_SP && op->dest.reg == CFI_BP) {
+ if (op->src.reg == CFI_SP && op->dest.reg == CFI_BP &&
+ cfa->base == CFI_SP &&
+ regs[CFI_BP].base == CFI_CFA &&
+ regs[CFI_BP].offset == -cfa->offset) {
+
+ /* mov %rsp, %rbp */
+ cfa->base = op->dest.reg;
+ state->bp_scratch = false;
+ }
- if (cfa->base == CFI_SP &&
- regs[CFI_BP].base == CFI_CFA &&
- regs[CFI_BP].offset == -cfa->offset) {
+ else if (op->src.reg == CFI_SP &&
+ op->dest.reg == CFI_BP && state->drap) {
- /* mov %rsp, %rbp */
- cfa->base = op->dest.reg;
- state->bp_scratch = false;
- }
+ /* drap: mov %rsp, %rbp */
+ regs[CFI_BP].base = CFI_BP;
+ regs[CFI_BP].offset = -state->stack_size;
+ state->bp_scratch = false;
+ }
- else if (state->drap) {
+ else if (op->src.reg == CFI_SP && cfa->base == CFI_SP) {
- /* drap: mov %rsp, %rbp */
- regs[CFI_BP].base = CFI_BP;
- regs[CFI_BP].offset = -state->stack_size;
- state->bp_scratch = false;
- }
+ /*
+ * mov %rsp, %reg
+ *
+ * This is needed for the rare case where GCC
+ * does:
+ *
+ * mov %rsp, %rax
+ * ...
+ * mov %rax, %rsp
+ */
+ state->vals[op->dest.reg].base = CFI_CFA;
+ state->vals[op->dest.reg].offset = -state->stack_size;
}
else if (op->dest.reg == cfa->base) {
return -1;
}
- sec->data = elf_getdata(s, NULL);
- if (!sec->data) {
- WARN_ELF("elf_getdata");
- return -1;
- }
-
- if (sec->data->d_off != 0 ||
- sec->data->d_size != sec->sh.sh_size) {
- WARN("unexpected data attributes for %s", sec->name);
- return -1;
+ if (sec->sh.sh_size != 0) {
+ sec->data = elf_getdata(s, NULL);
+ if (!sec->data) {
+ WARN_ELF("elf_getdata");
+ return -1;
+ }
+ if (sec->data->d_off != 0 ||
+ sec->data->d_size != sec->sh.sh_size) {
+ WARN("unexpected data attributes for %s",
+ sec->name);
+ return -1;
+ }
}
-
- sec->len = sec->data->d_size;
+ sec->len = sec->sh.sh_size;
}
/* sanity check, one more call to elf_nextscn() should return NULL */
strcat(relaname, base->name);
sec = elf_create_section(elf, relaname, sizeof(GElf_Rela), 0);
+ free(relaname);
if (!sec)
return NULL;
struct section *sec;
Elf_Scn *s;
+ /* Update section headers for changed sections: */
list_for_each_entry(sec, &elf->sections, list) {
if (sec->changed) {
s = elf_getscn(elf->elf, sec->idx);
WARN_ELF("elf_getscn");
return -1;
}
- if (!gelf_update_shdr (s, &sec->sh)) {
+ if (!gelf_update_shdr(s, &sec->sh)) {
WARN_ELF("gelf_update_shdr");
return -1;
}
}
}
+ /* Make sure the new section header entries get updated properly. */
+ elf_flagelf(elf->elf, ELF_C_SET, ELF_F_DIRTY);
+
+ /* Write all changes to the file. */
if (elf_update(elf->elf, ELF_C_WRITE) < 0) {
WARN_ELF("elf_update");
return -1;
tools/perf
-tools/arch/alpha/include/asm/barrier.h
-tools/arch/arm/include/asm/barrier.h
-tools/arch/arm64/include/asm/barrier.h
-tools/arch/ia64/include/asm/barrier.h
-tools/arch/mips/include/asm/barrier.h
-tools/arch/powerpc/include/asm/barrier.h
-tools/arch/s390/include/asm/barrier.h
-tools/arch/sh/include/asm/barrier.h
-tools/arch/sparc/include/asm/barrier.h
-tools/arch/sparc/include/asm/barrier_32.h
-tools/arch/sparc/include/asm/barrier_64.h
-tools/arch/tile/include/asm/barrier.h
-tools/arch/x86/include/asm/barrier.h
-tools/arch/x86/include/asm/cmpxchg.h
-tools/arch/x86/include/asm/cpufeatures.h
-tools/arch/x86/include/asm/disabled-features.h
-tools/arch/x86/include/asm/required-features.h
-tools/arch/x86/include/uapi/asm/svm.h
-tools/arch/x86/include/uapi/asm/vmx.h
-tools/arch/x86/include/uapi/asm/kvm.h
-tools/arch/x86/include/uapi/asm/kvm_perf.h
-tools/arch/x86/lib/memcpy_64.S
-tools/arch/x86/lib/memset_64.S
-tools/arch/s390/include/uapi/asm/kvm_perf.h
-tools/arch/s390/include/uapi/asm/sie.h
-tools/arch/xtensa/include/asm/barrier.h
+tools/arch
tools/scripts
tools/build
-tools/arch/x86/include/asm/atomic.h
-tools/arch/x86/include/asm/rmwcc.h
+tools/include
tools/lib/traceevent
tools/lib/api
tools/lib/bpf
tools/lib/bitmap.c
tools/lib/str_error_r.c
tools/lib/vsprintf.c
-tools/include/asm/alternative-asm.h
-tools/include/asm/atomic.h
-tools/include/asm/barrier.h
-tools/include/asm/bug.h
-tools/include/asm-generic/atomic-gcc.h
-tools/include/asm-generic/barrier.h
-tools/include/asm-generic/bitops/arch_hweight.h
-tools/include/asm-generic/bitops/atomic.h
-tools/include/asm-generic/bitops/const_hweight.h
-tools/include/asm-generic/bitops/__ffs.h
-tools/include/asm-generic/bitops/__ffz.h
-tools/include/asm-generic/bitops/__fls.h
-tools/include/asm-generic/bitops/find.h
-tools/include/asm-generic/bitops/fls64.h
-tools/include/asm-generic/bitops/fls.h
-tools/include/asm-generic/bitops/hweight.h
-tools/include/asm-generic/bitops.h
-tools/include/linux/atomic.h
-tools/include/linux/bitops.h
-tools/include/linux/compiler.h
-tools/include/linux/compiler-gcc.h
-tools/include/linux/coresight-pmu.h
-tools/include/linux/bug.h
-tools/include/linux/filter.h
-tools/include/linux/hash.h
-tools/include/linux/kernel.h
-tools/include/linux/list.h
-tools/include/linux/log2.h
-tools/include/uapi/asm-generic/fcntl.h
-tools/include/uapi/asm-generic/ioctls.h
-tools/include/uapi/asm-generic/mman-common.h
-tools/include/uapi/asm-generic/mman.h
-tools/include/uapi/drm/drm.h
-tools/include/uapi/drm/i915_drm.h
-tools/include/uapi/linux/bpf.h
-tools/include/uapi/linux/bpf_common.h
-tools/include/uapi/linux/fcntl.h
-tools/include/uapi/linux/hw_breakpoint.h
-tools/include/uapi/linux/kvm.h
-tools/include/uapi/linux/mman.h
-tools/include/uapi/linux/perf_event.h
-tools/include/uapi/linux/sched.h
-tools/include/uapi/linux/stat.h
-tools/include/uapi/linux/vhost.h
-tools/include/uapi/sound/asound.h
-tools/include/linux/poison.h
-tools/include/linux/rbtree.h
-tools/include/linux/rbtree_augmented.h
-tools/include/linux/refcount.h
-tools/include/linux/string.h
-tools/include/linux/stringify.h
-tools/include/linux/types.h
-tools/include/linux/err.h
-tools/include/linux/bitmap.h
-tools/include/linux/time64.h
-tools/arch/*/include/uapi/asm/mman.h
-tools/arch/*/include/uapi/asm/perf_regs.h
libperf-y += header.o
-libperf-y += sym-handling.o
libperf-y += kvm-stat.o
libperf-$(CONFIG_DWARF) += dwarf-regs.o
+++ /dev/null
-/*
- * Architecture specific ELF symbol handling and relocation mapping.
- *
- * Copyright 2017 IBM Corp.
- * Author(s): Thomas Richter <tmricht@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License (version 2 only)
- * as published by the Free Software Foundation.
- */
-
-#include "symbol.h"
-
-#ifdef HAVE_LIBELF_SUPPORT
-bool elf__needs_adjust_symbols(GElf_Ehdr ehdr)
-{
- if (ehdr.e_type == ET_EXEC)
- return false;
- return ehdr.e_type == ET_REL || ehdr.e_type == ET_DYN;
-}
-
-void arch__adjust_sym_map_offset(GElf_Sym *sym,
- GElf_Shdr *shdr __maybe_unused,
- struct map *map)
-{
- if (map->type == MAP__FUNCTION)
- sym->st_value += map->start;
-}
-#endif
callchain_param.mode = CHAIN_FOLDED;
return 0;
}
-
- pr_err("Invalid callchain mode: %s\n", value);
return -1;
}
callchain_param.order_set = true;
return 0;
}
-
- pr_err("Invalid callchain order: %s\n", value);
return -1;
}
callchain_param.branch_callstack = 1;
return 0;
}
-
- pr_err("Invalid callchain sort key: %s\n", value);
return -1;
}
callchain_param.value = CCVAL_COUNT;
return 0;
}
-
- pr_err("Invalid callchain config key: %s\n", value);
return -1;
}
return ret;
}
- if (!strcmp(var, "print-type"))
- return parse_callchain_mode(value);
- if (!strcmp(var, "order"))
- return parse_callchain_order(value);
- if (!strcmp(var, "sort-key"))
- return parse_callchain_sort_key(value);
+ if (!strcmp(var, "print-type")){
+ int ret;
+ ret = parse_callchain_mode(value);
+ if (ret == -1)
+ pr_err("Invalid callchain mode: %s\n", value);
+ return ret;
+ }
+ if (!strcmp(var, "order")){
+ int ret;
+ ret = parse_callchain_order(value);
+ if (ret == -1)
+ pr_err("Invalid callchain order: %s\n", value);
+ return ret;
+ }
+ if (!strcmp(var, "sort-key")){
+ int ret;
+ ret = parse_callchain_sort_key(value);
+ if (ret == -1)
+ pr_err("Invalid callchain sort key: %s\n", value);
+ return ret;
+ }
if (!strcmp(var, "threshold")) {
callchain_param.min_percent = strtod(value, &endptr);
if (value == endptr) {
return evsel;
}
+static bool perf_event_can_profile_kernel(void)
+{
+ return geteuid() == 0 || perf_event_paranoid() == -1;
+}
+
struct perf_evsel *perf_evsel__new_cycles(bool precise)
{
struct perf_event_attr attr = {
.type = PERF_TYPE_HARDWARE,
.config = PERF_COUNT_HW_CPU_CYCLES,
- .exclude_kernel = geteuid() != 0,
+ .exclude_kernel = !perf_event_can_profile_kernel(),
};
struct perf_evsel *evsel;
void __weak arch__sym_update(struct symbol *s __maybe_unused,
GElf_Sym *sym __maybe_unused) { }
-void __weak arch__adjust_sym_map_offset(GElf_Sym *sym, GElf_Shdr *shdr,
- struct map *map __maybe_unused)
-{
- sym->st_value -= shdr->sh_addr - shdr->sh_offset;
-}
-
int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
struct symsrc *runtime_ss, int kmodule)
{
/* Adjust symbol to map to file offset */
if (adjust_kernel_syms)
- arch__adjust_sym_map_offset(&sym, &shdr, map);
+ sym.st_value -= shdr.sh_addr - shdr.sh_offset;
if (strcmp(section_name,
(curr_dso->short_name +
#ifdef HAVE_LIBELF_SUPPORT
bool elf__needs_adjust_symbols(GElf_Ehdr ehdr);
void arch__sym_update(struct symbol *s, GElf_Sym *sym);
-void arch__adjust_sym_map_offset(GElf_Sym *sym,
- GElf_Shdr *shdr __maybe_unused,
- struct map *map __maybe_unused);
#endif
#define SYMBOL_A 0
#include "syscalltbl.h"
#include <stdlib.h>
+#include <linux/compiler.h>
#ifdef HAVE_SYSCALL_TABLE
-#include <linux/compiler.h>
#include <string.h>
#include "string2.h"
#include "util.h"
set_bit(ND_CMD_ARS_START, &acpi_desc->bus_cmd_force_en);
set_bit(ND_CMD_ARS_STATUS, &acpi_desc->bus_cmd_force_en);
set_bit(ND_CMD_CLEAR_ERROR, &acpi_desc->bus_cmd_force_en);
- set_bit(ND_CMD_GET_CONFIG_SIZE, &acpi_desc->dimm_cmd_force_en);
- set_bit(ND_CMD_GET_CONFIG_DATA, &acpi_desc->dimm_cmd_force_en);
- set_bit(ND_CMD_SET_CONFIG_DATA, &acpi_desc->dimm_cmd_force_en);
}
static int nfit_test_blk_do_io(struct nd_blk_region *ndbr, resource_size_t dpa,
override MAKEFLAGS =
endif
+ifneq ($(KBUILD_SRC),)
+override LDFLAGS =
+endif
+
BUILD := $(O)
ifndef BUILD
BUILD := $(KBUILD_OUTPUT)
export BUILD
all:
- for TARGET in $(TARGETS); do \
+ @for TARGET in $(TARGETS); do \
BUILD_TARGET=$$BUILD/$$TARGET; \
mkdir $$BUILD_TARGET -p; \
make OUTPUT=$$BUILD_TARGET -C $$TARGET;\
done;
run_tests: all
- for TARGET in $(TARGETS); do \
+ @for TARGET in $(TARGETS); do \
BUILD_TARGET=$$BUILD/$$TARGET; \
make OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests;\
done;
hotplug:
- for TARGET in $(TARGETS_HOTPLUG); do \
+ @for TARGET in $(TARGETS_HOTPLUG); do \
BUILD_TARGET=$$BUILD/$$TARGET; \
make OUTPUT=$$BUILD_TARGET -C $$TARGET;\
done;
run_hotplug: hotplug
- for TARGET in $(TARGETS_HOTPLUG); do \
+ @for TARGET in $(TARGETS_HOTPLUG); do \
BUILD_TARGET=$$BUILD/$$TARGET; \
make OUTPUT=$$BUILD_TARGET -C $$TARGET run_full_test;\
done;
clean_hotplug:
- for TARGET in $(TARGETS_HOTPLUG); do \
+ @for TARGET in $(TARGETS_HOTPLUG); do \
BUILD_TARGET=$$BUILD/$$TARGET; \
make OUTPUT=$$BUILD_TARGET -C $$TARGET clean;\
done;
ifdef INSTALL_PATH
@# Ask all targets to install their files
mkdir -p $(INSTALL_PATH)
- for TARGET in $(TARGETS); do \
+ @for TARGET in $(TARGETS); do \
BUILD_TARGET=$$BUILD/$$TARGET; \
make OUTPUT=$$BUILD_TARGET -C $$TARGET INSTALL_PATH=$(INSTALL_PATH)/$$TARGET install; \
done;
endif
clean:
- for TARGET in $(TARGETS); do \
+ @for TARGET in $(TARGETS); do \
BUILD_TARGET=$$BUILD/$$TARGET; \
make OUTPUT=$$BUILD_TARGET -C $$TARGET clean;\
done;
unsigned int start, end, possible_cpus = 0;
char buff[128];
FILE *fp;
+ int n;
fp = fopen(fcpu, "r");
if (!fp) {
}
while (fgets(buff, sizeof(buff), fp)) {
- if (sscanf(buff, "%u-%u", &start, &end) == 2) {
- possible_cpus = start == 0 ? end + 1 : 0;
- break;
+ n = sscanf(buff, "%u-%u", &start, &end);
+ if (n == 0) {
+ printf("Failed to retrieve # possible CPUs!\n");
+ exit(1);
+ } else if (n == 1) {
+ end = start;
}
+ possible_cpus = start == 0 ? end + 1 : 0;
+ break;
}
-
fclose(fp);
- if (!possible_cpus) {
- printf("Failed to retrieve # possible CPUs!\n");
- exit(1);
- }
return possible_cpus;
}
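To make the new parsing concrete, here is a standalone sketch (not part of
the patch) of how the two formats of /sys/devices/system/cpu/possible are
now handled: "0-7" parses two fields and yields 8, while a lone "0" falls
into the n == 1 case instead of being treated as a failure.

	#include <stdio.h>

	int main(void)
	{
		const char *samples[] = { "0-7", "0" };
		unsigned int start, end, i;

		for (i = 0; i < 2; i++) {
			int n = sscanf(samples[i], "%u-%u", &start, &end);

			if (n == 1)
				end = start;	/* single CPU listed */
			printf("\"%s\" -> %u possible CPUs\n",
			       samples[i], start == 0 ? end + 1 : 0);
		}
		return 0;
	}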
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
+TEST_GEN_PROGS := step_after_suspend_test
+
ifeq ($(ARCH),x86)
-TEST_GEN_PROGS := breakpoint_test
+TEST_GEN_PROGS += breakpoint_test
endif
ifneq (,$(filter $(ARCH),aarch64 arm64))
-TEST_GEN_PROGS := breakpoint_test_arm64
+TEST_GEN_PROGS += breakpoint_test_arm64
endif
-TEST_GEN_PROGS += step_after_suspend_test
-
include ../lib.mk
#!/bin/sh
# description: Register/unregister many kprobe events
+[ -f kprobe_events ] || exit_unsupported # this is configurable
+
# ftrace fentry skip size depends on the machine architecture.
# Currently HAVE_KPROBES_ON_FTRACE defined on x86 and powerpc64le
case `uname -m` in
include ../lib.mk
all:
- for DIR in $(SUBDIRS); do \
+ @for DIR in $(SUBDIRS); do \
BUILD_TARGET=$(OUTPUT)/$$DIR; \
mkdir $$BUILD_TARGET -p; \
make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
+	if [ -e $$DIR/$(TEST_PROGS) ]; then \
+		rsync -a $$DIR/$(TEST_PROGS) $$BUILD_TARGET/; \
+	fi; \
done
override define RUN_TESTS
- $(OUTPUT)/run.sh
+ @cd $(OUTPUT); ./run.sh
endef
override define INSTALL_RULE
endef
override define CLEAN
- for DIR in $(SUBDIRS); do \
+ @for DIR in $(SUBDIRS); do \
BUILD_TARGET=$(OUTPUT)/$$DIR; \
mkdir $$BUILD_TARGET -p; \
make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
CFLAGS := $(CFLAGS) -Wall -D_GNU_SOURCE
LDLIBS := $(LDLIBS) -lm
+ifneq (,$(filter $(ARCH),x86))
TEST_GEN_FILES := msr aperf
+endif
TEST_PROGS := run.sh
EVALUATE_ONLY=0
-max_cpus=$(($(nproc)-1))
+if ! uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ | grep -q x86; then
+ echo "$0 # Skipped: Test can only run on x86 architectures."
+ exit 0
+fi
-# compile programs
-gcc aperf.c -Wall -D_GNU_SOURCE -o aperf -lm
-[ $? -ne 0 ] && echo "Problem compiling aperf.c." && exit 1
-gcc -o msr msr.c -lm
-[ $? -ne 0 ] && echo "Problem compiling msr.c." && exit 1
+max_cpus=$(($(nproc)-1))
function run_test () {
OUTPUT := $(shell pwd)
endif
+# The following are built by lib.mk common compile rules.
+# TEST_CUSTOM_PROGS should be used by tests that require
+# custom build rule and prevent common build rule use.
+# TEST_PROGS are for test shell scripts.
+# TEST_CUSTOM_PROGS and TEST_PROGS will be run by common run_tests
+# and install targets. Common clean doesn't touch them.
TEST_GEN_PROGS := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS))
+TEST_GEN_PROGS_EXTENDED := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS_EXTENDED))
TEST_GEN_FILES := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_FILES))
all: $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES)
test_num=`echo $$test_num+1 | bc`; \
echo "selftests: $$BASENAME_TEST"; \
echo "========================================"; \
- if [ ! -x $$BASENAME_TEST ]; then \
+ if [ ! -x $$TEST ]; then \
echo "selftests: Warning: file $$BASENAME_TEST is not executable, correct this.";\
echo "not ok 1..$$test_num selftests: $$BASENAME_TEST [FAIL]"; \
else \
- cd `dirname $$TEST` > /dev/null; (./$$BASENAME_TEST && echo "ok 1..$$test_num selftests: $$BASENAME_TEST [PASS]") || echo "not ok 1..$$test_num selftests: $$BASENAME_TEST [FAIL]"; cd - > /dev/null;\
+ cd `dirname $$TEST` > /dev/null; (./$$BASENAME_TEST > /tmp/$$BASENAME_TEST 2>&1 && echo "ok 1..$$test_num selftests: $$BASENAME_TEST [PASS]") || echo "not ok 1..$$test_num selftests: $$BASENAME_TEST [FAIL]"; cd - > /dev/null;\
fi; \
done;
endef
run_tests: all
- $(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_PROGS))
+ifneq ($(KBUILD_SRC),)
+	@if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)" != "X" ]; then \
+		rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
+	fi
+	@if [ "X$(TEST_PROGS)" != "X" ]; then \
+		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(OUTPUT)/$(TEST_PROGS)); \
+	else \
+		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS)); \
+	fi
+else
+ $(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(TEST_PROGS))
+endif
define INSTALL_RULE
@if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)" != "X" ]; then \
echo "rsync -a $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(INSTALL_PATH)/"; \
rsync -a $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(INSTALL_PATH)/; \
fi
- @if [ "X$(TEST_GEN_PROGS)$(TEST_GEN_PROGS_EXTENDED)$(TEST_GEN_FILES)" != "X" ]; then \
+ @if [ "X$(TEST_GEN_PROGS)$(TEST_CUSTOM_PROGS)$(TEST_GEN_PROGS_EXTENDED)$(TEST_GEN_FILES)" != "X" ]; then \
mkdir -p ${INSTALL_PATH}; \
- echo "rsync -a $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(INSTALL_PATH)/"; \
- rsync -a $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(INSTALL_PATH)/; \
+ echo "rsync -a $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(INSTALL_PATH)/"; \
+ rsync -a $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(INSTALL_PATH)/; \
fi
endef
endif
define EMIT_TESTS
- @for TEST in $(TEST_GEN_PROGS) $(TEST_PROGS); do \
+ @for TEST in $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(TEST_PROGS); do \
BASENAME_TEST=`basename $$TEST`; \
- echo "(./$$BASENAME_TEST && echo \"selftests: $$BASENAME_TEST [PASS]\") || echo \"selftests: $$BASENAME_TEST [FAIL]\""; \
+ echo "(./$$BASENAME_TEST > /tmp/$$BASENAME_TEST 2>&1 && echo \"selftests: $$BASENAME_TEST [PASS]\") || echo \"selftests: $$BASENAME_TEST [FAIL]\""; \
done;
endef
emit_tests:
$(EMIT_TESTS)
+# define if isn't already. It is undefined in make O= case.
+ifeq ($(RM),)
+RM := rm -f
+endif
+
define CLEAN
$(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN)
endef
clean:
$(CLEAN)
+# When make O= with kselftest target from main level
+# the following aren't defined.
+#
+ifneq ($(KBUILD_SRC),)
+LINK.c = $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH)
+COMPILE.S = $(CC) $(ASFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c
+LINK.S = $(CC) $(ASFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH)
+endif
+
$(OUTPUT)/%:%.c
$(LINK.c) $^ $(LDLIBS) -o $@
include ../lib.mk
override define RUN_TESTS
- @./mq_open_tests /test1 || echo "selftests: mq_open_tests [FAIL]"
- @./mq_perf_tests || echo "selftests: mq_perf_tests [FAIL]"
+ $(OUTPUT)/mq_open_tests /test1 || echo "selftests: mq_open_tests [FAIL]"
+	$(OUTPUT)/mq_perf_tests || echo "selftests: mq_perf_tests [FAIL]"
endef
override define EMIT_TESTS
reuseport_bpf_cpu
reuseport_bpf_numa
reuseport_dualstack
+reuseaddr_conflict
TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh rtnetlink.sh
TEST_GEN_FILES = socket
-TEST_GEN_FILES += psock_fanout psock_tpacket
-TEST_GEN_FILES += reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
-TEST_GEN_FILES += reuseport_dualstack msg_zerocopy
+TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy
+TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
+TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict
include ../lib.mk
#include <unistd.h>
#ifndef SO_EE_ORIGIN_ZEROCOPY
-#define SO_EE_ORIGIN_ZEROCOPY SO_EE_ORIGIN_UPAGE
+#define SO_EE_ORIGIN_ZEROCOPY 5
#endif
#ifndef SO_ZEROCOPY
exit 0
fi
-ip -Version 2>/dev/null >/dev/null
+ip link show 2>/dev/null >/dev/null
if [ $? -ne 0 ];then
echo "SKIP: Could not run test without the ip tool"
exit 0
--- /dev/null
+/*
+ * Test for the regression introduced by
+ *
+ * b9470c27607b ("inet: kill smallest_size and smallest_port")
+ *
+ * If we open an ipv4 socket on a port with reuseaddr we shouldn't reset the
+ * bind bucket (tb) when we open the ipv6 counterpart, which is what was
+ * happening previously.
+ */
+#include <errno.h>
+#include <error.h>
+#include <arpa/inet.h>
+#include <netinet/in.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <sys/socket.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#define PORT 9999
+
+int open_port(int ipv6, int any)
+{
+ int fd = -1;
+ int reuseaddr = 1;
+ int v6only = 1;
+ int addrlen;
+ int ret = -1;
+ struct sockaddr *addr;
+ int family = ipv6 ? AF_INET6 : AF_INET;
+
+ struct sockaddr_in6 addr6 = {
+ .sin6_family = AF_INET6,
+ .sin6_port = htons(PORT),
+ .sin6_addr = in6addr_any
+ };
+ struct sockaddr_in addr4 = {
+ .sin_family = AF_INET,
+ .sin_port = htons(PORT),
+ .sin_addr.s_addr = any ? htonl(INADDR_ANY) : inet_addr("127.0.0.1"),
+ };
+
+	if (ipv6) {
+		addr = (struct sockaddr *)&addr6;
+		addrlen = sizeof(addr6);
+	} else {
+		addr = (struct sockaddr *)&addr4;
+		addrlen = sizeof(addr4);
+	}
+
+ if ((fd = socket(family, SOCK_STREAM, IPPROTO_TCP)) < 0) {
+ perror("socket");
+ goto out;
+ }
+
+ if (ipv6 && setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, (void*)&v6only,
+ sizeof(v6only)) < 0) {
+ perror("setsockopt IPV6_V6ONLY");
+ goto out;
+ }
+
+ if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &reuseaddr,
+ sizeof(reuseaddr)) < 0) {
+ perror("setsockopt SO_REUSEADDR");
+ goto out;
+ }
+
+ if (bind(fd, addr, addrlen) < 0) {
+ perror("bind");
+ goto out;
+ }
+
+ if (any)
+ return fd;
+
+ if (listen(fd, 1) < 0) {
+ perror("listen");
+ goto out;
+ }
+ return fd;
+out:
+ close(fd);
+ return ret;
+}
+
+int main(void)
+{
+ int listenfd;
+ int fd1, fd2;
+
+ fprintf(stderr, "Opening 127.0.0.1:%d\n", PORT);
+ listenfd = open_port(0, 0);
+ if (listenfd < 0)
+ error(1, errno, "Couldn't open listen socket");
+ fprintf(stderr, "Opening INADDR_ANY:%d\n", PORT);
+ fd1 = open_port(0, 1);
+ if (fd1 >= 0)
+		error(1, 0, "Was allowed to create an ipv4 reuseport on an already bound non-reuseport socket");
+ fprintf(stderr, "Opening in6addr_any:%d\n", PORT);
+ fd1 = open_port(1, 1);
+ if (fd1 < 0)
+ error(1, errno, "Couldn't open ipv6 reuseport");
+ fprintf(stderr, "Opening INADDR_ANY:%d\n", PORT);
+ fd2 = open_port(0, 1);
+ if (fd2 >= 0)
+		error(1, 0, "Was allowed to create an ipv4 reuseport on an already bound non-reuseport socket");
+ close(fd1);
+ fprintf(stderr, "Opening INADDR_ANY:%d after closing ipv6 socket\n", PORT);
+ fd1 = open_port(0, 1);
+ if (fd1 >= 0)
+ error(1, 0, "Was allowed to create an ipv4 reuseport on an already bound non-reuseport socket with no ipv6");
+	fprintf(stderr, "Success\n");
+ return 0;
+}
-TEST_GEN_PROGS := seccomp_bpf
-CFLAGS += -Wl,-no-as-needed -Wall
-LDFLAGS += -lpthread
+all:
include ../lib.mk
-$(TEST_GEN_PROGS): seccomp_bpf.c ../kselftest_harness.h
- $(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
+.PHONY: all clean
+
+BINARIES := seccomp_bpf seccomp_benchmark
+CFLAGS += -Wl,-no-as-needed -Wall
+
+seccomp_bpf: seccomp_bpf.c ../kselftest_harness.h
+ $(CC) $(CFLAGS) $(LDFLAGS) -lpthread $< -o $@
+
+TEST_PROGS += $(BINARIES)
+EXTRA_CLEAN := $(BINARIES)
+
+all: $(BINARIES)
--- /dev/null
+/*
+ * Strictly speaking, this is not a test. But it can report during test
+ * runs so that relative performance can be measured.
+ */
+#define _GNU_SOURCE
+#include <assert.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+#include <unistd.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <sys/prctl.h>
+#include <sys/syscall.h>
+#include <sys/types.h>
+
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+
+unsigned long long timing(clockid_t clk_id, unsigned long long samples)
+{
+ pid_t pid, ret;
+ unsigned long long i;
+ struct timespec start, finish;
+
+ pid = getpid();
+ assert(clock_gettime(clk_id, &start) == 0);
+ for (i = 0; i < samples; i++) {
+ ret = syscall(__NR_getpid);
+ assert(pid == ret);
+ }
+ assert(clock_gettime(clk_id, &finish) == 0);
+
+ i = finish.tv_sec - start.tv_sec;
+ i *= 1000000000;
+ i += finish.tv_nsec - start.tv_nsec;
+
+ printf("%lu.%09lu - %lu.%09lu = %llu\n",
+ finish.tv_sec, finish.tv_nsec,
+ start.tv_sec, start.tv_nsec,
+ i);
+
+ return i;
+}
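+
+/*
+ * Worked example of the arithmetic above (illustrative only): with
+ * start = 2.900000000s and finish = 8.150000000s, i becomes
+ * (8 - 2) * 1000000000 + (150000000 - 900000000) = 5250000000 ns,
+ * i.e. 5.25 seconds.
+ */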
+
+unsigned long long calibrate(void)
+{
+ unsigned long long i;
+
+ printf("Calibrating reasonable sample size...\n");
+
+ for (i = 5; ; i++) {
+ unsigned long long samples = 1 << i;
+
+ /* Find something that takes more than 5 seconds to run. */
+ if (timing(CLOCK_REALTIME, samples) / 1000000000ULL > 5)
+ return samples;
+ }
+}
+
+int main(int argc, char *argv[])
+{
+ struct sock_filter filter[] = {
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
+ };
+ struct sock_fprog prog = {
+ .len = (unsigned short)ARRAY_SIZE(filter),
+ .filter = filter,
+ };
+ long ret;
+ unsigned long long samples;
+ unsigned long long native, filtered;
+
+ if (argc > 1)
+ samples = strtoull(argv[1], NULL, 0);
+ else
+ samples = calibrate();
+
+ printf("Benchmarking %llu samples...\n", samples);
+
+ native = timing(CLOCK_PROCESS_CPUTIME_ID, samples) / samples;
+ printf("getpid native: %llu ns\n", native);
+
+ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
+ assert(ret == 0);
+
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
+ assert(ret == 0);
+
+ filtered = timing(CLOCK_PROCESS_CPUTIME_ID, samples) / samples;
+ printf("getpid RET_ALLOW: %llu ns\n", filtered);
+
+ printf("Estimated seccomp overhead per syscall: %llu ns\n",
+ filtered - native);
+
+ if (filtered == native)
+		printf("Try running again with more samples.\n");
+
+ return 0;
+}
*/
#include <sys/types.h>
-#include <asm/siginfo.h>
-#define __have_siginfo_t 1
-#define __have_sigval_t 1
-#define __have_sigevent_t 1
+
+/*
+ * glibc 2.26 and later have SIGSYS in siginfo_t. Before that,
+ * we need to use the kernel's siginfo.h file and trick glibc
+ * into accepting it.
+ */
+#if !__GLIBC_PREREQ(2, 26)
+# include <asm/siginfo.h>
+# define __have_siginfo_t 1
+# define __have_sigval_t 1
+# define __have_sigevent_t 1
+#endif
#include <errno.h>
#include <linux/filter.h>
#define SECCOMP_MODE_FILTER 2
#endif
-#ifndef SECCOMP_RET_KILL
-#define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */
-#define SECCOMP_RET_TRAP 0x00030000U /* disallow and force a SIGSYS */
-#define SECCOMP_RET_ERRNO 0x00050000U /* returns an errno */
-#define SECCOMP_RET_TRACE 0x7ff00000U /* pass to a tracer or disallow */
-#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */
-
-/* Masks for the return value sections. */
-#define SECCOMP_RET_ACTION 0x7fff0000U
-#define SECCOMP_RET_DATA 0x0000ffffU
-
+#ifndef SECCOMP_RET_ALLOW
struct seccomp_data {
int nr;
__u32 arch;
};
#endif
+#ifndef SECCOMP_RET_KILL_PROCESS
+#define SECCOMP_RET_KILL_PROCESS 0x80000000U /* kill the process */
+#define SECCOMP_RET_KILL_THREAD 0x00000000U /* kill the thread */
+#endif
+#ifndef SECCOMP_RET_KILL
+#define SECCOMP_RET_KILL SECCOMP_RET_KILL_THREAD
+#define SECCOMP_RET_TRAP 0x00030000U /* disallow and force a SIGSYS */
+#define SECCOMP_RET_ERRNO 0x00050000U /* returns an errno */
+#define SECCOMP_RET_TRACE 0x7ff00000U /* pass to a tracer or disallow */
+#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */
+#endif
+#ifndef SECCOMP_RET_LOG
+#define SECCOMP_RET_LOG 0x7ffc0000U /* allow after logging */
+#endif
+
+#ifndef __NR_seccomp
+# if defined(__i386__)
+# define __NR_seccomp 354
+# elif defined(__x86_64__)
+# define __NR_seccomp 317
+# elif defined(__arm__)
+# define __NR_seccomp 383
+# elif defined(__aarch64__)
+# define __NR_seccomp 277
+# elif defined(__hppa__)
+# define __NR_seccomp 338
+# elif defined(__powerpc__)
+# define __NR_seccomp 358
+# elif defined(__s390__)
+# define __NR_seccomp 348
+# else
+# warning "seccomp syscall number unknown for this architecture"
+# define __NR_seccomp 0xffff
+# endif
+#endif
+
+#ifndef SECCOMP_SET_MODE_STRICT
+#define SECCOMP_SET_MODE_STRICT 0
+#endif
+
+#ifndef SECCOMP_SET_MODE_FILTER
+#define SECCOMP_SET_MODE_FILTER 1
+#endif
+
+#ifndef SECCOMP_GET_ACTION_AVAIL
+#define SECCOMP_GET_ACTION_AVAIL 2
+#endif
+
+#ifndef SECCOMP_FILTER_FLAG_TSYNC
+#define SECCOMP_FILTER_FLAG_TSYNC 1
+#endif
+
+#ifndef SECCOMP_FILTER_FLAG_LOG
+#define SECCOMP_FILTER_FLAG_LOG 2
+#endif
+
+#ifndef seccomp
+int seccomp(unsigned int op, unsigned int flags, void *args)
+{
+ errno = 0;
+ return syscall(__NR_seccomp, op, flags, args);
+}
+#endif
+
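+/*
+ * Usage sketch for the wrapper above (illustrative only, not part of
+ * the test suite): install a permissive filter through the raw
+ * seccomp(2) entry point. As with the prctl() path, the caller needs
+ * no_new_privs (or CAP_SYS_ADMIN) first.
+ *
+ *	struct sock_filter filter[] = {
+ *		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
+ *	};
+ *	struct sock_fprog prog = {
+ *		.len = (unsigned short)ARRAY_SIZE(filter),
+ *		.filter = filter,
+ *	};
+ *
+ *	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
+ *	seccomp(SECCOMP_SET_MODE_FILTER, 0, &prog);
+ */
+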
#if __BYTE_ORDER == __LITTLE_ENDIAN
#define syscall_arg(_n) (offsetof(struct seccomp_data, args[_n]))
#elif __BYTE_ORDER == __BIG_ENDIAN
}
}
-/* Tests kernel support by checking for a copy_from_user() fault on * NULL. */
+/* Tests kernel support by checking for a copy_from_user() fault on NULL. */
TEST(mode_filter_support)
{
long ret;
EXPECT_EQ(EINVAL, errno);
}
+TEST(log_all)
+{
+ struct sock_filter filter[] = {
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_LOG),
+ };
+ struct sock_fprog prog = {
+ .len = (unsigned short)ARRAY_SIZE(filter),
+ .filter = filter,
+ };
+ long ret;
+ pid_t parent = getppid();
+
+ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
+ ASSERT_EQ(0, ret);
+
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
+ ASSERT_EQ(0, ret);
+
+ /* getppid() should succeed and be logged (no check for logging) */
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+}
+
TEST_SIGNAL(unknown_ret_is_kill_inside, SIGSYS)
{
struct sock_filter filter[] = {
close(fd);
}
+/* A thread task that will die via seccomp filter violation when asked. */
+void *kill_thread(void *data)
+{
+ bool die = (bool)data;
+
+ if (die) {
+ prctl(PR_GET_SECCOMP, 0, 0, 0, 0);
+ return (void *)SIBLING_EXIT_FAILURE;
+ }
+
+ return (void *)SIBLING_EXIT_UNKILLED;
+}
+
+/* Prepare a thread that will kill itself or both of us. */
+void kill_thread_or_group(struct __test_metadata *_metadata, bool kill_process)
+{
+ pthread_t thread;
+ void *status;
+ /* Kill only when calling __NR_prctl. */
+ struct sock_filter filter_thread[] = {
+ BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
+ offsetof(struct seccomp_data, nr)),
+ BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL_THREAD),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
+ };
+ struct sock_fprog prog_thread = {
+ .len = (unsigned short)ARRAY_SIZE(filter_thread),
+ .filter = filter_thread,
+ };
+ struct sock_filter filter_process[] = {
+ BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
+ offsetof(struct seccomp_data, nr)),
+ BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL_PROCESS),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
+ };
+ struct sock_fprog prog_process = {
+ .len = (unsigned short)ARRAY_SIZE(filter_process),
+ .filter = filter_process,
+ };
+
+ ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+ TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!");
+ }
+
+ ASSERT_EQ(0, seccomp(SECCOMP_SET_MODE_FILTER, 0,
+ kill_process ? &prog_process : &prog_thread));
+
+ /*
+ * Add the KILL_THREAD rule again to make sure that the KILL_PROCESS
+ * flag cannot be downgraded by a new filter.
+ */
+ ASSERT_EQ(0, seccomp(SECCOMP_SET_MODE_FILTER, 0, &prog_thread));
+
+ /* Start a thread that will exit immediately. */
+ ASSERT_EQ(0, pthread_create(&thread, NULL, kill_thread, (void *)false));
+ ASSERT_EQ(0, pthread_join(thread, &status));
+ ASSERT_EQ(SIBLING_EXIT_UNKILLED, (unsigned long)status);
+
+ /* Start a thread that will die immediately. */
+ ASSERT_EQ(0, pthread_create(&thread, NULL, kill_thread, (void *)true));
+ ASSERT_EQ(0, pthread_join(thread, &status));
+ ASSERT_NE(SIBLING_EXIT_FAILURE, (unsigned long)status);
+
+ /*
+ * If we get here, only the spawned thread died. Let the parent know
+ * the whole process didn't die (i.e. this thread, the spawner,
+ * stayed running).
+ */
+ exit(42);
+}
+
+TEST(KILL_thread)
+{
+ int status;
+ pid_t child_pid;
+
+ child_pid = fork();
+ ASSERT_LE(0, child_pid);
+ if (child_pid == 0) {
+ kill_thread_or_group(_metadata, false);
+ _exit(38);
+ }
+
+ ASSERT_EQ(child_pid, waitpid(child_pid, &status, 0));
+
+ /* If only the thread was killed, we'll see exit 42. */
+ ASSERT_TRUE(WIFEXITED(status));
+ ASSERT_EQ(42, WEXITSTATUS(status));
+}
+
+TEST(KILL_process)
+{
+ int status;
+ pid_t child_pid;
+
+ child_pid = fork();
+ ASSERT_LE(0, child_pid);
+ if (child_pid == 0) {
+ kill_thread_or_group(_metadata, true);
+ _exit(38);
+ }
+
+ ASSERT_EQ(child_pid, waitpid(child_pid, &status, 0));
+
+ /* If the entire process was killed, we'll see SIGSYS. */
+ ASSERT_TRUE(WIFSIGNALED(status));
+ ASSERT_EQ(SIGSYS, WTERMSIG(status));
+}
+
/* TODO(wad) add 64-bit versus 32-bit arg tests. */
TEST(arg_out_of_range)
{
EXPECT_EQ(EINVAL, errno);
}
+#define ERRNO_FILTER(name, errno) \
+ struct sock_filter _read_filter_##name[] = { \
+ BPF_STMT(BPF_LD|BPF_W|BPF_ABS, \
+ offsetof(struct seccomp_data, nr)), \
+ BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 0, 1), \
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | errno), \
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW), \
+ }; \
+ struct sock_fprog prog_##name = { \
+ .len = (unsigned short)ARRAY_SIZE(_read_filter_##name), \
+ .filter = _read_filter_##name, \
+ }
+
+/* Make sure basic errno values are correctly passed through a filter. */
TEST(ERRNO_valid)
{
- struct sock_filter filter[] = {
- BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
- offsetof(struct seccomp_data, nr)),
- BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 0, 1),
- BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | E2BIG),
- BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
- };
- struct sock_fprog prog = {
- .len = (unsigned short)ARRAY_SIZE(filter),
- .filter = filter,
- };
+ ERRNO_FILTER(valid, E2BIG);
long ret;
pid_t parent = getppid();
ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
ASSERT_EQ(0, ret);
- ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog_valid);
ASSERT_EQ(0, ret);
EXPECT_EQ(parent, syscall(__NR_getppid));
EXPECT_EQ(E2BIG, errno);
}
+/* Make sure an errno of zero is correctly handled by the arch code. */
TEST(ERRNO_zero)
{
- struct sock_filter filter[] = {
- BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
- offsetof(struct seccomp_data, nr)),
- BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 0, 1),
- BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | 0),
- BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
- };
- struct sock_fprog prog = {
- .len = (unsigned short)ARRAY_SIZE(filter),
- .filter = filter,
- };
+ ERRNO_FILTER(zero, 0);
long ret;
pid_t parent = getppid();
ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
ASSERT_EQ(0, ret);
- ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog_zero);
ASSERT_EQ(0, ret);
EXPECT_EQ(parent, syscall(__NR_getppid));
EXPECT_EQ(0, read(0, NULL, 0));
}
+/*
+ * The SECCOMP_RET_DATA mask is 16 bits wide, but errno is smaller.
+ * This tests that the errno value gets capped correctly, fixed by
+ * 580c57f10768 ("seccomp: cap SECCOMP_RET_ERRNO data to MAX_ERRNO").
+ */
TEST(ERRNO_capped)
{
- struct sock_filter filter[] = {
- BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
- offsetof(struct seccomp_data, nr)),
- BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 0, 1),
- BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | 4096),
- BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
- };
- struct sock_fprog prog = {
- .len = (unsigned short)ARRAY_SIZE(filter),
- .filter = filter,
- };
+ ERRNO_FILTER(capped, 4096);
long ret;
pid_t parent = getppid();
ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
ASSERT_EQ(0, ret);
- ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog_capped);
ASSERT_EQ(0, ret);
EXPECT_EQ(parent, syscall(__NR_getppid));
EXPECT_EQ(4095, errno);
}
+/*
+ * Filters are processed in reverse order: last applied is executed first.
+ * Since only the SECCOMP_RET_ACTION mask is tested for return values, the
+ * SECCOMP_RET_DATA mask results will follow the most recently applied
+ * matching filter return (and not the lowest or highest value).
+ */
+TEST(ERRNO_order)
+{
+ ERRNO_FILTER(first, 11);
+ ERRNO_FILTER(second, 13);
+ ERRNO_FILTER(third, 12);
+ long ret;
+ pid_t parent = getppid();
+
+ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
+ ASSERT_EQ(0, ret);
+
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog_first);
+ ASSERT_EQ(0, ret);
+
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog_second);
+ ASSERT_EQ(0, ret);
+
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog_third);
+ ASSERT_EQ(0, ret);
+
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+ EXPECT_EQ(-1, read(0, NULL, 0));
+ EXPECT_EQ(12, errno);
+}
+
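+/*
+ * Illustrative trace for ERRNO_order (not part of the patch): the
+ * filters are applied first -> second -> third, but executed third ->
+ * second -> first. All three match __NR_read with SECCOMP_RET_ERRNO,
+ * so the most recently applied filter's data wins and read() fails
+ * with errno 12, not 11 or 13.
+ */
+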
FIXTURE_DATA(TRAP) {
struct sock_fprog prog;
};
syscall(__NR_getpid);
}
-static struct siginfo TRAP_info;
+static siginfo_t TRAP_info;
static volatile int TRAP_nr;
static void TRAP_action(int nr, siginfo_t *info, void *void_context)
{
FIXTURE_DATA(precedence) {
struct sock_fprog allow;
+ struct sock_fprog log;
struct sock_fprog trace;
struct sock_fprog error;
struct sock_fprog trap;
struct sock_filter allow_insns[] = {
BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
};
+ struct sock_filter log_insns[] = {
+ BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
+ offsetof(struct seccomp_data, nr)),
+ BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_LOG),
+ };
struct sock_filter trace_insns[] = {
BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
offsetof(struct seccomp_data, nr)),
memcpy(self->_x.filter, &_x##_insns, sizeof(_x##_insns)); \
self->_x.len = (unsigned short)ARRAY_SIZE(_x##_insns)
FILTER_ALLOC(allow);
+ FILTER_ALLOC(log);
FILTER_ALLOC(trace);
FILTER_ALLOC(error);
FILTER_ALLOC(trap);
{
#define FILTER_FREE(_x) if (self->_x.filter) free(self->_x.filter)
FILTER_FREE(allow);
+ FILTER_FREE(log);
FILTER_FREE(trace);
FILTER_FREE(error);
FILTER_FREE(trap);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->error);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->error);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->error);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trap);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->error);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trap);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->error);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->error);
ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->error);
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->trace);
ASSERT_EQ(0, ret);
/* Should work just fine. */
ASSERT_EQ(0, ret);
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
/* Should work just fine. */
EXPECT_EQ(parent, syscall(__NR_getppid));
/* No ptracer */
EXPECT_EQ(-1, syscall(__NR_getpid));
}
+TEST_F(precedence, log_is_fifth)
+{
+ pid_t mypid, parent;
+ long ret;
+
+ mypid = getpid();
+ parent = getppid();
+ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
+ ASSERT_EQ(0, ret);
+
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
+ ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
+ /* Should work just fine. */
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+ /* Should also work just fine */
+ EXPECT_EQ(mypid, syscall(__NR_getpid));
+}
+
+TEST_F(precedence, log_is_fifth_in_any_order)
+{
+ pid_t mypid, parent;
+ long ret;
+
+ mypid = getpid();
+ parent = getppid();
+ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
+ ASSERT_EQ(0, ret);
+
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->log);
+ ASSERT_EQ(0, ret);
+ ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->allow);
+ ASSERT_EQ(0, ret);
+ /* Should work just fine. */
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+ /* Should also work just fine */
+ EXPECT_EQ(mypid, syscall(__NR_getpid));
+}
+
#ifndef PTRACE_O_TRACESECCOMP
#define PTRACE_O_TRACESECCOMP 0x00000080
#endif
# error "Do not know how to find your architecture's registers and syscalls"
#endif
+/* When the syscall return can't be changed, stub out the tests for it. */
+#ifdef SYSCALL_NUM_RET_SHARE_REG
+# define EXPECT_SYSCALL_RETURN(val, action) EXPECT_EQ(-1, action)
+#else
+# define EXPECT_SYSCALL_RETURN(val, action) EXPECT_EQ(val, action)
+#endif
+
/* Use PTRACE_GETREGS and PTRACE_SETREGS when available. This is useful for
* architectures without HAVE_ARCH_TRACEHOOK (e.g. User-mode Linux).
*/
#ifdef SYSCALL_NUM_RET_SHARE_REG
TH_LOG("Can't modify syscall return on this architecture");
#else
- regs.SYSCALL_RET = 1;
+ regs.SYSCALL_RET = EPERM;
#endif
#ifdef HAVE_GETREGS
if (nr == __NR_getpid)
change_syscall(_metadata, tracee, __NR_getppid);
+ if (nr == __NR_open)
+ change_syscall(_metadata, tracee, -1);
}
FIXTURE_DATA(TRACE_syscall) {
free(self->prog.filter);
}
+TEST_F(TRACE_syscall, ptrace_syscall_redirected)
+{
+ /* Swap SECCOMP_RET_TRACE tracer for PTRACE_SYSCALL tracer. */
+ teardown_trace_fixture(_metadata, self->tracer);
+ self->tracer = setup_trace_fixture(_metadata, tracer_ptrace, NULL,
+ true);
+
+ /* Tracer will redirect getpid to getppid. */
+ EXPECT_NE(self->mypid, syscall(__NR_getpid));
+}
+
+TEST_F(TRACE_syscall, ptrace_syscall_dropped)
+{
+ /* Swap SECCOMP_RET_TRACE tracer for PTRACE_SYSCALL tracer. */
+ teardown_trace_fixture(_metadata, self->tracer);
+ self->tracer = setup_trace_fixture(_metadata, tracer_ptrace, NULL,
+ true);
+
+ /* Tracer should skip the open syscall, resulting in EPERM. */
+ EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_open));
+}
+
TEST_F(TRACE_syscall, syscall_allowed)
{
long ret;
ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->prog, 0, 0);
ASSERT_EQ(0, ret);
-#ifdef SYSCALL_NUM_RET_SHARE_REG
- /* gettid has been skipped */
- EXPECT_EQ(-1, syscall(__NR_gettid));
-#else
/* gettid has been skipped and an altered return value stored. */
- EXPECT_EQ(1, syscall(__NR_gettid));
-#endif
+ EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_gettid));
EXPECT_NE(self->mytid, syscall(__NR_gettid));
}
ASSERT_EQ(0, ret);
/* Tracer will redirect getpid to getppid, and we should see EPERM. */
+ errno = 0;
EXPECT_EQ(-1, syscall(__NR_getpid));
EXPECT_EQ(EPERM, errno);
}
EXPECT_NE(self->mypid, syscall(__NR_getpid));
}
-#ifndef __NR_seccomp
-# if defined(__i386__)
-# define __NR_seccomp 354
-# elif defined(__x86_64__)
-# define __NR_seccomp 317
-# elif defined(__arm__)
-# define __NR_seccomp 383
-# elif defined(__aarch64__)
-# define __NR_seccomp 277
-# elif defined(__hppa__)
-# define __NR_seccomp 338
-# elif defined(__powerpc__)
-# define __NR_seccomp 358
-# elif defined(__s390__)
-# define __NR_seccomp 348
-# else
-# warning "seccomp syscall number unknown for this architecture"
-# define __NR_seccomp 0xffff
-# endif
-#endif
-
-#ifndef SECCOMP_SET_MODE_STRICT
-#define SECCOMP_SET_MODE_STRICT 0
-#endif
-
-#ifndef SECCOMP_SET_MODE_FILTER
-#define SECCOMP_SET_MODE_FILTER 1
-#endif
-
-#ifndef SECCOMP_FILTER_FLAG_TSYNC
-#define SECCOMP_FILTER_FLAG_TSYNC 1
-#endif
-
-#ifndef seccomp
-int seccomp(unsigned int op, unsigned int flags, void *args)
-{
- errno = 0;
- return syscall(__NR_seccomp, op, flags, args);
-}
-#endif
-
TEST(seccomp_syscall)
{
struct sock_filter filter[] = {
}
}
+/*
+ * Test detection of known and unknown filter flags. Userspace needs to be able
+ * to check if a filter flag is supported by the current kernel and a good way
+ * of doing that is by attempting to enter filter mode, with the flag bit in
+ * question set, and a NULL pointer for the _args_ parameter. EFAULT indicates
+ * that the flag is valid and EINVAL indicates that the flag is invalid.
+ */
+TEST(detect_seccomp_filter_flags)
+{
+ unsigned int flags[] = { SECCOMP_FILTER_FLAG_TSYNC,
+ SECCOMP_FILTER_FLAG_LOG };
+ unsigned int flag, all_flags;
+ int i;
+ long ret;
+
+ /* Test detection of known-good filter flags */
+ for (i = 0, all_flags = 0; i < ARRAY_SIZE(flags); i++) {
+ flag = flags[i];
+ ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
+ ASSERT_NE(ENOSYS, errno) {
+ TH_LOG("Kernel does not support seccomp syscall!");
+ }
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EFAULT, errno) {
+ TH_LOG("Failed to detect that a known-good filter flag (0x%X) is supported!",
+ flag);
+ }
+
+ all_flags |= flag;
+ }
+
+ /* Test detection of all known-good filter flags */
+ ret = seccomp(SECCOMP_SET_MODE_FILTER, all_flags, NULL);
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EFAULT, errno) {
+ TH_LOG("Failed to detect that all known-good filter flags (0x%X) are supported!",
+ all_flags);
+ }
+
+ /* Test detection of an unknown filter flag */
+ flag = -1;
+ ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EINVAL, errno) {
+ TH_LOG("Failed to detect that an unknown filter flag (0x%X) is unsupported!",
+ flag);
+ }
+
+ /*
+ * Test detection of an unknown filter flag that may simply need to be
+ * added to this test
+ */
+ flag = flags[ARRAY_SIZE(flags) - 1] << 1;
+ ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EINVAL, errno) {
+ TH_LOG("Failed to detect that an unknown filter flag (0x%X) is unsupported! Does a new flag need to be added to this test?",
+ flag);
+ }
+}
+
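+/*
+ * Standalone sketch of the probing idiom above (illustrative only):
+ *
+ *	static int filter_flag_supported(unsigned int flag)
+ *	{
+ *		return seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL) == -1 &&
+ *		       errno == EFAULT;
+ *	}
+ *
+ * EFAULT means the kernel accepted the flag and went on to read the
+ * NULL args pointer; EINVAL means the flag itself was rejected.
+ */
+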
TEST(TSYNC_first)
{
struct sock_filter filter[] = {
_metadata->passed = 0;
}
+TEST_SIGNAL(filter_flag_log, SIGSYS)
+{
+ struct sock_filter allow_filter[] = {
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
+ };
+ struct sock_filter kill_filter[] = {
+ BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
+ offsetof(struct seccomp_data, nr)),
+ BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
+ BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
+ };
+ struct sock_fprog allow_prog = {
+ .len = (unsigned short)ARRAY_SIZE(allow_filter),
+ .filter = allow_filter,
+ };
+ struct sock_fprog kill_prog = {
+ .len = (unsigned short)ARRAY_SIZE(kill_filter),
+ .filter = kill_filter,
+ };
+ long ret;
+ pid_t parent = getppid();
+
+ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
+ ASSERT_EQ(0, ret);
+
+ /* Verify that the FILTER_FLAG_LOG flag isn't accepted in strict mode */
+ ret = seccomp(SECCOMP_SET_MODE_STRICT, SECCOMP_FILTER_FLAG_LOG,
+ &allow_prog);
+ ASSERT_NE(ENOSYS, errno) {
+ TH_LOG("Kernel does not support seccomp syscall!");
+ }
+ EXPECT_NE(0, ret) {
+ TH_LOG("Kernel accepted FILTER_FLAG_LOG flag in strict mode!");
+ }
+ EXPECT_EQ(EINVAL, errno) {
+ TH_LOG("Kernel returned unexpected errno for FILTER_FLAG_LOG flag in strict mode!");
+ }
+
+ /* Verify that a simple, permissive filter can be added with no flags */
+ ret = seccomp(SECCOMP_SET_MODE_FILTER, 0, &allow_prog);
+ EXPECT_EQ(0, ret);
+
+ /* See if the same filter can be added with the FILTER_FLAG_LOG flag */
+ ret = seccomp(SECCOMP_SET_MODE_FILTER, SECCOMP_FILTER_FLAG_LOG,
+ &allow_prog);
+ ASSERT_NE(EINVAL, errno) {
+ TH_LOG("Kernel does not support the FILTER_FLAG_LOG flag!");
+ }
+ EXPECT_EQ(0, ret);
+
+ /* Ensure that the kill filter works with the FILTER_FLAG_LOG flag */
+ ret = seccomp(SECCOMP_SET_MODE_FILTER, SECCOMP_FILTER_FLAG_LOG,
+ &kill_prog);
+ EXPECT_EQ(0, ret);
+
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+ /* getpid() should never return. */
+ EXPECT_EQ(0, syscall(__NR_getpid));
+}
+
+TEST(get_action_avail)
+{
+ __u32 actions[] = { SECCOMP_RET_KILL_THREAD, SECCOMP_RET_TRAP,
+ SECCOMP_RET_ERRNO, SECCOMP_RET_TRACE,
+ SECCOMP_RET_LOG, SECCOMP_RET_ALLOW };
+ __u32 unknown_action = 0x10000000U;
+ int i;
+ long ret;
+
+ ret = seccomp(SECCOMP_GET_ACTION_AVAIL, 0, &actions[0]);
+ ASSERT_NE(ENOSYS, errno) {
+ TH_LOG("Kernel does not support seccomp syscall!");
+ }
+ ASSERT_NE(EINVAL, errno) {
+ TH_LOG("Kernel does not support SECCOMP_GET_ACTION_AVAIL operation!");
+ }
+ EXPECT_EQ(ret, 0);
+
+ for (i = 0; i < ARRAY_SIZE(actions); i++) {
+ ret = seccomp(SECCOMP_GET_ACTION_AVAIL, 0, &actions[i]);
+ EXPECT_EQ(ret, 0) {
+ TH_LOG("Expected action (0x%X) not available!",
+ actions[i]);
+ }
+ }
+
+ /* Check that an unknown action is handled properly (EOPNOTSUPP) */
+ ret = seccomp(SECCOMP_GET_ACTION_AVAIL, 0, &unknown_action);
+ EXPECT_EQ(ret, -1);
+ EXPECT_EQ(errno, EOPNOTSUPP);
+}
+
/*
* TODO:
* - add microbenchmarks
* - endianness checking when appropriate
* - 64-bit arg prodding
* - arch value testing (x86 modes especially)
+ * - verify that FILTER_FLAG_LOG filters generate log messages
+ * - verify that RET_LOG generates log messages
* - ...
*/
stack_t stk;
struct stk_data *p;
+#ifdef __s390x__
+ register unsigned long sp asm("%15");
+#else
register unsigned long sp asm("sp");
+#endif
if (sp < (unsigned long)sstack ||
sp >= (unsigned long)sstack + SIGSTKSZ) {
CFLAGS += -I../../../../usr/include/
LDFLAGS += -pthread
-TEST_PROGS = sync_test
-
-all: $(TEST_PROGS)
+.PHONY: all clean
include ../lib.mk
+# lib.mk TEST_CUSTOM_PROGS var is for custom tests that need special
+# build rules. lib.mk will run and install them.
+
+TEST_CUSTOM_PROGS := $(OUTPUT)/sync_test
+all: $(TEST_CUSTOM_PROGS)
+
OBJS = sync_test.o sync.o
TESTS += sync_alloc.o
TESTS += sync_stress_consumer.o
TESTS += sync_stress_merge.o
-sync_test: $(OBJS) $(TESTS)
+OBJS := $(patsubst %,$(OUTPUT)/%,$(OBJS))
+TESTS := $(patsubst %,$(OUTPUT)/%,$(TESTS))
+
+$(TEST_CUSTOM_PROGS): $(TESTS) $(OBJS)
+ $(CC) -o $(TEST_CUSTOM_PROGS) $(OBJS) $(TESTS) $(CFLAGS) $(LDFLAGS)
+
+$(OBJS): $(OUTPUT)/%.o: %.c
+ $(CC) -c $^ -o $@
+
+$(TESTS): $(OUTPUT)/%.o: %.c
+ $(CC) -c $^ -o $@
-EXTRA_CLEAN := sync_test $(OBJS) $(TESTS)
+EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(OBJS) $(TESTS)
printf("%-22s %s missing CAP_WAKE_ALARM? : [UNSUPPORTED]\n",
clockstring(clock_id),
flags ? "ABSTIME":"RELTIME");
- return 0;
+ /* Indicate timer isn't set, so caller doesn't wait */
+ return 1;
}
printf("%s - timer_create() failed\n", clockstring(clock_id));
return -1;
int err;
err = setup_timer(clock_id, flags, interval, &tm1);
+ /* Unsupported case - return 0 to not fail the test */
if (err)
- return err;
+ return err == 1 ? 0 : err;
while (alarmcount < 5)
sleep(1);
timer_t tm1;
const int interval = 0;
struct timeval timeout;
- fd_set fds;
int err;
err = setup_timer(clock_id, flags, interval, &tm1);
+ /* Unsupported case - return 0 to not fail the test */
if (err)
- return err;
+ return err == 1 ? 0 : err;
memset(&timeout, 0, sizeof(timeout));
timeout.tv_sec = 5;
- FD_ZERO(&fds);
do {
- err = select(FD_SETSIZE, &fds, NULL, NULL, &timeout);
+ err = select(0, NULL, NULL, NULL, &timeout);
} while (err == -1 && errno == EINTR);
timer_delete(tm1);
-TEST_PROGS := watchdog-test
-
-all: $(TEST_PROGS)
+TEST_GEN_PROGS := watchdog-test
include ../lib.mk
-
-clean:
- rm -fr $(TEST_PROGS)
{
if (args->flags & ~(KVM_IRQFD_FLAG_DEASSIGN | KVM_IRQFD_FLAG_RESAMPLE))
return -EINVAL;
- if (args->gsi >= KVM_MAX_IRQ_ROUTES)
- return -EINVAL;
if (args->flags & KVM_IRQFD_FLAG_DEASSIGN)
return kvm_irqfd_deassign(kvm, args);